Figure I'll go ahead and get kickin'. Oh, good, Adam's here, you can heckle me. Sweet, all right. So, State of the Project: Keystone. Just to update you on what we've done over the last six months, we've got a little bit of an overview of what Keystone is for those of you who are new to OpenStack, just coming here for the first time. And we've got a little bit of detail on the history of how it's gotten to where we are and what we're going to be talking about in the future. First, a little intro about me. My name's Joe Heck. I live in Seattle, born and raised in the Midwest. I kind of fell into the Keystone project because I was annoyed it wasn't being done right and somebody said, put up or shut up. So I got involved. Since then, I've been working with the Keystone project for the last, probably, year, PTL for the last nine months, and just moving it forward, keeping it really stable, with the standard pattern of making this thing as boring and simple and absolutely obvious as possible so that everything else can be built on it. I don't want it to be fancy. I don't want it to be glorious or anything like that. Glorious usually means something's blown up or there's a terrible security alert and hair's on fire. It's just really not a good thing. So that's what we're doing with Keystone. So, an outline of what I'm going to talk about today. First, why Keystone: where it came from, the origins in OpenStack, what it is, how it works a little bit, the basic concepts and a high-level architecture; then a little bit of the history lesson that I spoke of earlier; and then our upcoming plans and what we're kind of thinking about right now to move forward. Why Keystone? In a lot of respects, this was the very first of the OpenStack commons.
So, when Nova and Swift came together as the Rackspace and NASA initiatives and they started working together, right off the bat, in the Austin and Bexar releases, you had separate IDs and you had to make separate accounts to work between them, and they're like, duh, it's obvious, can't we just use the same account system? But what really highlighted it was Glance at the time, because Glance wanted to make images available out of Swift for Nova, very obvious. But to do that, you started running into identity and problems along those lines: what are we going to do? So it was resolved that we should do something about making a common identity mechanism, a framework for authentication and basic authorization, and then start moving that forward. So it started off as this common set of internal APIs expressing relevant identity for the projects that make up OpenStack. And kind of out of that, it also became a little bit of a service catalog, not as a primary purpose, but because some of the URI structures for the REST APIs, specifically Swift's, had identity built into those URIs. So for Joe, if you want to go to my container, you have to know what my ID is, and so putting it into the service catalog made it very easy to expose that along with the relevant other identity information across other OpenStack projects. Different button. At least I don't have a clicker. You guys see Chris's presentation this morning with a clicker thing? It was awesome. So what is Keystone? It's a single source of authentication and authorization. It's the same account and credentials for starting virtual machine instances as it is to access the Glance images, make a Nova, or I guess it's a Cinder volume backup now, take snapshots, put them into object storage, get them back out. All of that requires identity being passed around, some sort of trust mechanism going around between the whole thing. That's what Keystone's providing.
Keystone's also expanded beyond just authentication to provide a little bit of authorization. It's worth noting, though, that the pattern we've set up within OpenStack is that Keystone does not answer the question of "are you allowed to do this?" It simply is the single source of truth, and the actual enforcement of policy related to authorization is distributed in the services themselves. And some of that code is in OpenStack Common, and we're starting to push more and more of it to OpenStack Common as it's just sort of this base common framework. And so actually the Keystone project works very actively in both the OpenStack Common area and the Keystone project itself, which is what's providing this interface to the identity systems. It's also a common means of expressing API endpoints, as I mentioned with the service catalog earlier: hey, where can I go to get to Nova? Where can I go to get to Cinder? Does Swift exist in this deployment? If so, where is it? Same for any other project along those lines. Keystone is internally broken up into four large functional areas: identity, policy, token, and catalog. Now, the basic mechanism by which we're doing all the authorization today is a token-based mechanism, and so that's where the token comes in. The token is just something that we pass around that has a bunch of metadata associated with it that represents an identity. And so Keystone is intended to be the thing that you can query: you know, I have a token, tell me about the identity of this person, and get a lot more information about them, so you can get the projects they're involved with, the roles that they have, anything along those lines.
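To make that concrete, here's a minimal sketch of the token-as-metadata-handle idea in Python. This is purely an illustration; the class and method names are made up and are not Keystone's actual internals.

```python
import uuid

class TokenStore:
    """Illustrative in-memory token store: a token is just an opaque
    string that maps back to identity metadata on the Keystone side."""

    def __init__(self):
        self._tokens = {}

    def issue(self, user, project, roles):
        """Issue an opaque token for an authenticated identity."""
        token_id = uuid.uuid4().hex
        self._tokens[token_id] = {
            "user": user,
            "project": project,
            "roles": list(roles),
        }
        return token_id

    def validate(self, token_id):
        """Services call this to turn a token back into identity info;
        an unknown token yields None (i.e., not valid)."""
        return self._tokens.get(token_id)

store = TokenStore()
token = store.issue("joe", "demo-project", ["member"])
identity = store.validate(token)
```

The point is simply that services never interpret the token string itself; they hand it back to the identity service and receive the metadata.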
Then that sort of fits over into policy, and it hasn't been really explicit in releases up until the stuff coming up now, but with the V3 API that I'll talk about a little bit later, we're making Keystone also the central source of truth for the policy, so that all the services have a single place to get it from, pull it in, and then enforce it themselves. So we also have a single representation of that, as opposed to it being splattered across all the projects and documented differently elsewhere, making it a little bit easier for deployers and everyone else to consolidate this and keep it together. And then that final part is catalog, which just represents that service catalog of what's available, and we just try to keep it steady and obvious so that we can have one URL you can go to, you can authorize, and from that you will know where everything else in your environment happens to exist. For the basic concepts of identity, there are three key things right now; there are gonna be some new pieces coming up in this next release of the authentication API. The first is the concept of a tenant or project. Horrible naming, I know, I'm very sorry, I didn't pick it, I had to sort of inherit that. But it's the basic unit of ownership, and what was intended when this thing was put together was that that level of ownership wasn't necessarily locked to a single user. So if Adam and I were going to share an account, we could have our different user credentials and be able to use this one shared thing, and that thing is called a project in the dashboard or a tenant in some of the APIs as you see them today.
The user is the actual individual. It's also what we've set up as a service user, just because of some of the deficiencies of the original V2 API: it didn't quite go all the way to being able to answer questions in a solid and consistent fashion about, okay, well, who can reset Adam's password? Who can reset mine when we're sharing this tenant, or when we're not, and so forth along those lines. The new thing that's coming up that is gonna help resolve some of this was contributed by our friends at HP. It's called domains, and they put it in originally as an extension for the V2 APIs, but it just kinda didn't really make sense there, because everybody else had to really respect it; it would have been a cascade of changes through everything. So we went ahead and rolled it into just a new V3 API that's coming up, and so domains will be added to this list, hopefully, on the next set of slides when I do this in six months or so. And then the final bit is role, which is simply that named relationship, which is always between a user and a project, or a user and, in the future, a domain. So you can have sort of a domain role where you can reset the password, which answers the question of how you set this up and so forth, as well as having roles on projects that you can then transfer and use elsewhere. I think that covers the basics. Policy right now (this slide says Essex; obviously I didn't quite edit all the slides properly) is the same in Folsom as it was in Essex, as it turns out. It's just this internal concept right now that became the code when it went into OpenStack Common, and a bunch of great updates got made to it in the past week, I think, to sort of rewrite it and make the format a lot easier to understand. And that's the common policy file that we'll be using across all the projects to enforce, with whatever you get back as user IDs, projects, and roles, the question: are you allowed to do this or are you not allowed to do this?
It's a simple rule-based mechanism. Nova, Glance, and Keystone were all using it as of Folsom, I believe; actually, Quantum had also fully integrated it in the Folsom release, and I think Cinder is in the process or may have already finished it up. I didn't quite pay attention to where it all was in those cases towards the end of this release cycle. The token is just an arbitrary string that represents a simple thing. At its most basic, it's a fairly short string at the moment, and in the future a much larger string, that you just put in an HTTP header and pass around with your requests. The client asks for it, and whether the client is a CLI client or the dashboard doing it for you, it takes care of managing it for you, and that's what gets passed around to the various services, which can then look up your identity information based on that token. The reason I mentioned that the token is going to become a little bit longer is because in this latest release, one of the things that we did was enable tokens to be PKI-signed entities. And so that string went from a simple UUID thing to an entire signed PKI token that we can extract a lot more information from, that we can put additional information on, and it really sets the stage to make some really nice advancements in the future to move this whole thing forward.
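The rule-based policy check described here can be sketched in a few lines. This is a simplified stand-in for the real policy engine in OpenStack Common; the rule names and format below are illustrative, loosely modeled on the policy.json style.

```python
# Illustrative policy rules: each action maps to a list of rules,
# any one of which is enough to allow the action. Rule names are
# made up for this example.
policy = {
    "compute:start_instance": ["role:member", "role:admin"],
    "identity:reset_password": ["role:admin"],
}

def enforce(action, credentials):
    """Return True if any rule for the action matches the credentials."""
    for rule in policy.get(action, []):
        kind, _, value = rule.partition(":")
        if kind == "role" and value in credentials.get("roles", []):
            return True
    return False

# Credentials of the kind a service gets back when it validates a token.
creds = {"user": "joe", "roles": ["member"]}
```

The key design point from the talk is that Keystone holds the truth (who has which roles), while each service runs a check like this locally against its own policy file.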
And catalog: we struggled with this back and forth, actually, in this last API run. There's the idea of services, and then each service can have multiple endpoints. In many simple deployments you only have one endpoint per service, but in some of the more complex deployments, like Rackspace, HP, and so forth, you might have quite a number of endpoints for a single service. And so there are two different concepts in there. They basically end up giving you a bunch of URL sets, and there's a common pattern, which is sort of a convention rather than an actual policy right now, of internal and external API endpoints, so that you can have something that just your internal services use to talk to those endpoints, that's not necessarily exposed to the customer network or exposed to your customers, and then your customers can have a public URL, which is the one they would be expected to use to access whatever. And I mentioned the token was this string with a lot of other stuff associated with it, so I went ahead and just pasted a copy in here so you can get a sense of what this thing actually is under the covers. When you get a token and go to Keystone and ask about it, this is the data it gives back that says, okay, here's the valid token and here's all the information associated with it. This is scrubbed up a little; I removed a bunch of endpoints just to collapse it down, but you get the sense of it. Really the key is just that token ID, and that's what you pass in the HTTP headers, and that takes care of the authentication. You don't actually have to worry about that, because we went through all the work to put it into the auth middleware, where we put the pieces into OpenStack Common to make it very simple. You don't have to mess with it; it's just there.
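Since the pasted sample didn't survive into this transcript, here's an illustrative Python sketch of the shape of a V2-style token response and how a client picks out the token ID and a public endpoint. The field names follow the V2 API ("access", "serviceCatalog", "publicURL"); all the values are made up.

```python
# Illustrative shape of a V2-style token response; values are fake.
token_response = {
    "access": {
        "token": {"id": "ab48a9efdfedb23ty3494", "expires": "2013-04-13T00:00:00Z"},
        "user": {"name": "joe", "roles": [{"name": "member"}]},
        "serviceCatalog": [
            {
                "type": "compute",
                "name": "nova",
                "endpoints": [
                    {"publicURL": "https://compute.example.com/v2/tenant",
                     "internalURL": "http://10.0.0.5:8774/v2/tenant"},
                ],
            },
        ],
    },
}

def public_url(response, service_type):
    """Walk the service catalog and return a service's public endpoint."""
    for service in response["access"]["serviceCatalog"]:
        if service["type"] == service_type:
            return service["endpoints"][0]["publicURL"]
    return None

# The token ID is what goes into the X-Auth-Token HTTP header on
# every subsequent request; the auth middleware handles the rest.
headers = {"X-Auth-Token": token_response["access"]["token"]["id"]}
```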
So, at a high level, Keystone's architecture matches almost every other OpenStack project, really, because the biggest thing that's different between our project and many of the others is that we don't sit on that RPC bus that everybody else does, the one that sort of came out of that core that was Nova. We're strictly an HTTP-based mechanism. There's no reason we couldn't sit on the RPC bus; there just hasn't been a real burning need to do so. There might be in the future. There's been some talk about, hey, how can we do authenticated RPC mechanisms, and what the implications of that might be could be kind of interesting. But it's just a standard WSGI application configured with Paste, with URL routes mapped to configurable back ends. When Keystone was originally prototyped, it was kind of a monolithic thing, and then we basically rewrote the whole kit, keeping the API compatibility, and moved it to a much more pluggable interface, much like all the other components that you see in OpenStack today, Cinder and Quantum specifically, where you can put in a pluggable back end that does the thing that you want it to do. So we have a number of back ends that you can have for identity, for catalog, for whatever else. Some of them are very simple SQL back ends, you can have a key-value back end, or you can plug it into some read-only service that you might already have that does authentication for your organization, so that you don't have to re-implement that whole thing. Yeah, I sort of already covered this. Basically, Keystone is an operational facade to existing systems, much like Cinder is to whatever your existing volume system is and Quantum is for your existing network systems. The supported back ends today include SQL, LDAP, Active Directory (put a big asterisk next to that, and a big thank you to the guys at CERN, by the way), PAM, and a simple key-value store. Catalog: same thing, there's a template mechanism or a SQL-based back end.
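The pluggable back-end idea can be sketched roughly like this. The real Keystone driver interfaces are much larger; the class and method names here are illustrative only.

```python
import abc

class IdentityDriver(abc.ABC):
    """Sketch of a pluggable identity back-end interface: SQL, LDAP,
    PAM, or a key-value store would each provide an implementation."""

    @abc.abstractmethod
    def authenticate(self, user_id, password): ...

    @abc.abstractmethod
    def get_user(self, user_id): ...

class KVSIdentityDriver(IdentityDriver):
    """A trivial key-value implementation, in the spirit of the
    simple KVS back end mentioned in the talk."""

    def __init__(self, users):
        self._users = users

    def authenticate(self, user_id, password):
        user = self._users.get(user_id)
        return user is not None and user["password"] == password

    def get_user(self, user_id):
        return self._users.get(user_id)

driver = KVSIdentityDriver({"joe": {"password": "s3cret", "name": "joe"}})
```

The value of the pattern is that the service layer only ever talks to the abstract interface, so a read-only corporate directory can be dropped in behind it without changing anything above.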
Tokens: you can throw them in memcache, have a random key-value store, or use SQL, and so forth and so on. The thing about the Active Directory support: there have been some recent pieces added by Jose. Fantastic work, dude. Thank you. It does mean that you have to configure your Active Directory in some specific ways to represent information that Keystone's going to expect to be able to get out. In particular, the projects and the users and roles, and how they match up to your Active Directory schema. If you can't modify your schema, there's probably some other work we need to do to really make that happen, but it's a wonderful first step. It built off Adam's original LDAP support; now we've got Active Directory support, and we're going to continue to move forward to make it more obvious. So, Keystone history in a nutshell. I sort of talked about this at the very beginning, back when it was all coming together with Glance and Nova and Swift. It was, hey, we need to have a common account. We don't want to have everybody creating an account in every darn system that we have, especially as OpenStack was pulling together and wanting to grow. So that really highlighted it, and by the time Cactus rolled around it was under really active prototyping and a lot of good discussions. A lot of things going back and forth, experiments going out there, trying things, and really nailing things down. And then Diablo is when it was really fleshed out to its full form, for what is the V2 API today. That's not to say it was perfect. You know, there were a lot of lessons to be learned there, but it really nailed it down. The downside was it was kind of changing up to the last minute there in Diablo, which made life a little bit difficult, and it's really changed how we've chosen to work on Keystone now and in the future: we take it a little bit more slowly and make sure it's really solid.
And most importantly, we try not to impact all the rest of the projects, because we're at a foundation level that they're all relying on for authentication, and we don't wanna make a change and then suddenly break stuff. It has also had an administrative API, and that's where you would do things like changing passwords and so forth. There wasn't much in the way of capability for a user to change their own password, and that's still the API that we're living with, actually. That prototype was great. It really got us out the door, it got us solid, and it got us moving forward. And then, right a little bit after the Diablo release is kind of when I came in and worked with some other folks to step up and move Keystone forward past that, and so that became the work that went into Essex. That's when we went through and basically gutted the code base, replaced it, and kept all the API functionality exactly the same. We didn't wanna change both things at once. Let's not do big forklifts. I don't know if you saw Dan Wendlandt's talk, but the big thing is no forklifts. I mean, we need to keep this project solid, and I mean OpenStack solid and stable, and be able to move it forward in well-considered increments, and still be able to deprecate things, but do it in a very intentional pattern. That was also the time for the architectural shift to more independent drivers, so that we could start back-ending it into read-only systems as opposed to having just a SQL-based system. Pieces have been added in. That's when we added in the original domains API, and it was really a tough choice at the time: do we try to push this all in and do it all at once, or do we take it piece by piece? We chose to do it piece by piece, so we maintained that 100% API compatibility as we went out with the Essex release and really fixed up the stuff, if you will, to allow us to move forward in the future. And then that's exactly what we did in Folsom.
So in Folsom, the big addition really was this PKI-based mechanism for the tokens. That's not enabled by default, but hopefully by the end of Grizzly that's exactly what we will be using as the default, and then we'll be able to build on that by the time Grizzly goes out the door. Okay, so we'll look at it, sure, quicker. I said by the end of Grizzly; that got us lots of time to review it. Right, totally derailed me, good heckler. We kept everything rock solid. Again, we maintained that 100% API compatibility, and then, kind of notably, we really focused on making sure that when we understood security issues as they came up, we dealt with them as quickly as possible, as efficiently as possible. We backported them, we rolled them out, we worked with the OpenStack security team, which was a bit of a learning experience for me because I really hadn't done that before, but it worked out really well. And I think we got six or seven total security fixes out. Some of them were last-minute, oh-my-God, hair-on-fire things. And some of them were like, yeah, we really know this is not the best way of doing it, let's fix it up. And then, sort of at the end of the Folsom release, Russell was going through the list and he's like, hey, we really ought to call that one a CVE, because that was actually a security fix. Oh yeah, right, okay. So then we released a whole bunch of announcements that we'd fixed it, even though we'd fixed it months before. So we're learning how to make that all work properly, but that's pretty much what we did in Folsom, and it really set the stage to move things forward and go even further in Grizzly. Now, what's gonna happen in Grizzly? This is actually sort of a difficult thing to say, because our sessions aren't until Thursday.
So we haven't actually talked about everything that we're going to do and nailed down all these plans, but these are kind of the highlights from talking with some of the Keystone core and the things that I've been hearing from other people and other talks during the first part of the summit. And that is: we're gonna take the V3 API, which is actually up there in an implemented state right now in a feature branch, we're gonna land it in master, and we're gonna start rolling it out. We'll be rolling V3 out alongside V2, running them in parallel for some time, and then moving that forward. So this means auth changes that will impact every project, which means going to every project and making sure it works smoothly, fixing the bugs, figuring out what could be done wrong or could be done better. Although we nailed down the V3 API spec in terms of, okay, this is what we're going to do, as in we have it now in source control and we're doing revisions to update that spec, we're still leaving it open to lessons learned. And really, that was some advice I got from Brian on the Glance team: they had done a spec exactly that way, started implementing it, and then realized, oh shit, we really screwed this up, and had to go back and retrace some steps and redo some things. So we hope these are not gonna be significant changes, but we'll start with where we are. We've got an initial implementation out there, we've got tests wrapped around it, we're going for full coverage, and we'll really move this forward slowly, intentionally, carefully, and then get it out there. The idea being to have the V3 API out and running as of the fall, ideally by Grizzly, and then in H and I, somewhere in that timeframe, deprecate the V2, so that we'll switch all the way over to the V3 API somewhere in the next couple of releases down the road. So not fast, not all at once, no forklift: a very intentional, thoughtful step forward.
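Running V2 and V3 side by side, as described above, is at heart a routing question: requests carry the API version in the URL path and get dispatched to the matching implementation. Here's a toy sketch; the path prefixes match the real Keystone conventions, but the handlers are stand-ins.

```python
# Toy version dispatcher for running two API versions in parallel.
# Handlers here are illustrative placeholders for the V2 and V3 apps.
def v2_handler(path):
    return ("v2", path)

def v3_handler(path):
    return ("v3", path)

ROUTES = {"/v2.0": v2_handler, "/v3": v3_handler}

def dispatch(path):
    """Route a request path to the handler for its API version."""
    for prefix, handler in ROUTES.items():
        if path.startswith(prefix):
            return handler(path[len(prefix):])
    raise LookupError("unknown API version: " + path)
```

This is the property that lets a deployer keep V2 clients working unchanged while new clients move to V3, and later remove the V2 route on its own deprecation schedule.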
The other thing that we're really gonna be focusing on, well, one of the other things, is consolidating the policy files. We really heard this loud and clear during the end of the Folsom release. It was like, well, what do I do about this? We had a lot of capabilities for RBAC that were already in the system, and it just wasn't clear how to use them from a deployment perspective. And I made a promise to Anne, and she's been so nice to me and hasn't told me that I broke all of my promises to her, but I did, in that I didn't get the documentation consolidated with some examples of how to do that. We just ran out of manpower in the Keystone project, actually, is what it amounted to, in terms of being able to consolidate this together, get it out there, and put some more examples in place. And really, that's also prep work for being able to consolidate all the policy files in Keystone as a single source of truth. So that's gonna be something we'll be focusing on, just pulling it together. As a matter of fact, we're having a session on that on Thursday. I think it's at two. Is that when we decided to do it? Yeah, 3:20. I figured out how to change it in sched, so I did, so that's where the policy session is now. So if you're implementing Keystone and have thoughts about what you wanna see in terms of specific RBAC capabilities and segregation, and example policy configurations across an entire OpenStack deployment, please come and give us your feedback at that time, because I really wanna try to collect this together. I'm mostly in the just-collect-the-documents phase, so we can put them together as recommended deployments and ways to do this. Another thing that's been on the list for Grizzly is extending the authorization mechanism.
There's been a need for quite some time to have something that would allow us to do delegation or impersonation, something equivalent to that, so that you can say, hey, on behalf of me, I want the system to go ahead and take that block volume snapshot and throw it up in Swift, right? So to do that, Glance is gonna be talking, or, sorry, Cinder's gonna be talking to Swift. And how's it actually gonna do that? I mean, is it gonna take my credentials and pass them all around? What if it's time-delayed and the token's expired? There are a lot of different pieces going on there. So we have some proposals out, some discussions going on, and some of this is also up for discussion on Thursday. We're gonna continue to extend the Active Directory support, hopefully to the point where you can use it on a totally stock Active Directory system. And some of that also goes to: what if we wanna use different authentication systems beyond Active Directory? That's been the most common request. But there's also, hey, you know, I've already got Kerberos in my environment. Maybe I'm a research university, something along those lines. I wanna be able to plug into that. And that's been a fairly common request as well. So, allowing us to plug into something that doesn't have all the details we need for authorization, as in projects and roles, but would let us make the authentication pluggable, specifically outside of an identity system, and come up with some more patterns along those lines. And that's where externalizing authentication really comes in, just as kind of a combination of that stuff with Active Directory. Obviously, moving the default token to PKI; the review is up. And then there's a bunch of work that we're doing around the CLI and common authentication.
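The delegation idea described above, letting a service act on your behalf with constrained privileges and a time bound, can be sketched as a simple record. This is an illustration of the concept only; the field and method names are made up and predate whatever the Thursday design session settles on.

```python
from dataclasses import dataclass
import time

@dataclass
class Delegation:
    """Illustrative constrained-delegation record: one user grants
    another actor a subset of roles for a limited time."""
    delegator: str    # user granting the power, e.g. "joe"
    delegate: str     # actor allowed to act on joe's behalf
    roles: list       # subset of the delegator's roles being granted
    expires_at: float # delegation is time-bounded (epoch seconds)

    def allows(self, actor, role, now=None):
        """Check that this actor may use this role right now."""
        now = time.time() if now is None else now
        return (actor == self.delegate
                and role in self.roles
                and now < self.expires_at)

# e.g. let a hypothetical backup service snapshot volumes for an hour
grant = Delegation("joe", "cinder-backup", ["member"], time.time() + 3600)
```

The constraints answer the questions raised in the talk: the delegate never holds the user's credentials, and an expired or over-broad request simply fails the check.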
There were some great discussions earlier related to common CLI tools and making it very easy to use them, so you don't have to know Nova versus Glance versus Keystone to interact with OpenStack. But more importantly, also, is being able to use keystoneclient from the other clients, so that they don't all have to redo the auth implementations, which all of them do right now, and none of them are particularly satisfactory. So we're gonna reset that a little bit, get it set up with a clear Python API, make it available to the other clients, and move that forward. And with luck, we'll have a couple of other alternative client APIs out there, maybe not in Keystone core, maybe not anything core or whatever else, but there'll be pieces out there, available as open contributions, that you could use if you wanted to work from multiple languages, whether it be PHP or Ruby or Objective-C or whatever you like. The last thing is, we've got a talk on Thursday at the end of the day. David Chadwick has been talking about how to enable federation among multiple instances of Keystone. I totally blackballed it for discussion last summit, because I just wanted to focus on one thing at a time, but we've got it to the point where we're stable enough now that I think it's really worth talking about: what are the use cases, what do we wanna enable, how do we wanna enable trust delegation, and how do we wanna support that? Not for expected implementation in Grizzly, honestly, but to set the stage for it, because I expect that we don't have some of the things we need to be able to get there. Maybe we do, maybe it'll go much faster than I thought, but I've learned that this does not always go as fast as I want it to. So do the learning, do the discussions, and then go from there. And that is what I have. So with that, let me open the floor to questions. We'll go from there. Mr. Henry. [Question, partly inaudible, about how pluggable back ends will be supported.] As much as possible, yes.
The ideal way to have anything, in my mind, be reasonably supported is to make sure it either goes into Tempest or it gets into DevStack in some form, so that you can run a variation of DevStack that has it available, so that we can test it and make sure it runs. I imagine there are gonna be some plugins that are just by so-and-so. There have been some companies that have approached me over time saying, hey, I'm an identity provider and I wanna write a plugin for Keystone, and I'm like, sweet, go ahead and do that. But it's gonna be up to them to really validate that it all functions and assert that it's a good plugin, if you will. Yeah, whoever does it. You are totally channeling Gabriel, aren't you? [Audience exchange about Horizon and exposing capabilities, largely inaudible.] Ironically, I heckled Dan just before this, asking, well, how are you exposing these capability things to Horizon? They're using the extensions mechanism. We'll clearly need to support that, but I think maybe there's something more that we need to do across all of OpenStack, especially with more and more of us being plugins, with those plugins having custom capabilities that some might have and others might not, or optional implementations, if you will, to really expose that up in some fashion to Horizon.
Anyone? Other questions? Yes. I was wondering, for your authorization tokens: I'm in the process of building an application which is likely to use lots of tokens in lots of places and shuttle them around, so I want them to be very efficient to process. So I'm thinking of something like OpenID, which is, you know, HMAC-based JSON objects rather than RSA operations or XML. Is there any desire, plan, or thought to support that kind of approach? We are very intentionally supporting a common Python API for the OpenStack projects so that they don't have to mess with exactly what you're describing, whether or not the underlying implementation becomes something else in the future. Right now? No. I mean, right now we're gonna move to the PKI stuff and then leverage that. But where we go in the future, there's nothing that says we couldn't switch out the entire authentication layer to use OpenID or straight-up Kerberos, should we be insane enough to want to drive those libraries, something along those lines. It's capable of doing that, but it's not in our immediate plans at this time. I should recognize it. You must be David. Yeah, I think so. Brilliant, very nice to meet you. I've been wanting to meet you. The idea is that you'd actually define an API, and then you could just plug in your own token. You could have a token in any format: Keystone would just be called to give you the token, it would get the token, pass it out to the client, and the client would pass it back and use it in the call. Right. If you did it that way, then you could actually have any token base. Right. And that's totally what's on the table for discussion on Thursday afternoon. So if you're interested, please attend. Yes, I am. In fact, that's kind of the proposal that I have. Looks like you're getting support already. Awesome. Yeah. My question is about a new back end.
[Question largely inaudible, about exposing more token state, with a mention of federation.] There is an intention to do something with the state of the token, to at least expose a little bit more. Right now it's very simple: it's either valid or not. And it would be nice to be able to support an intermediate state, even if it's not necessarily defined yet what that is. That would be necessary to support something like multi-factor authentication, whatever you wanted to do. And there's been a lot of talk and thought about it, but nobody has stood up and said, yeah, I'm gonna go do that work, yet. If you're interested, it's totally there. The blueprints are up. The support is desired. It's simply, basically, lacking resources to implement it right now. [Inaudible.] Lots of talks on Thursday. I'm trying to get it down. Yeah. Right now, authenticating still gives you a new token every time. Yes, that is still the case. Intentional reuse of tokens: I don't think anybody's been opposed to it, so much as nobody has stepped up and said, that really pisses me off, I'm gonna go fix it. [Audience comment, largely inaudible, about sliding-window reuse of the same token and being nervous about tokens that are valid for eight hours or more.] Absolutely. That's the right question.
And that's exactly where we're planning on addressing that, because it asks for basically a new token every time you even kind of look at it. Sometimes it asks for three or four every time you look at it. It's just sort of strange. So it's like, no, let's get that fixed, shall we? Other questions? Yeah. In that delegation discussion, is there any talk about constrained delegation to other users? Yes. And how do I learn more about it or contribute? Thursday. The concept is the ability to say, I want to let that user act on my behalf, or as me, in some constrained fashion, which might be constraints on privileges, time, whatever, right? We've got all of Thursday for design sessions; be there, and be part of how it gets built. [Inaudible.] Thank you. So, yes. Anything else? All right, obviously, Thursday is Keystone day upstairs. Please, if you're interested, if you want to help contribute, or if you just want to find out what's going on, what the actual conversations are, and how we're doing the design, join us. It's open for all. Thank you very much.