I think it's time to begin. I should apologize up front: we had some small scheduling confusion. Mark McLoughlin and I are going to be presenting in the next 40 minutes. He will be presenting about Oslo; Oslo was on the schedule up until, I think, this morning, when Keystone was added to it. So he'll be presenting in about 20 minutes.

So I am Dolph Mathews. I am picking up the torch from Joseph Heck as PTL for the Havana release cycle. I've been a developer at Rackspace for about as long as Keystone has existed; they brought me on for the project. That was mid-Diablo, I believe. So I know it pretty well. Let's go ahead and begin.

So today I'll be talking about what Keystone is, a little bit of the history, why it exists, how it fits into the OpenStack ecosystem, and what it is that we actually do for our users and for the other OpenStack services. We'll be talking about what we accomplished in Grizzly, looking back on some of the high points and things we're providing. And I'll be looking forward a little bit to Havana, but unfortunately, this talk is actually scheduled in the middle of our design track. So I can speak to some of the decisions we've made and a lot of the things that we're thinking about, but I can't tell you the outcome of the things that we'll be talking about this afternoon, and actually we have a session going on right now on LDAP. And then at the end, I will be taking questions.

So Keystone fits into OpenStack as the identity management interface. Our goal is not necessarily to own your identity management data, which you may have in LDAP, say, though we can also do that: we can house it in, say, MySQL, or you can build your own backend. Or maybe you have something that's not SQL and not LDAP; we are totally pluggable and we can handle that as well. But our goal is to provide the OpenStack internals with an interface to your data.

So our OpenStack end users authenticate with Keystone, and typically they use a username and password. You may have some other authentication needs, say something based on certificates or something based on API access keys. Any other credential you can imagine, we have ways to handle that, and we are pluggable in a lot of different ways. Out of the box, we support username and password. We convert that into a token, of which we have a couple of different formats in Grizzly, and that token is passed around to authenticate the user's requests throughout OpenStack.

On top of that authentication, we use roles to map to the permissions and capabilities that that user has within the OpenStack deployment. The role assignments themselves are owned by Keystone, or whatever backend you put behind Keystone; again, we're just an interface to that data. And the enforcement of those policies, and the definition of what those roles actually mean, is distributed among the services. I'm sure Mark will be talking in a minute about the centralized policy engine which has moved into Oslo. Maybe, maybe not. He's looking at me like, no. So anyway, we've decentralized policy enforcement among all the different services. You configure through Keystone basically what a user is capable of doing, and then what that actually means is decided by each service, and the services are what say yes or no to whether this user can perform this action on this resource.

Resources are something else that Keystone has some oversight on, and I'll get a little bit more into that in a minute.
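To make that authentication flow a little more concrete, here is a minimal sketch using python-keystoneclient against the v2 API; the endpoint and credentials are placeholders, not anything from a real deployment:

    from keystoneclient.v2_0 import client

    # Authenticate with a username and password against Keystone's v2 API.
    # Everything below is a placeholder value.
    keystone = client.Client(username='demo',
                             password='secret',
                             tenant_name='demo',
                             auth_url='http://keystone.example.com:5000/v2.0')

    # The client now holds a token, which is what gets passed along with
    # every subsequent OpenStack request to authenticate the user.
    print(keystone.auth_token)

Whether that token is a plain UUID or a signed blob is a Keystone-side detail; the client just carries it along on each request.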
We used to refer to tenants as being something that Keystone owns, tenants being collections of resources in OpenStack, whether they are instances or images or containers in Swift, whatever. We don't actually own any of that, but we own the collections of them, and we own the assignments of roles to users to give them access to those resources.

Service discovery is the last thing that Keystone is responsible for. It's basically the concept that a user comes into OpenStack knowing only the identity service endpoint, Keystone. After they're authenticated, we give them back a service catalog which describes all the services of all the different types, all the endpoints, public, admin, internal, in any region and any availability zone within the deployment. That allows users to decide on their own what endpoints they actually want to talk to. So if they need Nova and Glance in some AZ or whatever, they can figure out where they need to go and go talk to them. The goal being that you only need to know Keystone, how to get to Keystone, and how to talk to Keystone before you get to the other services.

So in Grizzly, actually let me step back a little bit: prior to Grizzly, the token format that we used throughout OpenStack was basically just UUIDs. We generated a unique string and gave that back to the user; it was, I think, 32 characters, quite short and convenient, and it worked pretty well. The thing is, they are technically bearer tokens, and if, say, a token gets logged somewhere and the logs are compromised, then whoever has access to the logs has access to all those tokens, many of which are probably still valid, and then you have basically total control over large portions of an OpenStack deployment. So we got around that a little bit by signing tokens; they're still bearer tokens, so we didn't quite solve that problem. We're talking about that for Havana, and I'll get to that in a minute. The advantage of signed tokens is this: when the user made a request to a service, we were producing quite a bit of network chatter, because again, Keystone owns that token, and the remote services were having to call back to Keystone and ask, is this token valid? Can this user actually perform what they're claiming they can perform? By signing the tokens on behalf of Keystone, other services can verify that signature and make sure that the token is actually valid without actually hitting the network. So it's resulted in quite a bit of a performance gain from a network perspective.

We've had a lot of use cases brought up over the past few releases that we actually could not solve from an API perspective. So since late Essex, early Folsom development, we've been talking about creating a whole new API so that we can start to accommodate some of these new use cases and features that our old API was basically too rigid to handle. So in Grizzly we started implementing Identity API v3, and the next four things on this slide were all enabled by implementing that API. We have full feature parity with the old v2 API. We actually run both APIs side by side out of the box. If you'd like, you can run only v2, or if you'd like, you can run only v3. But again, it's all configurable, all pluggable, and all based on pipelines and things.
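Before getting into the v3 features, here is a rough sketch of what that service discovery looks like from a client's perspective, continuing the placeholder example from before:

    from keystoneclient.v2_0 import client

    # Same placeholder credentials and endpoint as the earlier sketch.
    keystone = client.Client(username='demo', password='secret',
                             tenant_name='demo',
                             auth_url='http://keystone.example.com:5000/v2.0')

    # The authentication response carries the service catalog: every service
    # type, every endpoint (public, admin, internal), and every region in the
    # deployment. The client can pick out, say, the public compute endpoint.
    nova_url = keystone.service_catalog.url_for(service_type='compute',
                                                endpoint_type='publicURL')
    print(nova_url)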
So domains are an interesting concept, probably the first ask we got after Diablo, and it was really something we could not handle. The use case being: in a public cloud, each customer should be able to have multiple projects associated with their account, which got a little bit tricky, especially if the users wanted to, say, give themselves whatever username they wanted, or create users with whatever username they wanted. That makes a lot of sense, and it makes even more sense in a private cloud deployment when you're working within a department or some smaller entity in a larger cloud. So the namespacing issue is one of the first things that brought about domains: the segregation of concerns between two different organizations within a larger cloud, so that they wouldn't have to have entirely separate clouds just to have, say, distinct usernames. There's something else I was gonna say there, so I'll just move on.

So user groups: we now support the ability to group users into arbitrary collections, which you can then perform administrative actions on, such as assigning a role to an entire group rather than to individual users, which is sort of a maintenance nightmare. I think we'd like to extend this in Havana towards, like, being able to delete a group and also have that delete the users in the group. So we're gonna extend the applicability of user groups to other administrative operations. The goal right now was basically to ease role assignment, role management, role verification, and things like that.

So trusts are what we currently call our implementation of delegation and impersonation features. The idea being that one user can express that they trust another user to perform a particular role or a set of roles on a given project. That second user, the trustee, can then turn around and say, okay, I want to consume that trust, and then go perform whatever action that allows them to do. The use case for that being: Nova needs to talk to Glance, and Glance needs to turn around and do something else on behalf of the user without actually using the user's credentials. A user can say, I trust Glance to perform operation X on behalf of me, and then Glance can go off and do that. Impersonation sits on top of that, and you can actually say, I trust this user to perform this operation as me. So the credentials that you're provided by Keystone actually represent the other user, but they're flagged as, hey, you're actually impersonating somebody, here's who it actually is. So we have a little bit of an audit trail there.
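As a rough sketch of how that delegation might look through python-keystoneclient's v3 trust support (the IDs, the endpoint, and the exact argument names here are illustrative assumptions, not anything quoted from the talk):

    from keystoneclient.v3 import client

    # Placeholder admin token and endpoint.
    keystone = client.Client(token='ADMIN_TOKEN',
                             endpoint='http://keystone.example.com:35357/v3')

    # The trustor delegates the 'member' role on one project to the trustee.
    # With impersonation enabled, tokens issued from this trust represent the
    # trustor, but are flagged as impersonation so there's an audit trail.
    trust = keystone.trusts.create(trustor_user='TRUSTOR_USER_ID',
                                   trustee_user='TRUSTEE_USER_ID',
                                   project='PROJECT_ID',
                                   role_names=['member'],
                                   impersonation=True)
    print(trust.id)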
One of the other big focuses in Grizzly, based on a lot of use cases we've had, is external authentication mechanisms. Keystone from the start has always been intended to be very pluggable. One of the deployment scenarios that we see a lot is running Keystone behind something like Apache for load balancing or TLS termination or whatever. Another one of the awesome things that Apache can do for you is, say, HTTP digest authentication. So to allow Keystone to consume that type of authentication, Apache will pass down a REMOTE_USER environment variable, and Keystone can just implicitly trust that; I'll talk a little bit more about that in a minute. But Keystone will trust that and then authenticate the user as whoever Apache says they were, Apache being just one use case for that feature.

And pluggable authentication is something we introduced on top of v3, both at the API level and from a backend perspective. So the identity backend behind Keystone may run on SQL, but you may only want to authenticate users against an LDAP backend. And then, as users authenticate with OpenStack against your LDAP backend, you can actually provision users lazily into your identity backend in Keystone, or update them, or do whatever kind of role management you need as you see new users coming in. So we've made all of that pluggable. We provide two plugins out of the box, one of them for authenticating tokens and one for authenticating username and password. And at the same time, we've also made that pluggable at the API layer. So if you have some other use case, say API keys, we can actually allow you, through the API, to provide whatever credentials you want, and then write your own plugin to handle those credentials and map them back to an OpenStack user. And then it would proceed on as normal: we'd give them a token and they can work with OpenStack.

So for Havana, we've had a lot of discussion about OAuth going back to Diablo, and one of the most common questions we get is, why don't we support OAuth? Our answer has always been, go write it. We finally had somebody write it. So we now have it; it hasn't merged yet, but it's partway implemented, and we have a lot of support behind it. I expect to see OAuth 1.0a specifically; that's what's currently in development. There's a lot of talk about OAuth 2 as well, and perhaps using OpenID Connect on top of that. OpenID Connect is a specification that, as far as I understand it, is still in draft phase and still changing rapidly. So I'm not quite sure what we'll see there in Havana. It may not actually land in core, but you may see implementations floating around. And we've also been asked, specifically by CERN, to support X.509 certificate validation, which I think we are going to make pluggable through the remote user interface, because, as I understand it, Apache already supports that. So all we need to do is take advantage of it, but it requires making the way we handle remote users also pluggable. So that'll be a new feature coming soon, because it's actually easy for us to do.

So we've implemented a whole new API. We actually have a client that has parity on most of those features, but we don't have full client support throughout OpenStack yet. Specifically, our middleware support is a little bit lacking. We can validate v3 tokens and things like that in middleware, but we don't have full parity on the CLI yet either. We started another OpenStack project, probably sometime during Folsom development, called OpenStack Client, and its goal is to own the CLI, basically. So we've implemented a Python library that exposes all of the v3 features; we just haven't exposed them on the command line yet. OpenStack Client will be our way to do that, and I'm hoping it will reach sort of an alpha state sometime real soon. And then a lot of the v3 features that we've worked on have not been exposed in Horizon yet, such as domains. So when you log into Horizon, Keystone actually makes the assumption that you're only working with one specific domain, which is configurable in the backend, and we call that our default domain. You can start working with other domains in Grizzly; Horizon just won't be able to express that back to you and will be scoped to one domain. So we're gonna be working a lot with the Horizon guys to figure out how to expose multiple domains in the UI and things like that.
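Going back to that pluggable authentication point for a second, here is a very rough sketch of the idea; the class, the method, and the identity-backend call are all hypothetical names for illustration, not Keystone's actual plugin interface:

    class ApiKeyAuthPlugin(object):
        """Hypothetical plugin that maps an API key credential to a user."""

        def __init__(self, identity_api):
            # identity_api stands in for whatever identity backend driver
            # the deployment has configured (SQL, LDAP, custom, ...).
            self.identity_api = identity_api

        def authenticate(self, auth_payload):
            # The API request carried some arbitrary credential payload;
            # the plugin's job is to resolve it to an existing user.
            api_key = auth_payload.get('api_key')
            user = self.identity_api.lookup_user_by_api_key(api_key)
            if user is None:
                raise ValueError('Unauthorized: unknown API key')
            # Hand back the user identity so the normal token flow proceeds.
            return {'user_id': user['id']}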
Event notifications are something that we've also talked about for a long time, and again, Oslo I think has made this a lot easier for us recently, so we're gonna go ahead and do it. The use case is where, say, a tenant is deleted in Keystone but its instances are still running, because Keystone doesn't have any way to notify other services that events are taking place and that they need to respond somehow. So we're gonna start sending out notifications on basic CRUD operations in Keystone, tenant deletion being the significant use case there.

I should also apologize: we are effectively using the terms tenant and project interchangeably during this release, so that's probably really confusing. Historically they've been called tenants, and in the v3 API we decided to change the name to projects. It's just sort of a nomenclature issue we've had; I think we're trying to resolve it, and everyone will be speaking in terms of projects by next release.

Availability zones and region management: Keystone supports the concept of regions, and Nova and, I believe, Cinder support availability zones. It's really a feature that should belong to Keystone. So we just had a session on collapsing that into sort of a hierarchical region construct, and we'll support that in Keystone moving forward.

Everything on this slide is either being discussed right now or this afternoon, so I can just kind of give you some hints as to what we're talking about, but really no decisions or strong backers have been seen on most of this stuff, except for key management. Key management, I believe, is going to be a separate service, possibly proposed for incubation sometime during Havana, so I don't want to talk too much about that. It's something that people have talked about putting into Keystone, and it seems like something Keystone should own, but in reality the expertise and the contributors behind it are a whole distinct group, and from a deployment scenario it makes sense to, say, load balance Keystone and a key manager service separately. So we just had a session on that, and that was kind of the outcome of it.

LDAP integration, obviously one of our strong priorities. We're constantly seeing more and more complicated use cases for LDAP, the provisioning-users case being one. So one of our goals for LDAP is to simply publish some real simple plugins that maybe don't make sense in a vanilla OpenStack install, but would make sense for a more complicated LDAP deployment to, say, pick up and then modify just a few bits, like: I actually want these other roles assigned to a user when I first see that user.

Centralized quotas: something else that makes sense to centralize in Keystone, because we kind of are the central service, and yet every other service seems to be implementing their own quota stuff. So we're having a discussion on that this afternoon.

Secure endpoint-to-endpoint communication refers to perhaps yet another token format, or another authentication and authorization mechanism within OpenStack. OAuth 2 and OpenID Connect may be the solution to that, but we're still talking about it. Ultimately it would mean strong encryption between services within OpenStack, and tokens that can only be read and consumed by the service they were intended for.
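Coming back to those event notifications for a moment, here is a purely illustrative sketch of what one might look like and how a consuming service could react; the event name and payload shape are assumptions, since none of this is defined yet:

    # Purely illustrative: a hypothetical project-deletion notification.
    notification = {
        'event_type': 'identity.project.deleted',
        'payload': {'resource_info': 'PROJECT_ID'},
    }

    def handle_identity_event(event):
        # A service such as Nova could tear down resources owned by a
        # project that Keystone reports as deleted.
        if event['event_type'] == 'identity.project.deleted':
            project_id = event['payload']['resource_info']
            print('would tear down resources for project %s' % project_id)

    handle_identity_event(notification)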
And fine-grained access control refers to the idea of bringing in resource groups, I think, is the proposal: subdividing tenants, and perhaps crossing tenants, into resource groups for the resources they own, which can then be managed via policy in Keystone and yet enforced in the remote services at a much more fine-grained layer. So you may be able to express capabilities that users have on, say, specific containers or even specific objects in Swift, or volumes in Nova, or anything like that. Or Cinder, sorry.

That's it. Does anybody have any questions? I'll pass it off to Mark.

For some reason my laptop has decided not to do VGA anymore. So I guess, just to get started, to introduce myself: I'm Mark McLoughlin. I'm from Red Hat and I'm the Oslo PTL. I've been involved with OpenStack for about 18 months now. As well as Oslo, I guess I run the stable branch effort, I'm on the technical committee, and I'm on the foundation board. And I work on Nova whenever I get a chance.

Cool. Okay, so this is how I describe Oslo, the kind of mandate for Oslo. It's a program, we're starting to call it a program now instead of a project, I'll explain that in a bit, to create some libraries, Python libraries of shared infrastructure code for OpenStack. We're aiming to have APIs that are stable, high quality, consistent, and generally useful; and stable, the one in bold, is probably the most important aspect of the APIs we're trying to build here. That's influenced how we're trying to build them, and I'll explain all that in a minute. But basically we're trying to create a bunch of libraries that all of the OpenStack services can use for implementing stuff like REST APIs, reading configuration files, logging, all of the basics of implementing an OpenStack service.

So thank you very much. It's a Mac keyboard. What are you doing? I actually don't know what you're doing. How do you people use these things? Anyway, okay. The arrow keys don't work, no.

Okay, another way to think about what we're trying to do in Oslo is that we're fighting copy and paste. Basically, when the project got started and we built a bunch of services, we all borrowed kind of similar code from different projects, and those code bases kind of evolved separately. So every project basically does all of these things, like REST APIs and logging and configuration management, similarly but differently. And it's all a big mess of copy and pasted code, and Oslo is trying to clean that up. So it's, you know, technical debt we're trying to clean up here. Hm? Two fingers. Oh, two fingers.

And the other aspect of this is we're kind of making this up as we go along. This isn't so much a technical problem, but kind of a more meta, project-building, community-building problem, right? We're trying to get people interested in this technical debt problem, working together, collaborating to solve that problem. And it might seem simple to develop an API, but to build a community from all of these separate projects that want to work on cross-project issues, that's the hard part of what we're trying to achieve here.

Okay, so I've started calling it a program this week because of an idea Monty Taylor had, and the idea is that rather than being like a project with just a single Git repository, it's actually multiple projects under an umbrella, the Oslo program. And what's in common there is the people working on the program, right? So we've got a crew of generalists.
So we've got an Oslo core team who basically are, you know, generalist Python developers who are interested in, you know, everything that might come into Oslo, and they're, you know, good at things like designing Python APIs or whatever. Then we've got kind of specialist developers, like guys who are interested in, you know, REST APIs or guys who are interested in messaging and that kind of stuff. So we've got this combination of generalists and specialists on the project, and what's uniting them is this cross-project focus. They're not just interested in Nova, or they're not just interested in Keystone; they actually want to work together on cross-project issues. I'm getting good at this.

Okay, so we've come up with this idea of having an incubator repository, right? And the idea here is we have this problem with copy and paste debt, and it might seem like a trivial problem to take all of that code and make a library out of it. But in order to produce a library, you need to have a stable API, and for the code that's littered across these projects, there is no well-defined, nicely designed, stable API for all of these things that we'd be able to evolve in a backwards-compatible way. So we need to get from where we are now to a nice, stable, clean library, and what happens in between? What happens in between is what this incubator repository is about. So we take, say, the code from Nova, import it into the Oslo incubator, and we evolve it there until it's ready to be released as a library. And as we're evolving it, we're also making other projects standardize on that code. And as we get projects to standardize on that code, we're learning about problems with the API design or whatever, and that's helping us clean up the API design. So we have a git repository called oslo-incubator, and that's what holds all these work-in-progress APIs. But no API is intended to stay in that repository forever; it's a stepping stone.

So when an API moves out of the incubator, we do a library release. And the current, the initial thinking anyway, was that we'd have a Python namespace package called oslo, and then we'd have little libraries, packages that install into that namespace, basically. So the first one was oslo.config; we'd have oslo.rpc. Each of these libraries is a separate project, but their schedule, like their release management schedule, is aligned with the server releases. And that's because these are all APIs that are used by the server projects, so their development cycle is naturally aligned. The versioning scheme we're using for the libraries is like 1.1.x, 1.2.x, so semantic-style versioning, basically.

And the caveat here is we might actually do this differently for some of the libraries in the Oslo program. A lot of the code we're talking about here is kind of OpenStack-specific code, so the library would naturally be kind of OpenStack-specific, at least in the way we implement this stuff. But there is some stuff that I think won't be so OpenStack-specific, and we will want to try and promote usage of those libraries outside of OpenStack. So it's things like build tools, for checking coding style and stuff, or how we do packaging of Python code. Those libraries may actually not be named using the Oslo name, just to not have a kind of branding issue where other projects won't use them because they think they're OpenStack-specific. And the final meta issue, I guess, is that Oslo is not an acronym, so please don't treat it as one; just a little bugbear to bear in mind.
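To give a flavor of what using that first released library looks like, here is a minimal sketch with oslo.config; the option names and the project name are made up for illustration:

    from oslo.config import cfg

    # Define a couple of options; the names and defaults here are examples.
    opts = [
        cfg.StrOpt('bind_host', default='0.0.0.0',
                   help='Address the service listens on'),
        cfg.IntOpt('bind_port', default=9999,
                   help='Port the service listens on'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(opts)

    if __name__ == '__main__':
        # Parses the command line, including any --config-file arguments.
        CONF(project='example')
        print(CONF.bind_host, CONF.bind_port)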
So in Grizzly, what did we do in Grizzly? I guess in Folsom we were just calling this project openstack-common, and we renamed it to Oslo just because we like our code names. We renamed the repository from openstack-common to oslo-incubator, and that was to reinforce, you know, to really get across this message that this repository is a stepping stone for APIs as they're being evolved. We settled a debate between hyphen and period separation for the names of these things, and that was very important. And we set up a versioning scheme: we were going to go for the same versioning scheme, the date-based versioning scheme of the server releases, but instead went for semantic-style versioning. And we released our first library package, oslo.config, which, yeah, yeah.

So, okay, so that's all meta stuff, right? When Dolph was talking you heard, like, technical stuff, and what I've talked about is all meta, kind of project management stuff. There is actually some technical work on this project, and in Grizzly, what did we do? Well, we took the database code from Nova and we've imported it into the incubator now. I'm not 100% sure whether other projects are using it yet, but that'll be happening very early in Havana. Database code, like session code for connecting to the database, nothing to do with specific schemas. There is a base class for your base model with some helpers in it, but no schema code. There's also the rootwrap utility, which, if you know OpenStack, is basically how a service briefly elevates its privileges to root to perform some system operations. That started off in Nova, but it's now been imported into Oslo, and I think at least Quantum and Cinder use it, and it's in pretty good shape now. Some of the other stuff we've imported from Nova is the kind of service infrastructure, so how a service starts up and how it spawns off worker processes and that kind of stuff.

We had another interesting case in terms of copy and paste problems: I guess splitting the Cinder project out of Nova was just the most extreme possible example of a copy and paste issue. We literally took Nova, duplicated it, and deleted some stuff. So there's a whole lot of stuff common between Nova and Cinder, and that's all in scope for going into the Oslo project. The first example we have of that, really, is some of the code from the Nova scheduler got imported into Oslo, and Cinder is now reusing that.

Dolph mentioned our policy engine, and there's a new policy language for that that came in Grizzly. In terms of the configuration API, we ported it from optparse to argparse, which is the future. And we've added some support for being able to move configuration options from the default, the one big DEFAULT section in the configuration file, to more specific sections, just in terms of being able to group configuration options to help users make sense of Nova's 500-odd configuration options. On the RPC side of things, we've versioned our on-the-wire message format so we can actually make compatible changes to that wire format. We've added this single response queue feature, which is a performance improvement, especially where you're using clustered brokers. And we've also added HA support to the Qpid driver.

And all in all, that was 18 blueprints, 100 bugs, 320 commits, and I couldn't believe it when I saw 73 contributors to the project in the last release. I had no idea. It's one of those long-tail things where there must have been 30 people who contributed maybe one patch each. So I think that was really positive.
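Going back to the configuration work for a second, here is a quick sketch of that option-grouping support in oslo.config; the group and option names are just examples:

    from oslo.config import cfg

    CONF = cfg.CONF

    # Register options under a [database] section instead of piling
    # everything into the one big [DEFAULT] section.
    database_group = cfg.OptGroup(name='database',
                                  title='Database connection options')
    database_opts = [
        cfg.StrOpt('connection', default='sqlite:///example.db',
                   help='SQLAlchemy connection string'),
    ]

    CONF.register_group(database_group)
    CONF.register_opts(database_opts, group=database_group)

    if __name__ == '__main__':
        CONF(project='example')
        # Read back as CONF.database.connection, i.e. [database] connection=...
        print(CONF.database.connection)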
So, plans for Havana. We've released one library now, the configuration library, and we should be able to do a bunch more quickly because we now understand the model and how to do it. We'll hopefully do the messaging, the RPC library, this release, but we have a bunch of work to do to clean up that API before we can do that. So I'm thinking easier ones might be logging, rootwrap — rootwrap seems in good shape — maybe some translation utilities. What you'll also see happening in the Havana release is new packages for how we do Python code packaging and our kind of cross-project build and coding style compliance checking. You'll see a lot more work, hopefully, on the database side of things and more projects adopting Nova's database infrastructure code. And hopefully you'll also see the service infrastructure code being adopted by other projects. Hopefully a lot more work on moving Nova's and Cinder's scheduler code into the Oslo incubator too. The last one is WSGI, which is how we implement our REST APIs; that's a key part of all of the OpenStack services and all done quite differently in each project. There's no one currently actively working on cleaning all that up, so I'm putting that up there as really kind of optimistic; maybe someone will work on it in the future.

So that's me. Any questions on Oslo?

So it will be separate libraries? Yep. Yeah, so once a project moves out of the incubator, it moves into its own Git repository and its own separate library package on PyPI. No, the reason we do incubation is because, basically, before I'm willing for us to release a library, I want to be sure that we can commit to API stability, because I don't want to be making... These APIs are going to be... Before they're properly cleaned up, there's going to be a ton of incompatible changes to these APIs, and I don't want to do that with incompatible releases on PyPI.

You're going to use this for a period of time, right? Yeah. Oh, absolutely, yeah. But I guess the incubator will always exist, because there's always going to be new technical debt that we're trying to... So this is never going to be a finished problem, so we're going to be attacking it all the time. No, no, it's moved out of the incubator now and it's gone, yeah. Oh, absolutely, yeah, yeah, yeah.

Okay, so two questions there. The first was, how does someone find out about all the projects available? Good question. I guess I need a better wiki page for the Oslo project. Apart from that, I kind of guess when someone's working on a new service, they're going to be looking at what all the other services are doing and seeing what libraries they're using; that's kind of the way I expect this to happen, but I'll do a better wiki page describing the projects available. And in terms of, is the scope limited to code that's shared across projects, or is code that's just used by one project in scope? No, it's definitely code shared between projects; that's basically the definition of the scope of the Oslo program.

Any other questions? Python 3, yeah, absolutely. I wasn't in the Python 3 design summit session, but my understanding is they're starting with Oslo just because it's a smaller project, and I guess it has a lot of impact in that it's a cross-project issue.
So I should have mentioned this: I expect to see someone hopefully working on Python 3 support in Oslo, in baby steps. This Python 3 issue is a big problem, and any baby steps are progress. So, any other questions? Cool, thank you very much.