So, my name is Lance, I'm the current Keystone PTL, and I'm joined by Colleen Murphy, the newest addition to our Keystone core team, and today we're here to talk to you about the Pike project update. Quick outline: we're going to give a brief introduction to what OpenStack Identity is and what roles it fills within OpenStack deployments. Then we're going to go into what we accomplished in Ocata, what we want to achieve in Pike, and then look down the road to the next couple of releases. We should have plenty of time for questions at the end, so feel free to use a microphone; if you have feedback or questions, we'll take those at the end. So now I'll hand it off to Colleen for a brief intro. Okay, so if you're new to OpenStack and you're not sure what we're actually talking about when we say OpenStack Identity, we'll go over that really quickly. The Keystone project is the shared service for authentication, authorization, and auditing. It's the project that's responsible for users: making sure users are who they say they are, that they have permission to do what they want to do, and for leaving an audit trail for security managers to review. It supplies identity information to end users and services, so it implements that auth layer so other projects don't have to deal with it. And it acts as a broker between OpenStack and other identity services, so whether that's your LDAP directory or a federated single sign-on service, it can translate users in those external services into users that can use OpenStack. As of the last user survey, it had a 98% adoption rate, meaning it is possible to deploy an OpenStack cloud without Keystone, but you really wouldn't find that very much in the wild. So now I'll go into what we accomplished in the last cycle. One of the big things that we got done in this cycle was easing the burden of long-running operations.
So for a long time, operators had the problem of needing to run some sort of long-running operation that involves service-to-service communication. In the middle of the operation, the token might expire, and that would cause the whole job to be interrupted. People were working around this by increasing the lifetime of tokens, but that didn't always work, and it's also inherently less secure. So what we've done to help ease this problem is we're allowing services to present users' just-expired tokens to other services in conjunction with a special service token. That way, if a user starts a job with a valid token, they can finish the job with the same token. One of the major changes that came in this cycle was making the Fernet token format the default format. The traditional UUID tokens were random strings stored in a database, and they were validated by just reading from the database. The Fernet token format was introduced in Kilo, and it's a non-persistent format, so there's no interaction with the database. There's no need to replicate across clusters, and that helps improve scalability. It lowers your database traffic, so you're not putting so much load on your database servers. And it's just easier database management, because you don't have to clear out these token tables anymore. We were concerned about changing the default to this improved token type, because it does require a little extra setup, and we didn't want to surprise operators with this. But at the Austin Summit, we got feedback that when operators plan upgrades, they plan them very carefully. They read all of the release notes, and config management helps take care of all these problems. So we got positive feedback that changing this default was going to be a net positive for everybody. And a lot of the work we did to make Fernet ready to be the default token provider forced us to think more carefully about how we're dealing with tokens.
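The service-token mechanism described above can be sketched as: the calling service forwards the user's (possibly just-expired) token alongside a fresh, valid token of its own, and the receiving service accepts the pair. The header names below match what keystonemiddleware uses; the token values are placeholders.

```python
# Sketch of service-to-service request headers: the user's original
# token is forwarded even if it has just expired, together with a
# fresh, valid token belonging to the calling service itself.
# Token values here are placeholders, not real tokens.
def build_service_headers(user_token, service_token):
    return {
        "X-Auth-Token": user_token,        # the user's (possibly expired) token
        "X-Service-Token": service_token,  # proves a trusted service is calling
    }

headers = build_service_headers("user-token-placeholder",
                                "service-token-placeholder")
```

The receiving side only honors an expired X-Auth-Token when the accompanying X-Service-Token validates and carries the configured service roles.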
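The idea behind a non-persistent token format can be illustrated with a toy example. This is not Keystone's actual Fernet implementation (which encrypts the payload using keys from a repository set up with `keystone-manage fernet_setup`); it is a simplified HMAC-based sketch of why no database is needed: everything required to validate the token travels inside the token itself, plus a shared key.

```python
import base64
import hashlib
import hmac
import json
import time

# Toy stand-in for Keystone's key repository; Fernet actually
# encrypts rather than just signing, and rotates keys.
SECRET = b"shared-signing-key"

def issue_token(user_id, project_id, ttl=3600):
    """Pack identity and expiry into the token itself; nothing is stored."""
    payload = base64.urlsafe_b64encode(json.dumps({
        "user_id": user_id,
        "project_id": project_id,
        "expires_at": time.time() + ttl,
    }).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def validate_token(token):
    """Validate with no database lookup: recompute the MAC, check expiry."""
    payload_b64, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload_b64.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    data = json.loads(base64.urlsafe_b64decode(payload_b64))
    if time.time() > data["expires_at"]:
        raise ValueError("token expired")
    return data
```

Because validation is pure computation, there are no token tables to replicate across clusters or to flush out later.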
And that allowed us to make smarter use of token revocation: simplifying how we were validating tokens allowed us to clean up a lot of unnecessary revocation events and help reduce the flood of notifications. Another thing is we improved the usability of our PCI DSS feature. In previous cycles, we'd enabled account controls to help satisfy PCI data security standards, and this cycle we built on that by making it easier to use. We created an API for password complexity requirements, which makes it easy for tools like Horizon to query and display password requirements, and also to do some client-side validation if they want to. We created an API that enables tooling for admins to query for users with expired passwords, or passwords that are about to expire, so they can notify users that that's happening. And we enhanced PCI-related notifications by adding reasons for the notifications. So for example, if a user is locked out of their account because they've had too many failed authentication attempts, the reason for that happening is more obvious, and the admin can notify the user of what's going on. There's going to be a talk on Thursday at 4:10 in room 311 all about this PCI DSS feature. So if you're interested in that, if you need that for your organization, that's going to be really informative for you. All right, another thing: we enabled multi-factor authentication. We now have the ability to enhance user account security by requiring, on a per-user basis, multiple authentication mechanisms. For example, requiring both a password and a time-based one-time passcode. We sort of avoided adding this for a while; we were punting the responsibility off to federated identity services. But we got feedback that users really wanted this for native Keystone users, so that's something we were able to accomplish in this cycle. Federated auto-provisioning.
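The multi-factor authentication described above shows up in the v3 API as an auth request listing more than one method. For illustration, here is roughly what such a request body looks like when a user must present both a password and a TOTP passcode (the user name, domain, password, and passcode values are placeholders):

```python
import json

# Keystone v3 auth request combining two authentication methods.
# All user-specific values below are placeholders.
auth_request = {
    "auth": {
        "identity": {
            "methods": ["password", "totp"],
            "password": {
                "user": {
                    "name": "alice",
                    "domain": {"name": "Default"},
                    "password": "example-password",
                }
            },
            "totp": {
                "user": {
                    "name": "alice",
                    "domain": {"name": "Default"},
                    "passcode": "123456",  # from the user's TOTP device
                }
            },
        }
    }
}

body = json.dumps(auth_request)  # POST body for /v3/auth/tokens
```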
So what that means is there used to be no real straightforward way of assigning federated users roles and projects. We were able to do it via groups, and some people had hacks to pre-populate the database with IDs for federated users before the federated user had logged in. But we've now enhanced our mapping rules to make it a first-class feature to link federated users to projects before they've had to log in, and even dynamically create projects if we need to. And the last thing here is we've made the V3 API the default in our integration gate testing. The V3 API is the domain-aware API. This has been a long time coming, because all the OpenStack projects have for a long time made hard-coded assumptions about the old API. So we finally chased all those down and resolved them. And having this gate tested is going to help ensure stability of this API and get us further down the road of deprecating the old V2 API. And so with that, I'm going to hand it back off to Lance, who can talk about what's coming up in the future cycles. Cool, so now that we know what we accomplished in Ocata, let's focus on what we're working on in Pike. One of the first things that we did as Pike development opened was move all of our default policies into code and register them there. As a result, we're also documenting them better. We understand that there are a lot of issues with policy, and there's going to be a long road to fixing that. We're making progress on it, but what this does today is make it easier for operators to maintain their policies and not have to keep duplicates of our default policies in their files. They can trim those down and run with only the overrides that they absolutely need. I'll touch on how that sets us up for some future policy work that we're going to be looking at in the next couple of releases.
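The policy-in-code change can be sketched conceptually: defaults live in the service's own code, and the operator's policy file only needs to contain the rules they actually override. This is a toy illustration with plain dictionaries, not oslo.policy's real registration API; the rule names and check strings are representative placeholders.

```python
# Conceptual sketch of policy-in-code: the service registers its own
# defaults, so the operator's policy file shrinks to just the deltas.
# Rule names and check strings are illustrative placeholders.
REGISTERED_DEFAULTS = {
    "identity:get_user": "rule:admin_required",
    "identity:list_users": "rule:admin_required",
    "identity:create_user": "rule:admin_required",
}

def effective_policy(operator_overrides):
    """Operator overrides win; anything unspecified falls back to defaults."""
    policy = dict(REGISTERED_DEFAULTS)
    policy.update(operator_overrides)
    return policy

# The operator's file is now just the override, not a full copy of defaults.
overrides = {"identity:list_users": "rule:admin_or_auditor"}
policy = effective_policy(overrides)
```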
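Going back to the federated auto-provisioning mentioned above: it is expressed through the attribute mapping rules. A mapping that links an incoming federated user to a project (creating it if needed) might look roughly like this; the project name, role name, and remote attribute are placeholders, and the exact schema should be checked against the Keystone mapping documentation.

```python
import json

# Sketch of a federation mapping rule that auto-provisions a federated
# user into a project with a role. Project name, role name, and the
# remote attribute below are placeholder values.
mapping = {
    "rules": [
        {
            "local": [
                {"user": {"name": "{0}"}},
                {
                    "projects": [
                        {
                            "name": "Production",
                            "roles": [{"name": "member"}],
                        }
                    ]
                },
            ],
            "remote": [{"type": "REMOTE_USER"}],
        }
    ]
}

serialized = json.dumps(mapping)  # body for the mapping create/update API
```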
If you were at the PTG in Atlanta, we had a lot of good discussion around quotas, and it all boiled up to a unified limits implementation. Since Keystone is the thing that knows about projects, it makes sense to associate the limits with those projects in Keystone. What that's going to set services up for later on is consuming that limit information to make quota usage consistent across OpenStack. We've had a lot of really good discussion about it. We're in the process of proposing a spec and getting that implementation fleshed out for Keystone, and that's something we're going to be targeting for this release. Another thing that we want to do with projects is add the concept of tags. So if you've used Nova or Neutron and used resource tags, you're going to be doing that same thing with projects. We're only targeting the projects resource initially, but it's a big use case; we've had it come up several times. So we'll be doing that in Pike. Federation integration testing is a big one. The last couple of releases we've been working on getting the plumbing in place to do integration testing; that's there. So now we really need to increase that test coverage and make it better. Going hand in hand with that, Keystone has supported rolling upgrades since Newton. The last thing that we really need to do to assert that tag is to get it tested in the gate. The good news for us is that OpenStack-Ansible and some of the other communities in OpenStack have done a lot of great work to get that integrated into their gates already. So we can reuse a lot of that code to get every patch that's proposed to Keystone tested in a rolling-upgrade fashion using containers. That's really exciting: we can stop breakages before they happen, or before they merge. So that's a big accomplishment that we're looking forward to in Pike. So let's look ahead to the next couple of releases.
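The unified limits idea can be sketched as: Keystone holds a registered default limit per resource, plus optional per-project overrides, and services read the effective limit from that one place. This is a toy model, not the actual Keystone API (which was still going through spec review at the time of this talk); the resource and project names are placeholders.

```python
# Toy model of unified limits: one registered default per resource,
# with optional per-project overrides stored alongside the projects
# Keystone already knows about. All names/values are placeholders.
REGISTERED_LIMITS = {"cores": 20, "ram_mb": 51200}   # service-wide defaults
PROJECT_LIMITS = {("acme", "cores"): 100}            # per-project overrides

def effective_limit(project_id, resource):
    """A project-specific override wins; otherwise use the registered default."""
    override = PROJECT_LIMITS.get((project_id, resource))
    return override if override is not None else REGISTERED_LIMITS[resource]
```

A service enforcing quota would call something like `effective_limit(project, "cores")` instead of keeping its own copy of limit data, which is what makes quota behavior consistent across services.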
Something that we need to do is provide some sort of standardized default rules, and we can't do this as just Keystone; it's not just a Keystone problem. We need to talk about this across OpenStack services and really come together and start solving it as a cross-project initiative. A lot of the other projects are really starting to see that, and that's going to be a great thing to eventually get out the door. OpenStack has evolved a lot in the last five years; the policy that we use hasn't. So that's going to be a really good thing to have. Along that same vein, we want to improve policy security. If you know the admin issue that we have (Adam's sitting right over there; bug 968696, yeah), you're either giving up all the keys to the kingdom or you're not giving users enough delegation to do the things that they need. And I would argue that both of those violate the principle of least privilege, so we need to fix that. I think our first step is defining what that end state for policy looks like in five years, in two years, right? And sharing that with other projects so that we can start making bite-sized increments to make that thing happen. We've had a lot of good progress on it; we've dedicated meeting time every week to just talking about issues with policy. So that's going to be something that we're really looking forward to in Queens and Rocky. We also have a talk on a proof of concept that Adam is working on; catch that tomorrow at 4:30 in room 311. Yep, so if you want to see it, if you want to give feedback, we'd love to hear it. Go check it out. Another thing that I touched on with the Pike release is hierarchical limits and quotas. This is really going to set things up for services to consume this information in a release or two and really smooth out a lot of the quota usage across OpenStack.
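The "keys to the kingdom" problem (bug 968696) boils down to an admin check that ignores scope: a role of admin anywhere grants admin everywhere. A toy contrast between an unscoped check and a scoped one, illustrative only and not oslo.policy syntax:

```python
# Toy illustration of bug 968696: an unscoped admin check grants
# admin on every resource, while a scoped check also requires the
# token's project to match the project owning the resource.
# Role names and project IDs are placeholders.

def unscoped_admin(token):
    """The problematic check: any admin role, on any project, passes."""
    return "admin" in token["roles"]

def scoped_admin(token, resource_project_id):
    """The least-privilege check: admin role AND matching project scope."""
    return ("admin" in token["roles"]
            and token["project_id"] == resource_project_id)

token = {"roles": ["admin"], "project_id": "team-a"}
```

Under the unscoped rule, the holder of this token could administer `team-b`'s resources too, which is exactly the over-delegation being described.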
Which is another big problem that's OpenStack-wide that we can focus on and help enable other projects to make better. And so, my shameless plug for John Maxwell: if you're familiar with his stuff, or if you're familiar with the 101% principle (find the 1% you agree on and give it 100% of your effort), I've found it really applies to what we're seeing with quota and what we're seeing with policy. As these project development teams are noticing these problems, we're coming together, we're finding common ground on things that we agree on, and we're dedicating effort to those things. We're not focusing on the conflict, and it's really paid off in the last release or two, because we're making good progress on things that have been issues in OpenStack for the last five years. For me that's really exciting; that's going to be a lot of fun to continue doing in the next couple of releases, and I'm looking forward to it. One other thing that we talked about in Atlanta was API keys. With all the federation work that we've been doing, we've been doing a good job of trying to isolate the authentication from the identity, right? And so as we took a step back in Atlanta and looked at all that, we saw a really good seam to natively support API keys, which is going to give people more security: you don't have to have passwords in config files and all that kind of stuff. There are a bunch of really good benefits to it. We have a spec proposed. I don't think it's going to make it into Pike, but we certainly have it staged for a subsequent release. In addition to that, one of the good federation use cases is native SAML support. We have a proof of concept up for this. We had one of our developers do a bunch of great work with SAML; he understood it and put together a great proof of concept.
And so that's something that we want to incorporate in a subsequent release as well. Where this really makes sense is if you have a customer and you give them a domain, and you want to give them domain admin, and you want them to manage their own identity providers. They have different sources of identities for their employees, and you want them to come in via different ways. They don't have to ask the cloud admin to go re-roll Apache configurations to make that happen anymore. Instead, they can do it over the API, which is just going to be another alternative to the Apache stuff. If you're using Apache to do federation today, I think that's another useful tool that we can give operators for their federation toolbox. In addition to that, with the concept of API keys, I kind of talked about isolating authentication from identities. With OpenStack deployments today, it's becoming more possible to authenticate via federation, authenticate via LDAP, and maybe you have a local user in your SQL database. We want to try to find a way to link those accounts so that you can come in through different methods of authentication and still be tied back to the same resources. That way you're not duplicating your identities. And as always, continuous integration testing. That's important to us, like performance and stability; those things are key tenets that we need to be focused on as an auth project. So building that out, now that we have a good framework in place, is paramount, and that's going to be a recurring theme as we keep moving forward. So with that, we have time for questions. If you have any questions, feel free to use the microphone; otherwise, we can repeat them. The native SAML stuff: you mentioned that you have a customer and you give them a domain. But I'd argue that it's not just for customers if you're some kind of a provider. I mean, for us, we want to do that just with our own internal users and projects.
And especially when we get API keys, they need to be able to manage that themselves. Absolutely, that's a great point. It's not just useful to domain admins or people that you're delegating that responsibility to; it's a much easier mode of operation for operators, too. Maintaining your own stuff, you don't have to re-roll a cloud in order to add an identity provider. Just on the changes that went into Ocata around the federation stuff, on the mappings: in my testing, I noticed that the IDPs have to be unique per domain. Actually, unique in the whole service, essentially; it's not just unique per domain. Is there a way that can be worked around, so that we can still use domains, still use federation, but just have a single Apache config? OK. Well, we had to associate domains to an IDP in order to make that something that could happen. But I'm super interested to hear your use case for that. We'll just have multiple domains and everybody using the same IDP, essentially. OK. Yeah, that would be something that I would like to sit down and work through a little bit more. If you could say something to people that are looking to contribute, what would it be? Well, we have our project onboarding session tomorrow. I think it starts at 11. Upstairs; I forget the room. But come to that; I'll be there. So that'd be great. And we need contributors. We have a bunch of really cool work staged, and we need the hands to do it. So feel free to come stop by our IRC channel. We're always willing to help and get new people involved. Awesome. Thank you.