OK, I guess it's time to get started, and we're going to try to push Keystone over the edge. I'm Adam Young, Cloud Solutions Architect with Red Hat. Yes, it's true, I went over to the dark side; I'm in sales now. But I have been doing Keystone since before it went through the incubation process. It was during the Diablo release when an old Stacker friend of ours decided to rewrite it from Java-style Python into real Python, as he said, out of spite. If any of you know who that was — he must not be named. My co-presenters here have their own mics. I'm Kristi Nikolla, a software engineer with the Mass Open Cloud. I've been doing Keystone for almost three years now. And I'm Morgan Fainberg, an engineer with Red Hat. I've been doing Keystone almost as long as Adam has; I joined and started working on it around Diablo. Yep.

So when talking about the edge, probably the most important thing is to find a really good edge to push Keystone over. Here we have 3,000 feet of unbroken granite from top to bottom: El Capitan. I figure if we're going to push it over an edge, this is about as good an edge as we can find.

What is the edge? I actually realized I was supposed to cut in at the end of this — or should I just keep going? OK, I'll keep going. The edge, what we're talking about, is where you have to talk to some remote site. The examples I've been given time and again are a cell phone tower, or a car, or a set of registers at a retail store. And of course I now work with the financial services industry, so I had to struggle to come up with one for that: a single ATM, although I don't know if anyone runs OpenStack on ATMs. That's to give you the idea of what we're focusing on here: you've got some system where you can't assume that all connectivity is there all the time. You might have network access, but you might not have access to every network-based service in the organization at every moment.

Let me move to where I can see the slides. Is this mic off? No? OK. So we have to have some sort of hierarchy. We have to have some way of reaching up higher and getting delegation to come down the hierarchy, and we need some way to do failover. Jumping straight into a detail: the Keystone in the middle, the piece that sits in the middle, needs a way to find some other authority to authenticate against. That's a big part of what we're going to be talking about here: how to deal with not just the edge itself, but the vagaries it gives you as things go back and forth.

So let's try to get a sense of the sizes we're talking about. One of the working numbers I've used when talking about sizing is roughly 256 nodes per control plane. The problem is that once you start doing this, you start looking at the number of nodes we want to handle in OpenStack. I've always talked about the million-node data center, and talking in chunks of 256, it gets hard to get to that number. If you're using Galera to run Keystone across multiple sites, and you put three Galera nodes at each site in a 12-node Galera cluster, you can have about four distinct sites. So we're nowhere near where we need to be. Well, to add on to that, we've explored the concept of stretching a Galera cluster across many sites, and realistically the replication works until the high single digits of sites.
You realistically can say, hey, I can get to maybe nine to twelve sites, three Galera nodes apiece, and have a cluster of that size. After that, the replication starts becoming really, really obnoxious and slow, waiting for the ack from every node saying, yes, we've in fact got the data. And so what I did below that is say, OK, if that's the limitation — and I did it with four sites, because I was being conservative — we're at about 30,000 nodes that we can put together, if you put another layer on top.

Now, my definition of OpenStack, as many of you have heard me say in the past, is Keystone plus a bunch of entries in the service catalog. You can take out any single piece of OpenStack and things work the same. But that whole pattern — go to Keystone, get a token, find out what's in the service catalog, go to a service to do something — that workflow has to remain viable, or there is no consistency in what OpenStack is. You can even remove Nova from that equation and things still make sense; think standalone Cinder, or anything like that.

So now, as we start looking at this, one of the things that came up — and yes, those are Morgan's dogs — is the request that says: we have this data center, it's our special one, all the servers are made out of gold, and we have gold customers who are allowed to use it, and they should be able to use it. They should be able to use everything else too, so it's not like you can just give them access to only this. But how do you segregate them out that way? How do you enforce something that says that? This has been buzzing around in the back of our heads for a few releases now, so we want to talk about how to make it happen. And this is my dog — she might not be clear there — and, since we're talking about ownership, in addition to her two bones she has one of my socks.

So what owns things in OpenStack? Domains — that's the top-level thing in Keystone. And if you're not dealing with Keystone, domains are probably just something you think about in terms of Keystone. But we've been talking about spreading this out regionally, geographically, over multiple data centers, and the abstraction for that in OpenStack is regions. Regions only own things in the service catalog; when we get down to concrete objects, a region owns endpoints. Domains own projects, domains own users, and those abstractions hang off of domains. Domains are not hierarchical, but regions are hierarchical, so we have a bit of a mismatch.

One of the things we've identified is that to be able to talk about what lives where on the project side, or to access resources at a specific — and only a specific — site or target, we need a way to at least link regions to domains. Part of it is access control: if you deactivate a domain, you can deactivate access to all the things in that domain. But much more important is the ability to write locally. Remember that original requirement we put up there — that you don't have access to the whole network at all times — plus all those issues with Galera: we still want to make it possible to do local writes. We're going to talk about some of the resources you want to write locally, and one of the big ones is application credentials — app creds. We'll get to that.
And importantly, when you're talking about local writes, the reason is the case where people deploying this kind of tiered model — your central site, an edge, or even a far-edge type deployment — want to make sure that in a network partition, when the remote nodes cannot talk back to central, you can still function. There are a lot of places where you still need to be able to do auto scaling or the other sideband tooling, spin up new nodes, spin them down, do maintenance, all that type of stuff. You just want to make sure that those writes, and access to that data, remain available.

So Adam, where do users come from? That's a good question. Kristi is going to go a little bit into the existing mechanisms, the ones we're going to rely on heavily for dealing with the edge use cases.

Cool. So this is probably the most esoteric part of the talk. The edge part was really fun, and federation is always scary and everyone asks me about it all the time, and I guess I'm only here for that part of the talk because a few people... No, to be fair, you're also here to make sure we got the talk accepted so we can say Massachusetts Open Cloud. Guess that worked? Yeah.

So whereas if you use the Keystone database to store the users, you own the user database and you can make changes to it — you have full control over it — in the case of an edge cloud or a federated cloud, you don't own the user database. However, you need a mechanism to make use of the users who come from that database and allow those users to use your cloud. This makes it your problem, and a problem which protocols like SAML help you solve. However, they can be annoying at times, and there are a lot of them: you have SAML, which is XML-based, you have OpenID Connect, Kerberos, X.509, and you need something to translate from those protocols into something that Keystone can understand. This is where Apache steps in and helps us.

The way these protocols interact with Keystone is through Apache modules. With SAML, you would use either mod_shib (Shibboleth) or mod_auth_mellon. With OpenID Connect, you would use mod_auth_openidc, and so on. With LDAP, though, you don't use mod_authnz_ldap — just to be clear, I said anything you can do in an Apache module, but LDAP is the one thing that's different; Keystone talks to LDAP directly.

And this is all based on how we pass data through in a WSGI stack. We pass through headers and we can consume them: we can pop a header off and pass the rest further down the stack, or continue passing the data through. So we're just looking at it as, hey, we have a bunch of headers, pull them out, extract the data. In the case of the Apache modules or nginx modules, we know that this is validated and trusted data, so we know that you are in fact who you say you are.

Yeah, and the link I have up there is from a co-worker who basically went through exhaustively and asked: what set of identification information might you get out of one of these federated sources beyond just remote user? Typically we talk about all-caps REMOTE_USER as the variable that's passed through that gives you your identity. But if you're doing X.509 certificates, there are many other fields in there you might want. So that link will give you a sense of the breadth of what's available, and it could be even more than this if you're using some other system. And you need a way to map those attributes that come in through Apache in the headers onto attributes of the user. And this is where mappings come in.
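To make the header-passing idea concrete, here's a minimal sketch — not Keystone's actual code — of a WSGI app sitting behind Apache and reading the identity data that an auth module has already validated and placed in the environment. The specific `OIDC_CLAIM_*` variable names are illustrative assumptions; the real names depend on which module you use and how it's configured.

```python
# Minimal sketch (not Keystone's code): a WSGI app behind Apache, reading the
# identity data an auth module (mod_shib, mod_auth_mellon, mod_auth_openidc, ...)
# has already validated and placed into the environ. The OIDC_CLAIM_* names
# below are illustrative; real names depend on the module and its config.

def application(environ, start_response):
    # REMOTE_USER is the classic variable carrying the authenticated identity.
    remote_user = environ.get("REMOTE_USER")

    # Modules typically expose additional assertion attributes as extra
    # variables, e.g. claims from an OpenID Connect identity provider.
    email = environ.get("OIDC_CLAIM_email")
    groups = environ.get("OIDC_CLAIM_groups", "")

    body = "user=%s email=%s groups=%s\n" % (remote_user, email, groups)
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body.encode("utf-8")]
```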
Super accurate map, by the way. Oh yeah. Accurate as of last year. Two years ago. Two years ago, okay.

So you need a way to map remote attributes — not from the assertion itself, but from the headers after Apache has processed it — onto attributes of the user and onto the groups the user will be a member of. This is what a mapping rule looks like. You have the local attributes — the attributes of the user that the external user will be mapped to — and the remote attributes. In this example, in the remote part at the bottom, you have first name, last name, email, and OIDC groups. Those are attributes that will be present in the assertion coming from the OpenID Connect identity provider. And those will be mapped, respectively, into the user: the username is a concatenation of first name and last name, the email address will be the email attribute, and the group membership will be a group named after the OIDC group coming from the identity provider, in the domain with that ID right there.

Now, this assumes that every time you want to use the cloud, you're going to have to go through the IdP. But there's a way to skip that step and create a local identity for the user: application credentials. Once you log in, you create an application credential, and then you use that where you would normally log in. The problem with that right now is that it only works if you have concrete role assignments. When you first log in to Keystone through federated identity, a shadow user gets created, and the role assignments that user gets will be ephemeral if they come through group mappings. So if you have a group named staff which has a role assignment on project X, then this user only has a role on that project for the lifetime of the token, and you cannot create an application credential with it. Because group membership may change over time, and if you concretely create that role assignment and never expire it, then when the group membership changes, that application credential will still be valid — which is a security issue.

So one thing we're doing to resolve that is to have application credentials get their role assignments through the mapping, but also expire after a certain period of time, so you have to log in again to refresh those credentials. And in the case where you log in to your identity provider and the conveyed role assignments — the conveyed permissions — have changed, the details of how those application credentials are handled are still open: either they automatically shut down because you're no longer granted the permissions, or we reduce the roles conveyed in the application credential, which seems like the wrong choice. We're still working on some of those details, but we want to make sure that when you come back to refresh, if the answer is, hey, you don't have access anymore, we take care of that and don't let you continue to log in. It's an immediate lockout. There will also be a sideband process that has to be developed for the case where somebody malicious — like James here — decides he wants to ruin your day and ruin your cloud. Exactly. We don't want to let them do that. We need to be able to say, lock out access immediately, and have a guaranteed action that has happened at all the sites.
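To make the mapping rule Kristi described concrete, here's a hedged sketch of one in the format Keystone's federation mapping engine accepts, written as a Python dict that would be sent as JSON to the mapping API. The remote attribute names (OIDC-first_name, etc.) and the domain ID are illustrative assumptions, not the values from the slide.

```python
# Sketch of the mapping rule described above: username is first + last name,
# email comes straight through, and group membership is the OIDC group name
# in a specific domain. Remote attribute names and the domain ID are
# illustrative stand-ins.
mapping = {
    "rules": [
        {
            "local": [
                {
                    "user": {
                        # "{0}", "{1}", "{2}" index into the remote list below.
                        "name": "{0} {1}",
                        "email": "{2}",
                    }
                },
                {
                    # A group named after the OIDC group value, in one domain.
                    "group": {
                        "name": "{3}",
                        "domain": {"id": "example-domain-id"},
                    }
                },
            ],
            "remote": [
                {"type": "OIDC-first_name"},
                {"type": "OIDC-last_name"},
                {"type": "OIDC-email"},
                {"type": "OIDC-groups"},
            ],
        }
    ]
}
```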
And we're looking at including tooling with Adjutant or other such projects, or at least hooks into your CMS, to be able to do that. Yeah, and for the mechanism to refresh app creds, there's a spec being written right now. So if you have any ideas or suggestions — one of the four opens of OpenStack is open design — please come talk to us. And I guess that's where I stop. Actually, what she just said is true of a few of the things we're going to be talking about here.

I'd like to pause for a moment and say thank you, because I know what you had to do to get here. James — we've been talking with him because they do a lot of the stuff we're talking about here at Oath — had a talk at the same time. So there's a whole room of people over there wondering where their speaker is, because he came over here and decided to sit in our talk. So thank you very much. Please go see James' talk later today. Thanks for being here. It's the Zero Trust... Zero Trust with Athenz? Yeah, Zero Trust with Athenz. And a lot of the stuff we're talking about here is really driven off of the discussions we've had about Athenz, especially, I think, the one we're going to see here.

So, as you can see, sometimes identifiers are less than stellar, like the one our man here is holding. If you look at the IDs in OpenStack — specifically user IDs and project IDs — they're UUIDs. They're designed to be globally random character strings, and they're auto-generated, which means that each time a user comes in, even if all the identification information they provide is identical, they get a unique identifier. This is fine if you have one monolithic Keystone deployment, but if you have several that you're trying to keep in sync, with no way of talking to each other other than Galera replication, you get a different identifier in each one. The federation mechanism Kristi just talked about is in heavy use in these cases, and you find out that the audit trail is broken, because I get one user ID in this little cluster over here and a different one over there. I know people who have 20 or 30 distinct clusters, and it's a real problem.

It also means that with federation, because the UUID is generated when the user first visits the system, you can't assign a role to them in advance. Either you have to write a really complicated mapping rule which identifies that the username equals ayoung@redhat.com, or you have to do the role assignment to a group so that you know what the user is going to get when they come in, or you assign it to them afterwards. It works — I don't know about fine, but it works — if you're dealing with one monolith where you have one central point of authentication. Once you start distributing it, it gets really problematic.

And it turns out that when you delete a project from Keystone, unless things are listening for that notification, trapping it, and properly cleaning up, you leave orphaned resources throughout your cloud. Right now some people are doing auto-provisioning — the first time the user comes in, they get a project and all that kind of stuff — but that requires actual modification of the Keystone code. And all these things, specifically auto-provisioning, need consistent IDs to work. Before we go on to the next slide: we are definitely going to be enhancing what we have; we have some basic auto-provisioning in Keystone.
But to really make those consistent IDs work, we're looking at drastically expanding how we do that, so that every ID is predictable — it's not something where you have to deal with a conflict. I thought you were going to solve it with blockchain. Well, blockchain solves everything. But what about...? Everything. No, no blockchain. No blockchain. But how do we know? She's a witch.

So let's talk about how we're going to generate the identifiers. We do something like this for LDAP already, where we take what is assigned by Keystone — in this case the domain ID — plus some unique name of the resource, typically the CN at this point — yeah, the CN or the DN, or something that uniquely identifies you to the identity source — and hash it. And we want to continue to use the domain ID, because we do want to allow people to potentially change names and things like that. Domain IDs are considered globally unique across all deployments; it's a UUID again. We have made a change that lets you specify them, but they are intended to be globally unique. Yeah, what she was pointing out is that when you create a domain, there's now the option — I don't know if this has merged yet — to explicitly define a domain ID. Please only use this where you're trying to synchronize a new deployment with an existing one. And these, unlike the default domain, whose ID is the string "default", will be forced to be UUID-like patterns.

But with that, when I go into a system that uses this hashing approach, I now have a consistent hash. So if I hash domain ID XYZ1234 with ayoung@redhat.com, I'm always going to get the same thing. One of the nice side effects is that if you've deleted that project, or if that project has never been created in a specific Keystone and there are resources there with that ID for whatever reason — perhaps due to a failover event — the project can be recreated or restored with the same identifier. It's not a panacea, because you need to know what those original values were; if all you know is the orphaned resources, it's kind of hard to work backwards from a hash. But it's a nice benefit. And we're going to do this, I think, pretty much across the board. We definitely know we're going to use this kind of mechanism for users and for projects, possibly even for roles — pretty much anything that is specifically identified by ID across OpenStack. Don't expect it for things that are exclusively names; that's just not where it's going to go, because names need to be a little more fluid. Yeah, exactly.

So let's talk a little bit about automated provisioning. Okay, I guess it's me. When somebody's coming into the system, you want to make sure that they're very welcome. I mean, when James logs into OpenStack, we want to make sure that he has an account and can actually do what he wants to do. At this point — and quota. Quota. We're talking about creating, at this point, the user, the project, and, if they are part of the assertion, extracting the role assignments, and possibly even quota. Pulling those attributes out of — quota. Thanks, Adam. Quota. Pulling the attributes out of your federated identity document, whether that's your OIDC data, your SAML assertion, or whatever future technology may come along. Though thankfully we usually only see a new federation technology once every, what, ten years or so? Makes it a little easier to work with.
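Here is a minimal sketch of the predictable-ID idea, modeled on what Keystone already does for LDAP-backed users: hash the owning domain ID together with a stable, unique name. The exact construction Keystone would adopt (algorithm, separator, which name attribute) is still being designed, so those details here are assumptions.

```python
# Sketch of predictable IDs: hash the owning domain ID together with a stable,
# unique name from the identity source, so every deployment derives the same
# ID independently. The algorithm, separator, and truncation are assumptions;
# the real construction is part of the spec work described in the talk.
import hashlib

def predictable_id(domain_id, unique_name):
    # Same inputs always yield the same 32-character hex identifier,
    # regardless of which Keystone computes it.
    material = "%s:%s" % (domain_id, unique_name)
    return hashlib.sha256(material.encode("utf-8")).hexdigest()[:32]

# Example: an edge Keystone and the central Keystone would both compute the
# same user ID for ayoung@redhat.com in the same domain.
print(predictable_id("d3beb28992e44f4e9ba4d05e2a6bb121", "ayoung@redhat.com"))
```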
Currently we use mappings for some of the auto-generation and auto-provisioning, with attributes extracted by the federation mapping engine. However, it's not fully expressive. Also, it's never going to work for LDAP — as I said before, the mapping piece isn't used for LDAP. And for a lot of people, LDAP is their current approach to synchronizing across distinct deployments, because it's the one thing that has a consistent-ID mechanism today.

So we actually sat down and had a discussion. People in the past have asked us to open up the APIs in Keystone to do explicit synchronization between two Keystones, and we wanted to talk through whether that was possible and what some of the drawbacks would be. This is explicitly the non-Galera case: we have two Keystone servers, whether in a hierarchy or as peers, and we want to keep some set of the data in sync between them. Perhaps it's a role assignment. And in the case of most Keystones, it's a lot more data than you would think. Yeah.

So, first of all, the mapping rules. Mapping rules don't change all that often, and they're a really important thing to have in sync, because this is how you make federation work. They're idempotent, so something like Ansible is the right solution there: you have a workflow that says make all of my Keystones have this particular mapping, and if a specific Keystone is not available at a given point in time, retry it later. That's not a hard one to do; there's a sketch of the idea below.

What's much trickier is if you want to have local writes and you want to get the data out of one Keystone to put it into other Keystones. If you're using notifications, which go onto RabbitMQ — Rabbit, I know, it's rock solid, I don't know why people keep telling me the Rabbit died; that might have nothing to do with OpenStack — if Rabbit is not available, or if something in the notification chain is not available, you lose that data. The same if the remote site is down, or if you have some listener that's supposed to stick the data into Keystone — because Keystone is just a WSGI app, and it doesn't have a process it can spin up to listen to notifications from others — you've lost that data. Keystone is fairly stateless. Yeah, and so if you decide that you do want to synchronize two Keystones and one of those conditions happens, you would need to replay everything since some last-known-good date or time. You could do it — if you could do it idempotently, you might be able to solve it. So it's possible, but it's a tricky thing to get right. That's one of the reasons why Galera is in such heavy use, and one of the reasons why this whole concept of edge is a little tricky. I see Ken Giusti sitting here, and he has spent at least the entire part of his career that I know of working in messaging and dealing with exactly these kinds of problems. This is not even specific to OpenStack; it's just, how do you synchronize data? It's a hard problem. So you can set up systems along these lines, but know that you're going to be dealing with these kinds of issues.

And so one of the things that we might do, if people push us in this direction — we've at least identified it here, but it's not currently on the design block — is the ability to query everything since a last known good point. But that means we have to put a change timestamp on every single object, and we don't have that right now. And then the final one listed there is a custom database driver.
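Before getting to the database-driver option, here's a hedged sketch of the "push the same mapping to every Keystone, retry the ones that are down later" workflow mentioned above, written in Python rather than Ansible. The endpoint list, token handling, and mapping ID are assumptions; the conflict-handling detail is defensive rather than a statement about how the API behaves in every release.

```python
# Hedged sketch: declare the same federation mapping on every Keystone,
# remembering unreachable sites so the run can be retried later. Endpoints,
# token, and mapping ID are placeholders; in practice this would likely be an
# Ansible playbook rather than a script.
import requests

KEYSTONES = [
    "https://keystone.site-a.example.com:5000",
    "https://keystone.site-b.example.com:5000",
]
ADMIN_TOKEN = "REPLACE_WITH_ADMIN_SCOPED_TOKEN"
MAPPING_ID = "edge-oidc-mapping"
MAPPING_BODY = {"mapping": {"rules": []}}  # e.g. the rules shown earlier

def push_mapping(base_url):
    url = "%s/v3/OS-FEDERATION/mappings/%s" % (base_url, MAPPING_ID)
    headers = {"X-Auth-Token": ADMIN_TOKEN}
    # Declare the desired state; if the mapping already exists, fall back to
    # updating it so repeated runs converge on the same result.
    resp = requests.put(url, json=MAPPING_BODY, headers=headers, timeout=10)
    if resp.status_code == 409:
        resp = requests.patch(url, json=MAPPING_BODY, headers=headers, timeout=10)
    resp.raise_for_status()

failed = []
for keystone in KEYSTONES:
    try:
        push_mapping(keystone)
    except requests.RequestException:
        # Site unreachable right now: record it and retry on the next run.
        failed.append(keystone)
```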
A lot of folks have talked about, well, we'll just intercept the writes to the database and send them everywhere. Well, then you're recreating Galera without the benefits of Galera. Yeah. And again, all those other issues still apply. And that's also my dog. Yeah.

So at this point, I'd like to field questions. I know there's not a lot that we're saying we're definitely going to do. What we're trying to do here is focus on the details that are essential: what we've learned from the Oath approach to scaling out, and what we've learned from talking with people and from our own experiences with our own customers and installations — what are the rough edges we need to file off first just to be able to do this stuff? Keystone, as I said: OpenStack is Keystone and a bunch of entries in the service catalog. If you can't reach that Keystone, Glance can't validate your token and you can't do anything. So we need to make Keystone available as close to the edge as possible, or those edge nodes can't work. But the closer we get it to the edge, the further it is from the center, the further it is from that central voice of authority and truth — and how do we bridge the steps in between? Are there questions at this point? No? Are there questions? Please come on up. There are a couple of microphones here, and I can also run this one up.

In the case of consistent IDs, what you're essentially also saying is that when you rename a project, its ID changes, which can be a fundamental problem, because a lot of people will change their project's name and assume the ID stays the same and the name change will not affect anything — which is the case currently, and people do use this.

So you're saying that when the name changes, you're seeing an ID change? Well, no. What you're proposing is consistent IDs, which would in turn be based on a hash of the name in some variant. Consistent IDs are not always name-based. With projects, we do not do consistent IDs at the moment. No, but that's one of the things being proposed. No, I think it's a good point. Basically what you're saying is we're not going to allow people to change the name of the project, and I think that's fair. That's going to be the trade-off we have to make: if you want the IDs to be consistent across the cluster, then you're going to have to somehow record what that original name was and use it when you create the additional project. And I don't have a good answer for you there, other than saying that if we're going to use this, and use it consistently, then we're going to need, perhaps, a config flag or something that says the names are immutable. It has to be a unique identifier, because you're right: if I go to another Keystone and create the project with a new name, it's going to get a new ID. There are also many cases where unique identifiers come along with a federated assertion and do not change even when the name does, and we may need to lean on those to keep that consistency even if the name changes. Really, the goal is to have a predictable ID; however we end up doing that, there are details that need to be worked out in all the different variations in the matrix. But we definitely want to make sure of that. And one of the options would be immutable names — which does, however, break the contract of Keystone's API, and it's a little shaky whether we can do it.
Mostly because I know people in our cloud who authenticate with IDs rather than names, because that way, if they rename the project, or choose to, it doesn't break anything. So this would be a breaking change, and therefore hopefully would have to come with an API difference.

Yeah — James, I'm going to ask you: how does Oath handle project name changes? Because I know you're doing the hash of the ID there. Do you just say that project names are immutable? Yeah — thank you — the trade-off is that project names are intrinsically immutable in this case. That's just the trade-off. If you want to be able to ensure that you have the same ID everywhere, globally, without having to sync — because syncing is the devil — the trade-off is you don't get to change the names. And if you change names, you change projects.

I'd like to think through how we would address this with hierarchical multi-tenancy, which is a big mouthful, I know. One of the things we need to push towards is making better use of that, because as we start doing these larger deployments and these larger projects, we're going to need more complex hierarchies. We have the mechanism in place: creating a new project as a sub-project, where the roles are assigned down the tree, so you at least have a commonality of assignment between the two of them. Granted, your resources will not move from one to the other, so that would be problematic — if you have a lot of stuff there and say, oh, I need to change my thing from "dev" to "app-name-dev", you're not going to be able to do that and have all the other resources stay where they are. But hopefully people will be better able to use the structure of the projects themselves to get that kind of namespacing, and the rationales for why people want to rename projects can be better handled at the structure level. It might be worthwhile, when we talk to operators, to figure out when people actually need to change project names and why, because there may be alternative ways of handling that, such as disabling a project and making that part of the uniqueness constraint somehow.

There is also a talk this afternoon about being able to move resources from one project to another, which also addresses the problem. So that is a possibility as well: if we do make project names immutable, allowing resource ownership to change between projects solves the problem to some extent — as long as you're not trying to change ownership by, say, moving projects within a tree, because that's a terrible security problem. No, I still want that — moving resources between projects. We will talk about that extensively later, and the answer right now is: I'm skeptical. Yeah. The way I was thinking about generating the project IDs would actually include the parent project ID, because that's part of the uniqueness we want in there. So that would kind of preclude moving projects out from under their parent. Are there any other questions? Good. All right.

Given the known scaling limitations of Galera clusters, right — slow down, I actually have not understood a word you've said thus far, so please — well, you outlined the scaling and operational limitations of a Galera cluster: it's limited to around twelve nodes and certain replica counts and other things. I mean, you cannot scale a Galera cluster without limit to cover these cases, right? And you said as much on the second slide, I guess.
And the previous slide states that in the non-Galera cases there are a bunch of tricky corner cases to handle, like replaying and handling partitioned sites and so on. So what do you think — do you have a vision for other tooling or backends to be supported for Keystone? You want to answer that one? Yeah. Maybe to complement — I think I should probably repeat the question before we go on, because I had trouble understanding it. Okay. It's really echoing in here. I don't know, has it been this hard to understand me the whole time? I can try to rephrase. Okay. What about supporting backends other than Galera? So the question is: we've had a lot of discussion about Galera and synchronizing between sites, and he's asking about supporting other synchronization mechanisms. What do you have in mind for those? What would you like to see? What is the vision for Keystone — to be the first project in OpenStack, probably, to introduce another backend? Maybe not transaction-based, I don't know — using causal consistency models, you know?

So I can speak a little bit to synchronization projects. A lot of it is difficult, no matter how you cut it. A lot of different technologies have tried to solve this problem, and we have complex data structures like we have in Keystone. Part of this is the architecture of Keystone: based on database schemas and foreign keys, it's just fundamentally difficult to capture that data and synchronize it. Again, you can do something like capture the CADF notifications, or capture the writes underneath, between Keystone and the database at the driver layer, and send that data around — but then you're effectively re-implementing Galera. And the question becomes whether you're trying to achieve the same level of, say, ACID compliance or not. You have a lot of these pieces. Yes, you could create a project that does this. Ideally we look at synchronization and say it's not about one node or ten nodes or even a hundred nodes — I can probably make Galera or any synchronization technology work up to a hundred; it'll look ugly, but it'll work, and I have ideas on how to do it. What about a thousand nodes? What about ten thousand? At ten thousand, synchronization looks really, really ugly no matter how you cut it. It's a lot better to say, hey, these are isolated and independent, and they still work if we have to go through and do an emergency break-glass because James is a bad person — not that he really is, but sometimes, maybe, we want to.

My colleague wants to add to the question. So before we move on to that, I'd like to address what I think partly drove it. If you look at the data you need to synchronize, it's not necessarily everything. Remember that whole discussion about what should or should not be shared, and about linking domains to regions? What I think we're going to see is that with federation and predictable IDs, we don't need to synchronize user data; that comes from the top down. It still has to be synchronized, but it's done by the user at that point. The projects — if they're created centrally, then those should flow down the tree, and that's the kind of thing where you can do a parent-to-child relationship all the way down. Local writes maybe don't necessarily need to be synchronized everywhere.
You might say these two edge nodes are peers, and so they're both going to be able to do a local write for, say, a specific domain. And only that data is going to be shared, and just between those two sites. So you can do things like that.

We have time for one more question, with that one minute. Do you know that Galera doesn't provide strong consistency? There may be side effects when it detects that it's out of consistency and needs to resynchronize data. And the question is, did you think about alternatives like etcd or CockroachDB? Those databases provide strong consistency. etcd is really cool technology — I like it, it has a lot of benefits — but when you're talking about ten thousand sites, it also has problems. CockroachDB is fantastic, but it has partition issues: if you have a partition, you can't access the data. That's the second question: did you think about implementing Raft or Paxos in Keystone itself, just to synchronize some of the data rather than all of it?

Unfortunately — and this is to directly contradict Adam — today you really do have to synchronize almost all of the data in Keystone: the user data, the project data, the domain data, the role data. All of this has to be synchronized. The stuff you don't have to synchronize might be the service catalog. So right now the architecture dictates that it's partial, but it's a huge amount of the data that Keystone manages that has to be synchronized. So we could do that, and I've looked at it. Part of the concern, again, is that adding Raft, Paxos, that type of thing to Keystone — we're not going to stop somebody from saying, hey, this is a great tool to use, and we've tried to make Keystone as pluggable as possible, so you could implement it. But from the standpoint of the general case, adding extra processes like that, versus hey, just take my identity assertion, go to Keystone, and it works — the latter is far simpler and much more in line with how your typical web SSO flows work. I just log in and it does what it needs to do. Sometimes you have to create an account and then link it, but it's really more how web technology works. We try to fall back on as much standard web technology as we can without getting into too many background syncs and background tasks. We want Keystone to be effectively stateless. That's super important, because it means we can do things like: the database is in sync, we're good, we're happy, everything works; you want to upgrade, cool, do a database migration, and you don't have to worry about the problems that come with asking, did we get this just right, are we waiting for a sync, what if we've changed the database schema and have pending changes coming from outside? It's about keeping it down to the simplest, easiest thing to manage, for both our benefit and for the deployers who have many, many, many of these and would otherwise be troubleshooting, hey, my Raft state didn't synchronize to this one site — why? You may have seen this on the web.

Thank you. All right, thanks everybody. We'll be around, so feel free to come and ask us questions if you see us, or send emails and whatnot — and definitely go to James' talk later today.