Hi everybody, this is a talk on our experience integrating Active Directory with OpenStack. I'm glad you were all able to get past the rain and the smoking escalator to get here. Here's a quick overview of what we're going to talk about today. First, to give you some context, I'll tell you what we're doing with OpenStack at GoDaddy. Then I'll jump right into Keystone integration with Active Directory, and then the things we did to integrate Nova with it. I'll talk about our DNS integration a little bit because it has some implications for how we integrate with Active Directory. Then Mike will take over to talk about deployment with Puppet, domain controller proxying, and PBIS integration. Those last two will make more sense as the talk goes on.

Okay, so first off: we started with OpenStack at GoDaddy early to mid last year, and we launched a pilot in February. We released this internal pilot to all our technical and corporate employees. To date, we have about 200 users who have created a VM, and right around 300 active VMs. Those are all Linux VMs; we're going to be rolling out Windows VMs pretty soon. We also have a production, customer-facing pilot in the works.

This is what our OpenStack looks like. We're running the latest release of Havana, which is 2013.2.3. We use Anvil to build and patch OpenStack from source, and we use the OpenStack Puppet modules on Stackforge for configuration management. We use Neutron for networking, with the ML2 driver and the OVS agent on the compute nodes. Our operating system on the hosts and in the guests is CentOS, and we use KVM as the hypervisor. In true minimum viable product fashion, we didn't release any object or block storage with this dev pilot; that was just to limit its scope.

All right, so we can jump right into AD integration. To give you a little more background, GoDaddy has a large preexisting Active Directory infrastructure, like I think a lot of companies do. As with any piece of infrastructure that's grown over a number of years, there were some pain points, but for the most part it just worked. It's the source of truth for authenticating users, and it's also the source of truth for computer objects, so basically all servers have to register into AD. And that last note is just to state explicitly that the typical usage pattern at GoDaddy is to have both real users and service account users in AD.

I put an asterisk next to "read only." That means we could only use AD to authenticate users; we couldn't write projects to it or anything. I put the asterisk there because when we were working with the AD admins, they actually were willing to work with us, and we had a solution going where we were going to have a separate OU that we had write access to in order to create projects and roles. But as we discovered later, we didn't really need to do that.

This first bullet point here is how we discovered we didn't need to do that. In Havana, the identity and assignment back ends were broken out into two separate pieces. We started off with Grizzly, and then Havana was released and we made the decision to jump to it, and we found that the identity and assignment back ends could be separated. So we could authenticate users and find out what groups they're in using AD, but write all our projects, roles, and associations between those things to the database. For the dev pilot, we had a one-to-one mapping between users and projects.
That limited our scope and made it really easy to automate: basically, find the users in AD and create projects for them. We could do that all scripted, as a batch-style process. The next step for the dev pilot is to create group-based projects, so actual product teams can have shared resources, they all have access to the UI and API, and they can generally see what each other is doing. That will also entail tying into a lot of other systems, like budget codes and more group-based permissions inside of AD. The last note: as we piloted with one or two groups inside the company, we found that a typical need is to have service account users associated with your project, so that you can script against the API, using Puppet or whatever, to spin up VMs. We've done that on a one- or two-off basis; I think when we fully roll out the group-based project approach, we'll make that more self-service.

Okay, so largely, integrating the Keystone identity back end with Active Directory went pretty smoothly. There were really just a couple of bugs that slowed us down. I list them explicitly here so that if you ran into the same problems over the last six months, you have a reference point. The first one was really the only showstopper: we needed to search from the root DN of our Active Directory tree, and that doesn't work very well with how Keystone was written. The fix for that is upstreamed, and I think the backport to Havana is in process now, so hopefully that should no longer be an issue, which is cool.

The second one is an interesting one. Basically, the way we had to configure Keystone was with sAMAccountName as both the user ID and the user name. The common name is "Craig Jellick" (with a space), and the sAMAccountName is "cjellick". When you do a user get, regardless of how you have the user ID attribute configured, you'll get the CN back. So where I expected to see "cjellick", I saw "Craig Jellick". There's a fix submitted for that one, but it's not fully fleshed out, so I think we need to revisit it to actually get it accepted.

And then the last note: Horizon performance issues. We found that Horizon really fell apart on the project management piece if you had a lot of users. Our user base was all the users in a particular OU in Active Directory, and that was a lot of them. The way Horizon worked, when you went to assign users to a project it would get all of the users as a list, and then do a single get for every one of them before it even loaded the "assign a user to a project" screen. That just didn't scale with our number of users, and frankly we circumvented it by not using that portion of the UI. All of our user assignment was scripted and automated anyway, and if we needed to do anything one-off, we would just do it through the CLI. For scale, at that point we were looking at a couple of thousand users.

So here's a link to our configuration. I'm not going to jump to that link, but it's there for your reference, and I've put the interesting points right here. The first four lines are how you separate the identity and assignment back ends. The LDAP connection information is not terribly interesting; localhost is not a sanitization, that's actually how we had it configured, and Mike will talk more about that later.
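To make that concrete before walking through the individual options, here is a rough sketch of what that style of keystone.conf looks like. The values are illustrative, not our exact settings:

```ini
[identity]
driver = keystone.identity.backends.ldap.Identity

[assignment]
driver = keystone.assignment.backends.sql.Assignment

[ldap]
# Keystone points at localhost; HAProxy listens there (more on that later)
url = ldap://localhost
user = CN=svc-openstack,OU=Service Accounts,DC=example,DC=com
password = <bind password>
suffix = DC=example,DC=com
query_scope = sub
user_tree_dn = DC=example,DC=com
user_objectclass = person
user_filter = (!(objectClass=computer))
user_id_attribute = sAMAccountName
user_name_attribute = sAMAccountName
group_tree_dn = DC=example,DC=com
group_objectclass = group
group_id_attribute = cn
group_name_attribute = cn
```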
query_scope = sub is important because that's how you can search arbitrarily deep in your tree. The user filter is interesting: we needed it because, since we're searching from the root of our tree, we would also catch computer objects, so we had to explicitly add a filter to exclude them. And like I said earlier, we had both the user name attribute and the user ID attribute configured to be sAMAccountName. There's nothing terribly interesting in the group section, other than that the ID attribute in that case actually is the CN, because that's the part that makes sense as an ID, and the name is what people see.

Okay, moving on to Nova integration with Active Directory. As I said earlier, all servers at GoDaddy have to register into AD because other systems feed off that information; Mike will talk a little more about that later. Because of that, we have some additional constraints we have to impose on server names. Specifically, they need to be globally unique; by that I mean unique across the entire Active Directory tree, because the server name is used as the CN and the sAMAccountName, and those need to be unique. They need to match a regex, which is how things like special characters get enforced; they just need to match a very well-defined regex. And they need to adhere to the name length restriction we have in our Active Directory installation. To do all those things, we patched Nova locally. I've linked to that patch here for your reference; parts of it are specific to us and it doesn't really apply to Nova as a whole, which is why we never tried to upstream it.

The last note I put on there just because it was really hard for us to find this configuration setting: osapi_compute_unique_server_name_scope = global. I think the other option is project, or tenant. That makes the server name unique within your OpenStack installation. That's important because of replication lag between the different domain controllers. If someone spun up a VM named foo, and immediately afterward someone else spun up a VM named foo, then because the uniqueness check could go to different domain controllers, we couldn't really guarantee we would avoid replication lag. By turning this on, the name is unique within OpenStack, which ensures that, at least in our use case, you can't hit that immediate race condition.

And this is just a quick snippet of the patch we added to do all these extra checks. The validate server name function was already there; we just had to add a few extra checks to it. We made the max length configurable because it was hard-coded to 255, we have the regex check there, and _check_server_name_uniqueness is what calls out to Active Directory to see if the name already exists.
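As a rough illustration of what those extra checks look like, here is a hedged sketch rather than the actual patch; the regex, the length limit, and the helper names are hypothetical:

```python
import re

from nova import exception

# Illustrative values; in our patch the max length comes from a config option
# instead of Nova's hard-coded 255.
SERVER_NAME_RE = re.compile(r'^[a-z][a-z0-9-]*$')
MAX_SERVER_NAME_LENGTH = 15


def _validate_server_name(self, value):
    # ... upstream Nova's existing empty/whitespace checks stay here ...
    if len(value) > MAX_SERVER_NAME_LENGTH:
        raise exception.InvalidInput(reason='Server name exceeds the AD length limit')
    if not SERVER_NAME_RE.match(value):
        raise exception.InvalidInput(reason='Server name does not match the naming regex')
    # LDAP query against AD for an existing computer object with this name
    if not self._check_server_name_uniqueness(value):
        raise exception.InvalidInput(reason='Server name already exists in Active Directory')
```

The osapi_compute_unique_server_name_scope = global setting mentioned above goes in nova.conf (in the DEFAULT section in Havana).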
Okay, and DNS integration with Nova. Our internal DNS at GoDaddy is powered by Active Directory. Those details are for the most part hidden behind a REST API that another team wrote to encapsulate the internal DNS. Windows VMs, which we'll be rolling out shortly and have already been testing, auto-register into that DNS system, so we don't have to worry about adding them to DNS. Linux VMs, though, don't do that automatic registration into Active Directory DNS.

To get around that, we hook into the Nova notifications topic. We have a little application that sits next to OpenStack, listens on that topic with its own dedicated queue, and when it receives a compute.instance.create.end event, it gets the relevant information, the fixed IP and the fully qualified domain name, and submits entries to that REST API. For both Windows and Linux VMs, when they're destroyed we also listen on that topic and do cleanup: we know the name of the server that was destroyed, and we just call the REST API to delete those entries.
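A minimal sketch of that kind of listener, written against today's oslo.messaging notification API rather than exactly what we run, might look like this; the DNS endpoint URL and the payload handling are assumptions for illustration:

```python
from oslo_config import cfg
import oslo_messaging
import requests

# Hypothetical internal DNS REST API; ours is an internal service in front of AD DNS.
DNS_API = 'https://dns.example.internal/api/records'


class DnsEndpoint(object):
    """Creates and removes DNS records as instances come and go."""

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        if event_type == 'compute.instance.create.end':
            # Payload fields vary a bit by release; the hostname and fixed IP
            # are what get submitted to the DNS API.
            fqdn = payload['hostname']
            address = payload['fixed_ips'][0]['address']
            requests.post(DNS_API, json={'fqdn': fqdn, 'ip': address})
        elif event_type == 'compute.instance.delete.end':
            requests.delete('%s/%s' % (DNS_API, payload['hostname']))


transport = oslo_messaging.get_notification_transport(cfg.CONF)
targets = [oslo_messaging.Target(topic='notifications')]
listener = oslo_messaging.get_notification_listener(
    transport, targets, [DnsEndpoint()], executor='threading')
listener.start()
listener.wait()
```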
All right, with that, I'll hand it off to Mike to talk about some other bits of it.

Good morning, my name is Mike Dorman, as Craig introduced me earlier. As he mentioned on one of the earlier slides, we use a lot of the Stackforge Puppet modules to do the deployment and configuration of all the OpenStack services. For the most part, those work pretty well for us. There were a few modifications we had to do early on, mainly around supporting the SSL options for each of the different services. We had a requirement to do SSL on as much as we could in OpenStack, so that's where that came from. For the most part, all of that has since been implemented in those modules. We were pretty bad at first about getting that stuff submitted back, but separately the community has implemented it, so on almost every one of those modules you've got the SSL options now.

The piece we did need to take a look at and modify was the prefetching that Puppet does as part of its resource providers. What that does is attempt to load a cache of all the different resources up front, so that every time you operate on another resource within Puppet, it doesn't have to go fetch that information on the fly. That works pretty well for a lot of resource types, but for Keystone users and Keystone tenants, not so well, especially in an environment where we've got lots and lots of users. What it boiled down to on the back end is that every user in the directory translated into two Keystone CLI commands, because that's the way the custom providers in the Puppet modules implement it today; they don't go directly to the API or anything like that yet. So it was very, very slow. For our directory of a couple thousand users, a single Puppet run would take something like 45 minutes, because it would just sit there spinning, running Keystone command after Keystone command. You can imagine how much worse it would be if you had 30,000 users in your directory.

The solution is to implement lazy loading in that back-end provider for users and tenants. We had implemented this internally in a really ugly way: we took advantage of the fact that, from our perspective, AD was read only. Even though we were technically managing users in Puppet, we couldn't actually write to the directory and create new users, so we were able to strip out a lot of the code that went out and prefetched the parameters for the users. Separately, though, Dan Bode, in the commit shown here, implemented a proper lazy-loading approach in the providers, and that's now out there in the main OpenStack Keystone module.

Obviously that scales much better to directories where you have thousands of users. And actually, the other day in the Puppet Labs session upstairs there was a lot of talk about switching these providers over to hitting the Keystone API directly rather than going through the CLI tool, so there should be some improvements there as well. I've provided the short link for that commit if anyone's interested in more of the details.

The next challenge we had to think about was how you tell Keystone which domain controller to go to to look up the information. Across GoDaddy we've got several domain controllers spread across all the different company sites and data centers around the world. AD has some tricks it does for site awareness: if you're a Windows box on the domain and you need to talk to a domain controller, there are SRV records and some other mechanisms that let you figure out where you are in the network and hit a domain controller that's relatively local, network-wise. But LDAP out of the box, and the way Keystone gets configured with LDAP, doesn't really have that notion. You just give it an LDAP URL and that's where it goes. So it's a little difficult to figure out which is the best one to go to.

That leaves you with a couple of different options, at a very basic level. The first is that you just pick a domain controller you know is going to be local to you, one in your local data center, configure Keystone to go directly to it, and hope and assume that it never goes down and is always there to take your queries. The drawbacks there are fairly obvious. The second way is to let DNS do it for you: in your LDAP URL you just use the root domain name of the domain, which gives you back the A records of all the different domain controllers, and you pick one pseudo-randomly. A couple of things to note on that. The first problem still applies: just because you've got DNS round-robin or whatever mechanism DNS provides, you can still be sent to a domain controller that's down or not responding. So while it does load balancing, so to speak, it doesn't do anything to keep you from trying to connect to a domain controller that's actually down. The second problem is that you get the A records for all the domain controllers across the domain, which could send you to one that's across an ocean someplace, which for obvious reasons you don't really want, and in this DNS scheme there's no way to pare down which ones you get sent to.

So we took a different approach to solve this. We run a local instance of HAProxy on the machines that run Keystone, and that HAProxy is configured to load balance LDAP across all the domain controllers that are local to that data center. We pre-configure HAProxy with only the local domain controllers, so we stay local and solve problem number two. And HAProxy does the port checks and has some of that intelligence to detect which ones are down, so it will only send Keystone to one that's actually responding to requests. So we solve the first problem through that.
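The HAProxy side of that is only a few lines. This is a hedged sketch with made-up addresses; Keystone's ldap url then just points at ldap://localhost:

```
listen ldap-local-dcs
    bind 127.0.0.1:389
    mode tcp
    balance roundrobin
    option tcpka
    # only the domain controllers in this data center
    server dc01 10.0.1.10:389 check
    server dc02 10.0.1.11:389 check
    server dc03 10.0.1.12:389 check
```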
It looks a little bit ugly, depending on how you look at it, but it actually does a really good job of solving the problem. And since we run HAProxy locally on the boxes that run Keystone, there's technically a network hop there, but it stays within the kernel, so you can be fairly confident that that communication path will always work. We're not adding another real network hop on the way to AD.

Okay, then I just wanted to quickly talk about some of the things we do inside the VMs with regard to our AD integration. Across the company, we use a tool called PowerBroker (PBIS; I'm not sure what the "IS" stands for, but that's what we call it). Basically it allows you to hook into AD from the Linux side and do authentication, group-based access control, and sudoers based on AD groups. What that allows us to do is let everybody log into all the Linux boxes using their AD credentials instead of some other password management scheme. On the root password side of things, we use a tool called CyberArk that manages and keeps track of all the root passwords on all the machines, and that's tied into AD as well. The security folks are really the only people who have access to that information, plus sometimes the data center folks who need to console into machines, that kind of thing. For the most part nobody else knows the root passwords, but obviously we need a way to track them, and we use this for that. The fact that it's tied into AD and everything is centralized there lets us do root password changes across the environment very quickly. So if there's some kind of compromise, or just the regular password rotation we want to do, it's very easy for the security group to go and do that across all the machines.

There are a bunch of other reasons why this is really good, too. The obvious ones: you only have to remember one password, you don't have to think about which version of your password hash is out on this machine versus that one, and there's no hash management in the shadow file that we have to deal with. Employee onboarding and offboarding become much simpler, because that gets done in AD anyway and then just applies to all the Linux machines as well. It makes everything a lot more user-friendly, so to speak, for the people who actually have to log into these things.

Some other stuff we do specific to the VMs: when you create a VM, either through the UI we have for our platform, which is this screenshot, or through the API, you can provide a list of groups that should have access to log into that VM. On the lower right here you can see the login groups option; it gives you a list of all the AD groups you're in, and you choose the ones that should be able to log into the VM you're creating. That gets put into the metadata attached to the VM. This is just an example, and the green is maybe a little hard to read, but we also pass down this created_by field, which is the username of the person who actually created the VM through the UI or made the API call. That's the user we tell cloud-init to give the SSH key that's provided with the VM. So that ensures that the person who actually created the VM is the one who has the SSH key on their account, and we don't have generic fedora or cloud default accounts on all these VMs with some SSH key attached that we don't really know who it goes to. That gives us an additional level of security there as well and keeps our keys a little more locked down.
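As a rough illustration of what ends up in that metadata, booting through the CLI instead of the UI would look something like this; the metadata key names here are illustrative, not necessarily the exact ones we use:

```
nova boot --image centos-6 --flavor m1.small \
    --key-name cjellick-key \
    --meta created_by=cjellick \
    --meta login_groups=cloud-team,web-dev \
    --meta sudo_groups=cloud-team \
    my-new-vm
```

The same keys can be changed later with "nova meta <server> set ...", which is what makes the periodic re-sync described in a minute useful.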
And then you can see the login groups field there, as well as the sudo groups field, and the analogous fields for specific users; that's how we pass in the information about which groups and users should have access to the VM. When a VM comes up, we run a Puppet process to do some of this provisioning work the first time the VM is booted. Part of that process is taking that information out of the metadata and populating it into the config files as necessary. The login-groups file under /etc is the file that the PBIS tool uses as its reference for which groups and users are allowed to log into that VM, so you can see how the login groups are now populated with that list of users and groups. And then we do the same thing in sudoers to specify who can actually sudo to root on the machine.

In addition to that initial provisioning of the VM with Puppet, we actually run this process every 10 minutes through cron as well. What that allows us to do is, if you need to change the list of users and groups that have access to a particular VM for whatever reason, you can go back through the OpenStack API and provide new metadata with the new list of users and groups who should have access to the VM. Within 10 minutes the cron job runs, updates the config in the VM, and the new access policy is in place. That makes it really nice to be able to control all the authentication and authorization stuff external to the VM itself. It's also nice because now you have a way to audit this through the OpenStack API, by looking at the metadata on all the VMs. And because we run this continuous process to make sure what's in the metadata matches how the VM is configured, we have some trust that what the API metadata shows us is actually how the VM is configured.
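Here is a hypothetical sketch of what that cron-driven sync could look like. The file paths and metadata key names are assumptions for illustration, not our exact implementation:

```python
#!/usr/bin/env python
# Runs from cron every 10 minutes: pull the instance metadata and rewrite
# the PBIS login-groups file and a sudoers drop-in to match it.
import json
import urllib2

METADATA_URL = 'http://169.254.169.254/openstack/latest/meta_data.json'


def split_list(value):
    return [item for item in value.split(',') if item]


def main():
    meta = json.load(urllib2.urlopen(METADATA_URL)).get('meta', {})
    login_groups = split_list(meta.get('login_groups', ''))
    sudo_groups = split_list(meta.get('sudo_groups', ''))

    # File PBIS consults for who is allowed to log in (illustrative path)
    with open('/etc/login-groups', 'w') as f:
        f.write('\n'.join(login_groups) + '\n')

    # Sudoers drop-in granting root to the listed AD groups
    with open('/etc/sudoers.d/ad-groups', 'w') as f:
        for group in sudo_groups:
            f.write('%%%s ALL=(ALL) ALL\n' % group)


if __name__ == '__main__':
    main()
```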
Let's go to the next slide. Okay, all that said, this is obviously all kind of enterprisey stuff, and you wouldn't necessarily want to do anything like it in a public cloud or a customer-facing environment where your customers aren't all in some kind of AD domain. But it makes our pilot environment a lot easier for people to actually use, because the other developers and engineers in the company are used to being able to log in with their AD credentials and control things based on AD groups and that kind of thing. Making everybody shift away from that, where you need to log in as the fedora user using your SSH key because that's the default in the cloud image, is a bit of a paradigm change for people. So a lot of this integration is really about making people comfortable with it and keeping it as user-friendly and frictionless as possible. But the implication is that, now that we've integrated all these Linux VMs with AD, those Linux machines are actually joined to the domain, so they have an object in AD once they're configured to authenticate against the domain.

This is where the name uniqueness requirement that Craig talked about earlier comes from: we can't have any duplicate names, so we need to make sure all the VMs are named uniquely from a global perspective. And this is where the external cleanup hooks come into play as well. We watch that notification queue, and any time a VM is destroyed we go back into AD and remove that object, so we're not polluting AD with stale objects. You can imagine how quickly AD could get completely out of control if we didn't do that. Previously, when we mostly had physical servers in the directory, a physical Linux box would come up, live for a while, and then there would be a deprovisioning process to get it cleaned up. VMs can be created and destroyed much more quickly, and there's a lot more churn, so that's why we built this automated system to do the cleanup and keep things nice and tidy.

And that's it. I just wanted to do a plug as well for our team: we're actively working on growing it, and there are several of us here at the summit, so if this sounds interesting to anybody, come talk to us about working there; we'd be glad to talk to you. Just my personal perspective on this: it's been pretty cool to get OpenStack into our environment and move toward this cloudy way of doing things, and it really is an opportunity to actively help other people in the company do their jobs better on a daily basis. It's been pretty neat to be involved in. So again, if anybody's looking, and I know everybody's trying to poach this week, we'd definitely be interested in talking with you.

So, any questions? We'd be glad to take them; please go to the mic so that we can hear you.

Can you touch a little bit on some of the scripts you're using with Puppet? Do you have any of that out on public GitHub, or is any of it open source at this point? It sounds like you've done a lot of the work and it would be really nice to reuse it.

Yeah, there are a couple of pull requests, or whatever you call them in the Gerrit review world, around a couple of the tweaks we've had to do. As far as the specific manifests of how exactly we configure the Keystone module, for example, no, that's not out there today. For the most part it's just the stock modules and we fill in the parameters. But we can talk afterwards if you've got other specific questions; I'd be happy to show it to you. None of it's out there right now.

With your HAProxy in front of LDAP, did you look into using weights or anything to deal with the geo-dispersity of your AD, or was that too difficult to automate in the end?

No, we didn't do any of that. There's a little more history to it than comes out in the talk: originally we just did option number one, where we hit a local domain controller, and then the Windows team would need to take that guy down for patching or whatever. That's how we got to, well, we just need to spread across all the local ones, and for the most part there's enough diversity within each data center that we should be okay. We haven't investigated weights, but I would probably shy away from saying, well, if all of these guys in the local data center are down then as a last resort go to Europe.
We'd have bigger problems if all of our domain controllers were down. It's not a bad idea, but no, we didn't look at any of that. And just to clarify, we only have HAProxy configured with the domain controllers local to the data center we're in, so our local HAProxy instances just don't even go out to the other ones. Thank you.

Actually, I'm going to follow up on that same HAProxy thing. I'm from the Keystone team. One of the requests we see quite a lot is to support some kind of primary and secondary LDAP server.

We can't quite hear you; can you get a little closer?

I'm from the Keystone team. One of the requests we see quite often is: can I support a primary and a secondary LDAP server? How would you compare that, if we had it, versus HAProxy? Which would be your preference?

I mean, strictly from the effort to get it configured, if Keystone itself could do some kind of failover, I think that would be preferred; it would save us from having some of these other pieces. But I like our solution. It's also worth noting that our little patch to Nova also has to connect to Active Directory, and the code that listens on the queue and does the cleanup also has to connect to Active Directory, and I think the solution we have right now scales very well for all of those scenarios.

I was going to caveat that with: it depends on how reliable that failover is and what the mechanism within Keystone would be. If the only intelligence is just "can I connect or not over TCP," then that's essentially the same as what HAProxy is doing for us. But if there could be some more intelligence there, like if I query the first one and it gives me back some kind of error then try the second one, that could be kind of interesting.

Oh, one more note on it: with the HAProxy solution we do open ourselves back up to a little bit of the replication lag issue. If the first request goes to server A and the second request goes to server B, there's potential for replication lag. If you're doing just the primary and secondary, and you're always guaranteeing that the first request goes to server A, you avoid that issue. But for the purposes of Keystone specifically, we aren't writing anything to AD; that only matters for the creating and deleting of server objects.

Yeah. I had a question about the service accounts in Active Directory. Are the teams or groups allowed to create and manage those themselves, or does that process have to go through you guys?

That's external to OpenStack; that's just the way we do service accounts at GoDaddy. Oh, and I just learned this week that through federated Keystone you can have service accounts in the database and still authenticate real users against Active Directory; is that right?

That's what I wrote my driver to do, so if it's going to be in Keystone, that'll be awesome.

Awesome, cool.
Yeah, I appreciate the hard work y'all did on Havana, and I appreciate also having a bunch of the Keystone guys in the audience here. I'd love to hear from some of the Keystone folks what will change in Icehouse and going forward in a federated model that will hopefully simplify some of the work that has to get done for AD integration, so that you don't have to have a PhD in AD to be able to do this kind of AD federation for customers who are looking to do that kind of thing. Can anybody comment on that?

Our job just got a lot easier up here. You can stop.

Yeah, federation got a really good start in Icehouse; I think we need to close the loop on a few things to make it solve this particular problem. One of the killer features Henry was working on, I think as far back as Havana, was the ability to support multiple LDAP servers, and that's a subset of a more general problem: the ability to support multiple sources of identity. And understand, we're solving it not just for the enterprise case. We basically come from the world of "trust no one," so I don't want to be in the case, to use the classic example, where Coke and Pepsi are in the same data center and Coke steals Pepsi's users and is able to create users under their domain and things like that. So we need a way to make sure that user IDs remain globally unique when they're coming out of remote sources, when you can't write that UUID back like we do in the SQL back end. Resolving how to do that is one of the major things we have to do today. So while you'll see that there's code to that effect in there, it's experimental; experimental in that you have to guarantee that user IDs are unique across the whole thing. In theory you could probably get away with setting up an OpenLDAP or something for that other domain over there; I just don't know that it's been all that well tested. That's kind of there now.

There are other things you can do. What I'd like to do is get rid of the service users altogether, so that when you register an endpoint, that endpoint is able to do work on its own behalf and things like that. That will work for a lot of things, but obviously it won't work for something like Heat, where you actually need a real user that can do real things along those lines. With federation, what we're seeing is a push toward taking all the nastiness of dealing with LDAP directly out of the hands of Keystone, so Keystone just kind of inherits an AD setup, but somebody is still going to have to go through and do that. I don't know if I can reduce the number of PhDs in AD that you need to get real work done. Blame Peter Pouliot; Peter works for Microsoft and he's the driving force behind Hyper-V being back in there, so we have a very good-natured ribbing going on. But Active Directory is suitably complex because of the breadth of problems it has to solve.
I'll just pick up on that, in terms of the stuff that we will definitely get into Juno, mainly because it was almost ready for Icehouse, which is what was hinted at there. If you have multiple domains, you'll be able to choose: this domain's got a SQL back end, this domain's got an LDAP over there, this domain's got an AD over there. So you can split your configuration up that way. If you have customers who want to use their own corporate LDAP, for instance, you can point their corporate LDAP at just their domain of users. That's one of the key things we're going to get in for Juno. There is an experimental version in Icehouse; don't use it, because experimental unfortunately means broken in this case, but we will fix that for Juno. The concept is there in Icehouse, and we'll fix it for Juno; it may well even be in for juno-1 if we can get it in, so it would be pretty early in the cycle.

Cool. I just want to say thanks for all your hard work; for the most part, Keystone's been pretty slick.

Some great stuff this release. I really apologize for the whole DN approach.

No worries. I mean, yes.

Yeah, it's been a learning process. The goal was to try to make the user lookup as fast as possible and to use the DNs for that, to try to do LDAP things LDAP-wise, and obviously it doesn't quite map to how people then look at the objects.

Yep. Question?

Howdy, thanks for the talk. I was just curious what else you evaluated besides the PowerBroker services, whether you looked at SSSD, and whatever else you evaluated for authenticating within the VM.

So specifically within the VM, we didn't look at anything else; we just use what the rest of the company has been using for a while. I know in the past there was some work around the LDAP back end to PAM; that was kind of before my time, and I'm not familiar with the problems or what the motivation was to move to something else. Even PBIS is definitely not without its challenges, for sure, but for OpenStack specifically we just forklifted in what everybody else was already using. And I think the summary of this talk referenced alternatives to it; due to time constraints we didn't put that into the presentation, so apologies that it's not quite as advertised.

Great talk. Did you guys have to make any schema changes to AD to get any of this to work?
No, nope, AD was what it was. We would have had to if we were going to write projects and roles and that kind of thing to AD; when we prototyped that, things didn't map cleanly, and that was as much an artifact of how our AD was set up. But with read only, we didn't have to do any schema changes.

One person who really deserves credit for that, who I'd like to call out and who's unfortunately not here, works for CERN: Jose Castro Leon. CERN is in an interesting position: they have a huge Active Directory deployment, but they have the ability to write to it, and they really wanted this, so they're the guys who drove Active Directory support early on. That's one of the reasons why Active Directory does work so well. I just want to give CERN, and specifically Jose, much gratitude from me, because I know nothing about Active Directory and I don't want to.

Yeah, we don't either. Whatever we know, we learned from this experience, and CERN's wiki or blog article on integrating Keystone with AD was basically our starting point; it's where we referenced everything from.

I think we're close to the end of time; let me look real quick here. Okay, thanks everyone.

Okay, thanks.