All right. All of y'all came for good seats for the next thing, right? The Star Trek and Firefly show, right? That's good. Don't worry. We'll get there. So who are we? I'm on stage, so I'm going to go first. I'm Joe Savak, a Senior Product Manager with Rackspace. We have Steve Martinelli with IBM, Marek Denis with CERN, and Brad Topol, also with IBM. They're going to help rescue me from the stage here in a second. First off, I want to give a lot of thanks to the contributors that helped bring Identity Federation into Keystone. We saw a lot of different use cases, and I'll deep dive into those, but it was definitely a community effort from Red Hat, Rackspace, IBM, CERN, and the University of Kent. I just really wanted to say thank you. It's a great example of how collaboration in open source really does work. So let's talk about those use cases. We saw these from a private cloud perspective, from a public cloud perspective, and from a client developer perspective as well. Some of them do indeed boil down to "I want to be able to use one credential across multiple different clouds." Some of them also help enable things like cloud bursting, being able to provision workloads to the cloud best suited for a specific workload. Others wanted to enable the client to be slim and small, without a whole bunch of federation logic embedded in it that it doesn't really need. And there's also the portability of the protocols that we use: we don't want to reinvent the wheel, and we don't want to invent a new federation protocol. So we boiled all of these down into identities being able to federate into an OpenStack cloud, and identities being able to federate out of an OpenStack cloud. So briefly, at a high level, and we'll detail this quite a bit more: what do we mean by identities being able to federate in? Say that you have an Active Directory already set up and you're running federation services.
So you have some notion of a federation protocol, such as OpenID Connect or SAML, within your own identity system. You have many different employees who already have their credentials, already using their username and password to access many different internal resources. And you don't want to have to provision those different credential sets out to a public cloud such as Rackspace or IBM or HP. So this allows you to use your employer-provided credentials to get appropriate access to the public cloud. You can still control access centrally within your own realm. You don't have to manage those identities externally, and you don't have to worry about the maintenance of those identities either. If an employee were to leave or get terminated for some reason, you could just shut off that access within your own identity system. What do we mean by identities being able to federate outward? This means that, still looking at that on-premises OpenStack cloud, you can still get access there, but then you can actually request access to other service providers, other clouds, if you need to. There's a trust relationship that exists between the on-prem cloud and the public cloud. This enables a client to hit one instance of Keystone, interact with that Keystone, and get access to other public cloud Keystones. So I'm going to go ahead and turn it over to Marek, who can talk more about the CERN use case. So if you attended yesterday's keynote session, you might have noticed that we have 70,000 cores in our cloud, distributed among two data centers, but this is nothing when it comes to analyzing the massive amounts of data we have. That's why we need to be able to operate in a multi-cloud environment. That's why we need federation. That's why we need to be able to easily adjust our capacity on demand. Sometimes we need more resources, sometimes less.
That's why we need to keep costs low while maintaining our data centers; that's obvious to all of us. We also need to meet increased demand around high-energy physics conferences. Physicists usually ask for more resources because they want to analyze their data, and they need computations run for their papers. The pay-as-you-go business model of cloud computing fits CERN's needs. We want to be able to use our credit card and then have our 10,000 cores for the next two weeks. CERN, as a scientific organization, also needs to be able to stand on the opposite side of the cloud: as part of scientific collaborations, we need to be able to share our resources with others, like the eduGAIN federation. So this is the typical picture, where we have constant capacity, but demand for resources varies from very low to very high, so the federated clouds could actually fill those gaps. Brad can tell you more about the classic authentication and authorization model, and then we will proceed to the Keystone-to-Keystone federation delivered in Juno. Brad? Thank you, Marek. Security is very important. Actually, never mind. Thank you. Opening night jitters. Just to refresh everybody on the classic Keystone authentication and authorization model: you typically have users stored either in a database or in LDAP or Active Directory, and once those users are authenticated, they're assigned roles for a particular project. So we map a user to a project via roles.
And when the token is created, that information is passed in the token, and the different OpenStack services out there will look at the user and see that the user is scoped to have certain roles for a particular project. Each service runs its own little policy engine with a policy file to determine whether the user with those roles is permitted, or authorized, to do things like provision a virtual machine or attach storage or what have you. So that's the basic model that existed, and if you were here when we did a previous session at the previous summit, you know that wasn't enough, right? So folks, and Marek was one of our original stakeholders, came to us and said: okay, you've got some basic stuff in there. You can do basic LDAP, basic Active Directory, but understand, we're very large, just like many enterprise customers are. We need support for federated identity management, federated identity protocols, being able to use identity providers. And so that required us to grow up a little bit. The work that we did was to enable Keystone to work in a world where you're using open federated identity protocols like SAML, the Security Assertion Markup Language, to handle the federation. The basic flow is here: the user gets authenticated and receives a SAML assertion back. The SAML assertion flows to Keystone, and Keystone does what it did previously, which is give you a token back. That token can then be used to determine whether you have access to services. So the overall flow was quite simple, but there were some issues that made this difficult. Namely, if all the users are now stored out in different places, a variety of different sources, and coming across as a user with some SAML assertions, those users are no longer being stored in Keystone.
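The per-service policy check described above can be sketched as follows. This is a simplified, hypothetical illustration of the idea, not the real oslo.policy engine that OpenStack services actually use; the rule names and token layout here are stand-ins.

```python
# Minimal sketch of a per-service, role-based policy check.
# POLICY maps an action name to the set of roles allowed to perform it,
# playing the part of the per-service policy file described in the talk.
POLICY = {
    "compute:create_vm": {"admin", "member"},
    "volume:attach": {"admin"},
}

def is_authorized(token, action):
    """Return True if any role in the token's project scope permits the action."""
    allowed_roles = POLICY.get(action, set())
    return bool(set(token["roles"]) & allowed_roles)

# A token carries the user, the project it is scoped to, and the roles.
token = {"user": "alice", "project": "demo", "roles": ["member"]}
print(is_authorized(token, "compute:create_vm"))  # True: member may create a VM
print(is_authorized(token, "volume:attach"))      # False: attach needs admin
```

The point is just that authorization lives with each service: the token carries user, project, and roles, and the service's own policy decides what those roles may do.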
So if we look at the model Keystone has, where I take my user, store it in the database, and map it with roles to a particular project, it becomes very hard to do that now, because the users are ephemeral. They're not in the database. They are over in some identity provider somewhere. How we solved this in the previous iteration was relatively straightforward. The trick is: given those SAML assertions coming in with the user, we use a mapping language and mapping algorithms to map that user to a group in Keystone. Once the user is mapped to a group in Keystone, the group itself has roles, and now, by inheritance, the user has roles because it's part of a group. So the group and its roles can be used as the mechanism for authorization that allows the user to have the privileges to do things like provision a VM or attach storage or what have you. And so what you see up here, and again, you can go look at the previous documentation or our YouTube videos from the previous summits, but the magic here is the SAML attributes of the user coming in, and a very expressive mapping language that allows you to map these things to actual Keystone groups. What was really interesting, to move this forward for hybrid clouds, is what Steve's going to take you through: how we can extend upon this to do Keystone-to-Keystone federation for an on-prem and a public cloud. Right, thanks, Brad. So yeah, in the Icehouse release, we managed to make Keystone work as a service provider, where we were basically federating in, but we wanted to extend that for the Juno release and make sure we could satisfy the federating-out use case that Joe talked about earlier on. How we were going to extend that was by federating two different Keystones, and I've tried to depict this on the diagram here.
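The SAML-attributes-to-group mapping just described can be sketched like this. The rule shape below mimics the "local"/"remote" structure of Keystone's JSON mapping format with an `any_one_of` matcher, but the evaluator itself is a simplified illustration, not Keystone's actual mapping engine, and the group id is made up.

```python
# One rule: if the incoming SAML attribute orgPersonType contains
# "Employee" or "Staff", place the user in the federated-employees group.
MAPPING_RULES = [
    {
        "local": [{"group": {"id": "federated-employees"}}],
        "remote": [{"type": "orgPersonType", "any_one_of": ["Employee", "Staff"]}],
    }
]

def map_to_groups(assertion, rules=MAPPING_RULES):
    """Return the Keystone group ids whose remote requirements all match."""
    groups = []
    for rule in rules:
        if all(
            set(assertion.get(req["type"], [])) & set(req["any_one_of"])
            for req in rule["remote"]
        ):
            groups.extend(l["group"]["id"] for l in rule["local"] if "group" in l)
    return groups

# SAML attributes arrive as multi-valued lists.
assertion = {"orgPersonType": ["Employee"], "Email": ["alice@example.com"]}
print(map_to_groups(assertion))  # ['federated-employees']
```

Once the ephemeral user lands in a group this way, the group's role assignments do the rest of the authorization, exactly as in the classic model.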
So the Keystone on the left is your on-prem private cloud, and the Keystone on the right-hand side is your public remote cloud. The main tool that we're going to use to communicate between the two Keystones is SAML. There were two reasons for this. One, it's an OASIS standard; we didn't want to make up our own proprietary stuff. And two, we could piggyback on, or reuse, the work that we had already done in Icehouse. I'm going to go into that in just a second, but briefly I wanted to go through the flows of the diagram here. Steps A and B are just a one-time setup that the admins have to perform so the two clouds trust each other. Then, steps 1 and 2 are where a user logs in to his own on-prem private Keystone and asks for a token to be SAMLized, meaning: here are my credentials, please give me a SAML assertion. And then steps 3 and 4 are old flows; these were already done in Icehouse. So the user has a SAML assertion now, and he's presenting it back to Keystone, which was already supported in Icehouse, and he can get a token back that can be used on that cloud. All right, so what was delivered in Juno? The main piece of code that was delivered in Juno was the SAML generator. This is basically steps 1 and 2 of the previous diagram: you give a token as input to Keystone and you SAMLize it, basically generating a SAML assertion. Because if you think about it, a Keystone token has your username and it has your roles; it's pretty much the same content that you would find in a SAML assertion, just not as much, and in a different format. So the typical flow would be: a user logs in, gets a token, and then asks Keystone, now give me a SAML assertion based on this token. The output would then be a full SAML assertion, very similar to one coming from an identity provider, except it will have the user ID that was logged in, as well as all the roles that the user has.
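Steps 1 and 2, trading a token for a SAML assertion, can be sketched as building a request against Keystone's federation API. The endpoint path and body shape below follow the Juno-era `POST /v3/auth/OS-FEDERATION/saml2` call as I recall it from the documentation of that release; treat the exact fields as assumptions and check the docs for your release before relying on them.

```python
import json

# Juno-era path for asking Keystone to "SAMLize" a token (assumed).
SAMLIZE_PATH = "/v3/auth/OS-FEDERATION/saml2"

def build_samlize_request(token_id, project_id):
    """Build the JSON body that trades a scoped token for a SAML assertion."""
    return {
        "auth": {
            # Authenticate with an existing token rather than a password.
            "identity": {"methods": ["token"], "token": {"id": token_id}},
            # The scope determines which project's roles end up in the assertion.
            "scope": {"project": {"id": project_id}},
        }
    }

body = build_samlize_request("abc123token", "demo-project-id")
# POSTing this to SAMLIZE_PATH on the on-prem Keystone would return a
# (comparatively short) SAML assertion carrying the user ID and roles.
print(json.dumps(body, indent=2))
```

The response is then presented to the remote Keystone exactly as an assertion from any other identity provider would be, which is why steps 3 and 4 could reuse the Icehouse code unchanged.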
Furthermore, the user could be based on an SQL identity, an LDAP identity, or even an identity coming from another IdP that you used SAML with, so it's kind of an iterative process there. That was kind of cool. The main point, though, is that it would actually be a much shorter assertion than you would normally get from a typical identity provider. Going into a little bit more detail on that topic: you would have to configure the two Keystones, each one a little bit differently. The Keystone on the right-hand side is the Keystone acting as a service provider. This is basically the same configuration as you would do with the Icehouse release code. You just have to set up a protocol, set up an IdP, and set up a mapping, except in this case the protocol is always SAML, and the IdP you can call the Keystone IdP. In the mapping, you would map from the user and the roles into a username and group on your side. And on the left-hand side, the private on-prem Keystone, you would have to fill in some content in the keystone.conf file. What that does is prime content in the SAML assertion about that Keystone, because when you send that assertion over to the public cloud, it needs to ensure that the content is correct and that you're sending it to the right folks. And there's a lot of docs on this in Keystone. We tried to really document it as much as we could, because it's not simple. And I just want to say thank you to Rodrigo Sousa, who's actually not at the conference today, but he tested all this out and wrote a great blog about it. He just published it, I think, yesterday. And lastly, we also had a few more things that we delivered in Juno that are related to federation, the first one being audit support for federation.
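The on-prem keystone.conf content just described might look like the fragment below. The option names follow the Juno-era `[saml]` section as I recall it from the documentation; the values are placeholders, so verify both against your release's configuration reference before using them.

```ini
# Identity-provider-side settings on the on-prem Keystone (values are examples).
[saml]
certfile = /etc/keystone/ssl/certs/signing_cert.pem
keyfile = /etc/keystone/ssl/private/signing_key.pem
idp_entity_id = https://keystone.onprem.example.com/v3/OS-FEDERATION/saml2/idp
idp_sso_endpoint = https://keystone.onprem.example.com/v3/OS-FEDERATION/saml2/sso
idp_organization_name = example_company
idp_contact_email = admin@example.com
idp_metadata_path = /etc/keystone/saml2_idp_metadata.xml
```

These are the values that get primed into the generated assertions and into the metadata file the two Keystones exchange; the keystone-manage extension mentioned a bit later in the talk (the subcommand name, as I recall it, is `saml_idp_metadata`) emits that metadata XML from this section.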
In Icehouse, I think we delivered the federation work a little bit too late to actually include this support in Icehouse, but in Icehouse we did have code to audit authorization and authentication events from local users coming from an LDAP or SQL backend. We wanted to extend that support to federated users, because it's actually pretty critical for them: since federated users don't exist in Keystone, we really need to audit them. So we worked with the CADF library again. We made changes to pyCADF, and we updated the CADF spec itself; I think that one's still in flight. And now we have the ability to audit federated authentication events. Then there's also the metadata generator. If you recall the diagram from about two slides ago, it's the trust setup. Basically, a lot of the values come from the keystone.conf file and get primed and put into the metadata file that has to be swapped by the two Keystones to trust each other. We tried to make this a little bit easier by having a command-line tool: we extended keystone-manage to allow the users or the admins to create metadata. Now I think it's Marek, talking a little bit more about CERN. I'll click for you. Yeah, so a couple of words on what CERN needs and what CERN uses today. We are trying to stay as up to date as possible. As I said, we have two data centers. We have OpenStack installed. We use cells; I think this is not very popular. And we have our corporate database, where we keep information about all of our 40,000 active users. Part of CERN's mission means that there is constant staff rotation, so it would be very inconvenient to set up local accounts in our 12,000 services, which are actually federated via the ADFS instance. ADFS is Microsoft's implementation of an identity provider. So centralized management is extremely useful for our use case.
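The federated audit records described above can be pictured roughly as follows. The field names are CADF-inspired (initiator, target, action, outcome), but this is a simplified, hypothetical sketch, not the actual pycadf API; the extra initiator context is the point, since a federated user has no local Keystone entry to look up later.

```python
import datetime
import uuid

def federated_auth_event(user_id, identity_provider, protocol, outcome):
    """Build an audit record for a federated user's authentication attempt."""
    return {
        "id": str(uuid.uuid4()),
        "eventTime": datetime.datetime.utcnow().isoformat() + "Z",
        "action": "authenticate",
        "outcome": outcome,  # "success" or "failure"
        "initiator": {
            "user_id": user_id,
            # Recorded because the user is ephemeral: there is no local
            # Keystone entry to consult after the fact.
            "identity_provider": identity_provider,
            "protocol": protocol,
        },
        "target": {"service": "identity", "typeURI": "service/security/account/user"},
    }

event = federated_auth_event("alice@idp.example.com", "acme_idp", "saml2", "success")
print(event["action"], event["outcome"])
```

A real deployment would emit such records through pyCADF and the notification machinery rather than building dicts by hand; this only illustrates what a federated authentication event needs to carry.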
CERN also needs to share cloud resources with other eduGAIN federation members. That's why we had to update to Icehouse and then enable federation. And presumably this will be the first production setup in the world where we will be giving our resources to other members. So again, a good test, but coming to the business world: so far, our users will be accessing the federated resources via the command-line interfaces, so Keystone Client and the OpenStack Client, which was released a couple of days earlier. There's no web SSO enabled version upstream in OpenStack yet; however, we have our own version. So in a few weeks, when I go to openstack.cern.ch and try to access our CERN Horizon, I will actually be using federation, with everything we have in OpenStack Icehouse and Juno, plus a couple of our patches. We will also try to map group membership one-to-one with the group membership in our Active Directory, as this will be much easier for us when configuring our users. And yeah, a couple of words from Joe on what we are going to do for Kilo. Thank you. There are a lot of sessions going on about federation today, including one from Marek, who's actually going to demo it. What time is your session? 2:30. And then also Craig, I think I saw him in the audience; he's giving a brown bag at about 1 o'clock or 1:15, I think, as well. So we realize that there's a lot more that we want to deep dive into as far as federation, and some of it goes along the lines of authorization. Horizon integration, making sure that we can easily set this up, is very important: having it come already enabled, so you can go ahead and set up the identity provider, establish a trust relationship, and then, on the login page, be able to use your own credentials. Attribute mapping enhancements.
Right now the attribute mapping is at a group level, and so the authorization is actually in the Keystone that you're federating to, in the group-to-role information. We want to take that a little bit deeper, especially focusing on the capabilities that someone has access to, and not assuming that the service provider you're going to has the same concept of a role that you have internally. So there's a role-to-capability mapping that is missing within Keystone that we want to go ahead and add. Additionally, the mapping as a service provider is actually stored within the keystone.conf file instead of being local within the private cloud Keystone that you're going from. We want to move that over to the private cloud Keystone and make it a little bit more generic, so that you can not only federate to a different Keystone, but also federate to a different service provider that doesn't run Keystone. Fine-grained access control: going deeper than just the role-to-capability mapping, we want to be able to ensure that someone has federated access to a specific server or a specific resource, a database, a load balancer. That concept is also missing in Keystone, and it is pretty tricky to solve at scale. You don't want to end up having Keystone know everything about all resources, so we need help there. And then, of course, support for additional federation protocols. I love the design of Keystone and how it's extensible in the federation protocols and extensible in the backends. It means that each Keystone can become a unique little snowflake in how it's implemented, based upon the company's needs, our customers' needs.
So you can have one backend that is SQLite, but then you can actually choose to federate over to a service provider or another implementation, where you can federate from ADFS and then federate using a different federation protocol to a different service provider. There's a lot of work here, and all of it boils down to saying that we need help with this. There's definitely room for more Keystone contributors, so if you're interested in doing that, please talk to me. The new PTL is Morgan Fainberg; he's also probably a better person to talk to than me. That's it. Any questions? Hey, Craig. Can you touch on token revocations? Token revocations. Steve. Adam, where are you? Well, there are a few ways to tackle this. I think we were thinking about letting Keystone middleware tackle this. There's also making tokens very, very, very short-lived, like two minutes or something. And then there was also one more way... Through the SAML revocation. There's actually a standardized way through SAML revocation. Yeah, there's a standardized way, yeah. So we'll be looking at that as well. It is a bug right now, but... Is that something for the K release? It's targeted for K. Yeah. Hi, Craig. So I just had a point of clarification I wanted to ask about, and then also another question. The reason why you're mapping roles to groups is because you don't want to create a user on the remote... Okay. Absolutely. Yes. Okay, that's out of the way. Now, in terms of resource discovery, I mean, obviously you've got to crawl before you walk and blah, blah, blah. So that's why we're dealing with just simple pairwise federation. Okay, so it seems like the approach that you have right now is that the user will get a token and also something that tells them the other Keystones that they could go talk to, which means that they would have to do an authentication against each Keystone to get its service catalog. Yes. So this isn't...
If you're familiar with past discussions on this, we had this concept of a combined service catalog, one token to rule them all. That actually was a security risk; we discovered during design sessions that there could easily be a man-in-the-middle attack on that kind of thing. And so that's why the trust is more pairwise. It does make the client heavier than we would have liked, but the client still relies upon Keystone to know the federation protocol; it just acts as a redirect of that assertion information over to the service provider. Okay, but on the other hand, that's a scalability issue, in the sense that each user has to know about every possible Keystone that they could possibly talk to. So that's discoverable in the Keystone that they're typically talking to. They can discover it. And there's a kind of classification of Keystones that will probably need to occur within the client as well, or a classification of the services that each Keystone is making available to a specific federation, because they may not make all of their services available to everybody in a particular context. This is why I put in, I think, the storage-only federation idea: if a Keystone, or an OpenStack cloud, just wants to federate something like a Swift service as opposed to a compute service, it would have to filter its service catalog depending on who was logging in as part of which federation, such that they would only get the things they are authorized to use out of that particular catalog. It's still somewhat discoverable, though. Well, no, that's what I'm getting at: there's a significant resource discovery issue here. And there are lots of ways of dealing with the scalability side of it, but I'd also like to know more about the security threats that, you know, might be getting opened up. Sure. Okay, thank you. Thank you. Any other questions?
And if you're looking for the brown bag talks, they're on this mezzanine level that's really hard to find. It's on level 1.5, on the other side of the building. And what time was your talk? I said 1 or 1:15. 13:45. Ah, I was way off. Sorry about that. I know who schedules the mezzanine, just so you know. I just had a question about, and it's actually very pertinent to us, what you're doing. I work at the University of Toronto Libraries and for Scholars Portal, which is a consortium of university libraries in Ontario. Have you considered, or have you already done work on, connecting Keystone as a service provider to existing identity providers like Shibboleth? Yes. Yes, I mean, as I said, we... If you could just talk a little bit about that, that would be awesome. Yes, so basically we have tested this with a couple of different identity providers. First of all was the Shibboleth IdP. Second, we also have plugins for authentication with Microsoft ADFS, Active Directory Federation Services, and this is actually something we use in production, on a daily basis, since a couple of weeks ago. And I think, Steve, you have also tested this with your Tivoli Federated Identity Manager, the IBM product. Yeah, so we tested it out with a few IBM products and it was working fine. We had the identity manager be Tivoli Federated Identity Manager, and Keystone was using mod_shib as well. Yeah, it was working out fine. So two things on that. One, we have a developerWorks article; if you want to see how that works, you can go read it. And I have to point out that Steve is from Toronto as well. Yes, so if you need help... You can track him down. I live like 20 minutes away from U of T, so let me know. Thank you. Any other questions? Are you thinking about supporting other profiles, like... For compliance reasons. So right now, what you're doing is you're posting to the service provider over HTTP, right? And not every service provider likes this model.
Because of security reasons. So are you guys thinking of doing the artifact profile, so that an artifact is passed over HTTP and the service provider picks up the SAML from the identity provider over a different channel? Please speak up with the question, I'm sorry. Okay, so in SAML there are multiple profiles. What you guys are doing is the POST profile: basically, you are getting the SAML and posting it to the service provider over HTTP. It might be over SSL, but that's a different story. Not all service providers like this idea; they want a different means of picking up the SAML, where you post the artifact, which is a small piece of ID, and the service provider retrieves the SAML itself. Yes. So the artifact profile is kind of, from a service provider standpoint, indicating what attributes to actually bring into the service provider, from what I understand. It's not... Okay, so right now you are consuming the SAML in Apache, and from there you have defined the mapping. In the artifact profile, the service provider, basically your Keystone, has to pick up the SAML from the identity provider and process it. So that model is, from the service provider or business perspective, preferred over the browser POST thing. Right. So one of the reasons we do these sessions is to get more user feedback, and to help people understand that we are actually trying to make progress in this space. This is really one where I think we do have to learn more about the requirements. But that's why we're here; let's take it offline. Yeah, absolutely. There's a whole set of SAML workflows; we just implement the POST piece, and we rely on the client to implement the other pieces as far as the negotiation that actually occurs. But yes, Brad's absolutely right, we need feedback, and we would love contributions. The other piece I think you mentioned was that we need a more dynamic way of defining the mapping.
So right now it's pretty static, and we have a requirement to have multiple data centers, or a cloud, integrated like this. A lot of that is what is broadcast from a specific OpenStack cloud about the services that it supports. This model of mapping is not scalable. Right. Thank you. Yes. You mentioned that we need to use the OpenStack common client, and that client needs to connect to get the assertion from the identity provider. As far as I know, there's no standard way of doing that, so I suppose you need a plugin for each different identity provider, like one for Ping, one for ADFS, one for Tivoli, etc. Am I right, or is that... You know, a standard is a standard; it's something written on paper, and there are two main implementations. So far we have two different plugins in Keystone Client, which actually cover both options. So for instance, if I want to use a Shibboleth-compliant implementation, I will have one plugin, and for the ADFS I mentioned before, I will have to use a different plugin, and there's nothing we can do about that; we just get what we get. Yeah, but in case you want to use that with Tivoli, you said you've tested it with Tivoli, did you need to write a different plugin? No, I just used the mod_shib one, the Shibboleth plugin, and it was working fine. So it's just a standard example. On the client side as well? Yeah. Yeah, so two different implementations of the same protocol, and these are the major implementations; I'm actually not aware of others. The ADFS plugin was a little bit strange, just because it had a few nuances to it, so it needed its own auth plugin in Keystone Client and OpenStack Client. But if you're just using Shibboleth, or just regular SAML, or Tivoli, as you mentioned, then the v3 SAML auth plugin was working fine for me. Is there anything on the roadmap to implement service impersonation, like ECP with constrained delegation and Active Directory, things like this?
Yeah, I'm going to redirect all roadmap questions to Morgan Fainberg, who's the PTL. Perfect. So talk to him, he's right there. Grab him, tackle him. Any other questions? Yes? So the question was: do the Keystones actually require ongoing communication between the identity provider and the service provider? There's an initial setup in which one Keystone knows about the other as a service provider, and that one knows about the other as an identity provider, and there are keys that are exchanged so that it can actually validate the SAML signature that comes across. But after that, there's no ongoing communication between the two Keystones. Yes? Do you have plans to use SAML encryption? So the entire SAML structure has a signature with it, but do you want to take this one? Yeah, well, what parts do you want encrypted? Well, you know, all the attributes, the groups, because you could change those along the way. Okay, well, part of the SAML assertion, like the actual keys portion, is encrypted, but not the actual username or roles or anything. Right now there are no plans for that, but if there's a need... Were you worried about encrypted or signed? Just at the transport level, no? Okay. For those of you who didn't hear that, that was Nathan Kinder, and he was saying that if you're just using TLS, I think that takes care of it at the transport level. He's nodding, so I think I got that right. Okay. Everett, do you have a question? I think it's OpenStack Client that we have this in. Well, OpenStack Client relies on Keystone Client, so Keystone Client. Now there's the Enhanced Client or Proxy, ECP, which kind of takes care of this for you. If you get that in there, it understands SAML and would do pretty much all the work for you. Any other questions? We thought this would be a short session, but I'm kind of glad it wasn't, since there were so many questions. So that's good. Stay tuned.
The Star Trek and Firefly session is up next, which is what y'all are really here for. Thank you very much.