My name is Sean Mullin, I'm from IBM, and with me are Jeff Roses, Stephen Suhu, and Henry Nash. We're going to talk about an issue that we have, and since there are a few people here late on a Thursday, apparently it's of interest to you as well. So let me jump right into it. Here's the customer's problem space: they want to bring services to the cloud. The services are going to do something, it could be gaming, it could be financial, it could be research, scientific, anything like that. But what they want to focus on is what goes in the box. They don't want to reinvent the authn, the authentication, and they don't want to reinvent or write authorization code. Somebody comes in, we can identify them; now that I've identified them, what are they allowed to do? That's the authorization piece. If you can provide that in your cloud platform, then you have a way for services to focus on whatever their core service is, again, gaming, financial, research, scientific, anything like that. So that's what we worked on, and that's what the demo will show. If we look at this in an OpenStack view of things, you come in and you authenticate to Keystone. You provide a user ID and password, or it could be service-to-service, where you would provide a service ID and a key that goes along with that service ID. When we do this, as we probably all know in the OpenStack sense of things, you hit Keystone and you get one of the different types of tokens it supports, like a Fernet or a PKI token, whatever your choice is. Now, once we've identified that person, the request hits the authorization, and the way we define that today is with policy.json rules.
Here are our rules: this person has this role, and they're allowed to do these things. That's the problem space for the customer that we're trying to solve. And here's a little of our own world that we're trying to solve. We work for IBM, and we have a group of cloud platforms we have to support, and our customers may be in the same position, and they also have legacy systems. So if you look at it: OpenStack; Cloud Foundry; SoftLayer, which is IBM's proprietary cloud platform and uses a system called IMS, Infrastructure Management Services; and then our customers' legacy systems. If we look across the authentication piece, we've done some work on single sign-on. What we're finding is that the authentication piece is fairly easy, because we have a variety of protocols, and we'll look at those, that allow single sign-on: if I can identify you, you provide your credentials, and I can use those credentials across the platforms. Now, sometimes we have to do little tricks with that, because once you authenticate into one system, let's say Cloud Foundry, I authenticate with user and password and I get a UAA token, which is Cloud Foundry speak for User Account and Authentication. I have that token, I call down into the services, and all the Cloud Foundry services recognize that. But if I need to call across to OpenStack, or across to SoftLayer, for example, that service only knows my authentication through a UAA token. When that service tries to call over and say, hey, Sean wants to see all of his resources across all the cloud platforms, you have to go and exchange that token for a token that the other platform understands. So we developed a token exchange service. We're doing that work, but it's still all around identity.
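The token-exchange idea can be sketched roughly like this. Everything here, the class name, the in-memory mapping, the platform labels, is illustrative only, not the actual IBM exchange service:

```python
import secrets

class TokenExchangeService:
    """Toy token-exchange service: swaps a token issued by one cloud
    platform for an equivalent token understood by another.
    (Purely illustrative; real exchange services validate tokens
    against each platform's identity service.)"""

    def __init__(self):
        # token -> (platform, user) for tokens we have validated
        self._known = {}

    def register(self, platform, user):
        """Simulate a platform issuing a token after authentication."""
        token = secrets.token_hex(8)
        self._known[token] = (platform, user)
        return token

    def exchange(self, token, target_platform):
        """Swap a token from one platform for one the target understands."""
        if token not in self._known:
            raise PermissionError("unknown or expired token")
        _, user = self._known[token]
        # Re-issue a token for the same identity on the target platform
        return self.register(target_platform, user)

# Usage: authenticate to Cloud Foundry (UAA-style token), then
# exchange it so a call across to OpenStack can be understood.
svc = TokenExchangeService()
uaa_token = svc.register("cloudfoundry", "sean")
keystone_token = svc.exchange(uaa_token, "openstack")
```

The point of the sketch is only the shape of the interaction: the identity stays the same, the token dialect changes per platform.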
It can figure out, okay, I authenticated, so I'm Sean here; I exchange tokens and it goes, oh yeah, I know it's Sean over here. But the harder part is the authorization piece, and that's what we'll talk about: access control across all of those platform layers. How do we break this out so we have a unified system, one way to define it and one way that all the platforms recognize? I think the majority of you are security folks, but I do want to run through the terminology we're going to use throughout this. It comes from an Internet RFC, a Request for Comments, RFC 2904. First we have our policy administration point, the PAP. Going back to the problem space I was talking about: how do you do access control across all of these platforms? Well, you also have to have a way to configure it, to say Sean can see these things, he can write to this, he can update that. It has to be a system that you can administer; somebody has to grant you that access. And that's something we as a cloud provider don't know, because a service comes in here and, we'll use simple CRUD, it can create, read, update, delete. So in creating this PAP, the policy administration point, it has to be something that the user can get their head around and configure. It has to be a very easy way to do this across the very broad expanse of multiple cloud platforms and the hundreds or thousands of different resources across those. When one service does something financial and another does gaming and entertainment, you have to allow each of them to configure access to their systems in a meaningful way that they can understand, in a common way. That's usually done through roles and the like, and we'll get into that. Okay. Then we have our policy decision point.
Every time a user tries to access something, we need to stop them at the policy enforcement point, the PEP, and go, whoa, whoa, whoa, show us your ID. Then we go and ask for a decision: we know what your action is, and we're going to see whether you're allowed to do this or not. And of course we have our PIP, our policy information point, which is our system of record. You have to know when Nova is creating something and when Nova is deleting it, how that resource appears. You have to understand the life cycle of these resources as they pop up, because they're going to hit your policy decision point with, is Sean allowed to do this? Maybe that resource was just freshly created, and your policy decision point has to know when it's created and when it's deleted, because that's how it will be able to make decisions about it. And the last one is where you store these policies when you configure them and ask for decisions on them: that's the policy retrieval point. Okay, so here's the big diagram. The user comes in, asks for something, and we stop them: show me your identity. Then we go down and say, Sean is trying to perform this action on this particular resource, is he allowed to do that? We have to get that decision back very quickly, because, I forget the exact statistics on our bigger clouds, but you're getting a boatload of these per second, and you have to be very fast at that decision point. There's our administration point; that's a difficult challenge too, to make sense of this and present it to the administrator in a way that's meaningful and easy to configure, again across multiple platforms and hundreds of thousands of resources. And our PIP there is always monitoring the real infrastructure, the system of record: is it there, is it not there, what does it actually look like?
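The RFC 2904 pieces described above can be sketched as a toy, with made-up role rules standing in for a real policy store:

```python
# Illustrative RFC 2904 components: the policy retrieval point stores
# rules, the PDP decides, the PEP stops the request and enforces.
POLICY_STORE = {  # Policy Retrieval Point: role -> allowed actions
    "admin":   {"create", "read", "update", "delete"},
    "auditor": {"read"},
}

def pdp_decide(role, action):
    """Policy Decision Point: is this action permitted for this role?"""
    return action in POLICY_STORE.get(role, set())

def pep_enforce(user, role, action, resource):
    """Policy Enforcement Point: halt the request, ask the PDP,
    then allow or deny."""
    if not pdp_decide(role, action):
        raise PermissionError(f"{user} may not {action} {resource}")
    return f"{user} performed {action} on {resource}"
```

In a real deployment the PDP lookup is a remote call and the policy store is administered through the PAP; this sketch only shows how the responsibilities separate.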
Okay. So that's what we were trying to do across multiple platforms. When we first looked at this, we looked at OpenID Connect. A couple of years ago with Keystone the question was federation: how do we federate identity across Keystone? That has advanced fairly well, and most people, I think, are doing this, where you say, well, I don't know if this is Sean or not, but let me go ask this one system of record for the identity. So OIDC and OAuth2, it's still all around identity; they're very strong and rich protocols, and the industry, in my opinion, has done a fairly good job of solving this problem and building standards around it. Then there's SAML, which you're familiar with; it comes out of OASIS, an international standards consortium that facilitates business-to-business interactions. SAML has a way to do assertions and a way to do attributes. On attributes: we have role-based access control, RBAC, and then you can have ABAC, attribute-based access control, and SAML is pretty good at that. The way to think of attributes is that maybe I don't need to know everything about you. A good example: your kid goes to the fair and wants to ride a ride. The ride doesn't need to see that kid's ID or anything like that; it just needs to know the kid is taller than this red line. That's the only attribute we need, and if you're taller than that red line, you get to access the ride. So that's an easy way of saying, all I want to know is that one attribute; if you have it, you can do this, anywhere across the cloud. Now, what SAML did is it stopped at the authorization piece. Some simple authorizations, yes, it's good at, but it intentionally stopped there and said the SAML standard isn't going to take on that whole problem.
But we have another OASIS standard, which is XACML. I think you're familiar with that: the eXtensible Access Control Markup Language, and access control is exactly what we're trying to solve. It encourages a separate PDP, which we see there. So if you break that out, then you can use XACML, and we use XACML in our policy decision point; it's a very fast, very robust, extensible way of doing that. In the XACML world, each request has a subject, resource, action, and environment, where the environment could carry attributes. And I think I'm going to let Jeff talk now. Thanks, Sean. All right, so Sean had the picture of where we want to go with the final product. We're an experimentation type of organization, so we said, let's experiment with this. We have two clouds: OpenStack on the left, and the IBM Bluemix cloud here. We have a cloud identity and access management solution, here on the right, and you can access it through the Edge server. We had to do some modifications to our policy.json in order to point Barbican and Oslo.policy over to our Cloud IAM policy decision point. In the demo that you're going to see, we have the Edge server, we have the endpoint to manage access, and then our policy decision point. Policies are created for the two users that we'll have in the demo, and they are stored in Cloud IAM; that's the policy retrieval point that Sean talked about. On the OpenStack cloud, we have Keystone, and we're using Keystone's user identification, the globally unique ID, as well as Keystone tokens. That token is used by Barbican. And in Barbican there's Oslo.policy, which handles the policy enforcement point together with the decision that comes back from our solution.
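A decision request in the subject/resource/action/environment shape mentioned above might look something like this, loosely modeled on the XACML JSON profile; the attribute IDs are abbreviated and all the values are made up for illustration:

```json
{
  "Request": {
    "AccessSubject": {
      "Attribute": [
        { "AttributeId": "subject-id", "Value": "sean" },
        { "AttributeId": "role", "Value": "admin" }
      ]
    },
    "Resource": {
      "Attribute": [
        { "AttributeId": "resource-id", "Value": "barbican:secret:1234" }
      ]
    },
    "Action": {
      "Attribute": [
        { "AttributeId": "action-id", "Value": "read" }
      ]
    },
    "Environment": {
      "Attribute": [
        { "AttributeId": "current-time", "Value": "2017-02-09T10:00:00Z" }
      ]
    }
  }
}
```

The PDP evaluates its stored policies against these four attribute categories and returns a Permit or Deny decision.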
We did modify the policy.json file, and we're leveraging OpenStack's HTTP check solution; I'll show you the policy.json file in a second. What happens is Oslo.policy reads in that policy.json file. A request comes in, and Oslo says, okay, I need to make a decision on this based on some rules. One of the things that we did in the policy.json file is put a value in there that points to our policy decision point. When the transaction comes in, it goes to Oslo.policy, which says, hey, this is a remote, an HTTP check; it's a remote decision that needs to be made. And it sends the information across to our Edge server and down to our policy decision point. The information in the body is an action; the user, which is the globally unique ID; and a resource, which in this experiment was the project's globally unique ID. So in the policy.json, you can see it uses a similar syntax. We put in an iam entry with our endpoint right there, and that's our policy decision point. Then you can modify the rules within the policy.json to say, hey, go call IAM. So if you want to do a secret non-private read, the first entry in the rule is iam, and that points us over to our policy decision point. Now, one thing I do want to go back to is that we do have this remote call. For future work, we'd like to add a caching module here, an extensible cache, so that we can increase the hit rate right there in Barbican. The best thing is not to have to go over the wire, right? If we have that cache, we want a high hit rate in it, so the decision can be made right there in Barbican based on information that our policy decision point provides.
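As a concrete illustration, a trimmed-down policy.json along these lines would wire Barbican's rules to a remote PDP via Oslo.policy's HTTP check. The endpoint URL here is invented for the sketch, and real Barbican policy files carry many more rules:

```json
{
    "iam": "http://edge.example.com:8080/iam/v1/decision",

    "secret:get": "rule:iam",
    "secrets:post": "rule:iam",
    "secret:decrypt": "rule:iam"
}
```

Oslo.policy treats a check string beginning with `http:` as a remote check: it POSTs the request's target and credentials to that URL and grants access only if the server's response body is `True`, which is why the rules above can simply delegate to the `iam` entry.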
So between Oslo.policy and Barbican, that enforces the decision that our Cloud IAM solution returns. Now Steven's going to show us a demo. All right, so for this demo we have two use cases. We have Janice, who is our development lead; she has administrator access and is able to perform key creations and so on. The other user we have is Maureen. She's a development intern, new, coming in just for a little while. She shouldn't have access to do anything administrative; she has just an auditor role and is only able to read keys. So now on to the demo. I have a pre-populated curl request here. We're going to use Keystone for authentication, so we're going to go get a token for the user Janice, the administrator. We perform that, get the user token, and save it off. Then Janice is going to try to do a key creation. She goes off and performs that key creation, and we'll see it go through: it hits the policy.json, it goes to the rule for secrets:post, it sees the iam rule, does the HTTP check, goes across the wire, hits our Cloud IAM solution, evaluates the policy, and comes back with a decision. You can see here that it hit the secrets:post rule, and the response that returned from our Cloud IAM solution is true: Janice, this admin user, has the ability to do this key creation. So now we're going to come back to this curl request and pull it up.
And we're going to come back as user Maureen, change up the request, and go get that token as well. Maureen is also going to attempt a key creation, but she only has the auditor role, so she shouldn't have access to do that. We get this token here, perform the same request for the key creation, and save that off. Then again it goes over the wire; it sends the attributes it has, the project's globally unique ID, and it should say on that request that Maureen does not have access. So when it goes across the wire, it comes back to that policy.json, evaluates the rule, and goes on through: it hits the rule, makes the request to our Cloud IAM solution, and comes back with a decision of false. That gets passed back to the policy.json, passed back to Oslo.policy, and based on the decision that came back from the Cloud IAM solution, it either allows the creation or denies the user at that point. So I think now I'll hand it over to Henry. Once we get the slides back, yeah. So I'm Henry Nash, I'm a Keystone core, so I spend my time on Keystone identity stuff. This ability to plug in an external policy decision point is something we've mulled over in Keystone for a while. The HTTP check stuff was written about three years ago and hasn't been touched since, so it's a reasonable first solution, but probably not where you want to end up. I'll just talk through that. Right, so this is kind of what we've set out to do. But it's not just about that link across, and part of the problem is that the policy rules that are written might be inexpressible in OpenStack terms. One of the questions people have is, well, why not just do this in Keystone anyway? If you think about the Keystone project structure, it's really a tagging mechanism for resources. So why can't you tag everything? You could tag spaces from Cloud Foundry, and individual networks, and so forth.
And yes, you could. The problem is, can you express the rules you want in Keystone language? And the answer's usually no. So either we try to expand Keystone's capabilities beyond the natural OpenStack capabilities, cover that scope, support XACML, something like that; or we leave Keystone doing what it does best for OpenStack resources and allow independent policy decision points, and the other kinds of PxP components, to be plugged in. And I think that is the right approach. With that, of course, comes a problem: if you have multiple different clouds, then today they each have their own APIs for creating roles or rules, whatever it happens to be. So you have to marry up those APIs somehow as well. This is what you saw happening just now, which is how we did it, via the HTTP check. And as was said, that works okay, but I don't think it can be a very good performance solution at scale, because you're going to have an extra HTTP check over the wire for every call to an API in OpenStack, basically. So you clearly are going to want some kind of caching mechanism, as was said. And it probably isn't a straightforward HTTP cache you'd want, because it would be very hard to get the hit rate to where you want it: if you think about multiple users hitting the same thing, since the user ID will be part of the request, the caching may not be as easy as you think. So this leads us to the fact that we're going to have to take a step back and think about how we're really going to do this properly. Allied to that is the idea that you need to keep track of resources. Almost certainly, the types of policies will be more fine-grained than OpenStack can support. So how are we going to extend that? And again, how do we marry these APIs?
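The caching point above, that a plain HTTP cache won't work well because the user ID varies per request, suggests keying a cache on the (user, action, resource) triple instead. A minimal sketch, with a hypothetical `remote_pdp` callable standing in for the HTTP round trip:

```python
import time

class DecisionCache:
    """Cache PDP decisions keyed on (user, action, resource).
    A generic HTTP cache would miss constantly because the user ID
    differs between otherwise-identical requests; keying on the full
    triple lets repeated calls by the same user hit the cache."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._entries = {}  # (user, action, resource) -> (decision, expiry)

    def get(self, user, action, resource):
        entry = self._entries.get((user, action, resource))
        if entry is None:
            return None
        decision, expiry = entry
        if time.monotonic() > expiry:
            del self._entries[(user, action, resource)]  # expired
            return None
        return decision

    def put(self, user, action, resource, decision):
        self._entries[(user, action, resource)] = (
            decision, time.monotonic() + self.ttl)

def check_access(cache, user, action, resource, remote_pdp):
    """Consult the cache first; only go over the wire on a miss."""
    decision = cache.get(user, action, resource)
    if decision is None:
        decision = remote_pdp(user, action, resource)  # the HTTP round trip
        cache.put(user, action, resource, decision)
    return decision
```

A real design would also need invalidation when policies or resources change, which is exactly the PIP life-cycle tracking problem discussed earlier; the TTL here is only a crude stand-in for that.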
So in the world where you actually have an external PDP, you want Keystone to look after: okay, I federate into Keystone, I get my token, I'm allowed to make API calls, and this external PDP is going to make the decision for me. And actually, we spent some time with the Keystone team this morning talking about this. What we've agreed within the Keystone team is that we really need a formal way of defining the interface to an external policy decision point. The HTTP check is okay; it was sort of written because people might find it useful, and that was three years ago. But it's very chatty, and it's not necessarily what we want. So the Keystone team is going to be working this release, and it'll take more than one release, I think, on defining that interface. There are at least four companies, IBM being one of them, who have identified the need for this. Each of those companies has had customers come to us, and we've all implemented various prototypes like this one, doing roughly the same thing, and all come to the conclusion that it's great for a prototype and not great for production. For people who are looking to do this today, the HTTP check is kind of the only game in town. It works, you can do this, and if it's just a few of the APIs you want to cover, then you can probably get away with it. But if you want to wholesale hand off the decision point to an external entity, then I don't think the HTTP check is going to be good enough on its own. We're going to do this in a vendor-neutral way. IBM will be pushing for this, but we had agreement this morning that it needs to be done in a vendor-neutral way and that we will be publishing specifications. If you have an interest in this, please take part in the Keystone sessions and the Keystone online specification process.
And if any of you are developers, I'm sure we'll be talking about this again at the developer meetup in Atlanta at the end of February. So that's a good thing, I think. We're going to get that done, and it will take some time, because this is an important interface. We don't want to bypass Oslo.policy; Oslo.policy is the way that all the different OpenStack projects get their policy decisions made. So whatever we do, I think it's going to sit behind Oslo.policy, so that we're not changing that interface today and we won't break anything that's in production. The last thing I'm going to leave you with, and please feel free to give feedback, especially to the Keystone team, and Sean kind of hinted at this in the first few slides, is that none of this solves the fact that each different cloud platform has come up with its own token format. Now, sometimes that's okay. If you have totally different administrative regions and you're federating between them, you probably don't want a common token format; it doesn't make any sense. But if what you have is a single cloud, which is just one region, having different token formats is a pain in the backside. Yes, you can come up with exchange services and so forth, and maybe that has to be the short-to-medium-term solution. But it does seem strange that we can't actually come up with some kind of common token format. There are pros and cons to this, of course, because each platform has molded its format to be optimal for its own solution. In Keystone and OpenStack, we went through at least three or four different token formats while we tried to get it right, the Fernet token being the most recent and the one we're standardizing on. But other cloud platforms have their own thing. And what about containers? Kubernetes? How are we going to do the same thing, in terms of both the policy decision and accessing the resource, when we have a cloud that's made up of all these components?
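To make the common-token-format idea concrete, here's a toy signed-token sketch. This is not Keystone's Fernet format (real Fernet tokens are encrypted with rotated keys, not just signed); it's only an HMAC-signed payload showing how platforms that share a key could verify one another's tokens without an exchange service:

```python
import base64
import hashlib
import hmac
import json

# Illustrative shared signing key; a real deployment would rotate keys.
SECRET = b"shared-signing-key"

def issue_token(payload: dict) -> str:
    """Sign a JSON payload so any platform holding the key can verify it."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

def verify_token(token: str) -> dict:
    """Check the signature and return the payload, or raise on tampering."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(body))
```

Under this kind of scheme, a token minted after authentication on one platform could carry the user and project IDs straight to another platform's enforcement point, which is the interchange the talk is asking about.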
And I think many clouds will be made up of those components. You see the interest in containers all over the conference this time, as there was last time. So we're going to have combined clouds, certainly with containers and the different container management systems, and OpenStack, and probably a PaaS as well for good measure. This is going to be an emerging problem. We're not proposing a solution today in terms of the token formats, but we're trying to start the discussion about whether it makes sense to have a common token format that would allow easier interchange: once you've got your decision made, you can then send that token to any of the different services. So again, please engage with the Keystone community and others if you have views on that. Finally, that's all we have. We'll be happy to take questions on any of the above, to any of the four of us. Thank you.