So good morning. I'm going to talk about our proposal for a federated Keystone for OpenStack, a federated identity system for OpenStack. This is work done with three students of mine at PESIT, Anush, Meghna, and Pramod, and of course I am Dhinkar. So first, quickly, what do we mean by federation? We have a number of different clouds, each with its own identity provider and services, and if they are able to access resources among themselves, that is what we mean by federation: an association comprising any number of service providers and identity providers. Let me say a few words about what we have tried to do in this design. We have tried to keep the scope very minimal; rather than a big-bang kind of federation, we thought we would keep it simple, focus only on federation, get it out to users, and if people like it, we can always add more features. We have also tried to avoid complicating the design. We leverage well-known concepts like roles and so on; we are not introducing any new concepts. As you will see when we go through the design, it is similar to the way Keystone already works, so people who are familiar with Keystone can easily understand how our system works. And it is, of course, backward compatible with existing Keystone. One important point is that we keep track of the user's original identity when a request goes to a remote cloud, and we feel that is important for security: if a user in cloud A makes a request in cloud B and his identity is lost, we feel that is not a good design point. At the end of the talk, I will say a little about how we are thinking of supporting Amazon.
The reason for that is Amazon's size: recent estimates say that Amazon is about 80% of the public cloud market; it is like a 600-pound gorilla. So we think a federation should include not just OpenStack-to-OpenStack federation, but also OpenStack-to-Amazon federation. If you do only OpenStack-to-OpenStack federation, then users might say, "That's interesting, but most of my public cloud is in Amazon," and they would not really adopt it. There I am just going to present our preliminary thoughts; they are not completely fleshed out as yet, and of course I solicit any feedback that you may have. This is the primary use case we are thinking of: we have an enterprise cloud, say a banking company, or any other enterprise like HP or IBM, and they need to access resources either in a public cloud like Amazon or Rackspace, which is the common use case of cloud bursting, or in a separate cloud within the same enterprise. In IBM, for example, there could be one cloud in IBM India and another cloud in some other place, and they want to access resources transparently across the two clouds. That is the use case we think is very common, and that is what we are trying to target. Basically, the way we do this is by changing the current representation of authorization rules. The current authorization rule is a three-tuple of subject, privilege, and object: who it is, the privileges they have, and the object they are trying to access. We are making two changes to turn this into a four-tuple. The first is that we are generalizing the idea of a role. If you look at roles in Keystone today, a role is very simple, something like sysadmin or netadmin.
We generalize the role to be a pair consisting of an issuer and the actual role. So, for example, you could be a sysadmin at HP, a professor at PESIT, or a professor at MIT; the "at HP" part is the issuer. Suppose I am in a public cloud, let's say Rackspace, and I get a request and see that this person is a sysadmin at HP. Based on that, I could say: Rackspace has an agreement with HP to give HP's sysadmins certain privileges, so a sysadmin at HP is allowed to do certain things. I might also get a request from a sysadmin at IBM. Now, Rackspace and IBM may have a different agreement between themselves, so the role "sysadmin at IBM" would carry different privileges than "sysadmin at HP" and be able to access different resources. That, I think, is the main change we are making: we convert the role into not just a role, but a role-at-cloud. In a remote cloud, that is, a cloud which is not your native cloud, you are tagged with the cloud you came from, and that determines what privileges you have. This is similar to roaming on mobile phones. If you have a GSM phone with a SIM, you can use the phone anywhere in the world, and your phone number remains the same; it is just qualified by its home network. For example, my phone number is from Airtel in India, and when I come to the U.S. I can still use the phone; my number remains the same, except that it is a phone number from India, and I can use the resources of the different phone networks that I roam onto. This slide shows the same idea in more detail. The other change we have made is to extend the access rights to include an issuer field, issuer-B. I have already talked about the role, the privilege, and the object; issuer-B is there for auditing purposes.
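The generalized role and four-tuple access right described above can be sketched in a few lines. This is an illustrative sketch only: the class names, the "HP"/"IBM" agreements, and the issuer_b values are hypothetical, not the actual blueprint code.

```python
from typing import NamedTuple

# Sketch: a role becomes an (issuer, role) pair, and an access right
# becomes a four-tuple (role, privilege, object, issuer_b).
# All names below are illustrative assumptions.

class Role(NamedTuple):
    issuer: str   # the cloud that vouches for the role, e.g. "HP"
    name: str     # the role itself, e.g. "sysadmin"

class AccessRight(NamedTuple):
    role: Role        # who may act, qualified by their home cloud
    privilege: str    # what they may do
    obj: str          # the object they act on
    issuer_b: str     # the person in cloud B who granted this (for auditing)

# Cloud B's policy: the same role name carries different rights
# depending on which cloud issued it, reflecting separate agreements.
policy = [
    AccessRight(Role("HP", "sysadmin"), "reboot", "vm", issuer_b="admin@B"),
    AccessRight(Role("IBM", "sysadmin"), "read", "vm", issuer_b="admin@B"),
]

def is_allowed(role: Role, privilege: str, obj: str) -> bool:
    return any(r.role == role and r.privilege == privilege and r.obj == obj
               for r in policy)
```

With this policy, a sysadmin at HP can reboot a VM in cloud B, while a sysadmin at IBM with the same role name cannot, which is exactly the roaming behavior described above.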
This covers the case where somebody in cloud A is trying to exercise a role on a resource in cloud B: the role is, say, admin in cloud A, and issuer-B simply records which person in cloud B granted that access privilege. This slide gives a little more detail on how we do this. If you look at the top, you see the kind of policies and rules that might be stored in Keystone today. The first one reads "ABC": "role:netadmin", and that simply says that somebody who holds the netadmin role can do certain things. Just below the line, under our implementation, you see a similar rule, "ABC": "role:issuerA:netadmin", which means that a person who is netadmin in cloud A has certain privileges in my cloud. The flow here is very similar to the Keystone token flow. There are gateways in each cloud which facilitate this interoperation. When a user in cloud A wants to access a remote resource in B, the client sends a request to the gateway in A asking for a gateway access token, saying, "I want to access a resource in cloud B," and the gateway sends back a token. Once the client has the token, it sends the token to the gateway at B (sorry, there are numerous typos on the slide; that should read "gateway at B"), requesting a tenant access token, that is, asking which tenants it can access. The gateway at B then contacts the gateway at A to validate the identity of the client, and sends back a list of all the tenants. Finally there is the resource acquisition step, where the client sends the tenant token to the tenant and gets back the requested resource.
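The gateway token flow above can be sketched as a small simulation. This is a hypothetical sketch of the structure of the exchange: the class and method names are assumptions, and the real blueprint would run these gateways as REST services rather than in-process objects.

```python
import secrets

class GatewayA:
    """Gateway in the user's home cloud A (illustrative)."""
    def __init__(self):
        self._issued = {}

    def get_gateway_token(self, user, target_cloud):
        # Step 1: the user asks his own gateway for a token to reach cloud B.
        token = secrets.token_hex(8)
        self._issued[token] = user
        return token

    def validate(self, token):
        # Step 3: gateway B calls back here to validate the client's identity.
        return self._issued.get(token)

class GatewayB:
    """Gateway in the remote cloud B (illustrative)."""
    def __init__(self, gateway_a, tenants_for):
        self.gateway_a = gateway_a
        self.tenants_for = tenants_for  # per-user tenant agreements

    def get_tenant_token(self, gateway_token):
        # Step 2: the client presents A's token; B validates it with A.
        user = self.gateway_a.validate(gateway_token)
        if user is None:
            raise PermissionError("unknown gateway token")
        # The user's original identity is preserved, not replaced by a
        # transient local ID.
        return {"user": user, "tenants": self.tenants_for.get(user, [])}
```

In the final resource-acquisition step, the client would take the returned tenant list and send the tenant token to the tenant itself to obtain the resource.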
This is very similar to the token flow that occurs when a user signs on to Keystone in OpenStack, where a number of exchanges with Keystone eventually result in the user getting a token representing all the resources they can access. Here we have a similar flow, with the two gateways of the two clouds in the middle. First the user asks his own gateway which remote clouds he can access. Once he has that list, he talks to a remote cloud's gateway, asks which tenants and resources he can access, and gets back a token representing that. So the flow is very similar to the existing Keystone flow. Now I will say a few words about how we can extend this concept to support Amazon, and I will do that pictorially. Basically we have the same kind of flow, but since Amazon does not use tokens but uses access keys, we need a proxy processor to return the access keys to the user. So again, the user contacts Keystone and gets access to the gateway; the gateway talks to the proxy processor, which talks to Amazon and returns access keys to the user. Once the user has the access keys, he can continue to access resources in the Amazon cloud. So, in summary, we are presenting a federation blueprint whose main selling point is that it is conceptually very simple: it works much the way Keystone does right now, and it is backward compatible with existing Keystone. One important feature is that we keep the user's original identity in the remote cloud, in the same way that your true mobile number is kept track of when you roam on another mobile network. And of course we think we can extend this to support Amazon, which we believe is a major feature we need to add if we are going to support federation.
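The proxy-processor step for Amazon can be sketched as follows. This is a speculative sketch: Amazon's key issuance is simulated here so the shape of the exchange is visible, and the class and field names are assumptions. In a real deployment, the proxy, running as an Amazon user, would obtain credentials from AWS itself (for example via STS) rather than minting them locally.

```python
import secrets

class ProxyProcessor:
    """Illustrative proxy that exchanges a validated gateway token for
    Amazon-style access keys, since Amazon uses keys rather than tokens."""

    def __init__(self, valid_gateway_tokens):
        self.valid = set(valid_gateway_tokens)

    def exchange_for_keys(self, gateway_token):
        if gateway_token not in self.valid:
            raise PermissionError("gateway token not recognized")
        # Simulated stand-in for the credentials AWS would return; the
        # "AKIA" prefix mimics the usual access-key-ID format.
        return {
            "access_key_id": "AKIA" + secrets.token_hex(6).upper(),
            "secret_access_key": secrets.token_hex(16),
        }
```

Once the user holds the returned keys, all further requests go directly to the Amazon cloud using those keys, with no further involvement from the gateway.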
So that is what I had in terms of the talk; I would be happy to answer any questions. [On where the proxy runs:] The proxy processor will run in Amazon as one of the users in Amazon. The mechanism can be standard for OpenStack-to-OpenStack, because that is something we can control, but when it goes to different cloud providers, they will each have their own authentication mechanism, and we need to somehow interface with that. [On whether they have considered linking with the existing federation blueprint:] Yes, we are talking to the people who had the original blueprint. We also have a blueprint on this, and there are some differences between the models. Our model is very simple; it is focused very much on a federated Keystone, which I think is a good thing, because I think the correct way to develop this is to first do federation in a simple model, get feedback, and iterate. Also, our access rights are based on the role, which is a well-understood concept in security today. In the other blueprint you have to pass something called attributes, which I think is a concept not familiar to most systems administrators. If I went to a systems administrator in Bangalore and asked him what a role is, he would be able to tell me; but if I say that in order to access a remote cloud you have to pass attributes, he would say, "What is an attribute? I don't even know what that means." The existing blueprint also creates temporary, transient user IDs when you go into the remote cloud. We don't do that: we preserve the actual user ID of the person, so you know who has done each transaction. And the present blueprint supports something called recursion, whereby if you have clouds A, B, and C, and B and C have an agreement, then A can access resources in C.
I don't know whether that is really appropriate for an enterprise kind of solution, because if I am an enterprise like HP or IBM and I have data in Amazon, and Amazon signs an agreement with some other provider and then suddenly says, "We'll take your data and put it on that other provider," as an enterprise I might not be happy. Security is really a big concern among enterprises in today's cloud space; when people look at a cloud provider, they want to personally examine its security, because they are putting their data in and so on, and they want to be sure it is secure. So we can support that recursive model, but we are not planning to support it now; it can go on a roadmap in future, in keeping with our philosophy of incremental development, but it is not in our model now and we don't know whether it is appropriate. [On implementation status:] We have not fully completed the implementation of the model. We have done what we need to the policy engine, and we are working on the gateways and the protocol. Most of the implementation is straightforward; the only real challenge was in modifying the policy engine, and even there we kept most of it the same. The main change was that, because we are changing the definition of roles, we had to change the way the policy engine parses and handles roles; apart from that, the rest of the policy engine is untouched. [On whether a proxy is needed between two OpenStack clouds:] If you have an OpenStack cloud talking to an OpenStack cloud, you don't need a proxy processor, because each cloud has a gateway, and the gateway plays the role of the proxy processor. We also have another piece of this under development: the security infrastructure for one OpenStack cloud to access another OpenStack cloud. We are writing the virtualization driver and so on, which will allow you to actually make the request from one OpenStack cloud to another. Any other questions? Thank you very much.
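The role-parsing change mentioned above, with its backward compatibility, can be sketched in a few lines. The issuer-qualified syntax ("role:issuerA:netadmin", as shown on the slide) is from the talk; the function name and the "local" marker for unqualified roles are assumptions for illustration.

```python
# Sketch of the policy-engine change: rules of the existing form
# "role:netadmin" keep their local meaning (backward compatible), while
# the new form "role:<issuer>:<name>" yields an (issuer, role) pair.

def parse_role(rule):
    """Parse a role rule into an (issuer, role) pair."""
    if not rule.startswith("role:"):
        raise ValueError("not a role rule: %r" % rule)
    parts = rule[len("role:"):].split(":")
    if len(parts) == 1:
        # Legacy form: no issuer, so the role is local to this cloud.
        return ("local", parts[0])
    if len(parts) == 2:
        return (parts[0], parts[1])
    raise ValueError("malformed role rule: %r" % rule)
```

Because the unqualified form still parses to a plain local role, existing Keystone policy files would continue to work unchanged, which matches the backward-compatibility claim made throughout the talk.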