Let me just introduce the people here. First of all, I want to introduce Satya Routre, who unfortunately couldn't make it, but who contributed heavily to the idea as well as the implementation. We have Rahul here, my colleague; I'm Anant; and Minakshi, our manager, also couldn't make it.

We started with an idea for running NFV on containers and found a need for standalone Keystone support in Docker. That's how this idea came about, and why we submitted this talk. As background, we've worked extensively on the compute side and the LAN side, and at Cisco we've been working on a Cloud VPN product for the last couple of years.

So let's get started on what we want to achieve here. If you think of Docker, how do you authenticate and authorize at this point? Currently you have user ID and password-based authentication, which means that once you get access to the Docker daemon, you can run any command, you can see all the containers that have been provisioned, and there's no limit on the number of containers one can provision. These are areas we felt needed improvement. We searched extensively and couldn't find anything like this: though Docker integrations with OpenStack as a whole exist, we didn't find a standalone Keystone integration with Docker.

So what are the advantages of adding multi-tenancy to Docker? If we add a standalone Keystone to provide multi-tenancy, first of all we'd be able to create partitions, and each tenant or project would be able to manage its own containers. The idea is not to let others see the containers that you have spawned, and also to provide role-based access control. And as an administrator, you would be able to set quota limits.
Based on your resource planning, or in a pay-as-you-go model, you would be able to control the amount of resources utilized by a particular tenant. Also, since Keystone supports multiple backends, be it LDAP or Active Directory, we can bring those features in here as well. Even in an existing setup where you already have users in Active Directory, you could extend that into a Docker setup with the help of Keystone. And of course, Keystone provides single sign-on capability, and with v3 we have hierarchical multi-tenancy support, so we can bring that in as well with nested projects.

Last but not least, it's not only that users can get access to containers at a tenant level. What we are also planning to achieve here is an identity service that the applications spawned on the containers can themselves use: they can get tokens and authorize with other instances of the application. In a multi-tier application running on multiple containers, the tiers could authorize with each other, just as the components of OpenStack get tokens from Keystone to interact with each other. If you want to build a similar kind of application that runs on containers, you could use this feature.

Okay, since we're dealing with several technologies here, for the benefit of people who are just beginning with this, I'd like to go over the topics we're touching on. I'll give a brief introduction to how authentication happens in Keystone. Keystone has a number of services. One is identity, which is essentially credential validation: validating your user ID and password. Then we have the resource service, which deals with your projects, domains, regions, and so on.
Assignment is responsible for the creation of roles, such as admin and member, and for assigning each role to a user on a particular project. Token, of course, is token management: the life cycle of a token, its expiry, whether it is encrypted, and the format of the token; we'll see more about that shortly. And the service catalog is like a registry of all the services under consideration, along with the endpoint URL for each service. When I talked about running services on multiple containers, how do we adapt that to this model? We create entries for those custom services in the service table, so that when you want to authenticate, the entry is found here; that is how a request is related to a particular tenant and token. And again, it's policy-based: in OpenStack we've seen policy.json, where we can edit the role-based access control, and a similar thing can be applied here as well.

Okay, so when we talk about tokens, these are the main types currently available for authentication, and there's a bit of history here, so let me brief those who aren't aware of it. Initially, support started with UUID tokens, which are 32-character-long unique identifier strings persisted in the database. The token is just an ID; Keystone is responsible for linking that ID with the corresponding scope of the tenant: which services the tenant can access, which VMs or containers the tenant can provision, which projects it is related to, and what the role is, whether member or admin. All that metadata is stored in the database, so all you get is the ID, and all the metadata is fetched from the database based on that ID.
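To make the UUID model concrete, here is a minimal sketch (not Keystone's actual code) of the idea: the token is an opaque 32-character ID, and every piece of scope metadata lives server-side, keyed by that ID.

```python
import uuid

def issue_uuid_token(user, project, roles, token_store):
    token_id = uuid.uuid4().hex  # 32-character unique identifier string
    token_store[token_id] = {    # Keystone persists the scope in its database
        "user": user,
        "project": project,
        "roles": roles,
    }
    return token_id

def validate(token_id, token_store):
    # Every validation is a lookup against the store; this repeated
    # round-trip is the drawback of the UUID scheme.
    return token_store.get(token_id)

store = {}
tok = issue_uuid_token("alice", "tenant-a", ["member"], store)
assert len(tok) == 32
assert validate(tok, store)["project"] == "tenant-a"
assert validate("unknown-token", store) is None
```

The names here (`issue_uuid_token`, the shape of the stored metadata) are illustrative only; the point is that the token itself carries no information.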
The drawback with that, of course, is that when multiple services are communicating and authenticating using this token, every time you go back to Keystone, which fetches the token from disk, does a read, and compares to see whether the token is valid. The complete metadata is fetched from the database every time, which means Keystone gets heavily loaded, and that creates a bottleneck in a scalable environment.

So a model called PKI came up, public key infrastructure, which is certificate-based authentication. It was also a persisted token, but the token itself carried a lot of information that didn't need to be fetched from the database again and again: the projects you're associated with, the endpoints, the service table entries we talked about, the role. All of that is available in the token. Let me quickly show you. This is a sample payload of a PKI token; it's implemented in JSON format and easy to parse. It's packed with a lot of information, as you can see: the service catalog I talked about, the creation time, the expiry time. This lets you process and validate the token locally at the service level without going to Keystone for everything.

But because it's packed with so much information, the size of the token gets really large, and in some cases it was even seen to exceed the HTTP header size limit. We heard that problems were faced in Swift and Horizon when such large tokens were used. So PKIZ came up, which is just a compressed version of the same thing, but that was still considerably large. The latest one is Fernet tokens. There's a debate on whether to pronounce it "Fernay" or "Fernet", but I guess "Fernet" is okay.
Yeah, so these are non-persistent tokens based on symmetric key encryption, and they are considerably faster: supposed to be 85% faster than UUID and 89% faster than PKI. This was introduced in Juno and is relatively new, and since there are still a lot of deployments using PKI, I guess this is something that will take shape in the future. For our implementation of Keystone support for Docker, we went with PKI, but we're working on Fernet as well and plan to provide support for it in the future.

Can you guys see this clearly? Okay, I just want to go over the authentication flow; once you understand this, you'll be able to figure out exactly where it fits in when providing authentication for Docker. The user submits the user ID and password to Keystone; Keystone validates them against the database and returns a PKI token with all the metadata we talked about. This is called a CMS token, for Cryptographic Message Syntax, the format in which it is packaged. It is sent back to the user, who includes that token in all further requests sent to services for authentication. The token is extracted from the request and processed: what is the certificate authority, has the token been signed, has it been revoked, what is the expiry, is it still valid? Once the token is found to be valid, the request is processed and the corresponding response is sent back; if the token is found to be invalid, the request is rejected. That is how standard PKI-based authentication happens. Now we'll look at how we are adapting this to suit Docker. So, we've gone through this already.
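The local checks in the flow above (revocation, expiry) can be sketched as follows. This is a conceptual sketch, not Keystone middleware code: `revoked_ids` stands in for the revocation list that Keystone publishes, and the signature verification against the CA certificate is omitted.

```python
import time

def locally_valid(token, revoked_ids, now=None):
    # Checks a service can perform on a decoded PKI token without a
    # round-trip to Keystone. Signature/CA verification is elided.
    now = time.time() if now is None else now
    if token["id"] in revoked_ids:     # has the token been revoked?
        return False
    if now >= token["expires_at"]:     # has it expired?
        return False
    return True

token = {"id": "abc123", "expires_at": time.time() + 3600}
assert locally_valid(token, revoked_ids=set())
assert not locally_valid(token, revoked_ids={"abc123"})
assert not locally_valid(token, set(), now=token["expires_at"] + 1)
```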
I'll give a short introduction to Docker. It enables you to package an application with all its dependencies into a single unit, and it separates your applications from the infrastructure, much as a VM separates the operating system from the hardware. The advantage of Docker is that it runs the same wherever you deploy it, and it's easy to build, ship, and run.

Some of the key components to look at in Docker: the Docker daemon is a background service responsible for managing all the containers that are spun up; you connect to the Docker daemon whether you want to write against the API or use the CLI. The API connects to the Docker daemon and exposes multiple endpoints, and the CLI is the set of command-line Docker commands used to manage your Docker environment. Docker Engine is really a name for the combination of the other components we discussed. And Docker Machine is used to bring up a Docker Swarm.

Docker Swarm is like a cluster of containers: much as with vCenter or OpenStack, you take the host part out of the picture and just consider the available resources. Docker Swarm helps you provision containers irrespective of which host they land on; it manages a cluster of hosts and provisions onto them. The key components in Docker Swarm: there's a cluster manager; there are Swarm nodes, each essentially a physical or virtual node; there's a scheduler, with basic filters and its own scheduling logic, which provisions containers onto the Swarm nodes; and there's the Swarm store, a JSON-based implementation which stores the state of the containers and their associated parameters. This is essentially what we would use the Swarm store for: associating a tenant ID with a particular container.
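To make the tenant association concrete, here is a sketch of what a tenant-aware entry in the Swarm store might look like. The `tenant_id` field is our proposed addition; the other field names are illustrative, not Swarm's actual schema.

```python
# Illustrative store entries; "tenant_id" is the proposed addition that
# links a container to its Keystone tenant. Field names are hypothetical.
entry_a = {
    "container_id": "f3a9c2d1e0b7",
    "node": "swarm-node-1",
    "image": "nginx:latest",
    "state": "running",
    "tenant_id": "tenant-a",
}
entry_b = dict(entry_a, container_id="9b0c44d21aa0", tenant_id="tenant-b")

def containers_for_tenant(entries, tenant_id):
    # Filtering on the stored tenant ID is what hides other tenants'
    # containers from a given tenant's listing.
    return [e for e in entries if e.get("tenant_id") == tenant_id]

visible = containers_for_tenant([entry_a, entry_b], "tenant-a")
assert [e["container_id"] for e in visible] == ["f3a9c2d1e0b7"]
```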
Now, we saw how PKI authentication happens. In the Keystone database we save the tenant information, the project information, and all the scope, and in the Docker Swarm store we store the corresponding tenant information, so you can filter and show only those containers that were provisioned by a particular tenant. Discovery is a service which helps you identify and discover nodes on which to provision, and Swarm has its own APIs and CLI.

This is the landscape we are suggesting: a user authenticates and provisions using Docker Swarm; Docker Swarm in turn connects to Keystone to get the token, validates the token as we saw, compares it against the tenant list, and, once it is found to be valid, provisions. Here you see an environment where multiple tenants have provisioned containers that exist on the same nodes, managed by a Swarm cluster.

This flow diagram is similar to the one we saw for obtaining a PKI token; we'll see how it is adapted here. The user, as we discussed, sends the user ID, password, and tenant information to Keystone to obtain the token. You configure that token in config.json and then submit your command to Docker Swarm, which in turn calls Keystone again to validate, gets the list of tenants, and checks whether the request is valid; the other metadata is available in the token anyway, with respect to expiry time, creation time, and so on. Once the request is validated, Docker Swarm executes the command and provisions onto a particular Docker host chosen based on the scheduler's response, and the response is sent back to the user. This is a small change we are suggesting be made in the Docker Swarm code, but it's more conceptual: it's not hard and fast that you have to change it only in Docker Swarm. If you look at other ways of orchestrating Docker containers, similar logic can be implemented there as well.
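Conceptually, the gate we're proposing in the orchestration layer reduces to something like the sketch below. `keystone_validate` is a hypothetical stand-in for the real Keystone token-validation call, which we have not published; the control flow is the point.

```python
import time

def keystone_validate(token):
    # Hypothetical stand-in: a real implementation would call Keystone's
    # token-validation API and return the decoded scope, or None.
    if token == "valid-token":
        return {"tenant": "tenant-a", "expires_at": time.time() + 3600}
    return None

def authorize_request(token, requested_tenant):
    scope = keystone_validate(token)
    if scope is None:
        return False                   # unknown or invalid token
    if time.time() >= scope["expires_at"]:
        return False                   # token expired
    # The requested tenant must match the token's scope before the
    # orchestrator executes the Docker command.
    return scope["tenant"] == requested_tenant

assert authorize_request("valid-token", "tenant-a")
assert not authorize_request("valid-token", "tenant-b")
assert not authorize_request("bogus-token", "tenant-a")
```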
Since Keystone runs standalone and has its own database, we just need to do the association. For our prototype we are currently using Docker Swarm, but I think in a previous talk someone also mentioned Kubernetes having a plugin for Docker, so that should be another option as well.

These are some of the things we've been working on. The Keystone support implementation is in progress, and we'd be happy to share it with whoever is interested; if you want to contribute, we'd be happy to work with you. And these are some of the things we've planned for the future. As I said, Fernet tokens are one of the exciting prospects we want to implement. Once you unwrap the token, what information you fetch from it and how you authenticate would vary for a Fernet token compared to PKI, so that is something we want to explore and provide support for.

One more thing is isolated tenant networking capabilities. Currently your containers have open networking, so they can link with any other containers on the same network. Similar to how there are private tenant networks in OpenStack, we want to provide tenant networking capabilities for containers as well: mainly working on the visibility of the networks, whether you want them private or not. Just as certain Docker instances are visible to you because they are under your control, we want to provide a set of public networks and private networks. That's a future use case we would be working on. And the last one is a framework we also want to propose, for any application that is running on a container.
For example, these days you can talk about UI or UX applications or any app, Uber or Lyft, anything currently running Docker in production. If they have to use this Keystone authentication for their own services themselves, not just for connecting between the front end and the back end, we want to provide a framework where an application can easily connect to Keystone and do the authorization without much code change in the application itself. This is also something we want to work on; these are the things we have planned for the future. So connect with us, and we'd be happy to work with anyone who wants to join us in this effort. And these are some of the references I used. That's about it; any questions?

Sorry? Sure, we'd be sharing this presentation as well. If you have any questions, or even feedback, anything would be of great help.

Are you going to speak about quota? I thought that was in the title. Did you cover that?

Sorry, I couldn't hear you.

The quota part. The Keystone quota for containers.

Yeah, that pretty much comes together with this. We would not be making changes there as such; it's more of a logic change that comes directly from the Keystone side: once you hit the limit, you can't provision, so it's a very simple logic check that would not let you provision beyond it. The administrator can set the quota anyway, and that information is part of the Keystone API; once it's stored in the database, it's just a matter of a logic check, right? So there's no special implementation needed for that per se, since Keystone already supports it. It's a small adaptation on the Docker Swarm side.

Have you guys changed the Keystone policy to support the APIs for Swarm?

Yes, we are changing the policy.json to support the Docker APIs, and also the service catalog.

Yeah, that makes sense.
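As discussed in the quota question above, the check really is a small logic gate before provisioning. A sketch, with the admin-set limits assumed to come from Keystone's database and the per-tenant usage from the Swarm store:

```python
def can_provision(tenant_id, quotas, usage):
    # quotas: admin-set container limits per tenant (assumed to come
    # from Keystone); usage: current per-tenant container counts
    # (assumed to come from the Swarm store).
    limit = quotas.get(tenant_id, 0)   # no quota configured means no provisioning
    used = usage.get(tenant_id, 0)
    return used < limit

quotas = {"tenant-a": 5}
assert can_provision("tenant-a", quotas, {"tenant-a": 4})
assert not can_provision("tenant-a", quotas, {"tenant-a": 5})
assert not can_provision("tenant-x", quotas, {})
```

The default-deny choice for an unconfigured tenant is our assumption for the sketch, not a decision the talk committed to.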
The other thing is, there is a middleware, right, which runs on top of the service. Are you guys also using the Keystone middleware on top of Swarm?

No, we are not using the Keystone middleware for that.

Okay, maybe I'll check with you. Sure, yeah.

You mentioned some future work with a framework for container authentication. For container-to-container authentication. Have you thought much about the enforcement mechanisms you're going to utilize?

Sorry?

The enforcement, to prevent container-to-container communication if they're not authenticated?

Absolutely, yes, that should also be an option we look at. It's not just about providing access; the restriction would come in there too.

It's part of the solution, right? Authentication is one piece, but you've got to enforce it, right?

Correct, yes. Keystone takes care of your authentication as well as authorization: who can access what, which components are meant to be accessed by you. So when talking about networks, that would definitely be taken care of. The idea is to have a framework where you can actually define what would be allowed and what would not. In a simple implementation, you could just have a list of roles defined. Forget about OpenStack: if you're accessing any application, you could have multiple levels of users that want access. You could have an enum defining each role and a mapping of the resources that role can access. This should not be in the core; it should be easily configurable, and the idea is to have a framework where that configuration can be plugged in.

Just so I get the question right: were you talking about the networks part we mentioned, or...?

I was more concerned with the underlying enforcement. Policy is one thing, right? Somehow you have to enforce those policies by some means, right? Like iptables, for example.
Yes, so when we talk about networks, those would be very specific changes where you're restricting the traffic from one side to the other. Sure, they would come under the isolated networking plans, actually.

I have a question about the network part. It seems like you are going to use Docker's native networking. Do you have any plan to integrate with Kuryr? It seems like Kuryr wants to do very similar things to what you are doing.

So yeah, we still have to take a look at and design that part. We can definitely look at the network piece itself; for now, what we were looking at was integration with the Neutron side. But we can definitely connect offline and see what it brings; that part we haven't even designed yet.

Yeah, I mean, Kuryr has already integrated Docker networking with Neutron networking. They have already done this.

Okay, then we'll connect after those talks and discuss more, because that is something planned for the future, but yes, we definitely want to discuss it. Okay, thank you. Thanks.

Okay, so, anything else? Okay, yeah. All right, thank you so much everyone. Thank you for your time. Thank you.