Well, thanks for joining. My name is Lance. I'm an engineer at Huawei, and today we're going to talk about unified limits in OpenStack and what they mean for enforcing quota consistently. We have a lot to get through in 10 minutes, so I'm going to do a quick introduction to what unified limits are, level-set on some terminology, and then tell you how they're advantageous for both operators and developers. We'll go through some of the work that's already been done, and then I'll tell you how you can get involved.

So what are unified limits? If we take an informal definition of the term, a limit is essentially the theoretical maximum number of resources you can consume. That's an important distinction from another term we need to define, usage, which is the number of things you're currently using. When we started the unified limits work a couple of years ago, we wanted to draw a very clear distinction between those two terms, because they could be managed by two separate systems or entities. Ultimately, if you combine limit management with usage enforcement, you have the basis for a quota system that lets you regulate consumable resources for your users. For OpenStack, that means moving limit management into Keystone while keeping usage calculation and enforcement in each service.

So what does a unified limit look like? It essentially consists of three pieces of information: a resource, which is a key that describes the thing we're limiting; the service that's responsible for it; and the limit itself, which is essentially an integer. There are a couple of optional pieces of information, like regions, since not every service is deployed globally, and also projects. Keystone offers a couple of different types of limits.
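The shape described above can be sketched as a simple data type. This is an illustration of the concept, not Keystone's exact API schema; the field names here are assumptions chosen to match the talk's terminology:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UnifiedLimit:
    """Sketch of a unified limit; field names are illustrative."""
    resource: str                  # key naming the thing being limited, e.g. "cores"
    service: str                   # service responsible for it, e.g. "compute"
    limit: int                     # the theoretical maximum, an integer
    region: Optional[str] = None   # optional: not every service is deployed globally
    project: Optional[str] = None  # optional: set only on project-specific overrides

# A deployment-wide registered limit, as in the talk's example:
registered = UnifiedLimit(resource="cores", service="compute",
                          limit=32, region="Berlin")

# A project-specific override cranking one project down to 16 cores:
override = UnifiedLimit(resource="cores", service="compute",
                        limit=16, region="Berlin", project="project-a")
```

The two limit types that follow differ only in whether the optional project field is set.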
We have registered limits, and you can think of these as defaults that apply to every project in your deployment. The information is relatively static. Essentially, what we're saying here is that the compute service in the Berlin region is limited to 32 cores per project, so projects that don't have an explicit override assume this as their default limit. This is something operators might set up after they roll out a new service or deploy a new region: they go through and create these registered limits to let everybody know the baseline of what they can consume.

The second type of limit is a project-specific override. Here we're actually overriding a registered limit by tying it to a specific project and modifying the resource limit. In this specific case, instead of 32 cores, we're cranking that project down to only 16. We can also do the inverse and crank it up to 64.

That's a really high-level example, and it would be nice to go into more detail, but it shows the basic flow between the various actors: a user, a service, and Keystone. A user makes a request, and that request contains information about the resources they want to claim; that could be the attributes of a flavor or the size of a volume. They make that request against a service. At that point, the service knows the project the user is operating against, so it can calculate the usage under that project. It can also go to Keystone and grab the current limit information for that project, and Keystone will return either the registered limit or a project-specific override. At that point, the service has everything it needs to build an enforcement check. It has the project usage, which it calculated itself based on the project ID.
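The resolution rule described above, where a project-specific override wins and everyone else falls back to the registered default, can be sketched like this. The dictionaries are stand-ins for Keystone's limit storage, not its actual implementation:

```python
# Registered limits: deployment-wide defaults.
# (service, region, resource) -> default limit
registered_limits = {
    ("compute", "Berlin", "cores"): 32,
}

# Project-specific overrides of a registered limit.
# (project, service, region, resource) -> override limit
project_overrides = {
    ("project-a", "compute", "Berlin", "cores"): 16,  # cranked down
    ("project-b", "compute", "Berlin", "cores"): 64,  # cranked up
}

def resolve_limit(project, service, region, resource):
    """Return the effective limit for a project's resource."""
    override = project_overrides.get((project, service, region, resource))
    if override is not None:
        return override
    return registered_limits.get((service, region, resource))

# project-a was cranked down, project-b up, and project-c falls back
# to the registered default:
print(resolve_limit("project-a", "compute", "Berlin", "cores"))  # 16
print(resolve_limit("project-b", "compute", "Berlin", "cores"))  # 64
print(resolve_limit("project-c", "compute", "Berlin", "cores"))  # 32
```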
It has the claimed usage, which came from the user's request, and it has the project limit, which came from Keystone. With all of that information, it can evaluate the request and give the user an appropriate response as to whether it passes or fails.

So how is this helpful? For operators, it's nice because it gives them a single interface they can use to query limit information across all projects; they don't have to hit every single service individually to figure that out. It also supplies consistent validation: whether you're adjusting the maximum size of your volumes or the number of cores a project can have, that validation is going to be the same. And that leads into a single definition of enforcement rules. Since Keystone is responsible for all the limit information and its association to projects, it's also in a good place to propagate the enforcement model out to the consuming services, so that enforcement is consistent. That's really important for providing a consistent look and feel for how limits work across OpenStack as a whole, rather than a different user experience in one part of OpenStack versus another.

How does this help developers? Developers no longer need to build their own limits API. Instead, they're really only responsible for usage enforcement, and I put an asterisk next to usage enforcement because I don't want to downplay it: it can still be a complicated problem. But now you're not having to solve two different things. You really just have to calculate the usage of resources under a given project and then work with that data. It also saves projects from having to understand the complicated tree structure you can get back from Keystone, since we support hierarchical multi-tenancy.
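The enforcement check assembled from those three inputs can be sketched in a few lines. This illustrates the flow the talk describes, not oslo.limit's actual interface:

```python
def enforce(current_usage, claimed, limit):
    """Evaluate an enforcement check from the three inputs the service
    has gathered: its own usage calculation for the project, the claim
    from the user's request, and the limit fetched from Keystone.
    Returns True (pass) or False (fail)."""
    return current_usage + claimed <= limit

# A project using 28 of its 32 cores asks for a 4-core flavor: passes.
print(enforce(current_usage=28, claimed=4, limit=32))  # True

# The same project asking for 8 more cores would exceed the limit: fails.
print(enforce(current_usage=28, claimed=8, limit=32))  # False
```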
If you get back a tree of projects that is n levels deep and you're trying to figure out how the limits work, that's hard. We're trying to isolate all of that logic into a couple of key places so that we're not reinventing the wheel across all these services with a very complicated object.

As for the work that's already been done: in Queens, Keystone merged implementations for registered limits and project limits, so that work is in there and ready to go. The API is still marked as experimental, but we're not expecting any major changes to it, and that status will change as soon as we start having a consistent consumer. You can also start using all of this work from python-openstackclient and the OpenStack SDK. This release, we're also working on oslo.limit and smoothing out the interfaces in that library so it can be readily consumed by services.

Some ways you can get involved: if you're a developer working on an OpenStack project and you're looking to solve this limit problem or implement quotas, we'd love to get you involved and get your resources into Keystone so we can tighten up that integration. That's also going to help us smooth out the interfaces in oslo.limit, because if your project is doing something interesting with how it calculates usage for projects, chances are there's another project doing that too. If we can figure out where those places are and put that logic in oslo.limit, it's more useful for everyone. And if you're an operator with a very specific story for quota, or for how you're applying limits to your deployment, we want to hear that information so we can start developing other enforcement models that make this usable for more people in OpenStack.

And that's about all I had. I kind of blew through a bunch of that information, but we do have a couple of minutes, and I'll also be hanging around here if you have questions. Thanks for joining. I appreciate it.