Good afternoon, everyone. My name is Stephen Gordon. I'm a product manager at Red Hat, working specifically on OpenStack Compute, and also a contributor to the documentation project upstream. Today I'm here to talk about resource segregation in OpenStack: specifically, infrastructure and workload segregation, and some of the facilities we have available to us in an OpenStack cloud to do that. The reason I'm talking about it is that it's an area of much confusion, I think, among users and operators, because there are a lot of different options, and there's also a lot of terminology overload with similar concepts in other elastic cloud products or solutions, which causes some of that confusion.

So to start with, why do we want to segregate? There are really a couple of main reasons that I'm going to focus on today in terms of infrastructure. You might want to expose logical groupings of hosts based on some physical characteristic. By that I typically mean a geographic or physical location. So that might be a region of the world, it might be one of your data centers, it might be a rack within that data center, or a group of hosts that share some point of failure, like power or networking. One of the things about OpenStack is that this is quite an arbitrary concept, because the operator or deployer has the flexibility to define that point of failure, that failure domain, as they need to. We also have some facilities for exposing logical groupings of hosts based on capabilities they expose. Those are things that you might expose as an additional paid tier: some special capability on those hosts that you want to expose to your users implicitly. The last one I'll list here is massive horizontal scalability. We also have some techniques for scaling past the limits of some of the non-OpenStack components that underlie our clouds. By that, I mean that when we talk about the components of, say, Nova, the compute service is horizontally scalable: you can add more API servers, you can add more workers to the conductor, and so on. But some of the technologies we use under the hood, such as the database and the message queue, use more traditional clustering mechanisms to provide high availability, and at certain levels of usage that becomes a problem for our OpenStack cloud. So we have some techniques to alleviate that stress.

In terms of workload segregation, I also want to talk briefly about some features that were added in the Icehouse release in particular that allow us to ensure an even spread of a single workload, or to ensure close placement of related workloads. And I'm also going to contrast that with some similar concepts in Amazon EC2.

So in terms of segregation in the data center, typically we have some kind of logical constructs there as well for segregating resources. There's usually some kind of top-level container, which I'm going to refer to as a logical data center here; the term does vary based on the solution. And they contain some number of logical clusters. Typically the size of a logical cluster in a traditional data center virtualization solution ranges from around 30 to hundreds of hosts. So that's relatively small scale compared to the elastic cloud, and also fairly tightly coupled to the physical networking and storage layout, particularly storage, quite frequently. In traditional data center virtualization, for workload segregation we also have concepts of affinity and anti-affinity, either for hosts or for CPU cores.
That's again very manually configured, and it exposes very in-depth knowledge of the infrastructure to the user compared to what we're used to in the cloud.

So talking about the elastic cloud: I said I'd like to contrast a little with EC2. EC2 has some concepts for segregating compute resources, both in terms of exposing their failure domains to the user and in terms of allowing the user to do some affinity, particularly for workloads. For infrastructure segregation, they have regions, which are geographically dispersed and completely separate from one another. Where it gets confusing is that in OpenStack we also have a concept of regions, and similarly with availability zones, which exist in EC2: again we overload the terminology. With regions it's a little clearer, and we'll get to that in a minute, that they're similar. But when we talk about availability zones, there are some fairly significant differences. In terms of workload segregation, EC2 has a concept called a placement group. What that means is you can put a group of instances into the placement group, and Amazon will ensure that they're placed within the same availability zone. So the unit it works with is the availability zone, rather than a host or even a CPU core as we talked about with traditional data center virtualization. Just talking about availability zones in EC2, one other thing to highlight is the example names I have there: a region might be us-east-1, for example, and an availability zone might be us-east-1a. One interesting thing to note is that my us-east-1a availability zone in EC2 may be different from yours. So they do some load balancing behind the scenes there as well.
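Just to make the EC2 side concrete, here is a minimal sketch using the AWS command line tools; the group name, AMI, and instance type are placeholders, and this assumes the cluster strategy, which keeps the group's instances together:

```
# Create a placement group; EC2 then places instances launched into it
# together, within a single availability zone.
aws ec2 create-placement-group --group-name web-tier --strategy cluster

# Launch instances into the group (AMI and type are illustrative).
aws ec2 run-instances --image-id ami-12345678 --instance-type c3.large \
    --count 3 --placement GroupName=web-tier
```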
So I mentioned that in OpenStack we overload the terms, and there are a lot of differences in terms of the flexibility on offer. When we talk about the elastic cloud, one of the key things we often say is that we're abstracting all of this infrastructure-level information away from the user. So why, in this case, are we actually trying to expose some of that information, in terms of what our failure domains are and how large they are? In this particular case, the answer is that users and applications do demand some level of knowledge of these things. When you're building an application on the cloud, you can plan for failure, and obviously that's a key tenet of building an application for the cloud. So you want to make sure that you have an even spread of your workloads across different fault domains in the cloud, so that if any one of those areas of the cloud disappears overnight, your application is still running somewhere. There is also, as I talked about earlier, the concept of exposing some premium level of features: basically having a chargeback mechanism for a set of hosts that have more advanced hardware that users are demanding. So the main difference, on face value, between segregation in OpenStack and in EC2 is that you as a deployer and operator have complete flexibility in how you define this, and in how much or how little you expose. For the concepts I'm covering today, I'm going to talk in terms of infrastructure segregation and workload segregation. Under infrastructure segregation, I'm going to talk about regions, cells, host aggregates, and availability zones; under workload segregation, I'm going to talk about server groups and how they work. So first of all, regions and cells.

Each region in an OpenStack cloud, if you're using this functionality, is basically a complete deployment of each OpenStack component in each region. They share at least a Horizon installation, and the reason I have Keystone crossed out is that until recently the regions also always shared a Keystone deployment. There's now an AVAILABLE_REGIONS option in Horizon, which allows you to set up Horizon so it can authenticate with multiple Keystone instances in different regions of the cloud. That means you get a little drop-down on the login screen that says which region I'm authenticating with, and then the rest of your session is tied to that region as well. If you do choose to share both Keystone and Horizon, then instead you have the option within Horizon, once you've logged in, to alter which of the regions you're interacting with each time you run a command or launch an instance and so on. In a default deployment we just have the region RegionOne, but we can implicitly create regions by adding new Keystone endpoints with an argument that gives a different region name. That action is what tells, for example, Horizon, when it looks up the Keystone endpoint catalog, that there are multiple regions here and it needs to provide that drop-down. In terms of how the user interacts with this, I like to differentiate between these mechanisms in terms of whether they're explicitly user targetable or implicitly user targetable, and also whether that targeting is mandatory. With regions, every time I run a command against a cloud that has regions defined, I need to specify which region I'm interacting with. The concept of regions exists above the Nova scheduler; there is no concept of Nova choosing which region it's going to run things in for you. The user has to specify. On the command line, that's via a command line option; in Horizon, you select via the drop-down at the top of the screen. In terms of what that looks like in a deployment: in this first example, we have both a shared Keystone and Horizon, and then we have separate Glance, Nova, Cinder, Neutron, and Swift installations on each side of the cloud. I'm missing a slide there, but the next one showed the same layout without the shared Keystone.
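As a rough sketch of how that might look with the command line clients of the era; the service ID, endpoint URL, and region name here are placeholders:

```
# Registering an endpoint in a second region; it's the --region argument
# that implicitly creates the region in the Keystone catalog.
keystone endpoint-create --region RegionTwo \
    --service-id SERVICE_ID \
    --publicurl 'http://compute.regiontwo.example.com:8774/v2/%(tenant_id)s'

# Once more than one region exists, targeting is mandatory; on the
# command line that's an option (an environment variable also works).
nova --os-region-name RegionTwo list
```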
So then, talking about cells, which came into OpenStack in the Grizzly release and have gradually improved from there. Here I have the basic compute architecture. This has been horizontally scaled, in that there's a load balancer in front of the APIs, multiple API instances, multiple conductors, multiple schedulers, and multiple compute nodes. We also have our message queue and our database. At some point in an OpenStack cloud's life, particularly once we start approaching tens of thousands of nodes, we start to see stress on the message queue and the database in particular, and cells originally came about as a way of alleviating some of that stress. That's not the only reason you might use cells, though. One of the other benefits they provide over regions is a single compute endpoint. What that means is you take away that aspect where the user has to specify exactly where this is going to run. They just initiate their API request, it goes to the API cell, and that cell is responsible for determining where in the cloud the request will actually run. The user doesn't need to specify it, and actually can't, as it turns out. There's also an additional layer of scheduling implicit in that. There's a new service, the nova-cells service, that you introduce into the environment, and it runs in each of the cells. In this case, just looking at the diagram, we have the API cell at the top, a message queue in between, and then multiple compute cells. If I zoom in on the API cell, it looks very similar to the top half of our original architecture diagram: we have a load balancer, we have multiple instances of the API server, we have a message queue again, but now we're talking to nova-cells at the bottom, which has its own scheduler within it. That scheduler acts in the same way as the default scheduler; it's a filter scheduler as well. It's then responsible for talking through another message queue and placing requests on one of the compute cells. As we go down to the compute cell, we again see that there's a nova-cells instance within it, but the rest of the architecture is actually very similar to our default deployment. Where the stress is relieved on the database and the message queue should be obvious: each cell has its own message queue and database. We split up the problem space, we lower the stress on those components, and that allows us to scale to a wider deployment.

In terms of setting this up, there's actually some good documentation on docs.openstack.org, which I'll link to at the end. We need to set up a separate database and message broker for each cell, obviously. We need to initialize the cell database, which is an additional set of tables and rows that need to be inserted using nova-manage. Then, optionally, we can do some scheduler configuration to influence how the cell scheduler places instances across the cells. And finally, optionally, you can create a JSON file that records where all the cell endpoints are. The reason that might be beneficial is that the cells information doesn't actually change that much, so instead of querying the database all the time, it can just be retrieved from the file on disk. API, or parent, cell configuration involves changing the compute API class, enabling cells, naming the cell with a name that matches what was put into the database, and enabling and starting the nova-cells service. Compute, or child, cell configuration is almost exactly the same, except we also disable the quota driver. The reason we do that is that the quota driver operates only in the API cell: all the quota handling is still handled in that top-level cell instead of the compute cells below it.
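As a sketch of what that configuration might look like, loosely following the Icehouse-era configuration reference; the cell names, broker host, and credentials are all placeholders:

```
# API (parent) cell nova.conf -- switch the compute API class and enable cells:
#   [DEFAULT]
#   compute_api_class = nova.compute.cells_api.ComputeCellsAPI
#   [cells]
#   enable = True
#   name = api
#   cell_type = api
#
# Compute (child) cell nova.conf -- much the same, plus a no-op quota
# driver, since quotas are enforced in the API cell only:
#   [DEFAULT]
#   quota_driver = nova.quota.NoopQuotaDriver
#   [cells]
#   enable = True
#   name = cell1
#   cell_type = child

# Then register each cell with the other via nova-manage, pointing at the
# message broker of the cell being registered (run the mirror-image
# command, with --cell_type=parent, inside the child cell).
nova-manage cell create --name=cell1 --cell_type=child \
    --username=guest --password=guest --hostname=rabbit.cell1.example.com \
    --port=5672 --virtual_host=/ --woffset=1.0 --wscale=1.0
```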
In terms of pitfalls, cells is really a very compute-centric solution. Most of the other services have only vague cell awareness, if any. There are some tricks you can use to push some of the components from the other services down into the cells, but it's very hit and miss; there's no consistent approach across the other services to interacting with cells. There's also currently fairly minimal test coverage in the gate, which is something we need to improve. And there's some standard functionality, functionality you wouldn't even really think about when moving to cells, that doesn't actually work at the moment; host aggregates and security groups in particular have known issues. The reason for this is the way it's implemented: cells is almost bolted on to Nova, in such a way that additional paths need to be created to support some of these functions in the current implementation.

So, comparing some of the key points to take away. Regions are supported by all services, obviously, because a region is effectively a separate deployment just connected by Keystone at the top, or Horizon as it may be; cells are only really supported directly by Compute. We have separate endpoints for regions, which as I mentioned is a bit of a barrier from a user perspective, because it means you have to specify them; with cells, we get a common endpoint. And implicit in that is the concept of regions existing above scheduling, and cells inserting that additional scheduling layer. One important consideration, when it comes to whether we can upgrade these, is that regions are only really linked by the REST APIs, while cells are linked via RPC. The communication between the nova-cells services in each cell uses the message queue, and it's therefore somewhat more vulnerable to changes between versions. A lot of work has gone into Nova to try to stabilize some of this and support upgrades better, but cells hasn't particularly been a focus in that effort yet, so there's still some testing going on to see how far we've actually got with making that work.

So I want to talk now about host aggregates and availability zones, and some particularly confusing aspects of these as well. Host aggregates, first of all, expose a logical grouping of hosts, typically based on metadata, but really what the metadata is describing is some capability that all of those hosts share. That might be fast disks for ephemeral storage, networking devices with a faster speed, GPUs, and so on, that you can expose to guests. Hosts can be in multiple host aggregates: for example, I might have one host aggregate whose hosts have SSDs for ephemeral storage and another host aggregate with fast network devices, and there might be some overlap between those groups. So I have some hosts that are in one group, some that are in another, and some that are in both. Host aggregates are what I refer to as implicitly user targetable. What that means is that as a user I don't go in and say I want my instance to run on that named host aggregate. Instead, I'm selecting that I want to run my instance with a given capability. The admin has defined a host aggregate with some metadata describing, or exposing, a capability, and then a flavor with extra specifications that match it. As the user, I'm selecting by the flavor. Based on that, the scheduler knows I've selected a flavor with that capability, and it takes that and finds a host aggregate with matching metadata. In terms of an example of how we might lay that out on top of our earlier regions example: first of all, as the administrator, I go through and create some host aggregates. Today I'm creating three: storage optimized, network optimized, and compute optimized. For aggregate creation we typically use very broadly descriptive names.
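A minimal sketch of that first step with the nova client, using the illustrative aggregate names from the example:

```
# Create three capability-based host aggregates (no availability zone
# yet); each command returns the aggregate's id for later steps.
nova aggregate-create storage-optimized
nova aggregate-create network-optimized
nova aggregate-create compute-optimized
```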
Once we've set that up, we need to set the metadata keys. Here are some very basic keys: fast storage, fast network, and high frequency CPU again. In some cases you might want to set something more specific. One thing to think about here is that if I'm exposing a GPU, I don't really want, in the elastic cloud, to expose to the user the exact model of GPU I'm using. Instead, I should be exposing the capabilities that the device has. For a GPU that might be, for example, a specific OpenGL version that I know the host can support, or a DirectX version. Once I have my metadata defined, I start adding hosts to the aggregates, and once I've added those hosts and populated the aggregates, I start seeing them come up here. In this particular example, I started with my storage optimized aggregate, then I added my network optimized hosts, and then my high frequency CPU hosts. As you can see, there's some overlap between those groups. So a user might use a flavor that specifies both storage optimized and high frequency CPU, and that would end up on the two hosts at the top left, and so on. Obviously, because I'm using regions, if I want to expose those capabilities in both regions of my cloud, then because they're completely separate compute deployments, I actually need to go and define those host aggregates again on the other side of the cloud. In terms of making the aggregates I just created user targetable, I then need to go through and modify my flavors to actually carry the extra specifications that match the metadata of the host aggregate, which you can now do through both the command line and Horizon. Just touching on that: as of Icehouse, in Horizon you can create host aggregates and availability zones. A key thing that's missing is that you can't actually set the metadata on the host aggregate from Horizon yet. So you can do the initial aggregate creation and the flavor modifications, but you cannot yet do the metadata modifications from there. Finally, once the user has selected their particular flavor, and thereby the capability, the filter scheduler is responsible for matching those extra specifications to the metadata of the aggregate and placing the instance.
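Pulling those steps together in a rough command line sketch; the aggregate ID, host names, and flavor are placeholders, and it assumes the AggregateInstanceExtraSpecsFilter is enabled so that flavor extra specifications are matched against aggregate metadata:

```
# Tag the aggregate (id 1, storage-optimized here) with a capability key.
nova aggregate-set-metadata 1 fast_storage=true

# Populate the aggregate; a host can appear in several aggregates.
nova aggregate-add-host 1 compute-01
nova aggregate-add-host 1 compute-02

# Create a flavor and give it a matching extra spec; users then target
# the capability implicitly just by picking this flavor at boot time.
nova flavor-create ssd.large auto 8192 80 4
nova flavor-key ssd.large set aggregate_instance_extra_specs:fast_storage=true
```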
So this is where it gets a little bit confusing: availability zones. Availability zones are a logical grouping of hosts based on some arbitrary factor, typically a failure domain, that you're exposing to the user. They are what I refer to as explicitly user targetable: the user can say, I want my instance to run in that exact availability zone. Although, as long as no default is set, Nova will choose one for you; or you can set a default so that if a user doesn't specify an availability zone, all requests go to that particular one. What's interesting is that availability zones and host aggregates, despite those characteristics, which would make you think they're very different things, are actually the same. Availability zones are effectively an extension of the host aggregates concept: host aggregates are made explicitly user targetable by creating, or exposing, them as an availability zone, and there's an additional entry that goes onto the host aggregate in the Nova database to indicate that. So in this example, when I did my aggregate create previously, I only specified the aggregate name. Here I also specify the availability zone name I'm exposing it as, AZ1 for example. The main difference that makes, in terms of the user, is that the host aggregate is now exposed as an availability zone, so yes, I can target it. As the operator, though, it means that I cannot put hosts into multiple availability zones. I say "well, sort of", because there are a couple of little gaps in Nova where it does actually let you get away with this, but since Grizzly the intent has been to prevent you from putting hosts in multiple availability zones, and it certainly makes it fairly difficult to do so. Hosts, of course, as we mentioned, can be in multiple host aggregates. So in terms of our environment at this point, which has regions and some host aggregates defined, if we were to layer some availability zones over it, it would look something like this. Really, instead of just the four host aggregates I defined originally, I now have eight, but four of those are exposed as AZs. That may mean that in my two geographic regions I have two separate data centers in each, for example; or my failure domains could be much more local than that, depending on how I'm operating my cloud. Stacking those up again quickly: host aggregates are implicitly user targetable; availability zones are explicitly user targetable, although it's important to note that the targeting isn't usually mandatory, and if the operator sets a default availability zone, requests will go to that zone by default even if none is specified. Hosts can be in multiple aggregates; they can't be in multiple availability zones. And at the core of it, the difference in concept is that a host aggregate is used for grouping via capability, while an availability zone is used for grouping on some arbitrary factor, typically a failure domain.
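Sketching that difference on the command line, with placeholder names again; the only change from the earlier aggregate creation is the extra availability zone argument, plus the user now being able to target the zone explicitly at boot:

```
# Create an aggregate that is also exposed as availability zone AZ1;
# the second positional argument is the availability zone name.
nova aggregate-create rack-a AZ1

# Add hosts by aggregate id (shown as 4 here, from the create output).
nova aggregate-add-host 4 compute-03

# The user can now target the zone explicitly when booting
# (image and flavor names are illustrative).
nova boot --image fedora-20 --flavor m1.small \
    --availability-zone AZ1 my-instance
```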
The final thing I wanted to talk about today is workload segregation. Workload segregation is fairly new in OpenStack, and it's been a gradual path to get there. In the Grizzly release there was the addition of what was referred to as the anti-affinity filter, and what it allowed you to do was create what were at the time called instance groups, where the instances within the group would always be placed on separate hosts, no matter what. If a separate host wasn't available, the request would fail. In the Havana release we saw the addition of the affinity filter, which went the opposite way: all of the instances in the group would be placed on the same host. The problem with the implementation at that point was that, as the operator, you had to decide whether to enable one filter or the other. There was no separate storage of the policy anywhere, so all groups were either affinity or they were all anti-affinity; there was no real choice exposed to the user. In the Icehouse release there's an API, a fairly basic implementation at this stage, which is referred to as the server groups API, and as a result of that the filter names also changed slightly. What's important to note, since I mentioned AWS placement groups earlier, is that AWS placement groups provide a facility for grouping instances and ensuring that they're always placed within the same availability zone. Nova server groups work at the level of the host. So instead of working on placement within an availability zone, I'm working on placement within a host, or across different hosts, depending on which policy I'm using.

In terms of creating the server group, and this is a little bit confusing, I create a group but I don't actually put any servers, any instances, in it to start with. You're really defining the policy, and then you're going to boot instances into it after the fact. So what that means is that I create my group, I specify an affinity or anti-affinity policy for that group, and then I define my group membership by booting instances with a scheduler hint that is, effectively, the group ID. At the moment there's no concept of modifying the group membership on the fly, so I can't add a running instance and then have it placed according to the policy, and I also can't take an instance out of the group. It basically runs in that group until it terminates.
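As a rough sketch with the Icehouse-era client; the group name, image, and flavor are placeholders, and the group UUID comes from the create command's output:

```
# Create a group with an anti-affinity policy: its instances should land
# on different hosts. The command returns the group's UUID.
nova server-group-create groupA anti-affinity

# Membership is defined at boot time via the group scheduler hint.
nova boot --image fedora-20 --flavor m1.small \
    --hint group=GROUP_UUID app-server-1
```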
So, in terms of adding that on top of our deployment here: I have a three-instance group, I'm deploying it with affinity, and it happens that it chooses that first host via normal deterministic scheduling. The obvious question is, what happens if I launch a fourth instance into that group and, for whatever reason, there's no capacity left on the host? The answer is that, at the moment, these are hard affinity and hard anti-affinity policies. So in the case where there isn't capacity on the host that's been chosen for the additional instance in the group, the schedule will fail with a "no valid host" error, which is one of the more confusing errors in Nova, I think, because you get a scheduling error back but a lot of the time you don't really know why. If instead I was using anti-affinity, and let's assume for argument's sake that as well as that hint I'm specifying the AZ1 availability zone, the scheduler will place each instance on a separate host; and if at that point I was to boot a fourth instance in the same fashion, that would fail. There are some proposals in the queue for Juno around soft affinity and soft anti-affinity policies, which would mean that when I try to place that fourth instance, instead of failing, it would make a best effort to place it on one of those three hosts, allowing for some overlap of the workload.

In terms of what's next for these particular features: I notice server groups in particular are going to be a topic at the design sessions later in the week. We have a number on Friday, one being simultaneous scheduling for server groups. That's really taking the entire server group and trying to place it as one unit, instead of considering the scheduling for each individual instance, because what can potentially happen at the moment is that you place the first instance in the group on a host, and it turns out you chose badly and don't have enough room for the rest of the group. Obviously we want to try to consider them all at once if possible. There's also the idea of keeping the scheduler hints with the virtual machine for the lifecycle of the virtual machine, which is important for replacement at a later date. Finally, one of the big things for me this week is that it's great to see a lot of users and operators here. There's a Nova DevOps session on Friday at three, which is really the operators' opportunity to come in and talk to the developers about the problems they're facing and the needs they have. I think, particularly if anyone's actually deploying, or trying to deploy and use, cells at the moment, it would be a great opportunity to come in and tell the developers what's working for you and what's not, and help direct things a little bit that way.

I also wanted to highlight some resources I referred to in this discussion. The operations guide, which I myself have edited as well, has some good content on all the concepts we talked about today and the differences between them. There's also the configuration reference guide; within the compute chapter there's some good content on how to set up and deploy cells. And finally, some of the guys from CERN have done a great job of recording their experiences with cells in a large-scale OpenStack deployment, and it's a great contribution to the community. I'd highly recommend reading that too if you're interested in learning more about some of the cells concepts in particular, and some of the tricks for pushing some of those other services down. So at that point I'd like to thank you all for coming, and I'll take any questions; there's a microphone in the middle there if people want to use that as well. Up the back. Oh, sorry, we'll go from the microphone first, sorry.

For the API cells, is it possible to have multiple API cells, or right now can we only have just one? My understanding is that at the moment it's typical to have just one, and then you still scale out the API services within that. But yeah, just one, because you could potentially introduce a separate API cell, which would almost be its own entire region, but then you'd have a different API address at the front of it. And then just one last tiny question: server groups are available as of which release? It was a gradual implementation, but what I consider the complete usable product, in particular what I referred to at the end there, is Icehouse, where you have the API to actually interact with it. Thank you.

You mentioned the lack of cell awareness in projects other than Nova. Is there any work in progress to bring this to Cinder and the other ones? Yeah, that's an interesting one, because at the moment, not that I'm aware of, and that's a little bit of a problem. There was some work done in one of the earlier releases for Cinder just to make it possible to use block storage at all from a compute cell. Initially, if you were using cells, you couldn't use block storage, which obviously is a problem. So now you can use block storage, but it's still a global Cinder installation and it has no concept of cells. Are cells considered a feature to count on for future releases? I think that's undefined, and I think that's why it would be interesting to have that feedback at the operator-developer session on Friday, to find out what people need from it and what isn't working for them. Because there is certainly, I think, well, there should be, concern that it's not as well tested as it could be, and obviously that puts it in a little bit of danger. But it's obviously a very useful feature to have, so I think it's important, and that's why I really want to see users coming forward and explaining their experiences with it.

Dungeon from CloudWatch, I just have one question on aggregates and availability zones. Recently there was a thread on the mailing list on how to distinguish between an aggregate and an availability zone, and there are two different opinions on that.
One is that, as in the current situation, availability zones should be separated from one another. The other opinion is that an availability zone is just an aggregate exposed to the users, so it's logical to say that I would like to have one availability zone on all SSD hosts and another availability zone on all high frequency CPU hosts. In that case we shouldn't separate the availability zones, so that the user can say, I would like one virtual machine with high frequency CPU plus SSD. So what do you think of this? Right, I'm very familiar with that thread, actually, because that's kind of what brought up a lot of this stuff around the fact that you can kind of get hosts into multiple availability zones; the original decision to make it so that you can only have a host in one availability zone was really a fix for another bug, which is interesting, how that came about. I can see both sides of that argument. I mean, the two things you have to balance are, on the one hand, you want the operator flexibility, so that they can define the host aggregates and availability zones as they need to. On the other hand, you have the concern about interoperability, and particularly I think there does need to be some concept where an application can select a region, or an area of the cloud, and know that it's a separate fault domain from a different area of the cloud. I don't know how you best balance that. I think there may be room for another concept there, or even for another way to expose the host aggregates, for that matter. Well, I haven't really been in OpenStack for long, but I see the first opinion, where an availability zone is an exposed aggregate, as saying that the aggregate is just configuration for the administrator while the availability zone is a choice for the user. So the two opinions in fact underline the different nature of the availability zone, and right now, if we don't make some clear definition of availability zones and aggregates, we will keep this confusion forever. Right, yeah, and that's another concern around confusion: the fact that we're talking about availability zones when that terminology obviously exists in EC2 and other clouds, where the same term also means that you're identifying the fault domain. So if you're talking to users and application developers, and the concept of an availability zone means something different in different clouds, that can become very problematic, I think. That's just my personal opinion. So I think that's the kind of thing where feedback from users and operators is really important. In regards to the pitfalls with cells, how does Neutron interact with cells, particularly with a provider network? As I understand it, not very well; the cells installations I'm aware of are using Nova Network for VMs. I have a question for you with relation to rack awareness, and I'm curious how you would implement that with some of the things that you spoke about today. Right. I think that's a big concept within TripleO and Ironic and that kind of deployment model: how you define a rack and what that means, whether it's a resource class or something else. I'm not really that familiar with how they're applying that at this point, and it's kind of an ongoing conversation, I think, in how they first of all discover the resources and then how they place them, like whether a rack is just an availability zone or whether you go larger than that.
I think vendors are very interested in that as well, in terms of hardware vendors who want to ship out a couple of racks and then have you expand your cloud that way, making it easy to add that kind of capability in its own little pod, effectively. But I don't know the answer to that. I think it is really an orchestration and deployment question in terms of how you place them. With that, I mean, I guess it depends on why you want to put things all in one rack, but placement groups could be one idea, if you want nodes close together, all in the same rack, say, so they don't have to go off the switch or whatever. So could that be a middle-ground approach for doing it? So I think, in terms of what I talked about with server groups being not equivalent to placement groups but kind of similar, at the moment we only really have initial policies that say the unit is the host. I think there's a lot of scope there for different applications: either going wider and saying that the unit is an AZ, or something similar to that, or going even smaller and saying that for network-sensitive applications, not only do I want it on the same host, I want it in the same NUMA cell, which would be appealing for NFV, for example. So I see server groups as they are in Icehouse as really the first step towards that kind of functionality; I think there's a lot of scope for different policies. I have another question on server groups. The fact that users are allowed to use anti-affinity can lead to abuse, where a user can request, say, 100 virtual machines with anti-affinity. So is there a quota or anything that can limit that effect? There is a proposal in the nova-specs queue right now to put quotas on that. That would be quotas on, first of all, the number of instance groups a user can create, and on the number of instances they can put in a group. All right, thanks everyone for your time today. I really appreciate you coming. Enjoy the rest of the summit.