 Thank you everyone for coming. This is OpenStack capacity planning. I want to start off with a great big thank you to the doc team that built the OpenStack Operations Guide. Rackspace put them up in Austin for a week, they locked themselves in one of the rooms at the Rackspace offices, and they put together an incredible guide. They did this about a month or a month and a half after I wrote the synopsis for this talk and got it accepted for the OpenStack Summit. It turns out everything they wrote in their guide is pretty much everything I was going to say up here. But they read my mind, trust me, because everything that's great in the guide is actually what I came up with. But seriously, if you're trying to do capacity planning, the best place to start is the Operations Guide that the doc team put together; it's incredible. So that's why I wanted to start out with a great big thank you to that team, because they did some amazing work. So I'm guessing if you looked at the topics and the talks that were available for OpenStack, you too were curious about how to do capacity planning for a big OpenStack environment, and you were probably looking for easy answers. And I've got some really easy answers for you. So here's what you need to make capacity planning super simple: basically, start out with a blank check, so you need unlimited funds and unlimited time. Make sure that if you're time-boxed by a project manager or whatever, you tell them that's not going to work for you; you're going to need a lot more time than whatever they think you need. You need a lot of really smart people, and expectations that never change. So don't allow whoever's telling you to stand up this OpenStack environment to change their mind at any point, of course. 
And on top of it all, because this is really a community thing, OpenStack, the Summit, the Design Summit, if you guys could help me and all of us on capacity planning by slowing down the rate of innovation, it would really help, because things are getting too good too fast and it's really difficult to stay ahead of that curve the way OpenStack is getting better. If we could kind of swing over to the way Eucalyptus or CloudStack works, maybe it would be a lot easier for us to plan capacity and be able to lock something in for three or four years at a time. So maybe as a community, if we can all do that together, that'd be great. All right, that's probably not gonna work out. So starting off with capacity planning, the best place to start is DevStack. It is the number one best place to learn and understand OpenStack, and it gives you a test bed for whatever assertions you want to test out. It actually is useful for testing some expansion abilities by adding nodes, it's a great test bed for Quantum and a lot of the other networking options that you have, and it's great for testing some storage options and really giving yourself a good kind of playground or sandbox to test things in. So if you're gonna talk about capacity planning you should be really, really familiar with DevStack and with spinning up DevStack instances, but it's not meant for production, obviously. How many people here have stood up a DevStack instance? Awesome. And how many people have used DevStack for production? That's good, very glad to see that, because it's not meant for it and it will burn you bad if you ever try to do that. I bet you'd be surprised how many people, especially a year back, were basically trying to do that because they weren't sure how to get all of the components of OpenStack up and running together. So anyway, don't use DevStack for production. So, capacity planning. The first thing to do is ask yourself what you're going to do with it. 
So is your target a public cloud or a private cloud? Either one of those decisions, as far as I'm concerned, is really the very first break point, where one direction or the other really dictates a lot of the answers and even a lot of the questions you're going to ask yourself. If you are talking about a private cloud, it's generally easier. You know your workload much better, and you know your intended capacity. If you're in an organization that is standing up an OpenStack cloud, you probably really know the purpose. You know what you're doing it for, whether it's for app development, and you know exactly whether you're doing something on Java or on a LAMP stack or something else. You have a really clear picture of what you need the cloud for, which makes it a lot easier, and some of the decisions that you have to make later on become easier to figure out. And there are just far fewer variables. A lot of the time, people standing up an OpenStack environment for a private cloud inside an organization, even some clouds that are intended to get moderately large or be used throughout a couple different parts of the company, are still frequently building on hardware that they already have in-house. So you have fewer variables when you're planning out the capacity there. You might have only one switch option, and that switch option might be only what is already in your data center. And you might have fewer choices for the hardware nodes, because maybe you've got 12 servers or 20 servers that you've been gifted for this project and you're not gonna be able to go out and spec out some brand new kit from NEC or whoever and go that direction. So in terms of capacity planning, the fewer variables you have, the easier it gets to decide how you're gonna scale and where the break points are gonna be. 
And also, from the security side, when it's a private cloud, especially a private cloud within an organization, you're talking about in essence a single tenant. You certainly will have lots of different projects within your OpenStack, but they're all known people, all people inside your organization, so you have to worry a little bit less about hardening it than you do when you're standing up a public cloud and you have no idea who's gonna be using your system. And the last thing that makes private cloud a little bit easier is that you have to worry a little bit less about how you're gonna provision all the systems. Because of the nature of a private cloud, it's probably not gonna be huge, so you probably don't have to be able to stand up a new compute node every day, or two new compute nodes a day, or something along those lines. You are much more likely to have a slower churn for your deployment, so you probably have to worry a little bit less about how you're standing up the systems, but you should still be worried about configuration management. On the public cloud side it's trickier, because generally you expect you're gonna be designing for a generic use case or a generic workload. Think of Rackspace or AWS: they target the compute sizes and the storage as kind of the best balance of what's available, so that it will work for people who are doing all different types of dev and test, people who are doing website hosting, the Jenkins build slaves and things like that, and people who are doing data crunching, stuff like that. If you don't know exactly who your end user is, or if your end user could be any one of a range of different roles, you have to be a little bit more generic on a lot of the decisions you make. 
On the plus side, usually if you're standing up a new public cloud you've got the greenfield advantage of getting new hardware, so you can kind of do some of the math ahead of time, figure out what kind of compute density you want and what your storage and network play is gonna be, map that out ahead of time, and get equipment that's the best fit for it. The challenge of course, like the very first slide said, is that you don't have the blank check, so there's always a balance of how much testing you can do and how many assumptions you can make about a big scale cloud based on a few small initial purchases, which is why DevStack is really important, and I'll come back to that and then talk about some benchmarking and testing stuff. One last thing to really think about with public cloud is that provisioning is a major big deal, since you are probably gonna be standing up new components for your cloud all the time. For instance, Rackspace adds compute nodes constantly to both their private cloud offering and their public cloud stuff, and they need to be able to do that fast and reliably. I'm not sure what they're using to put the OS on, but I know that when they apply the configs to the systems, it only takes them about two minutes to add a compute node, because they pull it from a pool of systems that already have the operating system on them. So you need to make sure that you have a plan down the road, and something you're really familiar with, for provisioning the hardware and using configuration management. So you definitely should be using Puppet or Chef or CFEngine, or Puppet, because I don't work for Puppet, even if I am wearing a Puppet hoodie right now. And for the provisioning systems, there are some pretty good options out there. 
The oldest is Crowbar, they've been around the longest, and then Mirantis just came out with Fuel. I don't know if any of you guys have seen their demos, but it's pretty fantastic, and that's using Puppet on the back end. You can also roll your own, either just your own PXE boot with a preseed and then finishing off with Puppet, or using Puppet and Razor for essentially that purpose. It's probably one of the biggest early considerations: make sure that you are thinking about how you're gonna provision and how you're gonna manage this stuff before you start standing things up, because it's pretty tough to retroactively do that, and it's impossible to grow and manage a cloud unless you're using configuration management. I just wanted to put this slide up because, in conversations that I've had with a lot of people, monitoring is an afterthought. They're in a rush to get a proof of concept cloud stood up, and they kind of cut corners and do things as fast as they can, and frequently people actually forget about monitoring until, well, not too late exactly, but until it's a pain in the ass to implement and it slows you down. So think about how important it is to make sure all the nodes, all the components of your cloud, are actually working. Nagios is kind of the lead in that respect right now, but there's still a lot of debate about what the best approach is, and there is the whole monitoring movement. 
I knew it as "monitoring sucks", but Hunter tells me it's now kind of being rebranded as "monitoring love": the idea that there are alternatives, and that you can think about monitoring in some very different ways, in tracking the metrics and health of the systems. And that's especially important because thinking about monitoring differently is gonna help you do trend analysis and track your compute usage or storage usage over time and see how it's growing. So you can start knowing, okay, next month, if we have signups at the rate we have right now for our public cloud, we're gonna need to add another 10 compute nodes. It makes it a lot easier to budget that, and it makes you look good when your budget actually matches for the next six months or year and is referenced against the growth curve of your customers. So once you've decided whether you're gonna be public or private, and actually the rest of these decisions really do work for either, frankly, the scale one is really the major difference between public and private, you have to make sure you pick your hypervisor. The hypervisor should be the best fit for your workload. If you don't know your workload, I prefer KVM, and a lot of people in the OpenStack world prefer KVM, but it's not the only option; there's of course Xen and Hyper-V and QEMU and a few others. But if you don't pick the right hypervisor upfront and you actually start using it, it's extremely difficult to change short of standing up a new environment and migrating your workload, or potentially standing up compute nodes in your environment and migrating. There's no real play where you migrate a VM image directly, and it's basically a pain in the ass if you have a heterogeneous environment and you've got mixed hypervisors. So it's a good idea to think carefully about what kind of hypervisor is the best fit for whatever you're doing. 
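To make the trend-analysis idea concrete, here is a minimal sketch of that kind of projection: fit a line to recent monthly VM counts and translate next month's predicted demand into compute nodes. The usage numbers and the 40-VMs-per-node figure are made up for illustration; your own monitoring data and node capacity go in their place.

```python
# Sketch of capacity trend analysis: fit a line to recent usage and
# project next month's compute-node need. All numbers are illustrative.

def project_nodes_needed(monthly_vm_counts, vms_per_node):
    """Linearly extrapolate VM growth and return next month's node count."""
    n = len(monthly_vm_counts)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(monthly_vm_counts) / n
    # Least-squares slope of VM count vs. month index
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(xs, monthly_vm_counts)) \
        / sum((x - mean_x) ** 2 for x in xs)
    next_month = mean_y + slope * (n - mean_x)  # predicted count at x = n
    return -(-int(next_month) // vms_per_node)  # ceiling division

usage = [120, 160, 200, 240]            # VMs in each of the last four months
print(project_nodes_needed(usage, 40))  # -> 7 nodes needed next month
```

The point is less the math than the habit: if monitoring feeds a curve like this, the budget conversation becomes an extrapolation instead of a guess.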
So obviously if you're doing Windows stuff you're probably gonna want Hyper-V, and the good news is that Microsoft is continuing to invest at least a little bit of money and a little bit of manpower in maintaining the Hyper-V connector for OpenStack. On the other side, like I said, at least in my world KVM seems to be the most popular. How about a show of hands: of the people who have stood up OpenStack, who's using KVM? Who's using Xen? Who's using Hyper-V? Yes, mostly KVM. One of the other things about it is that some storage choices that you'll make later on work a lot better with some hypervisors. So it's all connected; it's not like you choose this one and then this one and then this one. The hypervisor is one of the first choices you'll make that dictates the answers to a lot of other questions down the road. And once you pick your hypervisor, back to DevStack, because you should test and validate your choice. This is probably one of the most important things that I would stress, especially if you are going to have to stand up a big cloud: before you start buying a lot of equipment, before you really stand up a big environment, you should test the hell out of it. Make sure you understand what your assertions and assumptions are, and validate them. Usually DevStack is a pretty good place to start, especially around just a hypervisor choice and a specific workload. You're probably not gonna want to benchmark too much with DevStack, but at the very least it makes it easy for you to test different hypervisors and even test some different network topologies and networking approaches. So whatever your decisions are, make sure you test them. The next big one is networking. In addition to the hypervisor, your networking choices are not, well, actually for the most part they are written in stone. 
Once you choose whatever network model you're gonna use, if you start using it and you have any intention of maintaining those VMs, you have to keep using it that way. So say you initially think, oh, I just need flat DHCP, that's cool, don't need anything fancy, those VLANs are for people who need fancy stuff, and you stand up a whole bunch of VMs. It's possible technically, but incredibly difficult, to implement Nova VLAN on top of that. It's even more difficult to jump to Quantum and any one of the SDN approaches. So again, make sure you test whatever your assertions are. One of the other things about networking that is really important to think about ahead of time is how much bandwidth you're actually gonna need. Everyone that I talk to who's doing production deployments is using 10 gig networking. For anything other than proof of concept, I haven't talked to anyone who's using only one gig or bonded one gig NICs. Question for the audience: anyone here stood up a big OpenStack environment and using it with just one gig networking? Okay, so do you wish you had 10 gig networking? You can't afford 10 gig for only a thousand? Well, I mean, I guess the question too comes down to density and how much you're moving across the wire, obviously. Which is why, in fact, there are no easy answers, hence the whole slide about getting a blank check and unlimited time. It's because when it comes down to capacity planning, there is no easy answer. The only thing you can do is understand your workload and understand your goal. And I think the best thing I can do for you here is try to talk about some of the bigger decisions you have to make on your way to the final architecture. So 10 gig networking in general is gonna be a big help for you, especially if you expect to move a lot of images around or transfer a lot of data inside your network. So, we talked about flat DHCP and Nova VLANs. 
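The back-of-the-envelope arithmetic behind the 10 gig advice looks something like this. The 10 GB image size is an assumption for illustration, and real-world throughput will always land below wire speed, so treat these as lower bounds on transfer time.

```python
# Ideal wire-speed transfer times for moving a VM image around the cloud.
# Real throughput (TCP overhead, disk, contention) will be lower.

def transfer_seconds(size_gb, link_gbps):
    """Seconds to move size_gb gigabytes over a link_gbps link at wire speed."""
    return size_gb * 8 / link_gbps  # 8 bits per byte

image_gb = 10  # an assumed typical VM image
print(round(transfer_seconds(image_gb, 1)))   # 1 GbE  -> 80 seconds
print(round(transfer_seconds(image_gb, 10)))  # 10 GbE -> 8 seconds
```

Multiply that 80 seconds by every image copy, snapshot, and migration on a busy node and the case for 10 gig makes itself.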
I guess the other really big consideration is to think about whether you're gonna have enough IP addresses with whatever network model you're going to do, and whether they're gonna be entirely internal, or if you're gonna do internal IP addresses with some floating IPs for the external stuff. And probably for anything significant, anything that is really big, especially on the public cloud side, the only answer is gonna be software-defined networking. I'm especially partial to NEC OpenFlow. I'm talking to other people who are doing some of the other alternatives, but NEC seems to be the most mature, and they have very likely the best long-range plan for how to do this and how to stay current with OpenStack in a year and two years and five years. So I have a feeling we're gonna see more and more NEC OpenFlow deployments with Quantum in the future. So again, whatever your assertion is, whatever you're thinking about with respect to networking, make sure to test it, and you can do some of that testing in DevStack. Certainly you can play with Quantum and Quantum plugins in a DevStack environment pretty safely. You can't reliably stress test your environment with DevStack, because in the real world, in the production environment you're gonna have, you're gonna make different database choices and you're gonna optimize around a lot of things that DevStack won't really let you easily optimize for. But you can at the very least test out some of the limitations and functionality of whatever your model is. And then eventually you will have a pretty good handle on what you're gonna do with respect to networking and hypervisors, and it might be a good time to actually benchmark. And by the time you're starting to do that, it probably makes sense to be using configuration management of some variety. 
This is one of the options, mainly because, especially when you're in that test cycle and when you're validating variations of configuration or even variations of hardware, you're probably redeploying constantly. You might get a test result, do that test again, and then make a little tweak to see if you get a better result, depending on what you're testing, whether it's on the network side or the hypervisor side or local storage access. So the sooner you get comfortable with configuration management, the easier your life will be. And the last really major capacity conversation is around compute density. By this point you've determined what you're gonna use for a hypervisor, you've determined what you're gonna use for a network, and you've tested it out and made sure that the way the network behaves matches the way you expected it to behave. Now you have to think about how many compute nodes you'll have and how you're gonna manage them. I do talk about storage in another slide, but for massive cloud storage, the options are in some sense much smaller, the variables among those options are also smaller, and they get into the weeds so quickly; if we start talking about Swift nodes and spindles and the networking behind them, obviously that's enough for an entire talk or two. So I'm not gonna get too into the weeds there. But when it comes to compute density, it's relatively simple. You wanna think about how many physical cores you've got in the box, how much RAM, and what your oversubscription ratio is going to be, so basically how many virtual cores you're gonna apply to each accessible physical core, and finally your instance storage: whether the ephemeral space for the image is local and non-shared, local on something that's shared like NFS, or remote and also shared like Ceph or a SAN, something from NetApp, something along those lines. 
Once you're thinking about your storage, you have to decide what performance level you need to provide. IO becomes extremely important in terms of the performance of the VM on the compute node, and scalability is the other factor that's really important. If you're talking about a shared solution, it makes recovery of the VM much easier. Say the instances live on NFS or Ceph: you've got 50 compute nodes and one of those compute nodes dies. The VMs that were running on that compute node are recoverable, sometimes with minimal effort, sometimes with a little bit more effort, but you can bring those back up because the instance storage is untouched and hopefully not corrupt. So you can bring those VMs up bit by bit on other compute nodes in your environment. If you are using local storage and all the VM images live on local storage, and the compute node dies, you're screwed. Maybe you can bring the compute node up, maybe it died because of the power supply or someone tripped over the cord, something like that, but if something really catastrophic happened to your compute node, all the VMs are essentially gone. The balance, though, is performance. If you're using SSDs, or caching on SSDs in front of spinners, for local storage for your VMs, you can get incredible throughput and incredible IO for those individual VMs, at the expense of the durability of the VM. A lot of people in OpenStack actually argue that that's fine, because this is cloud: you shouldn't cry if one VM dies. You should have architected your application to be spread across all the available compute nodes so that if a VM here and a VM there dies, no one is really hurt and no one's affected. The benefit there is massive performance gains, potentially. The drawback is it's not scalable. 
So if you're starting out with a compute node that has enough cores to host, let's say, 60 VMs, then you have to figure out how much storage you're gonna need to host those VMs, and it's difficult or impossible to expand that storage. Sure, you can add a JBOD box to a 2U rack server and double the number of disks you've got, but then you have a pretty significant IO problem that you're gonna compound. So you might add more storage, but at the expense of the throughput. If you really need to be scalable with the storage, you're probably gonna use remote. And one of the interesting approaches is what DreamHost did with their DreamCompute, which was using KVM and Ceph, with the Ceph clusters for the instance volume storage. So their compute nodes were extremely dense. They were using, or are using, I believe, quad 16-core AMD chips with a pretty crazy oversubscription ratio, so they're getting a couple hundred VMs per node, with storage that's backed by Ceph, which is easy to scale. When you need more storage on your Ceph cluster, you just add another Ceph server. So if you need that scalability, that's probably the best way to go. It also makes it a lot easier to recover. And if you decouple your local storage from your compute and RAM, it lets you scale those independently, so you can really consider those factors separately, and in some ways it makes the computations a little bit easier. So the major things you consider here, and look, color on my slides for the first time: you take your overcommit fraction times your physical cores and divide that by the virtual cores per instance, to figure out how many VMs of whatever particular flavor you're talking about you can host on that compute node. One thing I should say is that, for all of this, I'm assuming we're talking about designing a compute node and we're going to have multiples of those compute nodes. 
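The formula on the slide can be sketched like this, with a RAM constraint added alongside the CPU one, since whichever is tighter wins. The hardware numbers are illustrative, not a recommendation.

```python
# The compute-density arithmetic from the talk: how many VMs of a given
# flavor fit on one node. Example hardware numbers are made up.

def vms_per_node(physical_cores, cpu_overcommit, vcpus_per_vm,
                 node_ram_gb, ram_overcommit, ram_per_vm_gb):
    """VM capacity is the tighter of the CPU and RAM constraints."""
    by_cpu = (physical_cores * cpu_overcommit) // vcpus_per_vm
    by_ram = int(node_ram_gb * ram_overcommit) // ram_per_vm_gb
    return min(by_cpu, by_ram)

# e.g. a dense 64-core AMD box, 4:1 CPU overcommit, 1-vCPU / 2 GB flavors
print(vms_per_node(64, 4, 1, 256, 1, 2))  # -> 128: RAM is the bottleneck here
```

Running it per flavor, per candidate node design, is essentially what the spreadsheet version of this exercise does.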
If you are taking a bunch of random old hardware, it's very, very difficult, because you basically have to do each one of these calculations separately, and you end up with some unpredictable results when you're launching VMs, unpredictable performance. You should be able to launch VMs across a random aggregate of servers, but anyway, the assumption here is that we're designing your perfect compute node for your massive cloud, which will be big enough to make Jeff Bezos crap his pants, hopefully. Someone here, maybe, will do that, I'm hoping. Let's see. Yeah, so these are the calculations, and when you figure out how many VMs you can host based on the first one, you also have to make sure, whatever your ephemeral storage plan is, that you actually have enough storage, especially if you're doing local. Sometimes it's easy to forget this. You can reduce your need for storage for proof of concept and for testing by using QCOW for your instances, so you can actually spin up lots of VMs and the additional storage required is only the delta from the base image. But if you thin provision your storage in production, eventually you will be burned, and it's really difficult to back out of that when you've run out of storage. And again, think about what you're planning and test it. So now you've decided what your overcommit ratio is gonna be and how you think it will work. So deploy it, use your Puppet scripts or use Chef, and test it and test it and test it again. Even when it comes down to things like overcommit on RAM or CPU, it's actually pretty useful to play around with that and see what you run into when you change some of those ratios. And it also comes down to the flavor sizes, too. 
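As a guard against exactly the thin-provisioning trap described above, a quick check like this is worth having: size against fully-allocated ephemeral disks, not against the QCOW deltas you see early on. All the numbers here are made up for illustration.

```python
# Sanity check that local disk can hold the ephemeral storage for the VMs
# you plan to pack on a node, sized at full allocation (not QCOW deltas).

def fits_on_local_disk(num_vms, ephemeral_gb_per_vm, usable_disk_gb,
                       headroom_fraction=0.1):
    """True if fully-allocated ephemeral disks fit, keeping some headroom."""
    needed = num_vms * ephemeral_gb_per_vm
    return needed <= usable_disk_gb * (1 - headroom_fraction)

print(fits_on_local_disk(60, 40, 2000))  # 2400 GB needed vs 1800 usable -> False
print(fits_on_local_disk(60, 20, 2000))  # 1200 GB needed vs 1800 usable -> True
```

If the first case is your plan, you'll be fine right up until enough images diverge from the base, and then you'll be out of disk with no easy way back.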
So if you're mixing different sizes or different flavors across a compute node, you can actually run into worse performance under KVM than if you use something like host aggregates, where you point all of your small VMs at a group of compute nodes so that they're all using the same time slice under the KVM CPU allocation ratio. But test it and test it and test it again. And then storage. For object storage, basically you've got Swift, Ceph and Gluster right now, and you do have bandwidth concerns, especially as you spread that out; depending on how you're accessing it or what access patterns you anticipate, you might run into bandwidth issues. In some ways it's kind of easy to deal with this, especially on something like Ceph or Swift, where if you've got the money for it you can just add another pair of 10 gig ports if you're overloading it, but I don't know. Again, it comes down to testing and trying to figure out what your workload is gonna look like and how it stresses the equipment. And then Cinder for block storage. I have the most experience using Nexenta with ZFS for our volume storage approach, and it's actually worked really, really well. People are using Ceph, and people are using LVM on nova-volumes. So, a lot of options; again, really test it out, and it's not a bad idea to think about how you're gonna scale out block storage. Just have a plan in mind, and really think about what will happen if you've got a couple hundred terabytes or a couple petabytes of storage for blocks, how you add that and how you spread that across your systems. And again, test, test, and test again. So the last thing to talk about in capacity planning is designing your cloud controller. 
And I leave it for last because there's so much good information out there already about the scaling and the performance considerations around your cloud controller, by which I mean the box that hosts your database and your message queue and your API endpoints. The HA project would basically have you use DRBD, Corosync and Pacemaker to duplicate all of those services and all that data to a second failover box, and that works pretty well. There's some point where the number of nodes, or the number of end users hammering your API, requires you to pull some of your API endpoints onto dedicated boxes and put those behind a load balancer, which is another kind of easy way to add scalability for the API endpoints. And then when you actually outgrow that, the next best move is to really think about cells. Given your architecture, given the hypervisor, the density of your compute nodes, your networking decisions and how it all ties together, and your storage and how that's all tied together, the simplest plan is really to figure out what your upper bound is within reasonable performance limits and define that as your cell. Then when you outgrow that, you move on to another cell, and if you need to, you've still got the main API endpoint in front of that, and you've got your kind of segregated environments that can grow independently. The other thing, in terms of discovering limits: we at Morph Labs have been really, really excited with all the improvements in Grizzly. In fact, some of the assumed limitations in terms of API responses, and some of the scaling considerations that we had been building against for the last six months based on Folsom, have gone away, even a little bit unexpectedly, with Grizzly. 
So the trend with OpenStack, and it's fantastic to see this continue, is that every release is a significant, massive step forward in terms of reliability and performance. So, unfortunately for us, that means we're gonna be changing some of the things in our target architecture. I guess it's not unfortunate; it's actually kind of great. Grizzly has pushed a lot of limits pretty far down the road, and I think we're gonna keep seeing that happen. So the future is looking pretty good for OpenStack. Thank you very much for coming, and now it's time for questions. And actually, especially if someone has a question that I don't have a decent answer for, or if you think you've got a better answer, please speak up. This is a community; we should be helping each other out. Yes. Do I have any tools that allow you to do capacity planning? The answer is no. We've tried; we actually have some worksheets internally, some Excel worksheets that we use to calculate a few different components and kind of show what it's gonna look like extrapolated out. But the number of variables is so huge, especially just hypervisor, what message queue, what database, what networking. I think it would be quite a challenge for someone to even create a useful tool, and then they would have to figure out what all the new limits are every time there's a new OpenStack release, because they fix it and make it better every time. Yes. Yes, I agree, we do have to go much further, but I would counter that a generic tool would never be able to take into account the really fine details that you find. Like, for instance, on given hardware, the difference between a SAS disk and an SSD and the disk controller: those three things are gonna radically change from one provider to the next, like LSI versus an onboard SAS controller. 
You'll get totally different IOPS numbers, and in order to get to those numbers, you actually need to stand up DevStack. So that's what I'm saying: you have to start by defining your workload, figuring out what you're designing for, and then you can take some generic guesses based on equipment, based on the information available from the vendor. I guess I can't imagine a tool that could encompass the variables in a useful way. Fair enough. Yeah, a tool would be a good starting point, yes. I don't know of any up-to-date comparisons. What I do know is that KVM with AMD will get you, today, the highest density. So in a single box with 64 cores, and maybe an overcommit of two or four to one of virtual cores to physical cores, you can get 200 or 200-plus small VMs on a single node. And in that instance, you'll need remote storage; there's no possibility you'll have the IOPS to support all of those VMs hammering local storage. And honestly it changes, especially with what Intel is coming out with in the next quarter; it's very likely to change the game again. I think it's a trick question, because you can make OpenStack on KVM significantly denser than VMware. So unfortunately it comes down to density versus performance: how dense do you push it before it becomes unusable, and what's your minimum performance benchmark? For instance, we use UnixBench as the quick, easy way to map the performance of a VM in an environment with other VMs, with a specific workload and a specific CPU overcommit ratio and a specific RAM overcommit ratio. You'll get the same result, or close to the same result, with UnixBench time and time again. So then you can load up an environment the way you want. 
So take a compute node, put some percentage of the VMs doing a certain workload that you want, basically load up the environment with what you think will be a real-world case, then run your benchmarks a few times and see if you're getting consistent results. Then change something, test again, and see what results you get. So again, it comes back to testing and validating your assertions. Yes. So, first question: how do you plan the overcommit ratio of CPU and RAM? It's exactly what he was just asking, really. It comes down to testing. I think the OpenStack default is 16 to one, which seems a little bit crazy to me. Somewhere between two and four to one will usually yield pretty good performance on really modern processors. But the answer to your question, unfortunately, is: first, what processors are you using? If you're totally greenfield, you can pick whatever you want. Then it's a question of what density you need to target. The latest AMDs will probably get you better density; the latest, fastest Intels will probably get better performance, at the expense of additional cost and lower density. And then the RAM overcommit really is a question of your workload. So if you're in an environment, for instance a private cloud, where you have direct control over what operating system your developers are going to use, and you know they're all going to be running Red Hat or Ubuntu, you can use KVM's shared memory (kernel same-page merging) and have massive savings on memory, and overcommit without too much risk. But if you have no idea what people are going to be running, and they might be spinning up some extra-large Windows servers and a bunch of Fedora boxes and who knows what else, then your overcommit ratio on RAM is a little bit riskier. And it's a matter, I guess, of testing your assertion. And you had a second question.
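For reference, the overcommit ratios discussed here are plain scheduler options in `nova.conf`; in Grizzly-era releases they looked roughly like this (the 4:1 CPU value is just an example from the range above, not a recommendation):

```ini
[DEFAULT]
# vCPU-to-physical-core overcommit; the OpenStack default is 16.0
cpu_allocation_ratio = 4.0
# virtual-RAM-to-physical-RAM overcommit; the default is 1.5
ram_allocation_ratio = 1.5
```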
I'm sorry, I don't remember what it was. Scalability of RabbitMQ? You can cluster it, and you can put the cluster behind load balancers. There are actually some pretty good docs about scaling up RabbitMQ in the OpenStack documentation. And the rest of it is just like scaling up MySQL; you can actually do a lot to make MySQL more performant. And frankly, if you're really worried about the performance of RabbitMQ, stand up a Rabbit server on its own, or a cluster of dedicated servers hosting Rabbit, put your database on a different box, and put your API endpoints on different boxes behind a load balancer. You had a question. We have found it to be extremely realistic and useful in production. I can't quantify it; I guess I don't have direct numbers, but in terms of performance, the impact on the host was negligible compared to the benefit of getting that extra shared RAM, the shared pages. So two to one total, and we were never able to run out of RAM. And that was with a reasonably heavy workload: we had a lot of VMs compiling the kernel, a lot of VMs doing web load testing, and some other stuff. We actually used Bitcoin miners as something to hammer CPU and IO. So we mixed these across the environment, and we were never able to actually exceed what we expected on the RAM. Yes. So the question, it's both. Okay, so the question was basically: what's the best way to place like VMs on like compute nodes? The answer is a combination of host aggregates, so basically grouping the same type of compute nodes together so the Nova scheduler knows how to find them, plus scheduler rules. And finally, you can make specific flavors: if you want to make sure that flavor X always goes to that host aggregate grouping, you can set up a flavor and a scheduler hint for that. And I think, though, I'm almost out of time.
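As a sketch of the aggregate-plus-flavor pairing just described, using Grizzly-era nova CLI commands (the aggregate, host, and flavor names are made up for illustration):

```shell
# Group a class of compute nodes into an aggregate and tag it
nova aggregate-create fast-storage nova
nova aggregate-add-host fast-storage compute-ssd-01
nova aggregate-set-metadata fast-storage ssd=true

# Create a flavor whose extra spec matches the aggregate metadata, so the
# scheduler (with AggregateInstanceExtraSpecsFilter enabled) only places
# it on hosts in that aggregate
nova flavor-create ssd.small 100 2048 20 1
nova flavor-key ssd.small set ssd=true
```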
So maybe one more question, or... sweet. Thank you. Thank you very much, everyone, thanks for coming.