Good morning, everybody. Welcome to the High Availability Update session. In the next 30 to 40 minutes, I'm going to cover a little bit about the current and existing high availability features that we have in OpenStack. I'm going to give a little bit of an overview of what has happened between the Essex and the Folsom releases, and what's going to happen in the high availability space for OpenStack between now and Grizzly. And I would like to start with a little bit of a look back at the Folsom Design Summit, which, of course, we had in April of this year in San Francisco. Who was here for the Folsom Design Summit? Show of hands, please. Quite a few. And then we have quite a few new faces. Awesome. By the way, for the new faces, those who may not have seen me before: I'm that one guy who raised his hand when Jonathan asked whether there were any people from Austria in the audience this morning. So that's me. So at the Folsom Summit, we had a Design Summit session in which we had some pretty involved discussions about whether we actually wanted high availability features in OpenStack, classic HA features, or whether OpenStack should just be limited to, you know, the classic type of stack that's there for the cloud startup company, more or less, the one that builds one application, or a handful of applications, that massively scale out and have resiliency and high availability built in. But we decided that, yes, it was a very good idea to have classic high availability features, enterprise high availability features, in OpenStack. And there were a few reasons for that. A couple of the reasons were essentially based on competitive analysis. When we look at infrastructure HA support, meaning support for high availability of the infrastructure services that are powering and underpinning a cloud application: if we take a look at, for example, Amazon Web Services, of course they have infrastructure high availability built into their product, so much so that it causes a major ruckus when that goes to shreds temporarily, as has happened a couple of times in the past. In the open source space, there is one competing technology with OpenStack that also has high availability features, and that is Eucalyptus. Eucalyptus introduced cloud controller high availability in its 3.0 release, and in its 3.1 release finally actually made it open source, which is a good thing to have. But more important than what other cloud platforms are doing, and whether OpenStack should support HA because of that, is the fact that we have a very significant piece of our user base, or prospective user base, that really, really wants HA, and these are what we call the enterprise users of OpenStack. These are the groups and organizations and companies that look to OpenStack not as a way of running the one massively scalable application that they're trying to deploy, but as something to organize and run a modern data center: a modern way of running an infrastructure that has previously been running either on bare metal or on commercial virtualization solutions, and they're looking to displace these existing legacy systems with OpenStack.
And there is something that is very, very crucial about that part of our potential or existing user base: contrary to, you know, the classic hotshot cloud startup company, these people typically do not have the luxury of being able to easily re-engineer applications. Number one, they typically don't maintain just a handful of them; they maintain a boatload. Many of these are legacy. Many of these cannot be re-architected or re-engineered, or some of them can, but then it comes with a massive investment or massive cost associated with it. And of course, many of these applications have to be available 24/7. They have to be highly available. They have to run essentially interruption-free all year round. And because of that, it's a vital thing for OpenStack to make an effort in that direction, so that we can support this kind of user base better. In other words, we have to support a certain degree, or preferably the whole nine yards, of high availability in OpenStack. What we also realized in those Design Summit sessions is that if OpenStack were to basically go and create its own high availability stack that it then plugs into, pretty much from scratch, that would usually be a pretty bad idea, because it's an example of reinventing the wheel and, to a certain degree, an example of not-invented-here syndrome. In other words, a pretty bad thing. And it turns out that we have an excellent stack that we can combine with OpenStack in order to achieve an excellent degree of infrastructure high availability. So, because reinventing the wheel was a really bad idea, we agreed to use existing HA technology, although not necessarily across the board. This doesn't mean that we're not building any intrinsic HA features or scalability features or reliability features into OpenStack, definitely not. But wherever and whenever it is feasible and useful to plug into an existing solution, we would do exactly that. And that is where the Pacemaker stack comes in. For those of you not familiar with Pacemaker: the Pacemaker stack, or the Corosync/Pacemaker stack I should probably say, is essentially the definitive high availability stack for the Linux platform. It is a cluster resource management application which is extremely powerful. It runs on a couple of extremely reliable cluster messaging layers. The default one is the Corosync messaging layer, which, by the way, is not only the basis for Pacemaker; there is a number of other projects that use Corosync at their core. One such project that OpenStack users may also be familiar with is the Apache Qpid project, which of course is also an AMQP message broker implementation akin to RabbitMQ and ZeroMQ. And the Pacemaker stack, by the way, has been developed for the better part of a decade in a very community-centric fashion. A lot of the work that's being done on Pacemaker is currently done at Red Hat, and there are very, very significant contributions also from the Ubuntu and Debian communities, so it's a very broadly supported stack. And the Pacemaker stack has several advantages that make it particularly useful in combination with OpenStack, which is of course an array of very, very diverse services. Number one, the Pacemaker stack is very storage-agnostic. It really doesn't care, and by the way, that's something that distinguishes it from many other high availability stacks.
Pacemaker really doesn't care where the data that the cluster is using is stored. It doesn't care if it's on a SAN, which is the very conventional, sort of legacy approach. It doesn't care whether it's on replicated block storage, like RBD or DRBD. It doesn't care whether it's on a distributed file system like GlusterFS. It doesn't care whether the application perhaps takes care of its own replication, which is also something that's very useful for many OpenStack services. Pacemaker can deal with all of these, which makes it very useful for something as diverse as the whole array of OpenStack services. It is not only agnostic toward the storage where data is kept; it is also essentially agnostic toward the application. And that makes it extremely extensible, which is one of the advantages of Pacemaker that I'm going to elaborate on in a little more detail in a few minutes. So Pacemaker really doesn't care whether it manages a MySQL database or libvirt or RabbitMQ or whatever you may think of. It is an entirely storage- and application-agnostic high availability stack. Many of you may associate high availability with just your classic two-node failover type cluster. Pacemaker is by no means limited to that. Pacemaker supports two-node, three-node, 16-node, 32-node clusters if you wanted to. It supports the classic failover cluster, but it also supports load balancing clusters. It actually has some very, very neat features that make it very simple and easy to administer load balancing clusters with Pacemaker. And we can build hybrid clusters that are combined failover and load balancing solutions. And again, that makes it extremely useful for OpenStack. And then finally, as I've already said, this stuff has been in development for the better part of a decade. It is in production use in probably thousands of installations worldwide. This kind of stuff actually powers air traffic control systems. So it's something that's extremely well tested, very hardened, has a lot of development under its belt, and is pretty much aware of just about every corner case that is to be had in high availability solutions. And that, again, makes it a very, very good solution for OpenStack to interact with. Now, I already mentioned that OpenStack relies on a few crucial infrastructure services in order to function, and those are not actually part of the OpenStack code base proper. They're just services that OpenStack uses very, very extensively. Just to name two such examples: there are the relational databases that we use. Most people will use MySQL, but theoretically we support anything that's supported by SQLAlchemy, for most of the OpenStack services. And another example is the message broker. Most people will be using RabbitMQ, because that's in most of the documentation. You might also be using ZeroMQ. You might be using Qpid. And interestingly, for many of these, Pacemaker-based high availability solutions already exist. So most of the work, or a lot of the work, that we needed to do between the Essex and the Folsom releases was to ramp up some documentation for this. It doesn't help if the solutions are out there if no one knows about them, right? So a lot of the work that needed to be done was work in the documentation space. And whenever we talk about documentation in OpenStack, that is of course where the very wonderful Anne Gentle comes in.
And Anne has actually been very instrumental in, number one, accepting a separate OpenStack High Availability Guide into the OpenStack documentation. We decided that that was the best way to move forward, because high availability is sort of an overarching topic that doesn't really fit into the individual guides that we have for individual OpenStack sub-projects. So we built sort of a documentation compendium, comprising all sorts of high availability aspects of OpenStack, into one guide. And she also made some really, really useful changes to the Jenkins continuous integration infrastructure to help us build this stuff. I don't want to just give a shout-out to Anne, because we have a number of other significant contributors in there who were very, very helpful with reviews and with taking a look at our patches and getting them merged. So Tom Fifield is one. Is Tom in the room here? No? Okay. Well, Tom's been very instrumental in this. And then there's Razique, who's helped us out quite a bit. And Atul Jha, who was actually mentioned in the keynote this morning, AKA koolhead17 on Twitter and of course on Gerrit and what have you; he also provided some very useful feedback in there. And we have results. And the result is actually currently available on docs.openstack.org: that is the OpenStack High Availability Guide. It is not yet linked from the front page of docs.openstack.org, but we anticipate that to happen relatively soon. Let's move to the pointer. Let's move that over here. Where are we at? So it basically gives a bit of an intro to the Pacemaker cluster stack. Why did that collapse again? I don't want to see that collapse. There we go. So it explains, from an OpenStack perspective, what the benefits of the Pacemaker cluster stack are. It gives an overview of how to install the appropriate packages and set up the Corosync messaging layer; as I said, that's the underlying cluster messaging layer that Pacemaker uses. And then it covers actually starting Pacemaker and getting a working high availability cluster, which at that point doesn't run any specific resources yet. And then it goes into detail, and we already have this, about configuring highly available MySQL and highly available RabbitMQ. The approach that we're using for both of these is a very simple one: MySQL and RabbitMQ both use DRBD storage replication, a virtual cluster IP address, and standard two-node failover. But that's actually not the only way of doing things, as I'll get to in just a moment. And we're expecting this High Availability Guide to grow as we're nearing the Grizzly release. And most importantly, there are some very, very interesting additions that are queued for this already. So you can take a look at that; it's on docs.openstack.org. And obviously, you can open my slides; I'll give you the URL for the slides at the end of the talk. Then, of course, you can just hit this iframe here and go from there. So let me switch to this thing again. It's too far. Here we go. So will the stuff that's in the HA guide be the one true way of setting up high availability for OpenStack forever? Well, you probably guessed the answer. Definitely not. We're fully expecting this to evolve in the future, as OpenStack features become more powerful and as we're getting some high availability built into certain services. But what's in there right now is what you can do today: it works in Folsom, and it works with minimal changes to an existing OpenStack infrastructure. That is to say, when you follow this guide, there's really nothing else that you need to do in terms of changing your OpenStack service configuration, other than maybe pointing your SQLAlchemy URLs to an IP address or a host name that is now mapped to a virtual failover IP. Everything else will be completely transparent; you can just roll with it, as in the sketch below.
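To make that concrete, here is a minimal sketch of what this looks like from a service's point of view. The virtual IP, database name, and credentials are hypothetical examples, not from the HA guide itself; the point is that the only change on the OpenStack side is the host in the SQLAlchemy URL:

```python
# A minimal sketch, assuming a hypothetical virtual cluster IP
# (192.168.42.10) and example credentials. The SQLAlchemy URL simply
# names the virtual failover IP instead of a physical MySQL host.
# Requires a MySQL driver such as MySQL-python to be installed.
from sqlalchemy import create_engine, text

engine = create_engine("mysql://nova:secret@192.168.42.10/nova")

# If Pacemaker fails the IP (and the DRBD-backed data underneath it)
# over to the other node, nothing about this connection logic changes.
with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())
```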
Like I said, we are fully expecting some of the content in the HA guide to be revised and replaced as OpenStack develops, and I'll give you a couple of examples for that. Those of you who have been following the development of Kombu, which is the Python library that we use to connect to RabbitMQ from various OpenStack services: Kombu just recently acquired the functionality to be configured not with one RabbitMQ host to connect to, but with a list of RabbitMQ hosts, and it then uses that for client-side failover. So it will be capable of knowing, okay, rather than having that one RabbitMQ host, I have several that I can connect to, and then I can use RabbitMQ's facility to do active-active queue mirroring. So then I can use that directly. A similar approach would be to run SQLAlchemy with a list of MySQL addresses rather than a single MySQL address or host name, and then use MySQL Galera synchronous multi-master replication, if the two-node, active-passive replication capability that is provided by the DRBD/MySQL pair is not sufficient for you. So in that case, you would be able to, say, run a three-node Galera cluster, and SQLAlchemy would know that when it makes a transaction and that transaction fails on that specific MySQL box, it can then roll back and retry that transaction on a different node, which is something that's meaningless to do on a single-node MySQL instance, but with Galera you will be able to do that. Like I said, what is in the OpenStack HA guide right now is a setup that we know works, and that we know works with minimal modifications to your existing OpenStack installations, but we're fully expecting that to evolve as we go along and then maybe support some more sophisticated high availability replication mechanisms that are built into the respective applications. What we're trying to do with the HA guide is, when that happens, still give you a simple and easy upgrade path, if you will. So we're going to structure the documentation in such a way that you wouldn't have to touch your OpenStack services at all, or only minimally, to maintain high availability, even if you migrate off of the currently presented DRBD-backed replication schemes. The sketch below shows what that Kombu-side failover looks like.
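Here is a minimal sketch of that Kombu client-side failover, assuming two hypothetical broker hosts (rabbit1 and rabbit2). The alternate-URL and failover-strategy features are recent Kombu additions, so treat the exact spelling here as illustrative rather than authoritative:

```python
# A minimal sketch of Kombu client-side failover, assuming two
# hypothetical RabbitMQ brokers (rabbit1, rabbit2) that are doing
# active-active queue mirroring between them.
from kombu import Connection

# Semicolon-separated URLs are treated as alternates: if the first
# broker is unreachable, Kombu fails over to the next one in the list.
conn = Connection(
    "amqp://guest:guest@rabbit1:5672//;amqp://guest:guest@rabbit2:5672//",
    failover_strategy="round-robin",
)
conn.ensure_connection(max_retries=3)
print("connected to", conn.as_uri())
```

The same idea applies on the database side: hand SQLAlchemy a list of Galera nodes and retry a failed transaction against a different node.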
Contributing to this HA guide is something that we've actually made easy. Who in here, just a quick show of hands, please, has contributed patches or bug reports to the OpenStack documentation? Anyone? A few? Okay. Those of you who have seen the main body of the OpenStack documentation have certainly realized that it is in DocBook XML, which tends to be not the most writer-friendly format. One sort of side issue that we solved with the HA guide is that we made it possible to write documentation in AsciiDoc, and the HA guide is fully written in that format. AsciiDoc is a very simple text-based documentation format. If you've ever edited a wiki page or edited Markdown or anything of that nature, you'll feel right at home in AsciiDoc, and that makes it a lot easier to actually contribute patches or additional bodies of documentation. So contribution is a lot easier. Oh, and this is all on GitHub, in the main OpenStack manuals repository, actually. Unfortunately, I can't click around in this, because apparently GitHub has a facility that just serves a completely empty page if it detects that it's in an iframe. For whatever reason. Anyone from GitHub here? If so, see me after. So that's another nice little CI addition that we've had here for the documentation. The documentation, obviously, is not all of it. There is a lot of work that has happened since the Essex release in the actual code base. And here is where one of the major advantages of Pacemaker comes in: Pacemaker is extremely third-party-addition-friendly. And this is another thing that distinguishes it from many other cluster stacks. If you want to add additional functionality to Pacemaker, all you need to do is write a plugin, which we call a resource agent, and drop it into the right path in the file system. And as soon as that's there, you can use it in your cluster. All the management tools know about it; it's integrated into the command-line tools and the GUIs for Pacemaker, whatever you would use. And you can just use that. And what's even better is you don't have to jump through any hoops like writing that plugin in C and then linking against a Pacemaker library or anything of that nature. We can write these resource agents in any language. Most of them are actually shell scripts, but we can write them in Python, we can write them in Perl, whatever we want. And we can write them for any application that we choose to put under Pacemaker management. And as of today, we actually have resource agents, Pacemaker resource agents, for most OpenStack core services. So most of that work has actually been done already, and we can put OpenStack services under the Pacemaker umbrella. And just as with the documentation, we've had some excellent contributions, actually many more contributions than we expected, since the Essex release. And again, I want to give a bit of a shout-out to a few people. Emilien is actually sitting here in the second row, so raise your hand real quick. Sébastien is unfortunately not here today; he's also contributed quite a bit of these resource agents. And then Martin, who's one of the guys who works on my team, has contributed some of the resource agents, specifically the ones for Keystone and Glance. Speaking of Glance, we had originally envisioned that it was a good idea to submit these resource agents to the individual OpenStack sub-projects. But the Glance project kind of objected to that, and in my humble opinion quite rightfully so. What they said was: we really don't know anything about OCF resource agents, so we can't really review this; it's actually a smarter thing to have these resource agents in a separate repo where people who actually know and understand OCF resource agents can review them. So all of this code is on GitHub, as you would expect. It's not part of the OpenStack organization's GitHub repos yet, and we're working on getting that upstreamed and streamlined into there. So that's currently in one of Martin's private repos. Martin is the guy who goes by madkiss on GitHub and on IRC, and some of you may have met him in #openstack on Freenode and in other places.
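To give you an idea of what such a plugin looks like, here is a heavily simplified, hypothetical resource agent in Python. A real agent also implements actions like meta-data and validate-all, and the file would live under the OCF resource agent path (/usr/lib/ocf/resource.d/, under a provider directory) so Pacemaker's tools can find it:

```python
#!/usr/bin/env python
# A heavily simplified, hypothetical OCF resource agent sketch.
# Pacemaker invokes the agent with the action name as its first
# argument, passes parameters as OCF_RESKEY_* environment variables,
# and reads the result from the exit code.
import os
import sys

OCF_SUCCESS = 0        # action succeeded / resource is running
OCF_ERR_GENERIC = 1    # generic failure
OCF_NOT_RUNNING = 7    # resource is cleanly stopped

def start():
    # Start the managed service here (daemonize it, wait until it's up).
    return OCF_SUCCESS

def stop():
    # Stop the service; stop must also succeed if it was never running.
    return OCF_SUCCESS

def monitor():
    # Probe the service; a naive PID-file check, purely for illustration.
    pidfile = os.environ.get("OCF_RESKEY_pid", "/var/run/myservice.pid")
    return OCF_SUCCESS if os.path.exists(pidfile) else OCF_NOT_RUNNING

if __name__ == "__main__":
    actions = {"start": start, "stop": stop, "monitor": monitor}
    action = sys.argv[1] if len(sys.argv) > 1 else ""
    sys.exit(actions.get(action, lambda: OCF_ERR_GENERIC)())
```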
So as you can see, well, maybe you can't see it in the back row, but we have OpenStack Pacemaker resource agents for things like the Cinder API, the Cinder scheduler, Cinder Volume, the Glance API, the Glance registry, Keystone, a boatload of Nova services. So most of that is actually there, and you can use it. And what's even nicer about this is that people have been taking this code and are using highly available OpenStack services in production today. One of the guys I mentioned earlier, Sébastien, has actually rolled out a Pacemaker-based, highly available OpenStack cloud and went live with it this week, and I actually asked him if I could name the company. It is Stone-IT, a company in the Netherlands, part of the French Smile group, who actually went to production with a highly available OpenStack cloud just this week. And I thought it was really, really cool that we could get from, sort of, a Design Summit session back in April to a production deployment somewhere in Europe in October. I thought that was really nice. So what is next? What's up next on our high availability agenda for OpenStack? As I've already mentioned, we have the OCF resource agents currently in a separate GitHub repo; it's not part of the official OpenStack GitHub tree. So what we want to do, obviously, is integrate this OCF resource agent repository into the main OpenStack code base and, perhaps even more importantly, the standard Gerrit workflow. This is what was actually suggested by the Glance folks: that we would be able to assign people who are knowledgeable in the OCF standard to review patches and new additions to the resource agents. So that is something that is on our list in the coming weeks. I'm hoping that it will be completed fairly soon. And really importantly, we need people who test this. We need people who run this. We need people who are willing to run into issues and tell us about them, or even, perhaps, review code and review documentation. If something is unclear in the OpenStack HA guide, that is a documentation bug. Tell us about it and we'll fix it. We just can't always know what's unclear, or what comes across as unclear, to users. So please, by all means, tell us and we will fix it. If you can fix it yourself, all the better; we will happily take that as well. If you're doing something according to the documentation and it's not working, then it's either a documentation bug or it's a bug in the code, and we need your input in order to be able to fix that as well. People are deploying OpenStack in so many different ways and on such diverse platforms that we can't possibly, in our lab environment, test out every issue and every corner case that you might run into. So please, test this out, use it, and tell us about any snags that you run into. And of course, if everything works great, then tell us about it too, because we're happy to hear that. So if you want to do that, what is the required minimum, hardware- and software-wise, that you need to deploy in order to run a highly available OpenStack cloud? Well, you'd be surprised: the minimal requirements are, well, really minimal. If you want to, for example, have a sort of core cluster where you're running all of your services in a highly available fashion, perhaps not necessarily in production but in testing in a lab environment, really all you need is two nodes, two commercial off-the-shelf server boxes.
You need a little bit of storage, and you need a redundant network connection between them. And if you're so inclined, you can actually go full OpenStack Inception and basically run a highly available OpenStack cloud inside an OpenStack cloud. We've had people do that as well, or you can run the nodes on straight-up libvirt/KVM if you'd like to. It doesn't require a lot of hardware for you to actually test this, get familiar with it, and use it. All the required packages, all the packages that you need in order to bring high availability to OpenStack, are already available on a number of platforms. If you're on Ubuntu, it is one aptitude or apt-get command. If you're on SUSE, it's a zypper command. If you're on Fedora or CentOS, it's a yum command. So all of that is readily available. There's nothing that you need to build from source or pull out of some strange old repository. It's all there. As I said, we're using something that's been the standard high availability stack for the Linux platform for at least half a decade. And then you follow the HA guide, and off you go. That's what you do. Another thing: the information that is currently in the HA guide is built for rolling this stuff out from scratch, as in installing packages and configuring them by hand, and that's how you go. That works great in a lab environment, but it's something that just doesn't scale. So we're calling on all of the system automation folks to develop and deploy Puppet modules and Chef recipes and Juju charms; and the Crowbar folks, I'm sorry for not giving you an explicit shout-out on the slide, but Crowbar support would be awesome as well. Take the information that is currently in the HA guide and translate it into something that is actually automatically deployable. And again, if you're running into any issues, if you're running into any snags, please tell us about them and we'll be happy to help you out. This would be a wonderful thing to have for Grizzly: to take all that information that is currently in the HA guide and make it deployable in an automatic way. Because as impressive as Mark's keynote was this morning, there was one thing that wasn't in there, and that is HA. If he's losing that one RabbitMQ node that he has in there in his Juju-deployed cloud, he has a problem. If he's losing that one MySQL node that he had in there, he has a problem. And it would be much nicer if Mark could go up on stage (not you, Mark; Shuttleworth, actually, but you can do it too, on HP Cloud) and deploy a highly available OpenStack cloud with the same ease and the same simplicity. And of course, we need people, users, presumably the enterprise user crowd, to tell us what the high availability features are that you need next, what's important to you. Now that we've got infrastructure high availability essentially covered, what's the stuff that you need next? We also have other things pretty much covered: storage high availability is something that the Ceph stack does very, very nicely for us, and object storage high availability is something that both Ceph and Swift do very, very nicely for us. But if you're, for example, looking for virtual machine guest high availability in Nova, then that is something that you need to tell us about. If that's something that makes a difference, or would make a difference, for the adoption of OpenStack in your organization, then please tell us about it.
We have the mailing list, IRC, wherever you'd like; blog about it, tweet about it. Tell us what you need in terms of high availability in Grizzly. And, obviously, I want your questions, and I invite you to fire away in just a moment. If you are interested in this talk, that's the URL; the slides are available there for you, freely to use. I want to give one last shout-out, and that is to Bartek Szopka, for coming up with the amazing impress.js, which allowed me to do this presentation in just HTML and CSS and JavaScript and nothing else. And then, finally, I should say what I think everyone who's on stage here at the summit is saying, and that is that we're, of course, one of the many companies that are hiring in the OpenStack space. If you take a look at that, if you're interested in being a part of this, we would be delighted to hear from you. And with that, I thank you very much for your kind attention, and we have six minutes for questions. Yes? Go back two slides? Oh, I'm sorry, you wanted the URL. You wanted the URL. It's actually very simple: it's hastexo.com slash openstack summit fall 2012. Okay, we have one question here, and then over here. Of the services which you're making highly available, which of them support multi-site redundancy? Can I fail over from one site to another, or have services in three sites, say? Okay. Multi-site failover is something that is actually supported in the Pacemaker stack. It uses a facility that it calls Booth, which basically builds, out of several Pacemaker clusters, a Paxos-based supercluster that can then work with, you know, dead-man dependencies and that sort of thing, so that you can, if you want to, fail over from Singapore to London, if that's what you'd like to do. So in the Pacemaker stack, that is already very well covered. In terms of the OpenStack services, it's a little more differentiated. Swift has certain bits and pieces that make it suitable for cross-site replication, with container synchronization. Ceph has excellent distributed storage, but as of this time does not have an asynchronous replication capability, which would sort of be required for this. Some of you may be using GlusterFS in OpenStack clusters; that actually has a geographic replication capability. But there is certainly quite a bit of work to be done when we move past the single-site high availability stage and go to multi-site failover. And that is exactly one of the things that I mentioned earlier: if that is something that makes a difference to your organization, then tell us about it, and let's get that discussion going and see what we can do about this. So as of right now, we're not quite there yet in terms of OpenStack. Pacemaker can do it, but in terms of OpenStack, there's still quite a bit of a distance to be covered to get there. But if that's something that makes a difference to your organization, let us know about it. Okay, we've got a question up here. Just wait for the mic for a second, please. Hello. So, what you tested and deployed in the document, is that an active-standby high availability solution, or are all controllers active, all MySQL databases active? For RabbitMQ and MySQL, it is active-passive. And like I said, this is one of the examples where the guide may evolve in the near future, where we're actually going to support and document active-active solutions.
One thing that I would think would be very useful, for example: if you have a very large-scale cloud installation where you have all sorts of services reporting into and writing to a MySQL database, it would be really handy to have a multi-master, multi-node Galera cluster, where perhaps one of your nodes would typically be handling the MySQL traffic that's generated by Nova, and then another one that Glance can hit, and another one that Keystone can hit, and so forth. But what's currently documented is active-passive, and we're expecting to move beyond that as OpenStack evolves. We have a question back in the back. Okay, thank you. Hello. Hello. Can I get some juice on the mic? Hello. Can you come up to this mic here? Just so we have it on video. So, the OCF agents are a great start, and a lot of us have been using them already. But can you give us any update on how much progress has been made in Folsom, in the code base of some of the services, towards supporting that? Specifically, what I'm thinking of is things like the network stuff, which has a lot of state, and an OCF agent isn't good enough to actually be able to move it without all that state. Yeah, excellent question. And you're absolutely right, there are certain things that we will have to keep in the specific OpenStack services. One such example is multi-host networking. Multi-host networking is something that was already in Nova Network; last I checked, we can't really do real multi-host networking in Quantum yet. Could someone update me on that, or call me an idiot if I'm saying the wrong thing? [inaudible audience reply] Okay. All right. And so, yes, you're absolutely right: there will be certain things where an OCF resource agent just won't cut it, and we will have to have a certain amount of cooperation from the services themselves. Another thing that I could mention in that space is that Cinder in Folsom still has an issue that nova-volume had, and that is that because nova-volume was originally this thing that only exported iSCSI, it actually made a difference what host a volume would be exported from, because that's what a Nova node would then connect to to make its iSCSI connections. If you're now booting from volume and all of your volumes are RBD images in a Ceph cluster, then what Cinder host it's being exported from is perfectly meaningless. But what Cinder still does is throw that information into the database, and Nova instances know, okay, I'm connecting to this via that. So that's another thing where there still is some work that needs to be done in an existing service. And we'll take the last question, because we're out of time. Do you have any recommendations on resources for learning how the Pacemaker stuff works? Yes, absolutely. There is a wonderful document on the clusterlabs.org website, which is where most of the Pacemaker development is happening, and that is called Clusters from Scratch. So if you just Google for "clusters from scratch", that will be an excellent resource. If you are interested in the inner workings of OCF resource agents themselves, there is also an OCF resource agent developer's guide on linux-ha.org, and you can just Google for that as well. Okay, so with that, I thank you very, very much for coming this morning. Thank you for your kind attention. And if you have any questions about OpenStack HA, feel free to grab me; I'll be here all week. Thank you very much.