Hello everyone, this is Radosław speaking for the Masakari project. I'm going to show you some basic information about the Masakari project, what happened in the last release, and what we are planning for the next release.

For starters, what does Masakari do? Masakari delivers high availability for instances in an OpenStack cloud. It is implemented in terms of notifications and recovery workflows. Notifications are delivered by monitors, which may in turn rely on external sources of truth, like Pacemaker.

Now for a little background on the Masakari project. It was founded during the Rocky release of OpenStack. It was previously developed by NTT and open-sourced by them. We had 25 contributors in the Victoria cycle and we hope to have more during the Wallaby cycle.

So, why Masakari in the first place? Cloud workloads are not always cloud-native, and resilience for legacy applications may require a high-availability solution such as Masakari. This brings the OpenStack platform closer to solutions like oVirt or Proxmox, where you get HA functionality almost out of the box. Similarly, if you don't control what is running in your cloud and you want to meet your SLAs, you might want to use Masakari to deliver high availability for your customers.

Masakari is a simple project in terms of the OpenStack ecosystem. It has only two dependencies: Keystone for authentication and Nova for the virtual machine side. But it gets a little more convoluted when we look inside Masakari. From a very high level, we can see the core, the clients, and the monitors making up the Masakari project. In the core there is the API, which is contacted by users and monitors alike. The API lets you configure your segments and also receives notifications, usually from the monitors, though users can send notifications to it as well. And there is also the engine, which is the actual workhorse of Masakari.
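To make the API side concrete, here is a hedged sketch of how segments, hosts, and notifications are typically managed through the OpenStack client plugin for Masakari. The names (`production-segment`, `compute-01`) and the exact payload are illustrative examples, and argument details may differ between releases:

```shell
# Create a failover segment: a group of compute hosts sharing a recovery
# policy. "auto" is one recovery method (others include "reserved_host");
# "COMPUTE" is the service type.
openstack segment create production-segment auto COMPUTE

# Add a compute host to the segment so Masakari will watch and recover it.
openstack segment host create compute-01 COMPUTE SSH production-segment

# Monitors normally send notifications, but a user can send one too,
# e.g. to report a host failure manually:
openstack notification create COMPUTE_HOST compute-01 \
    "2021-03-01 12:00:00" \
    '{"event": "STOPPED", "host_status": "NORMAL", "cluster_status": "OFFLINE"}'
```

These commands require a cloud with the Masakari API deployed and the `python-masakariclient` plugin installed.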
It acts upon the notifications, so it runs those recovery workflows I've been talking about. The other part is the clients. It's typical, like in the other OpenStack projects: it's centered around the OpenStack client and the OpenStack SDK, and there is also a standalone interface as well as a plugin for the dashboard, Horizon.

And last but not least, the monitors: the interesting part, responsible for detecting the actual failures. There are four kinds of monitors at the moment. The first kind is the instance monitor. It works with libvirt; it has been tested with QEMU and QEMU plus KVM. It can probably work with other libvirt backends, but that hasn't been tested yet. There is also the host monitor: it is integrated with Pacemaker and detects host failures. There is also the process monitor, which monitors the nova-compute process. And the last one is the introspective instance monitor, which is compatible only with libvirt with QEMU (and optionally KVM); it looks inside the instances to check whether their health status is correct.

We finished the Victoria cycle with only one feature, which is the separation of host-level and instance-level protection tagging. Before that feature, Masakari treated host and instance failures equally: as a user, you couldn't differentiate between instances that are protected against instance failure and those protected against host failure. Now it is possible.

For the Wallaby release we've got a bunch of ideas about what to implement in Masakari. For a summary of those, please visit the link at the top of this slide. I will now go through the three, I guess, most important ones from the summary. The first one is the evaluation of Pacemaker alternatives. Perhaps "alternative" is not the best word in general, because Pacemaker, Consul, and etcd are very different things. But Masakari uses Pacemaker for the detection of host failures, and Pacemaker has its limitations.
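To illustrate what the protection-tagging separation buys you, here is a minimal, hypothetical sketch of the decision an engine can make once host-level and instance-level tags are distinct. The metadata key names used here are illustrative, not Masakari's actual ones; only the notification type names mirror Masakari's:

```python
# Hypothetical decision helper: with separate tags, an instance can opt in
# to recovery for host failures, instance failures, both, or neither.

def should_recover(notification_type: str, metadata: dict) -> bool:
    """Return True if a failed instance should be recovered.

    notification_type: "COMPUTE_HOST" for a host failure, "VM" for an
    instance failure (mirroring Masakari's notification types).
    metadata: the instance's metadata as key -> string value.
    """
    if notification_type == "COMPUTE_HOST":
        key = "HA_Enabled_Host"      # illustrative key name
    elif notification_type == "VM":
        key = "HA_Enabled_Instance"  # illustrative key name
    else:
        return False
    return metadata.get(key, "False").lower() == "true"


# An instance protected only against host failures:
meta = {"HA_Enabled_Host": "True", "HA_Enabled_Instance": "False"}
print(should_recover("COMPUTE_HOST", meta))  # True
print(should_recover("VM", meta))            # False
```

Before the Victoria feature, a single flag covered both cases, so both calls above would necessarily return the same answer.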
Well, the most basic limitation is that if you are running Corosync and you don't want to use the Pacemaker remote functionality, then you are limited to only 16 nodes, and that's usually too few for a typical cloud. With Pacemaker this can be worked around by using remotes, but the problem with remotes is that they work differently from the basic Corosync stack and add additional complexity to the Pacemaker cluster. So Masakari is looking forward to evaluating alternatives in the form of Consul and etcd, which can also be used as host state tracking solutions.

Another similar and related topic is moving fencing and host status verification closer to Masakari. For now, Masakari is kind of blind: it completely relies on Pacemaker to do its job correctly, and Masakari is unable to verify whether Pacemaker is configured correctly and whether it acted correctly in a particular case. And if fencing didn't actually happen, there may be various issues in real operations; for example, if the original host is actually still running and connected to the storage array, you might get broken volumes. What we want to do is evaluate how Ironic could help Masakari here, because basically we need functionality related to controlling a bunch of bare metal hosts.

And finally, an unrelated feature: restoring the original state, that is, the state before Masakari took its actions. For now, when Masakari runs its recovery workflows, then it's done, and it's not really possible for the user to revert what has happened. All the evacuations that were done are done, and that's it. But from time to time, when you restore the hardware to its original glory, you might want to restore the instances that were previously running there, without having to rely on external projects like, for example, Watcher to rebalance your cluster.

And Masakari needs your help.
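The Consul/etcd style of host state tracking mentioned above boils down to TTL-based heartbeats: each host keeps refreshing a key under a lease or session, and a host whose lease expires is considered failed. A self-contained sketch of that idea, using plain Python with an injectable clock instead of a real etcd or Consul client:

```python
import time


class HostTracker:
    """TTL-based liveness tracking, as etcd leases or Consul sessions provide.

    Each host periodically calls heartbeat(); a host that has not refreshed
    within `ttl` seconds is reported as failed.
    """

    def __init__(self, ttl: float, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock            # injectable for deterministic testing
        self.last_seen: dict = {}     # host name -> last heartbeat time

    def heartbeat(self, host: str) -> None:
        self.last_seen[host] = self.clock()

    def failed_hosts(self) -> list:
        now = self.clock()
        return [h for h, t in self.last_seen.items() if now - t > self.ttl]


# Simulated clock so the example is deterministic:
now = [0.0]
tracker = HostTracker(ttl=10.0, clock=lambda: now[0])
tracker.heartbeat("compute-01")
tracker.heartbeat("compute-02")
now[0] = 5.0
tracker.heartbeat("compute-02")   # compute-02 keeps refreshing its lease
now[0] = 12.0
print(tracker.failed_hosts())     # ['compute-01']
```

In a real deployment the store itself expires the keys (an etcd lease or a Consul session TTL), so no single process has to poll; this sketch only illustrates the failure-detection semantics Masakari would build on.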
So join us on IRC, on the #openstack-masakari channel, and attend our meeting on IRC every two weeks (I try not to say "bi-weekly", because "bi-weekly" may also mean twice a week). Propose and discuss features and enhancements. Report and track bugs on Launchpad. Review changes. Contribute a blueprint and a spec. Contribute code, fix a bug, add a feature. We welcome any kind of help. Make our patron, logo, and hero happy. And thank you very much for your attention; if you have any questions, I'm here to answer them.