Okay. Yeah, let's start. We are anyway late on time. So my name is Witek Bedyk. I work for Fujitsu and will give an update on the Monasca project. Who of you doesn't know Monasca yet? Okay.

So Monasca stands for monitoring as a service. It is designed and implemented to be highly available, fault tolerant and horizontally scalable. All the components which we use can be easily scaled and clustered. It is high performance. We have deliberately chosen components such as Apache Kafka as a message queue and InfluxDB as a time series database to optimize performance and be able to handle high volumes of data. Monasca is multi-tenant, meaning we store measurements, alarm definitions, notifications, everything you need for monitoring, per project. So you can expose the service to your users and they can organize their monitoring per project.

Monasca implements primarily a push model for collecting the measurements, which is better suited to complex cloud environments. It is more fault tolerant. It is also flexible: you can change the sampling rate on the agent side depending on how fine-grained you need the metrics to be.

The architecture is based on microservices, with the Kafka message queue in the central place being responsible for communication between the components. At that layer we have independent, specialized components like the real-time monitoring engine or the aggregation engine. Monasca is not only measurements, but also logging and events. The design again follows the same schema, with the Kafka message queue in the middle and specialized components for logs and events; we use Elasticsearch for storage.

Our project is relatively small, with three main contributors: Fujitsu, SUSE and StackHPC. We also have several other contributors with smaller contributions.

Now for the work we have done in the previous release. We'll start with adding support for Alembic database migrations. Till the Rocky release we have managed the database schema in simple SQL scripts.
Every time the schema was changed between versions, upgrading from one version to another was a painful and error-prone process. Now we have added a single command line tool to automate this. We have documented a clean workflow for developers, what they have to do if they change the schema, and operators have a simple automated tool for performing the upgrades.

In the last project update I reported about the effort of contributing the Monasca publisher to the Ceilometer project. The idea was that we leverage the work which was done in Ceilometer, collect the measurements there and then publish them to Monasca. It's particularly important, for example, for billing applications like the CloudKitty service. They need more detail about the instances than the Monasca libvirt plugin is delivering. Unfortunately the effort was basically blocked by the telemetry team. So we went to plan B and we maintain the Monasca publisher plugin in a separate repository.

The next area we worked on is Monasca Transform. It's our aggregation engine. It consumes the measurements from the Kafka queue, makes the calculations and then publishes new aggregated metrics back to the system. It can be used to aggregate measurements across different metrics, for example to calculate the CPU usage across all the instances on a node. It can be used to combine different metrics and generate a new one, for example taking the number of logical cores and the idle percentage of each core and calculating the utilization of the CPU. Or the rate of change of a given metric, for example how fast the available storage on your system is decreasing.

What have we done? We have significantly simplified and cleaned up the specification of how these aggregations can be configured. We have improved the documentation and upgraded the middleware used by the component.
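To make the two aggregation examples concrete, here is a minimal sketch of the calculations involved. This is plain illustrative Python, not the actual Monasca Transform code (which runs the aggregations as configurable Spark jobs); the function names are my own.

```python
def cpu_utilization_perc(logical_cores, idle_perc_per_core):
    """Combine two metrics into a new one: given the number of logical
    cores and the idle percentage of each core, derive the total CPU
    utilization as a percentage of overall capacity."""
    used = sum(100.0 - idle for idle in idle_perc_per_core)
    return used / logical_cores

def rate_of_change(samples):
    """Rate of change of a metric, e.g. how fast available storage is
    decreasing. `samples` is a list of (timestamp_seconds, value)."""
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    return (v1 - v0) / (t1 - t0)

cpu_utilization_perc(4, [90.0, 80.0, 100.0, 70.0])  # 15.0 percent used
rate_of_change([(0, 500.0), (60, 440.0)])           # -1.0 unit per second
```

In Monasca Transform such calculations are not hand-written like this; they are driven by the aggregation specification mentioned above, which names the input metrics and the operation to apply.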
In the past, installing Monasca was a real pain point, it was really complex, and we have spent quite some effort to improve that. At this point I can say we have a number of deployment methods and the situation has improved a lot. As before, Monasca is included in SUSE OpenStack Cloud; you can have it as a ready solution there. With the red color I have marked the deployment methods we have worked on in the last cycle. Monasca has been added to Kolla-Ansible. The last reviews merged just last week, thanks a lot. You can install it with Kolla, and you can also install it standalone using the same method. We have worked on improving the Docker Compose deployment in particular. We are moving the Dockerfile definitions into the upstream repos so we can integrate them more easily in the OpenStack CI. The OpenStack-Ansible role for Monasca has been fixed and updated. The complete metrics pipeline is now fully functional and ready to use.

Let's come to the current release. As the outcome of the planning meeting during the PTG in Denver, we have decided to concentrate in this cycle on extending support for logs and notifications. Logs and notifications are different in nature from metrics. They describe a single, event-based phenomenon in the system, whereas metrics provide a more aggregated view on some value in the system. These two sorts of information are complementary. You can use the metrics to narrow down the problem and then take a look at the logs and notifications to look more closely at the root cause of your problem. In Monasca, we want to follow the goal of providing a single pane of glass, one place to view all this information. We also want to correlate the different sources of information; only then will we get a complete picture.
Also, based on this event-based information, you can generate new metrics describing, for example, system performance: you can make new measurements about how long an instance takes to spin up, measure the utilization of your system, report about status and errors.

What in detail do we want to do? The first task is related to our repositories. As of now, we have three APIs: one for collecting measurements and defining alarms and notifications, one for collecting logs and one for collecting events. The outcome of this is that the code diverges, we create technical debt, and they even behave differently in some cases. That's not what we want. We decided we will merge these three APIs into one, to have the same user experience for all these actions, and it will also simplify the deployment and operation of the service.

In the last cycle, we have worked on implementing the events, or notifications, pipeline. The state as of now is that we have the API where you can send the events and they will be persisted in Elasticsearch. The last missing part for a complete pipeline is to collect the OpenStack notifications. Originally, we wanted to leverage Ceilometer for this, but recently, I guess it was September, they have deprecated event support in Ceilometer. We decided we want to go for a future-proof solution and we will add a small component which will collect the notifications from RabbitMQ and republish them to Kafka for use in Monasca. The component will provide some filtering, so that operators will be able to configure which notifications they are interested in, and it will extract some relevant data, like for example the project ID and similar.

As of now, the logs and events APIs provide only a POST method, so you can only send payloads. For visualization, we use Kibana with a plugin which provides multi-tenancy for Kibana.
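As a rough sketch of what the filtering and extraction step of that new notification component could look like: the real component would consume oslo.messaging notifications from RabbitMQ and republish the result to Kafka; the function name, the event shape and the `tenant_id` field here are illustrative assumptions, not the final design.

```python
def transform_notification(event, allowed_event_types):
    """Filter an OpenStack notification and extract the relevant data.

    Returns None if the operator has not enabled this event type,
    otherwise a reduced payload ready to be republished to Kafka.
    """
    if event.get("event_type") not in allowed_event_types:
        return None
    payload = event.get("payload", {})
    return {
        "event_type": event["event_type"],
        "timestamp": event.get("timestamp"),
        # Extract the fields Monasca needs, e.g. the project ID
        # so the event can be stored per tenant (field name assumed).
        "project_id": payload.get("tenant_id"),
        "payload": payload,
    }

# Operators configure which notifications they are interested in:
allowed = {"compute.instance.create.end"}
evt = {
    "event_type": "compute.instance.create.end",
    "timestamp": "2018-11-13T10:00:00Z",
    "payload": {"tenant_id": "abc123", "instance_id": "i-1"},
}
out = transform_notification(evt, allowed)          # reduced payload
dropped = transform_notification(
    {"event_type": "compute.instance.update"}, allowed)  # None, filtered out
```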
The task which we want to implement is to add GET methods to those endpoints, so that you will be able to query logs and events in a generic way and integrate them in third-party solutions and visualization tools. For example, in Grafana you could write a data source for logs or events. The API will have some basic filtering on dimensions and will provide statistics about the collected logs and events.

Apart from these, we also want to do some maintenance of our middleware. We want to migrate to a new Kafka client. We have done some benchmarking and found out that with the new asynchronous communication, we can publish the messages up to 100 times faster, and the consumer can be up to six times faster with the new library.

Cross-project work. Monasca can easily integrate with other projects. I have listed some existing integrations on the left. In general, we can group them into three groups. The first group consumes the alerting functionality of Monasca, so you can do Heat autoscaling. In this cycle, we will update the documentation about this. There is some documentation available, but it's spread out and we want to consolidate it. In the last cycle, we have added the authenticated webhook notification. We had webhooks before, but we have added this authenticated method. It was part of the work with Congress. Another group of services consumes the measurements from Monasca; these are CloudKitty or Watcher. Ceilometer works the other way around: we collect the measurements from Ceilometer, if needed.

On the right-hand side, I have listed some possible integrations. Unfortunately, I don't really know why, but it's difficult to find resources for working on such cross-project integrations. We have talked with Vitrage and Watcher and we have described how such integrations should look. If you would like to contribute to any of these projects, that would be a great opportunity.

We want your feedback. Please contact us in any available channel.
You can write to us on the new openstack-discuss mailing list. Please don't forget the Monasca tag; you can also use it for filtering in your mail client. You can ping us in the OpenStack Monasca IRC channel or attend the weekly meeting.

How to contribute? We use StoryBoard for coordinating our development. We have a Kanban board where you can find some stories in the backlog. If you want, you can pick some of these. Or you can just look at the open reviews, or propose a new feature: describe it shortly in a story and propose the implementation, or just the feature. Of course, you could also work on bug fixes, cross-project integrations, as I said, stories, documentation. These are all areas where we definitely need help.

Last but not least, the next Monasca presentations here in Berlin. We will have an onboarding session, in particular for those of you who are interested in contributing to the project. It will start in five minutes. Then we have a hands-on interactive session where you can try out Monasca and play a bit with the command line client. Tomorrow there is a lightning talk about Monasca in the high-performance computing cloud. That's all from me. Do you have any questions? Go on.

What is... I guess the question is: how different is Monasca from telemetry? Can it fetch the same data as telemetry?

Yeah, it can fetch very similar data as telemetry. Well, yes and no. If you really need exactly the same data as telemetry, you can use the Monasca plugin to push everything to Monasca. Telemetry is more concentrated on the OpenStack infrastructure and the services. If you're interested in monitoring your instances, I think you're fine with Monasca. With Monasca you can monitor your applications running in your project on your instances; you can install the agent on your instances. Any more questions? Yes, the only dependency actually is Keystone. Yes? Pardon?
Will you speak more about the difference between Monasca and other monitoring solutions in the onboarding session? The onboarding session... I have the flow prepared for the next presentation, but there is more time and I can definitely go into this as well.

You mentioned that you use InfluxDB. Yes. Do you also support other back-ends, such as Prometheus? We are not planning to use Prometheus as a back-end. We can scrape Prometheus metrics with an agent plugin. And for the back-end we use InfluxDB, or you can use Apache Cassandra.

Anything else? Thank you very much.