So, let's recap: the presentation is structured in three areas. First, we'll see where and why we are using LibreOffice Online. The second is the more technical section, describing the solution, the deployment and the monitoring of LibreOffice Online. And the third section goes through a list of contributions that 1&1 open-sourced for the LibreOffice Online project.

First, a couple of words about 1&1 Mail & Media. 1&1 Mail & Media offers email, cloud and identity services for more than 30 million active users in Europe and across the Americas. 1&1 Mail & Media is part of United Internet and is composed of four brands: GMX, Web.de, 1&1 and Mail.com. You can find LibreOffice Online in three out of the four brands: in Web.de, in GMX — both the international gmx.com and gmx.net — and in Mail.com. The current version of LibreOffice Online that you can find in these portals is 6.2, but we are in the middle of upgrading to 6.3.

Now that we know where we can find LibreOffice Online, let's see why. As I said earlier, 1&1 Mail & Media offers cloud and email solutions, so we naturally have files in the cloud and attachments to emails. Because of this, we needed a way to preview those files quickly and to edit them without leaving the portal — that is, without using an application like Microsoft Office or having to install LibreOffice on a workstation. The solution we created using LibreOffice Online to preview and edit files is called the Online Office Editor.

This is the landing page of the application, which offers some complementary features: creating files from predefined templates, previewing your most recently edited files, and browsing through your cloud documents in order to open them for editing. And one of the first features of the Online Office Editor: upload from local. This feature allows you to select a document from your local workstation, upload it to the Mail & Media cloud and start editing it through LibreOffice Online.
So after we select the file and click Open, the file is automatically added to the Mail & Media cloud in a specific directory, and the view of the Online Office application changes to look like this. As you can see, the file has been added to the left sidebar, in the list of open files. Currently, there is a maximum number of documents that a user can edit at the same time. And in the center, an iframe has been opened which loaded LibreOffice Online's loleaflet component, allowing us to start editing the file. You can find the same look and feel in the GMX portal — here we are editing a spreadsheet file — or in Mail.com, where we are editing a presentation file.

Besides the upload, I want to briefly mention one feature that allows customers to quickly create files from predefined templates and start editing them through LibreOffice Online. What's worth mentioning is that all these predefined templates are in ODF format, and currently they are available only for the German brands, GMX and Web.de.

So this is what I wanted to present for the first section. If we summarize, there are two important ideas to remember. The first is that you can find LibreOffice Online, version 6.2, in the GMX, Web.de and Mail.com brands. The second is that you can leverage the upload-from-local or templates features to quickly create files, or to upload files to the Mail & Media cloud, and start editing them through LibreOffice Online.

Moving on to the more technical part of the presentation, we'll present the system solution of the Online Office Editor application, focusing on how we've deployed LibreOffice Online in Kubernetes and on how we can monitor LibreOffice Online instances in Kubernetes using Prometheus and Grafana. The Online Office Editor is composed of several high-level modules, as you can see in this diagram. We have the front-end components, starting with the landing page, a custom component of the Online Office Editor.
As I said earlier, this component allows users to quickly browse to their most recently edited files, to browse through cloud documents in order to edit them, or to upload new documents to the cloud — so it offers features complementary to editing. When we select a file for editing, the loleaflet component is opened in a separate iframe, which allows us to start editing the file. These are the front-end components.

Moving on to the back-end side, we have a middleware, which acts as a proxy for HTTP and WebSocket requests between the front-end clients and the LibreOffice Online components. It is also responsible for authentication, load balancing and session management. We have, of course, the LibreOffice Online component itself, which allows us to edit files. And lastly, we have the storage adapter component, which implements the WOPI REST specification and facilitates the communication between LibreOffice Online and the cloud. This module allows files edited in LibreOffice Online to be uploaded to the Mail & Media cloud, and to be downloaded from the cloud in order to be edited.

What's important is that all Online Office Editor modules are deployed in Kubernetes, which allows us to scale the modules of the application based on our needs and on the traffic the application receives.

Let's move forward and see how we can do this — how we can deploy LibreOffice Online in Kubernetes. First of all, Kubernetes is an open-source platform which manages containerized services. One easy way to deploy a Docker container in Kubernetes is through Helm, which is the package manager for Kubernetes. Helm uses a packaging format called charts. As you can see in the online repo, a Helm chart is a collection of files that logically describes a related set of Kubernetes resources. The LibreOffice Online Helm chart can be found in the LibreOffice Online GitHub repository, under kubernetes/helm.
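To give a feel for what such a storage adapter has to provide, here is a minimal sketch of the core WOPI operations (CheckFileInfo, GetFile, PutFile) in Python. The in-memory file store, the file names and the user fields are illustrative assumptions, not 1&1's actual implementation — a real adapter would also handle access tokens, locking and the cloud backend.

```python
# Minimal sketch of the WOPI operations a storage adapter implements
# so that LibreOffice Online can fetch and save documents.
# The in-memory store and field values here are illustrative assumptions.

FILES = {
    "doc-1": {"name": "report.odt", "content": b"fake ODF bytes", "owner": "alice"},
}

def check_file_info(file_id: str) -> dict:
    """WOPI CheckFileInfo: GET /wopi/files/{file_id} — file metadata."""
    f = FILES[file_id]
    return {
        "BaseFileName": f["name"],
        "Size": len(f["content"]),
        "OwnerId": f["owner"],
        "UserId": f["owner"],
        "UserCanWrite": True,
    }

def get_file(file_id: str) -> bytes:
    """WOPI GetFile: GET /wopi/files/{file_id}/contents — raw document."""
    return FILES[file_id]["content"]

def put_file(file_id: str, new_content: bytes) -> None:
    """WOPI PutFile: POST /wopi/files/{file_id}/contents — save from LOOL."""
    FILES[file_id]["content"] = new_content
```

LibreOffice Online calls CheckFileInfo first to learn the file name, size and permissions, GetFile to load the document into the editor, and PutFile whenever the user saves.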
If you want to test this Helm chart, all the needed information is in the README in the online GitHub repository, and I will go through it quickly. Firstly, of course, you need a Kubernetes cluster; if you want to test locally, the easiest way is to install Minikube — all the needed information is at this link. You will also need to install Helm, version 3.0 or newer. Optionally, you can customize the deployment with some environment variables that will be passed to LibreOffice Online. You can set these variables in values.yaml — for example, the credentials used by the LibreOffice Online admin console, or the WOPI host domain that will be passed to LibreOffice Online. After that, you can install the Helm chart using this command, and the output of the installation will look like this.

So firstly, the LibreOffice Online Helm chart creates a deployment called libreoffice-online. This deployment creates a ReplicaSet, whose purpose is to maintain a stable number of Kubernetes pods at any given time. As you can see here, the ReplicaSet from the LibreOffice Online Helm chart is configured by default to create three LibreOffice Online pods and keep those three pods available at any given time. The list of pods can be seen here; the correlation between the ReplicaSet and the pods is made through the ReplicaSet ID — the pod name contains the ReplicaSet ID, so all three pods were created by this ReplicaSet. Besides the deployment, the ReplicaSet and the pods, one brief mention: the pod is the smallest abstraction in Kubernetes. For us, a LibreOffice Online pod contains the LibreOffice Online container, which is created using the official Dockerfile from the online GitHub repository. On top of all these resources, we have a service, which defines a policy for accessing the pods.
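The steps above can be sketched roughly like this. The chart path, release name and values keys below are illustrative — the authoritative key names live in the chart's own values.yaml in the online repo:

```shell
# Local test cluster (optional, for trying the chart out)
minikube start

# Helm 3.x or newer is required
helm version

# Override chart defaults; check the chart's values.yaml for the real keys
cat > my-values.yaml <<'EOF'
replicaCount: 3   # pods the ReplicaSet keeps available
# admin console credentials and the WOPI host are passed to
# LibreOffice Online as environment variables (key names illustrative)
EOF

# Install the chart from the kubernetes/helm directory of the online repo
helm install libreoffice-online ./kubernetes/helm/libreoffice-online \
    -f my-values.yaml

# Watch the deployment, ReplicaSet and pods come up
kubectl get deployment,replicaset,pods
```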
I think by default it uses a round-robin algorithm to balance requests across the LibreOffice Online pods. And one important resource — we use it at 1&1 Mail & Media — is the Horizontal Pod Autoscaler, which allows us to scale the number of LibreOffice Online instances based on our needs. The default LibreOffice Online Helm chart scales the number of pods based on CPU and memory utilization.

Let's talk about this feature in more detail, and about how the Horizontal Pod Autoscaler works in the official Helm chart that we've released in the online GitHub repository. Firstly, what's important to say is that the Kubernetes cluster comes with a Metrics Server, which is a cluster-wide aggregator of resource utilization data like CPU and memory. The Horizontal Pod Autoscaler queries this Metrics Server through the Metrics API for the CPU and memory utilization of a specific set of pods. When it detects that a specific threshold is met — a threshold that we set in the Horizontal Pod Autoscaler; as you can see here, the threshold was 70% on memory and CPU — it instructs the deployment, which in turn instructs the ReplicaSet, to increase the number of pods.

Here we can see the use case better. In our case, we set the memory threshold to 70% of the total amount of memory that the pods are allowed to use from our Kubernetes cluster. When the Horizontal Pod Autoscaler detects that this threshold has been met, it advises the ReplicaSet to increase the number of pods — in our case from 3 to 4. Here you can see the event from the Horizontal Pod Autoscaler, with the new pod count of 4 and with the reason: memory resource utilization above target. You can see the target and the current memory utilization in the Horizontal Pod Autoscaler: the target was 70% and the current memory utilization was 80%.
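As an illustration, a Horizontal Pod Autoscaler with the 70% CPU and memory targets described above might look like the manifest below. The names, replica bounds and exact structure are illustrative, not copied from the actual chart:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: libreoffice-online
spec:
  scaleTargetRef:          # the deployment whose ReplicaSet gets scaled
    apiVersion: apps/v1
    kind: Deployment
    name: libreoffice-online
  minReplicas: 3           # the chart's default pod count
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale up above 70% CPU
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 70   # scale up above 70% memory
```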
The Horizontal Pod Autoscaler instructed the ReplicaSet to increase the number of pods. Here you can see that the new desired number of pods is 4, and in the list of pods you can see that a new LibreOffice Online instance has been created, which helps us absorb the new traffic we are receiving. What's worth mentioning is that the Horizontal Pod Autoscaler resource in Kubernetes can be updated to use specific metrics that we are interested in — not only CPU and memory. For example, we can tweak the Horizontal Pod Autoscaler to check a custom metric that we expose from LibreOffice Online, such as active documents: when the pods reach a total number of currently edited documents, say 200, it instructs the deployment to create another instance in order to accommodate new edit requests. This Horizontal Pod Autoscaler, as I've said, is a very important Kubernetes feature, and at 1&1 Mail & Media we use it to accommodate the traffic that we receive on a daily basis.

Okay, so we have a lot of instances of LibreOffice Online in production. How can we monitor all of them, especially when the number of instances can increase dynamically based on the traffic we receive? The solution for monitoring LibreOffice Online instances was created using two open-source tools: Prometheus and Grafana. Prometheus is used to store metrics and Grafana is used to visualize them. As a short introduction: Prometheus is a time-series database that works by scraping monitoring endpoints, processing the results and storing them, and Grafana is a data-visualization tool that helps us build dashboards and graphs from metrics data. What's important is that LibreOffice Online exposes monitoring metrics in a Prometheus-compliant format on a REST endpoint, getMetrics.
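A custom-metric variant of the autoscaler, as described above, could be sketched like this — assuming the active-documents metric is made available to the custom metrics API (for instance via the Prometheus adapter); the metric and resource names here are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: libreoffice-online-docs
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: libreoffice-online
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Pods            # per-pod custom metric instead of CPU/memory
      pods:
        metric:
          name: lool_active_documents   # illustrative metric name
        target:
          type: AverageValue
          averageValue: "200"           # scale up past 200 open documents
```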
A description of these metrics can be found in the online GitHub repository, under wsd/, in the metrics documentation file. There are a lot of metrics. Some of them are related to the global system — the global memory available, for example. Some of them are related to process counts, for example counting the loolwsd, forkit and kit processes. And there are some document-specific metrics, like the document upload duration, the number of views for a document, and so on. You can find all the metrics with their descriptions in this file in the online GitHub repository.

Based on these metrics, we can define Grafana dashboards that look like this. In these three examples, we are counting the number of processes that LibreOffice Online started. As you can see, we have loolwsd and the forkit, each of which should have one process at any given time, and then we have the kit processes. From this dashboard we can deduce, for example, the number of edited documents. We can construct a lot of dashboards — Grafana offers us many features for building them; it only depends on what metrics we return from the getMetrics REST endpoint. We can also define alerts in Grafana that notify us when, say, there are two loolwsd process instances in a container, or something like that.

If we sum up this section, there are three important details to remember. You can deploy LibreOffice Online in Kubernetes using the official LibreOffice Online Helm chart that you will find in the online repo. You can leverage the Horizontal Pod Autoscaler to scale the number of LibreOffice Online instances based on your needs — as I said: CPU, memory or the number of active documents. And you can monitor all these LibreOffice Online instances deployed in Kubernetes using Prometheus, Grafana and the REST endpoint that returns the metrics in Prometheus format.

Reaching the end of the presentation, I want to go briefly through the contributions that 1&1 made in the past year, and then through the future contributions that we want to open-source.
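To give a feel for what such a scrape looks like, here is a small sketch that parses Prometheus's text exposition format of the kind the getMetrics endpoint returns. The sample payload and metric names are illustrative, not a real scrape from LibreOffice Online.

```python
# Tiny parser for the Prometheus text exposition format, of the kind
# returned by LibreOffice Online's getMetrics admin endpoint.
# The sample payload below is illustrative, not a real scrape.

SAMPLE = """\
# HELP loolwsd_count Number of loolwsd processes
# TYPE loolwsd_count gauge
loolwsd_count 1
forkit_count 1
kit_count 12
document_active_views_total{pid="4242"} 3
"""

def parse_metrics(text: str) -> dict:
    """Return {metric_name_with_labels: float} from exposition text."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):   # skip HELP/TYPE comments
            continue
        name, value = line.rsplit(" ", 1)      # value is the last token
        metrics[name] = float(value)
    return metrics

metrics = parse_metrics(SAMPLE)
# A Grafana-alert-style check: exactly one loolwsd process should run
assert metrics["loolwsd_count"] == 1.0
```

In production you would let Prometheus do the scraping and storage, and express checks like the assertion above as Grafana alert rules instead.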
1&1 Mail & Media started contributing to the LibreOffice Online project in February last year. Our first important contribution was related to WebSocket defragmentation — how to handle fragmented WebSocket messages. We've open-sourced some cross-site-scripting fixes in the loleaflet component, and we've also done some library updates in loleaflet. We've discovered and reported a CVE related to LibreLogo. After the LibreOffice Online conference this year, we've open-sourced an improvement to support TLS for the communication with storage. We've open-sourced the REST endpoint for admin metrics, which allows us to monitor LibreOffice Online instances with Prometheus. And we've open-sourced the Helm chart for deploying the LibreOffice Online application in Kubernetes.

What's next? As a short-term goal, we want to open-source a monitoring enhancement for segmentation faults: a new metric for the segfault crash count, which will allow us to monitor crashes inside LibreOffice Online. This will also require a change in the communication between loolwsd and the forkit, through a Unix socket. We also want to open-source CDN integration, to allow plugging in a content delivery network for serving loleaflet's static files.

All in all, the Online Office Editor would not have been possible without the contributions of the open-source community. So, with that, I want to say a big thanks to the LibreOffice community for all the patches that you contribute. And with that, I want to thank you for listening — I'm open to questions if you have any. If not, thank you very much.

Q: Do you think that you are the biggest deployment of LibreOffice Online? I've never heard of somebody doing it with Kubernetes before, so you must have lots of documents.

A: I don't know if we are the biggest, but in terms of deploying LibreOffice Online to Kubernetes, yeah, I think we are the first.

Q: How many documents are being edited in your system at once?
A: It depends on the portal and on the day. But based on our monitoring, we think we have 10–15,000 active users on a daily basis, and we can reach 400 documents being edited at the same time through LibreOffice Online.

Q: 400?

A: 400, yeah. But as I say, it depends on the day and on the portal. Web.de and GMX are portals with a large number of active users, so on those portals we do have an increased number of documents being edited on a daily basis.

Q: Cool. Any more questions? No? Thank you very much.