So we're here to present our work within the Document Foundation tender. The work is about improving our help content by enriching the help text with screenshots, making it more sustainable, and automating the process. First I'm going to explain why we should bother in the first place and what benefits this brings us. Then I'll describe how we did it and what we have implemented so far, and I'll outline some future steps, because the work isn't quite finished yet. So, has any one of you ever read a LibreOffice help text? Raise your hands. A couple of hands, nice. Of those who raised their hands, did anyone find the text really helpful? Sometimes. Well, especially if you're a programmer, it's often "only if all else fails, read the help". And I have to say our help content doesn't really inspire much confidence, does it? This is how an average help text looks: not very helpful, and it doesn't look good at all. We can do better. For example like this, which is approximately the same help text. On my right, probably your left, the first one is the standard help you get from pressing F1, and the other is the LibreOffice user guide. And we can do even better still: we can create screenshots, annotate them, interconnect them with the text, and make navigating them easier for users. But as Olivier, who's sitting over there, can confirm, the whole process of including images into the help text is very tedious. So far it's purely manual work and can't be automated in any way. This is where the Document Foundation tender came in: the aim of the tender was to change that.
And the other problem is that the help text is layers and layers of legacy code and obsolete workflows; it's not sustainable long-term, and the technologies are so ancient that it's no longer maintainable. Our help content is in a self-invented format, probably a case of not-invented-here syndrome: none of the existing formats was good enough, so we invented our own. If you see XHP files somewhere in the LibreOffice code base, those are the help files. But what you get when you press F1 in LibreOffice is actually HTML: the XHP format, a kind of XML, is run through stylesheet transformations to obtain the HTML you see in your help browser. And it doesn't stop there. If you're a LibreOffice contributor and you want to contribute to the help text, you check out the help content repository, and then you realize you'd like to include an image, and you go looking for where the help images are, and you will not find them in the help content repository. You have to go to the core repository, into the icon-themes folder; that's where the help images are. If you plan to include a large amount of graphical content into the help, this is of course not sustainable; it can't stay like this. And as I said, actually enhancing the help content with images is tedious because it has to be done manually, separately for every platform and separately for every language. It's not easily repeatable and it's not efficient. So what would the requirements be? How to improve the state of affairs? As I said, the whole process of including the images has to be repeatable; anyone should be able to do it.
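To give an idea of the format, a minimal XHP fragment might look roughly like this. The element names and attributes follow the XHP schema, but the ids and text here are invented for illustration:

```xml
<paragraph role="heading" level="1" xml-lang="en-US" id="hd_example">Sort Options</paragraph>
<paragraph role="paragraph" xml-lang="en-US" id="par_example">Specifies additional
sorting criteria for the selected cell range.</paragraph>
```

An XSLT stylesheet turns markup like this into the HTML that the help viewer displays.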
There shouldn't be much room for manual interaction. And as we all know, the UI of LibreOffice changes very frequently, and updating the help text is not the first thing developers usually think about, so the screenshots get obsolete very quickly. It would be ideal to get a new set of screenshots for every dialog with every release, which is what this tender was about as well. The screenshots should be platform-specific, so we should have different sets for Linux, Windows and Mac, and we should also be able to produce screenshots with different icon sets and different themes, and of course for the different languages as well, so that if you're reading the German help text you don't get the English screenshots. That makes a lot of sense. So what have we done, how did we achieve it, and what are we going to do? The first logical step to produce screenshots of the LibreOffice dialogs is to actually open the dialogs, automatically, and this has to include every dialog. There is already a framework that does something similar to opening the dialogs: the UI testing framework, implemented in PyUno, written by Markus (moggi). It opens dialogs by dispatching the corresponding UNO commands. We evaluated this framework and tried whether we could use it for our purpose, but at the end of the day we decided against it, for the reasons listed on my slide. There were heavy performance issues; Markus has complained himself that running a couple of UI tests takes a very long time. And it was quite challenging to debug in Python. So we suffered from not-invented-here syndrome and implemented our own framework.
Well, not entirely our own: we decided to build on CppUnit tests, which solved the debuggability issue, because at least for us it was easier to debug C++ in a CppUnit test than to debug Python with UNO. What we're doing is opening an empty document in a CppUnit test. Every document comes with a document shell, and most of the document shells come with an abstract dialog factory; at least Writer, Calc and Impress do, Math for example doesn't. Some dialogs can be opened just like that, but some need input data: SfxItemSets, strings, or whatever else they need. Our framework fakes those data, creates those objects from scratch, supplies them through the abstract dialog factory, and then opens the dialog. And I think Armin can now tell you some more details. Yeah, thanks. That's right, Armin here. Bubli knows much more about the help stuff; I don't know as much about that, but I was digging into VCL and how we could do the screenshots. We tried to stay as close to the current repaint as possible. The problem is that about 20% of the dialogs need some special data directly bound to the application they appear in, and for each of them we had to instantiate some stuff in the unit test to get them running, which is a lot of work. So we added a fallback to the standard dialogs with the standard instantiator, which takes the UI description file, and by default all dialogs are now added to this mechanism to get a rough screenshot; if you want more, you have to do more. It's handwork which has to be added. As for the screenshot itself, for performance reasons, and to get a screenshot as close as possible to how the dialog appears in the application,
we try to stay as close to what VCL is doing as possible, and that means asynchronously waiting for the repaint. It's not easy to find the right point in time to get a clean screenshot, but at least we managed, and the code is already in. There's a new build target, screenshot, and when you build it, about 500 screenshots get created in the work directory. That's my next slide: if you run make screenshot, you get the screenshots of all dialogs of all UI files in the workdir screenshots folder. We excluded those that are not based on .ui files; last time I counted there were about 169 dialogs all together, and the structure in which the screenshots are collected closely mirrors the structure in which the .ui files are stored. One addition regarding the different systems: there were quite a few small problems, and we had to exclude five or six dialogs which currently do not open even with the fallback, but compared to the about 500 screenshots which do get created, that's not so bad. So now we have collected an insane amount of screenshots; what are we going to do with them? Put them into the help text. The first logical step is to copy the screenshots to where they would actually be expected. I tweaked some Perl scripts; packaging images in LibreOffice is a very complex process, where Perl scripts index some directories and produce file lists in a special format, which are processed by yet another Perl script, and only those are then zipped together into the zip archives containing all the images, icons and whatever else in LibreOffice. So I extended those insane scripts to also process the content in the help content folder.
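The mirroring of the .ui file layout can be sketched as a simple path mapping. This is a hypothetical illustration; the real build system's exact output directory and naming scheme may differ:

```python
import os

def screenshot_path(ui_file, outdir="workdir/screenshots"):
    """Map a .ui dialog description file to its screenshot location.

    The output tree mirrors the source tree, e.g.
    cui/uiconfig/ui/aboutdialog.ui ->
    workdir/screenshots/cui/uiconfig/ui/aboutdialog.png
    (an assumed layout, for illustration only).
    """
    # Strip the .ui extension and re-root the path under the output directory.
    base, _ext = os.path.splitext(ui_file)
    return os.path.join(outdir, base + ".png")
```

Keeping the two trees parallel means a screenshot can be located purely from the name of the .ui file it was taken from, with no extra bookkeeping.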
So nowadays, if someone wants to write a help text and is looking for the images, they will find all the help images in the help content folder; they don't need two repositories, one is enough. So the images are packaged, and now we have to embed them into the actual help files. I can only express my endless frustration with the Help Authoring extension, which I used for embedding those images, but in the end I somehow managed. The good thing is that the images are embedded in those help files through a special LibreOffice image URL, so as long as we create a matching directory structure for the different languages, we get localization for free. We don't have to extend the file format, the markup we use in the help files, and we get the localized images for free. The next step, if you recall the slide, is to actually highlight and annotate specific controls or specific parts of the dialogs and show them interconnected with the actual help text. I'm hopefully going to show a demo of that, and I sincerely hope it won't crash. It's a work in progress, already checked in on a feature branch but not yet completely finished. How can I... oh yeah, that's coming, okay. The idea was to be able to make a screenshot of every dialog directly. So when you open any dialog and have the experimental features enabled... my mouse is running away... so when you open any dialog and click here, somewhere close to the Help button, you get a funny context menu like this, and here I can choose that I want to create a screenshot. Well, this is a bug, it somehow comes back in the background, but here I can actually click and choose to highlight specific controls. And in the text area underneath, which isn't finished yet, there will appear some snippet of HTML or XML which I can simply copy, paste and embed into the help file.
This currently works with every dialog which has OK and Cancel buttons below. Doing it via the context menu was just an idea to avoid wasting space with another button or something, so it will work everywhere. And I have experimentally added the other buttons, which can actually be triggered from the context menu; it works. As I said, there are still some small bits and pieces to be finished. What we still have to do is improve the markup and the process of annotating those widgets, work out how to best interconnect that with the existing help, and, this was an idea both for the design team and for Olivier, possibly include it in the Help Authoring extension. So if you're editing a help file in the Help Authoring extension, you could click some button or go somewhere and say "I want to embed this screenshot now", and that would embed the existing screenshot. And the last mile is that currently we're generating the files on all platforms, but they're all stored in one folder, so we have to improve the structure to have different folders for different platforms, and perhaps also for different icon sets or widget toolkits or whatever else. To conclude my talk, I would like to thank the Document Foundation and all the donors who donate money to LibreOffice, because they actually funded our work on this. And I thank you for your attention; if there are any questions, there might also be answers. Oh, just one note: it works recursively, you can make a screenshot of the screenshot annotation dialog.