Hello. Thank you so much for coming to this presentation. I will be talking about using Online as the Android app. Before I get to what that means, I will show you a bit of the history of how LibreOffice evolved on Android. It is quite an old project these days; it is eight years since it all started. The first thing was actually to get something on the screen. Michael Meeks and Tor Lillqvist pioneered this, and there was a tremendous amount of work they had to do to make it build at all. First of all, it was necessary to add configuration switches to be able to cross-compile. Luckily, at that time there was some effort in LibreOffice to cross-compile for Windows, building on Linux for the Windows target using MinGW, so some preparation for cross-compilation already existed and could be reused. But of course there were many limitations. Then, once the code compiled, there were limitations on how to actually run it. The linker on Android at the time had a limit of around 96 dynamic libraries that could be linked together, so it was necessary to work around that. Matúš Kukan was merging libraries together at that time so that they became something bigger. A lot of work was needed to link the components into one thing: the components in LibreOffice are normally loaded dynamically, but it was necessary to compile them into the one application so that no additional dynamic loading happens when you start using it. There was a lot of fun with fontconfig, because it has to load files from the disk to be able to use the fonts for rendering things on the screen. And debugging at the time was a nightmare as well.
You had to connect GDB to the device in some remote way and try to debug there; to actually be able to debug, you had to add a timeout and do everything in the right order. So big thanks to Tor, and big thanks to Michael, for actually getting something running. And that something looked like this, which was awesome: the entire LibreOffice running on a tablet. But of course, you can see the limitations. When users want something touch-based, this is just not fit for it: you have the toolbars there, you have the menus, which are tiny on the device, and so on. So it was necessary to go further. The next step was rendering whole pages, which was something reasonably possible to achieve. I think it was Tomaž who was working on that, or somebody else; I don't recall that well, sorry. You were able to render an entire page of the document, have pages as previews, and show a page on the entire screen without the limitations of the UI around it. That was great progress, but it was not getting us anywhere near the editing that we needed. So the next step was to use the work that was being done in Online on providing the content of the pages and documents via LibreOfficeKit. You have probably heard many times about tiled rendering. Who of you knows what tiled rendering is? There are some who do not raise their hands, so I will explain. The idea of tiled rendering is that instead of re-rendering the entire screen when you want to update it, it is easier to partition it into areas. Normally, in desktop applications, when you type somewhere on the screen, some area of that screen is invalidated and re-rendered.
To make it easier to say which area of the screen was invalidated, we partition the entire document into so-called tiles, which are 256×256 pixel bitmaps. Then we can say: during this typing, only the third tile was changed, so request that tile again (say, the tile in the third column, first row) and send it again. It is easier that way for the client. The client here is the Android application; in Online, it is the web client, and it only gets a new bitmap and draws it on the screen. So even though it looks like text, what is actually shown on the screen is not text but a series of bitmaps. The idea here was to use this technique of tiles, and we needed some way to compose them to present the document. The plan at that time was to reuse code from Mozilla, who were using tiled rendering for their documents as well. Quikee took the tile compositor from Mozilla so that the Android device could show documents on the screen. Now it was nicely showing the document, and it was a great base for further work: when you were editing, you were able to request new tiles, get them, and so update the document. As the next step, in the following years more features were added to the toolbar and elsewhere in the Android app. But unfortunately, it turned out that the split between Online, where lots of development was happening, and the Android application was at too low a level: they shared LibreOfficeKit, but nothing above that. So every feature that was added to Online had to be re-implemented for the Android app as well. Many people did great work there, and many things were ported to the Android app, but it was just not catching up with Online.
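As a rough illustration of the tiled-rendering idea (this is a concept sketch, not the actual LibreOfficeKit code), mapping an invalidated rectangle to the set of 256×256 tiles that need to be re-requested can look like this:

```java
// Sketch of the tiled-rendering idea: given an invalidated rectangle in
// document pixels, work out which 256x256 tiles have to be re-rendered
// and re-sent as bitmaps.  Concept only; not the LibreOfficeKit API.
import java.util.ArrayList;
import java.util.List;

public class TileInvalidation {
    static final int TILE_SIZE = 256;

    // Returns (column, row) pairs of every tile touching the rectangle.
    static List<int[]> invalidatedTiles(int x, int y, int width, int height) {
        List<int[]> tiles = new ArrayList<>();
        int firstCol = x / TILE_SIZE;
        int lastCol = (x + width - 1) / TILE_SIZE;
        int firstRow = y / TILE_SIZE;
        int lastRow = (y + height - 1) / TILE_SIZE;
        for (int row = firstRow; row <= lastRow; row++)
            for (int col = firstCol; col <= lastCol; col++)
                tiles.add(new int[] { col, row });
        return tiles;
    }

    public static void main(String[] args) {
        // Typing changed a small 20x16 area at (600, 10): only the tile in
        // column 2, row 0 intersects it, so only that bitmap is re-sent.
        for (int[] tile : invalidatedTiles(600, 10, 20, 16))
            System.out.println("tile col=" + tile[0] + " row=" + tile[1]);
    }
}
```

The client never interprets the content; it just replaces the changed bitmaps on screen, which is what keeps the protocol simple.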
So very recently, the idea was to do the split not between the Java part and LibreOfficeKit, but much higher, and to reuse as much of Online for the Android app as possible. Tor pioneered that for iOS, and it turned out to be something that actually works. So why not do it for Android as well? Let's share code. What needed to be done? First of all, it was necessary to adapt how things are built, because so far the entire Android application was built in core.git: there is an android folder in there, and when you were building for Android, you set up configure to cross-compile and so on, and at the end you had an APK somewhere in the workdir, and you were fine. I wanted to build on top of this as much as possible, so I adapted this code so that, in addition to creating this APK, it produces liblo-native-code.so, which is basically all the merged code that is needed for the tiles to be drawn on Android. It is basically the entire LibreOfficeKit and all the code from LibreOffice that is needed for its functionality, available as a separate .so that you can link into something else. That was the first step. Then I wanted this to be convenient for people to actually use, so I created the Android project from scratch and structured it the way Android projects using native code are supposed to look in Android Studio. So in online.git there is an android subfolder, and this subfolder looks exactly like a normal Android project that just works in Android Studio using the standard tools. To be able to build it with these normal tools, I had to use CMake for building loolwsd: there is a CMakeLists.txt that lists the files from loolwsd that need to be compiled into one thing.
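The build setup just described might be sketched roughly like this; this is an illustrative fragment only, and the real CMakeLists.txt in online.git's android folder differs in its details, file names, and paths:

```cmake
# Illustrative sketch: compile the loolwsd sources needed on Android into
# one native library, and link against the prebuilt LibreOffice core.
cmake_minimum_required(VERSION 3.4.1)

add_library(androidapp SHARED
            androidapp.cpp
            ../../common/Log.cpp
            ../../common/Protocol.cpp
            # ... the rest of the loolwsd sources ...
            )

# liblo-native-code.so is the merged LibreOfficeKit built in core.git.
add_library(lo-native-code SHARED IMPORTED)
set_target_properties(lo-native-code PROPERTIES IMPORTED_LOCATION
                      ${LOBUILDDIR}/android/obj/${ANDROID_ABI}/liblo-native-code.so)

target_link_libraries(androidapp lo-native-code android log)
```

The point of the IMPORTED target is that Android Studio's normal Gradle/CMake build can consume the library produced by the core.git build without rebuilding it.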
And it links against the liblo-native-code.so from core.git that you have built elsewhere, and you can directly press Build or Run in Android Studio and it builds it all together. The inconvenient step for people who are not used to Android development is the first one, actually building liblo-native-code.so in core.git; but everything on top of that works the way people are used to. A side effect is that debugging the native part is just much easier than it was years ago. Directly from Android Studio you can open the C++ code, see it, and start debugging the app; it just works. Of course, for code outside of loolwsd you have to have the symbols for liblo-native-code.so as well, but it is not necessary to have them in the APK. It is possible to create the APK with stripped symbols, and then, as described in the README, you point Android Studio to the version of the library that has the debugging symbols. Then you can set breakpoints directly in the Android Studio user interface and all these things just work for you. The next step was to create the minimal application. Gulshar has done lots of work on this. The UI for showing the document is actually very simple: you have only a WebView over the entire screen of the Android device, and in this WebView you run the JavaScript from Online. Again, Tor did lots of work on this JavaScript so that it is prepared for this. There is a fake WebSocket in the JavaScript so that we do not have to use normal WebSocket communication; instead, we talk directly to this WebView. And there are two directions in which you need to send messages. From JavaScript into native code, you use a JavaScript interface.
You specify which object is supposed to get the messages and a handler name that you can then use in the JavaScript to access it. In the class of that object, you have to annotate the methods that can be called with @JavascriptInterface. Then in JavaScript you can use it directly, calling something like postMobileMessage() on that handler, and things work. It is more involved the other way around, from native to JavaScript, because you cannot just call JavaScript methods directly. Instead, you load a URL starting with javascript:, and what follows will be parsed by the JavaScript engine and executed. That is how you call from native to JavaScript. Then lots of functionality had to be ported from the old app. What I described so far was just the editing part, but the old app in core.git had much more than this. There was the initial shell where you have the recent documents and so on; that had to be ported. It was necessary to associate the files the same way the old application did, so that when you install the app and, for example, tap on some file in your email client, Android knows that this Online-based Android app can handle these files. Of course, some things are necessary hacks; or rather, when it is the only solution, it is not a hack, I don't know. For example, for fontconfig to work, you have to provide its files as assets, but then you have to copy them to the storage of the Android app so that the native code can use them. Then there were some additional activities and fragments inside the application, like showing the license and notices, and of course there were some settings. And then this year, Kaishu Sahu did just an amazing job helping with this.
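The two-way bridge described above can be sketched as follows. This is a plain-Java model so it can run outside Android: FakeWebView stands in for android.webkit.WebView, MessageHandler for the object registered via addJavascriptInterface(), and the handler and message names are illustrative, not necessarily the ones the app actually uses.

```java
// Plain-Java model of the WebView bridge, runnable outside Android.
// On a real device, MessageHandler would be registered with
// webView.addJavascriptInterface(handler, "SomeHandler") and its method
// annotated with @JavascriptInterface; all names here are illustrative.
import java.util.ArrayDeque;
import java.util.Queue;

public class BridgeSketch {

    // JS -> native: JavaScript calls SomeHandler.postMobileMessage("...").
    static class MessageHandler {
        final Queue<String> received = new ArrayDeque<>();

        // would carry @JavascriptInterface on Android
        public void postMobileMessage(String message) {
            received.add(message); // hand the message over to the native code
        }
    }

    // native -> JS: there is no direct call; the native side loads a
    // "javascript:" URL, which the WebView parses and executes.
    static class FakeWebView {
        String executed;
        void loadUrl(String url) {
            if (url.startsWith("javascript:"))
                executed = url.substring("javascript:".length());
        }
    }

    public static void main(String[] args) {
        MessageHandler handler = new MessageHandler();
        handler.postMobileMessage("uno .uno:Bold"); // as the JS side would

        FakeWebView webView = new FakeWebView();
        webView.loadUrl("javascript:onFakeWebSocketMessage('tile: ...');");

        System.out.println(handler.received.poll());
        System.out.println(webView.executed);
    }
}
```

The fake WebSocket on the JavaScript side means the Online code keeps its usual send/receive shape, while both directions actually go through this bridge.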
He added a great list of features to this app: print support; slideshow, so that you can start the slideshow and see it on the screen; inserting images; sharing documents; save as under a new name; and permissions, so the app asks for permission to access the storage, and it explains this to the user when they initially decline the dialog. There were launcher shortcuts, and support for more document types. And, very importantly, dimming the document when inactive: normally you have the choice between keeping the document always on the screen or dimming it according to the Android timeout, but with this we actually listen to the dimming messages that come from the Online code and then trigger the dim from Android. Of course, when you wake it up again, the timeout starts from the beginning. And a tremendous amount of bug fixing. Getting the lifecycle correct, or at least to the state where it is now, was a big struggle, because there are so many threads going on in there, and aligning them with Android's lifecycle, how the application is supposed to be started, stopped, and how it is supposed to behave, took many tries. Not constant work, eight hours a day, but it was a struggle. Then various other small fixes. Start-up time: fontconfig was taking a tremendous amount of time on the first start-up because it was trying to cache all the Noto fonts that are on the Android device, and there are many of them. I just disabled that, so start-up after that is nearly instant. Of course, there were some crashes and some other small things. So you probably want to see how it works. I recorded a video, because I was afraid that I would not be able to stream it directly to the screen. When it starts, you see the shell. Now I have tapped on one of the documents, and it is loading it.
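The "dim when inactive" behaviour described above can be modeled as a resettable countdown. This is a toy sketch with a logical clock and illustrative method names, not the app's real API: Online reports idleness, the app dims, and any user activity restarts the timeout.

```java
// Toy model of "dim when inactive": Online signals idleness, the app dims
// the screen, and any user activity resets the countdown.  Time is a plain
// long parameter here so the logic runs without Android; names illustrative.
public class DimSketch {
    static final long TIMEOUT_MS = 30_000;
    long lastActivity = 0;
    boolean dimmed = false;

    void onUserActivity(long now) {   // e.g. a touch event wakes the screen
        lastActivity = now;
        dimmed = false;               // the timeout starts from the beginning
    }

    void onTick(long now) {           // e.g. an idle check / message arrives
        if (now - lastActivity >= TIMEOUT_MS)
            dimmed = true;
    }

    public static void main(String[] args) {
        DimSketch s = new DimSketch();
        s.onUserActivity(0);
        s.onTick(29_000);
        System.out.println(s.dimmed); // still within the timeout
        s.onTick(31_000);
        System.out.println(s.dimmed); // dimmed after inactivity
        s.onUserActivity(31_500);     // waking up resets the countdown
        s.onTick(32_000);
        System.out.println(s.dimmed);
    }
}
```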
Now it is shown on the screen; you have to start editing by using the pencil button. Then I was trying to type something. Of course, I am slow at typing on the device, but you can see that it actually does something. You can even insert an image, which is what I tried to show here as well. So now I am inserting the image. It asks me which one; some random image, well, the only usable one that I had there. It inserts the image, and when I finish, it autosaves. So when you restart it again, you will see that the modifications are there. That's it for the demo, I think. And future steps: there is obviously more to do. As you have seen, the text input is not ideal; there are some lags. Part of that is that it was a debug build, so there is a lot of logging going on and so forth, which adds to the time that it takes to actually type something. But I have a suspicion that parsing this JavaScript stuff and passing big strings through it takes a lot of time. I have to measure whether that is really true; and if it is, it is possible to actually have a WebSocket open inside the app on Android, so I would be able to use that for the communication as we do in normal Online. Also, sometimes the document doesn't load: when you trigger opening a document, you get a JavaScript error that the document couldn't be accessed. Some timing issue, most probably; I don't know, I have to debug that. Then the document creation code: the old way is that it just copies some template file to the new name and then opens it. In the meantime, we have the possibility in Online to use a template operation and start the document directly from a template. And maybe some more fixes, because there is always stuff to fix. So that's it from me. A big thank you for listening, but even bigger thanks to the people who were working on this.
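The old document-creation path mentioned above amounts to a plain file copy. A minimal sketch, with entirely illustrative paths (a temporary file stands in for the bundled template):

```java
// Sketch of the old "new document" path: copy a bundled template under the
// new document's name, then open that copy.  Paths here are illustrative;
// a temp file stands in for the template shipped with the app.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class NewFromTemplate {
    public static void main(String[] args) throws IOException {
        Path template = Files.createTempFile("blank", ".odt");
        Files.write(template, "template-bytes".getBytes());

        Path newDoc = template.resolveSibling("Untitled 1.odt");
        Files.copy(template, newDoc, StandardCopyOption.REPLACE_EXISTING);

        System.out.println(Files.readAllBytes(newDoc).length > 0);
        Files.delete(template);
        Files.delete(newDoc);
    }
}
```

The template operation in Online would replace this copy step by starting the document directly from the template on the server side.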
I list here the people who helped me with this on the Online side, but of course it builds on lots of work done by Tor, Quikee, Cloph, and other people. So thank you so much. If you want to get involved in this, it will be familiar to you: if you have any experience with Android development, you will see that it is easy to do, and the steps for the parts that are not that easy are described in the Android README. So have a look at it and try it yourself. That's it from me. Thank you so much.