You can also see it on YouTube, so the streams are available both from the website and via YouTube; it should be possible for everyone, no matter where they are, to get a good recording and a good live stream. Italo, are you the first speaker in the opening session? Yes, I'm the first, the second and the fifth. Okay, so I will introduce the opening session and then give you the word. Yes, I will start sharing my screen, just closing the other applications on the PC. I just rebooted the computer, you know, for safety purposes. I have three machines: one is recording room one, one is recording room two, and the third is the one I'm using to speak and present. A quick question, Guilhem: the statistics module, do I need to reload that manually, or is there a reload integrated? No, it's static, unfortunately. No problem, if I know it, it's fine. No problem at all, I just want to monitor the number of participants a bit, so we can also share that later. So we are on air on the website, we are on air via YouTube, and here via Jitsi. The conference talks and speeches are also recorded. It's good to see all of you, even if just virtually again this year; it's good to have all of you around. The same for me; let's hope that next year we will be able to meet again in person. Yep, same here, seconded. Gabriele, I will start recording the room a couple of minutes before noon, so that we can start the session and you will be recorded as well. Okay. A little organizational hint: if you're speaking live, I recommend turning off or muting your mobile phones and your landline phones. A good suggestion; I had already muted my mobile, but I didn't remember to mute the landline one, so thank you. I had that experience once during a recorded interview.
Luckily it was just recorded; normally I always turn it off. Like, okay, I forgot something, can we redo that part? And of course, follow the LibreOffice Twitter account for more handy conference tips. Gabriele, when I'm almost at the end of the opening session, which is not going to be very long, I will hand over to Lothar for his introduction, and Lothar will hand over to me again, because the last slide, before the session is over, is the announcement of the technology and ecosystem logos. Okay. So when the session is over, I will hand over to you. Okay, but I guess I will have to do the same again, meaning that the next speech, at half past noon, is yours. Yeah, maybe there are comments or questions, I don't know. I think we have to start on the hour, because the schedule is quite packed, so about a minute from now it begins. Yeah, I have started recording now. Make the countdown. Yes. I have one of those old analog wall clocks, so it's 35 seconds to go, and now we have 30. I'm looking at the clock of the computer, which should be synchronized with some time servers. Countdown: it's now five seconds to go. Okay. Two, one, zero. So, welcome to the LibreOffice Conference 2021. I'm really honored, and it's a great pleasure, to introduce this conference this year. Unfortunately, it is still online and not in person, but as we were saying before, we hope to be able to have an in-person meeting next year. Let's hope so, and in the meanwhile, I wish you all to enjoy this conference, even if online. I can't say good morning, because worldwide it's not morning everywhere. So, the very first session is the opening session, and I see all the speakers are connected and online. So it's again my pleasure to hand over to Italo Vignoli, who is going to introduce the LibreOffice Conference 2021. Thank you. Good morning, or have a nice day, everyone. Welcome to the LibreOffice Conference. Hopefully, your language is on this slide.
Welcome, everyone, to three days with a lot of presentations. First, thanks to the conference sponsors: Collabora and allotropia, who are the two main sponsors; the Linux Professional Institute (we will announce later the start of a collaboration with the Linux Professional Institute about certification); and then Omnis Cloud and Carbon.io, who are also sponsoring the event. Thanks to them: although it's a virtual event, it is important to be backed by companies in any case. About the schedule, a few pieces of housekeeping information. You can find the schedule at those links, and at that link you can download the mobile application for Android. To find the links, you can go to our blog: during the last couple of days we have published a post about the conference, and you can find the active links there. In terms of streaming, we are streaming on YouTube and on the conference website. The YouTube links are provided just before the start of the conference; please check them regularly, at least once a day, as they might change. For macOS users: the current Jitsi version does not work with Safari, so if you have a Mac, you should use Firefox or Chrome. We also have discussion channels. The room channels are meant for asking questions and commenting on the presentations. Then we have a main chat, and we have a broadcast channel for the conference. The broadcast channel is only on Telegram, while the discussion channels are on Matrix, IRC (Libera.Chat) and Telegram. Now, moving on to some information, a couple of numbers. There will be a session about the state of the project, with more numbers, tomorrow. This slide is from Statista and is based on analyst numbers. We are in 2021, and as you see, the office suite market is just over $25 billion. The estimate is that it will reach over $31 billion by 2025. This shows that it is a dynamic market; the growth is not huge, but it is solid. In any case, $30 billion is a quite significant figure.
This slide is from September 2017, from a Spiceworks report; analyst numbers are more or less unchanged since then. As you can see, open source productivity suites have around 16% market share. Of course, the numbers add up to more than 100%, because several users are using more than one software package. What is important is that open source has quite a sizable market share. Based on recent discussions with Microsoft people, they have said (this is not official, just verbal) that LibreOffice is the most requested open source application among Windows 10 users. This gives a good idea and a good perception of our positioning. Let's talk about the project and the community. Last year we had a rather large discussion about how the project is structured, managed and sustained. We have a big community, with volunteers and ecosystem companies, and I would say that we are all important. Ecosystem companies are important because they allow LibreOffice to be in the enterprise world. Volunteers are important because they provide features to LibreOffice, as their development is integrated into the master source code. Volunteers are extremely important because they are the active community of people contributing to LibreOffice. We have a large number of volunteers contributing to localizations: LibreOffice is the software localized into the largest number of languages worldwide, with 119 available languages. We have volunteers contributing to documentation, and documentation is at the moment closely following the announcement: we already have a couple of new contributors available, just one or two months after the announcement. There is a large number of volunteers contributing to the user interface and to graphics, and doing marketing and advocacy at the local level. Volunteers are key for the project. These are the numbers of the community based on our dashboard, so everyone can check these numbers over the last five years.
The blue bars are casual contributors, the purple bars are regular contributors and the reddish bars are core contributors. As you can see, although there are seasonal variations, our community is rather stable. We have a number of regular and core contributors, and we have a number of casual contributors who in some cases become regular or core contributors, and in some cases do not; but this is how it is with open source software. Some numbers to add: based on data from Git, developers sponsored by ecosystem companies provide 68% of the activity on the source code, and volunteers 28%. This is just quantity, not quality; of course, the quality of our contributions is outstanding. Based on donation numbers, 90% come from individuals and 10% from small and medium businesses. This means that we don't have donations from large enterprise users, and based on the data that we have, less than 5% of all LibreOffice enterprise users (and it looks like there are quite many of them) are contributing back to the project. To contribute, they could donate, or they could buy products or services from ecosystem companies and certified professionals. This does not happen as frequently as it should, and this is one of the issues that we are facing and trying to overcome with actions in the marketing plan. Again, these actions will be explained during tomorrow's session. So who pays for LibreOffice development? 68% is paid for by customers of ecosystem companies. These are enterprise customers, but they are not as many as they should be. We know that there are banks, governments and other large institutions and organizations worldwide using LibreOffice. We do not expect all of them to pay for some development or some activities, but we expect some of them to do so. 28% of development is provided by volunteers and paid for with their own time. This is absolutely key.
Here I mentioned just UX, localization and documentation, but as I said there are volunteers active in many other areas, which are as key as these three. So this is just a hint of the data that will be shared tomorrow. And now it's my pleasure to call Lothar Becker, the chairman of The Document Foundation, for the opening address of the conference. Thank you, Italo. Let me quickly switch on the camera. I hope you can hear me and see me. Thanks to Italo, and dear participants of our LibreOffice conference this year: my name is Lothar Becker, and as chairman of The Document Foundation it is a real pleasure for me to warmly welcome you all, wherever you are participating in this conference. Most of you know that we normally hold our conference as an in-person meeting, but as we all experience another year of the worldwide pandemic, we decided once more to have an online version of the conference this year, after the one last year. I think we all sorely miss the opportunity to meet each other in person, even though the possibilities of video conferencing are now widely established; perhaps, with this establishment, we miss it even more. Because even with all the possibilities of virtual meetings, it is my impression that a personal handshake with another participant, a private talk about know-how or experience with LibreOffice, a spontaneous gathering of people around any common topic that arises during the conference, or just a beer or a coke with a good old friend from our community is sorely missed. Besides missing all these in-person and local community activities, it was also hard, for example, for the board to have no opportunity in the whole term to meet in person. And to be honest, not every interpersonal effect can be covered by video conferencing; but I think the annual report of the year 2020, our jubilee year, showed a variety of activities and contributions, as Italo showed in the numbers, which filled the year instead of personal meetings.
And nevertheless, it shows that it was one of the most successful years for the foundation. And so, in the last year, the organizers of this conference again worked hard on all possibilities to create opportunities for joining virtually, here and in all the talks. I want to encourage you not just to participate in the talks within a great and packed schedule, but also to use the different opportunities to communicate with each other besides the talks. There are possibilities via chats, virtual lounges, and asking questions during and after the talks to get in contact. So for this, let me thank the orga team here, which has been working for months to bring this on stage within all these circumstances. Sorry for not mentioning all of you by name, as I probably do not know all of you; so thanks to all of you for your passion in having this conference, online again nevertheless. The team has also made some great videos of the main sponsors, and there is a separate opportunity to meet them in private rooms to get in contact. By the way, also from my side, let me say a huge thank you to our sponsors this year, who made it possible again to join in this setting for the conference. Thank you to Collabora, thank you to allotropia, thank you to the Linux Professional Institute, thank you to Omnis Cloud, and thanks to Carbon.io. Please visit their websites and get in contact with them for more information on their products and services, or even for open job opportunities. But let's also see the good side of such an online event: we have around 200 pre-registered participants from all over the world, even from time zones where the sessions are held during the night. The number of participants is probably the highest we have ever had at a LibreOffice conference, and this is a good sign, a good sign that we as an international community are surviving this really hard time of the pandemic.
And it will be a great boost for having in-person meetings again, whenever this will be possible and wherever we will do it. But is everything online? No: as in the well-known tale of Asterix and Obelix, a little resistance group of well-known community members is meeting in person in Hamburg, the place where the roots of our beloved software lie. So best greetings to all participants in Hamburg. I tried to come to Hamburg as well, but for personal reasons I was not able to join. So I hope you in Hamburg also enjoy the conference and the personal meetup. Have a great time, all of you, at the conference. Thanks for taking part, and now let the show begin: see you in the different sessions, in the different rooms. Come together, have fun, be proactive and contribute. You all will rock it. Thanks, and back to Italo. Okay, thank you. And I have the last announcement of the opening session. We promised to have logos for the LibreOffice technology and the LibreOffice ecosystem as part of our marketing plan, and now you can see both of them. The author of both is Elisa de Castro Guerra. I think she made a couple of proposals, and other people made proposals as well, and of course we thank all of them for investing their time in developing these logos. But I think that Elisa was able to really summarize the concept in one image, using the LibreOffice icon to represent the concept of the technology which is unique to LibreOffice: the tight integration of the LibreOffice engine in every LibreOffice flavor. So we have the same engine on the desktop, in the cloud, on mobile and on different platforms. And the ecosystem logo represents the fact that we have an ecosystem of equals, represented here by a number of LibreOffice logos, each one with its own characteristics, as underlined by the different color of the triangle. Again, thank you to all the people who contributed to this logo contest. Thank you to Elisa.
I sent a message to Elisa 10 minutes before starting the session, because we really wanted to have this as a surprise for everyone. We will work during the next week to produce the different versions of the logo which are necessary to use it, and we will announce it with a blog post where we will also explain how and where to use the logo. Everyone in the community is invited to use the LibreOffice technology logo, while the LibreOffice ecosystem logo will be more for the ecosystem companies, that is, companies that add value to LibreOffice by providing value-added services on top of it: trainers, migrators, developers, support organizations, and there are quite many of them, will be able and invited to use this logo. So thank you again, everyone, for listening to this opening session, and I hand over to Gabriele, who is the moderator of the session, before telling you in a few minutes about LibreOffice technology. Thank you again. Thank you, Italo, for your kind words, and I wish to thank personally, and I would say also on behalf of many persons in the chat room, Lothar for his kind and heartfelt words, especially about the in-person meetings. We are glad to get to know that there is a small in-person meeting in Hamburg, which is again at the roots of our beloved software, and it's a good sign that something is changing; hopefully the pandemic is slowly coming, I wouldn't say to an end, but at least to a point where it can be handled, and we are returning more or less to regular life. At the moment there are no questions in the room. So I would say, Italo, if you could please share the slide with the links for the various rooms, room number one; you had this slide with the URLs for Telegram as well as Matrix and IRC. And thank you so much, because at the moment there are no questions. Yeah, I think there it is indeed.
So if there are any questions, I'm monitoring the rooms and the various chats, but there are no questions at the moment, which is somewhat expected, just because this is an introduction and an opening session, so there is no particular topic, probably. Anyway, I found the numbers really interesting; even though I know you know them so well, every time it is interesting and also surprising to see them mentioned, especially, for example, the number of languages into which LibreOffice is translated, as well as the market share and the many other numbers you showed, which are good signs of a growing community and a growing project which never stops, hopefully, and we strongly believe in it. So, no questions at the moment. If I can make just one addendum to the slide that we saw this morning, European time: we set up a separate, or additional, Jitsi room for those who want to test their video equipment or for those who just want to chat and socialize a bit. There is the LibOCon social channel; you find all the details on the conference website. But if you want to see the others in person, so to speak, with video, or if you want to test your equipment, you can go to that channel, which is neither streamed nor recorded; all the details are on the conference website. Thank you so much, Florian. I have not a question, but I think it deserves to be read and mentioned: a user is, let's say, thanking us with precious words for such a great free and open source software: "I really love LibreOffice. Thank you." I hope it's the right pronunciation. I'm still available for any questions from the audience; you can see here, in room one and on the slide, how to submit those questions if there are any. Otherwise, let's say that within three minutes we will be ready for the next session, which will again be from Italo Vignoli, and its title is "LibreOffice Technology, a fast platform for personal productivity". In the meanwhile, I could try to ask a question myself.
Lothar spoke about this year's conference, and also last year's, which were online, and about the hope to meet again in person, hopefully soon, which is something I also really strongly hope for. But the question is: if next year we have an in-person meeting, could we in the meanwhile stream the conference itself on the web, so that we take the best of the two worlds, the physical world with the in-person meeting and the online meeting, having even more participants and an even bigger audience? Are we willing to do that? Lothar? Sorry, Gabriele, I was looking at my messenger in the different rooms; could you recap your question, please? Yes, quickly: I was just wondering and asking if, for next year, hoping that we will be able to have an in-person meeting, we are going to stream it online anyway, so as to take the best of the two worlds. Well, this idea is for something like a hybrid conference, I think you are suggesting. Let's see; I think that we have a lot more experience now with streaming such a huge conference via the internet, and I think this will be the new normal in the future, to have both in place. But I think one of the positive effects will be that we now know how important it is to meet in person, so that we will also have a lot of participants in person at the next conferences. I really strongly hope so, and I look forward to that. So it's now time to introduce the next session, the next speech, which is again from Italo Vignoli: "LibreOffice Technology, a fast platform for personal productivity". Okay, thank you, Gabriele, and I will try to go quickly through this presentation. Let me stress the importance of the LibreOffice Technology concept: this is something that makes LibreOffice unique in comparison with other office suites, not only proprietary but also open source ones. Let's look quickly at the personal productivity market.
Before 1995, documents had to be printed in order to be shared, so there was a big obstacle to sharing data, which was the fact that documents had to be printed. In the next 10 years, between 1995 and 2005, there was an evolution from sharing printouts to sharing digital documents. Of course, the turning point was HTML, because HTML decoupled software and contents, making interoperability, at least on the online platform, a reality. And there was the birth of cloud solutions for personal productivity: the first was Google, and then Microsoft with 365, for different reasons. Google was creating the applications to increase the opportunity of getting information from users to sell advertising, while Microsoft was looking for an alternative source of revenue beyond the desktop application. Between 2005 and 2010 came the birth of the XML-based document formats: the development of the ODF standard and the announcement of the Office Open XML pseudo-standard. Office Open XML is unfortunately a very complex topic; the reality is that the standard never existed in the market. It exists only on paper, and what is on paper has never really been deployed in a piece of software. During those years, OpenOffice.org was the de facto alternative to Microsoft Office. Then, in 2010, we announced LibreOffice. Microsoft invested in making Office Open XML the cornerstone of their lock-in strategy: looking at the format, it's rather clear how it has been carefully designed to reproduce exactly the same mechanisms as the previous proprietary formats. And after 2010, several freeware or open-core office suites also entered the market, and all of them were mimicking the Microsoft Office format. At the same time, the cloud-based office suites have grown and evolved. Let's now focus on LibreOffice. When we announced LibreOffice, we wanted to relaunch the innovation.
OpenOffice was a very innovative product during the previous decade, but for different reasons the innovation was stalling and slowing down, and of course the acquisition of Sun by Oracle was seen as a kind of blocker of the innovation. So we, the community, took control of the software and relaunched the innovation with LibreOffice, with the Easy Hacks, to make it possible for newcomers to start hacking on LibreOffice. At the time it was considered almost impossible to start from zero to develop OpenOffice: the learning curve was very steep, and LibreOffice developers have done quite a lot to make that curve less steep. There is still a curve, but it's now easier to start hacking on LibreOffice. And during the last 10 years, developers and the infrastructure people have also created a complete infrastructure to support LibreOffice development, with Gerrit, tinderboxes, continuous integration, Bugzilla, the wiki, OpenGrok and all the different tools that are used to develop, check and debug LibreOffice, to get the quality product that we have today. I would like to underline the importance of the localization effort: there are over 4,600 registered translators, around 120 shipping languages and 145 active language projects. LibreOffice is available in more native-language versions than any other application, and we should be proud of the fact that around 75% of the world population is able to use LibreOffice in their native language. It is probably even higher than 75%, but 75% is already a huge number, given that all the other office suites are below 50%. For instance, on some continents, like Africa, others take it for granted that the software is used either in French or in English and not in a local language. Then there has been quite significant activity on the user interface; I use the NotebookBar as an example of what has been done. Of course, this is not the only improvement; there have been many incremental improvements over the 10 years.
So this is the NotebookBar, and it is significant because it makes it easier for Microsoft Office users to switch to LibreOffice, as it gives them a friendly environment which is not too different from Microsoft Office. Also important is the static code analysis: the defect density has been reduced from 1.1 to 0.0 defects per 1,000 lines of code. The average density for similar-size projects is 0.71; we have been at 0.00 for, I would say, forever. Also very important is the use of fuzzing technologies to anticipate the spotting of vulnerabilities in the program. Of course, we also get the help of other organizations which are using similar tools and spot vulnerabilities, but thanks to this combined activity on code quality, fuzzing and so on, we can say that LibreOffice is in a very good position versus other office suites in terms of security, stability and robustness of the code. Then there is the Open Document Format, which is the true standard; this is important. Here we need to invest a lot of time to educate the market, because the market has been mis-educated by Microsoft's advertising money: they educate people to use Microsoft Office formats as standards while they are not standards, and so those formats carry all the issues of non-standard formats, issues that were supposed to be solved with the arrival of open standards. The advantage of ODF is that it is the best standard file format for personal productivity. I don't think it makes any sense to enter into the discussion of how good, how bad or how performant one document format is; the fact is that we have one standard file format which is good enough for interoperability, and the market should use this format, while it is not using it. Of course, because the market is not using this format, there is a huge activity on making LibreOffice reproduce the Microsoft Office formats as well as possible.
I think that the quality of the interoperability is absolutely outstanding. When we see people complaining about issues with LibreOffice, you realize that in most cases the issues are low-level issues: they are not related to the contents but to the visual appearance of the document, and here the fonts and the page format are responsible for many of the issues. It looks like users care less about contents and more about the visual aspect, which is a pity. Then there is LibreOfficeKit, which is supposed to provide an easy API for LibreOffice, and the ScriptForge libraries, which are a collection of macro scripting resources for LibreOffice. All that I've described so far is thanks to our developers, a fantastic community of developers that has helped to transform LibreOffice from a product into a platform. In 2010, LibreOffice was only for the desktop; in 2020, and of course in 2021, we have LibreOffice for the desktop, LibreOffice LTS optimized for enterprises, LibreOffice online for the cloud, LibreOffice mobile for Android and iOS, and LibreOffice for Chrome OS. All these products, independently from the organization that is releasing them, and even if in some cases the product does not carry the LibreOffice name, share the same engine, which is common to all modules, and this is a key feature and a key advantage. So LibreOffice Technology is the result of ten years of development: the same processing engine, common to all LibreOffice modules, based on a clean and refactored source code with a focus on code quality and consistency, and supported by easy and extensive APIs. So LibreOffice is not only a very good piece of software, but it's also the best open source platform for productivity, tightly integrated on desktop, mobile and cloud. This is a key fact, and we should all stress it in all our presentations. LibreOffice Technology-based software: sorry, I didn't replace
CIB with allotropia; I will correct that immediately, I just realized the mistake here. So we have: the desktop version from TDF, Red Hat and SUSE; enterprise LTS from Collabora and allotropia; online/cloud from Collabora and allotropia; Android from Collabora; iOS, on the Apple App Store, from Collabora; and the Windows Store version from allotropia. These are different pieces of software, but the underlying technology is the same, and this is what makes LibreOffice different from, and stronger than, the competition. This is the slide that summarizes the situation. I've also produced, let's call it, an anonymized slide where I don't stress the document formats but instead the production of different file types (text, spreadsheet, presentation, drawings), because in some cases it's important to present to companies without identifying LibreOffice with the specific Open Document Format, since LibreOffice is able to support all the standards, both the good ones and the bad ones, in the best way. The main difference here, I think, is visually clear: we have a common productivity engine, while other software has a different engine for each platform, and this creates issues in terms of interoperability and in terms of stability of the software, because the documents that are produced are in most cases completely different; of course, not what you see on the screen, but what you are exchanging as a document is different. So the LibreOffice unique selling proposition is the following: LibreOffice is the best open source office suite ever, backed by a strong community and a strong ecosystem; it is based on the ODF ISO standard document format for interoperability and digital sovereignty, and provides superior compatibility with proprietary document formats; it is the best of free open source software, with professional support available for organizations using office productivity for the production and management of strategic business contents. I think we should all promote these unique
characteristics. And now, as I announced before, we also have a logo, the logo of LibreOffice Technology, which embodies these characteristics: whichever way you turn the LibreOffice icon, you have the same engine, the same technology and the same community behind it, and this makes the proposal absolutely key and superior to the proposals of other office suites. Thank you. I think there is time for questions, and I hope there are questions, as I think this is a hot topic for our community. Thank you so much, Italo. At the moment we just have one question, from Mike, and the question is: on the "relaunch the innovation" slide, are there signs that innovation in LibreOffice is stalling, just as it was in OpenOffice.org at the moment of the LibreOffice announcement? What's the trend from your point of view? At least I personally don't think that innovation is stalling. We have been in a completely different market since 2010: in 2010 the market was only desktop, and therefore you could measure innovation only on one kind of application. Today we look at a different environment; a product is not just desktop, the product is everywhere: it's on our mobiles, it's on our desktops, it's in the cloud, it's integrated into platforms, and in some cases the product is so integrated that you don't even realize it is behind it. So I think that we should look at innovation with different eyes than in 2010. In 2010, OpenOffice innovation was probably slowing down, but it was not blocked; of course, the Oracle acquisition at the time was seen as a blocker of innovation, and the reality is that it turned out to be exactly that, because Oracle abandoned the project, and for different reasons the project was handed over to the Apache Foundation, mostly because IBM wanted the project to be there: they didn't like copyleft licenses and wanted the software to be under a permissive license. I think that today we should measure the
innovation based on what the ecosystem is providing. The ecosystem is not just TDF: the ecosystem has companies that are providing different flavors of the product, and it has consultants who are working with organizations to make LibreOffice part of their strategic platform — I am referring to governments, I am referring to large organizations; in other cases I will not name names. Today the technology is not just used on the desktop to create documents; it is used in different places to create contents that can be shared — so it is not just documents, it is contents more broadly. I think that LibreOffice is innovating. Of course, innovation is not a flat line: there are times when we innovate more and times when we probably innovate a little bit less. With 7.1 there was a significant innovation with the new graphics engine for Windows, the Skia engine, which will allow LibreOffice to be compatible with future versions of Windows, and of other operating systems which are using or leveraging that engine. With 7.2 we have introduced the Macintosh version for Apple Silicon — again, this is an innovation. So I don't think that we are stalling. I think that each one of us has the responsibility of pushing the project forward — not just one person; the project has to be pushed forward. As I have said many times, if everyone does a little bit, it's much better than having just one person do the whole thing. By the way, there is another question, from Maridev: how will TDF prevent companies of the ecosystem from forking without contributing any longer to the upstream project? This is, of course, impossible to prevent: based on the license, you can fork an open source project and decide not to contribute back to the original project. So we don't have legal ammunition to prevent this. I think that we have community ammunition to prevent it: if as a
community we really work together for mutual advantage — the community supports the ecosystem companies, and the ecosystem companies support the community — then I think that together we can prevent hostile forks such as the one you are asking about in your question. Let me give just one quick example that comes to my mind. I have not given an affiliation for Elisa, the author of the logo — but Elisa is working for Collabora, and Elisa contributed the logo to the community. It is of course a very small example, which by the way is significant for the LibreOffice Technology topic: she is the employee of a company, and at the same time she has contributed as a volunteer to the community, which is something that happens quite often. We're running out of time, but I just want to mention that Mike wanted to thank you — he says that, regarding the trend of innovation, that is exactly his own impression as well. So he thanks you, Italo, and me too: Italo, really, thank you for all these interactive sessions. It's now time to introduce the next session, which is the Collabora keynote from Michael Meeks, so it's my pleasure and honor to hand over to Michael Meeks. And if I unmute, potentially you can even hear me — yes? Now, yes. Hopefully you can see me as well, I don't know if that's working at all — yes. And then the next thing is the slides; you never know your luck, let's try that — do we have slides, can you see that? Yes, I confirm — at the moment I can see the LibreOffice interface. Let me not share a window and share my whole screen instead, and then life will be better, perhaps. Yes, much better — now I see the slide full screen. Go on, please, Michael. Thank you all for being here. I'm going to try and speak slowly, but I'm sorry, I get excited, you know — anyhow, let's try. So I just want to tell you a little bit about Collabora; many of you know of us. The parent company has 140 staff; it's quite a mature
company, arguably the leading open source consultancy in the world, and it's the parent company of Collabora Productivity, which I guess is the LibreOffice-related bit. We came out of SUSE in 2013 — now just slightly over eight years old, around 32 staff — and we're fully focused on office pieces; of course LibreOffice is a huge part of that, and the LibreOffice technology underpins everything we do there. I think that's a very positive way of looking at it, and thanks, Italo, for building that whole model, framing and approach to market — I think that's really cool. So, mission: well, yes — make open source rock. Hopefully we have a great alignment there with TDF's mission. Just to reiterate: this is really our raison d'être, it's why we're here, it's the goal of our shareholders. So what does that mean? Well, if we're not doing that, then we're failing to some degree. How do we do it? We take the support that our people, our partners, our customers give us, and we reinvest that into FLOSS software everywhere. All our code is open. Obviously we have to make money to pay our salaries, and to reinvest and so on, but broadly the goal is making open source rock, and that's what we're here for. We're not for sale; we're not a start-up that appears and disappears, having done something of questionable value, and changes radically. We're really quite stable in terms of structure, ownership and so on; we're here for the long run, to make open source succeed — because it doesn't succeed by itself, it needs lots of effort put in. So the parent company does all sorts of things, and I'll just mention a few. The fundamental thing that ties these together is making open source succeed in lots of new domains: enabling primarily Linux to run on all kinds of semiconductor hardware, doing automotive things in your car, in the medical device next to your bed in the hospital, the signs you're looking at, the entertainment that's streamed through multimedia displays, the
plumbing that ties all this together — but perhaps more importantly, making it easy for companies to do the right thing, which is to base off an open source solution: providing that consultancy service, and the ability to find people to solve your problem quickly. There are lots of examples there of good things we've done. Collabora Productivity, the subsidiary, is really focused on selling things around the LibreOffice technology. We've developed Collabora Online, which is really the tip of our sales and marketing spear; we developed that, support it, maintain it, sustain it, and it's built on LibreOffice technology — much of the goodness underneath is completely shared, and I'll talk about that later. We make this wonderful, scalable, interoperable, collaborative editing thing in your browser, and then we provide an SLA around it to keep it available inside your organization, level 3 support for fixing bugs, and we talk to our customers about where they want to go, what they want the product to do, where we should be going, how we can improve. And then we love to go to market via partners: our preferred route to market is to find great open-source-loving companies out there in the world, to partner with them, and then to share the revenue around the code base with them — so they get a better product to give to their customers, and we stand behind them, and it all works well. But the most crucial piece here is ensuring that revenue goes back to actual development, because it's very easy to sell services around open source whilst not contributing back, and that's something we think is just destructive for the whole ecosystem. We also sell Collabora Office, which is a branded version of LibreOffice — obviously the foundation of what we do online — with a year-based versioning scheme and release schedule, and we sell that on PC, Mac and Linux. We also do bespoke consultancy, and we make wacky products to help people with
obscure needs around LibreOffice technology, and we'll even take the risk of doing prepaid, fixed-price level 3 bug fixes. We go to market, then, through partners, OEMs and hosters — all sorts of people. As I mentioned earlier, anyone can install a Docker image and claim to be an expert, whilst asking questions of other people and consuming what looks like community resource in order to try and fulfil their needs. We've seen a lot of that around OpenOffice, and we see some of it around LibreOffice — well, maybe less, because I think there's a more positive message about contributing here. But all of our partners are essentially committed to shipping Collabora Online and encouraging users to pay for it — and to pay a reasonable amount, massively cheaper than Microsoft, giving you amazing digital freedom, but then putting money back into actual free software development underneath. We love to work with smart people, and a lot of our partners are extremely sharp and produce a key part of the deliverable here, because we just do office and we try to focus on that. So without, you know, your Nextcloud, your ownCloud, your Pydio, your Seafile, your EGroupware — all of these things — in a groupware sense we don't really have any documents to edit, and we don't have any credentials of users to authenticate. So, you see, we rely on our partners to provide that route to market and the wrapper around Collabora Online, and we now have around 230 partners. If you're interested in becoming one, contact me; check out our partner list if you want to find one near you, in your own language — we should have something. And we have a nice web page you can look at which breaks down the commercials, the subscriptions, what we sell there and how it's priced, so do poke at that if you care about it. But I suspect more people are interested in what we've done and how we've been working on improving LibreOffice technology. Very quickly — I screenshotted Italo's slide
earlier, so I now have the right logo in the background. I'd just like to show some of the things that we've done; we've been really pleased to contribute to core alongside the community. One of the things I'm most pleased about: for decades we've had this rather narrow VCL API. It's possible to expand the power of our rendering and to make it cleaner and quicker and so on, but there are a lot of things that can get in the way, and so one of the things I've been encouraging people to do is to invest in Skia and to try to get to fewer different rendering back-ends. With AMD we've done some great work there to get accelerated Vulkan rendering for Windows via Skia — sadly we still need the GDI rendering for print, which is a tragedy; hopefully we can uncover a solution there. In recent times Luboš has been working on Skia support for Mac, accelerated with the Metal back-end, so we can then use a single API for Mac and Windows — of course it runs on Linux as well, though there the Cairo API is actually a really very capable rendering API — and we've ripped out the OpenGL stuff that used to be there; I think we're shipping that in LibreOffice 7.2. So really, thanks to AMD of course, but also to all of the LibreOffice Vanilla Mac buyers who actually bought LibreOffice on Mac: they bought the convenience of having it in their hand easily, updating automatically, and we put that back into improving the software. Some other things they funded: you know, the buyers in the Apple App Store are really the only serious Mac customers we have as a project, the only real people paying for Apple Mac support. Anyway, the funding there really helped us: we could buy the dev kit ahead of time, we could sign the hideous NDAs with Apple, we could do this work — built on Stephan's work to get the ARM64 ABI — we could patch all the dependencies, and thanks to Tor for doing all of that. And we were there, ready to ship it,
but of course there was some horrible bug in the App Store upload that meant we were rather late actually getting it out — just too big to have an x86_64 and an ARM64 binary at the same time — but anyway, it is there now and it works, thanks to our Mac customers. So here's a debate. Maybe some of you are aware that we have a board, and our board elections are coming up, I guess at the end of this year perhaps — soon — and so we encourage people who like to wrestle with knotty problems to think about this. Here's a question that our board is currently continuing to evaluate and re-evaluate. Everyone agrees that we should provide LibreOffice for free as a download at TDF, for all platforms, broadly — but should we actually charge for LibreOffice in app stores? Well, you know, these are DRM app stores that stop people from sharing the free software, killing one of your software freedoms. But against that, we can charge a tax using this horrible route, which we can then spend — of course some of it goes to, say, Apple or Microsoft — but that gives us cash, and it gives TDF cash, to improve feature function, which then hopefully drives adoption and meets our development goals as well. Or the alternative view: should we ship it for $0 in the app stores, because our adoption goal is more important and we should be giving things for free to users, that being more important than development — would it then be better to have less income and less development, in order to get more users and more convenience for those users? Bearing in mind, of course, that some part of that will come back via donations — people will donate; we know that's a small part compared to the revenue that actually comes in when they pay for convenience, because we've measured it — but maybe that's a sacrifice worth making. And against that, of course, convenient access in a DRM app store will stop people coming to the website for updates, so that may cut into TDF visits and the donations that pay for
the staff that do a lot of the work here — very valuable work in the community — but is that worthwhile, to get more adoption? And will those non-paying users then become future community members who will improve feature function and drive our development goals? Hard to say, right? I mean, I have my view, but we need people who are willing to wrestle with these problems and see both sides of them. If you've got a passion for dealing with knotty, difficult problems, you know, stand — or consider standing — for the board, and wrestle with us over those. Anyway, let me talk about some of the other things we've done, outside the sphere of the App Store revenue and feeding that back into improving the Mac. Italo has always talked winsomely about the LibreOffice technology: it's used in all sorts of places, many of them you can't see, but one popular use is for indexing or converting documents. So we're really thrilled, with NLnet, to have been able to implement this indexing exporter, which generates not just the text output of a document but also where that text is in the document, so that we can then render thumbnails of it and provide results that look pretty — you can see which shape, or which paragraph in its context, the match was in, rather than converting to HTML and losing so much richness along the way. A kind of cool new feature that will hopefully drive adoption. One of the things I'm most pleased about — well, I have a passion for improving performance, and, you know, there were many regressions and breakages down to this that we subsequently fixed, but either way, having an economic incentive to optimize the software is just really cool — people actually want to make it faster and better and more beautiful. Loads of stuff came from online here, in terms of caching, sidebar panels, line rendering, better image scaling — you can read the things here, but lots and lots of stuff — and all of that's in the core, of course; all
of that improves LibreOffice for everyone. And of course lots of other core optimizations: faster text rendering — we want this thing to be snappy and quick and beautiful, you know, the best office suite for interactivity that's out there. One of the funny things here was typing fast. We get users who just like to mash the keyboard to test collaborative editing, and it turns out you can type ten times as fast as a normal typist if you just do that on the keyboard, which is cool. But it turned out that the dominant cost of rendering it — far more than anything else we saw — was rendering this absolutely beautiful B-spline, subdivided, you know, two pixels high, with all of these control points: 90% of our CPU time rendering a document with this in it was just rendering those red squiggles, which is kind of silly. And actually, when you see a large Calc spreadsheet with misspelled strings in it, that's another cost that's happening there — and it's now been fixed and contributed back, so great for everyone. X-Ray: it's really important that people understand how their documents work, particularly when they're scripting or writing things around the office suite, and so we were really thrilled to have the TDF donors, via tendered funds, building an X-Ray-like document inspection tool into LibreOffice 7.2, so that you can learn the API, see how your document is structured, and easily write and debug and see what's going on — which is just fantastic. You can see all of the properties exposed by your office suite. It's not Reveal Codes — the WordPerfect desire — but it's pretty nice to be able to see the API, and thanks for making that possible; that's great user experience. Then there's a whole lot of little tweaks, particularly improving things in the sidebar — you'll see this Fontwork sidebar panel there — and much of that is driven by the desire to have these mobile panels, because we wrap the sidebar in clever ways to provide our
mobile, one-handed user experience — that's pretty cool. Style preview rendering, just to make it easier to select the style that you want: really, really nice to be able to see what it's going to look like, and great to be able to contribute that and, of course, share it with online. One of the nice things about working at Collabora, I hope, is a hack week: from time to time, when the customer craziness is not as bad as it normally is, or we've just managed to get a big release out and people have been working really hard, we have a hack week and say, go and work on anything you like in LibreOffice, just do something cool. This is an example — Tomaž's, or Quikee's, hack week project to provide a heads-up display that lets you rapidly search for commands, to do the thing you want to do when you can't find the toolbar button or the icon or whatever it is that does it. That's kind of cool — just try it with Shift+Escape in your LibreOffice. Other things: we do a whole lot of work on interoperability. LibreOffice's interoperability is better than any other implementation out there — outside of Microsoft, let's say — and it has an incredible history of unwinding and disentangling horrors in the file formats we have to work with. But there's always something more. So this is a great fix — by Sarper, I think — to improve how headers and footers round-trip to PPTX. Here's some work from Mike Kaganski, I think: multi-column text layout in Impress — something really cool there, thanks to co-investment from SUSE; this is a really big problem, but SUSE really helped us focus on it and funded part of it. Miklos's hack week: bottom-to-top, left-to-right text — very important if you have a box that does that. Another customer here, TÜBİTAK, is a research institution in Turkey which does lots of great things, and so Muhammet and Miklos together have put this project together to improve how the bibliography references work, and to make
them much more usable and intuitive and powerful, which is cool. Another example of a partnership here is with Nou&Off — Cor Nouws's business — making visible digital signatures in Draw, so that you can see your documents being signed, and sign them in an elegant way. Another interoperability win is getting custom geometry and image effects: Gülşah has been working hard on this, and it's really nice to see that, along with a whole load of effects going in there as well that I can't show you — again, thanks to SUSE. Cached fields, so your calendars interoperate — this is one of my favourites. We have a competitor out there, with a fork of LibreOffice that contributes relatively little back upstream, that had a custom proprietary feature which would stick a line in your header and fiddle with some code to try to achieve this. We took a different approach: implement the feature properly, put it in the core, sort out the user experience for it, make sure it interoperates nicely, and get it back into LibreOffice for everyone's use. So we'd like to encourage you to use suppliers that contribute. Why bother sharing all these contributions? I mean, lots of people do amazing things for LibreOffice — the volunteers are doing cool things out there every day; I'm sure if you're here and watching, you've done cool things for LibreOffice. Well, it's a good question. Some suppliers differentiate from their competitors with proprietary value-adds, but really we want to encourage customers to select a supplier based on competence — and competence proven by actual contribution. We think that's the best way to get virtuous cycles of actually contributing and growing competence. Another reason, I guess, is to remind people — as TDF wrestles at the moment to work out who pays for what, and how we can make this economically sustainable — that it's really helpful to remember that the customers really help drive our mission: all of this stuff is paid for by
customers. If we were all unemployed tomorrow, many of us would probably still do things for LibreOffice, because we love the project, but we wouldn't be able to do nearly as much as we can do with funding. Speaking of which, one of the things we're thrilled about is to have moved MIMO now to a properly supported base, with a stack of people behind it — and thanks to all of the people in the community who have made the case and helped there for a long time. So with Atos and Arawa, Collabora is thrilled to be able to — well, for a start, there were something like 160-ish patches to the oldest branch to make sure that this is a secure and safe deployment, as well as actually fixing these things and shipping this bespoke version of LibreOffice for the French ministries. Great to be able to do that together; that is another example of a distribution of LibreOffice that is built on the technology and is great for all of those users. So — let me talk briefly about Collabora Online and some of the performance work that we've done in it. I've got a 30-minute talk on Friday, so I'm going to skip this slide, just to show you how much hard work we've done to really zero in on performance and improve it massively. Some of the nice things: I mentioned bringing Draw to online — Fontwork is pretty fun — making Draw a useful piece of that product; just nice features everywhere. It's very hard, with so much done in a year — and again, there will be a talk on what we've done in the last year to expand on this; there's really lots that has happened. Another important thing that we heard was that more documentation was wanted to be open, so we've opened up much of our documentation: the SDK is there now, you can search it, there's all sorts of installation help and API documentation, so it's now very easy to write and improve an integration — well, it's fairly easy to write an integration
anyway, but we've made it substantially easier by opening up. We're just really grateful for all of the people who have contributed code who are not from Collabora, and of course those who contributed translations — absolutely wonderful to have people come and help us out in our mission to get digital sovereignty back into your hands, so you can control your own documents, data, workloads, network and software. I'm not going to go on about it a lot, but we'll have a little conference dedicated to that after the LibreOffice conference next week. So there you go: eight years of Collabora Productivity. By affiliation, last year, something like 4,000 commits out of 12 or 13,000 — and that's actually decreasing: if you look at the proportion of Collabora commits, it has gone down significantly since last year. There was a concern some people had that Collabora was too large, so thankfully that is starting to be addressed, and it's important that we can all build successful businesses that really contribute around the LibreOffice technology, and get more diversity and more people working there. People often ask me what percentage of TDF — you know, there's a whole lot of commits there, but The Document Foundation's volunteers and donors have paid for some proportion of that: it's around 5% of our revenue so far this year, and about 1% last year. I just provide that as guidance on that pie chart — but every 5% is welcome, it really is; it's wonderful to have TDF as a customer. Other ways we serve: well, there are a lot of places we serve. On the board — I was encouraging people to stand there earlier; let me also encourage people to stand for the membership committee. Muhammet Kara has now moved on to other things, but he served on the membership committee for much of the year. On the ESC, of course; funding, with cash donations from Collabora; and, well, probably the biggest thing that we do is that every day we
tell people about the LibreOffice technology. We tell people about the goodness of open source software, how to get involved, and how to, I guess, free their systems and get the advantage built on top of that technology base — so there's a big old sales and marketing effort there on our team. Something like 37 committers in the last 12 months to Collabora Online and also the LibreOffice core, and lots of people behind the scenes: Elisa, whom you heard about from Italo, writing and creating beautiful graphics; William doing sysadmin for us and helping with marketing; and lots of people whose names you never see, in finance and HR and support behind the scenes. So, yeah, do consider joining us. We have a whole load of — well, I say a whole lot of jobs; we have one job offer at the moment, which is for a marketing manager — so if you have skills in marketing and you have excellent English, at least look at that; but otherwise, Collabora is hiring, and we love to have sharp people join our team and deliver on the many things we do. Here are some of the team — we'd love to be with you, but, things being what they are, it's hard. And here are my conclusions. Well, it's our mission, applied; it's what we do every day: we try to make open source rock, and make that LibreOffice technology rock. We want to liberate people's documents, get them collaborating in a safe and federated way on their own infrastructure, with workloads and servers that they control, networks they control, and get their privacy back — in everything we do. I've said this a lot more in previous years, so let me just labour this point a little: it's very easy to assume that software writes itself, that, quote, "the community" magically does things — but the community is actually made up of people, and many of those people are paid for by our customers and partners, whether it's our company or others around the
ecosystem. It's important that people pay for something, because without that we can't do what we do. Conversely, our customers and partners rock — they're awesome; they make possible everything that we do, and we can't do anything without them. And of course we can't do it without our staff, too — so thank you to all the staff, who have done extremely hard work this year, and to the community that works alongside us: it's a privilege and a pleasure to work alongside many of you, and thank you for making it fun. The encouragement and goodness that comes from seeing people jump in and contribute is really appreciated; that's just great. In terms of our branding: that's one of our only assets, and it drives the leads and the credit that fund the firm via our customers and partners. Beyond that, it's an absolute pleasure to be able to sponsor the LibreOffice conference. We're really missing seeing you in person — hopefully next year we'll be able to meet up and chat, talk about all the things on your mind, and go deeper. So, thank you so much; that's pretty much it, and if there are any questions or comments, I'll be around in the room. Thank you for listening. Thank you, Michael. There isn't another question, but there is a comment, let's say — I can't tell you the name because it's written in Arabic and I'm not able to read it, but the comment is in English: "I just want to say that Collabora sets a very good model. I was thinking about how we can make businesses FLOSS-driven, and the first org that came to my mind was Collabora. The way it does business is a big inspiration to me, and I'm sure it has been an inspiration to many other people and businesses too. Thank you." That's the comment. Thank you very much — it's a very positive comment. It's probably worth saying there's a mix of views on that, but thank you, we appreciate encouragement — I mean, it's amazing how important that is to me and the staff. We're trying to do the right thing; it's not easy making a business in open source, and it's a
privilege to do it with the community's support, so we really appreciate your help with that — all of you. I'm really sorry I can't tell you the name of the person, but you can find it — it's "Queen Walker", something like that. Yes, it's great. So there is at least about one and a half minutes left, but I can't see any other comment or question at the moment, so if you want to add something — otherwise I can introduce the next session anyway. No, I'm fine, we're very good — thanks, guys, all the best. Okay, so it's almost time, so I think I can introduce the next session, from allotropia: their keynote is from Thorsten Behrens, our good friend. I think I can hand over to him — it's almost time, so we are on time. I can't hear you, Thorsten; I don't know if that's expected, but I'm missing audio. To be honest, I'm looking at the list of participants in this room — no, we've got Thorsten, he's there, he's talking, but we can't hear him. At least I can see his keynote slides — we get an in-person view — but I can't hear Thorsten. So he's muted. Yes, he's muted. Thorsten, please unmute your microphone if you can — we can see you, I mean the camera, but it looks like the microphone is muted. And the reason I couldn't see his name is that it's "Hamburg hybrid" — it's "HH", at least here. It looks like the connection might be broken. I see video, that is fine, so I guess the indicator is misleading — but in any case he is still muted. Oh, Uwe — we can see Uwe. Yes, indeed. I think the connection works, because we can still see live — or presumably live — video, and I also see a good microphone in the center of the stage, but we can't hear anything, since it is muted by the software, and I can't enable — I'm sorry, we can't enable the microphone. Same — I had a look, and we can't unmute. So I'm not sure, Uwe, Thorsten, if you can hear us, but you are still muted — and let me send a note also to the chat room, so maybe the others on site can tell him. I'll try the telephone, how about that — when you're panicking about
your microphone, what you need is a phone call. Thorsten, you can't hear anything? I can't hear you, and it's a tragedy — well, we could see you, but you seem to have wandered out of shot. He just wrote in the Jitsi chat that they are muted, as we were saying — they wrote "getting muted here" — but we didn't mute you; by default, everyone who joins the room is muted unless he or she — I don't know if that's someone else or what's going on — but we can't unmute you. Via the telephone — but that can probably be expensive. So, Thorsten — unmuted, we can hear you now. Yes, I think he joined by phone, and I'm going to mute everyone else to avoid echo. I gave some tips via the chat about what it could be, but we actually tested from the venue this morning and it worked — some problems with the camera perspective, but in general it worked. We can see you now, but you are still muted. At a certain point I heard Thorsten's voice — I thought it was a phone call because of the audio quality, but I don't know, because there are so many participants I couldn't find which one it was; maybe it wasn't a phone call and they were able to unmute the microphone. It was a phone call, because I had Thorsten on speaker, but then it started to feed back, and I assumed that meant they got their audio through, so I cut it off — so it was your microphone relaying his voice. Yes — the problem with this setup is the echo. Michael, you should mute not your microphone but your speakers, probably. And Thorsten, please go on. Can you hear me? Can you hear me? Yeah, okay. Speaking into this microphone — can you hear me? We can hear you. Okay, great. So, you can hear us now — we found a fallback solution, with video and audio coming from two different devices; I hope that it works. It works perfectly — the audio quality is really good, thank you for solving it. And thanks to Uwe for sharing his phone with me here. I don't know — that's the first time, really, that that has happened to me, that it kept muting itself for whatever reason —
Hopefully we can fix that over some more trials in the background, and fix it for the remaining talks. Apologies for the technical difficulties. I'm very, very happy to be able to present today, for a number of reasons. We are here in Hamburg, in a small spot, with a number of people already, and a few more still arriving. Travel is not easy these days, and I'm glad, I'm still glad, that at least smaller events with meetings are possible again. I'm honored to be speaking here as one of the LibreOffice conference sponsors. I'm here today in a new role, as owner, founder and executive director of allotropia. So, in that role, just a few minutes to introduce the company, which has been alive and kicking since January this year: a very young company, not even a year old. We're based in Hamburg, from where the StarOffice and OpenOffice code and product originated. Shareholders and leadership team: that's me for the development, strategy and operations side, and that's my former boss from CIB, where I was working with my team for the last five years, who's running the business side and also the sales side. We're a spin-off from CIB, essentially: we took the free and open source team and spun it out into a separate company, with a focus purely on free and open source software, and of course with a focus on the office suite. That's what we do. Where we are: we are right now a core team of seven people, all certified developers; in total that amounts to more than 70 years of collective experience with that code base. Really core competencies: C++ development, like me, hacking the core of the product, but also Java, both client side and server side, with everything that belongs to that; integration engineering, binding it up, wrapping it up with the office suite, integrating it on a desktop; but also Python and Basic. We also do deployment and migration support. We are active predominantly in ODF standardization, but also in OOXML, and we do DevOps, and our development processes are state of the art, as listed here. Our partners: obviously CIB, we're still very, very close to my old company, as partners for investment and sales, and also for consultancy and scaling, just in case there's a massive consultancy gig. CIB itself is a kind of large company in comparison: more than 180 employees, more than 30 years old, 23 million euro turnover in 2019, with a focus really on document management in whatever shape and form, recently also growing in the AI-supported area in many ways, from text recognition to speech recognition and more in that area, but it's all circling around document management. The second partner, Michael already mentioned it: Collabora Productivity, with Collabora Online and mobile, a very important partnership for me, both for product and technology, so that when we go to customers here in the German-speaking world we can offer the full stack. And most of our partners for trainings and migrations, all of them from the wider LibreOffice ecosystem, so that we can offer the full stack, the full spectrum of services all around free and open source office suites. So essentially what we offer is two products. The first is consulting, which is the classic open source way, when you have a free product, to do business: we work on the code, we integrate third parties, we implement features, we do bug fixing, we also do trainings, development trainings, we also do long-term support, and we have a support channel to help end users but also developers at the customer. The second product is LTS versions, which we're still running under the CIB brand, so LibreOffice powered by CIB, with versions for the desktop on Windows and Linux, volume licenses. We also do changes, feature development obviously, for that version, and, the main point, regular security updates. The version is also available in the Windows Store. Yes, and we are members of a number of organizations, the most important ones listed here: the Open Source Business Alliance, a kind of German business lobby group; OASIS, as a member, and
pushing the ODF standard forward. And of course we are involved both as persons, me for example on the board, others as developers and contributors elsewhere, but we are also on the advisory board, another way of helping and sponsoring and giving back a little bit to LibreOffice. Yes, all of that, and of course I'm most happy to be able to be here not alone but with my team... I hope I'm at least audible; the visuals are less important, the sound is much more. So, Thorsten, we can't hear you anymore, or at least I can't, even if I see that Uwe's microphone is still unmuted, so we should still be hearing something. But something happened, indeed: no more sound. I think they are trying to share the slides properly. In the meanwhile, I can explain the reason why people watching the streaming, from YouTube for example, are probably not seeing the slides: because of some technical problem, Uwe had to share his microphone from his mobile phone, so that user pops up, and it has no video, because the video comes from another source, probably. That's my understanding, at least. Anyway, Thorsten, we can't hear you. I don't know, Michael, if you can call him or someone of them there. Okay... oh, but it was in German, I didn't understand. That's not working, poor Thorsten. These are the glitches of having, you know, a hybrid conference; unfortunately we are experiencing some trouble, some problems. Yes, we can hear you, we hear you, and we see Thorsten's picture right now. And now the audio seems to be cut off again. I think it was Uwe speaking, but the connection was really bad, I would guess, and that matches what we see here in Jitsi: not such a good connection. So in any case, I think we can later upload the slides, and also a video if required. But if you can hear us in Hamburg, I don't know if you have any way to connect a cable, if that would help; it looks like a connectivity issue, at least in part, so maybe that would help. Apologies for the inconveniences, obviously; thanks, Thorsten. Uwe is trying again; apparently it's a very aggressive muting, or some kicking configuration, that's causing grief again. If I'm not wrong, the default behavior is that when you join the room your microphone and your camera are automatically muted, but it's up to you to unmute them, and from my side I didn't mute anyone, at least in the last 10 minutes. I see Uwe is trying to fix this. It's a pity; I think from here there's little we can do at the moment. Unfortunately we don't have any pre-recorded video, because otherwise we could have played it back. I see a bit of movement; the connection seems really bad, given the picture quality in the frame, so my guess is it's an upstream problem with the internet connection in Hamburg, and that is what Jitsi suggests here. Maybe, but usually the audio is the last thing to go when the connection is poor; usually the video is gone first. Is there a dial-in Thorsten can use? I mean, would the dial-in work?
No, I fear not. I'm thinking if we can redo it: call Thorsten and bridge it, that might work. Yes, but then please mute the speakers, because of the echo. Maybe we can reschedule the presentation; I'll look at whether there is a spot in the schedule. Yeah, they were thinking of that. That could work, or a recording as a fallback, so we can play the video. That could work. I think another slot would be fair; I want to hear what Thorsten has to say, honestly. So, for the moment, let me check the schedule. I think the next talk is already at one exactly, so there's very little we can do, but indeed rescheduling is something to consider. I think now they are completely offline; we can try to ring them up if we find another slot. Maybe Italo can coordinate when it would work. I can move one of my presentations and create a slot in the closing session, which makes sense, just to have a similar position. You mean that you would postpone your next session? No, no, I'd schedule another session from Thorsten. From Thorsten, yes, during the closing session, and I'd move one of my presentations, which is about sustainability, to the morning of Saturday, so at noon on Saturday. In this way, on Saturday there would be another session from Thorsten; in the morning room 2 is free, and I can reschedule, or re-add, the keynote at 4pm in the afternoon. Let's coordinate on something, and please announce changes or updates on all the channels, so people get it via IRC and Matrix and all that. So we will try to reschedule: we will chat with Thorsten, try to shift the program, and have a replacement slot for the keynote, so it can be done again. Apologies for that. I've already updated the schedule, so it's going to be ready; as soon as the next session is over I will update it. But tomorrow morning let's do a test. Now we hear you perfectly in Hamburg; I'm not sure if you can hear us, but now you were perfectly audible, with a bit of distance, but the audio was not breaking up, and at least the still picture looks fine, much better. Yes, hello, do you hear me with this microphone? Okay, so we can also hear you, so we can have an actual conversation, at least if the network holds up. Much better. Okay, so shall we continue, or what's the general notion? I'd say that you have 7 minutes if you want to say something, but we were thinking about postponing. Counter-proposal, looking at the schedule: shall we just start again and skip the break? At 2.30pm Berlin there's a break after Italo's talk, so we could just start again, run into the break, and basically skip the break for today, if that works for you, Italo. Yeah, no problem. Okay, so then let's just continue, even if then we have half the time. But my suggestion, from my point of view, is that while Italo has his talk you could try sharing the slides, because, I mean, it's nice to have the camera picture showing you and the projected image, but I think it's not the best condition to read the slides. In the meanwhile you could consider whether you wish to share the screen for us, and continue projecting the image for the people standing there. But this is just my opinion, and I see many people asking for that. So, I think it's probably always a bad idea to shift things around on short notice, but there's this gap, so that's one option. The other option is that you just start again straight away, which means all the talks would be shifted by half an hour, and that is exactly the length of the gap. So your keynote basically starting now, at two Berlin time; the talk planned at two would be shifted to two thirty, and then we continue with the regular schedule. So now, or you can fill the gap, whatever works for you. I would suggest rescheduling into the gap, because there might be people coming to see it later, and the certification talk indeed, and it's really short notice now. Okay, then let's do that: let's reschedule it to two thirty Berlin, or twelve thirty UTC. And if you want we can move to... I didn't quite catch, I didn't quite get the proposal with the slides. We have the small issue here that we have a separate laptop for the slides and some kind of streaming laptop, so technically... but we can try to put the slides on the other laptop, if that helps. Yes, the idea was anyway to share the screen instead of, you know, taking the video from the camera, also because of the quality of the video, maybe because of the connection; it's really hard to imagine that the slide text is readable except where it's huge, and I think smaller writing wouldn't be readable. That's it. The rescheduled slot would be just before the closing session, so in a similar position, at four p.m. on Saturday. The plan is different for today: today we have a break after your talk, so the proposal is that in three minutes you start with your talk about certification, and straight after the talk there's not a break but the keynote, and then we continue with the talks. That would be the proposal, so we continue normally; basically we shift the keynote by one hour, from 1.30 Berlin to 2.30 Berlin, if that works for you, Thorsten. That will work for me, and I think that's probably the least worst option, and then we can maybe tweak things a bit, maybe announce it on the social channels, and tweak the video setup here. Great, then I will announce that on the channels. So we continue with Italo, and after Italo's talk Thorsten's keynote follows, and then after that we split into the three rooms with the regular tracks. Exactly, I'll announce it in the channels. Thanks everyone, let me go offline here, improve the setup, and see you in an hour. Okay, let me thank Thorsten and Uwe and all the people there for this attempt. In half an hour, indeed; I hope that you'll be ready not in an hour but at 2.30, half an hour from now, one hour from the scheduled timetable. By the way, okay, within half an hour we will have
again the allotropia keynote from the Hamburg hybrid site. And again, thank you for attempting, and I hope you'll be able to solve those problems within the half hour in which our beloved Italo will have his next session, which is an announcement about LibreOffice certification. He will probably explain that we have a certification program, and I learned from his previous speech that something is going on with LPI, which is something really interesting. So I think that, yes, it's time to hand over to Italo and introduce his speech. Thank you. Okay, just as a side note, I've already updated the schedule, so the schedule is updated; refresh your browsers, or the mobile app, to see the new schedule with the new timing. I can confirm: I just refreshed the page on my browser, and I can see that indeed now there is your session, and the next one is the allotropia keynote, and then directly, strictly without any break, we will split into three rooms, and so on. Okay. So, there are a few new things happening around LibreOffice certification. Let me start with an overview of the situation today. We have had a certification program going on for a few years. This certification was started in a rather silent mode, just because we wanted to see if it was working, if the project as we designed it was okay. So we started to certify individuals for their commitment and contribution, and we started certifying people in three areas: development (including level 3 support), migrations, and training. The objectives of the certification, which is the currently running certification, although from tomorrow it will be slightly different, were to increase the perceived quality of the ecosystem, to allow skilled community members to sell value-added services around LibreOffice, and on the other end to free the individual responsible for a migration to LibreOffice from the global responsibility of the project. So if you manage a migration or deployment of LibreOffice with the
help of certified people, this reduces your burden in terms of responsibility inside the company; it's a mechanism that all the people familiar with the process have experienced. What is peculiar about LibreOffice certification is that it's the first, and still the only one, from an independent free and open-source office software community. The other certifications come from a company, and of course they are seen by the company as a source of revenue and a source of affiliation, or of a stronger relationship. We have developed a different certification model, which is working, I would say, rather well, but we have found some spots where it can be improved, and we are working in that area. The certification is a way of regulating the quality of value-added services, and of ensuring that these services conform to transparent criteria. Of course, certification differs between developers on one side and migrators and trainers on the other. The program in general is overseen by the board of directors through a certification committee. The certification just recognizes competence, but is not binding for The Document Foundation with respect to the actions of the individual: if you are certified, you are responsible for what you are doing, so the quality and the value of the value-added services is the responsibility of the certified individuals. The certification path has always been easier for TDF members, and in the future it will become not just easier but shorter for TDF members. Individuals are certified based on criteria set by the Engineering Steering Committee for development, where the criterion is basically the quality of the code you are committing, and by the certification committee for migrations and trainings. The prerequisites are defined and updated by the certification committee. Certification lasts for 24 months from the time of the appointment, and is automatically renewed if the activity is ongoing; in the case of lack of activity, certified professionals will be requested to go through a new certification review. So far, in some cases developers that were certified were not contributing any more to the project, so their certification has expired; but for migrations and trainings all the people are still active around the project, so for the time being all the certifications for migrations and trainings have been renewed automatically, and most of the certifications for development have been renewed automatically. As for the certification committee: I'm chairing the committee together with Lothar Becker, who is also The Document Foundation chairman. Members of the certification committee are Sophie Gautier, Olivier Hallot, Gustavo Pacheco, Eric San and Franklin Weng. Eric and Franklin are from Taiwan, Olivier and Gustavo from Brazil, Sophie from France, Lothar from Germany, Marina and myself from Italy. Then we have three developers: Jan Holesovsky, Stephan Bergmann and Thorsten Behrens. We definitely need to add some people to the committee, especially for migrations and trainings. Just to summarize, this is the profile of a certified developer: of course, able to develop new features and research solutions; let's say that the profile of a certified developer is the profile of a senior developer, one who is contributing on a rather stable basis to LibreOffice. A migration professional is able to coordinate the enterprise migration process, and is basically a project manager for the migration process; he may not be able, as an individual, to provide all the skills that are needed, but is able to create and develop a team for the migration. A trainer, of course, is able to teach the use of LibreOffice at different levels and to develop training materials; he is also able to help fight resistance to change, and to explain the advantages of free and open source software over proprietary solutions. This is something rather important: certified people are brand ambassadors, in the end. If someone who is certified is not able to give valuable support for migrations and trainings, and this of course includes development, because development is key for migrations, this will reflect badly on the LibreOffice project, and this is of course the reason why, if a certified professional's performance does not reflect the expected quality, we will not renew the certification. As I said, so far this hasn't happened, apart from people leaving the project for personal or work-related reasons, and we will maintain this level of attention on quality in the future as well. During the last couple of years we have seen an increasing interest in certification, especially for migrations and trainings. In some cases people met the prerequisites, and therefore either they have been certified or they are waiting to be certified at the next certification review; there are a few people who are just waiting for the date we will set for the certification review. Other people were applying for reasons that were not related, or not specifically related, to certification as we were intending it, which is a high-level professional certification, but were in any case asking to be certified for good reasons, because they were using the certification to develop the LibreOffice presence in an area, or they wanted to organize some activities in schools or universities. So, looking at this evolution of applications, there are a few new things happening around certification. The first one, which is of course one of the reasons why they are sponsoring the LibreOffice conference, is that we will start working with the Linux Professional Institute. At the moment we will work with them to extend the reach of LibreOffice certification, and to start providing a LibreOffice certification for end users. We, The Document Foundation, will never
be able to provide certification for end users, because of the numbers. Working with LPI, we will leverage each other's competence to improve certification for end users of desktop productivity. We will start discussing and working together on the syllabus for the certification, and also on the contents of the certification. In the future, when we have face-to-face conferences, the idea is also to organize end user certification sessions at the conference, and these sessions will be run by LPI. The reality is that LPI and The Document Foundation speak the same language: our objective is to increase the penetration of free and open source software by certifying people, recognizing people for their skills. Of course, we have just started the collaboration, so there will be updates; we will probably start meeting on a regular basis with LPI to discuss the activities. There are three talks by LPI representatives during the conference: one is tomorrow, and the other two are on Saturday morning, so if you are curious about LPI, or want to ask questions about LPI, I invite you to attend those sessions. Then there are a few updates that we will implement as TDF. There will be a basic "LibreOffice Certified" level of certification, and this will be a first step that will be compulsory for everyone apart from TDF members. We are developing, actually we have already created, a training syllabus for this level, and we will create training videos; so, to access the LibreOffice Certified entry level, you will have to attend training videos, which will be provided through Udemy. The choice of Udemy is because it looks like almost a standard for certification: on Udemy you find certification trainings for Microsoft, Oracle and many other software vendors. These classes will have a cost, not a high one, and the proceeds will go to the Document Foundation business entity. Of course, we will be happy to provide this training for free in geographies where people do not have an income sufficient to pay for the Udemy online classes, where paying for them would be too expensive; we will be happy to work with applicants to solve this problem, because we definitely want to increase the number of people who can access it. By doing this, we get a basic certification level which is distinct from the current professional level. So we will keep Migration Professional and Professional Trainer as the certification levels for people who have hands-on experience with LibreOffice migrations and trainings. People with the basic LibreOffice Certified level will be able to apply for professional certification one year after the first review, but they have to gain experience on a specific project: in that year they have to work on a migration project or a training project, and they have to provide extensive documentation about the project. This, again, is extremely important: we are happy to recognize that you have worked on a project, but we need a discussion of the project, how you assessed and managed the migration, how you managed the training. Of course, if your documents are owned by a company, because you are working for a company, we are happy to look at other documents that you provide as the basis for being certified, but we need the documents to certify you. Only TDF members with full prerequisites (so they have to be professionals with hands-on experience of migrations and trainings) can directly access professional certification; a TDF member without the prerequisites will have to go through the LibreOffice Certified process like any other person. Last difference: we will create and certify people as professional trainers on single applications, so there will be a LibreOffice Writer Professional Trainer, and Calc, Impress, Draw, Base and macro professional trainers; of course, if your skills cover the full suite, you will get the LibreOffice Professional Trainer title, as before. We will also create the certification titles of Senior Migration Professional and Senior Professional Trainer for the certified professionals who contribute as volunteers to the certification project: the people who are on the certification committee will see their titles become Senior Migration Professional and Senior Professional Trainer, but also other people, and we expect and hope there are other people who will contribute to the certification project, will get that title. Of course we will update all certification-related documents to reflect what I've said so far; something is already in place, something will be in place soon in the certification area on the web. I'm also working on, and will start releasing incremental versions of, an index of documents which can be used for certification. We used to have a bibliography on the website; the bibliography is still there, but many links to documents have expired, so at the moment the bibliography is not linking to any document. The suggestion is to search for the documents, because they may have moved within their websites; that bibliography will be extended with additional documents that have been published during the last 10 years. And not only that: there will be an index of all the training videos covering LibreOffice. There are many training videos freely available online, and there will be an index; I have already indexed around 300 videos, so in the next few days I will publish that index, though it will be extended further during the next months. I will also start publishing the bibliography with the names of the documents and, hopefully, links to the documents, once those links are stable enough to be published. So, these are the announcements about LibreOffice certification. The idea is to have all of this in place, in different stages, during the last quarter of 2021 and the first quarter of 2022, so between
October and March 2022. Thank you for listening; if there are questions, I'm happy to answer them, otherwise you can write to me and Lothar, and we will be happy to answer your questions in the future. Thank you, Italo, you are on time as usual; to be honest, you are also early, and we still have four minutes. I can't see any question in the chat room. There is a comment, to be honest; I asked the person who wrote it whether he wants me to cite it, to mention it, but he didn't answer. By the way, it was about LPI and Udemy, so it was a consideration about that, no problem. And I can also testify that Udemy is really common, really well known, for almost every kind of training course, especially in the IT field. And I can say that I would love to be a senior certified professional, and I hope that I will be able to apply for that. No more questions, none at all, so I can say in the meanwhile that there are around 180 people watching at the moment; we are really glad, really honored, to have such an audience, and I want to personally thank all these people attending and watching via our numerous, let's say, streaming channels. It's almost time, we still have three minutes. I don't know if we have someone from LPI in the room, probably not, because it could have been an option to have some kind of opinion from them; if so, please unmute your microphone, but I don't think we have anyone. About the collaboration that we formally started with TDF: we are proud of it too; I'm happy to see this finally happen, it's been in the works for a while, and it's a very interesting challenge, writes Shinji. I'm reading in real time the comments that come from the room 1 chat room, and I remind everyone that we have these chat rooms, which are cross-bridged between Matrix, Telegram and IRC, so no matter which one you choose, the messages will be cross-posted to every one of those channels. If Cesar could join, that would be great. The time is running, but just one minute. Cesar is already in the room, but I can't see him... unfortunately we didn't prepare this, so I'm sorry. We are here... oh, you don't have access to the room, I'm really sorry; I mean, we didn't prepare this, so we didn't grant you access. I'm sorry, at least for the LPI people in the Telegram room. But anyway, the time is over, so I don't know; there will be their own presentations, which can anyway be an option and an opportunity to hear their voices. And Hito is doing a great job, thank you. So I think that it's time now to hand over to Hamburg, to our dear Thorsten; I hope they solved their problems with connection and audio and video and so on. From my side, I can already see and hear some noise from the Hamburg site, and I can see the image projected. So, Thorsten, are you there? So, hi everyone, I hope you can hear me now. Yes, we can. Awesome. So I hope we solved at least the most pressing technical issues; we're still not super happy with the image quality, so I hope that the slides are visible. I think most of the slide content I will just talk over, and it's probably not crucial if you can't see the small print there, and I will of course also upload the slides. So, if I may start, or how should we proceed? I would say so. Okay, great. So, hi everyone, I'm very happy, very pleased, to be able to talk to you today. It's a new role for me: I'm now here talking to you, as the owner and founder of allotropia, a very young company, to be here on stage, and I will start with the keynote, talking a little bit about the new company. As an aside, you see, or maybe you even hear in the background, that we're here in Hamburg, hosted by a housing project, thanks for having us, with a small group of German hackers; we're looking forward to the coming two days, and we're trying this hybrid thing for the first time. It didn't quite work out on the first try for this talk, so I hope it's getting better, and we're learning and improving as we go. Okay, so without further ado, let's start with a quick
introduction who we are we are based in Hamburg just started January this year we are a company and we are a spin-off of CIB my former company so we essentially just took the free and open source people there, the team spun out a new company and are now exclusively focused on free and open source software leadership team myself and my former boss was also a co-investor company our team right now is a core team of seven people certified developers collectively more than 70 years of experience with a code base and we do pretty much everything from core development and C++ over to Java, Python basic including training for other developers for extension integration development you do deployments, migrations we are active in the standardization landscape so we do mostly ODF but also a bit of OXML standards work and we do all this whatever you have to do to be regarded as modern in terms of DevOps and secure development processes our partners that's why we are here we are able to work and provide what we can because we have strong partners first and foremost CIB the kind of parent company or co-investor still partner for sales and for consulting mostly for scaling up if there is a need the workforce really solid old medium-sized company in Germany based in Munich 23 million turnover 2019 more than 180 employees and doing everything that is in the area of document management recently also with AI support second partner very happy about that, Kulabra already mentioned it very important partnership for us for being able to have a full feature offering in the product scape for online and mobile and also having a product and technology partnership and quite happy that it works out for that area beyond that we have partners for trainings and migrations most if not all of them from the provider LibreOffice ecosystem most of them also certified so what are we offering what is actually our business model which is first and predominantly LibreOffice consulting so it's the classic 
free product, or open source product, and you offer services around it: consultancy, bug fixing, feature development, support, trainings, and that's what we do, plus long-term support. The second product we have is an LTS version, still under the CIB brand, which is well established, under TDF license, and we have desktop versions for Windows and Linux with license agreements. We do custom and bespoke changes in that branch and, which is the major point, we do regular security updates on that version. Additionally, that version is also available in the Windows Store if you would like to buy it.

Beyond that, we are members of, or affiliated with, a number of organizations. The most important I'd like to list here: the Open Source Business Alliance, which is the German-speaking area's lobby organization; we are an OASIS member, with some sponsoring and an editor in the ODF TC; and last but definitely not least, we are affiliated in very, very many ways with The Document Foundation. I'm on the board, other people are contributing code, standardization work and other things, both in their paid and their spare time, and we are also members of the advisory board of The Document Foundation.

Right, so that was, in all briefness, a quick walk-through of what allotropia is doing. I'm not alone; I'm here, and very proud of it, with a team of wonderful people, some of them here, some of them still to come. Many of them will have talks during the conference, so you can see them on stage, either live or pre-recorded, and then chat with them over the coming three days. You know all of them: it's Jan-Marek, it's Samuel, it's Vasili, it's Armin, and a number more to come in the next while; we are actually growing and hiring for selected positions.

Okay, but since I'm not here to sell the company to you, and this was mostly just to introduce you to what we're doing, let me instead use the time and spend a few minutes, a few moments, on personal thoughts while I have the stage. Because, with the new role here and a bit of time
to reflect, and also with the discussions we had on the board and in the community, I came to think about what it means to be LibreOffice, what it means to be involved with that code base and with that project for so very many years. And there have been many for me. I started back in the day, more than 20 years ago now: in spring 2001 I started at Sun Microsystems, working on OpenOffice, and I was purely engineering, nothing much else. I was a paid engineer working on the code base, talking to community people, learning about their worries and their problems and their challenges, and also getting quite some insight into how a big corporation works that sponsors an open source project.

I moved on to SUSE then, where my responsibilities grew into standards work, but it was still mostly engineering. Suddenly, though, I was on the other side of the fence: I was in a comparatively small team, and I had to talk to this big Sun team, trying to get my code changes accepted. And it was interesting, because it was quite different, and things that I hadn't seen as a problem in the Sun role, that I didn't even notice, actually are a problem when you are in a different role. Then I moved on again: inside SUSE I stopped being paid for doing LibreOffice work and instead was contributing in my free time, while in my paid time I was doing other open source work. So there was another change in perspective, when suddenly I learned what it means to have very limited time on your hands and not to have the liberty and the privilege of contributing all your paid time.

And then again some change came, and I moved to CIB, where I was leading a team of LibreOffice developers and had to earn the money, mostly to pay for the team. So again different perspectives and different roles, and I was growing into sales and marketing and other areas. And then finally, with the start of this year, I founded my own company,
and I'm now really responsible for getting the money in to pay the people. So the newest change is more to the role of an investor: putting in money, taking life savings, founding a company, hiring people, paying them to work on OpenOffice and LibreOffice. And again a change in perspective, and again surprising insights into problems that I didn't notice were problems before.

So, bottom line, what I think I'm trying to get across here is that perspectives matter greatly, and everybody has a bias. Everybody, and that includes me, so take whatever I say here, whatever I'm telling you, with a big grain of salt. It's hard to imagine, actually it's not easy, to walk in some other person's shoes, and I think, with the past experience that I've made, it's worthwhile to assume best intentions on the other side. It's probably fair to say that almost everybody who's working in this community today is here, and still around, because they care a lot about LibreOffice. They might care about their personal income, they might care about their investments, but at least one of the motivators clearly is that people care about LibreOffice. So very seldom are things purely evil; I would rather venture the guess that they're never purely evil, but that things, decisions, ways of doing things, really differ depending on the role that you're in. And if you find something weird that I'm doing, or that my team is doing, then assume that it's there for a reason, and not because we want to annoy someone or because we're doing something nefarious. And then, instead of people being upset and not talking, let's just get into a conversation.

Good. Let me condense that a little into something perhaps more tangible from my experience. What are volunteers doing, broadly? This is more for the investors and the employees and the staff people among the audience here: just try to put yourself into the shoes of a volunteer. What is driving them, what is the
problem, usually, broadly? I don't want to say this is always the case, there are always exceptions to the rule, but broadly, volunteers have very limited time on their hands. So what they tend to do is something that is usually not massive, or, if it is a massive change, then it's a change they can do bit by bit, little by little. Because there's only so much you can do with a complex code base and just a few hours in the evening: you can't do the same level of rework, the same level of refactoring, given the same education, the same capabilities, the same smartness, compared to somebody who's doing it 8, 9, 10 hours every day. So there's a kind of natural selection in what volunteers tend to do. It's important work, and it's work that, on the other hand, most of the paid developers, at least historically, didn't do, because there was no business reason to do it. All the cleanups, all the improvements, all the smaller changes that you might not get a paying customer to pay for: those are wonderful things that our volunteers are doing.

On the other hand, with the way that volunteers work, for the paid developers, or perhaps for TDF as an organization, there's always this question: will this volunteer still be around next year? The massive change this person has been doing in their spare time, once merged, once it lands, will that person still be around to maintain it in half a year's or a year's time, or to fix the bugs, the inevitable regressions? And that's the tension that builds up: why sometimes there's pushback in the review of a massive change, or why somebody thinks it should be a bit more polished, or that it's too risky to merge as one large chunk rather than split down and easily reversible if necessary. And on the other hand, the volunteers are perhaps frustrated by this code base still being so complex, and the build still taking so long, and all their spare-time energy being sapped by those silly obstacles, when they really just want to have fun and be productive.

Paid
contributors: I mean, most of the time, you would expect they do what they are told to do, what their employer or their customers are paying them to do. But, as I said, for almost everybody active in LibreOffice, I would say all of the paid contributors are also volunteers, because most of them contribute in their spare time too. They go well beyond what I would expect from an employee: answering questions, helping volunteers, reviewing code, talking nicely about LibreOffice, and really working much more than they are paid for. And then they're frustrated if they are just put into this bucket of "you're just a paid person, not a true volunteer", and I think we're not treating them fairly when we do that. On the other hand, of course, they are in a privileged position. So assuming that volunteers have the same time, and could invest the same amount of work into polishing their code as the paid people can themselves, expecting the same standards, is perhaps equally problematic. So again: try to put yourself into the other person's shoes, change perspective, try to understand what the other person thinks, what drives them.

Last but not least, investors. To some extent that's what I am: I put a significant amount of money into this. So why am I here? Is it just for the money, or am I just a very charitable person who doesn't want to see their life savings back? Well, in reality, for me it's almost exclusively that I care deeply about the project and the people and open source, and this was a way for me to continue what I did before, perhaps on a different level, with a bit more control and say in what I do and how I do it and when I do it. So I am definitely not here for the money, beyond the fact that of course I have a family that needs to eat. That's my motivation, and I am pretty sure it's similar for a lot of the people who put money into open source, and especially into LibreOffice
development. But it's not exclusively so: there are contributors who are probably venture-capital driven and are in it purely for the return on investment, which is fair enough. I mean, at the end of the day that's how the world works, and at the end of the day, if it's not worthwhile putting your money there, then perhaps something is wrong with the project, if there's no future and no profit in it. But of course, again, the perspective changes, and you realize that if a lot of money is at stake, then perhaps a few decisions, a few things about how you do something, or when you do it, or what you expect in return, actually change as well. So again, I'm asking for a bit of that: with the change in role, perhaps a few things are more important for me than they were before, but I do promise that I will try to remember the different roles I've been in, and if I forget, please remind me.

Okay, so that's all from that bit. I think the most important thing I can say at the end: please, all of you, do enjoy the LibreOffice conference. I hope that we can meet each other in person again next year; let's be careful with each other, stay safe and healthy. I wish you all a wonderful conference, take care, thank you so much.

Thank you so much, a big applause for you. Thank you so much for your words. I really understand you, since I have been on my own for, let's say, 22 years, not that big, but anyway, I can partly understand your worries, and I'm with you, I really share your words. Anyway, there are seven minutes left, and I am watching for questions, but at the moment I can't see any, neither in the chat of room one nor in the main chat. So I invite anyone who may want to ask a question to do it now, possibly in the room one chat. I don't know if there are questions from the audience there. I heard someone screaming, so, unless there are questions, I'll probably just go on. On YouTube there's a little bit of background noise. In the chat here I see a question,
not in the right place, but by the way: the question, from Kaolin, is whether the company name means something. Yes, absolutely. It's not a real word, it's a made-up word, but it's related to allotropes, which is a way for chemical elements to exist in very different shapes or forms, and I really like that idea. There was also this bucky ball on the second slide, let me quickly call that up. So for example, carbon comes in different forms: essentially it's the same element, but with very different chemical and physical characteristics. Diamond, that's carbon; graphite, carbon as well; bucky balls are also carbon. Which is really what the LibreOffice code base also shows: you find LibreOffice in so many different shapes and forms. Sometimes you don't even recognize it, but it's included in a product or in some software; sometimes it's obviously LibreOffice that you install, but you install it on your mobile phone, on your desktop, on a server. I like that metaphor, so that's where the name came from.

And by the way, I guess it is not a coincidence that the language is C and carbon is C as well, the very same symbol. Yeah, I'm sure we can spin a lot of stories around that; the good thing with those metaphors is that they're quite malleable, and if they're abstract enough you find lots of connections. But yes, absolutely, C++ indeed. I can't remember the chemistry, it should be about the bonding possibilities of carbon, so C++ should be C++++ to be chemically equivalent. But jokes apart, thank you so much. We still have three minutes left; again, there are no questions from the audience there, I guess not. Okay, so I'll take advantage of this spare time first of all to say thank you to all the attendees, all the people who are watching this conference on the various streams. I would really like to say thank you, and I wish you a pleasant continuation of the LibreOffice conference. My role is almost ended; I'm going to hand over to Brett Cornwall, who is the next moderator for
room one. And I have to remind everyone that from this point on, the conference will be split into three rooms, which have different URLs for the streaming, obviously, so please have a look at the schedule, also to see who is coming, which are the next sessions and speeches. So I just hand over to Brett Cornwall; he should already be in the room. Are you there? I'd say thank you; this is the opportunity to thank all the moderators. You do a great job, it's a lot of work, and I would like to thank you for making this conference possible. Thank you so much. You're welcome. So I just say hello, and I'll take just one second to promote myself: I will have a talk at 13 past three European time, so within half an hour, in room three, about creating professional templates with LibreOffice Writer. But now it's really time to say goodbye from my side; I will be a moderator again in the next days, but now it's time to leave the word and the microphone to Brett. Thank you, bye bye.

Well, thank you for your work. As mentioned, my name is Brett Cornwall, and I am still waking up, to be honest. Let's see... All right, so next we have "I Feel the Need, the Need for Speed" by Noel Grandin. Is Noel here? Oh, excellent. Is your talk a live one? Yes. Oh, excellent, well, I see my work is done. Okay, can I just start? Absolutely, let's see... I will... yeah, we're all good. Can you see that first slide? Yes, everything is good. Great stuff. Okay, so this is a bit weird, talking into a screen, but I'll do my best. I'm talking about the need for speed in LibreOffice. Real quick, could you raise your volume a little bit? The chat is mentioning that it's a little low. How's that, better, you hear me? Yes, it's possibly better, could you give me a good sentence? Okay: I feel the need for speed. Excellent, you're all good. Okay, great stuff. So I'm talking about some of the work that I've been doing on optimizing LibreOffice. Now, everybody likes things to be faster, and optimizing is something
I enjoy doing, but of course... Ooh, I apologize once more: it looks like you are sharing your desktop and not the actual slideshow. Okay, is it better now? We're just seeing, like, the editor menu. Okay, okay, slideshow... do that... can we get it to do it... here we go, that should be right. Is that better? Yes, now we're golden. Okay, great stuff, thanks for the patience. Okay, let's see if I can get going.

So, what we typically aim for is 300 milliseconds or bust. 300 milliseconds is typically the point at which software starts feeling smooth and slick; it's better to be well under that, but 300 milliseconds is the target. I typically tend to aim for about 100 milliseconds in my own software, just to allow for the cases when things are not as good. So why do I do this? The short answer is that it scratches an itch: I really enjoy optimizing stuff, and in my general day job I don't get to do it very often, maybe once or twice a year if I'm lucky. Also, I really like it when LibreOffice is snappy; I really hate it when waiting for my computer interrupts my train of thought because it needs a while to think about stuff. I was doing some work on this slideshow earlier, and I happened to notice, for example, that in Impress, when you right-click for the context menu on the left-hand side to create a new slide, it takes about a second or so to open and fully display the context menu, which is not great.

But I do have to say that there is a lot of disappointment when it comes to trying to optimize software as complicated as LibreOffice. It is a huge undertaking: there is just this mammoth load of software, and you often have to punch your way through several layers to get between the piece that needs the information and the piece that has the information. So often I end up throwing away my attempts; roughly 80% of my optimization attempts get thrown away before I get to something useful, so about one in five survives. You need to be prepared to do this a lot, but take heart, because it does work in the end. If you keep trying and you
keep asking people, eventually things will work out. One thing that I have found is that it is wise to stash your attempts: I either git stash them, or I use git format-patch to export them to a patch file and save them in another folder. Because as you are exploring and working your way around, you might often find that one of those patches, an earlier attempt, is a better fit as you figure things out later on.

Now, when you do this, some recommendations: do the easy ones first. You will often start optimizing, and I often do a whole class of optimizations across the whole code base, and I will always start with the easy ones. The easy ones get you into the rhythm of things; they let you get some early successes going, which maintains motivation, which is very important when working on a code base as large as LibreOffice's; and they let you slowly adapt whatever optimization you are doing to the code as you grow the optimization and get better and better at it. And surprisingly, fixing two or three easy ones often has just as much impact as fixing a big one: the easy ones often add up to a decent-sized improvement, and that is partly due to cache effects, which are pretty much what dominates CPU performance these days.

The other thing that I can definitely recommend is having good hardware, and I know this is problematic for a lot of people, because good hardware is expensive, but doing optimization without decent hardware is just an exercise in frustration. LibreOffice is not that fast to start with, and once you throw a profiler or something on top of it, you will often find that your machine lags dreadfully, and you won't get yourself into a good flow unless you have at least a decent CPU with between 8 and 16 gigs of RAM, and preferably, actually pretty much a requirement these days, an SSD. Now, that's not ideal, because your machine is then no longer representative of the majority of people using LibreOffice, but it's just
one of those things.

The other thing I can recommend is using two source trees. When you're working on LibreOffice, you will often find yourself needing to switch backwards and forwards between optimizing the code and then testing out those optimizations, and it's very, very hard to debug an optimization in an optimized build. It is just frustrating, because the source code doesn't always line up nicely with the executed code, and the debugger has a more difficult time assigning useful values to the things that you want to inspect. So typically I'll bounce backwards and forwards: I'll try out an optimization, then I'll copy that optimization using git diff over into my debug build tree, and I'll compile it there, test it out there, and debug whatever issue I'm trying to debug there. Once I'm done, I'll git diff it again, git apply it to my optimized build tree, and see how it works out there.

One of the things that I worked on over the last year was temporary files. I had thought the temporary file situation in LibreOffice was fine; I'd noticed issues a couple of times with temporary files, but it hadn't really stuck in my head until I ran across a particular use case, trying to load a Microsoft presentation, which was creating hundreds of temporary files in the process. Then it became really obvious that our temporary file situation on Windows was not ideal, and the more I dug into this, the more I realized that LibreOffice's idea of a temporary file is closely aligned with the Unix idea of a temporary file, which makes sense, because that's where it came from, but it doesn't line up with how Windows treats temporary files. Now, in the Unix world, temporary files typically live in the /tmp file system, and /tmp is typically a special, magical file system which is very, very fast and is sloppy in the sense that it will lose data if the machine dies, and that's fine, because they're temporary files. But Windows
doesn't have this concept of a /tmp file system. It has temporary folders, but inside those temporary folders the files are normal files. The way to make a file a magical, fast temporary file, from a Windows perspective, is to pass in FILE_ATTRIBUTE_TEMPORARY, which actually works really, really well: when you pass in FILE_ATTRIBUTE_TEMPORARY, Windows will say, okay, great, this file can live in memory, it doesn't have to get flushed to the hard drive, and if it dies, well, that's just too bad. So I made that small change, and it didn't make as much difference as I thought. I had to dig through the code further, and then I discovered that we were closing these temporary files and reopening them. Now, on Linux or Unix this is fine, because the file lives in a special file system, so closing and reopening it is really irrelevant; but on Windows, the moment you close it, it becomes a real file, and then it gets written out to the hard drive, and then the slowness kicks in. So we had to unwind that close-and-reopen behaviour, which was probably there from back in the early days when file handles were a really, really scarce resource, but that's no longer an issue on today's machines.

So that got us better; however, it was still not as fast on Windows as it was on Linux. So I did some more digging, and as it turns out, we use our normal file-handling infrastructure when we deal with temporary files, and our normal file-handling infrastructure will flush files when the file object dies. That is great for normal files; it is terrible for temporary files, because the moment you flush, Windows says, oh, you really must want this data, and writes it to the hard drive. So we had to pass a special flag down to our file-handling infrastructure to say: this is a temporary file, you don't need to flush it on close. And then, finally, we reached the promised land with temporary files, Windows on par with Linux, which is great, because it speeds up a bunch of the stuff that we do.
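A minimal sketch of the Unix-versus-Windows difference described here. The helper name is illustrative, and this is not LibreOffice's actual osl/ code, but the Win32 flags are the real ones (`FILE_ATTRIBUTE_TEMPORARY` hints the cache manager to keep the data in memory; `FILE_FLAG_DELETE_ON_CLOSE` removes the file when the last handle closes):

```cpp
#include <cassert>
#include <cstdio>

#ifdef _WIN32
#include <string>
#include <windows.h>
// On Windows a file only behaves like a Unix temp file if we ask for it
// explicitly: keep it cacheable in memory, delete it on the last close,
// and never flush it to disk just because a handle went away.
HANDLE createFastTempFile(const std::wstring& path)
{
    return CreateFileW(path.c_str(),
                       GENERIC_READ | GENERIC_WRITE,
                       0,                // no sharing
                       nullptr,          // default security
                       CREATE_ALWAYS,
                       FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE,
                       nullptr);
}
#else
// On POSIX systems tmpfile() already lands in a (usually tmpfs-backed,
// fast, volatile) /tmp and is unlinked immediately, so nothing special
// is needed; closing and reopening costs essentially nothing there.
std::FILE* createFastTempFile()
{
    return std::tmpfile();
}
#endif
```

The point of the story above is exactly the asymmetry between the two branches: the POSIX side is fast by default, while the Windows side is only fast if those flags are passed and the file is neither flushed nor closed prematurely.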
We really do need to be nice to the cache: when you're optimizing stuff, cache is incredibly important. One of the things I've been doing, if you've been watching my commits lately, is switching out some uses of std::unique_ptr for std::optional, and what I'm trying to do there is co-locate objects. If you have an expression like int x = p1->p2->p3, you are bouncing through memory, and every time you bounce through memory you are probably triggering a fetch from DRAM, which is a long way from the CPU; it's a long walk, and the CPU is effectively stalled until it gets that data back from RAM, and then it stalls again on the next one. So you are bouncing through RAM and really slowing things down, and ideally we want to try to co-locate stuff, to reduce the amount of time we spend fetching from DRAM. For example, std::vector is almost always a better idea than std::list, unless you have particularly large objects, because the data is co-located: when you fetch it from DRAM, you are typically fetching multiple elements at the same time.

Okay. Now, as it turns out, when dealing with memory, malloc is actually quite expensive. It may not seem like it, because our CPUs used to be slow, but our CPUs are so incredibly fast these days that malloc is actually becoming a significant bottleneck, and the reason is that malloc pretty much always has to take a lock. Now, one possible option here would be switching to a fancier allocator, like jemalloc or one of the other mallocs floating around, but when you are dealing with an application as large as LibreOffice, that is not an ideal answer, because there is just so much magic going on down in the language runtime that swapping out malloc implementations is not ideal. So where possible we should try to minimize malloc calls, and even better, allocate things on the stack, and I have been making a bunch of changes lately along exactly those lines.
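The two ideas above, co-locating data instead of chasing pointers and using the stack instead of malloc, can be sketched like this (illustrative types, not LibreOffice code):

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <numeric>
#include <optional>

struct Payload { int value = 42; };

// Pointer-chasing version: reading 'value' hops through a heap pointer,
// so owner and payload may sit on distant cache lines, and constructing
// the owner costs a malloc (and its lock).
struct HeapOwner {
    std::unique_ptr<Payload> payload = std::make_unique<Payload>();
};

// Co-located version: the Payload is embedded in the owner itself, so one
// DRAM fetch tends to bring both in together, and no malloc is needed.
struct InlineOwner {
    std::optional<Payload> payload{Payload{}};
};

int readHeap(const HeapOwner& o)     { return o.payload->value; }
int readInline(const InlineOwner& o) { return o.payload->value; }

// Stack allocation: scratch space costs only a stack-pointer bump and is
// almost certainly in cache, as long as the size is small and bounded.
int doubledSum(const int* data, std::size_t n)
{
    constexpr std::size_t kMax = 64;
    if (n > kMax)
        n = kMax;          // clamp; a real version would fall back to the heap
    int scratch[kMax];     // lives on the stack, no lock, no DRAM walk
    for (std::size_t i = 0; i < n; ++i)
        scratch[i] = data[i] * 2;
    return std::accumulate(scratch, scratch + n, 0);
}
```

Both structs expose the same `->value` access, which is why `std::optional` is such a convenient drop-in for `std::unique_ptr` when the pointee is unconditionally owned and not too large.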
Instead of allocating something with malloc, I just allocate it on the stack, because the stack is wonderfully cheap. Allocating on the stack is, from the CPU's perspective, literally a case of bumping the SP register; the RAM that you are dealing with is almost always in cache, because the CPU generally goes to quite considerable lengths to keep the stack data in cache, and the stack data is accessed linearly, forwards and backwards, so it's very nice from the CPU's perspective. So the stack is generally very, very cheap, and we have a fair amount of it these days: I believe we have about 8 meg of thread stack in a typical application, and if we ever needed to, we could easily increase that, because LibreOffice allocates very, very few threads.

Now, some people say that when we need to make things go faster, one thing to do is to use lockless algorithms. That is generally not great, because lockless algorithms require slightly weird data structures; they're very awkward to use, and you have to be very, very careful about using them, because when you're using a lockless data structure you have to be careful that you're not accessing two different lockless data structures at the same time, or a lockless one and a different one, because then you could end up with two pieces of data that are not consistent with each other. So I have avoided this thoroughly, except for one case. We have found one case where a lockless data structure is great, and the funny thing is that this is in the SVL shared string pool, where we intern strings. In Calc we share strings, because with very large spreadsheets you often have an awful lot of strings that are exactly the same, and then it becomes a significant benefit to share the string objects in question, as well as the uppercase variants of those strings. Now, I actually tried this lockless data structure about two years ago, and it didn't make any significant performance difference at the time.
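The shared string pool being described is essentially an interning map: identical strings end up sharing one heap object, so equality checks can become pointer comparisons. A minimal sketch, using an ordinary mutex where the real SVL pool ended up with a concurrent map; class and method names are illustrative, not LibreOffice's actual API:

```cpp
#include <cassert>
#include <memory>
#include <mutex>
#include <string>
#include <unordered_map>

// Interning pool: the first caller to intern("foo") creates the shared
// string; every later caller gets the very same object back.
class StringPool
{
public:
    std::shared_ptr<const std::string> intern(const std::string& s)
    {
        std::lock_guard<std::mutex> guard(mMutex); // the contended part
        auto it = mPool.find(s);
        if (it != mPool.end())
            return it->second;                     // already interned
        auto shared = std::make_shared<const std::string>(s);
        mPool.emplace(s, shared);
        return shared;
    }

private:
    std::mutex mMutex;
    std::unordered_map<std::string, std::shared_ptr<const std::string>> mPool;
};
```

The mutex on every lookup is exactly the cost that made the pool show up in profiles; replacing the guarded map with a lockless concurrent hash map removes that serialization on the hot path.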
So I threw that patch onto the pile I keep stashed, and I forgot about it. I went back to the string pool about a year ago, and the string pool was still slow, and I tried the lockless data structure, and it still didn't help. Then myself and Luboš worked on the string pool together and we improved it, and it was great for a while. And then this year I looked at it again, the shared string pool came up again in a profile of a problem document, and I stuffed in the lockless data structure, and this time it made a difference. So the answer is: lockless is genuinely not the answer, except when it is, because you've tried everything else. This particular time it did make a difference, and we used a quite nice little library called libcuckoo, which is a concurrent hash map, and that improved things nicely.

The other option is multithreading. So we have multithreaded a few things: we multithreaded the Calc load, and we multithread the loading and unzipping of zip files in the background, and this is great for speeding things up, but multithreading keeps bringing in regressions. It's just really easy for the initial work to be fine, and then later on it turns out that in certain scenarios you're accidentally touching some code which is not using a mutex, or touching some code which is not using atomic reference counting, and things fall apart. So we use multithreading where we can, but we acknowledge that it is a problem in a code base the size of LibreOffice, where it is very hard to cordon things off and be sure that you've got all the edge cases. We've had to debug several, not many personally, but other people have had to debug lots of multithreading-related issues, and consequently we treat this with great care, because it doesn't always deliver the magical gains. And also, when you're dealing with multithreading, you often find that you've multithreaded something and it's just as slow as it was before, and that's because you find yourself hitting shared locks, or you
find yourself having to put mutexes around pieces of code to protect them, because they're being called from all the threads simultaneously, and as soon as you've done that, you discover that all of your threads are now ganging up on that one mutex, and then you've got no benefit. So multithreading is nice, but then you have to do a whole bunch of follow-on work. I did some of that follow-on work recently, where I had to take mutexes out of pieces of code and convert that code to immutability: the data structure was initialized once, before the threads were created, and didn't change after that, so the threads could all hit it simultaneously without needing to take locks. And that worked out very well.

And that is actually the end of my talk. Let's see how we're doing... well, the children are expected, so: any questions? Well, right now we don't have any in the chat, but I will ask: who is the author of those great drawings? Those are thanks to my children; I asked my children to draw some pictures for me, and they very kindly coloured them in. Wonderful. Okay, great, that's me; I'll hang around in the chat if anyone wants to ask questions or anything. Absolutely, the stage is yours for the next five or so minutes. We have a question from Quickie: what change resulted in the biggest performance gain?
Sure. I think we got some really great improvements in the redlining code, either the redlining code or the notes code in Writer, I forget, where we accidentally had an O(n³) algorithm: a loop inside another loop inside another loop, iterating over the same thing again and again. That took a little while to sort out, because it was crossing over between two different sets of code and we had to pass iterators up and down between stuff, but it produced some really nice gains. That was like, wow: the first time I ran it and it worked, I thought, surely I did something wrong, it can't possibly be that fast. And that was really great, but most of the time the wins are pretty small; often I'll have tiny gains that add up over time. If you have four or five small, simple gains, you'll end up with a decent-sized gain, because suddenly you've crossed a magical boundary and now you're fitting inside the CPU's cache and it magically gets a whole bunch faster.

I see there's a question from Kendi... sure, let's fire it up. The question from Kendi is about the tools I'm using, and whether I can even screen-share how I use them. Sure, so let's see if I can get this to screen-share... You should be seeing the Intel VTune Profiler now. Yes. This is a free tool from Intel, it's really nice. It used to only run on Windows; I believe it now runs on Linux, but I've never tried it there, because I use different tools on Linux. So this is called the Intel VTune Profiler, it's part of the Intel VTune suite; it has its limitations, but it's pretty nice. So, for example, I'm configuring an analysis now... okay, this CPU is a little bit old and doesn't want to do that, so let's switch to user-mode sampling. It's a sampling profiler, and it comes with all the limitations of sampling profilers: if you're trying to find extremely small performance problems, then this is not the right tool for it, because it does sampling.
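The accidental O(n³) pattern described in the answer above, a loop rescanning a container inside another loop over the same container, usually collapses to a single pass once a hash map carries the inner lookup. A toy illustration of the shape of that fix (not the actual Writer redlining code):

```cpp
#include <cassert>
#include <unordered_map>
#include <vector>

// Naive version: for every element, scan the rest of the container again.
// This is the shape of the accidentally quadratic/cubic loops that hide
// in big code bases, fine on small documents, pathological on large ones.
long long equalPairsNaive(const std::vector<int>& v)
{
    long long count = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        for (std::size_t j = i + 1; j < v.size(); ++j)
            if (v[i] == v[j])
                ++count;
    return count;
}

// Fixed version: one pass with a hash map. Each element pairs up with the
// count of equal elements already seen, so the inner scan disappears.
long long equalPairsFast(const std::vector<int>& v)
{
    std::unordered_map<int, long long> seen;
    long long count = 0;
    for (int x : v)
    {
        count += seen[x];   // pairs formed with earlier equal values
        ++seen[x];
    }
    return count;
}
```

The "surely I did something wrong, it can't possibly be that fast" reaction is typical of exactly this class of fix: the complexity changes, not just the constant factor.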
because it does sampling, as opposed to something like Valgrind, which is an emulator and can collect much finer-grained data. But I haven't found an equivalent for Valgrind on Windows, so when I need to do extremely fine-grained stuff I'll switch over to Linux and do a Valgrind run there. So for example, here I have configured it with my application, pointing to my application, and I've passed in some parameters so it will load a file. I am switching on some user-defined variables here so that, for example, it exits after startup and disables recovery, so that it won't try to profile the recovery process and will only profile the opening process. So then we run it and it will fire up. It's now running... that may or may not work. Okay, it's not very happy with my Vulkan driver right now. Okay, so now it's managed to pull a profile, and it's post-processing the data that came out of there, and you end up with a relatively nice summary here. If you look at this histogram at the bottom here, you can see that LibreOffice is making extremely little use of the CPU, and in fact we're only using one core, and we're not even doing a wonderful job of using that one core. But that is in part just because of the nature of our application: LibreOffice is not amenable to very good usage of the CPU, because we have very complex data structures, and that requires bouncing around a lot in memory, so you cannot really ever get to the point where you make extremely good use of the CPU, except for a couple of narrow use cases in Calc where we managed to show some decent usage of the CPU. Now, this profiler typically works in the bottom-up view. It shows you the threads: at the bottom here you can see the different threads that are active, and as you can see there are very few of them, and these are the things that occupy the most CPU time here. And this is the bottom-up view, so it starts
at the bottom part of the stack, the last part of the stack, bottom or top depending on which way you think stacks grow, and you can drill down into it: where those pieces lead to, who's calling those pieces. You can also do the same thing from the top-down tree, so you can look at it the other way, depending on which way is most comfortable for you right now, and you can drill down and see where the time is being spent. Here we can see, for example, that 18% of the time was spent inside the splash screen code; that's because I don't think it actually loaded the document this time. And then you can see it's spending time in VCL event listeners there, and you can double-click on stuff and it will actually open the source code for you, which is really nice. You can't edit it or anything like that, but at least you can look at the code and go backwards and forwards between the code and the profile. And inside the profiler you can pull up a section of it and tell it to filter in on that piece of the timeline, so you can drill down into pieces of the timeline and see what was happening at certain points in time, and similarly you can make it drill into different pieces of the thread. Now, this whole application is quite similar to KDAB hotspot, which is a Linux program written by the great KDAB team, and that does a really nice job as well. I can't show it to you, unfortunately, because my Linux machine is at the other end of a remote desktop connection that doesn't work very well with screen sharing, but it is quite similar to this, and if you're on Linux then KDAB hotspot is the go-to tool and is very similar: you run a perf trace, which is also a sampling trace, and then you feed the perf trace into hotspot. In fact, I believe I gave a talk about that last time in Almería, and that works pretty well. It has flame graphs as well, which is really nice, which this
doesn't have. Okay, great, that's me, thanks. Excellent, thank you for this really, honestly, enjoyable talk; I'm glad to see that there are performance improvements coming to LibreOffice. All right, so next up we have Lohmeier, sorry, I tried my best, Christian Lohmeier, sorry, he goes by cloph, and his talk is pre-recorded, so sit back. Well, I'll start it right on the dot, so I'll give people a few minutes to congregate in here. Yes, it's pre-recorded, and there are two versions: one has the audio boosted, but otherwise they should be identical. Yes, I was going to use the audio-boosted one, I assumed that was the preferred one, but perhaps I should have checked. No, that should be the one, unless someone gets heavy distortion, then we should switch over. All right, now officially we've got Christian up, and he is speaking about the crash reporter service in LibreOffice. Hello and welcome to my talk about the crash reporting service used in LibreOffice. It is a pretty short talk; I try to stay pretty much on the top-level side of things, and the talk is structured like you see here: a short introduction as always, then from the most high level to a little bit more detail, then how QA can help with the whole process, showing some limitations and problems that the current setup has, and then it's already time for credits and prior work, and question time. So let's dive in with who I am. I'm Christian Lohmeier, mostly known by the nick that I use everywhere, which is cloph. I work as a release engineer and infrastructure administrator for The Document Foundation and have been with the project since its creation, and before that I was active in the OpenOffice.org project, so I have been around a long time, always on the infrastructure or buildbot side of things. I hang out on IRC, also with the nick cloph, and you see my email here, cloph at documentfoundation.org, as another way to contact me. So, to the actual topic of the talk, the LibreOffice crash reporting service, and on the high level
the first question that comes up is: why use a crash reporting service when we have a framework of so many automated tests already? For example, we are using Gerrit for the review process of patches, and each patch that is submitted to our Gerrit system is tested on all platforms, and basically all our built-in checks run on each patch by running the build-time tests. In addition to those tests, we are using different static analysis tools, for example the one provided by Coverity, and others, for example the fuzzing tools by Google, and those already catch many problems in our code without actually hitting the end user. We also have a pretty extensive set of documents that we run our import and export and re-import, basically round-trip, tests on, which should cover most of the problems with our import and export filters and crashes. But of course, even with all these systems in place, we cannot prevent bugs from happening, and especially cannot cover all hardware combinations, and certainly don't have the workflow of each user covered. For example, a different order of operations can lead to different results, and that's why it's important to have an additional source of information, especially for a disruptive bug like a crash that could potentially lead to data loss. So what we are using to build our crash reporting service is basically Google Breakpad; everything hinges around the Google Breakpad project. This is the component that creates the minidump when LibreOffice crashes, and it is also used to basically unwind that information back at the server level. The benefit of using a tool like Breakpad is that it allows decoupling the debug information from the application that you ship. Of course we also need a server to collect the reports, and for that we have a small Django web application that collects the incoming reports and is also used to basically map the stripped-down information that is created on the
client side, on the end-user side, against the full set of debugging symbols we keep on the server, so that we have working backtraces that point us to the actual source code after everything is processed. We also have some integration between the crash reporting site and our Bugzilla, so both systems can cross-link to each other and have pointers to each other for further information: to provide steps to reproduce, to hopefully attach sample documents if those are necessary, and this is a big help in the process, as I get to at a later stage in this talk. So, going a little bit more into detail, let's dive into what is actually involved in providing a build so it can be used as a tool for collecting crash reports. Of course, the first step is to create a build that has the debugging information in the first place. If you don't have debugging symbols, even a minidump is more or less useless, because human brains cannot work with raw pointers; there needs to be a way to unmangle this and track it back to actual code, to functions and files and lines in those files, to be useful. So after the build is done with the debug symbols, Breakpad's dump_syms tool extracts them and converts them into a format that is common across the platforms, and that is uploaded to the server, which allows the server to unwind the symbols back to their human-readable form. And the debugging symbols are stripped from the LibreOffice that is shipped to the end user, so it can be relatively lean and doesn't have to be bloated just because of the debugging symbols. Of course LibreOffice is still a rather large package, but if it were shipped with debugging symbols it would be multiple gigabytes in size. And once LibreOffice crashes, the Breakpad tool creates basically a snapshot, if you want, of the state that is currently running, and we don't want anything that could interrupt the current
creation of this state, so that's why we don't try to do anything but create this minidump. The interaction with the user, whether to report the crash or not, is all deferred to the next launch, in order to not mess with the state of LibreOffice after it crashed. The prompt that the user gets looks like this: basically a simple confirmation dialogue that, for each crash, asks whether you really want to send this information. You can disable the reporting globally in the options, and then it will never ask and will never send a report; but if you have enabled the crash reporting in the options, you still have the option to not send a report on each occasion. The dialogue also has the option to restart LibreOffice in safe mode, in case it's a problem that is not tied to any particular action but happens every time you launch LibreOffice, so that's another feature we have in here. So the user agrees to send the report, and LibreOffice sends the relatively small minidump file that was created to the server. The server cheats a little by not processing it immediately but just assigning a unique ID to the crash report and reporting that back to LibreOffice, and the user can use that link to go to the crash report site. From there they have the option to create a bug report providing more information about the crash itself, and hopefully all reporters actually do this, because having steps to reproduce a problem is basically the most important thing that helps QA in processing and judging the importance of such a crash. If you use the functionality to create a bug report, we make sure to automatically include the necessary information, so that the crash report site can add a link to the Bugzilla ticket, as well as having a link from Bugzilla back to the crash report, and with that information hopefully QA has enough to go on, or at least is able to ask the reporter, the person having the problem, for more
information to further track down the problem. The dialogue with the response looks like this; it's very simple, just the link with the unique ID that the user can visit, and hopefully they do, and provide more details. And as I said, I'm going to go back to what is actually included in the minidump; I glanced over it and said it's a snapshot of the state. It contains the files that were loaded, the libraries and the executables, and it also contains all the threads and the state of the processor registers and the stack memory at the point of the crash, plus some additional meta information, like the processor used to run the system, the operating system and its version; and specifically for LibreOffice we of course include the version of LibreOffice itself, and we are also interested in whether OpenGL was enabled or not, and what the driver for the graphics card was and what version it was, so we include this as additional information in the minidump. So next we have the situation that users report the crashes, and what's next? This is basically where the QA team comes into play, and, completely oversimplified, QA's role is to monitor the crash reporting site and keep an eye on clusters of new reports that indicate a regression was introduced in the latest version, and then, going a step further, create test cases or at least reproducible step-by-step instructions to make tracking down the source of the problem easier, and of course to be able to verify a fix in the later stages of the process. This is easier said than done, of course, but hopefully the information given in the stack trace regarding the code that was affected, as well as hopefully having a bug report with some background information about what triggered the crash, will make this a little easier. And still, of course, if there is
no corresponding bug report, or if the source code information isn't enough to help track down the problem or doesn't give you an idea, then instead of trying to read hundreds of thousands of lines of code you can take the practical brute-force approach and use bibisect. Bibisect is short for binary bisect, and it just means that we have a git repository with binary builds of LibreOffice for all active branches, and you can use the git bisect command to very quickly track down the change that caused a change in behavior, without having to compile a whole version of LibreOffice between each step. The benefit is that you can track down a problem without any knowledge about the code itself; you just need a way to reproduce the problem, and then anyone with a little disk space to host the repository can test one revision after the other. It always checks half of the remaining commits, and then half of that remaining half, so it narrows down pretty quickly: even if the number of commits you start with is very large, the number of steps is probably 10, 12 or 15 at most in any case, and then you have either a single commit or a few commits that are causing the problem. With that information you can have a look at who made that commit and what they were trying to fix; more often than not they have a bug report assigned to it with the intent to fix that bug, and having that information makes it easier to judge what actions might have caused the regression to happen. After you have tracked down the problem using a bibisect repository, even without knowing what the problem itself is, you have someone who committed code touching the area, and this is a hint who to poke, basically, who to nag about the problem, or who to ask for more insight, and this goes a great way towards getting a fix done for this particular problem. And QA,
while not necessarily doing the actual fix, is pretty good at providing test cases to prevent the regression from happening again. Of course this depends on the type of problem, but if it's a problem with a file format, for example, it's easy to add a sample file to the code and check that the problem doesn't occur anymore. Next, on to some problems or limitations with our current deployment. It's hard to say whether it's really a problem, but of course people don't update to the current version, at least not as frequently as we would like them to, and thus we have lots of reports coming in from people using old versions of LibreOffice, reporting problems that were already fixed in current versions. This skews the overview towards old crashes, because the user base of new versions is relatively small, or only starts growing as time goes on, while the old versions are the ones with the problems that are not yet fixed, so just looking at the raw numbers doesn't give you an indication of what is really happening with the current version. The LibreOffice crash reporting website has some filtering options: at least the starting page gives you a quick overview of how many reports were submitted for each version of interest, basically the final release versions, and then has an overview of the reports assigned to each of those versions. But if you go deeper, you get all reports from all versions, so the filtering capabilities of the Django web app could use some love to make it more usable, to be able to exclude versions from the listings, for example, or to focus on one single version, to check that something was fixed in an RC, for example, and not just in the main code line. And so far we also haven't removed any old reports from the crash reporting database, so of course it grew quite large over the years, and while I don't have any immediate plans to clear out the old entries, at some point I probably have to do it
to keep the size manageable and to keep the database query times at a reasonable level. Another limitation is that it's only available for Windows and Linux, but Mac users are a relatively small user base compared to Windows, as are Linux users compared to Windows users, so I don't think it matters too much to not have macOS in there, and Windows crashes and Linux crashes alone are enough to deal with; there is no lack of reports coming in. Another limitation is of course that it only works when you have the corresponding debug information on the server side, and this in turn means that it's only available for builds that are done by TDF, by me basically, and only for the alpha, beta and RC builds; none of the tinderboxes or daily builds have this integration, and they don't have the crash reporting enabled. This is basically the technical side of the limitations and problems, and of course there is the other aspect that I already touched upon: not all users go through the process of filing more information, especially if they don't have a Bugzilla account already; if this is a big hurdle for them, they either don't even visit the site to file a bug report, or they stop there and don't proceed further. And for those who do actually file a report, it's not clear whether they can provide enough information to have steps to reproduce the problem, so still a lot of the burden is on QA volunteers to track it down, try to find a way to reproduce it, and try to find a developer who can have a look at what's wrong. It's just as it goes with all problems: people either complain loudly about it, or they stay silent and complain about it to their neighbors but don't report it, and there's no real easy solution to fix all this, because either you make it so easy that every spam bot can abuse your system, or you have some login system in place that scares people away. So it's always a balance you have to strike, and we have enough
reports coming in that I think it's reasonable to act upon them, even if not everyone can add a comment. And of course, having a comment system that allows everyone to comment also gets into data regulation problems, because then you have no control over what is filed, how people access this information, and how it's shared, so having a system with its own user management, like Bugzilla, makes it a little easier. And this is basically it already for the pre-recorded portion of the talk; I'll have enough time for questions, I think. But of course I don't want to leave without giving thanks and credit where credit is due, and most of it goes to Markus Mohrhard, moggi: he basically created the system, for LibreOffice 5.2 I think it was, in 2016, so quite a couple of years back. He did the bulk of the integration and wrote the Django application, taking some inspiration from Socorro, the tool that Mozilla was using for their crash reporting. Norbert Thiebaud also did a lot of work on the infra side of things; he was basically setting up Gerrit and other development-related infrastructure back in the day. And then there was also Ricardo, and, sorry, I probably butchered the name, contributing to the server side of things back in the day. And of course none of this would be possible without having the Breakpad utility available, so I have a link to our main repository, and of course links to our crash reporting site and our Bugzilla, and now I'm ready to take questions and switch over to some live mode, and maybe do some live demos if necessary. And of course thanks to all the sponsors making this conference possible. Thanks. I'm going to wait just a few seconds so that anyone with buffered video can finish up. I saw a question in the room regarding backtraces not being resolved against the symbols, and this was a problem with the symbol extraction step: the dump_syms process did frequently segfault when processing the symbols for Windows 64-bit, and our tooling didn't bother to
check for any error during this processing. This has meanwhile been fixed, and it should resolve the symbols again; so it was a problem of the debugging information on the server, used to unwind the symbols, being incomplete, but for current versions it should all be good. Of course the original old reports are not resolved, but even reports for old versions should now be resolved again, since I also re-uploaded the debugging information for old versions. And to add to that: the problem still happens, so dump_syms still segfaults frequently while processing the symbols, but it's simply retried until it succeeds for the given DLL or executable and proceeds to the next one, so we run it multiple times until it succeeds. I didn't really bother finding out why it crashes or how the crash could be prevented; as long as a successful run gives the same result every time, I'm confident that the data is correct when it works, and I only have to re-run it when the process segfaults. And maybe just to show the crash reporting site as it is now: on the landing page you're seeing the number of crashes submitted per version, and the different colored lines each represent a single version of LibreOffice, and you see that around 400 to 600 reports are coming in per version. And of course you see the slowly rising lines at the bottom; those are the new releases that slowly gather a user base, but still most of the reports are from old versions. If you're actually using it, and I'm not sure whether you can see the mouse in the screen share, you get a breakdown of the versions and the number of submitted crashes on that specific date, and in the version selection you can select for example 7.1.6.2, basically the 7.1.6 final release, and yeah, it's a little bit slow for me right now, the demo effect. Okay, now you see a listing of the crash reports and the signatures for that specific version. So for example here you see a
Skia-related one that is happening, or was reported, 74 times on Windows in the last 7 days; the time frame can be chosen at the top. If you click on that you get to the individual submissions, and you can click on one to get the stack trace: you see the function signature and the source code file where it happens, and also the cross-link to Bugzilla, and clicking on that gives you the bug report history. It turns out it was an optimization that used a processor feature, the AVX instructions, despite not checking for actual support on the hardware, and it was fixed pretty quickly; it's fixed for the 7.1.7 and the 7.2.2 branches. And just showing at the top the information that the crash reporting site adds: this bug was filed from a crash report, and there you have the ID of the crash, which can be used to go back to the crash reporting site to see it. And this is basically what we collect: the version of LibreOffice, the ID used to uniquely identify the crash, the processor architecture, the operating system and the operating system version. That is basically all of it, and I think I'm also out of time now. If there are any questions... I need to have a second look... we don't think so. So then I guess I'll leave the stage and give the next presenter some time to set up; I think it's not a pre-recorded one. So, thanks for listening. Thank you, cloph. Yes, indeed, next up we have Marco Cecchetti and some improvements for the SVG export filter. Marco, are you around? Yes. Excellent; your mic is a little bit low, but otherwise it sounds good. Okay, I'll try to share my screen, so I suppose now you can see it. We can see the editor, but not the presentation, if that's what you're going for. And now you can see it? Yes, everything's great, thank you. So, the improvements we are going to see for the SVG export filter are all about exporting the background of the slide. The first topic is about being able to export a
slide that owns a custom background. In fact, one of the features of Impress is the ability to set a custom background for a slide, instead of using the background of the master page linked to the slide. This feature was not working, since the exported presentation always showed the master page background instead of the one used as the custom background. I suppose you are familiar with the basics of the SVG markup language; anyway, I want to remind you of the use element. The use element allows reusing an artwork, even a simple rectangle, more than once, allowing changes to its geometry and style features, so we can define an object just once and use it many times. And the use element is exactly what is used for getting a single master page rendered in each slide. However, the master page is not referenced directly by the use element, because a master page has some content that is customized by each slide or even generated dynamically, such as the slide number, date, time, or a custom footer owned by a slide. When a slide needs to be rendered, the JavaScript presentation engine creates a group of use elements referencing sub-objects of the master page instead of the whole master page. For creating this group of sub-object references there is a specific class, named master page view. What we are interested in, in this case, is the first use element, which references the master page background. In order to get a custom background supported correctly, we need to modify this class to make it aware that a slide owns a custom background, so that when it creates that use element, instead of referencing the background of the master page, it references the custom background. The master page view class, for creating this structure, needs some metadata information about the slide. This metadata is represented by the slide number, the visibility of the several text fields that make up the master page, and other kinds of information. The metadata is exported as custom attributes, all with a prefix, as in
the example below, where you can see for instance the master custom attribute, which specifies the ID with which the master page related to this slide is exported. In order to make the master page view aware that the slide is using a custom background, we have introduced a new attribute, named custom background. The metadata-generating method that exports this group of metadata information checks if the fill style property for the slide background is different from fill style none, and in that case appends to the group element a new attribute, which you can see in the example below in red. In this way the metadata slide class, which in the JavaScript presentation is responsible for parsing all these attributes and transforming them into properties of the metadata class instance, is able to provide information to the master page view class, so that it can know whether the slide is using a custom background, and then reference the ID of the custom background instead of the ID of the master page background in the use element we have seen in the previous structure. So we got custom backgrounds working. The second step is to provide the ability to use a bitmap as the background. Now, exporting a bitmap as background used to work in a very wasteful way, because the same bitmap was exported many times: that occurred not only when the same bitmap was used as a custom background for several slides, but also when a bitmap was used as a tile. In that case the same bitmap was exported as many times as the number of tiles; obviously that led to a very large exported SVG document, and the slide rendering in the browser was horrible. In order to solve this problem, we have modified the create-objects-from-background method of the SVG filter class. This method, in order to export the background of a slide or of a master page, used to create a GDI metafile for the objects that make up the background of the slide. This metafile is then passed to an instance of the SVG writer for converting its meta
actions into one or more SVG elements, so that the background gets exported. We have added a new task to this method: it iterates over the meta actions present in the metafile, collects all the meta actions related to bitmaps, and uses them to create a map whose key is the checksum of the bitmap embedded in the meta action. The checksum is independent of the bitmap position, so we have a single map entry per bitmap. Once we have collected all these bitmap-related meta actions, we can export the image elements that represent the bitmaps. We have implemented a specific method for this; it iterates over all the bitmap actions present inside the map we have populated, and for each entry in the map it creates an image element. The image element is then referenced by one or more use elements, at the place where the custom background for a slide is defined, or more than once for the same slide in case the image is a tile. In this fragment you can see a slide whose background is tiled, and as you can see the use elements are exported more than once and reference the same bitmap. This occurs inside the write-BMP method of the SVG action writer class, which usually is the default for bitmap meta actions: the bitmap is passed to this method and exported as an image element. We have modified it to compute the checksum of the passed bitmap, and in case this checksum is present inside the map, it exports, instead of the image element, a use element, with a transform attribute in order to place the bitmap at the appropriate position and size, and the href attribute set to an ID that contains the just-computed checksum of the bitmap. That is the way we get bitmaps used for custom slide backgrounds or for tiled backgrounds exported only once, and then rendered correctly by the JavaScript presentation engine; no modification to the JavaScript presentation engine has been necessary. This solution was a big improvement.
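The checksum-keyed deduplication just described can be sketched as follows. This is a hedged Python sketch under made-up element ids (the real code is C++ in the SVG filter): bitmaps are keyed by a position-independent checksum, one image element is emitted per unique bitmap, and every occurrence becomes a use element referencing it.

```python
import zlib

def export_background_bitmaps(placed_bitmaps):
    """placed_bitmaps: list of (x_position, bitmap_bytes) pairs."""
    images, uses = {}, []
    for x, data in placed_bitmaps:
        checksum = zlib.crc32(data)  # independent of the bitmap's position
        if checksum not in images:
            # single map entry (and single <image>) per unique bitmap
            images[checksum] = '<image id="bitmap%08X"/>' % checksum
        # every occurrence is just a cheap reference to that one image
        uses.append('<use href="#bitmap%08X" transform="translate(%d)"/>'
                    % (checksum, x))
    return list(images.values()), uses

tile = b"bitmap-pixel-data"
images, uses = export_background_bitmaps([(0, tile), (32, tile), (64, tile)])
# three tiles on the slide, but the bitmap data is exported only once
```

The same idea scales to tiled backgrounds: however many use elements reference a tile, the heavy image data appears a single time in the document.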
But it did present some problems in the tiled background case. The problem is that when the use elements are very numerous, rendering them is expensive for the browser, so if the background has a large number of tiles, as occurs for backgrounds of type pattern or hatch, we end up with many tiles, because these tiles are very small, and the result is that the browser can need one or two minutes to render a slide. In order to solve this problem we have used the SVG pattern element. The pattern element allows defining some object, some artwork, that is then used as a tile for filling some shape. Here you can see a simple example, where a pattern is made up of two lines, one red, one green, and is used for filling a rectangle. Note that the fill attribute is a reference to the pattern, made by specifying the pattern ID. The other attributes of the pattern, such as width and height, are the size of the tile that is going to be used for filling, and the x and y attributes are the offset of the tile with respect to the top left of the bounding box of the shape being filled. This pattern element is exactly what we needed for exporting the tiled background correctly, and here are some details of how that works. We further modified the create-objects-from-background method, which now not only collects the bitmap meta actions as in the previous case, but also checks whether the background type is tiled; in that case it modifies the GDI metafile created from the objects that make up the background, replacing all the bitmap meta actions related to tiles with a single comment action. This comment action has a special starting tag, which as you can see is slide underscore background, and then a parameter. This parameter is an ID, namely the ID of the rectangle you have seen here. The idea is the following: we define a pattern whose content is the bitmap, and then we define a rectangle with the size of the slide which is
filled using that pattern. So when we substitute the tile-related meta actions with this comment action, the ID passed as a parameter inside the comment is the ID of the rectangle filled by the pattern representing the bitmap. In order to be able to create the pattern and the rectangle element we need to collect some data. This data is represented by the structure you can see at the bottom of this slide: the bitmap checksum, identifying the bitmap used as the content of the pattern; the position, that is, the offset of the tile; the size of the tile; and the size of the slide, used for defining the size of the rectangle. All this information is collected in a map whose key is the ID of the slide that has a tiled background. In a second stage we export all the pattern and rect elements, as you can see in the example below, by iterating over this map, and we use the collected information to set up the pattern attributes and the rectangle attributes correctly. In the end, the content of the pattern is a use element referring to the bitmap representing the tile, the one whose checksum we collected at the previous stage, the checksum of the first tile bitmap. Once we have exported these pattern and rect elements, we are finally able to use them for defining the background of the slide. In the writeActions method, when a comment meta action is hit, we check if it starts with the slide-background tag, and in that case we create a use element referring to the element having the ID passed as parameter inside the comment. As I said, this ID is the ID of the rectangle filled with the pattern, and since we have replaced the tile-related meta actions with this single comment action, the use element is exported exactly where all those bitmap meta actions representing the tiles would have been exported, that is, exactly where the background of the slide is exported.
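Putting the pieces just described together, the per-slide export of the pattern and rect pair might look roughly like this. All names here are illustrative; the real code emits the SVG from C++:

```python
def export_tiled_background(slide_id, checksum, tile_size, offset, slide_size):
    """Emit the <pattern> + <rect> pair described in the talk (simplified).
    The pattern content is a <use> referencing the already-exported tile
    bitmap; the rect covers the whole slide and is filled with the pattern."""
    tw, th = tile_size
    ox, oy = offset
    sw, sh = slide_size
    pattern = (
        f'<pattern id="tile-{slide_id}" x="{ox}" y="{oy}" '
        f'width="{tw}" height="{th}" patternUnits="userSpaceOnUse">'
        f'<use href="#bitmap-{checksum}"/></pattern>'
    )
    rect = (f'<rect id="bg-{slide_id}" width="{sw}" height="{sh}" '
            f'fill="url(#tile-{slide_id})"/>')
    return pattern, rect
```

The browser then repeats the single tile itself when painting the rect, instead of walking thousands of individual use elements.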
You can see now that where before we had several use elements exported, each referring to an image element, that is, to a bitmap, we now have a single use element referring to the rectangle, and that is how we solved the performance issues with tiled backgrounds. So these are some small improvements to the SVG export that can be appreciated when creating a presentation. That's all from me; thank you for listening, and I'm happy to take any questions. Well, thank you Marco for the speech. Any questions? Please put them in the bridge chat. Okay, I think there are no questions. Thanks again for listening, of course. And, is that to say you're leaving the room? We can close the session, or we can continue for the next five minutes, because that's how much time we have. Alright, excellent, I'll be back in five or six minutes to introduce the next speaker. Alright, we're back. Next up is Tomaž Vajngerl. Tomaž, are you around? I am. Excellent. Can you record that? Yes, and I've got it queued up here. Great. Hi, I'm Tomaž Vajngerl, I'm from Collabora, and I will present to you improved document searching in LibreOffice. First I would like to clarify what is meant by improved searching. We can search documents in different ways: searching internally and searching externally. Searching internally, inside LibreOffice, means just traversing the internal document model and searching for some string. But there is another possibility, searching externally. By searching externally I mean, generally, that we input the documents into a search database, and using the search database we can then search multiple documents for phrases; this is what the improved search in the title refers to. So generally, when we search externally with a database, we have to transform the documents into text and feed it into a search engine, which then searches inside this transformed text for the search results.
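Conceptually, the external-search pipeline just described boils down to: extract text, feed it into an index, then query the index. A toy in-memory version (not Solr, purely an illustration of the idea) could be:

```python
from collections import defaultdict

def build_index(docs):
    """docs: {filename: extracted_text}. Builds a word -> set(filenames)
    inverted index, the core data structure behind search databases."""
    index = defaultdict(set)
    for name, text in docs.items():
        for word in text.lower().split():
            index[word].add(name)
    return index

def search(index, phrase):
    """Return all documents containing every word of the phrase."""
    words = phrase.lower().split()
    if not words:
        return set()
    result = set(index.get(words[0], set()))
    for w in words[1:]:
        result &= index.get(w, set())
    return result
```

A real engine like Solr adds tokenization, ranking, fields and a REST API on top, but the document-to-text-to-index flow is the same.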
The problem here is that we don't get a really good context for these searches. When a result is found, we get: okay, in this document we found this search result; but we don't really know where exactly this search result is, what the context around it is. This is what we want to improve here, so that we can search for phrases in multiple documents, as I said. For this there already exist multiple search platforms, search databases. The one we used, and a very popular one, is Apache Solr, which uses Apache Tika to transform documents into text; Apache Tika is a Java library that can open a document and transform it into HTML or just plain text. Another one which is also very popular is Elasticsearch, but I don't know much about it; mainly I explored Apache Solr. What is the general idea? The idea is to use LibreOffice and Collabora Online to add context to the search results: for a search result, we would render an image of the place where this search result was found inside the document. So now the solution description, what needs to be done to realize this idea: we need to somehow create the search data and put it into a search and indexing platform, the search database; we have to import it into the search platform itself; then we have to run the search on the search platform and get results; and after that we have to render the image of the location of the search result. Generally, the first three steps are already implemented elsewhere, but for the last one I don't know of an existing solution that also renders the location inside the document and shows it to the user. Now, the first step: with LibreOffice we can create the search result data, and the way we implemented this is as a new export format. This means that you can export, or save as, and it will save the current document into a search
indexing data format, an XML file. The good thing about implementing this as an export format is that it provides a lot of things out of the box: we can already use it on the command line with the soffice --convert-to switch, as it says here. Additionally, we can use the LibreOfficeKit API saveAs function to create this search data document, and the LibreOfficeKit API is already used by Collabora Online, which provides a convert-to REST service, so we can reuse that and don't need to implement it. Okay, next I would like to talk about the data format. The data format of the search indexing data is just a flat XML file. Why is it flat, without a lot of nested elements? Simply to be very, very simple, so we can easily transform it to a vendor-specific format, like the one used in Solr for example. At the top you can see one example of this search data format. It always has a root element, indexing, and then child elements, either paragraph or object. A paragraph is just one paragraph inside Writer; and not just Writer: any shape that has some text also has its text exported as paragraphs. The other element is object, which can be a shape, an image or a Fontwork; this one mainly provides additional metadata for the object, so we can also search inside the metadata, not just the paragraphs. The paragraph has attributes, the most important being the index and the node type, and in addition the object name; with these we can identify which paragraph we are looking at inside the document model in LibreOffice. For objects the important attributes are the object type and the name; the name always uniquely identifies each object inside the document. Then we also export other attributes, additional metadata as already said: the alt text and the description of the object.
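The flat indexing XML described above might look roughly like this (element and attribute names are approximated from the talk; the exact names in LibreOffice may differ), and because it is flat it parses trivially:

```python
import xml.etree.ElementTree as ET

# Approximation of the flat search-data format described in the talk.
sample = """<indexing>
  <paragraph index="0" node_type="writer">LibreOffice is a free office suite</paragraph>
  <paragraph index="1" node_type="common">Text inside a shape</paragraph>
  <object name="Image1" object_type="graphic" alt="A screenshot"/>
</indexing>"""

root = ET.fromstring(sample)
# No nesting to walk: paragraphs and objects are direct children of the root.
paragraphs = [(p.get("index"), p.text) for p in root.findall("paragraph")]
objects = [o.get("name") for o in root.findall("object")]
```

The flatness is the point: a one-pass loop like this is all a consumer needs to re-emit the data in Solr's own format.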
So, how is this implemented? We have an indexing export class and an indexing node handler class. The indexing export class is just the root class for the search data indexing export, and it delegates everything to the indexing node handler. The indexing node handler is a subclass of the model traverser. The model traverser is a visitor class that visits all the elements inside the document model and delegates what to do with these elements to the handler; the indexing node handler then just writes this into an XML file with the structure I already explained. The model traverser is derived from the accessibility check functionality, which also needs to traverse the document model; currently, because it is just a copy, the accessibility check is not yet using the model traverser, but the idea is that both will reuse this class, and maybe other uses can be found for it. For example, collecting the colors a document uses could also use this model traverser, but that is something we will implement later. The next step is to render an image for a search result. So we perform the search and we get a search result; the search result has to carry all this additional metadata, the index and the node type, which are important for identifying which paragraph or which object it points to inside the document model, and with this information we can then render the result. This process is divided into two parts. The first part is that we need to get the rectangle, the location where the search result data is located in the document. For this we have a search result locator class; the search result locator can take the search result data in XML or JSON format, and it also accepts a special structure which is used inside tests. When we get this rectangle from the search result locator, we can then just render the image.
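The traverser-and-handler split described above can be sketched as follows. This is a hypothetical, much-simplified Python model of the C++ design: the traverser walks the document nodes and delegates each one to handlers, so the same walk can serve indexing, accessibility checking, or color collection.

```python
class ModelTraverser:
    """Visits every node of a (toy) document model, delegating to handlers."""
    def __init__(self, handlers):
        self.handlers = handlers

    def traverse(self, document):
        # The real code walks the Writer/Impress model; here a list suffices.
        for node in document:
            for handler in self.handlers:
                if node["kind"] == "paragraph":
                    handler.handle_paragraph(node)
                else:
                    handler.handle_object(node)

class IndexingHandler:
    """Collects the data that would be written to the indexing XML."""
    def __init__(self):
        self.entries = []
    def handle_paragraph(self, node):
        self.entries.append(("paragraph", node["text"]))
    def handle_object(self, node):
        self.entries.append(("object", node["name"]))
```

Swapping in a different handler (say, one that checks accessibility) reuses the traversal unchanged, which is exactly the reuse argument made in the talk.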
The rendering uses the paintTile API that is already implemented in LibreOfficeKit. The next step is then to implement a render-image service inside Collabora Online, so we can use this as a service on the web. For this I created a REST service, render-search-result, which is very similar to the already existing convert-to service mentioned previously, the one used to create the search data for indexing. What is needed for this REST service is that we provide the document and we provide the search result; we execute the service, sending both to the Collabora Online server, and we get back the rendered image of the search result location. So far I explained what was done in LibreOffice and Collabora Online, but how does everything fit together, including the database, the Collabora Online server, all the pieces, so that the user can search? For this I created a proof-of-concept web application, which looks like this; I will demo it later, so maybe first I will explain what it does. It is just a simple web application that demonstrates how everything should work together. It uses Apache Solr as the search platform. The HTTP server is just Python's simple HTTP server, with Python used for the server-side processing, including executing the REST services on Collabora Online and Solr; it uses HTML and JavaScript on the client side, with AngularJS as the framework, just something I was previously familiar with and which is very strong with data binding and REST services, and Bootstrap for the UI, so it's easier to make the application look good. The last thing we need, of course, is a Collabora Online server, so that we can render the image for the search result and also open the document itself. The application has three major processes that it performs. The first one is the reindexing process: this just needs to fill the Solr
database with search data from the documents. Maybe what I forgot to mention is that the web application takes care of one arbitrary folder where all the documents are stored; all these documents are then taken into account and are available for opening, reindexing and searching. The trick here is that we need to reindex every time a document changes. As currently implemented, we always delete all the indexes and reindex everything, but ideally this should happen only when a document changes: then we only need to update the indexes, removing the indexes for the changed document and adding new ones, rather than removing and re-adding everything. Of course, if a document is deleted, we need to remove its indexes from the database. How do we reindex? For each document in the document folder, we request the XML search data from the Collabora Online server using the convert-to service. Once we get this XML file back, we transform it to the Solr format, which is a little bit different for entering all the search data into the database: it has a notion of documents and fields, where a document is not a LibreOffice document but generally corresponds to a paragraph or an object, and the fields are the additional metadata. We also add a special field, the file name, to identify which document the entry belongs to, and a special field, content, which is the paragraph text. Then we can submit this search data to Solr using an HTTP POST request. Now the search process: Solr has a very extensive querying API, and we don't need everything here, but we could of course use all of it if needed. The way we search with Solr is that we just send a simple GET HTTP request to the Solr server, and as a response we get a JSON document with the results, or with no results, depending on whether the database found something or not.
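The transformation step just described, from the flat indexing XML to Solr-style documents with fields, could be sketched like this. The field names here are illustrative, not necessarily the exact ones the proof of concept used:

```python
import xml.etree.ElementTree as ET

def to_solr_docs(filename, indexing_xml):
    """Turn the flat indexing XML into a list of Solr-style documents:
    one per paragraph, each carrying the source file name and the text."""
    root = ET.fromstring(indexing_xml)
    docs = []
    for p in root.findall("paragraph"):
        docs.append({
            "id": f"{filename}:{p.get('index')}",
            "filename": filename,        # identifies the source document
            "content": p.text or "",     # the searchable paragraph text
            "node_type": p.get("node_type"),
        })
    return docs
```

The resulting list is what would be serialized to JSON and sent to Solr's update endpoint with an HTTP POST.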
Other formats are supported too, like XML, but if we are dealing with a web application, JSON is the simplest way to do it; we don't need to parse it the way we would XML. The web app currently searches only the paragraph text, so only the content field is important and it is the only field we search in, but we could also search in other fields, for example to limit the search to certain types of objects or certain types of paragraphs, or, if we want to search just one document, to search only in the file name field. When we get the results back, we need to transform the search result again into something for which LibreOffice can render the image; currently LibreOffice supports either JSON, as I said, or XML, and the search result needs to be compatible with that. We can also reuse this JSON inside the web application itself to show the search results; generally it is just an array of objects with key and value pairs for the metadata. Then, of course, we have to show the results in the web application. Last is rendering the image. After we show the results in the web application, we can request rendering of the image for each search result. This is done asynchronously, so we can show the results first and then update them as the images get rendered. We use the render-search-result service: we send the search results and the document to the Collabora Online server, and we get back the image as pure binary PNG, which we transform to a base64 string; this is generally easier to deal with in a web application. Now the demo. This is the web app: here we have a list of documents, and the first thing we need to do is reindex all the documents. This area shows the status of what is going on, and the status column says reindexing; now it has finished reindexing, and we can search. Let's try a simple search.
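The PNG-to-base64 step mentioned a moment ago is a one-liner; turning the raw bytes into a data URI lets the web page embed the rendered result directly in an img tag:

```python
import base64

def png_to_data_uri(png_bytes):
    """Encode raw PNG bytes as a data URI usable in an <img src="..."> tag."""
    b64 = base64.b64encode(png_bytes).decode("ascii")
    return "data:image/png;base64," + b64
```

This avoids storing the rendered snippet images server-side just to serve them back by URL.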
Say we search for LibreOffice: in the About LibreOffice document we found a couple of search results, and this is now rendering where the search results are found, in which paragraphs; these are images of places inside the document. Now we can search for something more general, for example web, and now there are multiple results for web across multiple documents. Here we see that this result is a Fontwork, and it found the word web inside the Fontwork text; if you go down you can see this one is an image, and it found the result inside the caption of the image; similarly, since an image is handled like a shape, there is a result in the caption of a shape; the following two are shapes, rectangles, and it found the result inside the shape text; and so on, and next we have a couple of paragraphs in another document. If we are interested in how this document looks, we can just click here and open the document inside Collabora Online. This is the document: we see here the image, here the shape, there is also a table, which is also shown as a result, another shape, and this is the Fontwork that we saw previously. Let me go back and maybe search for lorem: lorem is found in a lot of places, generally just paragraphs, and here the search finished and found nine results. Generally, if we go in and change a document, we always have to reindex, or the changes in the documents won't be found in the search results. So this is all for my demo; thanks for watching, and bye-bye. Alright, thank you for the talk. Now we are out of time and we don't have enough for live questions, but by all means go ahead and answer the questions in the chat in the LibOCon room one. Now for the final bit we have, well, I guess, first my talk, which is pre-recorded; I suppose it should start now. Hello everyone, I am Mike Kaganski, a software engineer working for
Collabora Productivity, mostly working on its core part. This talk is about a feature that is new in LibreOffice 7.2: multi-column support in text boxes, shapes and other graphical objects. This is the first time I have pre-recorded a talk for a conference, so please excuse me if I do something incorrect or funny. First of all, I want to say that I am glad to be part of the great team of professionals that Collabora Productivity is. It is a unique position, making commercial customers happy and at the same time bringing the resulting improvements to benefit everyone in the open-source project. The feature we have implemented is a perfect example of this mutual benefit that commercial customers and other users get, because of Collabora's commitment to invest its expertise and do all the development in open source, and I am proud to be part of it. Let's proceed to the multi-column support in LibreOffice, and first of all let's discuss what was supported prior to LibreOffice 7.2. If you work with Writer, you possibly know that multiple columns are in themselves not a new feature in LibreOffice: the feature was available long before, in Writer; it was already there in the very first release of OpenOffice.org, and I suppose it was available even before that. Writer's columns are really very powerful, useful and configurable, far beyond what is available in our competitors: in Writer's pages, sections and frames you may use columns and set them up very finely. Not so in graphical objects like text boxes and shapes, all of which use the edit engine, the component that does not rely on Writer's layout machinery. Here you see a text document with a section that uses two columns; you also see the dialog to configure the columns, and you may see how rich the configuration is, where you can set up the distribution of the text or the form of the separator line, something that is absent in our competitors. It was already present in Writer before the 7.2
release, but the other modules, Calc, Impress and Draw, didn't have this multi-column support, because they don't use Writer's layout machinery; as was already stated, they all rely on the edit engine. Since we discuss here the development that brought this capability into these modules, the question is: did they actually need this support, or was it okay without it? Let's see. I want to focus on Impress, as the module that benefits most when any new feature appears that provides new layout capabilities. When authors create slides, they might want to put their text into columns, and they would naturally expect the text to flow freely from one column to the next, while keeping the good stuff they already have with text boxes in Impress, for instance automatically scaling the text to fit the size of the box. But that wasn't possible directly: a user would need to create, for instance, two text boxes, or a table with two columns, and then would need to type the text and move it from one element to the other, from one cell to the other, to distribute it; you would need to apply the same properties to the two elements so that the text looks consistent; and when you edit the text, you naturally need to do all this again, because, for instance, the scaling might need adjustments. All of that had to be done manually. The other aspect requiring the new feature was interoperability with other suites that already provide multi-column support in presentations. When LibreOffice users get documents created in those suites, they naturally want to see them as they were intended to be, but that wasn't possible before 7.2. Indeed, we tried to work around the missing feature: we tried, for instance, to import multi-column text into tables, trying to figure out how to distribute the text over the different cells, but that couldn't be done perfectly, and often the text ended up being distributed in a different way, not as it was intended. Here is a screenshot of
Microsoft PowerPoint with a slide with a two-column layout and the configuration dialog. So naturally we needed to provide ways to import and export the data, to allow our users to interoperate with users of other office suites. Now that we see the need is real, let's proceed to the change itself. When you start working on a big new feature, you need quite some effort: the feature takes much time, it requires developers to work for weeks and even months, and you may only be sure that the feature will be completed when someone supports the development and makes it possible for developers to focus on their task. When a commercial company gets a support contract, and they listen to their employees and create support tickets to resolve the problems their employees have, the company does just that: it makes the difficult development possible, which otherwise could be totally impossible for a volunteer; it could be just too huge for a volunteer, who needs to eat in the first place. And we are glad to have SUSE, our valued customer, who keeps helping the whole community to improve LibreOffice over the years; they are the ones who made this development possible. When implementing a new feature, you need to check whether it would bring changes, possibly incompatible changes, to the file format used by the office suite, and we studied whether ODF already has the required support or whether we would need to extend it for the new feature. It turned out that ODF already has all the required support: it already defines that graphical objects may have a columns element in them. That was a nice surprise for me, and I want to thank all the wise people working in the ODF technical committee at OASIS; thank you. Then we proceeded to analyze the scope of the task: whether we needed to implement the whole rich feature set of Writer's columns in graphical objects. After discussing the possibilities, we decided to not implement everything that Writer has, and to focus and limit the task
to simple columns of the same width, without separators. There were two reasons for this decision. First, keeping things simple allowed us to provide the new feature faster. Second, and possibly most important, having too complex a feature set implemented from the start would create another compatibility problem, this time in the opposite direction: users would be unable to save these features into external file formats, which would make it difficult for them to share their documents with users of other office suites. So we decided to limit our task; additionally, it was unclear whether this larger set of features was actually in enough demand to justify the development. The next nice part of the task was to implement the document model for the feature and to make sure that it gets stored to, and imported from, the file formats: our native ODF, and OOXML, which Microsoft Office uses. I really enjoyed how well engineered our existing code was when I worked on this feature: the code that already handled this task for Writer was so well engineered that it was very easy to reuse; it needed very few changes, very few modifications, to make it work for a broader set of objects. We did some refactoring to make it reachable from the edit engine, and it worked. I always enjoy working with code that is so well engineered, because it makes the task easier and allows you to do more good stuff in less time, and I am very glad to be able to work together with brilliant engineers like Miklos, who is always ready to share his expertise and point to ways to avoid reinventing the wheel; it was he who pointed me to this well-written code, thank you. One thing that required completely new development from our side was the layout itself: we needed to make quite a few changes, and the most demanding part was the new iterative algorithm to distribute lines of text over several columns. When you implement an iterative layout algorithm, one of the biggest
problems is to make sure that the algorithm ends at some point, that it never hangs; and since we changed an algorithm that was previously linear into an iterative one, we needed to make sure that it can't freeze. I'm glad that we were able to do that: I'm sure that our algorithm never hangs, it works reliably, and that's what I'm proud of. In the end there were a number of commits implementing the feature in steps: there was an initial set of merged patches, then there were some more patches that fixed some omissions and bugs, and now the change is ready for everyone to enjoy. As the end result, in LibreOffice 7.2 the text boxes in Impress, as visible on this slide, as well as in all the other modules, got the power to distribute your text over the columns just as you would expect. The configuration of this feature is very simple: it is hosted in the text properties dialog, where the other text properties were already hosted, and it is also available in the sidebar. You may see the comparison of the dialogs used in Microsoft Office and in LibreOffice 7.2, and you may see the parity of features in the two office suites. So what's left to do? There are a number of issues filed against this new feature in Bugzilla; that is natural, it is a new and big feature, still in its first steps, and especially so when the feature has just been published for wide testing. One of the most important issues there is, in my opinion, bug 140022, which is about a still unresolved compatibility problem between Microsoft Office and our new implementation in LibreOffice; to resolve it we need some missing bits in the initialization of the edit engine, and recently Noel Grandin has made some nice changes toward this, thank you very much for that. Yet more work is needed on this problem, as well as on other still unresolved problems with the new feature; still, I am glad the feature is functional and ready for you to test, please do that. This is basically the end of my report about working on this new feature.
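An iterative column-balancing loop of the kind described, with a simple termination guarantee, might look like this. This is a toy model, not the actual edit engine code: the candidate column height only ever grows and is bounded by the total text height, so the loop provably ends.

```python
def balance_columns(line_heights, n_columns):
    """Find a column height at which the lines fit into n_columns columns.
    Greedy fill per candidate height; the candidate grows strictly and is
    bounded by the total height, so the loop always terminates."""
    if not line_heights:
        return 0
    height = max(line_heights)      # lower bound: the tallest single line
    total = sum(line_heights)
    while True:
        cols, used = 1, 0
        for h in line_heights:
            if used + h > height:   # line doesn't fit: start a new column
                cols += 1
                used = h
            else:
                used += h
        if cols <= n_columns or height >= total:
            return height
        height += min(line_heights) # grow strictly; bounded by total
```

The real algorithm balances actual line layouts and interacts with autofit scaling, but the structure, try a height, measure, grow, retry, with a monotone bounded candidate, is what guarantees it never hangs.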
The feature landed in LibreOffice 7.2, brought to you thanks to Collabora, which magically makes commercial support benefit the community. The task was really a great pleasure. Thank you for your attention, which allowed me to feel all that pleasure again, as if I were working on it a second time. Now it's time for questions, and I am ready there in the room, waiting to answer them. Thank you very much. Thanks; I am always glad when there are no questions, it means that I did a perfect job describing everything. And thanks to all the community and to TDF; it is a pleasure, of course. It would be great to meet in person, but this conference makes us feel closer to each other.