So I'm Xisco Faulí, I'm a QA engineer for TDF; I've been working as a staff member for the last three years, and my presentation today is called "QA (Quality Assurance): One Year Review". We're going to review what happened in the last year. The presentation is divided into two main topics: the first one is about what happened in Bugzilla during this last year, with some charts and some stats, and the second part explains how the team is improving its automation, the different ways of testing LibreOffice.

So first we start with the stats. That's something I've already done in other presentations, and for this one I found it interesting to compare what happened during this last year with the previous year. For the data, I took the information for 2018 from November 2017 until November 2018, and for this year from November last year, a bit after the last conference took place in Tirana in September, until the beginning of this month.

First, when a bug gets reported it goes, as you may know, to UNCONFIRMED status. We try to keep the number of unconfirmed bugs as low as possible, because the lower the number is, the faster we can respond to new reports. This is how the number of unconfirmed bugs has evolved during this last year: at some point we reached more than 600, and the lowest was around 450, so on average it's 538. Compared to last year, the average number of unconfirmed bugs has increased. Basically that means the trend is going up, so we should try to push it down again, at least back to around 400 or so as we did last year. That's something we have to keep in mind.

Then, comparing the number of created bugs: in 2019 we had around 7,400 bugs, which is 300 more than last year, around 5% more. As for the number of bugs triaged by the QA team, as we got more reports we also did more triaging.
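As an aside, counts like the unconfirmed-bug numbers above can be pulled programmatically from Bugzilla's REST API. A minimal sketch, assuming the TDF Bugzilla instance at bugs.documentfoundation.org; note that the `count_only` parameter is an assumption about the server's Bugzilla version (dropping it returns the full bug list instead of just a count):

```python
from urllib.parse import urlencode

BUGZILLA = "https://bugs.documentfoundation.org/rest/bug"

def unconfirmed_query_url(product="LibreOffice"):
    """Build a REST query URL for counting UNCONFIRMED bugs.

    `count_only` is assumed to be supported by the server's Bugzilla
    version; without it, the endpoint returns the matching bugs as JSON.
    """
    params = {
        "product": product,
        "bug_status": "UNCONFIRMED",
        "count_only": 1,
    }
    return f"{BUGZILLA}?{urlencode(params)}"

# Fetching is then a plain GET returning JSON, e.g.:
#   import json, urllib.request
#   data = json.load(urllib.request.urlopen(unconfirmed_query_url()))
print(unconfirmed_query_url())
```

A scheduled job running a query like this is one way such a trend chart can be produced without manual bookkeeping.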
So more or less the numbers of triaged bugs and created bugs match for each year, which means, as I understand it, that we keep doing the same amount of work. I forgot to say that I'm showing here what seems interesting from my point of view, or from the QA point of view, but if you would like to know any other information, just let me know and we can gather that information and show it in other presentations in the future.

This one is about regressions. These are open regressions, so as the chart shows, we keep accumulating unresolved regressions. On average in 2019 we have around 1,000, while last year it was around 883, so I would say the number of open regressions increases by roughly 100 every year. This year 1,200 bugs were identified as regressions, which is 100 more than last year, and of those, 384 are still open. That's higher than last year, but it's kind of expected, because some of them were only recently discovered, so they are still unresolved simply because we haven't had time yet to fix them.
On the one hand we have regressions, but sometimes it's difficult to know when a regression was introduced; especially in the beginning of the project we didn't have bisect repositories, so it was difficult to identify the commit introducing the regression. That's why this chart seems more significant to me: we use the keyword "bisected" when the commit introducing the regression has been identified. Sometimes that makes the regression easier to fix; I say sometimes because sometimes it doesn't, but at least we have a starting point from where we can work on it. The chart shows the same trend, going up, but the difference between the beginning of the period for 2019 and the beginning of this month is not that big, just 80 more. The number of identified bisected regressions in both years is more or less the same, and 265 are still open; as said before, some of them were recently identified and we haven't had time to fix them yet.

This chart is quite interesting, because the number of highest priority bugs is going down. There were two points in the chart where the number dropped, and that's mainly because at those points the QA team decided to review those bugs, and we found out that many were inherited from OpenOffice times. It's been nine years now, and it doesn't make sense to keep them as highest priority for such a long time; after nine years, they are not as critical as they used to be. Right now we are at 25 highest priority bugs. Ideally we should be around zero; we get new ones every week, but then most of them get fixed quite fast, as they are mentioned in the ESC meeting. So at least I'm happy to see this chart trending down; hopefully by next year we should be around 10.

As we can see comparing 2018 and 2019, this year half as many were fixed compared to last year. The reason why we have fewer highest priority bugs fixed this year is that in the past any crash, even a really hidden crash, was flagged as highest priority. We have slightly changed this policy and are now stricter about which bugs really deserve highest priority, and that's why in 2018 we had more highest priority bugs fixed compared to this year. On the other hand, if we look at the high priority bugs, which are one level below (they should be fixed soon, but their priority is not as important as highest), we see that this year we got around 30 more fixed than last year. The trend is also going down, but it's important to mention that we are still at 475. So first, from the QA team, we need to review, recheck and re-evaluate them to see if they really are high priority, and if they are, then all of us together with the developers should try to bring this chart down, because 475 is quite a high number.

Other interesting fixes: we fixed around 241 crashes this year, which is 50 more than last year. It's also interesting to see that many performance issues were fixed as well, almost double compared to last year; there were many performance fixes in Calc and also in Writer. And it's interesting to see that we are still fixing old bugs; when I say old bugs, I mean bugs that were reported more than four years ago. In 2018 we fixed many more of them than this year, but we are still there, still fixing them. That's important, at least to me, because it means those bugs are not forgotten in Bugzilla.

Then, for the bugs that were closed this year: we have 6,852 bugs fixed; well, when I say fixed, I mean resolved. 34% of them were resolved as FIXED, 18% as WORKSFORME, 11% as INSUFFICIENTDATA, 6% as NOTABUG, 22% as DUPLICATE, 3% as WONTFIX, and the others, each below 2%, add up to around 5%. Every month we have a post in the QA blog with this information, and even more detailed information about what happened during the month in QA and development, so if you are interested, just check the blog.

About the second part of the presentation: it's going to be about automation. First, some of you may know that we use a script to check interoperability. We started to use it two and a half years ago and we are still using it; we have it in a VM and we run it with around 2,000 Writer documents and around 1,000 Impress documents. Basically, what we do is open each document, create a PDF, then open the same document in Microsoft Office, and compare the PDF outputs to see if there are differences; this way we can spot regressions in master. This year we found 34 interoperability regressions; 10 are still open, but as said, some of them were only recently introduced or found, and 20 are already fixed, which means almost two thirds of the issues found during the year are already fixed.

Here are some examples of the findings from this script. On the left we have the result from Word and on the right the result from Writer; as we can see, some paragraphs go left-to-right and others right-to-left, while all of them should be aligned to the left. That was an example found by the script and already fixed. This is another one which is not fixed yet: as you can see, some of the labels at the top and some at the bottom are gone. These are the kinds of issues this script finds.

Then, regarding UI testing, this year we had 170 related commits. RAL did a lot of work in this area, and we had a Google Summer of Code project building an interpreter for the UI testing. Right now we have 500 UI tests in total; most of them are for Calc and Writer, and it's interesting to say that we don't have any for Draw, so if you're here, RAL, that's an invitation.

Then there's something I've been working on this year; I call it mass UI testing, because we use the UI test framework with a pool of documents. The code is in that URL. Basically, we run a script and say: here is the directory where all my documents are, use this instance of LibreOffice, and this component (Writer, Calc or Impress). The script iterates over the list of files in that folder, creates a random user profile for each one, and copies in a registrymodifications file with some parameters, like macro execution disabled, ignore protected areas, and others. Then I have another file, a Python file, where I define the kinds of tests I want to run against those documents: every method starting with test_ will be a different test. We parse the output from the UI test framework to see whether the test passed or not, with a timeout of 20 seconds.

I forgot to say that this script is mainly used for finding crashes, so it's mainly tested in Writer and Calc. I still need to make it multi-threaded. I talked with Caolán and Markus, and I would like to run it against the pool of documents we use for the crash testing, in the VM. So far, these are the kinds of tests it runs: open the document, remove all the contents, then undo; insert a new line, undo; page break. They are simple tests, but when testing against many documents, some documents might crash.

This is a basic sample of a test: it executes the command select all, then delete, then executes the command undo, and then closes the document. Here is an example of a crash it found; I wanted to show you, but I think I'm running out of time already.

Right now I use it with 2,600 files in Writer, with formats including .rtf, and there were many crashes found related to the fly-at-char selections, which are inherited from OpenOffice. Recently Michael Stahl fixed them, and the fixes were backported to 6.3. I then retested all these crashes and they are all gone; I think there's just one left that seems related, but there were like 35 or 40 files crashing before, and now all of them are fixed, so that's really good news.

Then the same for Calc. These are some of the tests I've implemented and am using: inserting rows and columns, which we do for every sheet, then hide column, hide row, and print preview. This is an example of a test: we open the file, go to the first sheet, and then iterate over all the sheets, and for each one we insert a row below, undo it, and go to the next sheet. This is a crash that is still reproducible and was found by this script. In Calc I use 560 files with those extensions, and it was useful for finding crashes when Noel did the dynamic number of columns work; all of them were fixed right away, so in that sense it was really useful.

And that's it, thank you, and sorry for the rush.
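The mass-testing flow described above (iterate over a pool of documents, fresh user profile per document, discover test_ methods, dispatch UNO commands, record the outcome) can be sketched roughly like this. All names here are illustrative, not the actual script's API: CommandRecorder stands in for the UI test framework's executor, which in reality dispatches the commands to a running soffice instance, and .uno:SelectAll / .uno:Delete / .uno:Undo are real UNO command names:

```python
import tempfile
from pathlib import Path

class CommandRecorder:
    """Stand-in for the UI test framework's command executor; the real
    framework dispatches .uno: commands to a running soffice instance."""
    def __init__(self):
        self.log = []

    def executeCommand(self, cmd):
        self.log.append(cmd)

class WriterTests:
    """Every method starting with test_ is treated as a separate test,
    mirroring how the mass-testing script discovers what to run."""
    def test_select_all_delete_undo(self, ui):
        for cmd in (".uno:SelectAll", ".uno:Delete", ".uno:Undo"):
            ui.executeCommand(cmd)

def discover_tests(suite):
    # Same convention as the script: methods named test_* are tests.
    return sorted(name for name in dir(suite)
                  if name.startswith("test_") and callable(getattr(suite, name)))

def run_on_documents(doc_dir, suite_cls, extensions=(".odt", ".rtf")):
    """Iterate over a pool of documents; each document gets a fresh,
    throwaway user profile so one crash cannot poison the next run."""
    results = {}
    for doc in sorted(Path(doc_dir).glob("*")):
        if doc.suffix not in extensions:
            continue
        profile = tempfile.mkdtemp(prefix="lo-profile-")  # fresh profile per document
        ui = CommandRecorder()  # real script: launch soffice with that profile
        suite = suite_cls()
        for name in discover_tests(suite):
            getattr(suite, name)(ui)  # real script: enforce the 20 s timeout here
        results[doc.name] = ui.log
    return results
```

The real script additionally parses the framework's output to classify each run as pass, fail, or crash; the sketch only shows the document iteration and test dispatch that make "many simple tests against many documents" cheap to scale.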