Yeah, so let's get started. Welcome everyone. My name is Xisco Fauli, I am the QA engineer at The Document Foundation. I started two years ago in this position, and in today's talk I'm going to talk about how to improve LibreOffice quality together. Basically it's a summary of what happened in 2018 and what's coming in the near future in 2019. First I'm going to talk about what happened in Bugzilla, the bug tracker we use. Then I'm going to talk about the automation of things we are doing in QA, or how we can find ways to automate things. Then I'm going to talk about QA events in 2018. And in the middle of 2018 we created a blog about QA, so I'm going to talk briefly about that and what we are using it for. Basically, that's it.

So let's start with what happened in Bugzilla last year. We got around 7,500 reports in the whole year, from around 3,100 people. 88% of those reports were bugs and 11.5% were enhancements. On average we get 550 reports every month, and in October we got around 700 reports. The reason is that the 6.1.3 release came out about that time; I think that's when the release becomes stable, and then we got more people reporting issues. We also got almost 6,900 reports closed, by 520 people. This is slightly lower than the number of reports opened, but anyway, I think it's quite impressive to see so many bug reports handled in a year. On average it's slightly below 600 closed reports every month, and October was also the month where most reports were closed. When we get more reports, normally some of them are duplicates or not an issue.
So they get closed as well. On this chart we see the breakdown of those 6,900 closed bugs. Almost 32% of them were closed as FIXED, so there is a commit fixing them. Around 24% of them were duplicate bugs, triaged by QA. Around 18% were WORKSFORME, which means those bugs were triaged in the past, at some point they got fixed, then someone retested them and they were no longer reproducible, but we don't know the patch or the commit fixing them, so we just close them as WORKSFORME. Then we have around 12% of them which were closed as INSUFFICIENTDATA, because we requested more info from the reporter but we didn't get any info back.

Normally, when a bug is reported it goes to UNCONFIRMED status. From that point on, QA jumps in, triages it, and decides whether to move it to NEW, DUPLICATE, or whatever state it needs to go to. The important thing is that the lower the number of UNCONFIRMED bugs, the better, because if this chart is going up all the time, it means we are not triaging those bugs in time, and if we don't triage them, it's going to take longer to fix them. So it's always a struggle to push this number back down: in March we put it at 300, then it went up to 500, then back again to 350. It always follows this kind of trend, but the idea is that we keep at it, so it's not going up all the time.

Then, well, this is an interesting one.
We see that over one year the number of open regressions went from 850 to more than 1,000. In April or May, more or less, there was a drop. I took a look in Bugzilla, and there was this huge change from Armin, the AW080 refactor, so many regressions were introduced by it, and then at some point he fixed all of them, so we had a drop. But yeah, it's something to take into account, because while I was preparing the slides I created the same chart for the previous year, 2017, and it was the same trend: we get more and more regressions. These are the open regressions, so I understand that many people report bugs and at some point we find that they are regressions, because those things used to work in the past. But it's something to take into account.

Looking at this chart, I thought, OK, we should analyze when those regressions were introduced. We see that for 5.1, and that one, sorry, we still have around 100 open, and the same for 6.1, which is kind of expected because it's still a production release, it's not end of life, so we are still working on that release, same as 6.2. But looking at this chart I can say that half of them were introduced three or four major releases ago. Those regressions were introduced so long ago that it's now difficult to get someone to look at them. If a regression is recent, it's easier to fix it; but we are carrying all these regressions from the past, so it's difficult to get someone to fix them.
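When one of these regressions is confirmed, QA narrows down the offending change by bisecting archived daily builds. A minimal sketch of that idea, as a binary search over chronologically ordered builds (the `is_bad` callback stands in for "open the document in this build and check whether the bug reproduces"; names are illustrative, not the real tooling):

```python
# Binary-search a chronological list of builds for the first one that
# shows a regression. Assumes the bug, once introduced, stays present
# in every later build (the usual bisection assumption).

def first_bad_build(builds, is_bad):
    """Return the index of the first bad build, or None if none is bad."""
    if not builds or not is_bad(builds[-1]):
        return None  # bug never reproduces in this range
    lo, hi = 0, len(builds) - 1
    if is_bad(builds[lo]):
        return lo  # bug predates the whole range
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(builds[mid]):
            hi = mid  # bug already present: look earlier
        else:
            lo = mid  # still good: look later
    return hi
```

With roughly one build per day, a year of builds takes under ten manual checks to narrow to a single day's commit range, which is why this is so much faster than reading commits.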
Then there are the priority bugs from last year. This peak here matches when that refactor from Armin was done: many crashes and many regressions were found, and when he fixed them we got back to where we were, and now we are kind of stable here. Then here, with the high priority bugs, it's kind of steady as well, and that peak there is another refactor from Armin where many problems were introduced, and then it got back to normal. So that was it for Bugzilla.

Now, regarding automation. Two years ago I talked about this script we are using. Basically, we have a pool of documents, and what we do is import them in LibreOffice, then export them to different formats like DOC, DOCX and RTF. Then we open those exported documents in Word or in PowerPoint, and we create a PDF. So we have the reference PDF from Word and the PDF of the document round-tripped through LibreOffice and Word, and the tool works by finding differences between those PDFs. I use it to find regressions, and last year we found 62 bugs with this tool. The good news is that 70% of them are already fixed. It seems CIB and NISZ, or I don't know how you pronounce it, are using this tool as well. Right now in the TDF infrastructure we use it to test Writer, Calc and Impress; those are the formats we test nowadays. In Writer we use a pool of 4,000 files, in Calc 5,500 files, and in Impress 2,400 files.
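The round-trip workflow just described could be sketched roughly like this. This is a minimal sketch, not the real tool: only the `soffice --headless --convert-to` command line is standard LibreOffice; the diff function here compares raw rasterized page bytes to stay dependency-free, whereas a real pipeline would rasterize both PDFs (for example with pdftoppm) before comparing:

```python
import subprocess
from pathlib import Path

def convert(doc: Path, fmt: str, outdir: Path) -> None:
    """Convert a document using LibreOffice's standard headless CLI."""
    subprocess.run(
        ["soffice", "--headless", "--convert-to", fmt,
         "--outdir", str(outdir), str(doc)],
        check=True,
    )

def pixel_diff_ratio(page_a: bytes, page_b: bytes) -> float:
    """Fraction of bytes differing between two same-sized raster pages."""
    if len(page_a) != len(page_b):
        return 1.0  # different page geometry counts as fully different
    if not page_a:
        return 0.0
    differing = sum(1 for a, b in zip(page_a, page_b) if a != b)
    return differing / len(page_a)

def looks_like_regression(page_a: bytes, page_b: bytes,
                          threshold: float = 0.01) -> bool:
    """Flag a document whose rendering differs more than `threshold`."""
    return pixel_diff_ratio(page_a, page_b) > threshold
```

The threshold exists because font hinting and antialiasing produce small harmless differences; only pairs above it get looked at by a human.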
Those are random files downloaded from different bug trackers. Using a huge number of documents allows us to find real corner cases, really strange or skewed documents. For normal features we already have test cases, but using this tool allows us to find problems that otherwise we couldn't find. Here is an example: on the left you have the reference, and on the right you have the document exported from LibreOffice. You see that some characters are missing. This way we find right away that something is wrong here; then we just bisect it and say, OK, this commit produces this regression, so it's faster. Same here: this one is from LibreOffice, the background was white while it should be transparent. Or the same here with the bullets: the bullets should be one specific size and they were much smaller.

We also use some scripts to track what's going on in Bugzilla. As I said before, we get more than 5,000 reports every year. You just need to create an account, it's as easy as having an email and a password, and then you can edit everything you want; we don't restrict what users can do in Bugzilla. We have this philosophy that anyone can edit anything. The downside of that is that we need to check that things are done in there in the right way. In order to do that I use this script: we check 30 different things with it. Some of the checks I find more interesting: the script creates a report that lets me know that a regression or a crash was just fixed, so I get the list of those fixed regressions and crashes, and I just go there and verify they are fixed. Another interesting one: we are encouraging newcomers to confirm bugs, but the problem is that if they just confirm the bug, then we don't know if it's a regression or not.
So maybe it's a recent regression, but if no one checks that, the bug remains open forever. With these reports, I know that a specific bug was moved to NEW without confirming whether it's a regression or not, so I just double check it, and that speeds up the process. And things like that. Right now I run this script locally, because it's something I've been working on this year, so my idea for the future is that once it's more or less working as expected, to have it published somewhere, like on the wiki, or to have a website for it, so that any other contributor can read the report and also help with that. I gave a talk at the conference in Tirana about that; here's the link.

This is another thing this script is good for: when I see a newcomer doing things in Bugzilla, I get a notification, and I just send them an email welcoming them and giving this person some pointers and some interesting links. Last year I sent this email to 130 people. And I do the same for old contributors. People come and go, so if I see that someone was contributing actively in the past and then after half a year he or she is not contributing anymore, I just send an email saying, hey, we miss you, we would appreciate it if you could help us anytime in the future. I sent this email to 150 old contributors, and sometimes you get nice replies like, oh, I'm busy now, but whenever I get time again I'll contribute back. So I think people appreciate it, sometimes, not always, but sometimes.
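The "moved to NEW without a regression decision" check described above could look roughly like this. A minimal sketch: the bug dicts mimic the shape Bugzilla's REST API returns for bug queries, but the `notARegression` keyword and the function name are illustrative assumptions, not the real script:

```python
# Flag bugs that were set to NEW without anyone recording whether they
# are regressions. In real use the list would come from a Bugzilla REST
# query; here it is plain dicts so the logic is self-contained.

def new_without_regression_check(bugs):
    """Return ids of NEW bugs carrying neither a 'regression' nor a
    (hypothetical) 'notARegression' keyword."""
    flagged = []
    for bug in bugs:
        if bug["status"] != "NEW":
            continue  # UNCONFIRMED bugs are still being triaged
        keywords = set(bug.get("keywords", []))
        if not keywords & {"regression", "notARegression"}:
            flagged.append(bug["id"])
    return flagged
```

A daily report built from checks like this is what lets one person keep an eye on thousands of edits made by anyone.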
Then we have UI tests as well. One contributor did really impressive work here last year, around 200 patches in 2018. We also have Markus Mohrhard, who did this framework. Right now we have 136 tests in Writer, 15 in Impress, 5 in Math, none in Draw, so that's something to work on, and Calc is really the best covered, with 264. That makes 427 tests, so considering this framework was introduced a year and a half ago, that's really impressive work and progress.

This is something I did at the hackfest the other day. I found one dialog where there was a regression, so I thought, well, we have the 'make screenshot' thing that dumps all the dialogs as PNG files, so maybe we could just compare those screenshots between different builds. Then if they differ, like in this case, well, there might be false positives, but we could also find regressions much faster this way. That's something I'm doing now, so let's see if it works. It's a simple script, but I'm going to have it running in a VM: I pull master, build it with screenshots, then compare, and see if some useful information comes out of it.

Now, QA events. Normally for alpha1, beta1 and RC1 we organize a bug hunting session. So we had three bug hunting sessions for 6.1 and three others for 6.2. We normally have dedicated sessions where we encourage participants to test some new features or a specific part of the program. In 6.1 we focused on the Firebird migration and the image handling refactoring, and in 6.2 we did them for the KDE5 integration, the Notebookbar, and also the Firebird migration. And there was a hackfest in Taiwan, organized by Franklin, who was here just one hour ago, together with Cheng-Chia,
and Jeff (I'm not sure I pronounce their names right). This was an interesting event because there were around 70 students attending, and it was focused on QA. And here we have Muhammet, who ran two onsite bug hunting sessions in Ankara. Right.

Then, finally, the blog. In August we created a blog for QA. Right now we are using it for announcing pre-releases and also to publish monthly reports. In the past, when we had pre-releases, we normally sent an email to the QA list, but that's something we stopped doing. I thought, the more we advertise it, the more people are going to download it, and then we are going to have more people testing it. So I thought, OK, let's use the QA blog, which goes to the planet as well, and then I can also share the link on Telegram or whatever channel. So right now we announce all pre-releases on the blog, and what we do is point people to the pre-releases page, where they have the links to download the builds. On this chart you can see that yesterday I announced the final pre-release, RC3 of 6.2, which is going to be announced as final next week.
Here you have an average of 35 people visiting the pre-releases page, but then yesterday, as I announced this pre-release, we jumped to 70 people. So I think it's important to announce it, because then we get more people testing it.

And finally we have the monthly reports. On the one hand I do them with a script, because we have some charts that can be generated with a script, but let me also show you this example, from the last report, from December. I still have like two minutes, I mean, OK, maybe some questions, so: here we have the number of reports, triaged bugs, the people who fixed bugs, lists of critical bugs like crashes and the highest priority bugs, verified bugs, different information about QA. Then we have this chart, which is similar to the one I showed at the beginning of this presentation about regressions, so you get an estimation of what's going on in the project. This information is generated automatically, well, automatically with a script. But then we have this one here, which is quite interesting, because I think it's been a topic for quite some time that we wanted a place to gather all the information about what's going on in development, and in this case development and QA. So if you are not really following Git or what's going on in the repository, you can just come here and see what's going on in the project, like, I don't know, scan support, things that are going on in master. So it's kind of a human way of knowing what's going on. So yeah, that's it. Thank you for attending. Do you have any questions?

Yeah, but you mean if the bug is reopened? Yeah. So what we do now is: OK, someone reports a bug, then, if we need to, we request some information.
We just put it in NEEDINFO. Then, after six months, if we don't get any reply from the reporter, we send a reminder saying, OK, this bug is going to be moved to RESOLVED INSUFFICIENTDATA in a month. If we don't get a response by then, we just close it, so bugs don't remain open forever in UNCONFIRMED or whatever state.

Well, no more questions? Yeah, but then... yeah, that's because the bug was set to NEW but then nothing happened within a year, so we send a reminder. What the reminder is trying to do is get the reporter to retest that bug. But yeah, sometimes we don't get... that's what you mean, this long reminder that we send? Yeah, but the bug was confirmed in the past, so someone needs to retest it in order to close it. We cannot close it automatically, because then we would be losing information. I mean, if you retest those bugs and they are still reproducible in master, and we had just closed them automatically, then we would be closing bugs incorrectly, because they are still reproducible. That's why we ask for input from a third person, or the reporter, or whoever can retest it, just to make sure it's still reproducible or not. Yeah.
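The NEEDINFO housekeeping described in this answer could be sketched as a simple decision function. The day counts follow the talk (about six months to a reminder, one more month to closing); the function and field names are illustrative, not the real script's:

```python
from datetime import date, timedelta

REMIND_AFTER = timedelta(days=180)  # ~6 months sitting in NEEDINFO
CLOSE_AFTER = timedelta(days=210)   # reminder plus ~1 month of grace

def needinfo_action(last_change: date, today: date) -> str:
    """Decide what to do with a bug sitting in NEEDINFO.

    Returns 'wait' while the reporter still has time, 'send-reminder'
    once the bug has idled ~6 months, and 'close-insufficientdata'
    a month after the reminder went unanswered.
    """
    idle = today - last_change
    if idle >= CLOSE_AFTER:
        return "close-insufficientdata"
    if idle >= REMIND_AFTER:
        return "send-reminder"
    return "wait"
```

Note this only automates NEEDINFO bugs, where the reporter never answered; confirmed NEW bugs only ever get a reminder, never an automatic close, exactly for the reason given above.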