It seems to work. OK, this will be a reasonably short presentation, pretty much a snapshot of the continuous testing we've been doing over the last 12 months. It's been going on for more than 12 months, but I'm just going to give a 12-month snapshot of our Coverity figures, the crash testing trends, and our fuzzing status. Three areas, a few minutes on each one. So the first one is Coverity. We run Coverity about twice a week; we get about two analysis slots per week from Coverity, which allows us to do that. What I typically do is run the first one, find whatever bugs are new, fix those pretty much there and then, and then do another run back to back, so that hopefully we get a Coverity output that's pretty close to zero. So we use our two slots pretty much close together like that. The results are emailed to the list; you'll see the Coverity summary come to the development mailing list, so you can keep track of it there. And obviously you can go and look at the actual Coverity dashboard, the website itself. You don't have to be logged in as a developer or anything; it's been public access since we came down to small numbers. So anybody who wants read-only access to the Coverity bugs can have it without asking for any special permission or anything like that; they're fully public in that sense. Anybody who has been a developer with LibreOffice, I tend to invite them to be a participant in Coverity. So if anybody hasn't been invited, and has any interest in fixing any of their remaining Coverity bugs and wants write access to Coverity, just let me know and I'll add you to the admin list there as well. On my side, I build locally on my own hardware and then upload to Coverity, because that gives me a little more flexibility over what compiler and what configure flags I can use. It takes about 12 hours on the Coverity side to analyze the logs that are uploaded.
They have their own compiler, basically, that collects all the information, and then they do the analysis on their servers. This is last year's numbers. At this time last year we had got the figures down to an absolute zero: zero outstanding defects. Just to compare against this year's figures, you'll see the 7,102 thousand lines of code in the project that we analyzed. What also has to be pointed out is that to build LibreOffice there's a whole bunch of third-party libraries as well; they're excluded from the figures on this slide. That compares to 0.08 in 2014, and you can see that the average defect density for a project of our size is the figure at the bottom there, 0.65. So even at our worst in 2014 we were an order of magnitude ahead of the competition. That said, because LibreOffice is so large, I'd say a large amount of those statistics from that date effectively refer to us, because we weight the average with our own density. This year the numbers are actually back up to 0.02, but we have less code, so that's good too. Now, this chart isn't the most useful chart, but it points out that there are big spikes, big ups and big downs. It's generated by Coverity itself. I said earlier that the previous figures excluded all the third-party code we build to get LibreOffice built. So this time last year you can see there's a horizontal line at the very start; I've clipped it to the 12 months. The horizontal line at the very start is at zero defects. But when you use Coverity to chart your defects over time, it goes back and includes all of the third-party code that is excluded from the other views. So you can see that 1K line there.
So there's between 1,000 and 2,000 defects in the third-party code that we build as well, which is why that line is at zero for our own code but, including everybody else's, you get this one. The reason it goes up for the first two spikes is that we build some of it against system libraries, and so Coverity never sees those at all. And as time goes on, our baseline changes: if you're building on Fedora 23 and a new librevenge comes out, then you have to build the new librevenge, or more likely a new Firebird, and things like that, which means these figures and charts aren't particularly useful for looking at long-term trends. So for next year we'll make things a lot easier: we're going to build everything on a Docker image that always has the system libraries of the most recent baseline requirements installed. I'm hoping to see a very straight line, so that zero defects reported will effectively be zero defects on this line. What's interesting is the other spike, where it dips down to zero for a couple of months: that's where my compiler moves to GCC 6, which the previous release of Coverity can't support, so there's just that period where there are no bugs reported at all. It spikes up because Coverity 8.502 comes out, which has support for a whole bunch of extra warnings, which I'll describe in a minute, that didn't exist previously. So it's not that our defect density suddenly got worse; it's just that we discovered a whole pile of new stuff that Coverity warns about that it didn't previously. And then, every now and then, a new exception is added to some low-lying code, and Coverity begins reporting all the places that don't allow that exception type in their specification. That's a fairly regular occurrence: a small change to an exception specification can have a big propagation, but it's temporary.
So what has changed since this time last year: we have 16,000 fewer lines of code in the project, which is nice. We're now using the very latest version of Coverity, which works with GCC 6 where the previous Coverity didn't, and it has extra warnings for C++11. The new thing is that Coverity now knows much more about the C++11 features than it did before, so it knows about std::unique_ptr. It has started complaining about cases where it believes you have let something escape from a std::unique_ptr that will be destroyed before the thing you have let it escape into. So you get some extra warnings there. It also warns about illegal address computations on things that I believe it is simply wrong about, but they're all silenced now as well, and it has a confusing warning about a misused comma operator, which does refer to something quite useful and which I think has now been encoded into our own clang plugins. So a useful warning, but a very confusing error message. The most verbose new warning is missing move assignments. It has learned about C++11, and effectively all the remaining bugs behind the 0.02 defect density I described at the beginning are missing move assignments. There's a bunch of different ways to solve this: you can just ignore them, you can actually write the move assignment operators, or you can adjust the code slightly so that it doesn't warn, because the assignment isn't needed in the first place. There are a couple of additional new warnings for Java as well, because Coverity at this point allows us to build a project that mixes Java and C++ and warn across the two of them. It has a 1.7 JDK baseline now as well, so if we raise our baseline, this problem of a mixture of APIs goes away too. That's the Coverity stuff, pretty much under control.
The outstanding work is how to solve the last remaining missing move assignments. They're only logged at the low-priority, Coverity-recommendation warning level; they're not very serious. We could even silence a lot of them in one fell swoop without any major issues. It's just that with Coverity, as you've seen, when code changes the same old issues arise again and again, and it doesn't recognise that the previous time you dismissed an issue you still want it dismissed. That's Coverity. The crash testing was mentioned briefly; Thorsten mentioned some of it. I'll show you what crash testing looks like over the last few months. There are 118 different formats supported for load in crash testing, including the ancient StarOffice file formats that we got rid of with the binary filter. But now we have libstaroffice, so they're actually relevant again: all the StarOffice formats in that column can be investigated for any libstaroffice crashes. So we check to see if anything crashes, and we also check to see if anything asserts, because we enable asserts. So any of the crashes you see may not actually be significant in the real world; they may just be an assert that goes on to be harmless in that particular case. We also save a bunch of them back out, to these 12 formats here. So the export side covers far fewer formats than the import side, but they're the serious formats to be interested in. Again, this is run once or twice a week and takes about two days to complete. The number of documents at this stage is up to 93,000, which is up 10,000 on last year. About 90% of the documents come from various bugzillas, and we have a script to download from them, which probably means that since last year there have been 10,000 bugs logged with new documents attached to them across our bugzilla, the Mozilla bugzilla, the Red Hat one, and one or two others.
The actual pile of documents we test against, which has come from the bugzillas, we don't refresh the whole time. Running the script to update the download, just adding to what's already on disk, takes about 12 to 13 hours, so it's a very, very large process. Once every couple of months, when there have been a couple of consistent weeks in a row with no new crashes reported, then perhaps we update it. So this is what this year's chart looks like. This is basically a presentation where I try to show you a straight line and say how great it is, but this one shows how the whole thing works as a process: nothing happens, nothing happens, nothing happens, then a new bug is introduced, you find it straight away, you fix it straight away. Nothing happens for months, then there's a new GSoC feature, you notice straight away that there's a crasher there, and you fix it straight away. I think that's as ideal a chart as you can get: it finds the problems, you fix the problems, and it goes straight up and straight down. That's the 12 months since this time last year: only two major events and a couple of minor ones. For the export failures, I went back and checked the reasons behind them. One was the Netscape plugin API removal, which had a dramatic effect on the export of PowerPoint documents, or some kind of presentation documents anyway: whatever property was removed caused great grief when you exported something that had previously had a plugin embedded in it. So that was a major one. And then I think it was the VCL event scheduler; I'm not entirely sure, but there was one spike that went away, and there were two or three commits in that period that might explain why it came and why it went away again. Either way, the good news is it got fixed pretty fast. So that's the crash testing. To give the results for this week: it's about 40 Coverity warnings,
those are the missing move assignment warnings, about zero to one import failures, and about zero to one export failures. That's pretty much par for the course. And then one last update on where we are with fuzzing: basically to say that we are fuzzing, and what it looks like from my side. The tool I'm using at the moment is American Fuzzy Lop; it's really good, and I'm a great fan of it. We have a small, stripped-down custom file format loader called fftester that basically has all of the slower configuration-related paths stripped out, and it supports American Fuzzy Lop's persistent fork-server mode so it can restart pretty quickly, and I can parse multiple documents in a tight loop; it works pretty well for that. There's the afl-cmin corpus minimizer that you run over your collection of files, and it tells you the minimum set of those files that exercises the majority of code paths. So I can run it over, say, the .doc file format and come up with the minimum set of 1,000 or 2,000 documents that exercises the most code paths. At that point you want the smallest files you can get, so you throw out the big ones, seed the process with whatever's left over, turn it on, let it run on its own, and come back to it. There's a chart in a bit that shows what that looks like. So again, this time last year I was looking at the Lotus Word Pro file format, and you can see that at the very start. As soon as you turn on the process... well, it can take you a couple of days to get started, to get everything into the right shape: set up your filters, find your minimum set of documents. And then once you turn it on, you find results basically in the first five or six minutes; the majority of them cluster there. So you can see it starts off with a great big excitement of fixes.
Every time it finds something, I work on a 24-hour basis: let it run, check it in the morning, shut it down, fix whatever bugs it found, start it up again. So it's continuously stopping and restarting as it goes along. You get a period where nothing happens for ages, then another tight cluster, nothing happens, tight cluster, and so on. I shut that one down around July this year, when it hadn't found anything for more than two months, and turned it on again for the RTF filter. You get much less dramatic spikes, but the same pattern again: a lot of stuff at the beginning, long pauses, and so on. So that's just to let you know that the fuzzing is going on all the time in the background; it's going on right now, and it will keep going on that file format until I get another two or three months where nothing happens, and then we'll move on to trying it on other formats that I'm interested in and let it chug away. It continues in the background the whole time. That's all I've got. If anybody has any questions on the process, now is a good time. So, questions? Okay, thank you.