All right, let's get on with it. There are three brief topics I want to cover here. First, the Coverity stuff. I used to do this talk every year just to update the numbers; now it's every couple of years, so I won't bore you too much with the long history. For the Coverity stuff, I build LibreOffice with the static code analysis on my side, upload the result to the Coverity Scan service, and they analyze it on their side and give the numbers back. It used to be just C++; now it does C++ and Java.

Here are this year's numbers. Italo tried to steal my thunder on this earlier. It took about a year and a half to get the numbers back down again from their peak, so we're back to a zero-defect situation as of the 24th of September, after quite a while. This is the historical data for the number of defects we had; the next slide makes for somewhat better viewing. Low is good, zero is good. We made an attempt to upgrade to the first 2022 version of Coverity at one point, but rolled it back because there was too much noise again. Then they deprecated the version of Coverity we had been using, so we were forced to move to the second 2022 release, which had an even higher peak of noise. It's been a difficult process bringing it back down to zero again.

There were new categories of warnings that Coverity found which are useful to us. It knows about std::move: it knows that after you std::move something, what's left behind is effectively uninitialized, or zeroed, and needs to be checked, so it does that check now (there's a minimal sketch of that pattern below). It kind of knows about std::unique_ptr; it's not great at it, but it does find some issues, while often bringing in more noise again.

For the rest of the defects, it helps me if the false positives are annotated inline, because at Red Hat we run a second Coverity instance that is not connected to the public one at all. If the annotations are inline, I don't have to dismiss them twice; using the web interface to say something is not a bug, or is a false positive, helps in that one instance but has no impact on the other. That shouldn't really be a big problem for you; it's just why we do it that way. These are the Coverity annotation patterns we use: false positive, intentional, and a new one meaning don't even record it in the instance at all, which is rarely used in our codebase. So after a long process the Coverity numbers are back down to zero again, which is great, and we're on the latest version of Coverity.

Then there's OSS-Fuzz, which Google provides. What happens there is that LibreOffice is built remotely on Google's side. We have a whole bunch of fuzzers, each importing a different file format, and it finds crashes, records them, and categorizes them into security-relevant crashes and the rest. We have 48 targets in that directory, from GIF to DOC to ODT. They're built in four different configurations, combining sanitizers (address sanitizer, undefined behavior sanitizer, memory sanitizer) with the different fuzzing engines that run them; the exact configuration doesn't really matter. It's very similar to the iOS case, just for reference. And this is a chart of the reports we've had over the last couple of years: a huge number in the first couple of years, far fewer after that, but we add more fuzzers as time goes by, and that finds things.
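To make the use-after-move point above concrete, here's a minimal C++ sketch of the pattern; it's illustrative code, not from the LibreOffice tree, and the exact Coverity event tag for this checker isn't named here.

    #include <iostream>
    #include <string>
    #include <utility>

    static std::string consume(std::string s)
    {
        return s + "\n";
    }

    int main()
    {
        std::string name = "document.odt";
        std::string out = consume(std::move(name));

        // A use-after-move checker flags this read: 'name' was moved
        // from above, so its value is valid but unspecified until it is
        // reset or reassigned. Coverity accepts inline suppressions of
        // the form
        //     // coverity[<event tag>]
        // on the preceding line; annotating false positives this way
        // carries over to a second, disconnected Coverity instance,
        // unlike triage done through the web interface.
        std::cout << name.size() << '\n';

        std::cout << out;
        return 0;
    }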
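For reference on the fuzzing side, this is roughly the shape of one of those import fuzz targets. The LLVMFuzzerTestOneInput entry point is the standard libFuzzer/OSS-Fuzz convention; the toy importDocument function is only a stand-in for a real LibreOffice filter.

    #include <cstddef>
    #include <cstdint>

    // Stand-in "import filter": insists on a 4-byte magic, then walks
    // the payload. A real target hands the bytes to one LibreOffice
    // filter (GIF, DOC, ODT, ...) instead.
    static bool importDocument(const uint8_t* data, size_t size)
    {
        if (size < 4 || data[0] != 'O' || data[1] != 'D'
                     || data[2] != 'F' || data[3] != '!')
            return false;
        size_t total = 0;
        for (size_t i = 4; i < size; ++i)
            total += data[i];
        return total != 0;
    }

    // Standard libFuzzer/OSS-Fuzz entry point: the engine calls this
    // repeatedly with mutated inputs, and the sanitizers (ASan, UBSan,
    // MSan) report the actual defects.
    extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size)
    {
        importDocument(data, size);
        return 0;
    }

Built with clang++ -fsanitize=fuzzer,address, a target like this runs standalone and mutates its own inputs.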
And then of course OSS-Fuzz itself gets smarter: it begins combining things in different ways and develops new features it didn't have before, so what's been growing recently is the number of reports per year, not the backlog of unfixed items. This is just a sample bug. What's difficult are the timeouts: documents that don't complete in the amount of time available. It reports one of them for each fuzzer, and if you fix that one, another will inevitably be found, so there's a constant churn of timeouts. It's great when you find an actual infinite loop, because that's a real bug; it's really painful when it's this constant sequence of merely slow documents. So in a lot of cases you end up just cutting out the slow paths. Ideally everything would always be fast, but nobody has helped out in improving the common cases that are slow, so the edge cases often just get chopped. Out-of-memory reports are a similar problem and have to be dealt with in different ways.

What's not covered by fuzzing at the moment, and this is the real gap, is that we're only fuzzing importing documents. We're not fuzzing exporting documents, and maybe more crucially, we're not fuzzing the export sub-case of printing to PDF. So you can still crash LibreOffice; there are probably a lot of crashes we haven't detected this way but could, just by importing random documents and exporting them to PDF. That's the second thread.

The third thread is the crash testing, where we load documents on our own servers again, on one particular sponsored machine, export them to a whole bunch of formats, re-import those, and watch for crashes (there's a rough sketch of that round trip at the end of this section). We do this pretty regularly, to pick up on recent crashes that people have introduced by refactoring things or whatnot. We get the documents from Bugzilla: one tool scrapes them from our Bugzilla and a whole bunch of other people's Bugzillas, and then there's a newer tool, from Cisco recently, that scrapes them from various forums, Microsoft forums or OpenDocument forums or whatnot. There are some other collections the documents come from as well, and I'd like to point out a donation from Forcepoint of a set of documents that crash. That brings us up now to about a quarter of a million documents that we're processing. It takes about four days on this pretty good hardware to import them all, export them in the different formats, and, when they crash, collect the backtraces, et cetera, et cetera. The actual build time is, I don't know, minutes on a machine like this. Markus helped set up all this infrastructure back in the day.

So that's this year's chart of crash testing. At some point here Cisco's documents were added; it's difficult to tell exactly where on the chart, but people smarter than me might be able to spot it. So that's the crash testing, and I'd like to say thanks: thanks to those who provide us with the hardware, thanks to Cisco for adding the extra set of documents that kept me busy over the last couple of months trying to fix things, and thanks to all the people who have actually fixed all these issues. The companies listed here are the ones behind the fixes. Thank you.
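As a rough sketch of what one crash-testing round trip looks like, here's a minimal driver. It assumes only the stock soffice --headless --convert-to invocation; the real scripts additionally collect backtraces, fan work out across many more formats, and categorize the results.

    #include <cstdlib>
    #include <filesystem>
    #include <iostream>
    #include <string>

    namespace fs = std::filesystem;

    int main(int argc, char** argv)
    {
        if (argc < 2)
        {
            std::cerr << "usage: roundtrip <directory>\n";
            return 1;
        }
        // --convert-to imports the file and then exports it, so both
        // halves of the round trip can crash. A non-zero exit status
        // marks a document as a candidate for investigation.
        const std::string formats[] = { "pdf", "docx", "odt" };
        for (const auto& entry : fs::directory_iterator(argv[1]))
        {
            if (!entry.is_regular_file())
                continue;
            for (const auto& fmt : formats)
            {
                std::string cmd = "soffice --headless --convert-to " + fmt
                                  + " --outdir /tmp/out \""
                                  + entry.path().string() + "\"";
                if (std::system(cmd.c_str()) != 0)
                    std::cout << "possible crash: " << entry.path()
                              << " -> " << fmt << "\n";
            }
        }
        return 0;
    }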