That's the one, very good. OK, it's 15 minutes in theory, so it shouldn't be too long. I like to do this every year, just to give a quick overview of the last 12 months of the three continuous testing things that I oversee: the OSS-Fuzz stuff, the crash testing, and then the Coverity Scan. OSS-Fuzz first. This one is thanks to Google. Google lets a number of open source projects use the infrastructure, resources, and framework that they use for fuzzing Chromium to fuzz their own projects, and we use that extensively: we continuously fuzz our import filters. Here's a little diagram of the process. The important bit is that the stuff is built on Google's side — we don't build it and send it to Google; Google just pulls it down daily and builds it with that script. We have 45 targets in vcl/workben — that's 45 individual file formats that we target — and on their side they build them with four different combinations, so there's 180 actual fuzzers running continuously, 24/7, 365 days a year, et cetera. Configuration-wise it's difficult for us, because we're designed to use dynamic libraries, and OSS-Fuzz is designed to not use dynamic libraries. But we do have an out: we have this disable-dynamic-loading feature that we'd intended for iOS, and we can use that instead to suit the fuzzing architecture. The individual fuzzers are quite large — they come in at around maybe 200 megs, each one a massive single binary. That doesn't help us with our speed, but they are functional. We also don't run with a configuration layer; we run with various hard-coded defaults suitable for fuzzing, and we avoid configuration. Linked there is where we keep little seed corpora for about 65 formats — the 45 that I mentioned earlier, and then a bunch of others that belong to third-party libraries that we also use for import. So they're good, useful input if anybody else wants to run their own fuzzing. OK, so OSS-Fuzz reports this year.
These are the last four years. You can see that in the first two years OSS-Fuzz was operating, it was generating more than one bug report a day — one and a half, nearly two bug reports a day. That's a huge amount of effort and a huge amount of work, and an awful lot of those were duplicates. Same for the following year. And then finally there's a big drop in 2019, where we managed to get ahead of the fuzzers and we were getting days in a row without finding any problems at all. This year there's been a slight uptick, because a new route into some old code that's full of bugs has been opened up for the fuzzer to find. Once the fuzzer has found its way in, it goes to town and finds every error that has existed there for the last decade. So yes, it's good to get the fuzzers into as many areas as possible. A significant amount of work there. So that's what OSS-Fuzz looks like. It shows you these various features, and the part highlighted there is that OSS-Fuzz minimizes the test case automatically — a large crasher is brought down to quite a small little document. Just an example of the kind of thing it finds: what often happens at this stage is that somebody makes a change, and the next day the fuzzers will start generating reports that something terrible has been discovered. Before that, it was telling us about stuff that had existed in our code base for decades. This is the kind of thing that happens: somebody makes a change, uses a smaller type than they should, and then under a sanitizer you get these kinds of reports. A lot of the work with OSS-Fuzz is not actually the crashers but the timeouts. OSS-Fuzz will stop logging timeouts once it finds one for a particular fuzzer, so there are various features and ways to work around that. You can limit the input size of the document that's being fuzzed, though there's a difficulty even there, in that some of these file formats have effectively practically infinite decompression support.
Something like a TIFF can give you one line of an image and then a number that says this many duplicate lines follow. So you can have a document that exhausts your memory by having one line plus a huge number saying duplicate that previous line this many times. There are various ways around that: you have to derive output limits from the input limits as well. So there's a whole set of infrastructure around doing that, as well as timeouts and out-of-memories — a similar problem there as well — and then various features. We use libjpeg-turbo because that gives us the ability to limit the amount of memory used by JPEGs; the classic libjpeg doesn't have that capability. And then again, with spreadsheets you can have infinite spreadsheets, so here we have some arbitrary limits that are applied. Am I still audible, anybody there? Yes. Good, good, just making sure — otherwise I'm a crazy guy talking to a laptop. So this is the current situation as of when I took this screenshot. This is all the open bugs we have from OSS-Fuzz: only timeouts left. After I took this slide I fixed four of those timeouts, and, like I said on the earlier slide, only one timeout is reported at a time, so two days later it had found two new timeouts to replace the four that are gone. The number of timeouts is fairly stable; there are always some documents timing out one way or another. So that's OSS-Fuzz, keeping the security reports to a minimum where possible and finding things before they actually get shipped. That's the first of them. The second of these continuous things is the crash testing. For the crash testing we have 116,200 documents, mostly pulled from our own Bugzilla, but also from Red Hat's Bugzilla and Mozilla's Bugzilla, some legacy documents from Novell's Bugzilla before it became private, and other places like that.
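The run-length decompression bomb described above, and the "derive an output limit from the input limit" defence, can be sketched like this. The record format and the 1024x expansion budget are invented for illustration; the real TIFF handling is more involved:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Toy run-length decoder: each record is one byte of pixel data followed
// by a 32-bit repeat count, so a 5-byte input can claim a ~4 GB output.
// Rather than trust the claimed counts, we derive an output budget from
// the input size (the 1024x factor here is an arbitrary illustrative cap).
bool decodeRle(const std::vector<uint8_t>& in, std::vector<uint8_t>& out)
{
    const size_t maxOut = in.size() * 1024; // output limit derived from input
    size_t i = 0;
    while (i + 5 <= in.size())
    {
        uint8_t value = in[i];
        uint32_t count = uint32_t(in[i + 1])
                       | (uint32_t(in[i + 2]) << 8)
                       | (uint32_t(in[i + 3]) << 16)
                       | (uint32_t(in[i + 4]) << 24);
        i += 5;
        if (out.size() + count > maxOut)
            return false; // claimed expansion exceeds the budget: reject
        out.insert(out.end(), count, value);
    }
    return true;
}
```

The same idea — budget the output as a function of the input — is what keeps a tiny hostile file from turning into an out-of-memory report instead of a useful fuzzing result.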
So these are not your ordinary documents. These tend to be documents that were reported as bugs because there's something unusual about them in the first place. We import them all, and then in many cases we re-export them all automatically. More recently, apart from exporting them, we also re-import what we've just exported, and see whether any of this crashes. If any of it crashes, we extract backtraces from the core dumps, collect all the information about how it crashed, and send a link to that data to the developer mailing list. This year we have new hardware. Last year we were taking about three days to do a full run over those 116,000 documents; these days we're getting next-day results, so I can set it off in the evening and be sure I'll have the results the next day, which is a huge, huge improvement — thanks to the infra team for that; Christian and the others apparently helped us there. So these are the stats we have for the last 12 months of crash testing. The number of builds at the moment — we have 73 or 76 builds this year — and you can see there's a huge cluster at the start, and then it tails down again. We have a kind of persistent collection of about 10 documents that are causing difficulties for import, and there are difficulties with footnotes — tables and footnotes in Writer, that's one of the problematic areas. Other issues over the year have been parallel-import issues in Calc; I believe they're all fixed now as well. The crash testing allows us to find those things. Often people will deliberately put an assert into the code that wasn't there before, in order to get the crash testing to trigger, so they can find real-world examples of corner cases that they want to investigate.
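The import/export/re-import cycle described above can be sketched roughly like this. All the function names are hypothetical stand-ins for the real scripts and binaries; a "crash" is simulated here by a stage returning no result:

```cpp
#include <optional>
#include <string>

// Hypothetical stand-in for importing a document; an empty optional
// simulates the importer crashing on that document.
std::optional<std::string> importDoc(const std::string& path)
{
    if (path.find("bad") != std::string::npos)
        return std::nullopt; // simulate a crash on import
    return "model:" + path;
}

// Hypothetical stand-in for re-exporting the imported document model.
std::optional<std::string> exportDoc(const std::string& doc)
{
    return doc + ":exported";
}

// One round of the crash-test cycle for a single document: import it,
// re-export it, then re-import what was just exported. Returns the stage
// that failed (where the real setup would collect a backtrace from the
// core dump), or an empty string if the full round trip passed.
std::string crashTestOne(const std::string& path)
{
    auto doc = importDoc(path);
    if (!doc)
        return "import";
    auto exported = exportDoc(*doc);
    if (!exported)
        return "export";
    if (!importDoc(*exported))
        return "reimport"; // the round trip produced a crasher
    return "";
}
```

The re-import stage is the interesting recent addition: it catches exporters that emit output our own importer then chokes on.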
So yeah, it's a useful resource, not just for finding regressions but also for actively searching for documents that can prove or disprove something you're trying to investigate. Exporting: we have had no export failures for a number of weeks now, so that's quite good. They tend to come in huge lumps like that — as you can see, we tend to have very few export failures, and then when we do have one, we tend to hit it a huge number of times. Right, so that's the second one, the crash testing. The last one then is Coverity. We've been using Coverity for quite a while. Unlike OSS-Fuzz, with Coverity you build it locally yourself with their tooling and upload that blob of information to their server, which does whatever magic analysis it does. There's the link to the project there. It's been a public project for the last year or two; before that it was private, so now you don't need to request any specific permissions from me to view the bugs, or any of the historic bugs — they're all just publicly available. The scans used to be just one language — you had to designate a project as either a C++ project or a Java project — but it has grown the ability to manage both of them now, so we get both sets of warnings in the one instance. We do have to drop down the version of C++ we use: the maximum Coverity supports is C++17, so we have to go back down to that version. We only scan our own source; we don't scan any of the source that we include from third parties — so no OpenSSL or NSS or Python or anything like that, just our own source. This is what it looks like: you get an uninitialized-member warning like that. A common enough one is where people add a member to a class and remember to initialize it in one of the constructors, but don't realize that there are two or three constructors, and they haven't initialized it in the other ones.
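The constructor pattern just described — the one behind Coverity's uninit_member warning — looks like this (the classes are invented for illustration):

```cpp
// Invented example of the bug Coverity's uninit_member checker reports:
// a member initialized in one constructor but forgotten in another.
class Shape
{
    int mnWidth;
    int mnHeight;
public:
    Shape() : mnWidth(0), mnHeight(0) {}             // fine: both initialized
    explicit Shape(int nWidth) : mnWidth(nWidth) {}  // mnHeight left uninitialized
    int area() const { return mnWidth * mnHeight; }  // may read an indeterminate value
};

// One fix is an in-class initializer, which covers every constructor,
// including any added later.
class FixedShape
{
    int mnWidth = 0;
    int mnHeight = 0;
public:
    FixedShape() = default;
    explicit FixedShape(int nWidth) : mnWidth(nWidth) {}
    int area() const { return mnWidth * mnHeight; }
};
```

The in-class initializer style also means the next person who adds a fourth constructor can't reintroduce the bug.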
That's a useful warning to get. Sometimes the warnings aren't so useful, so a lot of the work with Coverity is to identify that these things are deliberate. In case you have somewhere that you're deliberately not initializing something, this is the kind of markup you can use, and there are two types of markup. For the first one there, you use the name of the warning — in the previous slide you'll see the warning name highlighted, uninit_member — you use that same name inside the square brackets and put a comment afterwards, and that will automatically be marked in the user interface as intentional. The other one is where you use the same text but followed by a colon and FALSE, which simply marks it as a false positive. That's handy for me, because if you just mark it as such in the user interface directly, then when we in Red Hat run this stuff through our own separate Coverity instance, it won't be flagged as deliberate or a false positive there. So it saves me effort if it's done this way in the first place. It's also helpful if the code gets moved around, because Coverity tends to lose track of the original code it saw last time and doesn't realize it's the same issue — it thinks it's a new one. You can also use markup to say that something is intended to exit the program, so you can mark various paths as being fatal, and then Coverity won't argue about them and will base its future decisions on that logic. It can be convenient to mark that for the CppUnit assert stuff, which is marked like that, so when an assert fails it just exits the program as far as Coverity is concerned. Tainted data: it's probably not that relevant anymore, but there are ways to tell Coverity that data is trusted. If you know it's trusted — say it's internal data — then it won't complain that you have to be more careful about that data. Here's an example of something where you should check it.
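The two markup flavours described above look roughly like this. The bracket-comment convention is Coverity's code-annotation syntax as described in the talk; the surrounding struct and function are invented, and the exact event-tag names would need to match what the scanner actually reports:

```cpp
struct Cache
{
    char mBuffer[64];
    bool mbValid;

    // coverity[uninit_member] - mBuffer is deliberately left uninitialized,
    // it is filled lazily on first use; this marks the finding as intentional.
    Cache() : mbValid(false) {}
};

int lookup(const Cache& rCache)
{
    // coverity[uninit_use : FALSE] - mBuffer is only read when mbValid is set,
    // so this one is classified as a false positive.
    return rCache.mbValid ? rCache.mBuffer[0] : -1;
}
```

Because the classification lives in the source rather than in one server's UI, every Coverity instance that scans this code — upstream's and Red Hat's alike — picks up the same triage.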
We need to check whether something like that resource length is valid data or not — it may be beyond the length of the file — so it's probably not a good idea to allocate a ginormous block of memory based on the claim of a random file format. Yeah, so I think this might be our second-last slide. This is just the historic data of how many bugs we had initially and how many we have now. We started off somewhere not too far off 10,000, and we're now down to effectively zero. You can see the gap there, from when we didn't even have C++17 support in Coverity, so we were out of action for a few months. This year's stats: I waited deliberately until I got to a state, just there three days ago, of zero bugs again. So yes, this year, like last year, I'm able to present the final slide saying that we are currently at zero bugs as far as Coverity is concerned. And I presume, as you've seen from the product, we obviously have no bugs, so Coverity is telling us the full truth here. Thank you. Thank you. So just to let everyone know, we rescheduled Sarah's talk. Her talk will take place shortly after the two lightning talks that are coming up.