So what am I going to talk about? This is just a heads-up, so you can leave now if it doesn't interest you in any way. I have a strong motivation for finding vulnerabilities, because it's fun, and sometimes it even pays the bills. I think it's better if you can find them yourself, and finding them yourself normally means doing some fuzzing. I'll go through a series of problems that I encountered when I wanted to fuzz, in this case, Adobe Reader. Hopefully with good solutions, and if they're not good, somebody else can give me better solutions, and then it's basically a step-by-step guide on how to find vulnerabilities yourself. So if we look at fuzzing as a whole, there are some things you must have when you want to run a fuzzing campaign. You need some sort of input generation to produce something that the parser accepts — the input you generate has to conform to whatever format the parser expects. You're a bit far away — better, better — now, fantastic. Okay. You need an input delivery mechanism, where you can feed the generated input to your target, and you want to scale your target as much as you can, because more scaling means more test cases, and so on. You need fault detection, so you can see if your generated input has caused a crash, so you can save the input and similar things, and crash triage, of course. So when we look at the state of the art in fuzzing — and that of course means something like AFL — it's fantastic, but it also raises the bar for our work, and the fact is that it gets complicated if you want to do things, let's say, as well as possible.
So one of the problems — not problems as such, but one of the issues — is that modern fuzzing has evolved into something extremely complex. In the later years there's been almost like a revolution in how well fuzzers perform, and if you're not doing it the right way, maybe it's shit, but it doesn't have to be like that, because bugs are so plentiful that there's still room for doing it just good enough, right? You don't have to do input tracing and hit tracing and function coverage and feedback-driven fuzzing and all that stuff, which is amazing, but also complicated. And this is what I refer to when I say bug hunting for normal people, because you don't have to be God's gift to computers or Alan Turing to be able to write fuzzers that find bugs in actual contemporary software. You can just start simple and then just sort of make it up as you go and see where it takes you. So for Adobe Reader, the first problem you'll encounter is that once, back in the day, it was enough to just download random PDF files off the internet, do some random mutations on them, do some bitflipping and whatever, insert weird strings, and that would be enough to find exploitable vulnerabilities where you smash a function pointer or something else, which could be sort of trivial to exploit. Excuse me. This is no longer the case. If you look at Check Point Research, they made a pretty good post called, I think it was "50 CVEs in 50 Days" or something, where they fuzzed a lot of the Adobe Reader binary content parsers using, I think it was WinAFL. Excellent article, but that sort of also took away, well, it took away at least 50 of the bugs in Adobe Reader, which is bad for the rest of us who want to find bugs in it. So the solution for that problem is to take a step back and find out what else you can attack, which is not the binary parsers that Check Point has fuzzed to death.
And I figured maybe I can just find un-bitflippable functionality in the software, meaning functionality where I can't just bitflip my way to a memory corruption and go, ooh, money, right? So instead of looking at that, you can look at the PDF spec. It's very long and not super interesting, but it gives you an idea of what the format actually supports, and in this case it's JavaScript. It's been supporting JavaScript since around Y2K, and you cannot just bitflip JavaScript and find good bugs. You can find some, but barely. The amount of compute cycles you need to burn just doing blind mutation on JavaScript and hoping to find bugs is through the roof. I mean, it is possible, I've done it, but it's also a waste of cycles and pretty stupid. So when we look at fuzzing, a very important part of it is the input generation, right? And I only have a limited idea of how to make smart, evil input. It's pretty easy when it's binary formats, because you can just flip bits and brute-force your way to a solution, right? But in a textual context, it's not very easy to make smart, evil input. But some smart people released some free tools with a proven track record: Dharma is from Mozilla, Domato is from one of the guys at Google, and Radamsa is from some Finnish guy. For this one, I settled on Dharma, which is a generation-based context-free grammar fuzzer. You can also use Domato if you're so inclined, but I had used Dharma before for something else, so it was natural to settle on this. And what is a generation-based context-free grammar fuzzer? It sounds complicated, but it's actually not, because this is a grammar file I made for Dharma. Just a sample grammar. And as you can see here, I instruct it to generate output. Output must consist of either first one or second one. It's going to pick one at random.
First one, when we look at this, consists of the string "first one" and then something which will be expanded, in this case a parameter and a parameter and a parameter. So maybe with two parameters, or with one, or with none. And second one is the same, right? And then parameter pulls from the standard library inside Dharma, where it can put in a boolean, a digit, integers, all that stuff. So you don't need to come up with what could be bad data, right? The framework will handle that for you. So you basically just describe the constraints for your output in the grammar file, and Dharma will generate random but well-formed input, provided you make a well-formed grammar file, right? So this is what the grammar will output. We can see that in the first iteration it just picked the thing called first one with no parameters, and second one, la la la. Subsequent executions will just keep generating well-formed but malformed data, if that makes sense. So malicious in some sense, right? Cool thing about Dharma: it's written in Python. The standard library of edge cases is fairly extensive, so it covers integer overflows and all the edge-case integers where, if you increase by one, it goes negative, and all that stuff. It's easy to extend the grammar as well, so you can just put in something like this, where you define "overflow" and then repeat — it's like a macro which will expand the string ABCD, blah, blah, blah, into a long string. And, I mean, it's really easy to modify the generators, which means that you can put in your own secret sauce, so even if you happen to write a grammar file which is exactly identical to the guy next to you, if you extended some of the badness library, you have a chance of finding bugs that he won't find, right? You can steal the bad-string library from something like boofuzz, and I think that one stole most of it from SPIKE back in the day.
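To make the idea concrete, here's a minimal sketch of what a generation-based context-free grammar fuzzer does. This is not Dharma's actual grammar file format — the rule names and the little "badness library" below are made up for illustration — but the principle is the same: pick alternatives at random and recursively expand them:

```python
import random

# Toy context-free grammar: each rule is a list of alternatives;
# "+name+" tokens recurse into other rules.
GRAMMAR = {
    "output": [
        "first_one(+parameter+)",
        "second_one(+parameter+, +parameter+)",
    ],
    # A tiny "badness library" of edge-case values, in the spirit of
    # Dharma's standard library -- easy to extend with your own secret sauce.
    "parameter": [
        "true", "false", "0", "-1",
        "2147483647", "4294967295",          # signed/unsigned rollover edges
        '"' + "A" * 1024 + '"',              # long-string "overflow" macro
    ],
}

def generate(rule="output"):
    """Expand a rule into a concrete string by picking alternatives at random."""
    choice = random.choice(GRAMMAR[rule])
    while "+" in choice:
        start = choice.index("+")
        end = choice.index("+", start + 1)
        name = choice[start + 1:end]
        choice = choice[:start] + generate(name) + choice[end + 1:]
    return choice

for _ in range(3):
    print(generate())
```

Every run gives well-formed but nonsense calls like `first_one(-1)` — exactly the "well-formed but malformed" output described above.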
Well, boofuzz is the continuation of Sulley, which stole the strings from SPIKE. Anyway, it's a good idea to ensure that the generated data is valid and to monitor the output for syntax errors, because as you can see at the bottom here, I've generated some data which is not valid, which means that the parser will bail out — you cannot catch syntax errors. You can catch, let's say, thrown exceptions and other things, but you cannot catch syntax errors, so you have to make sure that the input is syntactically valid so that it's not rejected by the initial parsing mechanism, which does some magic before it's executed, right? The next problem you're going to encounter is that Dharma and Domato don't, in themselves, know anything about PDF-specific things, so you have to make all the PDF stuff yourself, right? And if you say, well, I'll just use the standard grammar and fuzz the JavaScript engine — well, that's a waste of time in the Adobe Reader case, because it's SpiderMonkey, the JavaScript engine from Mozilla, and it has been since, maybe it started with Reader DC, I don't know, but it's been like that for quite a while. It's, of course, a heavily modified SpiderMonkey, because all the PDF-specific JavaScript things are bolted on by Adobe afterwards. And maybe if you follow the SpiderMonkey bug tracker, you can find that when they fix a bug, it's most likely not going to be fixed in the version that Adobe has imported, at least not instantly, so there's quite a bit of latency there, where you can score some quick wins, maybe. So it's just a matter of finding something PDF-related in JavaScript from the JavaScript reference, writing a grammar file, and away you go. If you look at the PDF JavaScript reference, it's been updated. It was updated in 2021, after maybe 10 years of nothing.
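A full JavaScript parser is overkill for a quick sanity filter on your generator's output; even a crude check for balanced delimiters outside string literals will catch most grammar mistakes before the target's parser rejects a whole batch. This is just a toy stand-in for a real syntax check, not anything Dharma ships:

```python
def probably_valid(js: str) -> bool:
    """Crude pre-flight check: balanced (), [], {} outside string literals.
    Catches gross grammar bugs cheaply; does not handle backslash escapes."""
    pairs = {")": "(", "]": "[", "}": "{"}
    stack, quote = [], None
    for ch in js:
        if quote:                      # inside a string literal
            if ch == quote:
                quote = None
        elif ch in "'\"":
            quote = ch
        elif ch in "([{":
            stack.append(ch)
        elif ch in ")]}":
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack and quote is None
```

Anything that fails this gate would only waste a fuzzing iteration on a syntax error the target cannot even catch.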
It's actually pretty well sorted into the API and its properties and methods, so you can focus on complex or interactive parts, and you can actually just browse the table of contents, as I do here, and see that the annotation object has a bunch of properties, and then you go a bit down and you see that it also has methods, so maybe that's an easy target to pick. Well, I know it is. Then we come to the lies. The computer is lying, as usual — or rather the documentation is lying — because if you dump the search object from within the JavaScript console in Adobe Reader, you will see that, okay, I dumped the search object, and when you compare it to the actual documentation, the search methods according to the docs are addIndex, getIndex, query, and removeIndex, but when you dump the object itself live, you will see that there's a method called getNthIndex, and there's also an undocumented property. Why is that good? Well, it's good because it's either really old code which is supposed to go away, no longer maintained and no longer kept in the table of contents, or it's new stuff — and since it's undocumented, it may be a good area to find bugs in, since nobody knows it exists. I've definitely found bugs in Adobe Reader where the documentation didn't mention anything, there were no references to some API it had, but it still worked. So if we go back to the Dharma stuff and look at the annotation properties — those are here, the alignment, attachment, the AP, blah, blah, blah, all that stuff — you just define a grammar where you say the annotation properties can be one of these, blah, blah, blah. I would say this is one of the easiest ways of finding bugs, because you just have to sit and transcribe the documentation from the JavaScript reference into a grammar file that Dharma can understand, and then it's going to start generating malicious input that you can feed, and then bugs are going to rain.
So it's pretty easy, and it's pretty fun. If you look at what the output from Dharma is for the annotation properties, it's going to look like this for the set properties. It's going to set a bunch of weird properties, do something with state models, maybe change a logo, and then call a function, and then something bad is going to happen eventually. So this output is well-formed, but nonsense. The next problem you're going to encounter is that Adobe Reader is no longer just something that parses and does stuff. Since Reader DC, it's split up with the standard renderer-and-broker model, so the renderer is sandboxed and asks the broker process if it's allowed to do such and such, but it also means that there are multiple processes that talk IPC, and PDFs render in tabs, and so on and so on. So you can't just start it twice and say, you run this input and you run that input, and if it crashes, save it. A pretty easy solution is to fuzz on OSX instead, because that one doesn't have a problem with spawning 20 simultaneous Adobe Readers, because, well, POSIX I guess — or just use virtual machines for isolation and scaling. It's super primitive, but it's also cheap. If you want to make it so that you can run Adobe Reader multiple times in the same Windows instance, I mean, you need to reverse it and find someone who knows what they're doing to sort of reverse away that cannot-run-twice logic. It's not worth it, because hardware is cheap, and if you find a solution to make it run twice, it's gonna get updated and you'll need to re-apply that solution with different offsets or whatever. It's a nightmare and it's not worth it. The next problem: when you start an application, there's a bunch of work being done in loading libraries and other boring disk I/O. So I was thinking maybe you can just rip out the JavaScript engine and fuzz that one.
Turns out it's extremely complicated, and it's not like there's a javascript.dll which exports a function called parseJavaScript, sadly. So, extremely complicated, and I wanted something simple, not a six-month reversing project. So the solution is, well, no solution right now. What I ended up with is to start the application only once and then instrument the GUI to avoid restarting the application. The good part about that is that you have a live PDF document with a DOM that you can interact with. So all of a sudden it dawned on me that with this, you're not fuzzing the JavaScript parsing, you're actually fuzzing the DOM itself, which is a lot more useful, because the DOM interactions are what's going to lead to use-after-frees and all that stuff. And then, I mean, yes, it is cheesy, but it works well. Just do periodic housekeeping: restart — so fire off a thousand test cases, restart the application, and done, right? So how do you get the console? Well, you make your own console.pdf. Didier Stevens has made a few PDF tools, including one which can make a PDF that contains JavaScript that will be executed when you open it. And then, as the script, you just put in console.show(). So you can pop an interactive JavaScript console from the regular Reader, and not just the professional version, which you normally need for JavaScript development. And this also works for Foxit. In the case of Adobe Reader, there's this AcroHelp object you can pass as an argument — I think they stopped maintaining it, because it works for some things, and then it also definitely doesn't work for a lot of things. So the next problem is that you have the input and you know where to put it, so you can get rid of the startup overhead. But how do you get it there? Do you sit and copy-paste the input you generated? No, you do not. You just use pywinauto and pyperclip.
Doing it like this, where you generate the input externally, has a distinct advantage over in-DOM fuzzing, because you have a precise log of all the DOM manipulations you attempt to perform. I made another fuzzer where I just generate a bunch of JavaScript, stick it in, and execute it where it does DOM manipulations. And the problem is that when something goes wrong, you don't really know what went wrong, because logging is hard. So I made a really — this is the worst thing I've ever done — I think of it as a log function: I try to read a file, and I get permission denied, but it still logs the input. So let's say my input is "blah": then I'm gonna try to read a file called "blah", and it's gonna fail, but it's still gonna show up in something like Procmon, so I can just do API monitoring and see that it gets access denied trying to read this file, but I still have the log of the stuff that I fed into it. So that's a way of doing fuzz logging from inside the DOM, but it's a nightmare and it's definitely trash. And when you do the DOM manipulations from inside the DOM, it's also easy to end up in a situation where your fuzzer, which lives in the DOM, does a manipulation that destroys your fuzzer, right? Because then all of a sudden rand is no longer a function, because you've destroyed it with your fuzzer, so not very good. So for the GUI automation: pywinauto, fantastic. You can write some Python code, you can attach to binaries, you can find the dialogs you need and just make it click all the buttons and do all the things. If you spawn the target with pywinauto, it prevents easy debugging, but the solution is just to not spawn it with pywinauto — you can just attach afterwards with pywinauto's Application.connect, right? The next problem you're gonna have — and this is something you can only find once you've found your first crash, right?
Because if everything is working as intended and the program is not crashing, then you'll never encounter these weird error dialogs, right? But once you've found your first crash and your fuzzer says, well, continue, it's just gonna stall, so you need some handling of error dialogs as well. In the Foxit Reader case, that's fun, because if you feed it a bunch of script, it's gonna pop up this "enter no more than 256 characters" dialog, but you can still keep feeding script to the console and it's just gonna get queued, and then when you click OK, the next 200 lines of script are gonna get executed, so that makes it really weird and something you need to handle specifically. I just patched out the call to the message box in this case, because otherwise it was impossible to find out what part of the script actually caused the crashes in Foxit. Yes, next problem: pywinauto Windows GUI automation stops working after a while. I don't know if some handles are leaking or something. Just reboot the machine and it's solved. I mean, rebooting wastes 15 seconds of your fuzzing runtime; trying to investigate why it fails, well, you can waste days on that, right? And maybe you can't even do anything about it. If you use the 64-bit pywinauto for 32-bit apps, the problems arise more often, but you no longer need to maintain two different Python installations, because the recent Acrobat is now 64-bit, finally. Next problem with Acrobat: the JavaScript console is the worst JavaScript console you will ever try in your life. There's a maximum of 3200 lines of script, but the solution is to just generate your input in chunks and feed it a thousand lines at a time. And this doubles as an easy fault indicator as well.
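The chunked feeding could be sketched like this — the chunk size and the `feed` callable are illustrative stand-ins for the real clipboard-paste-into-console step, which is assumed to raise when the app is gone:

```python
def chunk_lines(script_lines, max_lines=1000):
    """Split the generated script into chunks small enough for the console,
    which chokes somewhere above 3200 lines."""
    for i in range(0, len(script_lines), max_lines):
        yield script_lines[i:i + max_lines]

def feed_all(chunks, feed):
    """Feed chunks sequentially; a failing feed doubles as the fault
    indicator. Returns (chunks fed OK, the chunk that failed or None)."""
    fed = []
    for chunk in chunks:
        try:
            feed(chunk)                # assumed to raise if the app crashed
        except Exception:
            return fed, chunk          # save everything fed plus the culprit
        fed.append(chunk)
    return fed, None
```

The returned culprit chunk plus the chunks before it are exactly what you later concatenate into a reproducer.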
And what I mean by that is that if I cannot feed my chunk of data through the dialog, then something bad has happened, and I can act accordingly and know that, well, something bad happened. So your next problem, once you start finding crashes: well, PyDbg hasn't worked for quite a while, and WinDbg is a pain in the ass. A lot of things are. Well, running Adobe Reader under a debugger while you're fuzzing it is also a complete waste of time. And my solution to this was: just don't do it. Because, when you look at the way it works, when I do the sequential input feeding, I feed it data and then eventually the app crashes. I'm no longer able to feed the data, so I know it has crashed, and then I can just save that data, right? So there's no need to do advanced fault detection at runtime using a debugger. If you're so inclined, you can also catch faults with minidumps, but in this case it's not needed, because if you cannot feed the script, you know that something is up, and then you have a solution, right? Next problem — something you're going to find out when you want to combine the sequence of data into something that can reproduce the crash — is the disk I/O. Doing it in the shell is super slow with the exec overhead: as you can see, it takes like seven and a half seconds to concatenate a few files; do it in Python, and it takes 0.06 seconds instead. In this particular case, an easy solution would also just be to save the data that you generate with decent filenames. Then you won't need this terrible batch loop, and you can just gather all the files into one. But this is something you're only going to find out when you actually start working on it and find out where the hangups are in your pipeline. Next problem: you have a crash — what is it? Can I sell it? Can I use it? What can I do with it? That's super hard and complicated and takes time and actual work, but you can use SkyLined's BugId for automatic triage of the faults.
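The speed-up comes from doing the concatenation in one process with one output handle, instead of exec-ing a shell command per file. A sketch, with a hypothetical `queue_*.js` naming scheme standing in for your "decent filenames":

```python
import glob

def concat(pattern="queue_*.js", out="repro.js"):
    """Concatenate all queued input files into one reproducer, in filename
    order. One process, one output handle -- no per-file exec overhead."""
    with open(out, "wb") as dst:
        for name in sorted(glob.glob(pattern)):
            with open(name, "rb") as src:
                dst.write(src.read())
```

Sorting on the filename is what preserves the feed order, so zero-pad the sequence numbers when you save the chunks.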
If anybody remembers, there was this WinDbg plug-in called !exploitable. I'd say this is !exploitable on steroids times a lot, because this is actually very useful. It generates beautiful HTML reports and comes with some PageHeap scripts, so you can enable PageHeap for your app, and it has a bunch of defaults as well. It's free for test runs; you have to pay if you want to use it to make money. PageHeap is useful because it sets certain patterns in memory when allocating and freeing memory, etc., which means that once you start doing memory overwrites on the heap, it's more likely to cause a crash, or if you try to read from some data which has been freed, it's also going to crash instead of maybe reading something valid and continuing execution. You get some slowdown with it, because it's a different allocator, but you get more bugs, and I think that's a great trade-off. If you look at a sample BugId report, the meat of it is: you have an access violation at this address, blah, blah, blah. And then when you look at the verdict from BugId, it says that it might allow information disclosure, but when you look at the disassembly down below, you can see that it's trying to read EAX into ESI, and then ESI into ECX, blah, blah, blah, and then it's going to call ESI. Okay, so we control — or at least EAX is set to 0xF0F0F0F0, which means that the memory has been freed, and PageHeap has set this pattern for the freed memory. So while BugId thinks that it's an out-of-bounds read, it's actually trying to dereference some freed memory instead. So you still need to do some manual analysis to make sure that you're not missing out on good bugs, because this looks to be a use-after-free, a way to just get some code execution instead.
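Spotting that 0xF0 fill in a register or crash address is the kind of quick manual-triage check you can also script. A toy sketch — only the 0xF0 freed-memory pattern mentioned above is assumed here; other fills vary by PageHeap configuration, so extend the table against your own setup:

```python
# Known PageHeap fill byte(s); 0xF0 marks freed memory under full PageHeap.
FILLS = {
    0xF0: "freed memory (use-after-free suspect)",
}

def classify(addr, width=32):
    """If every byte of a faulting address/register is the same known
    PageHeap fill byte, name the likely bug class; otherwise None."""
    data = addr.to_bytes(width // 8, "little")
    if len(set(data)) == 1 and data[0] in FILLS:
        return FILLS[data[0]]
    return None
```

So `classify(0xF0F0F0F0)` flags the crash from the report above as a use-after-free suspect rather than the out-of-bounds read the automated verdict suggested.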
The next problem: yes, you have 100,000 lines of JavaScript, and minimizing that to find out what causes the crash is also a huge pain. You cannot really be sure which of the many, many lines are required for the crash. Of course, common sense says that since we're feeding the input sequentially into the reader, the last blob of data is what's causing the application to crash, but it's most certainly dependent on the sequence of actions before that. The quick solution, instead of trying to minimize by hand, which is sometimes easy, sometimes not, is just to use Lithium. So again, from Mozilla: a line-based test case reducer. So here, yeah, this is the marketing material: it can reduce a 3,000-line Firefox crash test case to about 10 lines in minutes, faster than you can do by hand. Lithium works on interestingness tests, as it's described in the literature. In my case, I'm interested in things that crash due to memory errors, so there's an interestingness test called "crashes", right? You instruct Lithium to monitor the execution and then just cut out chunks of lines in the PDF. When it has cut out a chunk that causes the application to no longer crash, it throws that reduction out and says, well, cut out something else instead, right? It supports these markers called DDBEGIN and DDEND, so you can stick them in before and after the JavaScript that you've injected into the PDF, so that you don't try to minimize away essential parts of the actual PDF file itself, which would just cause, well, the wrong kind of errors and not the stuff you're interested in. And away you go, that's fantastic, right? But then, once you start using Lithium, you find out that maybe your fantastic use-after-free is now being reduced into a different bug, which causes a null pointer dereference, right?
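Lithium's chunk-removal strategy can be sketched in a few lines — this is a toy ddmin-style reducer for illustration, not Lithium itself; `interesting` is your interestingness test, e.g. "still crashes":

```python
def reduce_lines(lines, interesting):
    """Repeatedly try removing chunks of lines, keeping any removal after
    which the test case is still 'interesting' (e.g. still crashes).
    Chunk size halves until single lines have been tried."""
    chunk = max(1, len(lines) // 2)
    while chunk >= 1:
        i = 0
        while i < len(lines):
            candidate = lines[:i] + lines[i + chunk:]
            if candidate and interesting(candidate):
                lines = candidate          # removal kept the crash: commit it
            else:
                i += chunk                 # these lines are needed: skip past
        chunk //= 2
    return lines
```

With a predicate like "the file still crashes the reader", this converges on a small set of lines where removing any single one makes the crash disappear.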
And then all of a sudden your moneymaking input has turned into something useless instead. I just asked the guys for help — what do I do here? — and they said, well, there's something called crashesat.py from some other project. It's not mainlined into Lithium and probably never will be, but you can use it anyway. With that, you can say: I want the call stack to look the same, so that when Lithium reduces your JavaScript, instead of just saying crash, good, keep reducing, if it sees that the call stack has changed, it's going to throw out that reduction, so that you keep your good crashing input and not bad crashing input, right? So, next problem: it's pretty hard to make something generic targeting different applications. Of course, that's what I didn't want to make, right? So you end up hardcoding a bunch of stuff, but that's not a problem if you're not giving it to someone else. I think some smart guy told me yesterday: your code can be as terrible as you want if it's just for you; you only have to make it nice when you give it to someone else, right? So just make all the temporary hacks, it's perfect. Of course, you can also use a modular approach and just make an application-specific GUI instrumentation harness. For example, when I made all this, I made it for Adobe Reader, and then it was like, maybe there are some bugs in Foxit as well — and the only thing I needed to redo was the GUI instrumentation part, and that's like seriously three minutes of work, and then of course a few other changes, but it was pretty easy, actually. Yes, and if you read the Dharma documentation and do as you're supposed to do, you're going to run into some out-of-memory problems in Adobe Reader when you keep adding annotations, sadly, but you can then just handle that in the instrumentation instead. And then this is the main fuzzing loop. This is basically all you need to do to generate data, feed it and log faults, right?
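The idea behind that call-stack check can be sketched as a predicate you build once from the original crash — `run_target` here is a hypothetical function that runs a candidate and returns the crashing call stack (or None if it didn't crash):

```python
def make_interesting(run_target, reference_stack, frames=5):
    """Build an interestingness test that rejects reductions which change
    the crash: a candidate counts only if it crashes AND its top stack
    frames match the original crash's call stack."""
    def interesting(candidate):
        stack = run_target(candidate)      # None means: no crash at all
        return stack is not None and stack[:frames] == reference_stack[:frames]
    return interesting
```

Plugged into a reducer, this keeps your use-after-free a use-after-free instead of letting it degrade into a null deref with a different stack.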
So the astute viewer here will notice the complete lack of exception handling, which works to my advantage, because if the thing — the abomination here trying to type keys into the JavaScript console — if that fails, then the app has crashed, right? And then the fuzzer will crash, and then something external will say, well, the fuzzer stopped feeding data, save the input queue, and then start everything again. So the entire workflow in this is actually that you find some target functionality and you write your grammar — and you can even make a half-assed grammar initially and then just keep updating it while the fuzzer is running, that's not a problem at all. Then you use Dharma to generate your input based on this grammar, feed it using pywinauto, and just wait for the crashes to start raining. Automated crash analysis with BugId — that's handled for us. Minimizing the crashing input using Lithium — that's handled for us. And these steps are not tightly coupled, right? You can easily break them out to distinct worker nodes. I mean, the whole process is slow enough that you can just have centralized file storage where you SCP files over or something, and then workers can come in and pull jobs. It's quite fine. And the way it works is that Dharma produces for BugId, which produces for Lithium. The mining for the actual bugs with Dharma is a lot slower than the BugId analysis, which is a lot faster than the Lithium reducer, right? So it makes sense to have a lot of the producers here in number two. Maybe not so many for BugId, because that process is pretty static, and then some more nodes to do the minimizing, right? So you can play around with that a bit and get something where you don't have nodes waiting around for actual work. But that requires some tuning, and I can't come up with a generic recipe for how many you need of each. So that's just a matter of trying it out. Finally, results.
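The whole loop fits in a few lines. In this sketch, `generate`, `feed`, and `log` are stand-ins for the Dharma invocation, the pywinauto typing, and the input logging — and the deliberate absence of a try/except around `feed` is the fault detector:

```python
def fuzz_loop(generate, feed, log, max_iterations=None):
    """Main fuzzing loop. No exception handling around feed() on purpose:
    if the target has crashed, typing into it fails, this loop dies with it,
    and an external supervisor saves the logged input queue and restarts
    everything."""
    n = 0
    while max_iterations is None or n < max_iterations:
        data = generate()
        log(data)      # log BEFORE feeding, so the crashing input is kept
        feed(data)     # raises if the app is gone -> fuzzer dies too
        n += 1
```

The supervisor's job is then trivial: notice the fuzzer process exited, archive the log, reset the VM, and start again.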
I fuzzed it for two weeks on a pretty shitty laptop, and I got 339 crashes, giving me 132 distinct reports from BugId with 39 distinct crash locations — distinct reports but identical crash locations means that there's a different call stack leading to the same crash location, right? And it's pretty much the entire palette of what BugId supports: access violations reading near NULL, plus offset, blah, blah, blah, buffer overflows, out-of-bounds reads, right? You name it, you can have it all in Adobe Reader. And I definitely took some shortcuts when I wrote the grammar for some of the things, so it was pretty lazy, but it worked out a lot better than I expected anyway. So there you have it. When you build your fuzzing infrastructure, don't use crappy old Windows XP machines, or whatever you usually use. A virtualized Windows 10 or Windows Server 2019 boots in like 10, 12 seconds, so the complete reset of state and everything until you're back and the fuzzer is running is like a 10-second job, right? And you can disable the antivirus on the server. If you fuzz on the end-user OS and you generate PDF files, and they have even just a bit of meat in them, Defender is gonna quarantine your files, right? So get rid of that, because otherwise, every time you make files, it's gonna come in and scan them and whatnot. I keep a shaved-down installation with basically nothing but Adobe Reader on some NVMe, and then I just spawn linked clones or copy clones to RAM drives — that's also pretty cheap, and I avoid wearing out my NVMes. And definitely destroy the Adobe Reader update mechanism, because otherwise it's just gonna update mid-campaign, and then you come in with a big queue of crashes you wanna analyze, and they're no longer crashing, and you're like, what the hell is up? Why didn't it work? Did I break something? For me it was the updater, so destroy that one.
Building on this — something for you to do — you can go ahead and write more grammars, well, some grammars for Reader, or even Acrobat, which is like the professional version to make PDF files. It has a much richer feature set than the common Reader. I need to fix my grammars for some better coverage, because I took a bunch of shortcuts. I was also thinking: if you look at Dharma and Domato, they're both context-free grammar fuzzers and they should behave more or less identically, but maybe there's some secret sauce behind the scenes. So I was thinking to maybe write the same grammar — the formats are close but not identical — and then do a fuzz-off between Domato and Dharma, just for fun. Something else you can do is look at historically problematic JavaScript. Regression tests from Chrome or from Firefox can also give you a hint as to what kind of malicious JavaScript you want to generate when you write your grammars — like, do you want to set values to null, or do you want to do some weird re-entrancy, or whatever. So it pays to follow what all the browser fuzzers are doing, and of course not all of it applies to this limited subset, but still, it pays to snoop on their stuff. Something else that I or you could do — I want to, at least — is to make a replay algorithm so I can minimize the minimizing efforts, because while Lithium runs unattended and automated, it's still intensive, and it feels like you're definitely wasting a bunch of cycles, so if you have to pay for your compute time, maybe it pays to cut down on the minimizing efforts. I was thinking to make some sort of blacklisting too, because if you keep crashing on the same unexploitable bug early in your fuzzing, then again you're wasting cycles, right? It's surprisingly complicated to blacklist the things you don't want to do when you do this DOM manipulation, but it is what it is. It's not a big problem. It's just something to look at.
So if we try to draw some sort of conclusion: I think Adobe Reader is still incredibly full of bugs, right? Because I'm one guy with one laptop, and I sit down for a few weeks and press some buttons, and it's just raining crashes. For some reason it's still widely used in corporate environments, and I don't understand this, because there's the quarterly patch cycle and there are always code-execution bugs in it, right? So I honestly don't understand why it's not blacklisted in all corporations, but hey, I don't write the policy. The bugs are easy to find if you look in the right places. Everything is easy: a bunch of scripting, Python, batch files, imagination. And I don't like it when someone gives a presentation and says: here's a fuzzer I wrote, this is how you do it, I found all the bugs, and I reported them and made money, right? So I didn't report anything. You can just do what I described and you'll find the same crashes as me, right now. Also, it's super complicated, or annoying at least, to build your fuzzing pipeline if you don't have anything that crashes, right? If you can't find at least one valid crash, it's hard to work on your crash detection logic and all that stuff. So I left all the crashes in there, and that was it. So talk to me about computers, and if you made something better than this for dumb PDF fuzzing, hook me up. Thank you. Awesome, thanks so much. So do we have any questions in the room here? If so, could you line up behind the microphone? Any questions from the internet? No questions from the internet. Great. Okay, well, if there are any questions, people can come and... Which village are you in? Do you have a village you're in? Oh, there's one question. Hi. So, have you met any software that you couldn't find any bugs in? Just yes or no? No. No. I mean, finding bugs, I think you should... 
If you're interested in how to find bugs and all that stuff, watch Mark Dowd's keynote from OffensiveCon, I think it was this year. I think that was awesome, because it goes into the whole mindset required, right? Because if you go in with the mindset of "everybody's looked at this, I'm not going to find any bugs", then you will most definitely be correct, right? But if you go in with the mindset of "I'm probably going to find some bugs", you're also correct. Because finding bugs is a matter of understanding how the thing works and then just pushing all the edge cases everywhere. Great, thanks. Next question. First of all, thanks for your talk, really nice. You said that one of the tips was to disable the update mechanism, which sounds obvious. But what about disabling the internet? Will that actually result in different behavior, in your experience? Well, so he's asking if it will help to just disable the internet. I guess it's an easy, quick solution, but I built my setup with distinct worker nodes that talk over the network and pull jobs and do stuff here and there. So I think it's a bigger pain in the ass to not have internet on them than it is to just screw with the update mechanism. Because maybe you want your pipeline to actually check for the latest version, pull that one down, and do everything automatically. So you skipped the update mechanism altogether, as in you don't test that part? Well, the update mechanism just downloads a signed binary and executes it, right? So that part is not interesting. So I think it's better to just destroy it, and then you know which version of Adobe Reader you're fuzzing. In the case of Adobe Reader, it's not going to say "there's a new version, do you want to update?", it's just going to do it, right? Yeah, I get that. Maybe perhaps there might be different behavior if you have internet, it might... Speak into the microphone. 
I'm at the microphone now, but yeah, sure. So I was thinking that perhaps, if you have internet enabled, Adobe Reader would do some other checks, as in load different pieces of code, and when there's no internet it might skip that. Like: check if we're online, then go into this subroutine, but if not, go into that one. So I was wondering if that might be interesting as well, because otherwise you might skip that code. Yeah, I see what you mean, and it's possible, but I'd say it's not relevant for the stuff I'm doing right now, but definitely something to bear in mind, right? Thanks. Thanks for the question, and let's give a round of applause for...