Yeah, hello. So as I said, I'm Hanno Böck, and I want to talk a bit today about how you can find bugs in your software, especially if it's written in C or C++, and about some very powerful tools that I think not enough people know about yet. But first a quick introduction about me. I work as a freelance journalist and I often write for the German online magazine Golem.de. I also write a monthly TLS newsletter and — sorry, these are the wrong slides; I did almost the same talk a couple of days ago. So yeah, I write a monthly TLS newsletter, and I run the Fuzzing Project, which is something I founded about two years ago and which is funded by the Linux Foundation's Core Infrastructure Initiative, where I try to reduce the number of bugs and security vulnerabilities in free and open source software.

So here's an example, and it's from Qt. There's a problem with this code. What's actually happening here is: we have some kind of array structure and an iterator pointing into it, and there's a check that if we're at the last element of the array, it reduces the pointer by one, so that we point at the element one earlier. Now the problem is: if we have an array with only one element, then when we try to get to the previous element, we end up in invalid memory.
That memory is not allocated. So what happens here is that the code reads some invalid memory, and then something weird may happen. This is a real bug in Qt. I have actually not reported it yet, because I discovered it yesterday while I was on a ship without Wi-Fi — sorry, I have not been able to report it yet, but I will. And I found plenty of bugs in Qt and KDE: I found a use-after-free bug in qmake, which is the build system tool from Qt; I found the bug I just showed you; and I found several out-of-bounds reads of a similar kind in Qt GUI and also in KWin. For the last two I'm not entirely sure who I should blame, because the code is using the X API in a wrong way — but I think the X API behaves really strangely: it asks you to use a struct, but then you need to allocate more memory for the struct than the size of the struct, because it tries to read a certain size from it. So I'm not entirely sure; maybe there's a way to fix this in the X API. I have to look into this.

And to be fair, I also have to mention GNOME. I found a bunch of very similar bugs there: for example, two out-of-bounds reads in GLib, a heap overflow in gnome-session — which is actually triggered every time you start GNOME — and an out-of-bounds read in Pango. Here's this heap overflow in gnome-session. What's happening here is that there's an allocation for a buffer that contains a list of pointers: all the pointers from argv, plus three more pointers. So, can you see the problem with this? Who sees the problem with this?
One person, that's nice. So the problem is: we add argc plus three and then multiply by the size of a pointer — but the brackets are missing around argc plus three. What we're actually computing is three times the size of a pointer, plus argc, instead of the other way around. We end up with an allocation that is too small, and that's a very classic buffer overflow. And as I said, this buffer overflow happens every time you start GNOME.

So why are there all these bugs, and why don't people seem to find them? There's a tool that I want to suggest you all should use if you write code, called Address Sanitizer, because all of the bugs I just showed you and mentioned can be trivially found with it. It's a feature that's part of the GCC and LLVM compiler suites, so it's basically just a flag you add during compilation: you add -fsanitize=address to your compiler flags, and then you get this feature that will detect these kinds of bugs for you. If you have a project that's written with autotools, and if you have a test suite — which some people have — this is how you would run your test suite with Address Sanitizer enabled: you pass some CFLAGS, CXXFLAGS and linker flags to your configure script, then you compile as you usually do, and then you run your test suite. I think we can agree that this is not too hard — if you're developing code, typing in these three lines shouldn't be too hard. But apparently several people have never done this before.

There's a question from the audience: "I just wanted to say that in KDE our CI does use those flags, so we sometimes find some of the mistakes. The thing is that we're missing more tests." Okay — so what I just heard is that KDE is already doing this in their CI system. But you probably never start KDE itself with these flags, because then you would have found all the bugs.
These were all found by just starting KDE with these flags — the bugs I told you about earlier. But it's good that you do this, that's great, thanks. GNOME doesn't do it.

So how does this ASan thing work? What Address Sanitizer does is reserve a region of so-called shadow memory, where it tracks for each byte whether it's allocated memory or not. If you do a memory allocation, it marks that as valid, and likewise when a function call creates a stack frame. It also makes sure that between the allocated areas there's always some so-called poisoned space — it changes the stack layout so that between variables there's always some poisoned space — so it can detect when you overrun a buffer, no matter whether it's on the stack or on the heap. The most common bugs you can find with this are out-of-bounds accesses and use-after-frees. There are other kinds of bugs it finds, but these are the most interesting, because they are very common. And use-after-free bugs are the most common security bugs in complex pieces of software today — especially in something like a browser, the memory corruption issues there will usually be use-after-frees.

So this is an out-of-bounds read access, relatively simple code: we have an array with two elements, then we create an integer which we assign to, and then we try to access the element with index two of that array. As arrays start counting at zero, index two is invalid — we only have an element zero and an element one. I have this code here, and if I compile it without anything special, I just see some garbage number printed out, because it's reading some invalid memory, there's something in that invalid memory, and it just prints that out.
Okay, now we add Address Sanitizer, and we also add -g, because then we have debugging information and that gives us a better stack trace — that's the only reason for it. Then we run this — I have to unset this variable and run it again — and we get this very nice error message. It says a read of size four (because it's an integer, four bytes) in line 5, column 17, and it also says that the address is located in the stack of this thread, in the frame generated here, which is our main function. So you run this code with Address Sanitizer, you hit the bug, the program terminates, and you get a very detailed error message. If you know your code, you should be able to figure out what's wrong just by reading this error message.

And this is a use-after-free bug, a very simple example. We're allocating some space with calloc, so it's initialized to zero; then we print some element of this array; then we free it; and then we try again to print something. That's now invalid, because we just freed this buffer — we should not read from it anymore, but we still do. I have this code here, and the same thing: if I run it with Address Sanitizer — and for use-after-free the very nice thing is that it tells me the three important things. First: here is the use-after-free access. Then: here is where this memory got allocated. And: here is where it got freed. These are exactly the three pieces of information you need to analyze a use-after-free bug. So yeah, also very nice.
Of course, these are extremely simplified examples — you will have much more complex cases in your code — but the basic idea of what a use-after-free is stays the same. So I saw: okay, there are all these bugs you can find with Address Sanitizer, and I made it kind of my mission to test everything with it and fix all these bugs. What I actually did was start building a Linux system with Address Sanitizer — everything compiled with it, except a very few core packages where that's more difficult — and it runs: I can run it in a VM, and I even have a server where it runs. Just by trying to create a Linux system with Address Sanitizer enabled, I found bugs in things like Bash, coreutils, man, pidgin-otr — pidgin-otr is kind of security-sensitive, I don't know, maybe that's important — and also in Qt and KDE (we're at QtCon here), and many other things. Basically just by compiling stuff with Address Sanitizer and then trying to use it.

One more thing. When I did this, I thought this might be something I would recommend as a kind of special hardened system for very security-sensitive setups. But it turns out this is not a good idea, because Address Sanitizer is not designed to be a security tool — it's designed as a testing tool — and if you do this, you end up with security vulnerabilities introduced by Address Sanitizer itself. For example, if you have setuid binaries built with Address Sanitizer, you can basically abuse the error-reporting functionality to let it dump files with root permissions, and then you can get root with that. So it's a good thing for finding bugs, great for testing, but it's not good for production use — at least not the way it's implemented right now. Maybe there can be some future Address Sanitizer++ that's usable for production, but not the current version.
It's a testing tool, and the overhead is real: you have something like 50% CPU overhead, and quite some memory overhead, because this shadow memory takes quite some space.

Now I want to get into the next topic, which is fuzzing. The idea of fuzzing is very simple: you take some invalid input and test the software with it. The blunt way I like to phrase it is: you throw garbage at software and see what happens. In practice, take a simple example — we have an image parser and we want to test it. We take a valid image, like a JPEG or a PNG, and we just add random errors to it: insert some characters, flip some bits, delete something, swap parts around. And then we test whether that crashes the code we're trying to test.

Who of you has heard of the DARPA Cyber Grand Challenge recently? Quite a few. So here are a bunch of press articles about it: "Watch AI hack away at the DARPA Cyber Grand Challenge", "AI hackers will make the world a safer place", and also "Elon Musk says the DARPA AI hacking challenge will lead to Skynet". The general theme of these articles was: there are now some AI bots — computers in shiny glowing boxes, like on the lower left; that's a picture from DEF CON, where these boxes were on the stage — and they're hacking each other, automatically finding vulnerabilities, patching them and exploiting them. By that, they are basically trying to replace security professionals with AI bots. So they want to replace people like me with an AI bot, and at some point these AI bots will of course create some Skynet and suppress humanity, and we'll have some Matrix kind of scenario. If someone wants to replace me with an AI bot, I'm cool with that — totally fine, I will find something else to do — but I would like to understand what's going on here, right?
So I'd really like to understand this, and all these articles were very shallow — I didn't get any technical details of what's going on. Here's a PR video — not the whole video, but some parts of it; you can see the fancy colorful things on the DEF CON stage. Roughly what's said in it: "Game start at 9:45 Pacific. Welcome everyone to the first ever fully automated cyber security competition. Xandra managed to discover and prove a vulnerability in one of the services. But this is a unique situation, and it's something we were kind of hoping to see: they actually managed to discover an unintended vulnerability. Each of these services is written just like real-world software — it's impossible for us to be absolutely certain that the one vulnerability we intended is the only one that's present. So they discovered this vulnerability and actually began trying to prove it against other teams very early on. What we have going on here — in that five-minute window that Deb mentioned she was hoping for, that vulnerability actually managed to be proven and patched by several of them — basically represents a leap forward in program analysis. All the talk about state machines and all the possibilities of state machines really reminds me of one of the papers we published recently: Driller, which is supposed to augment fuzzing with symbolic execution."

Did you hear that? He said fuzzing. So okay, at this point I thought: this is something
I know something about. Because, you know, when people talk about artificial intelligence these days — I don't know how you feel, but I get extremely skeptical. I think a lot of stuff is called artificial intelligence which isn't really anything like artificial intelligence. Okay, so this seems to be something about fuzzing.

I talked to the leader of the winning team, David Brumley, and he told me they are using a tool called AFL, or American fuzzy lop. And immediately this became something I'm very familiar with, because I use it all the time. Then I learned that not only the winning team, but basically most of the teams were using American fuzzy lop — the first place, the second place and the third place all were using it. So basically, that's the real winner of this competition. We can argue about whether American fuzzy lop is AI — I don't know. I don't think AFL is Skynet, but it's pretty smart, so maybe it's some kind of AI. This rabbit is an American fuzzy lop, but it's also the name of a tool.

To understand what American fuzzy lop does, I want to go a bit into the different strategies that exist for fuzzing. The very original idea is what I would call dumb fuzzing, where you only add errors at random to an input and test it. That's easy.
It's also surprisingly effective already — you can find a lot of bugs with it — but it's not going very deep into your code, so if you have complex issues, it won't find them. Then you can do something which I call template-based fuzzing, where you write a fuzzer that's specific to a certain data structure. For example, you want to test an image parser, so you try things like: what happens if I put a zero into the width field? That doesn't make any sense, so let's see if the parser handles it correctly. The problem with this is that it's a lot of work, because you have to do it for every data structure you're fuzzing, and therefore I say it doesn't scale: if I want to fuzz all the software out there, I will never be finished.

The new thing that came up with AFL is what's called coverage-based fuzzing. What this does is add some extra assembly instructions during the compile step, and these allow the fuzzer to learn something about your code paths. When an input triggers a new code path, the fuzzer sees: okay, this input is generating something interesting, because now we trigger new code, and therefore we should continue fuzzing with it as a starting point. This leads to some pretty impressive results. There's a blog post by Michał Zalewski, the author of AFL, where he started to fuzz a JPEG parser with garbage input, and after some hours it created valid JPEG files. Maybe that's already kind of a Skynet thing, I don't know — it's really pretty impressive.

So AFL made this idea of coverage-based fuzzing practical. You work in two steps: first you compile the target with the AFL compiler wrapper, and then you start the fuzzer. And to show you that this is really easy: you still run configure as usual for an autotools-based project, but you pass CC and CXX so that the compiler wrapper is used. Then it's always
a good idea to disable shared libraries, because then you don't have to fiddle around with LD_PRELOAD and the like. Then you build, and then you need some kind of sample file that you want to fuzz. Put it into a directory — I always call it "in", but it's an arbitrary name — and then you run the fuzzing process, where you pass the input directory, an output directory and the path to the thing you want to fuzz. You add the "@@", and the fuzzer will call this executable, substituting the file currently being tested at the "@@" position.

And this is how it looks: a nice ASCII-art interface. The most interesting part is the upper right corner, because there it shows you the number of unique crashes. That's a bit of a lie, because you will have duplicates there — the detection of unique crashes doesn't really work perfectly, but it tries to err on the side of caution. It also detects hangs, if you're stuck in an endless loop or something, and it shows the number of paths it detected. That's also important, because if that number doesn't change — if it stays at one or two — then probably something is wrong with your setup and you should check it.

I'm inclined to say AFL has probably found bugs in every single important piece of software that's written in C: OpenSSL, OpenSSH, libjpeg, libpng, SQLite, GnuPG, Bash, Stagefright — Stagefright was the Android thing that made some headlines last year. Really a lot of bugs found with AFL; there's a list on the web page that's quite impressive. It made a huge impact on bug finding in the free software world.

So I've now told you about AFL and Address Sanitizer, and there's a very obvious thing to do, and that is to use both at once, because Address Sanitizer finds these additional classes of bugs that are hard to find otherwise, and AFL is a very good fuzzing tool.
So you can just use both at once. AFL already has a feature built in for that, which you enable with an environment variable, AFL_USE_ASAN. You then need to disable AFL's memory limit, because Address Sanitizer allocates this shadow memory, and although it's only a virtual allocation, it's several terabytes. It's only virtual memory, so it doesn't really use that memory, but if you put a memory limit on it, it won't work. Therefore you just need to disable the memory limit. This can of course cause a situation where an application really does use a lot of memory and everything crashes — that's the reason AFL has a memory limit in the first place — but in my experience that almost never happens. So: simple solution, disable the memory limit and you're done.

Okay, who remembers this logo? I hope everyone. This was the Heartbleed bug, probably the most famous bug in any piece of code ever — it made headlines everywhere. At some point I was asking myself: could we have used fuzzing to find a bug like Heartbleed? I have to say here that one of the people who found Heartbleed actually did use a fuzzer, but it's a proprietary tool that's specifically targeted at TLS. So it's not a generic fuzzer — it's what I earlier called template-based fuzzing — and it's not free, so I cannot test it. But I wanted to know: can I use the tools that I use for fuzzing to find the Heartbleed bug? One problem is that American fuzzy lop is file-based, and this is networking. But the OpenSSL API has a nice feature: I can basically do a TLS handshake without really doing a handshake.
I'm just passing buffers back and forth. So I have two instances of OpenSSL and let them talk to each other, but without any real networking involved. Then I could write something that just swaps one of these handshake messages with a file that I pass on the command line. Doing this, after six hours I was able to rediscover the Heartbleed bug.

Then I got an email from Kostya Serebryany, who is the developer of Address Sanitizer and a couple of other things, and he said he has this tool called libFuzzer, and it's able to do the same thing — but in five minutes. That's also quite impressive. libFuzzer is modeled after AFL, but it works a bit differently: where AFL works on executables, libFuzzer works on functions. That's much faster, because it's faster to call a function than to run an executable. But there's a downside: you have to write code to actually use it. The big advantage of AFL is that it's just so simple, and with libFuzzer you need to write code.

This is an example of fuzzing an OpenSSL function — an ASN.1 string copy — with libFuzzer. We're basically writing a wrapper function here that gets a buffer and the size of the buffer passed in, and then I'm just calling this OpenSSL function with that buffer. Very important here: I check whether it succeeded, and if it did, I free the structure it generated — because if I don't, I have a memory leak, and as this runs millions of times, the leak adds up, at some point I run out of memory, and my fuzzer crashes. So if you use libFuzzer, keep in mind that you have to write a wrapper that's free of memory leaks.

Then, another interesting method is what's called differential testing. The typical fuzzing scenario is to look for these kinds of memory safety issues and crashes, but you can also do other things. If you have a function that
has a clearly defined output, then what you can do is take two different implementations of the same thing and compare the results. An area where this works really well is mathematics, because if you have something like an exponentiation or a division, we can agree that it has one clear result — there's no discussion, two times two is four and not something else. So if two implementations disagree on some mathematical calculation, then one of them must be wrong. At least one of them must be wrong — I haven't run into a situation where both were wrong in different ways at the same time, but maybe that happens.

There was a bug in OpenSSL in a function called BN_sqr, which is squaring — you multiply a number by itself — and in very rare cases this function would produce wrong results. These cases were so rare — about one in 2^128 inputs — and that number is so large that if you used this function with random input for your whole life, you would never hit the bug. So it's a very rare bug, but the surprising thing was: American fuzzy lop was able to find it. Maybe that's another indication that there's some Skynet thing going on, I don't know. The first person who showed this was Ralf-Philipp Weinmann — he had a talk about bignum vulnerabilities at Black Hat last year; it's on YouTube, and I recommend watching it, it's very interesting. Then I tried this myself, and AFL is really good at it. I found a bug in OpenSSL in the modular exponentiation — this is the function that's basically RSA and Diffie-Hellman, so kind of important. Unfortunately, nobody found a way to exploit it, but it's still, I think, something where you should look into whether it's correct.
Then I found similar bugs in the elliptic curve code of Nettle, which is used by GnuTLS, and also in the modular exponentiation of NSS, and in the Poly1305 authenticator in OpenSSL. That one has no CVE, because it was never in a release — I found the bug before they released the code. And in MatrixSSL, which had the interesting property that if you did zero to the power of something, it would crash. So I thought: okay, I can do this over the network — I just send them an RSA signature that's zero, because then it will try to exponentiate it and crash. And that worked. There were also cases of wrong results, and they haven't really fixed all of that. I wouldn't recommend using MatrixSSL.

One of the things I run into when I do this: with the vast majority of bugs there's no discussion — these are bugs, and usually people also agree that it's a security vulnerability. But if I find something like an invalid heap read of one byte — which happens extremely often — then people tend to argue: is this a security vulnerability or not? I find these discussions a bit pointless, because it's clearly a bug — you're reading invalid memory, you cannot do that. So can we just skip the discussion of whether we call it a vulnerability? I simply don't care; I want you to fix it, and then we're done. I think in many situations it's just easier to fix the bug than to have the discussion about how severe it is.

Then I want to mention a couple more tools. Besides Address Sanitizer there are a couple of other sanitizer tools — I just think Address Sanitizer is the most interesting.
There's Undefined Behavior Sanitizer, which finds many small things that the C standard says are undefined — and that essentially means that if you do them, you cannot have any expectation that your code does anything reasonable; the compiler can optimize it in ways where you cannot expect anything sensible. There's Memory Sanitizer, which finds uses of uninitialized memory. The problem with that one: it's nice, but it's tricky to use, because you need to compile all the libraries you use with Memory Sanitizer — and if you do C++, even libstdc++. So that's a bit annoying to use. And there's ThreadSanitizer, which is mostly — there's a question from the audience: "The same thing you just said is valid for Address Sanitizer as well. If you don't compile libstdc++ with ASan, you won't be able to find bugs that are triggered within libstdc++ by your code." That is true, but you can still use Address Sanitizer to find bugs in your own code, just not in the standard library — while if you try this with MSan, you will just get random false positives that make no sense at all. So the difference is: you get better results with ASan if you compile everything down to the last library with it, but you can still use it if you only compile one thing with it. And there's also ThreadSanitizer, which is mostly interesting for large projects, like browsers — for KDE it's probably also interesting.

So here's an example for UBSan: what we're doing here is shifting a variable by j, and j is minus one, and that's invalid — you cannot do a shift with a negative value.
That's undefined. And the next thing here is an integer overflow, which is also undefined. I have this code here as well, and the difference is that UBSan doesn't terminate your application, so you can find multiple bugs at once (the latest version of ASan is also able to do this). You see here: "shift exponent -1 is negative" and "signed integer overflow". And here's an example for Memory Sanitizer: we have an array, we set element one, and then we access element number argc — which is by default one, but if we pass a parameter, it's more. So if I run this without arguments, I don't get an error, but if I pass any random parameter, I get this: "use of uninitialized value".

Okay, then a number of these tools are also available for the kernel: there's Kernel Address Sanitizer (KASAN), kernel UBSan and KTSAN, and there's a fuzzing tool called syzkaller, which tries to adapt this coverage-based fuzzing to kernel space using syscalls. I'm not super familiar with those — I just once tested KASAN, and it found a bunch of stuff in my GPU driver.

Then one kind of biggie is network fuzzing, because nobody has really found a very good solution for that yet. You run into a lot of trouble trying to do it: you basically need an application that terminates after one connection attempt, and things like that. There's a tool called preeny, which tries to take a file and simulate the networking calls, making the file look like a TCP stream, and there's a patch to AFL which tries the same — but all of that is pretty fragile and doesn't work very well. I haven't seen anything there yet that works as flawlessly as the file-based setups, and that's something I would like to see. We're not there yet.

Then there currently seems to be some research interest in combining fuzzing with something called symbolic execution.
Symbolic execution comes from formal software verification, and the idea is that you try to analyze a piece of software and all the states it can get into, for all possible inputs. The problem is: if you try this on any real software, it has so many states that the analysis will never finish. So what people are trying to do is use fuzzing, and when the fuzzer gets stuck — when it cannot reach a certain part of the code — they try to use symbolic execution for that part. I'm looking forward to seeing whether this will work. Some people in the DARPA challenge have done this, but there's nothing there yet that you can just use and try out. One thing has been published — it's called Driller, which the guy you saw earlier in the video was talking about — but it targets the artificial operating system that was used in the DARPA challenge, not a real operating system, so it's questionable how well it translates to real systems. This is open research and we will see, but I must say I'm a bit skeptical whether it will really lead to significant results.

What I do want to say: all of the tools I presented to you are free. Unfortunately, not everything the DARPA challenge produced is free, which is kind of a pity, but AFL, libFuzzer, Address Sanitizer and all the other sanitizers — it's all free software. So there's essentially no reason not to use it.
So just use it and fix your software with it.

Now, something that has come up in the past couple of years: what I'm doing here is mostly fixing bugs in C software which are typical C problems, and you know, there are people who have been saying forever that we should just rewrite everything in some safer programming language. This debate has accelerated because there's Rust, and a lot of people like Rust. I think — and I'm not joking here — this is a serious debate we should have: whether the way to go is to keep fixing all these C bugs, which pop up again and again, or whether we should move to something safer. This also kind of questions whether what I do is the right thing to do. But I wanted to have it mentioned — and maybe someone wants to rewrite Qt in Rust, that would be cool.

Okay, and finally: is anyone here working for Red Hat? No — that's sad. Anyone working for Debian, or involved with Debian? Okay, so you will be happy. I thought: I will fuzz package managers, because why not — they parse stuff, so they're good targets for fuzzing, and I can test whether their code is sane. So I fuzzed dpkg, and I was impressed: I reported two bugs to the Debian security team, and eight days later there was an update and a security advisory with these bugs fixed, from both Debian and Ubuntu. That's pretty good, right? If you have a vendor reaction like that, that's cool. Then I fuzzed rpm. In November last year I reported three bugs in rpm's parsing to the Red Hat security team. I got an answer: "Yeah, we just got 30 crash reports for rpm. It may take some time till we tackle this." Okay. A bit about rpm: it was originally the Red Hat Package Manager, but in 2007 they decided to make it an independent project.
It's now the RPM Package Manager, used by Red Hat, SUSE and some others. So it's kind of independent, although rpm.org still belongs to Red Hat. Well, the Red Hat security team says, "However, we don't own the rpm.org domain", so they disagree with the WHOIS data; I'm not sure.

rpm.org is a Trac installation, and Trac has a bug tracker, so I could report bugs there. That's great. Then I tried to register an account and got a certificate error: the certificate had expired in 2012 and didn't match the domain name. I ignored the certificate error and created an account anyway, and then found out that I still cannot create a bug: it tells me I should ask on the mailing list or on IRC whether I'm allowed to create a bug. Then, in the communication with Red Hat, I found out that there's a GitHub repository, but there's no link from the web page to the GitHub repository, which is, yeah, I don't know.

Then, okay, maybe you want to find out the latest version of RPM. There's some disagreement: rpm.org/releases says 4.12.0.1, the GitHub repository says 4.12.0, and Fedora is using 4.13.0, which doesn't seem to be an upstream release.

So the status right now is that one stack buffer overflow I reported is still unfixed in the Git code, and in the releases, at least the ones you can find (maybe there's a secret release somewhere that I didn't find), all these bugs are unfixed, and there are more. Then there was a discussion on Reddit where someone said this is all boring, because it all happens after the signature check. So I tried it again pre-signature-check and still found some things. I'm not really sure what to do about it right now, because they don't seem willing to fix the situation.
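For context, fuzzing a package manager like dpkg or RPM mostly means pointing a fuzzer at the file-parsing code. With libFuzzer, the harness is a single entry point that hands the fuzzer's input to the parser, and AddressSanitizer catches any out-of-bounds access. A hedged sketch with a toy parser standing in for the real dpkg/rpm entry points (`parse_package` and its format are invented for illustration):

```c
#include <stdint.h>
#include <stddef.h>

/* Toy stand-in for a package parser: one count byte followed by that
 * many entry bytes. The length check is the interesting part -- drop
 * it, and a count byte larger than the input causes an out-of-bounds
 * read, exactly the class of bug this kind of fuzzing finds. */
static int parse_package(const uint8_t *data, size_t len) {
    if (len < 1)
        return -1;
    size_t count = data[0];
    if (len < 1 + count)
        return -1;              /* truncated: reject instead of overreading */
    unsigned sum = 0;
    for (size_t i = 0; i < count; i++)
        sum += data[1 + i];     /* "parse" the entries */
    (void)sum;
    return (int)count;
}

/* libFuzzer entry point: with a recent clang, build with something like
 *   clang -g -fsanitize=fuzzer,address harness.c
 * and libFuzzer will call this in a loop with mutated inputs. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_package(data, size);
    return 0;
}
```

The same parser could just as easily be driven by AFL via a small `main` that reads a file; the point is that the harness is a few lines once you can call the parsing code in isolation.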
The thing here is that these bugs are not that interesting. What's interesting is that the development process of RPM seems to be completely broken. I mean, that's not how you do things; hopefully we can agree on that.

And a small announcement: there's a Berlin security meetup, and the next meeting is tomorrow. So if you're interested in security and cryptography, you may want to come; it's at the Mozilla community space. And that was my talk, thanks for listening. Please test with AddressSanitizer, and please use fuzzing to test your software. Now I'm open for questions.

Q: It's more a comment than a question. I'm one of the developers of Poppler, the PDF library.

A: Oh, great, I want to talk to you.

Q: I randomly get people saying, "I have this PDF that crashes the thing."

A: Yeah, I did that.

Q: Then I say, "Can you report the bug?", and they say no, because it's a security issue, and we end up in the same situation you mentioned: they don't want to report the bug because it's a vulnerability, but I want them to report it, because it's much easier to track if it's in a bug tracker.

A: Yeah, I understand.

Q: The thing you said, "let's not discuss it, just fix it": sometimes they don't even want to give me the file needed to fix it, because they think they deserve more than using a bug tracker.

A: I reported a number of things in the bug tracker, and they have not been fixed, and that's more than a year ago.

Q: Oh, really? Okay, let's fix them now. Or send me an email.

A: Also, the GNOME bug tracker has no option for me, as the bug reporter, to create a security bug that is kept secret. The bug tracker has this ability, but I cannot trigger it.
So maybe that's something; maybe we should talk later, because I think there is stuff that can be improved there.

Q: One thing I would ask you: do you know of anything for fuzzing user interfaces? Fuzzing file-based things is one thing, but user interfaces are also prone to, well, missing input validation, let's put it that way.

A: Okay. First of all, you can just use them, or just start them: as I said, I started KDE and found four or five different bugs just by starting it. What I did for a while was run my email client with AddressSanitizer enabled. The problem now is that I know there's a use-after-free bug and I don't manage to fix it, because it's complicated, and now it's a bit scary, because I know this bug exists and it may be a security bug. But I think that's a good way to do it: if you're the developer, just occasionally run your application with AddressSanitizer enabled. And there has been some work by LibreOffice where they simulate user interactions with a fuzzer, but I have not looked closely into that.

Q: Can any of these principles also be applied to, let's say, safer languages like Rust or OCaml?

A: It entirely depends on the language. What you can always do are things like differential testing, where you're just doing correctness tests: that is independent of the language, whenever you have a function that produces wrong output. And there are all kinds of efforts to adapt AFL or similar tools to other languages: I think there's an AFL for Rust, the OCaml people are also using it (I'm not sure exactly how, but I've been told so), and there's a Go fuzzer. The question there is what you are looking for.
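The differential testing mentioned here works in any language: run two implementations that should agree on many inputs and flag any mismatch, no crash required. A minimal sketch in C (both functions are toy examples; in practice you would compare, say, two big-number or JSON libraries):

```c
#include <stdint.h>
#include <stdlib.h>

/* Reference implementation: obviously-correct bit-by-bit popcount. */
static int popcount_ref(uint32_t x) {
    int n = 0;
    for (int i = 0; i < 32; i++)
        n += (x >> i) & 1;
    return n;
}

/* "Optimized" implementation under test (standard SWAR popcount). */
static int popcount_fast(uint32_t x) {
    x = x - ((x >> 1) & 0x55555555u);
    x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u);
    x = (x + (x >> 4)) & 0x0F0F0F0Fu;
    return (int)((x * 0x01010101u) >> 24);
}

/* Differential driver: feed both implementations the same pseudo-random
 * inputs and report the first disagreement. A fuzzer would supply the
 * inputs instead of rand(). */
static int differential_test(unsigned iterations) {
    srand(12345);                       /* fixed seed: reproducible run */
    for (unsigned i = 0; i < iterations; i++) {
        uint32_t x = ((uint32_t)rand() << 16) ^ (uint32_t)rand();
        if (popcount_ref(x) != popcount_fast(x))
            return -1;                  /* mismatch: save x as the bug report */
    }
    return 0;                           /* implementations agree */
}
```

This is why the technique is language-independent: the oracle is "the two implementations agree", not "the program crashed", so it works just as well for Rust or OCaml code as for C.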
Some of these safer languages can crash; they crash if you have a buffer overflow. In others, crashing is not really how bugs show up (they can still crash when there's a bug, but it's not the typical failure mode). So there are all kinds of ways to adapt this to safer languages, but C has the most of these crashing bugs, so that's where you get the most output.

Q: You mentioned that you ran AddressSanitizer on Qt. Did you also try fuzzing Qt?

A: No, not yet. I only did this in the past couple of days, because I thought I had to do it for this talk.

Q: One question: you mentioned these rare bugs in mathematical code, which is actually pretty typical. How does the fuzzer find those? If it's so unlikely that you get exactly the right input?

A: That's a very good question, and I wish I knew the answer. It would probably require some very detailed analysis of what's going on. My guess is that AFL has a tendency to create repeating patterns, and those seem to be more likely to trigger these kinds of bugs. There was, for example, a discussion I had with the developer of libFuzzer, because he was annoyed that there was one bug AFL could find and libFuzzer could not, and that cannot be, libFuzzer is the better tool. He has fixed it now, by adopting the repeating-pattern generation that AFL does. These are really horrible bugs that can live for ten years or so.
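The guess about repeating patterns can be made concrete: in multi-precision arithmetic, a carry only ripples across the whole number when many limbs are all-ones, so inputs like `FF FF FF ...` exercise code paths that uniformly random bytes almost never hit. A small sketch (the increment function is a toy for illustration, not code from any real library):

```c
#include <stdint.h>
#include <stddef.h>

/* Increment a little-endian multi-byte integer in place and return how
 * many bytes the carry travelled. The carry crosses byte i only if
 * bytes 0..i are all 0xFF -- for random input that has probability
 * 256^-(i+1), which is why repeated-byte patterns like AFL generates
 * are far more likely than uniform noise to reach deep
 * carry-propagation code (and any bug hiding there). */
static size_t increment(uint8_t *num, size_t len) {
    size_t i = 0;
    while (i < len) {
        num[i]++;
        if (num[i] != 0)        /* no wraparound: carry stops here */
            break;
        i++;                    /* byte overflowed 0xFF -> 0x00: carry on */
    }
    return i;
}
```

A bug that only triggers when the carry crosses, say, a limb boundary deep in the number would need a long run of 0xFF bytes in the input, which is essentially unreachable by blind random mutation.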
Q: And this is probably where symbolic analysis would really help, right? Because it can analyze the mathematical structure of the code.

A: I had this discussion with people doing formal analysis and symbolic execution, and they said, yes, our approach is good for finding these bugs. I asked how many such bugs they had found with it, and it was one, and I had found something like five. So I'm not sure; my feeling right now is that fuzzing at least produces more results.

Q: But AFL isn't doing any analysis of the code at all, right? It just checks for new code paths.

A: Yes, and that is the analysis it does: it finds the input that triggers a new code path, and that's it. Maybe we can have a discussion later, but all this formal verification stuff sounds very interesting, and then you're not so impressed by the real results it produces.

Q: Okay, thank you. I have one comment: I'm one of the people who actually tried to build all of the KDE and Qt applications with ASan, and I haven't seen your bug. So I was wondering whether that could be because I didn't compile libxcb and libraries like that with ASan enabled; I'm not using Gentoo, so I didn't rebuild my whole system.

A: I think if you build Qt, and KDE on top of it, with ASan, you should find them. Maybe it's new.

Q: Hi, I have a comment from my practice, my job. We have commercial software based on open-source components; we use Qt.
We are still at Qt 4.4.8, and I have been using Valgrind and AddressSanitizer for years. At the beginning, when we started using these tools, it took us a year or two to eliminate the bugs from our own software. For the last four years or so, I have been hunting bugs in libraries like libfontconfig or glib, and I'm not sure whether I should really report all these bugs where I only have the stack trace and the place where it happened. I don't really get the time from my manager to look deeper into them, so my job is limited to putting all these stack traces onto a suppression list.

A: I don't have a good answer to that. That's basically something you have to work out with the upstream projects, whether they accept low-quality bug reports. I'm in a similar situation: I can run Gentoo and try to compile all the packages with AddressSanitizer, and then I have a bunch of log files with AddressSanitizer errors, but I will never look into all of them. So I'm wondering whether I should just put them online somewhere and crowdsource it. So I don't have a good answer for you, sorry. But ideally, all these projects should at least use some kind of AddressSanitizer testing themselves.

If there are no more questions, then please give a hand to Hanno. Thank you.