My name is Roelof Temmingh. This is my beautiful assistant, Haroon Meer. We're doing a talk entitled A Tale of Two Tested Proxies. Just to give you an idea of where we're from, we're also the guys that brought you tools called Wikto, Crowbar, BiDiBLAH; we're the producers of Hacking by Numbers, the people putting the Tee back in CyberTerrorism, the directors of When the Tables Turn, several Syngress fairy tales, and, contrary to popular belief, we were the inspiration for the Matrix trilogy. So the talk is about two different proxies. I know most of you that are here have got a bit of a headache from the 303 party last night, so I'll try to keep it brief and to the point. We're talking about two proxies and they're very different proxies, so don't get confused about the two different proxies. The first is called Suru, which I'm going to speak about, which is a web application proxy. It's a kind of man-in-the-middle, the same as things like the @stake WebProxy, WebScarab, those kinds of things. And then Haroon next to me is going to speak about LR, which is a generic TCP proxy. So they're totally different things and we're totally different people. Okay, so why do we need yet another proxy? It seems like we have lots of proxies: there's Burp proxy, there's Paros, there's the @stake web proxies, all of those things. Why would we want to have yet another proxy? Well, one of the things that we found was that none of those proxies actually does nice discovery of directories and files like we do it in Wikto with the BackEnd miner. I don't know how many of the people in this room are using Wikto. Can I see hands? Okay, so you know the tab there that does the BackEnd mining, that looks for directories, looks for files, different extensions, different directories. We wanted something to do that a little bit more intelligently.
We also wanted something that could fuzz variables a little bit more intelligently, but that also wouldn't try to be too smart about it, because you get lots of these tools out there that are trying to be super smart about how they fuzz parameters within a web application, and that smartness that they put in there is really their downfall. We also wanted to be fairly generic so that you can use it for just about anything. So, after looking at how the guys use web application proxies for a long, long time, we found that there's really no one-button kind of solution for web applications. You can't really take this one button, run it, and it finds everything about the application, logs in automatically, bypasses all of the forms, all of that kind of thing. It's not going to happen, right? But what we've seen is that people that are good at web application assessments really want to have full control over exactly what they send to the application, and then they inspect what they get back on the page, and then they make a change and they send it again. But a lot of the stuff that they do, they don't automate; they do it over and over and over. There's just enough variation in the tests that they do that you can't really automate them properly. So they want to have control over the application, and want to have control over exactly what they send, but they kind of want some power tools that can help them with a lot of the stuff that they do repetitively. And this is where this proxy comes into play. I've also seen that the lines between where the application starts and where the web server ends are kind of blurring. There's a lot of components and extensions and stuff that's really sitting between the application level on the one side and the network level, the network application, on the other side. So this is what we set out to do: to build a proxy that can do these things. And it didn't happen just in one day.
So I'm going to give you a quick brief history and you can kind of figure out how we got to where we are now. It really started off when we did work on Wikto, and for Wikto what we wanted was something that wouldn't be fooled by friendly 404s. So we started looking at the content of what a page returns and matching the content between a test page and the real page. And that's really where the cleverness of Wikto sits: it sits in the content comparison algorithm. Sadly, some people still don't know how to use that option. Dude, you can't do that, that's mine. Okay, so then we created Crowbar in early 2005, and it's also sad, but most people don't know how to use it or how it works, and it's really one of the most powerful applications that we built. And we extended the thinking in Crowbar: we said we want something that is really a generic brute forcer. So we would send a test request, and then we'd send the real request, and we would see how these two match up. You'll see later on how it fits together. So how does it work? And hopefully this is the last time I'm going to talk about this. In Wikto, if we want to test for a file, let's say login.pl that sits in the script directory, we would first test for something in the script directory that we know does not exist. Like moo, moo, moo, moo, moo, right? And that's the base request that builds the base response. In Crowbar, we do the same thing. If we want to test for a username and a password, we would test for something that we know would never be correct, like moo and blah, and then we would do the actual test that we want to perform, and compare the content between those two requests. If the content is the same, then of course the file doesn't exist or the login attempt was unsuccessful. But if they differ, then we know we got a different page, which is interesting.
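The base-response trick described here can be sketched in a few lines of Python. This is not Crowbar's actual code: `difflib.SequenceMatcher` stands in for the real content comparison algorithm, and the function name and threshold are my assumptions.

```python
import difflib


def looks_interesting(base_body, test_body, threshold=0.9):
    """Crowbar-style check (sketch): base_body is the response to a
    request we know must fail (the 'moo and blah' login); test_body is
    the response to the real attempt. If they differ enough from the
    base, the test is worth a human look."""
    ratio = difflib.SequenceMatcher(None, base_body, test_body).ratio()
    return ratio < threshold  # sufficiently different from the base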
So how does the content comparison algorithm really work? Sorry, I've got a question there. Can I take them at the end? Is that okay? I promise I won't forget you, sir. Okay, how does the content comparison work? Well, let's see, step one. What we do is we crop the header if we can. If there's nothing in the body, we use the header, because then that's all we have left. We split the strings. Let's say we're comparing string A and string B: we split string A and string B on new lines, on tag brackets and on spaces, and that results in collection A and collection B. Then we count all the blank items, both in collection A and in collection B. And we do a little loopy thing that says: for every item in collection A, loop through all of the items in collection B, and if they are equal, increment the counter, and if you found a match, break out of the loop and look for the next thing that you want to match within those strings. And then what we return is two times the counter, divided by the number of elements in collection A plus the number of elements in collection B, minus the blanks. Now, this is what it looks like. So let's say I'm comparing two strings. The first string is "I'm testing this dudel sakduk", and the second string is "I'm testing this kaas krallikis". Then collection A is I'm, testing, this, dudel, sakduk, and collection B is I'm, testing, this, kaas, krallikis, because now we split on all those different things, right? And there are four words matching within the two collections, the blank count is zero, the number of items in collection A is five, and the number of items in collection B is five. So our return is an 80% match, 0.8. If we have "I was testing" and "I am testing them things", then our return is two times the number of words that match, which is I and testing, and we get a 50% match, because we've got eight words in total.
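As a sanity check, the algorithm as described can be written out in a few lines of Python. This is a reconstruction from the talk, not Wikto's actual source; the exact split characters and the function name are my assumptions.

```python
import re


def similarity(a, b):
    """Content comparison as described: split both strings on
    newlines, tag brackets and spaces, then count cross-collection
    matches and normalise by the non-blank item counts."""
    col_a = re.split(r"[\n<> ]", a)
    col_b = re.split(r"[\n<> ]", b)
    blanks = col_a.count("") + col_b.count("")  # blank items from the split
    counter = 0
    for item_a in col_a:
        if not item_a:
            continue
        for item_b in col_b:
            if item_a == item_b:
                counter += 1
                break  # found a match, move on to the next item in A
    return 2.0 * counter / (len(col_a) + len(col_b) - blanks)
```

The second example from the talk checks out: `similarity("I was testing", "I am testing them things")` comes out at 0.5.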
It's nothing like super fuzzy neural networks. It's just kind of easy. Okay, so with this we have the ability in Crowbar to set a fuzzy logic trigger level, which says: if the compare is below this particular level, or outside of that level, then put that in your results. Okay, you'll see how it works just now. In Crowbar we also did some content extraction. So in this example, for instance, we're mining the numbers that you see circled there, if you can see them — I know it's hard at the back, you probably don't see a thing, right? Okay, you'll see here what we do is we basically put names into Google, and our start token and end token are "of about" and the bold closing tag, which means we're looking at what Google returns — how many hits Google returns on different names. And you'll see here, Peter has a hit count of 990 million, while someone like, let's see, Audar has a count of 124 million. And you might go, I can write this in Perl in like two lines of code, but you'll see later why this is interesting. Now, why does Wikto suck? And it actually does suck, right? One of the features in Wikto that a lot of people use is the BackEnd miner thing. But what if the entire site sits in /corp, you know? Then we're not going to find any directories in /corp, right? So one of the thoughts behind the mirroring option in Wikto was that you mirror the site first, you get all the directories that exist on the site, you load them back into the BackEnd miner, and then you search within those directories for other interesting directories, correct? But what if the site has a form-based login? So there's a username and password field and the whole site sits behind that — then what? Well, that's why Wikto sucks, because it can't know that; it doesn't know what to do. It doesn't do any fancy stuff like looking at directories within directories if the directory name isn't in the list.
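The start-token/end-token mining really is the two-lines-of-Perl idea. Here it is in Python with hypothetical names — a sketch of the technique, not Crowbar's real extraction code.

```python
def extract_between(body, start_token, end_token):
    """Pull every substring sitting between start_token and end_token
    out of a response body, in order of appearance."""
    results, pos = [], 0
    while True:
        start = body.find(start_token, pos)
        if start == -1:
            break
        start += len(start_token)
        end = body.find(end_token, start)
        if end == -1:
            break
        results.append(body[start:end].strip())
        pos = end + len(end_token)
    return results
```

Against a Google results page you would call it with `"of about"` and `"</b>"` as the tokens, exactly as in the slide.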
And so at the end of the day, we say — I say — Wikto is a blind chicken, pecking away at dirt. Yes, okay. We also have a problem with this: we can't see when, let's say, a file has been served. We see that a file like login.asp is sitting somewhere. Wouldn't it be nice if, in that directory, we also looked for something like login.zip and login.bak and login.asp~? Well, it wouldn't be login.asp~, you know. You get the idea. So we start looking for other extensions. That's just one of the things that we can do very easily if we sit in line with the request, in a proxy, where we can see what request has been sent. We can also do fuzzing with Suru, okay. Is it something I said? Sorry. If we have a content comparison algorithm, then what we can do is send junk into different parameters, right. Send, let's say, a thousand different strings — a thousand different strings in one parameter — see how it reacts, and look at the content compared to a base response. Right, and group responses together. So that we can say: for the thousand requests that I've made to this application, I only get three pages as a result. So I'm going to show you how that works. I'm going to try to keep the mic here as well. So here we've got this little site. I can log in there, test and test. And you can see that it failed. And if we go to Suru — this is why I said you should sit close, because you know, that's a small font — you can see here there's a request. It's a POST, it resulted in a 200, okay. It's got three parameters. I double-click on it, and it basically shows me this request editor. And I can say, okay, take the username parameter and fuzz it with all of the strings that I have. Okay, that's just a file. The file simply sits — see what's happening, it's getting the base response — the file simply sits in here in a config.
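The extension-hunting idea from a moment ago — see login.asp served, also probe its likely leftovers in the same directory — is easy to sketch. The extension list below is illustrative, not Suru's actual list, and the function name is mine.

```python
import posixpath


def sibling_candidates(url_path, extensions=(".zip", ".bak", ".old")):
    """Given a path we've seen served (e.g. /app/login.asp), generate
    sibling files worth probing in the same directory."""
    directory, filename = posixpath.split(url_path)
    stem, _ = posixpath.splitext(filename)
    candidates = [posixpath.join(directory, stem + ext) for ext in extensions]
    candidates.append(posixpath.join(directory, filename + "~"))  # editor backup
    return candidates
```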
So I can actually go to that file and I can edit the file here, right on the fly. It's just a text file with all of the strings that I want to test against this thing. Okay, I just want to see why I'm not getting to it right now. Oh, there we go. And there on the right-hand side you will see that it starts to actually make all of those requests to the server. I can then go and say, well, group the responses with a tolerance of 0.02. That 0.02 is just an arbitrary number. Remember, a thing that matches completely yields a one; something that doesn't match at all yields a zero. Okay, so with a tolerance of 0.02 between different requests, group those responses. So I'm going to click on auto group, and even at the back you should be able to see that we get three different responses. There are three different unique responses. Let's look at the responses that that generated. I can browse the request immediately there for all of those. And that is basically a CGI error that I get. Standard CGI error, right? That's one group of requests that I've made that resulted in the CGI error. This one over here, if I click on it, you'll see it gets to "login failed, please try again", which is what happens when I put anything in there that doesn't work. Over here you'll see there's one request that got a totally different response. And that is standard SQL injection: single quote, or, one equals one, dash dash (' OR 1=1--). If I browse that response, you can see that that gives me access to the application itself. Okay, so clearly I can test with any strings that I want, and I can basically say: show me how the application behaves in different ways. This application can behave in three different ways whenever I send junk into the username field. On the recon side of things, if I set my target to be intranet.sensepost.net, I do directory mining, I put on directory smart scanning and file smart scanning, I clear everything and I manually reload.
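The auto-group step can be sketched like this. `difflib`'s ratio stands in for Suru's own content comparison algorithm, and the 0.02 tolerance is the one from the demo; the rest of the names are mine.

```python
import difflib


def auto_group(responses, tolerance=0.02):
    """Group response bodies: a response joins the first group whose
    representative (first member) it matches to within `tolerance`,
    otherwise it starts a new group."""
    groups = []  # each entry: [representative, more members...]
    for body in responses:
        for group in groups:
            score = difflib.SequenceMatcher(None, group[0], body).ratio()
            if 1.0 - score <= tolerance:
                group.append(body)
                break
        else:
            groups.append([body])
    return groups
```

A thousand fuzzed requests collapsing into three groups — CGI error, login failed, SQL injection success — means three code paths worth looking at.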
You'll see that it's now got 118 — 117 — jobs that it has to perform. And I can set the speed. These are recon jobs that it's performing. I can set the speed at which this thing is running on this side over here, because maybe I don't want this reconnaissance process, while it's trying to find these files, to interfere with my browsing experience, right? Because if you're flatlining this thing and it's starting to look at files, then you can fully understand that your link is going to get saturated. Well, in South Africa our links get saturated by things like this, okay? So I can up this a little bit, make it a little bit faster. Over here you can see the structure of the site as far as we've seen it. Our VMware is not responding very well at this stage. There you go. We also have, on this tab over here, miscellaneous tools: so you've got a one-click convert from your user input to MD5, to hex, to SHA1, Base64 encode, Base64 decode, all of those kinds of things. You've got a search and replace on the outgoing, search and replace on the incoming. I just want to see what's happening over here. So I'm going to leave this for a bit. We'll get back to it as soon as it's done. Okay, is this where I was? I hope so. Okay. So there you can see actually what we've done with the request editor. We see the request being broken up into its different parts. We can pause the request, nicely break it up into its different parts, and let the user select exactly what part he wants to fuzz. You can also fuzz anything within the HTTP request. So in this case here, what I've circled there is the user agent, right? So I can — not brute force — I can fuzz the user agent, and see how the application responds to the different user agents that I'm sending in. Maybe if it's a mobile application, it would look different when my user agent is set to that of a mobile phone.
We can also extract anything from the reply that comes back, right? In this case here, we're always extracting the title by default. Another reason why Suru is nice: we have a thing called automatic relationship discovery. So, well, let me first tell you what it is. It basically looks at every parameter that you send into the system at any time, and it then, in real time, works out the SHA1 hash, the MD5 hash, the Base64 encode and Base64 decode of every parameter, and puts that into a structure. And when you click on the auto relationship discovery button, it basically goes through all of the parameters that you've sent, plus all of the derivatives of those parameters, and sees if there's any kind of match. Why would you want to do that, okay? For instance, if we log into an application and, let's say, as a cookie we get sent back the MD5 of, let's say, the username — which is really a bad idea, right? — then this tool, Suru, will tell us: listen, those two things match. The MD5 of the username is actually the cookie. Now imagine you're testing an application with 300, 400 pages. That's useful, because you're not going to keep track of all the parameters and everything that goes through, right? We also have search and replace on both incoming and outgoing streams, with the ability to change binary data. So what you see there is the picture of Google, the little Google picture thingy, where we've replaced the binary hex 03 with hex 30, and that's going to screw around with the picture, right? Because those bytes are contained in the binary stream that's coming in to you. Other reasons why Suru is really nice: it's really usable. We've really tried to make it super usable. So it uses the IE browser object to replay requests, so you don't have issues with authentication — digest authentication, NTLM authentication, those kinds of things. We can instantly replay requests while we're busy with them.
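The relationship-discovery idea can be sketched in a few lines. Suru is described as also tracking Base64 decodes; this hypothetical version leaves that out for brevity, and all the names are mine.

```python
import base64
import hashlib


def derivatives(value):
    """Derived forms computed per parameter value."""
    raw = value.encode()
    return {
        "md5": hashlib.md5(raw).hexdigest(),
        "sha1": hashlib.sha1(raw).hexdigest(),
        "base64": base64.b64encode(raw).decode(),
    }


def find_relationships(params):
    """params: {name: value}. Report (name_b, kind, name_a) whenever
    parameter b's value equals the <kind> derivative of parameter a."""
    hits = []
    for name_a, value_a in params.items():
        for kind, derived in derivatives(value_a).items():
            for name_b, value_b in params.items():
                if name_a != name_b and value_b == derived:
                    hits.append((name_b, kind, name_a))
    return hits
```

The cookie-is-MD5-of-the-username case from the talk pops straight out of a check like this.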
And we can keep track of requests that we've played back, because it marks the request and says: this is a request that you edited and played back to the system. Something that was quite interesting to handle was XML and multi-part POSTs. Multi-part POSTs are nasty, nasty little things. It handles multi-part POSTs very nicely, and handles XML very nicely. Where do we see XML? We see it with web services, right? So if you have a thick application that is using a web service over SOAP, then you're going to see that XML coming into Suru as well. So it renders it nicely, you can edit the stuff, that kind of thing. You can save and load sessions. So you can work on something and say, well, I want to save this and come back to it later. You can save it, load it — it's nice. You can instantly fuzz any variable, as you've seen. You can fuzz any part of the HTTP header. Those fuzz strings that you have, you can keep in a file. So you're not limited to the fuzz strings that someone decided to ship with the application. You just add on top of them whatever you want, and it will always pull it out and tell you: the application responded differently when you set this particular parameter in there. You have instant access to the raw HTTP request, which means you can go to that main window, you can add anything in the middle window right there — a POST, a GET, it doesn't matter what it is. You can send it to the browser, or you can browse it, or you can send a raw request, and it will automatically calculate the Content-Length for you, so you don't have to do that. That's useful in certain cases, right? We've got one-click directory mining, all of that kind of thing. So you've seen all of those kinds of things. Let's go back and see what's happening. It's not... it doesn't look very promising. Let's switch back. We're getting the demo gremlins.
I did this thing outside five minutes ago, but it's the Zen energy in the room that's just wrong. Okay, nice. It's a very nice new man-in-the-middle proxy. It allows the analyst freedom of thought, so you can still control exactly what you want, but it automates the parts of the application assessment that are really mundane and boring. It's really a good combination of the best features of Wikto, Crowbar et al, and a really useful proxy. If you're new to web application assessment, then I suggest you start off with something that's a little bit less complex. As my colleague Charl van der Walt told me when he saw this for the first time, he said this looks like the control console of a Boeing. So it's not that friendly for first-time users, but as you get used to the other proxies, you will find that they have certain limitations, and you'll want to use something like this to overcome those limitations. When you get to that stage, you might want to switch over. It was written in-house by ourselves, and we do hundreds and hundreds of web application tests every year. So it was written by people that are actually doing web application testing, and therefore the interface and the way that it works is geared towards people that actually have to do the work, and not by people that can merely program. That's probably why you don't see the results on the other page. The URL at the bottom there, if you want to write it down: www.sensepost.com/research/suru. And this is the stage where I hand you over to my beautiful assistant and colleague Haroon. And I'm going to switch over here. Let's see if it works. Okay, thanks for the sound effects. My voice is a little bit scratchy for no good reason. I didn't go big last night, unlike some of the guys I spoke to earlier. So you're just going to have to forgive me.
Now for something completely different: what we're going to talk about now is a tool called LR — largely because we never got around to naming it, because it's not that packaged a tool. So LR is a generic TCP proxy, where LR at the moment stands for Last Resort: you've used all the tools you can and nothing seems to be doing it for you, so you fall back to LR. A quick background. Suru is neat and well packaged and has a shiny C# interface. It's even got something like 62 pages of documentation, I believe. And videos — Roelof just corrected the number, but I'm going to skip it. But Suru is neat and well packaged, and LR is not, okay? LR is a collection of other people's cool tools tied together with some duct tape, which in this case happens to be Python. I'll say at the outset, almost everything that you can do with LR, you can get by using a combination of other tools. The truth of it is that LR just makes it convenient for you, and we'll go into why I did it anyway. Okay, so what does the whole thing mean? The first thing it means is that clearly I have no future in sales or marketing, okay? The second thing, really quickly: someday LR will grow up and become a neat, shiny, Suru-like thing for generic TCP connections. Today what it is is a bunch of really simple scripts that allow you to modify TCP packets, kind of in the same way that Suru allows you to alter HTTP, okay? There's very little or no cleverness in LR that belongs to me. It's built on top of two much cleverer projects, one of which is Philippe Biondi's Scapy and the other is Neale Pickett's IPQ. We wrote it in Python just because all the cool kids were doing it. It was peer pressure, okay? So the question of why we actually did it: one of the reasons that I don't have up on the board there is just because we had this idea and wanted to play with it — we wanted to see if it would work.
But one of the other reasons was because there are lots of limitations in current tools that get annoying when you just want to get something sorted. We also wanted to take a look at how simple something that, to date, has been pretty complex can actually be, okay? And we end up with a pretty simple framework for you to be able to do this, and hopefully you'll see it, okay? And pretty much it was just to go into the possibilities — some of the stuff that you can get right once you're in a position to mangle TCP packets before sending them on. Okay, so one of the first questions you get is someone saying: what about ITR, what about ngrep, what about insert-tool-that-they've-been-using-till-now. Okay, lots of those tools are okay. All of them that I've seen require you to point your client towards the proxy. So you make ITR your server, you connect to it, it connects to the real server, okay? That's pretty okay, but in some instances this becomes a problem — for example, when you're doing some malware analysis. What you really want to do is sit in line, watch the request go through, and say: I want to mangle this now. I see this DNS request going out, I want to give it an arbitrary response; or I see this request for a file, I want to give it another file. Okay, lots of them are closed source, and lots of them require some hardcore packet-fu, okay? If you take a look at anything written in libnet or libdnet — those are great products, great tool sets — you're going to end up using several hundred lines before you send out a few packets. Okay, so we were going for really simple, really quick and easy. Ultimately, we wanted you to be able to modify packets and payloads as they go through, and to be able to do that within complex sequences, okay? So maybe you don't want to change every packet; maybe you want to change the 10th packet if the day of the week starts with W, okay?
So basically we wanted you to sit in a comfortable scripting environment and say: I want to change X, I want to change Y, and go forward from there, okay? And we wanted to do it really quickly, because we've got more important things to do, like retain our title as Office Minesweeper Champion. Okay, so how it currently works is really simple. At the moment it sits on top of IPQ or IP divert. Okay, so all that happens there is we've got a firewall rule that grabs the packet as it goes onto the network and moves it to a user-space process. Okay, once it's in user space, we then use Python, and Scapy specifically, to mangle the packet. Okay, so Neale Pickett's IPQ is allowing us to grab this packet off the IP queue, and Scapy is then allowing us to say what that packet is and what we can do with it. Okay, really quickly, if I'm going to do this, it's only fair that we pay homage to Scapy. Okay, Scapy is available at secdev.org, by Philippe Biondi. It's by far the easiest, most convenient way to generate packets at the moment. This year it was number 28 on Fyodor's top 100 security tools, which means that most people have yet to discover how cool Scapy is. Okay, if you haven't used Scapy, I'd advise you: go play with it, it's really worth the effort. These two quick examples, stolen from Philippe's slides that he gave earlier this year, show that creating a packet is simply a case of saying: I want IP destination this, ID this, TTL this — make me a packet and send it. Okay, in his slides he says he wants the equivalent of you ordering fast food. So you should be able to say: I want a burger, extra onions, hold the cheese, make it now. Okay, and what you see down there is an example of the LAND attack, okay, famous from the old IRC wars, okay. Literally what used to be a few lines of code is now a one-line Scapy script.
Okay, so quickly: LR is going to grab the packet through IPQ, then decode the packet using Scapy, mangle the packet using Scapy, and then put it back on the wire using IPQ again. Okay, so you understand why I say the heavy lifting is done by other people. Okay, at this point we were supposed to give you a live example of this on the internet, which we can't because we have no internet, but we've got several other examples that work locally on VMs. Okay, so for this first example, which you're just going to have to trust me on, we do a simple Google search, replacing a search for foo with a search for bar, okay. At this point you should be saying that's a pretty easy packet modification to make, because we haven't altered the payload size. Okay, as long as you're not altering the payload size of the packet, modifications are a pretty straightforward thing to do. Okay, and then you get bitten as soon as you change a TCP packet's payload size: new checksums and new sequence numbers. Okay, one of the things you bump into here is a common misconception about TCP sequence numbers. I'm not sure where this started, but there are actually texts on the internet — you even see it wrong in some books — which describe the TCP flow by saying: the sender sets his initial sequence number to something and sends a packet, which in this case is five A's. And what you find in incorrect texts is that the receiver then adds one to the sequence number and sends the packet back. Okay, now clearly this is incorrect. Your receiver doesn't just add one to the sequence number. The receiver adds the number of bytes of data that he received in the payload to the sequence number, and sends it back to the sender. Okay, so he tells the sender: this is how many bytes of data I've received from you since your sequence number. Okay, now the reason this causes a problem for us when we're making arbitrary changes is because you're creating a desync between the client and the server.
Okay, and we'll have to find a way around that. It puts us in the position to do a classic man-in-the-middle attack. Okay, so all we do is sit in the middle of this TCP stream and not let sequence numbers or ACK numbers go through as they want to. Okay, so the client sends through his sequence number. Okay, so if you take a look at this example — and I have to tell you, this photograph here is completely arbitrary and has nothing to do with the presentation. I asked someone at work to do this Visio for me and he said he would do it if I gave him mad props. Okay, so I'm not sure if this counts, but that's Charl getting mad props because of his Visio. Okay, so at this point, we have a client that sent the request for foo. Okay, we grab this with our proxy and we change the search to foofoofoo. Okay, the problem is we've now added six bytes to the data that the client sent. Okay, what's going to happen is the server's going to ACK 9 and the client is waiting for an ACK 3. Okay, so this desync is going to be a problem, because you're going to end up in a loop where the client keeps retransmitting, saying "what the hell are you talking about?", and the server keeps saying "I already told you, I saw that". Okay, so what LR does in the middle is just this translation. It says: we added six, so as the sequence number comes past we're going to subtract six, and it keeps both sides happy by playing man in the middle. Okay, and once we're in line, it puts us in a position to alter any data that's going to the client or to the server, and it becomes interesting for the new wave that everyone seems to be focusing on these days, which is client-side fuzzing. Okay, it becomes interesting for just about any lame client-side security bugs, and if we're going to talk about lame client-side security bugs, we probably have to say something about the VNC 4.1 authentication bypass.
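The translation LR does in the middle boils down to bookkeeping a byte delta per direction. A minimal sketch, with plain dicts standing in for real packets (real code also has to redo the TCP checksum, and all names here are mine):

```python
def fix_numbers(packet, delta):
    """After the proxy grows the client's payload by `delta` bytes:
    later client->server packets get their sequence numbers advanced
    by delta (the server has seen more bytes than the client sent),
    and server->client ACKs are pulled back by delta so the client
    sees the ACK it expects."""
    fixed = dict(packet)
    if fixed["direction"] == "c2s":
        fixed["seq"] += delta
    else:  # server -> client
        fixed["ack"] -= delta
    return fixed
```

In the foo-to-foofoofoo example, delta is 6: the server's ACK 9 reaches the client as the ACK 3 it is waiting for.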
Okay, so sometime earlier this year, someone writing his own VNC client figured out that the VNC protocol actually had this bug in it. The client and server decide that they're both speaking Remote Framebuffer version whatever-whatever, and then the server sends the client an array saying: I'll happily talk to you with authentication type one, which is password, or two, which is certificate, or whatever. And the client then sends back: that's cool, I'll talk to you with authentication type one, which is password. Okay, the problem, it turned out, is that the server never checked that the option the client chose was one of the options that it actually offered. So the server said "I'd like to speak to you with two, three or four", and the client said "that's cool, let's do zero", and the server said "cool, let's do it". Okay, where the client was actually choosing type zero, which meant no password, okay. So it was easy enough to build a tool for this. All you have to do is alter the VNC client to always offer up zero, or, if you wanted to, you could achieve this using, in our case, LR, okay. So hopefully I'll have good luck with the demos. This mic is clearly in my way, but if we go xvncviewer to the Win2K box, okay — what you should see is that the WinVNC box comes up and asks for a password. Okay, at which point we put in our password and everything is happy. Okay, at this point what I'm going to do is pass the packets through our proxy, and this time we're going to tell the proxy that actually what we want to do is modify that authentication type to always be zero. Okay, so running the exact same client, what you should see is authentication completely bypassed in this case, because the proxy got in the way and said: hey, let's not do this authentication stuff, let's just skip it. Okay, at this point you're passing through a proxy that's written in Python, so it's going to be a little slow, but it should work for just about anything that you want to do.
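The in-line version of the bypass is a one-byte rewrite. A hedged sketch — it assumes we've already isolated the client's single-byte security-type reply, and the zero value is the one the talk describes:

```python
def force_no_auth(payload):
    """If this payload is the VNC client's one-byte reply choosing a
    security type, rewrite it to 0 ('no password' per the talk).
    Anything else passes through untouched."""
    if len(payload) == 1:
        return b"\x00"
    return payload
```

Because the payload length is unchanged, no sequence-number fix-up is needed for this one.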
Okay, I'm going to go back to my slides if I can find them. Once we're at this stage, we're obviously in a position to do header modification on the TCP packets, and at this point that becomes totally trivial. So we were looking for a good example to show you guys, and the obvious one that came up was a problem FreeBSD had a little while back: if you set the ECE flag in the TCP header, FreeBSD firewalls thought you were part of an established session and just let you through. What we've got here is a FreeBSD version that was vulnerable to this, and for those of you with super sharp vision, what you should be able to see is that this firewall has two rules. The first rule says allow TCP if it's part of an established session, and the next rule says deny everything else. If you're familiar with this type of firewalling, you should know that nothing should get through. I can't send any SYN packets through, which should leave me sitting quite badly. So at this point, if I try to telnet to the FreeBSD box on port 22, as expected I get back nothing. And this should hold for just about all the services on this box; I shouldn't be able to surf to it or anything like that, it should just spin its wheels. At this point, all we're going to do is start sending packets through us again, and this time we're going to tell our proxy to add the ECE flag to everything. What you should see is that the same telnet this time happily goes through, and in fact you should find that just about any connection to that box happily goes through to the server. So again, all the proxy is doing is saying: you asked me to modify the header, so grab the packet, here you go, and the stuff goes through. The bug in FreeBSD is pretty old, but you guys know there'll be other bugs.
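Setting the ECE flag really is a one-bit change. The sketch below shows it with raw byte manipulation rather than LR itself, so the mechanics are visible; the function name is illustrative, and a real mangler would also have to recompute the TCP checksum afterwards.

```python
# In the TCP header, byte 13 holds the flag bits:
# CWR ECE URG ACK PSH RST SYN FIN  (most to least significant).
ECE = 0x40  # ECN-Echo bit

def set_ece(tcp_header: bytes) -> bytes:
    """Return a copy of a raw TCP header with the ECE flag forced on.

    Note: this does not fix the TCP checksum; a real on-the-wire mangler
    must recalculate it or the receiving stack will drop the packet.
    """
    flags = tcp_header[13] | ECE
    return tcp_header[:13] + bytes([flags]) + tcp_header[14:]
```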
Okay, the interesting thing about this is that we released an exploit for this way back to let you do the same thing, and originally, even though it borrowed from libnet, the amount of code you needed was around 270 lines of C to achieve this. Using LR today, you can do it in 20 lines of Python, and about four or six of those lines are actually comments. So pretty much you should be able to say: grab this packet, do this, pass it on, and you'll be a happy camper. Other uses for it: we found you can use it nicely if you're doing some sort of malware analysis. It's really useful, and at this point you get the joy of hooking up with a clever project like Scapy. I don't actually have to know, when a packet's going through, what that packet is or what it contains. I simply ask Scapy: hey, show me its IP field; you look like UDP, please show me your UDP request, or your DNS request; show me your DNS request ID, and Scapy happily says: well, this is what it is. And if you've ever tried to modify a DNS request as it went by on the wire, you'd know it's not just a text record: there's an ID, there's stuff that identifies it. In this case, I skip all that complexity and leave the cleverness to the guy that wrote Scapy. I say: make your DNS result resolve to www.foo.com; Scapy does its Scapy magic and gives me a packet, and I say: send this packet out. Now, what this needs, and for those of you who caught the Matasano talk on PDB, anyone doing this stuff bumps into the same problem: I've effectively fixed the desync between client and server by sitting in the middle. But what you don't fix is the fact that you type slower than TCP expects you to. While you're sitting mangling your packet, the client's going: I sent this and didn't get a response, resend. I sent this and didn't get a response, resend.
Okay, and what you really want to avoid, more than the noise, is the client and the server breaking off communication because of a timeout. Nobody's really solved this. I figure we know how to; we just didn't have time before getting here, so we'll try it before putting the actual TGZ up. Basically, what we're looking for is a way to tell both client and server: you're now being debugged, please stop what you're doing. The problem is we're doing it by sitting on the wire, so we don't have direct control of the client or the server. What I figure we can do is steal an idea that tarpits use. If you've ever used a tarpit for anything, you'll know that if you set your TCP window size to zero and talk to a client or a server, you're basically telling that client or server: I'm too busy right now, please stop talking until I tell you you can start again. My plan is that we have LR sitting in the center, and as soon as you want to mangle the packet, it spoofs a TCP window of zero to both the client and the server, saying: can you please just give me a second? Both sides stop, and when you're ready, you send them both a window update saying: okay, let's start talking again. I can't theoretically see any reason why it shouldn't work, and so far nobody's come up and told me I'm a total idiot. I lie, people have told me I'm a total idiot; they just haven't told me I'm a total idiot for this. So we'll try it as soon as we get back, and it should be in the final version of the TGZ that we put up. With this, what you should have is a really trivial, easy way to mangle packets. Remember, one of the core objectives with this was just for guys to see how simple this stuff actually is to do. Just about all of those things I used for the FreeBSD thing and the VNC thing are about 30 lines of Python, where at least half of it is templatable.
You're saying: here's the packet coming in; if the packet looks like this, change it like that. And literally, for those of you who've seen any Python script, you know that it's pretty readable, unlike most of my old Perl. So you should be in a position to mangle this stuff pretty easily. You should be in a position to say: well, hold on, if we can do this, we can do the following ten things. Which is pretty much what we want you guys to do: go back and say, well, you can use it for the following, and you can do much cooler stuff with it. The TGZ for this stuff will be available at www.sensepost.com/research as soon as we get home and actually upload it. So yeah, download it, play with it, give us feedback. And other than that, just have fun. With that, we go back to the good-looking one, who needs no altering. So, in conclusion, then, to our talk. When we train people, we say: listen, it's not about the tools, it's about the thinking. If hacking is art, then tools are really the brushes. But a monkey with a brush is still just a monkey, right? So the bottom line with these things is, we preach to everyone that it's not about the tools, it's not about your exploits, it's about the way that you think about things. But having nice tools really makes it easier to make cool stuff. So again, we don't want to sit here and say: hey, look at this tool, look at this tool, look at that tool. But it is quite cool, I think. The stuff is available again at that URL, www.sensepost.com; you can have a look there, and go to the research area as well if you want to. At this stage, I want to take some questions; there was a sir over there that I saw, and I promise I won't forget. If there are any questions at this stage: I think we've got five more minutes, or two more minutes. Could you guys use the mic, because then I don't have to repeat the question. Two quick questions.
What's the support for HTTPS on those tools, for SSL? And what's the support for internal proxies? Okay, good questions. The first was SSL: yes, there's full support for SSL. We do it as all the other proxies do, manually with the certificate, same thing. In terms of internal proxies, you mean chaining proxies together? I haven't done that. It's doable, totally doable, but I haven't done it, so it's just implementation. So I'm sorry, at the moment you can't chain it. Is the source going to be obfuscated? No. Well, it depends on which one; the LR source code is just source code, it's just Python. I just have a question about how you handle packet loss, because if you're changing the payload size and then you change the sequence number, adding and subtracting these deltas, and it says: all right, we lost three packets, it's going to continuously resend those first three, not the whole... Absolutely. At this point, the easy answer is that it totally screws you over. As do fragmented packets. You can get away from both in the short term by having your firewall handle it for you, so do reassembly at the firewall level. I wouldn't advise you to use LR as a generic proxy while you're just idly surfing around; you want to use it while you're doing testing on an application. It's not going to handle heavy loads: you're simulating an IP stack in Python. So at this point we don't handle it, and to be honest, I don't think I'd build it in, because I think it would be more trouble than it's worth. I have about three questions regarding Suru. Your import-export format for saving sessions: what kind of format is it, XML, CSV? No, it's a Suru format. It's totally text, so you can modify it and read it if you want to, but it's proprietary. It's not XML, it's not CSV. You can probably convert it if you want to, but at this stage it's not any of those.
Okay, and dealing with your auto-relationship discovery, where you look at the parameters and different encoding methods: do you also look for clear text across other parameters? No, we actually don't. It's something that we thought about doing; at the moment we don't do it. So if I understand you correctly: if there's clear text in one parameter that is contained within some of the other parameters? Yeah, correct; like if you have a POST with a username or password, and then it's putting that username in the GET request. No, at the moment we don't look at the clear text. It should be trivial to add. Okay, and then the last question: the way the new Wikto back end that you put in Suru works is off of 200 versus 404, or mismatched responses. If you have 302s in there and they're popping over to the same website, does it look at anything else, like content length? At the moment, within the directory and file mining in Suru, we don't look at the status code at all anymore; we took it out completely. So length only? Sorry? Do you only look at length? We don't look at length; we compare the response against a base response. So it does the same testing we do in Wikto with the AI option switched on, and we decided to leave it like that, because the results we're getting seem to be a lot more reliable than looking at status codes. Sure, okay, thank you. Thanks. I was wondering how LR handles TTL values? The question is how LR handles TTL values. It handles them exactly as Scapy would, which means by default it leaves alone anything you don't specifically mangle. So if a packet comes through with a TTL of X, unless you choose to make the TTL Y, it'll act exactly like another hop on the way and pass it through just fine. So again, that's none of my cleverness.
Scapy handles this stuff really well: anything that you don't mangle in a packet, Scapy will try to intelligently fill in for you, and in this case that's exactly what it'll do, it'll just pass it through. Any more questions? Thanks for your attendance on this late day at Defcon. Thanks a lot, guys.