Thank you. My name is David Byrne. I'm a security consultant with Trustwave; we're a security and compliance consulting firm, and we also have a full offering of managed security services. I'm in the application pen testing group. I spend most of my days attacking websites, but our group also does other types of application pen tests, security code reviews, and training for developers on secure coding techniques. Before Trustwave, I was the security architect at Dish Network, which is how I know Eric.

Hi, my name is Eric Duprey. I am a member of the IT security team at Dish Network. A significant portion of my responsibilities at Dish Network involves web application security, including pen testing of internally developed applications. I'm also the co-chapter leader of the Denver chapter of the Open Web Application Security Project, or OWASP.

Alright, so I'm guessing most of you know that Grendel is an open source web application security scanner, which is why you're here. I've been writing it over the last year. Earlier this year Eric joined and started helping with testing, porting it to other platforms, and he made a really nice live boot CD that is the basis for our demonstration later in the presentation.

Commercial app scanners have been available for quite a while. They're pretty well established in the market, with brands that I'm sure most of you are familiar with. But the selection of open source automated tools has been a lot more limited, which was my main motivation in writing Grendel.

There are several goals that I had from the beginning. I wanted it to be multi-platform, so I wrote the tool in Java, which obviously is available on just about every platform that's out there. But I wanted to avoid the slow, clunky look and feel that you see with a lot of multi-platform GUIs, so I used the Standard Widget Toolkit (SWT), which allows a single Java code base to make native GUI calls across multiple platforms. So, for example, this is what Grendel looks like on Windows Vista, on XP, on a Macintosh, and in a little bit you'll see what it looks like on Linux.

It's kind of a pet peeve of mine when I go to download a tool and then have to download a secondary library, and then maybe go to another website and download some other tool that it makes calls to. So I wanted to make sure that Grendel was very simple to install. It's just a single distribution, a single download, no install required, really. The only requirement is Java 5, which has been out for four years next month. I think it's safe to assume most of you have it; if for some bizarre reason you're still running Java 1.4, it's time to upgrade.

I also wanted a tool that would be useful to a wide audience, anyone ranging from an IT security manager, who might have a high-level understanding of app security but really doesn't need to understand how to test for specific vulnerabilities, to someone like myself, a professional pen tester who's fully capable of performing manual tests but can benefit from some of the automation. That's especially true in areas like application mapping and information leakage, which can be very time intensive to test for on a manual basis but are still very accurate to test for in an automated fashion.

Finally, Grendel is a vulnerability discovery tool, not an exploitation framework. There are some tools already out there that do a good job at exploiting specific classes of web app vulnerabilities, but to write a generalized platform would be an enormous undertaking; that's just not what Grendel is.

Before I talk much about the features of Grendel, I wanted to discuss quality a little bit.
When I first started working on Grendel, my dream was that it would be so beautifully perfect in how it discovered vulnerabilities that, instead of releasing it publicly, I was going to go out and get half a dozen jobs doing manual pen tests, but instead spend all day running automated scans and just raking in the money. Oddly, it really hasn't worked out that way yet.

I still think that Grendel is a good tool, but it's not the same level of quality as what you'd see in a commercial scanner. There are several reasons for this. The commercial tools have been available for a number of years, they have large development staffs, and I would imagine that they usually have multi-million dollar research and development budgets. Grendel, of course, has none of these things. Despite that, I still think it's a viable open source alternative.

This is something I really struggled with when I was first writing it, because I know that there are things, even in the current version, that I can improve on, and there are things that I want to add to it. That will happen in time. But I came to the conclusion that eventually I needed to release it, because the only way it's going to continue to improve is if it gets exposure to a wide variety of websites, both secure and insecure, and the only way that's going to happen is if it's released publicly. The same thing is true for commercial tools, too: the more they're used, the more exposure they get, the better the tool can become.

So there are a number of ways that people can help with Grendel, if you're interested. If someone wants to contribute code, that would be phenomenal; I'd be ecstatic. But realistically, I know that's a pretty significant investment of time that most people can't make. Simply providing feedback on things like bugs, false positives, false negatives, and ideas for new features is also greatly appreciated. Keep in mind that I don't get paid anything for writing Grendel, and I have a real job and a family that need to take priority, so I can't always respond quickly to emails, but I'll certainly try whenever possible.

Another thing: if you're using Grendel to scan a commercial software package, or to test an open source tool, pretty much if you're scanning any kind of application that's publicly available, and you find a vulnerability, let me know. I'm kind of interested in collecting vulnerabilities that have been discovered by the tool.

One of the advantages of making Grendel open source is that there are a lot of libraries that become available that, for a commercial product, would have to be either written from scratch or purchased as part of the toolkit. Just to mention a few by name that were particularly helpful: the Apache HttpComponents library is, in my opinion, the best Java HTTP library out there. Lobo is an all-Java web browser. Grendel doesn't use Lobo, but a partner project called Cobra, which is an HTML parsing engine and Document Object Model implementation. Grendel uses a heavily modified version of Cobra, which is useful for things like spidering and testing cross-site scripting, pretty much any time there's a need to model real-world browser behavior. Along the same lines, Mozilla Rhino is the all-Java version of the Firefox JavaScript engine; obviously that's used for testing cross-site scripting as well.

And then Nikto. I'm guessing most of you are already familiar with Nikto; if you're not, it's a tool that's been around for a while. It's based on an even older tool called Whisker, and it's basically designed to identify known web vulnerabilities.
So, for example: is the version of Apache out of date, or are you running a version of Outlook Web Access that's known to be vulnerable to cross-site scripting? It's a collection of almost 2,500 tests. Now, Nikto itself is open source, but the database of tests, even though it's plain text, is not. However, the creator of Nikto, Sullo, did give permission for it to be used in Grendel.

There are actually some advantages to using Grendel to run Nikto tests. For those of you that use Nikto frequently, you'll know what I mean when I say that it's prone to false positives. That's almost always because a web server responds with something other than a 404 message for a file-not-found condition; it could be with a 200 or a redirect. Nikto tries to handle that, but it's not perfect. Grendel's not perfect either, obviously, but its logical file-not-found tracking mechanisms are much more sophisticated, which significantly reduces the false positive rate.

There are a lot of features of Grendel that we're not going to have time to discuss today, and even if we did, it would be kind of boring. So I just want to hit on the main features that will hopefully whet your appetites and make you interested enough to go and download it, to give you an idea of the type of things it's capable of.

In addition to the automated testing modules, which I'll be discussing in just a minute, there's an internal web proxy, and that serves two purposes. One is that it allows a user to guide an automated scan, so that as the application is interacted with, or as the website is being browsed, Grendel will discover new components and start scanning them for vulnerabilities. The second purpose is to act as an intercepting testing proxy, like WebScarab or Burp or Paros. Eric will be demonstrating those features along with the request fuzzer and manual requests, and he'll also talk a little bit more about the automatic file-not-found profiles.

Upstream proxy servers are also supported. Proxy authentication isn't supported yet due to a bug in HttpComponents, but that will be fixed in the next version.

The number of HTTP requests or connections can be throttled via a few different settings. That's obviously pretty important whenever you're scanning a production environment, just to be sensitive to the needs of prod. Now, any kind of reasonably well-written application in a well-provisioned environment should be able to easily handle anything that a single instance of Grendel can throw at it. But speaking from experience, trust me when I say that if you're running a scan and anything happens on that web server, you're going to be blamed for it. So by throttling the number of requests and connections, hopefully you can divert blame to where it belongs, which is usually with the operations groups.

HTML form-based authentication is supported, which allows for a few different things. One is that the spider can start exploring areas of an application that are only available via authentication. Also, there is an authentication enforcement test module, and in the next version there will be an authorization enforcement test module.

There are a lot of ways that an automated scan can be tweaked and tuned in order to improve the performance or the accuracy of the scan. You don't really have to do that if you just want a quick default scan, but the more familiar you are with the targeted application, with Grendel, and with web application security in general, the more accurate and better results you're going to get.
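To illustrate the logical file-not-found idea mentioned above, here's a minimal sketch, not Grendel's actual code: a scanner can fingerprint a server's not-found behavior by requesting a random, guaranteed-nonexistent file name and remembering the response signature. The class and method names here are hypothetical.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.UUID;

// Hypothetical sketch: profile how a server answers requests for files that
// cannot exist, so later "200 OK" answers can still be recognized as logical
// file-not-found pages.
public class NotFoundProfiler {
    public static class Profile {
        public final int statusCode;
        public final int bodyLength;
        public Profile(int statusCode, int bodyLength) {
            this.statusCode = statusCode;
            this.bodyLength = bodyLength;
        }
    }

    // Request a random file name and record the status code and body size
    // as the server's not-found signature.
    public static Profile profile(String baseUrl) throws IOException {
        String randomName = UUID.randomUUID().toString().replace("-", "");
        URL url = new URL(baseUrl + "/" + randomName + ".html");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setInstanceFollowRedirects(false); // a redirect is part of the signature
        int status = conn.getResponseCode();
        int length = conn.getContentLength();   // -1 if unknown; fine for a sketch
        conn.disconnect();
        return new Profile(status, length);
    }

    // A later response "looks like" not-found if it matches the signature,
    // even when the status code is 200.
    public static boolean looksLikeNotFound(Profile p, int status, int length) {
        return status == p.statusCode && Math.abs(length - p.bodyLength) < 50;
    }
}
```

A real implementation would compare body text similarity rather than raw length, and would repeat the probe per file extension, since platforms often handle .jsp and .html misses differently; that is exactly why the demo later requests random names with a variety of extensions.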
And that's true of any tool: the more knowledge you have, the better you're going to be able to use it.

So, a few examples of some of the settings. There's the ability to block specific query parameters from any kind of testing. You can also mark a specific query parameter as being irrelevant from a spidering point of view. You can create URL whitelists and blacklists based on regular expressions, which helps prevent the scan from leaking out into other websites, or into parts of an application that should be off limits. And you can also identify known session ID names, which obviously helps with the session management testing.

Again, this isn't a complete list of all the test modules that Grendel has; it's just the highlights, things I think are a little more interesting.

There are several spidering modules. The HTML tag requester is a traditional spider, basically. It will look for tags with a src or href attribute value (which tags are tested is configurable by the user), and then the URL the attribute points at is requested. Now, there's a certain amount of risk associated with this. RFC 2616, if you want to look it up, explicitly states that a GET method, which is what's used to request src or href attributes, should only be used to request data; it should never modify data back on the server. Despite that, it's not uncommon to find web apps that will use a GET to modify data. Google found this out the hard way when they released their desktop caching engine a few years ago. Fairly quickly they started receiving complaints, because the prefetching logic of the caching engine was going out and requesting URLs of applications that users were logged into, and it was modifying data, in some cases even deleting data on those apps.

Last year I was speaking with someone who works on one of the best-known commercial app scanning tools, and I asked him how they handled this. At the time I didn't really mention that I was working on an open source scanner, which I feel a little bad about now, but it's the type of information that I'm pretty sure they would openly give to any potential customer. His answer was basically a very polite "too bad," and I agree with him; he had several points. One was that it's impossible for an automated tool to anticipate every kind of bad application design; there are just too many ways that you can screw something up. And if you're scanning an application that's really fragile and poorly designed, there's no way that you should be doing it in a production environment. If you're doing it in test, then you should be able to recover fairly easily from any kind of problem like this. And if you come across something like this after an automated scan, then you're going to know the application is poorly designed, so you can go back and change it.

Something that he didn't mention is that, at least in my opinion, a developer who's going to make this kind of mistake, using a GET to modify data, is extremely unlikely to protect the same query against cross-site request forgery, which is a much, much bigger issue than simply modifying some data on a web server via an automated scan.

So there are a couple of ways that this can be avoided. One is that you can use a URL blacklist to prevent the scan from visiting parts of the website that are problematic, but that requires a fairly in-depth knowledge of the application and can be time consuming. A better option is just to ditch the spider completely and use the internal proxy server to guide the scan, so that when you get to a page that says something like "Click here to delete the database," you don't click there.
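As a rough illustration of the URL whitelist/blacklist idea mentioned above, here's a hedged sketch; the class and method names are mine, not Grendel's.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Hypothetical sketch of regex-based URL scoping: a URL is in scope only if
// it matches at least one whitelist pattern and no blacklist pattern.
public class UrlScope {
    private final List<Pattern> whitelist = new ArrayList<Pattern>();
    private final List<Pattern> blacklist = new ArrayList<Pattern>();

    public void allow(String regex) { whitelist.add(Pattern.compile(regex)); }
    public void deny(String regex)  { blacklist.add(Pattern.compile(regex)); }

    public boolean inScope(String url) {
        boolean allowed = false;
        for (Pattern p : whitelist) {
            if (p.matcher(url).find()) { allowed = true; break; }
        }
        if (!allowed) return false;               // keeps the spider on the target site
        for (Pattern p : blacklist) {
            if (p.matcher(url).find()) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        UrlScope scope = new UrlScope();
        scope.allow("^https?://www\\.example\\.com/");  // stay on the target
        scope.deny("/admin/|logout");                   // avoid destructive or session-ending pages
        System.out.println(scope.inScope("http://www.example.com/catalog"));      // true
        System.out.println(scope.inScope("http://www.example.com/admin/delete")); // false
    }
}
```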
And Grendel is only going to test areas of the application that you're actively requesting. It's not going to go out and start testing URLs that only appear in an HTML response, unless you enable the spider.

The form baseline module will basically try to guess how an HTML form should be filled out and then submit it to the server. There's actually even more risk associated with this one, because a form can generate a POST message, and POSTs can legitimately modify data on a server. However, the module allows you to select which methods are used, so if you want to play it safe, just don't enable POST.

Search engine recon can be pretty useful. It's not uncommon when I'm performing a pen test, an unauthenticated pen test, to come up against a website where I'm basically just seeing the login page, maybe some help pages or something like that; basically very little to test. But when I go to a search engine, I find that some other website has linked to content on the site that I'm testing, a search engine has picked that up and indexed it, which obviously allows it to be tested. This test module will query Google, Live, and Yahoo. It'll also throttle the requests so that you don't get blocked for sending too many queries at once.

File enumeration is pretty straightforward. It's basically a brute force attempt to guess common directory or file names. Now, this is a good example of a module that can take a while to run. There's really no way around that; it's just the nature of file enumeration, it's a brute force attempt. When you look at the configuration page for the module, it will say: this module takes a while to run, only enable it if you know you have time to wait. So the lesson is, don't just randomly start clicking on things; make sure you understand what the impact is of the modules that you're running. Usually I'll only enable something like file enumeration if I'm having a lot of problems finding content on a website, or if maybe I have a lot of time left on an engagement and I want to go back and double check that there's nothing hidden or unusual that I could have found.

There are a number of session management test modules. One of them will test the session ID strength. It will look for repeated session ID values coming back from the server, either in the cookie value or in a query parameter value. It will also check the level of entropy, or randomness, between values that come from the server, and obviously report on it if there's an insufficient amount. Session IDs should never be stored in a URL, for a number of reasons, so there's a module that will test for that and report it. Session fixation is a vulnerability that allows an attacker to trick a user into using a session ID that's known to the attacker. There are a number of ways this can happen, and there's a module that will test for the common ones.

The authentication enforcement module I mentioned briefly earlier. It's marked as experimental because it's not as accurate as I would like it to be. Basically, it will look for requests that were sent authenticated where a similar request was never sent unauthenticated. The session IDs are stripped out, either from the cookies or from a query parameter, and the request is re-sent. If the two responses appear to be the same, then it's assumed that an authentication bypass is possible. Now, that's not always a problem; it could be just a cascading stylesheet or something like that. But there's always a possibility that sensitive information is leaked, even in an obscure format, so it's something that needs to be manually checked in order to confirm whether it's important or not.
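Here's a rough sketch of that authentication enforcement check; Grendel's real module is more involved, and everything named below is an assumption for illustration.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical sketch: replay an authenticated request without its session
// cookie and flag a possible authentication bypass if the response matches.
public class AuthEnforcementCheck {

    static String fetch(String urlString, String cookieHeader) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(urlString).openConnection();
        if (cookieHeader != null) {
            conn.setRequestProperty("Cookie", cookieHeader); // e.g. "JSESSIONID=abc123"
        }
        BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        StringBuilder body = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            body.append(line).append('\n');
        }
        in.close();
        return body.toString();
    }

    // True if the page looks the same with and without the session ID.
    // A real scanner would use a fuzzier comparison than string equality.
    public static boolean possibleBypass(String url, String sessionCookie) throws IOException {
        String withAuth = fetch(url, sessionCookie);
        String withoutAuth;
        try {
            withoutAuth = fetch(url, null);
        } catch (IOException e) {
            return false; // a 4xx on the unauthenticated replay means enforcement is working
        }
        return withAuth.equals(withoutAuth);
    }
}
```

As the talk notes, a match still needs a human look: a static stylesheet matching is harmless, a report page matching is not.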
There are two cross-site scripting modules. One of them is for testing query parameters, which is where you usually see cross-site scripting, and the other is for testing file names. Some web platforms (WebLogic in particular had a problem with this for a while) will repeat the file name verbatim inside of a file-not-found message, and if the file name contains JavaScript, then it's executed by the browser.

Both of the modules use essentially the same technique for testing cross-site scripting. It will seed the input mechanism, whether it's a file name or an input parameter, with a random token, and if the token is observed in the response from the server, then the context is identified. It could be an HTTP header value; it could be an HTML tag attribute value, or a tag attribute name; it could be inside of an HTML comment block. There are over a dozen different contexts in all that can be identified. Now, technically this isn't part of the cross-site scripting module, it's underlying functionality. Once an output context is identified, it's tagged as an input-output flow that can be tested by a number of different modules, for example carriage return line feed injection. By knowing what the output context is, the cross-site scripting module knows how to escape from it. So, for example, if it's in a textarea block, it knows that any attacks need to be preceded by a closing textarea tag.

Sorry, backing up real quick: before an actual attack is sent, a series of requests is sent containing characters that are commonly used in cross-site scripting attacks but that may be filtered. So, for example: greater than, less than, single quote, double quote, and so on. Any attacks that contain filtered characters are not sent, which significantly improves the testing efficiency.

There are a number of formats that attacks can be sent in. Basically, these are just ways to try to evade cross-site scripting filters. It could be unusual ways of inserting executable content into a document; it could be different encoding formats that might bypass the filters. When a response is received, the HTML is parsed into the Document Object Model, and every piece of executable content is run. So, for example, event handlers (onclick, onload, onerror) are all executed, and script blocks are executed.

Some simple, and I would say rather naive, cross-site scripting filters will look for specific text strings, such as methods or functions like alert or maybe document.cookie, to filter on. That's a really bad way of protecting against cross-site scripting. An experienced penetration tester is going to be able to quickly recognize that the attack failed simply because of a rather simple filter; it's a little more difficult for an automated tool.
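Here's a loose sketch of that two-step approach: seed a random token to find the output context, then probe which attack characters survive the application's filters. The names and the context model are simplified assumptions, not Grendel's actual internals.

```java
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

// Hypothetical sketch of reflection-context discovery and character probing.
public class XssProbe {

    // Generate a random alphanumeric token that is unlikely to occur naturally.
    static String randomToken() {
        String alphabet = "abcdefghijklmnopqrstuvwxyz0123456789";
        StringBuilder sb = new StringBuilder("zq");
        Random rnd = new Random();
        for (int i = 0; i < 10; i++) {
            sb.append(alphabet.charAt(rnd.nextInt(alphabet.length())));
        }
        return sb.toString();
    }

    // Crude context identification: where does the token land in the response?
    static String identifyContext(String responseBody, String token) {
        int idx = responseBody.indexOf(token);
        if (idx < 0) return "not reflected";
        String before = responseBody.substring(Math.max(0, idx - 40), idx).toLowerCase();
        if (before.contains("<!--")) return "HTML comment";
        if (before.contains("<textarea")) return "textarea block";
        if (before.matches("(?s).*<[a-z]+\\s[^>]*[\"'][^\"']*$")) return "tag attribute value";
        return "HTML body text";
    }

    // Probe which XSS-relevant characters made it through the app's filters:
    // the probe was sent as token + candidate characters, so whatever appears
    // right after the reflected token was not filtered.
    static Set<Character> survivingCharacters(String reflectedProbe, String token) {
        char[] candidates = { '<', '>', '\'', '"', '(', ')', ';' };
        Set<Character> survivors = new HashSet<Character>();
        int idx = reflectedProbe.indexOf(token);
        if (idx < 0) return survivors;
        int start = idx + token.length();
        String after = reflectedProbe.substring(start,
                Math.min(reflectedProbe.length(), start + 20)); // only the window after the token
        for (char c : candidates) {
            if (after.indexOf(c) >= 0) survivors.add(Character.valueOf(c));
        }
        return survivors;
    }
}
```

Any attack string containing a filtered character can then be skipped outright, which is the efficiency win David mentions.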
So Grendel uses a fake method called testXSS, which is an extension to the Document Object Model that it uses. Whenever testXSS is called, the test modules know that the cross-site scripting attack was successful. There's a similar mechanism that's used for intercepting external script file references using a script tag and a src attribute. Using that technique, as opposed to something like regular expressions, can significantly improve the accuracy, because since it's acting as a browser, there's a very low possibility of false positives.

There are two SQL injection modules. One of them is error-based: a simple way of testing for SQL injection is to throw a single quote onto the end and look for an error message in the response. It doesn't always mean that SQL injection is possible, but nine times out of ten it will be, and at the very least there's some sort of information leakage from the error message. Grendel has a collection of regular expressions that match the patterns of error messages from all the common databases (even DB2 and Access are in there) and also from database drivers like JDBC or OLE DB.

The SQL tautologies module is marked as experimental, again because the accuracy isn't quite what I would like it to be. Testing for SQL injection using tautologies is well beyond the scope of this presentation, but basically, from an automated point of view, you have to answer the question: does response A look more like response B, or response C? That's something that's very easy for a human to do. Humans are going to be able to read the text and understand it; a human will pick up on subtle visual cues that would be difficult for a program to identify. So there are some techniques that I'm working on that will significantly improve this, in terms of being able to score the difference between two different responses, and I'm optimistic that by the next release of Grendel this will be out of the experimental phase. And it still works: just because it's marked as experimental doesn't mean that it's broken. It just means that I know there are significant changes that will be made, or at least important changes, in the next version.

A number of miscellaneous tests. Carriage return line feed injection is very similar to cross-site scripting, except that instead of injecting into the HTML body, you're injecting into the HTTP response headers. Cross-site request forgery is pretty difficult to test for on an automated basis, mostly because it usually doesn't matter. For example, the Google search engine is vulnerable to cross-site request forgery, but nobody cares, except from a forensics point of view. There are a few things that I'm going to be tweaking on this module to make it more accurate, but in the end it's going to require manual investigation to determine whether a form is truly important enough to need protection from cross-site request forgery. The directory traversal module is very similar in its approach to identifying vulnerabilities to the SQL tautology module, so I expect that both of them will leave experimental status at about the same time.

The generic fuzzing module can be pretty useful. It takes a while to run, but it's not like you have to sit there and watch it the whole time. It takes a set of predefined strings, either default or user-defined, and appends them to the end of every input parameter that's observed by the scan. The response is checked to see if it's a 500 response code, which indicates there was some kind of problem on the server; in this context, it usually means there was a problem with input validation. It also checks the response for platform error messages. That includes the database error message patterns that I mentioned earlier, but Grendel also has a collection of patterns that match common web platforms like .NET, PHP, ColdFusion, and so on.
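A minimal sketch of that kind of generic fuzzing check follows, with a couple of illustrative error-message regexes; the patterns and fuzz strings here are examples I picked, not Grendel's actual lists.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.util.regex.Pattern;

// Hypothetical sketch: append fuzz strings to a parameter and flag responses
// that return a 500 or contain a recognizable platform/database error message.
public class GenericFuzzer {

    static final String[] FUZZ_STRINGS = { "'", "\"", "--", ")", "../../../etc/passwd" };

    // Example patterns only; a real scanner carries a much larger collection.
    static final Pattern[] ERROR_PATTERNS = {
        Pattern.compile("You have an error in your SQL syntax"), // MySQL
        Pattern.compile("ORA-\\d{5}"),                            // Oracle
        Pattern.compile("(?i)unclosed quotation mark"),           // SQL Server
    };

    static String readBody(HttpURLConnection conn) throws IOException {
        InputStream in = conn.getResponseCode() >= 400
                ? conn.getErrorStream() : conn.getInputStream();
        if (in == null) return "";
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) > 0) out.write(buf, 0, n);
        in.close();
        return out.toString("UTF-8");
    }

    public static void fuzzParameter(String baseUrl, String param) throws IOException {
        for (String fuzz : FUZZ_STRINGS) {
            String target = baseUrl + "?" + param + "=" + URLEncoder.encode(fuzz, "UTF-8");
            HttpURLConnection conn = (HttpURLConnection) new URL(target).openConnection();
            int status = conn.getResponseCode();
            String body = readBody(conn);
            if (status == 500) {
                System.out.println("Server error with input " + fuzz + " at " + target);
            }
            for (Pattern p : ERROR_PATTERNS) {
                if (p.matcher(body).find()) {
                    System.out.println("Error message leak (" + p.pattern() + ") at " + target);
                }
            }
            conn.disconnect();
        }
    }
}
```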
There are a number of information leakage modules. Just a few of them: the platform error messages module basically uses the same collection of patterns that I just mentioned, except that it passively applies them to each transaction. The advantage is that when you're able to identify an error message like this, sometimes it can reveal a vulnerability, but at the very least it will help to identify the inner workings of a certain part of the application. Sometimes those error messages can be inside of HTML comments, and it obviously is very time consuming to look at every single comment on a manual basis.

The robots.txt module: well, robots.txt is intended to guide a search engine as to where content should and should not be indexed. Some people mistakenly use it as a security control, to try to hide areas of a website, like say an admin interface. It's a bad idea, though, because it actually reveals the existence of that interface. The comment lister just lists out the HTML and JavaScript comments that are identified. Every once in a while you find something useful in them; for example, once I found some database credentials that had been put in there during the development phase but never removed when the app went to prod.

There are a couple of web server configuration tests. Cross-site tracing is possible when the TRACE or TRACK methods are enabled, which they usually are by default. Proxy detection: sometimes a web server can be misconfigured to act as a proxy server, especially with mod_proxy on Apache, and if that is present in a perimeter environment, it can be a particularly devastating vulnerability, because it usually allows an attacker to completely bypass perimeter firewall rules.

There's a good amount of application mapping functionality, and again, this is mostly useful for a penetration tester, sort of as an automated form of reconnaissance. I already mentioned input-output flows regarding testing for cross-site scripting; there's a module that will list every input-output flow that's detected, regardless of whether a vulnerability is associated with it. That allows a tester to go back and manually check each of those flows for vulnerabilities like cross-site scripting or carriage return line feed injection. There's a module that will make an offline website mirror, which is useful for manual investigation. And I already mentioned Nikto.
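For the cross-site tracing check, the probe itself is simple enough to sketch: send a TRACE request and see whether the server echoes it back. This is a hedged example of the general technique, not Grendel's module; the marker header name is made up.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical sketch: a server with TRACE enabled echoes the request back,
// including a marker header we add, which is what makes XST attacks possible.
public class TraceCheck {
    public static boolean traceEnabled(String urlString) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(urlString).openConnection();
        conn.setRequestMethod("TRACE");
        conn.setRequestProperty("X-Trace-Marker", "grendel-test");
        if (conn.getResponseCode() != 200) {
            return false; // 405 Method Not Allowed is the healthy answer
        }
        BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        StringBuilder body = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            body.append(line).append('\n');
        }
        in.close();
        // If our marker header comes back in the body, the request was echoed.
        return body.toString().contains("X-Trace-Marker: grendel-test");
    }
}
```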
So with that, Eric's going to give a demonstration of the tool.

Okay. So, first a few notes about the demonstration environment. It's a Slax 6 based live CD. The server target for our scans is a typical LAMP stack: Linux, Apache, MySQL, PHP. On top of that is running an older version of the Zen Cart open source shopping cart application, circa February 2004. There are a couple of reasons we chose this particular application. One is that it contains a wide variety of known security vulnerabilities. And secondly, while we considered using a designed-vulnerable application (I personally think the Foundstone Hacme series is really great), we specifically chose instead to use a real-world application, and Zen Cart is very commonly deployed on the internet; it was back then, and it still is today. The client that we're going to be using to demonstrate the intercepting proxy is the Mozilla Firefox web browser, version 3.0.

So let me go ahead and flip to my virtual machine. This is Grendel-Scan. In the interest of time we're not going to go through a fully automated scan here, but I wanted to point out that it's very easy to do one. All that's necessary is to enter your base URL here and add it, then go into the test module selection tab. There are some default tests enabled, but you can change them to whichever tests you want to run, and you must enable at least one spider if you want to do a fully automated scan. Then just start the scan. Easy enough.

In my case, I'm going to demonstrate the intercepting proxy and perform a guided scan. I'm going to open up some settings that I previously configured. The first thing to note is that it's configured to bind to localhost on TCP port 8008, and I've configured my web browser for the same. In the test module configuration, I'm going to enable an error-based SQL injection test, as well as the cross-site tracing module, and then I'm going to go ahead and start the scan.

In the scan status window, the first thing you'll note is that it's performing some tests for logical 404 responses. What this means is that Grendel is making a number of requests to the server using randomly generated file names with a variety of platform-specific file extensions. The reason it's doing this is to create known file-not-found conditions on each one of those platforms, and based on how the server responds to the file-not-found condition, whether the HTTP status code is a 404 or something else, it will use that to judge the other responses coming back from the server in the future.

What it's saying here is that the scan queues are empty, but more content can be supplied through the internal proxy. So what I'm going to do is flip over to my web browser here and open up the Zen Cart application. Now I'm using the internal proxy, and notice that there are some hidden fields shown here. Normally they're hidden; now they're converted by the proxy dynamically to text fields (that's a setting). I'm going to select one of these products. I can browse through the application normally; when I make a request, that request is submitted through the proxy, and the proxy will automatically test it for vulnerabilities according to the module parameters I've configured. In this case, you can see it's already discovered a possible SQL injection vulnerability in the page I requested.

So we can view the details of these transactions. I'm going to go into the transactions tab, and I'm going to change the view settings to show all the transactions except the 404 detection ones. Let's list all those other transactions. This one, 296, is my product request, and this is the transaction viewer. Excuse me. The request and response are shown here; you can view either in parsed or in raw mode, and the request or response bodies can be viewed as rendered HTML, in text, or in an internal hex viewer.

So I'm going to go back to the transaction, and this time I want to right-click on it; I've got some options, and I want to send it to the manual requester. So I'll say send to manual request, and immediately the dialog pops up.
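If you'd rather feed the proxy from a script than from a browser, any HTTP client pointed at localhost:8008 will do. Here's a minimal Java sketch; the target URL is just an example, not part of the demo.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;

// Minimal sketch: route a request through Grendel's internal proxy so the
// guided scan sees (and tests) the transaction, just like a browser would.
public class ProxyClient {
    public static void main(String[] args) throws IOException {
        Proxy grendel = new Proxy(Proxy.Type.HTTP,
                new InetSocketAddress("localhost", 8008));
        URL url = new URL("http://demo-target/index.php?main_page=product_info&products_id=106");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection(grendel);
        System.out.println("Status: " + conn.getResponseCode());
        conn.disconnect();
    }
}
```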
This is basically the same view, except now I can edit any of these items in the request, and once I've got it the way I like it, I can hit execute to send it to the server and then view the response here.

The proxy settings tab is pretty simple; it configures the behavior of the proxy. There's the HTML form fields setting I mentioned, to reveal hidden fields, and the internal proxy settings: I can change the bind address, the bind port, or the number of threads, and I can stop and restart the proxy from this interface.

Next, the intercept settings. As David mentioned, Grendel-Scan is an intercepting proxy, so I can enable request interception here. This is a regular-expression-based rule set that determines whether or not a certain request is intercepted. There's an element of the request listed here under "component," then the value, then the regular expression part, and then whether to intercept or not intercept the request based on that match. What happens is that it's all logically ANDed together: any request where all of the intercept rules match, and none of the don't-intercept rules match, will be intercepted. So, just to demonstrate, I'm going to go back to the web browser and request another item here. You'll notice that it's now popped up a request intercept dialog, and again, I can edit anything here, in either parsed or raw mode, and then accept the changes and it'll submit that to the web server.

Going back to the transactions tab, I want to demonstrate the manual fuzzer feature. I'm going to right-click on this transaction again and send it, this time to be used as a fuzz template. The fuzz template is the base for the request that I'm going to send in my fuzzer. I want to fuzz the products_id parameter in this case: instead of 106, I want to use a bunch of different other values. So I'm going to replace the value with a token, which is FUZZ with two percent signs on either side, like that, if you can see it. This is called the insertion point; this is where my fuzz value is going to be inserted. I'll hit OK, and then I'm going to define a fuzz vector. In this case these are numbers, so I want to use a numeric sequence: begin with 1, go to 100, increment by 1, ascending.

Fuzz criteria is the next part. Here I've got it set to check against the platform error messages that David mentioned earlier, which can be useful. This rule defines which transactions get displayed: the response code is one of the elements I can choose, and if this component matches, or optionally doesn't match, the transaction is displayed in the table listed below. In this case I'll make the rule on the response code of my request being 200, it doesn't match, so I'll go ahead and add that and then start the fuzzer. You'll notice that it's making requests of the web server with products_id starting from 1 and incrementing all the way up to 100. All of these have an HTTP response code of 200, so I can view the details of any of these transactions, the same as I could before, in the transaction viewer. I'm just going to double-click: there's my response, and there's the product that happens to match this particular products_id of 9.

So now that I've gone through the interface, I want to show you, excuse me, a report from a more complete scan that we previously ran. Here are some vulnerabilities that the scan was able to detect in the application.
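The %FUZZ% insertion-point mechanic is easy to picture in code. Here's a simplified sketch of what the fuzzer is doing with that numeric vector; the token syntax is borrowed from the demo, everything else is hypothetical.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical sketch: substitute a numeric fuzz vector into a request
// template at the %FUZZ% insertion point and report each response code.
public class TemplateFuzzer {
    public static void main(String[] args) throws IOException {
        String template = "http://demo-target/index.php?main_page=product_info&products_id=%FUZZ%";
        for (int i = 1; i <= 100; i++) {            // numeric sequence: 1..100, step 1, ascending
            String target = template.replace("%FUZZ%", String.valueOf(i));
            HttpURLConnection conn = (HttpURLConnection) new URL(target).openConnection();
            int status = conn.getResponseCode();
            // A display rule (like the response-code filter in the demo) would
            // decide here whether this transaction is worth showing.
            System.out.println("products_id=" + i + " -> HTTP " + status);
            conn.disconnect();
        }
    }
}
```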
This one, for instance, is carriage return line feed injection. In any of the findings, we see the severity of the finding, the URL where it was discovered, and a description of the vulnerability. In this case, carriage return line feed injection: we've injected %0D%0A at the end of a parameter, and if that was successful, a header of our choice will appear in the output, which means we've got carriage return line feed injection. It looks like we found it here; we've got our carriage return line feed. Let me look at the transaction, just to verify that it's real. And indeed, there's our carriage return and our line feed, and you can see that the new header of our choice was inserted into the response. We've also got a recommendation for remediation, so how you would fix it.

Another example here is SQL injection, so obviously pretty high severity. As a lot of you know, we just put a single quote in. In the details of this request you can see our single quote, %27, and then in the response, if I go to the end of it, we can see the SQL error. This is definitely vulnerable to SQL injection. So again: impact, recommendation, and a reference for finding out more about this kind of vulnerability.

So, when comparing an automated web scanner with a manual penetration test, automated web scanners actually do have some advantages. The first is that the training requirements are pretty minimal; it doesn't require a lot of application security specific knowledge to start a test. Also, the number of man-hours is pretty low: even if a scan takes several hours to complete, the scan can be started and you can walk away and come back, usually to a finished report. The up-front cost is typically lower with an automated web scanner; well, not necessarily significantly lower, but somewhat lower, and especially the incremental cost is what I'm getting at. Frequently you might be able to buy a one-seat license which you can use on multiple applications, as opposed to a penetration test, which is targeted at an individual application.

Now, when I mentioned some advantages, you'll note that one advantage I did not list in favor of automated scanners is accuracy, and there's a good reason for that. There's a pretty significant class of vulnerabilities that automated web scanners just cannot find, the first of which are logic flaws. One example that's been known for a while: if your shopping cart accepts negative quantities of a product to be put into your cart, and say, for example, I put negative 8 of a certain item that costs $100 each into my cart, I could get negative $800, essentially an $800 credit, in my shopping cart before I check out. An automated web scanner is not usually going to be able to find this vulnerability, because it's not able to understand the logic involved, or what that number really even means.

Another example is design flaws. Say you have a password recovery question, a secret question used to recover your password, and the question is: what was the make of your first car? That's a really terrible question, and the reason is that there are relatively few automakers. You can iterate through the entire list relatively quickly, with relatively few requests, and come up with the correct answer. Again, an automated scanner is not going to be able to detect this; it doesn't understand the English language, and it isn't able to parse that kind of question or know that it's bad.
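To make the carriage return line feed finding above concrete, here's a rough sketch of the probe: append an encoded CRLF plus a marker header to a parameter and check whether the marker comes back as a real response header. The names are hypothetical, not Grendel's.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical sketch: test a query parameter for CRLF (HTTP response
// splitting) injection by appending %0d%0a plus a marker header.
public class CrlfCheck {
    public static boolean vulnerable(String baseUrl, String param, String value)
            throws IOException {
        // %0d%0a is an encoded carriage return + line feed; if the server
        // reflects the parameter into a response header unsanitized, our
        // marker will appear as a header of its own.
        String payload = value + "%0d%0aX-Crlf-Marker:%20injected";
        URL url = new URL(baseUrl + "?" + param + "=" + payload);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setInstanceFollowRedirects(false); // splitting often shows up on redirects
        conn.getResponseCode();
        String marker = conn.getHeaderField("X-Crlf-Marker");
        conn.disconnect();
        return "injected".equals(marker);
    }
}
```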
Then there's abstract information leakage. A year ago, the Macworld Expo had a good example of this. On the registration page there were coupon codes, and coupon codes were worth a discount of potentially several hundred dollars off the price of admission. It so happened that in the JavaScript source code of the web page, there were a number of MD5 hashes listed, and these MD5 hashes were for valid coupon codes, used for client-side validation. It was possible to brute force through the entire list, check against the MD5 hashes, find out which coupon codes were valid, and then submit one. Obviously a serious vulnerability, but a scanner is not going to be able to detect this.

So, we've gone over how scanners really can't understand application logic, or data, or how the application is functioning, whereas a human penetration tester is typically going to be able to figure that out and understand things that the scanner would not. Also, scanners typically generate far more traffic than a manual test, so if you're looking to conduct a test with stealth, an automated scanner is probably not your best choice.

I want to talk about this a little bit more, because as a pen tester it's kind of a pet peeve of mine. It's easy to dismiss the kinds of vulnerabilities that Eric was discussing as not very common, or as something relegated to code that was written in the '90s. That's not true. We were talking a couple of days ago about web app firewalls, and a co-worker of mine commented on how you can't filter out stupid. Well, you can't scan for stupid either. This is an exploit from a pen test that I did earlier this year; can you spot what I'm doing here? The default is free_purchase=no, and when you change it to yes, it does exactly what you think it would. In fairness, this is not an extremely complicated vulnerability; it's something that even a novice pen tester should be able to find.

A better example comes from a test that a colleague of mine did last year. What he found was that, given an employee's name, he could go to different parts of an application and query information about that employee. Most of those queries, in order to get to the data, required some other kind of secondary information, like an employee ID number. But what the tester found was that all of that information was available from somewhere else in the app. So in the end, he was able to walk away with the employee's home address, home phone number, Social Security number, employee ID number, and some other sensitive information. He shouldn't have been able to get access to any of that with the account that he had. In essence, it was a design flaw, because each query was designed ignoring the behavior of the rest of the application. And this type of vulnerability we see all the time; it's very common in applications.

Usually in the context of compliance, I often hear discussions, or read about, whether or not an automated scan qualifies as a pen test. From a security point of view, the answer is absolutely not, and it's ridiculous to assert otherwise. We've already talked about entire classes of vulnerabilities that are theoretically impossible for an automated scan to discover, until we have HAL-like intelligence in our software.
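Coming back to the Macworld example for a moment: the attack is just offline hash guessing, which is easy to sketch. The candidate-code format and the hash value below are fabricated for illustration.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: given MD5 hashes leaked in client-side JavaScript,
// brute force candidate coupon codes offline until one hashes to a match.
public class CouponBruteForce {
    static String md5Hex(String s) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("MD5").digest(s.getBytes());
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        // Hashes copied out of the page source (fabricated value here).
        List<String> leakedHashes = Arrays.asList(
                "5f4dcc3b5aa765d61d8327deb882cf99");
        // Guess codes of the form PROMO000 .. PROMO999. No server requests
        // are needed until a match is found, so rate limiting never triggers.
        for (int i = 0; i < 1000; i++) {
            String candidate = String.format("PROMO%03d", i);
            if (leakedHashes.contains(md5Hex(candidate))) {
                System.out.println("Valid coupon code: " + candidate);
            }
        }
    }
}
```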
But beyond that, there are a lot of vulnerabilities that, while I can imagine a way of writing a test module to find them, it's just not going to happen. Earlier this year I was doing a test, and I noticed that when square brackets were appended to a specific query parameter, the response from the server was subtly different. Doing a little bit of research and experimenting, I found out that the application was based on a somewhat obscure web platform written in Perl, and eventually I was able to write an exploit that allowed arbitrary Perl code to be executed on the web server. That's a pretty significant finding, and again, I can imagine a way of writing a test module for it, but it's a fairly obscure platform, and even if you had the resources to write tests for every, or even many, different pieces of software used out on the internet, you'd end up with a tool so bloated and so slow that no one would want to use it.

Just about every application that I test has something about it that makes it unique enough that some of its behavior would be missed by an automated scanner, whether it's functionality that's somewhat non-standard and written specifically for the application, or the choice of some piece of software that maybe people have heard of, but that isn't common enough to justify a test module. The only time that you're going to get really good code coverage from an automated scanner is if the application is just plain vanilla, doesn't have anything very complicated, and is based on a very common and simple web platform like PHP or classic ASP. And that doesn't even get into the accuracy aspect of scanners on things like SQL injection and cross-site scripting; in the end, a scanner is just not a human.

My wife used to be an interpreter and a translator, and she made a really good analogy comparing automated app scanners with automated translation software. I think we've probably all used Google or Babelfish to translate a website, and it does a decent job; you can usually understand what it's getting at. But you'd never dream of using them, or even a more expensive piece of commercial software, to translate something complicated, like a piece of literature, a legal document, or maybe even a car manual. Apparently, sometimes you can't even rely on it to translate a single word. If you can't read this, it's a sign that says "Translate server error" when it should read "Restaurant." Even a Chinese-English dictionary would have gotten that one right. Now that you're laughing at my jokes, I'm losing my train of thought.

There is a legitimate use for automated scanners; otherwise I wouldn't have bothered writing Grendel. I think a good example is, say, a corporate information directory, basically an internal phone book. Security is important there: it would be a very appealing target for persistent cross-site scripting by an internal attacker, it's probably used widely by most of the employees of the organization, and it's certainly going to be much more trusted than some random website out on the internet. But very few organizations are going to have the resources to perform a manual penetration test against that. So by running an automated scan, you get some degree of comfort that at least the simple vulnerabilities have been identified and remediated, and then any successful attack is going to require some degree of sophistication. That leaves your budget for testing important applications, like those that handle credit card data, financial records, human resources information, things like that.

So I want to go over the product roadmap real quick. This is not a complete list of new features, just the highlights. Version 1.1 should be out around Thanksgiving this year. You'll be able to preview the scan results before the scan is completed, which is especially useful if you're doing a manual test alongside it. HTTP-based authentication will be supported.
I really wanted to have this ready in time for DEF CON; it just didn't happen. However, very few web apps these days use HTTP authentication; almost all of them use HTML form-based authentication, with some kind of cookie or parameter-based session tracking on top of it.

Multipart MIME-encoded POST bodies will also be supported. That's a little bit bigger of a deal. Most of the time that I was writing Grendel, this wasn't supported by the version of HttpComponents that I was using, but it is now, so I need to go back and change it. However, the default for POSTs is URL encoding, and that's far more common; usually the MIME encoding is used for uploads or some kind of binary data transfer.

The fuzzer is going to have a number of enhancements; that's the manual fuzzer that Eric was demonstrating. There will be a number of new fuzz vectors. You'll also be able to identify a specific response as normal, and then anything from the fuzzing that significantly deviates from normal is reported; of course, you'll be able to set a threshold to determine that. You'll also be able to have the fuzzer guess at what a normal response is, so that if 90% of the responses look the same, that becomes the baseline.

You'll be able to start a scan from the command line, which will be useful for scheduling, or for performing bulk scans from a script. There'll also be a lightweight interface mode, which will be useful for users that just want to perform an automated scan; they don't really need to see all the advanced configuration options, and they certainly don't need any of the manual testing features during the scan.

A few of the new test modules. Authorization enforcement, which I know is going to be experimental from the beginning: basically, it will look for requests that were sent by user A but never by user B, then try to explicitly request the content as user B and see if the responses look the same. Parameter incrementing will look for something like product_id=10 and then try requesting product_id=11 and product_id=9; you'll be able to set bounds to prevent it from ballooning out of control. SSL configuration testing: do you have SSL version 2 enabled, are you running weak SSL cipher suites? Error-based username enumeration: does a valid username with an invalid password result in the same error message as an invalid username? It shouldn't, but oftentimes it does.

Version 1.2 will be out late spring or early summer of next year. Automated AJAX navigation will be one of the test modules for spidering. Right now, Grendel can't automatically crawl a site that uses AJAX or JSON or something like that, just because the interface that's presented is significantly different from traditional HTML. You can still test those sites just fine by browsing through the internal proxy, and Grendel will just start testing queries as they're sent. The foundation for AJAX navigation is already there: we have the JavaScript engine, the DOM implementation is solid, and that's really what's needed to implement and mimic browser behavior.

PDF and XML report formats will be supported, as will one-time passwords like RSA SecurID, authentication domains, and client-side SSL certificates for authentication. You'll also be able to stop a scan, save it, exit the program, start it back up, and resume the scan where you left off.
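The parameter incrementing module mentioned a moment ago is simple to picture. Here's a hedged sketch of the idea, with the bounds check David mentions; all names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: given an observed numeric parameter value, probe the
// neighboring values within a configured bound to look for content that was
// never linked to, e.g. other users' records.
public class ParameterIncrementer {
    public static List<String> candidates(String baseUrl, String param,
                                          int observed, int bound) {
        List<String> urls = new ArrayList<String>();
        // Probe observed-bound .. observed+bound, skipping the value we already saw.
        for (int v = observed - bound; v <= observed + bound; v++) {
            if (v == observed || v < 0) continue;
            urls.add(baseUrl + "?" + param + "=" + v);
        }
        return urls;
    }

    public static void main(String[] args) {
        // product_id=10 with a bound of 2 yields 8, 9, 11, 12.
        for (String u : candidates("http://demo-target/view", "product_id", 10, 2)) {
            System.out.println(u);
        }
    }
}
```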
The website is grendel-scan.com; it'll work without the dash, also. The presentation materials, the tool of course, and the ISO for the demonstration environment will all be there. That's all. Let's see, do we have any time? No, we have no time for questions; Room 105 is where we'll be. Thank you very much.