So a little bit about us. We work for FishNet Security, but we're not salespeople, so you really don't have to worry about that portion of it. I'm a principal consultant on the AppSec team. I'm also an associate professor of software engineering at the University of Advancing Technology; they have a booth over in the vendor area if you feel like talking to them. And even though we didn't change the slides, that's actually J-Rock, not Justin Engler. We went through our entire team and gave each other hip hop names, so that's J-Rock. Okay, Nate Dogg. I'm Nate Dogg, yeah. I'm Seth; I'm also a principal consultant at FishNet. I'm Greg; I'm a senior security consultant at FishNet as well. I'm Justin; I'm a regular consultant at FishNet Security. So what our talk is about here is we're going to provide a bit of an overview of problems with current testing tools that people may or may not be aware of. The modern landscape is that we end up with people in QA or security testing roles who came from a different background and may not have development experience. So when they run into problems, or into technologies they may not understand, they might not find vulnerabilities that are fairly easy to find. And some of that is a problem on the tools' side, like not being able to handle modern web applications. So we'll go through some of the current workarounds and how people are handling those. We'll go through some proposed solutions, how those problems can be fixed. And then we wrote a tool to start addressing some of these issues, so we'll skip some of the material and get right to the demos and show you the tool. So, what we aren't going to do, which is kind of a lie: we aren't going to beat up on any particular vendor. That's sort of not true, but we're going to try. We didn't say tools, we just said vendor.
Yeah, it said vendor, yes. We also currently can't solve every single problem that we outline, but we're working on it, and we're definitely not going to sell you a solution. So our goals for this are, one, to raise awareness for people who actually test applications. We want to put the focus back on the tester and not so much on the tool, and that's what our tool allows you to do. I know that sounds kind of weird, like we're giving you a tool so that you don't have to use tools. It sounds strange, but you'll get it by the end of the talk, I promise. If not, you can punch me in the face afterwards. And also to get you to submit bug reports for RAFT. I remember a couple of times when some tools came out, and I don't want you guys to do what I did. I would download the tool and give it a try, and then something wouldn't work and I'd think, this sucks, I'm not going to look at it anymore. So don't do what I've done in the past. I realize I'm not being a good advocate of that, but we will fix things, and we will take feature requests and try to work them into the tool. So, a little bit of clarification. Throughout the talk we use the terms fully automated and semi-automated, and sometimes we use those interchangeably, which causes some confusion, and we're going to continue to be confusing about that, so sorry. This slide is going to try to explain what we mean. If you think of a fully automated testing tool, like your enterprise application testing tools, those are like the MAC-10 on the right-hand side, right? Basically what you're doing is loading up a bunch of bullets and just spraying them in a general direction. Anybody who's ever shot a MAC-10 knows that you can't hit anything with it; it might as well not even have any sights on it. You just point it in the general direction and put holes in stuff. Not always the best solution.
So semi-automated testing, or how it should be, which is what you think of as sending data to something, like running through a bunch of different test cases, that's more like the semi-automatic sniper rifle. You're honing in on a problem, and you're really trying to focus on that problem and find vulnerabilities based on a specific set of test cases. And that's mostly what we're talking about during this talk: the left-hand side, the sniper rifle. Okay, so I'm going to talk a little bit about the current solutions that exist out there. I'm not sure how well you guys can hear me. Basically, we test for a living, right? We're looking at web applications, and we've figured out that the tools all fall down somewhere or other. I mean, you get the fully automated tools where you click the start button and it's supposed to find every vulnerability under the sun, and I end up spending two to three days configuring the thing, and it comes back and tells me SSL is misconfigured or some shit like that. So the automated tools fall down, both the fully automated ones and the semi-automated ones. They have session and state problems. You've got scanners that will run and, you know, replay what you did, and the next time you go to the site and have the tool run, it's out of state, and they can't figure out that they're out of state. So you've got hundreds or thousands of requests that the tool is making to the website that just aren't valid, because they're all returning, you know, 302 redirects or something like that. They have problems with these complicated applications, the modern technologies we already talked about: CSRF tokens, rich internet applications, web services. The tools just don't understand them very well. Furthermore, all this data that is collected is in disparate locations, right?
You've got your proxy that you're using while you're testing. You've got the full-blown app scanners, WebInspect. Sorry, I probably wasn't supposed to mention that. But the full commercial scanners, getting data out of those tools can be one huge pain in the ass. And as you go further, you've got all this data that you've collected, and there's no analysis run on it after the fact, right? You've got a single request and response; the scanner goes in, makes its assumptions about what happened, and then basically discards that data. There should be some sort of analysis that goes on after the fact. As testers, we need more interaction, not abstraction, right? We need to be able to understand the application in order to break the application, in order to find the vulnerabilities. And if the tool is basically a point-and-click tool, you don't understand what it's doing behind the scenes. All the vulnerabilities that I find typically are because I'm in the application, actually looking at the requests and the responses at a low level, not at the level that's being presented to me by the tool itself. Furthermore, we miss portions of the application. If you think about the mobile application space that's out there now, when your iPhone makes a request, you get a different application than you do when you're using your Firefox web browser. If the tool doesn't understand that it needs to fuzz the Accept header or the User-Agent header to actually get into those portions of the application, you're going to miss maybe 50% of what the developers built. And there's more; we could go on forever. If you really want to know about the problems, we really beat them up in the white paper that we presented. Can anybody tell, anybody who tests web applications, say you have an automated testing tool, can anybody see from the screenshot why it might have a problem?
There you go. It says sign out. A lot of automated testing tools look for a regular expression to tell whether they're in state or out of state. So the application is clearly asking the user to authenticate, yet it says sign out, like they're already logged in. So a lot of tools will continue to send their tests and fail based on that. And then you've got things like the risk-based login that we were talking about at the beginning. Financial applications, depending on where you're coming from, if it's a new browser they haven't seen before, are going to ask you for more layers of authentication. And the first time you step through it with your tool, it may ask two or three different questions, and they'll be different the next time you hit the application. So these tools are, you know, basically killing us when it comes to application testing. There are simple features we're missing: request times, authorization checks, storage locations, the new HTML5 spec, Flash objects, things like that. Especially the tools that were built in, you know, 2001, 2002, they don't understand any of that new technology. So now that we've talked about why the existing tools can't do a good job on the whole picture, we're left with trying to figure out, at the end of the day when I have to do an assessment, what am I going to do? Even though a lot of tools don't handle the whole picture, there are some that can handle pieces. So we run a bunch of separate tools that do little pieces, and a lot of them don't have any analysis of their own. Sometimes we'll write our own custom scripts to do something custom, to generate a whole bunch of requests, but then we don't have any way to analyze what we just did. And not only do we not have a way to analyze the one thing from one tool, we've got all this stuff in all these different data formats, and we don't have any way to get it all in one spot and then look for commonalities.
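The regex-based session check described above can be sketched in a few lines; this is an illustrative example, not any particular scanner's code, and the pattern and page content are made up:

```python
import re

# A naive "in session" check of the sort many scanners use: if the
# logged-in marker appears anywhere in the body, assume the session
# is still valid.
LOGGED_IN_PATTERN = re.compile(r'sign\s*out', re.IGNORECASE)

def naively_in_session(response_body):
    return bool(LOGGED_IN_PATTERN.search(response_body))

# A login page that still shows a "Sign Out" link in its navigation
# fools the check: the scanner keeps firing tests at a login form.
login_page = '<nav><a href="/signout">Sign Out</a></nav><h1>Please log in</h1>'
print(naively_in_session(login_page))  # True, even though we are logged out
```

The point is that the check matches the page chrome, not the actual authentication state, which is exactly the failure mode in the screenshot.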
Another problem with doing it this way is that with most of these tools, even when they do have analysis, you can only run it on the stuff you just ran. If you've got data from a bunch of tools from last year, and now some new type of vulnerability came out and you want to check, hey, do I have that in any of my stuff, you're going to have to scan everything again; you can't just take the results that you had and run the analysis. So instead you could try testing manually, but when you look at the scope of the assessments, at least the ones we get, there's not really any chance that you'd get anything meaningful done by manually clicking things and manually looking at the responses. We've got, you know, thousands of pages to look at in the course of two weeks; you're just not going to get it done. You need something that helps you reduce that burden. So just doing it manually isn't going to work. If you've got a crazy tool that almost does what you need, sometimes you can modify something to do what it wasn't supposed to do, but even that can be painful, and you're spending time writing scripts when you should be spending time testing the program you're supposed to test. Has anyone in here ever had to use something like Windmill or Selenium to do a security test? Anybody? A couple of people. Okay. So that's kind of what we're talking about. You know, Windmill and Selenium are more or less QA tools; they're not really made to find security vulnerabilities. So you might be looking for something specific. Let's say you had some Selenium scripts you'd modified, and you were looking for SQL injection. Well, you're focused on SQL injection, but you might miss a whole slew of vulnerabilities and other data in the same requests that could be easily found if there was proper analysis done on them.
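The idea of running proper analysis over everything, rather than the one check your script was aimed at, can be sketched like this; the analyzer names, patterns, and captured data here are simplified examples, not RAFT's actual rules:

```python
import re

# Sketch: run a battery of regex "analyzers" across every captured
# response, so a script aimed at SQL injection still surfaces other
# findings hiding in the same traffic.
ANALYZERS = {
    'sql_error':   re.compile(r'(SQL syntax|ODBC|ORA-\d{5})', re.I),
    'stack_trace': re.compile(r'(Traceback \(most recent call last\)|at \w+\.\w+\()'),
    'private_ip':  re.compile(r'\b10\.\d{1,3}\.\d{1,3}\.\d{1,3}\b'),
}

def analyze(responses):
    findings = []
    for url, body in responses:
        for name, pattern in ANALYZERS.items():
            if pattern.search(body):
                findings.append((url, name))
    return findings

captured = [
    ('/search?q=1', "You have an error in your SQL syntax near '1'"),
    ('/debug',      'Server at 10.0.3.7 returned an error'),
]
print(analyze(captured))
# [('/search?q=1', 'sql_error'), ('/debug', 'private_ip')]
```

Because the analyzers run over stored data, the same battery can be re-run later when a new pattern is added, without re-scanning the site.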
So the other problem is that many tools were fine when they were first written, so that you could, you know, present at DEF CON a couple of years ago, but they've never really been kept up to date, and they don't adapt. And so, just like our picture, you need to stay up to date with the times or you will become useless, I would argue. I don't think she's ever been useful, but it was just a funny picture. So, anybody here use Nikto on a regular basis for doing web application testing? Hands, please, so we can get some kind of count. Okay. And how about DirBuster? A couple more hands, all right. Okay, so I'd say that was maybe 10%. If people haven't figured it out yet, Nikto is just a piece of crap. If you actually look at what it is, it's just a list of web request URLs that get sent, with some pattern matching on what comes back. There's no intelligence in it at all. About the only thing it's good for is testing very broken WAFs. And DirBuster, we're going to talk a little bit more about DirBuster. Those word lists that you're using in your DirBuster tests, whether you're using DirBuster itself or importing them into another tool, haven't been updated since 2007. And if people haven't noticed, the web has moved on since then, and there are a lot of common words that we see all the time that aren't in those lists. There are a couple of reasons for this. When those lists were first generated, they were generated by going out to websites and spidering them, seeing what directories existed, and pulling words down. Well, if you think about it, you're interested in the parts of the website that don't exist, not the parts that do. So if you're just depending on values that come back, you're going to be missing all the stuff that you think you should find but aren't.
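For context, the kind of word-list-driven content discovery DirBuster does boils down to a loop like the following; this is a toy sketch with a made-up stand-in for the HTTP call, just to show why the quality of the word list is the whole game:

```python
# Sketch of word-list-driven content discovery: request each candidate
# path and keep anything that doesn't come back 404. fetch_status() is
# a pretend server standing in for a real HTTP request.
KNOWN_PATHS = {'/admin': 200, '/backup': 403}

def fetch_status(path):
    return KNOWN_PATHS.get(path, 404)

def discover(wordlist):
    hits = []
    for word in wordlist:
        path = '/' + word
        status = fetch_status(path)
        if status != 404:          # 200, 403, 301... all interesting
            hits.append((path, status))
    return hits

print(discover(['admin', 'backup', 'jeremiah-grossman']))
# [('/admin', 200), ('/backup', 403)] against this pretend server
```

If a directory that really exists isn't in the word list, this loop never asks for it, which is exactly the blind spot a stale 2007 list creates.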
And there are a lot of bad words in there at the same time, because search engine optimizers do keyword stuffing, so you end up with all these strange words that are completely useless. So here are some common words that we see in our assessments on web servers that aren't in the small and medium lists. I mean, aspnet_client, that's pretty important. The _vti directories, there's good information that can be pulled from those; that's something to be aware of. These are common things that are missing. So if you're depending on these tools and not actually looking at what they're doing, you have these big blind spots. And this is one we found reviewing the DirBuster list, and we were like, what the hell is this? Jeremiah Grossman. I'm sure people know Jeremiah, either by reputation or from events like this. When was the last time you found that on a web server? Really? It doesn't make any sense. Maybe he just stopped by to say hello. The thing I love is that aspnet_client isn't in there, but Jeremiah Grossman is. Yeah, it's like, really? Does nobody look at this stuff? So that led us to say, well, we need to generate our own word lists. How do you approach that problem? We said, well, we can go out and find the words that people are telling us not to look for. If you're familiar with the robots.txt exclusion standard, basically webmasters go through and mark the parts of the site that they don't want Google spidering. Maybe there's sensitive data there, or maybe it's underlying web application components. Those are exactly the kinds of things we're interested in when we're doing an assessment. So we went through and pulled down a lot of sites. We combined the Alexa and Quantcast top million site lists, made about 1.7 million requests, and found 350,000 unique files. And we went through and generated word lists based on how prevalent certain words were.
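The robots.txt harvesting just described can be sketched roughly like this; the parsing and ranking here are a simplified illustration of the approach, not the actual pipeline used to build the published lists:

```python
import re
from collections import Counter

# Pull the Disallow'd paths out of each robots.txt, split them into
# path segments, and rank segments by how many sites mention them.
def disallowed_segments(robots_txt):
    for line in robots_txt.splitlines():
        match = re.match(r'\s*Disallow:\s*(\S+)', line, re.I)
        if match:
            for segment in match.group(1).split('/'):
                if segment and segment != '*':
                    yield segment.lower()

def build_wordlist(robots_files):
    counts = Counter()
    for robots_txt in robots_files:
        # set() so each site votes for a segment at most once
        counts.update(set(disallowed_segments(robots_txt)))
    return [word for word, _ in counts.most_common()]

samples = [
    "User-agent: *\nDisallow: /admin/\nDisallow: /cgi-bin/",
    "User-agent: *\nDisallow: /admin/backup/",
]
print(build_wordlist(samples))  # 'admin' ranks first: both sites hide it
```

Counting sites rather than raw occurrences is what makes this a crowdsourced "what are people hiding" list instead of a keyword-stuffing echo.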
So it's kind of like we crowdsourced what people are telling us not to look for. We've been using those lists on our assessments, and we're seeing better results than when we depended on the DirBuster lists. Those are out in our SVN on Google Code; you can pull them down. Right now they're just in a 7-Zip file. So pull them down, look at them. If you think they suck, or there's some stuff in there that doesn't make any sense, let us know, give us feedback. So one of the things that I always like to say is that tools don't find vulnerabilities; people do. Tools should be there to assist in the identification of vulnerabilities, not exactly point them out. So if you have a tool that's telling you something is vulnerable, the tester has to have the knowledge to look at that data and say, yep, that's an actual vulnerability, or no, it's not. So we decided that there are too many tools out there with absolutely no intelligence when it comes to this sort of fuzzing or fault injection testing of applications. Say you were testing for SQL injection, and you have a good SQL injection list that you want to test with; with modern applications, a lot of times those tests fall down. If you've right-clicked in your favorite tool and said send to (insert semi-automated testing tool here), and there's a CSRF token that changes every single time the page is laid out, that means every single one of your tests is going to fail. That's a big problem, because often the automated tool might not find an instance of SQL injection that would have been very easy to find if a person had tested it manually, but they're using these semi-automated testing tools, the tests are failing, and they assume they can move on to other tests. So a smart semi-automated testing tool should have several components.
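The CSRF-token problem above is worth making concrete. A smart tool re-fetches the page before each test case and splices the fresh token into the request, so a rotating token no longer invalidates every replay. A minimal sketch, with a made-up token field name and form:

```python
import re

# Extract the current CSRF token from a freshly fetched form page, then
# build the test request around it. Field name and regex are examples.
TOKEN_RE = re.compile(r'name="csrf_token"\s+value="([^"]+)"')

def refresh_token(form_page):
    match = TOKEN_RE.search(form_page)
    return match.group(1) if match else None

def build_test_request(form_page, payload):
    # Each replayed test carries the token the server just issued,
    # instead of the stale one captured when the request was recorded.
    return {'csrf_token': refresh_token(form_page), 'q': payload}

page = '<form><input name="csrf_token" value="a1b2c3"></form>'
print(build_test_request(page, "' OR 1=1--"))
# {'csrf_token': 'a1b2c3', 'q': "' OR 1=1--"}
```

This is the same idea RAFT's dynamic data replacement generalizes: tag the value in a sequence once, and every subsequent test picks up the live value automatically.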
Obviously, because we want to make sure our tool is smart enough to stay in session, it should have sequence building and running. If you have a difficult test case, you might need to run a sequence of events prior to sending your test case, and even a sequence of events after you've run it. For instance, you may need to run a sequence that logs you in, run the test case, and then log out. There are crazy, weird applications like that; a lot of international money transfer applications have weird functionality like that to make them more difficult, I guess (security through obscurity), but any tool that can handle those three things can test them rather easily. Also content discovery and support for modern technologies: we all know that when something new comes out, developers want to use it, and it always takes testing tools time to catch up. So here's our tool. We're going to start talking about how we solve these problems and how you can use it. A little bit of history about RAFT. It stands for Response Analysis and Further Testing, which is actually on the next slide; I probably shouldn't have said that. You're probably wondering why there's a big red raft in the center of the screen right now. This tool was created because I was on an engagement and had to write some custom scripts to test some functionality of a web application, and I got to thinking about the data I was collecting. Quite simply, I wanted to be able to see the data syntax-highlighted, I wanted to see it rendered in some sort of web view, and I wanted to be able to parse out scripts and comments and all the general things. So I created a simple Qt interface that allowed me to do that. Of course, the tool today looks absolutely nothing like the beginnings of it; it used to be basically a SQLite browser with access to web technology. So, basically, it's not an intercepting proxy. That might throw you for a loop a little bit, but
we decided to take a different route and change the paradigm that everybody's used to, because if you think about it, you're just chaining responses through another device or another application. That's really important, because almost all of the workflow you see in all the other tools is: you set up this intercepting proxy, use whatever browser you want to browse through the site, and then come back and look at the tool again. We decided to just cut out that middleman. Instead, you can import data if you already have it, or you can use the browser that we provide ourselves. We actually built WebKit right into the thing, so it works just like your Safari or your Chrome does, and it renders things the same way. A lot of tools that have their own browsers often have something that's not as full-featured, or just a little weird; this one is going to work just like you expect it to. Also, a big piece of this is that we made a custom analyzer engine, so you can easily write whatever it is you want to find and then run it against all the data you have. It's all open source; it's Python and Qt, and it's designed for testers. It's not a fire-and-forget, click-the-button-and-your-report-is-done tool; it's for people who test web apps but need something to help out. So now we're going to have the demo, what you've all been waiting for. Yeah, we got through the boring technical stuff, so we'll see if we can get this going. All right, this is the user interface, and we're sorry about the screen resolution; it looks like crap. So this has a little bit different workflow, like Nathan talked about. This started out as a way to get data from other data sources. We have our own capture format that we've defined; it's an XML-based format, we've provided a DTD for you, and there's a urllib2 module for people who do Python that you can just plug in as a processor, and it automatically generates this format. So we'll look at some data we've captured, and, you know, I just
want to point out an important thing that we discovered. We're off the internet anyway, but when you're looking at rendered data from old assessments, you may have a limited time window for your testing, and you're not supposed to be interacting with the site. So we have this black-hole network: if you're looking at old response data, no traffic is being sent out to the internet, but if your captured data has all the references, all the images, anything that was originally referenced, then the built-in rendering will pull that out of your capture data, and you'll see it as it originally existed. We have responses here; I'll zoom this in so you guys can see. We have this zoom feature, which is really handy. You can see the request, the response. I don't know if there are any scripts on this one, but we pull them out; I'll find some with comments. Links: we pull out all the links, so you get a quick view of what all the references are. Any form values, you get those too; we go through and parse the DOM and generate the list of forms. And then one of the really handy features: if you do a lot of assessments, especially on highly dynamic applications, you know that you have to do view source a lot, view generated source. So we have the generated source: we render the page and pull it out of the DOM, and any of those references, like links and forms that are dynamically added, also get included here. That's pretty handy. So that's imported data. We support the RAFT capture format; we also support Burp logs, Burp state files (if you use Burp Pro, you know the saved state files), saved Burp XML, and WebScarab and Paros message log formats. We don't do their storage, but we're working on that because it's not a good interface. We also have our own built-in web browser, so you can go and pull up sites. Let me copy one here. Don't get into my porn. Yeah, definitely. So we just have an instance of the Broken Web Applications project running locally here. So this is
a simplistic (I don't know why I'm bending over like that, this looks strange), a simplistic view of our browser. It's made as a proof of concept. The question we got the most: there is a back button, you just have to right-click in the page to go back. That's terribly intuitive. We definitely have some UI inconsistencies; we're not GUI designers, so you just have to work through those. If I scroll down here, we'll see that any requests we've been making (we have a zoomed-in view) are now being saved in our storage. We use SQLite on the back end, so those are just getting saved to a local storage database. Other interesting tools: we have a little search engine. You can go through and type in, let's say you want to look for HTML comments; this one will have some comments in there. We offer a variety of searches. There's a built-in differ. Let's see if I can find some pages that are similar. I'll just pick two that aren't, just so you can see the differences. This is a really bad example, but it's syntax-highlighted, and it's using the built-in difflib in Python, if you're familiar with that, so it's based on word matching and not byte positions. We're going to cover some of these things like the analyzer and the requester. Simple tasks, like requesting a whole bunch of URLs, going through and saying, well, copy my site and re-request it with a different authentication sequence, that's really tough to do in a lot of tools, but here we have it all templated out; you just copy the URLs in, pick a sequence, and rerun it. Our crawler is a little bit different. It has the traditional web spidering approach, but in addition to pulling down and analyzing the raw HTML, it also renders everything. So for highly dynamic web applications that are based on Ajax or some other rich client technology, we can go ahead and pull out any dynamically
generated links and follow those. We generate mouse events, text events, submit forms, do clicking, all on a dynamically generated basis. We have an encoder; probably the most interesting thing was UTF-7, generating malformed UTF-7. There are a lot of WAFs out there that check for UTF-7 in cross-site scripting attacks, but they don't check for malformed versions, and web browsers are more than happy to render UTF-7 even if it's malformed. We have a data bank; it's going to hook into both the spidering and any sort of sequence if you do dynamic data replacement, which we'll cover a little later. Site map: you get a view of the site. In addition to your traditional cookies, we also offer views of Flash cookies. So this is pretty uninteresting; Nathan cleaned all the porn off these machines, so all the Flash cookies got deleted. Right now you can only view them, but we're going to give you the ability to edit them. That's really important, like if you're testing risk-based applications, with risk-based logins that store data in those Flash cookies, you know, having a good way to go in there and edit values. And HTML5 local storage; I'll go through a bit of a demo of that later. And that's about it for now; we're going to start covering some of the other interfaces. J-Rock doesn't know how to use a Mac, sorry guys, so I have to drive it for him. So, as the regular consultant, I get all the boring slides too. We run on Mac, we run on Linux, we run on Windows. Mac works pretty well; we do some compiling, I'm not going to go into the boring details. The easiest way to use this right now: if you have BackTrack 5, you just have to do one apt-get install of QScintilla, then download our stuff, and it works. We've been trying to keep everything that we need packaged in with it; there are just a few things that we don't. The list is up right now on Google Code. We will eventually have packages, so for those of you on Mac and Windows it'll be easier to just download it and run it. And please,
please. So how many of you are web application testers like us, show of hands? How about functional testers, who might use tools like this for security? A couple more. Who knows how to write documentation? We need those guys too. So for those of you who are testers, whether functional or application, your specialty is telling developers how much their software sucks. We need your help in the form of bug reports on what goes wrong, and then hopefully some help on how to fix it too; but even if you just tell us what went wrong, that would be great. You're probably going to want to wait a week or two so we can get back from Vegas and fix some of the problems that we found while we've been out here. We've been doing like twelve-hour development days, and that's the way it goes; we presented at Black Hat, and we didn't see the outside of our hotel room pretty much the whole time. Okay, so now we're on to the analysis engine. It's obviously in the title; this is a big portion of RAFT. We wanted something that would actually analyze everything that we currently had. I don't want to have to spider the site again to figure out if there are comments that may have some interesting data in them. Yeah, I can go back through the Burp state or whatever, and you always find yourself writing another manual tool to pull more data down, but if I've already got all that data, let's actually just analyze what's there. So our model here for analysis is something modular. We want to be able to write an analyzer one time and have it analyze all this data that we have. We want to be able to analyze sessions as a whole, not just single requests and responses. The responses are different, but we made the same request: we want to know that, and we want to know why. We want to find what others ignore. We want to look at timings, like how long it takes a page to respond. We want to do some image analysis. If you've looked at Google Images lately, they now actually pull out EXIF data and will
display where exactly an image was taken or what camera was used. That's not something you typically look at during an assessment, but it could be useful information, especially for a social engineering engagement, something along those lines. So the possibilities are really endless, and these analyzers are extremely easy to write. Let me show you the demo. Right, I guess we've got analysis going on right now, but there's nothing in the analyzer yet; we haven't run the analyzers. We are looking at hooking some of the scanners and fuzzers into the analyzer so it would kick off specific analyzers when we make a request, things like that, but currently you have to actually click this circle button up here, which runs the analysis, and we get back, in this case, 120 results. Now these analyzers, this is what we've currently written; some are in flux, as developers do. We're looking for error messages, insecure cookies. We're analyzing some redirects to see if there is more information behind a redirect than is actually displayed to the browser; it takes you to the next page, but I've seen applications where people actually let you spider the admin section because the developers didn't write the redirect portion of the application correctly. Timing analysis (thanks a lot, PHP developers, right on, man): the timing analysis is looking for denial-of-service pages, anything that would cause the server to spin longer and take up more cycles, so we could potentially execute a denial-of-service attack. So these are the ones we've currently written; if you've got other ideas for what can be implemented, let us know. We did implement simple regexes and strings, so all you have to do is change the configuration, add the regular expression that you want to look for in all of these requests, and it'll display the matches to you. Currently I think we're looking for personal information, so in AltoroMutual it found some private information here, a phone number, and I'll
actually tell us in the response, I believe; it finds the phone number and shows it to you, so it's easy to scope through there. And I think when we were building this, we decided, hey, we want to know if XSS has been found. It took us all of 20 minutes to write the XSS finder, which checks whether an alert box popped; if it did, and it was in the request, then XSS is obviously within the application. We'll do an example of that in just a minute. Yeah, so that's the analysis engine; jump back over to Keynote. So, a little bit about our smart testing components. So far we've been talking about data you've already collected; now you want to do some additional testing based on that data. So we created a requester and a fuzzer, and those are templatized; we'll get into those in a second, we're about to do the demo. But we also have the ability to run sequences, so you can launch the sequence builder and then import a sequence, or just select it from the drop-down box when you're doing your testing. And of course we have a browser object, and that browser object can be utilized during the testing as well. This templating approach is probably better just to show you rather than explain, so we'll go through a simple example of using the templated approach to fuzzing. We're going to grab our URL here and make a request for the resource; we want to actually replay this request first. We're just using the OWASP Broken Web Applications project; we know that this is vulnerable to XSS, so it's a good place to test. And we're using our own built-in web browser, so any of the requests that you make through here end up in your data set automatically. So here's the request that I just made, name equals test. Easy enough to send that over to the web fuzzer, and as you can see, there are templates here. There are some automatic templates, but there's a payload drop-down box for where you want the payload to go, and here's the mapping screen, so
you can map payload names to different sources. Yes, the two in there are hard coded; that will not be hard coded for long, we just needed something for the demo. We'll have a directory where you can load all your favorite lists, and those will be automatically available to you. In this case I need to fuzz the name=test variable, so I'll add the marker just there at the end of the URL. This will be explained better in our documentation, because we're working this out right now, our copious documentation at this point. So all we have to do is start the attack. You see it went really quick; again, these are hard-coded lists, it's not everything and anything, but you can actually view the results, all of the requests it sent. So, let's see, you can look at each of them separately, or we can go back and look at them in the response view. Now, at this point we've done these tests and we want to see if they found any XSS, so we're going to run the analyzer again, and now, instead of the 120 results from the first run, we've got 162, and all of the new ones are in the XSS finder. It looks like we've got a couple here; if I actually render the page and the injection was successful, I get the XSS pop-up, because I am really rendering it, I mean, we're actually running the WebKit engine behind the scenes. So we're going to have a sequence fuzzer, and, we were looking at his laptop, it's kind of funny because it locked on us, so we don't know where we are. So, we're going to have a sequence fuzzer, and that sequence fuzzer is going to allow you to tag data in sequences. We call it dynamic data replacement, so you can import a sequence of events and tag that CSRF token, that elusive CSRF token that's making all of your requests fail. You can tag it and then place it into your payload, so now all of those previous tests that were failing will become successful. And in the future we're going to have the ability to handle any kind of dynamic data in the DOM, so the really, really, really, really, really difficult applications, well, you'll now have visibility into testing those without having to do them by hand. Yeah, so I'll just do a quick sequence builder demo. This is obviously not completely functional yet, but it'll at least give you an idea, because we're still in the process of getting our dynamic data replacement features to work. So here we have a pretty typical login form; we'll just log in, Fufu, and as that's submitting, it's getting captured. If that ever comes back we'll see the parameters. The network is down, the local network, and we're in bad shape. But you can come through here, and by default any sort of media responses are excluded from sequences; this is where you configure the dynamic data replacement. We're going to offer the ability to run the sequence in a web browser, so you can literally render the whole thing. You know, we discussed earlier some of the session state problems, so we offer both an in-session pattern and an out-of-session pattern, so that you can figure out, well, do I have some specific request that's causing a problem? So let's see, I think I'll log out. Log out; people that are leaving are going to be really upset because we're giving away free cookies. Yeah, what am I looking for here, log in, log out, just sign out, could be, no, it's log out. If you've configured other tools, sometimes this can be a pain to figure out, so we actually search through it dynamically and mark it up for you. Yep. So, how many people in here would like to search for DOM-based XSS without having to use a browser plug-in, or having to send your website off to some application on the web and have it test it? Anybody? That should be pretty much everybody. We have a built-in DOM fuzzer integrated into our tool, so we can identify things like DOM-based cross-site scripting without you having to use another tool, and we're going to do a demo of that right now. I have imported a couple of web pages that I know are vulnerable to DOM-based cross-site scripting, because I wrote them.
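The detection scheme behind this, a unique numeric marker per generated payload, so that a fired alert box or a pattern match in the rendered DOM points back at exactly one injection, can be sketched roughly like this. This is a hypothetical Python illustration, not RAFT's actual code; the template strings, marker range, and function names are all made up for the example:

```python
import re

# Hypothetical payload templates for different injection contexts:
# raw HTML, breaking out of an attribute, breaking out of a JS string.
PAYLOAD_TEMPLATES = [
    "<script>alert({m})</script>",
    "\"'><script>alert({m})</script>",
    "';alert({m});//",
]

def generate_payloads(start_marker=90000):
    """Yield (marker, payload) pairs, one unique marker per test case."""
    marker = start_marker
    for template in PAYLOAD_TEMPLATES:
        yield marker, template.format(m=marker)
        marker += 1

def markers_fired(rendered_dom, alerts, payloads):
    """Map detections back to the payloads that caused them.

    `alerts` is the set of values captured by a hooked alert() in the
    rendering engine; `rendered_dom` is the post-render page content.
    """
    found = {}
    for marker, payload in payloads:
        if str(marker) in alerts:
            # The hooked alert box popped with our marker: definite XSS.
            found[marker] = ("alert", payload)
        elif re.search(r"alert\({}\)".format(marker), rendered_dom):
            # Payload was written into the DOM unencoded: worth a look.
            found[marker] = ("dom-match", payload)
    return found
```

Because each marker is unique on the page, a hit unambiguously identifies which injection point and which template produced it.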
I'll show you the response here. So if you look in the response, it's simple, this resolution is just killing us, it's just doing a document.write of location.href, so that's going to be vulnerable in some circumstances. This one is doing something similar; yeah, this one is doing an unescape. So if you come over here, our DOM fuzzer is very basic, but it still finds stuff. If you look at these tests, we've generated some unique script statements: a pretty common one, an alert that would go into a string if you're injecting into a string context, and a pattern that's going to be pretty much unique on the page. So what we'll do is go through and run the fuzzer based on the, oops, that is the absolute first time that that's happened, I swear. You know that sounds like a joke, but it is the first time it's ever done that, so I guess we need to fill out some bug reports. Yeah, I guess so. So what this is doing is taking the saved HTML data and loading it into an instance of WebKit, setting the base URL to the modified value, so this is not making any network requests at all, but it's still replaying those values and rendering the page as if it was on the original site. We should start seeing some results coming through, and we're doing a couple of kinds of matches here, so we're going to go to the top. We hook the alert box, so if we see an alert box pop up with that number in it, we say, bang, found some DOM-based cross-site scripting. In other cases we're just looking for pattern matches in the rendered DOM content; so, like, if we look at this guy, you can see that's written out, and if we render it, boom, right there. So we found DOM-based cross-site scripting without sending any network traffic at all. This next one we have screenshots of in our presentation, but I'm going to try it live; if this works, I'm not sure. So, one thing tools don't normally do is give you visibility into things like LSOs and local storage and allow you to modify those to test what would happen. Yeah, this is kind of a twitchy one. So, the HTML5 demos: if I look at this one and render it, then come over here, you should see the local storage now has a value in it. Yep. So if we set that, and this is the part that doesn't always work, you're not supposed to tell them, yeah, well, like we said, this is still kind of a work in progress, but we think no other tool allows you to do this, so being the only one, even if it kind of sucks right now, is better than there being none at all. Yep, you may disagree. Yeah, that didn't work, so we'll switch back over. What happens is it gets cached; the browser is caching that local storage reference, and we're directly modifying it through SQLite, so you don't get to see the change. Here's the way it actually should work; you just have to restart it, which is annoying. You modify the value, and then when you go back and re-render the page, you get the pop-up box, so that's actually modifying the local storage. Some people ask us, well, what is the attack scenario there? And we say, well, it's really no different than any other situation where one part of an application accepts data, doesn't properly sanitize it, and then replays it later. Let's say you have a web page that uses local storage, I mean, developers are starting to use this and not even realize it, like they're importing jStorage and using it without understanding where these values are actually being stored; later you come back, the page pulls the value out of storage, doesn't encode or sanitize it, and then you have cross-site scripting. So let's say you do some sort of forced browsing to send the value to that web page and get it saved, and then force the user to re-render that page. And then we have no more time. So, as far as documentation goes, there's some on the project page, as well as our slides, and a copy of these slides is already available on our project page. If you have any questions, now's the time to ask them; we're actually going to be in the Q&A room as well, so we can go through things in more depth there.
Thank you for coming out. We hope you use the tool; please submit bug reports, please submit feature requests. We had some people come up to us and say that they don't code but they'd like to help out; well, we need people to submit bug reports and help us with documentation. Yeah, right, docs, please. Oh, and we aren't an interception proxy. Thanks.
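For the curious, the direct SQLite edit from the local-storage demo can be sketched roughly like this. This is a hypothetical Python illustration, not RAFT's code; the `ItemTable` key/value schema and the UTF-16LE value encoding are assumptions based on common WebKit `*.localstorage` files, so verify them against your own browser build:

```python
import sqlite3

def set_local_storage_value(db_path, key, value):
    """Insert or replace one localStorage entry in a .localstorage file.

    Assumes the WebKit-style schema: ItemTable(key, value) where the key
    is unique and the value is stored as a UTF-16LE blob.
    """
    conn = sqlite3.connect(db_path)
    try:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS ItemTable "
            "(key TEXT UNIQUE ON CONFLICT REPLACE, value BLOB)"
        )
        conn.execute(
            "INSERT INTO ItemTable (key, value) VALUES (?, ?)",
            (key, value.encode("utf-16-le")),
        )
        conn.commit()
    finally:
        conn.close()

def get_local_storage_value(db_path, key):
    """Read one localStorage entry back out, decoding the UTF-16LE blob."""
    conn = sqlite3.connect(db_path)
    try:
        row = conn.execute(
            "SELECT value FROM ItemTable WHERE key = ?", (key,)
        ).fetchone()
        return row[0].decode("utf-16-le") if row else None
    finally:
        conn.close()
```

As noted in the demo, the browser caches its local-storage database, so a value edited on disk this way typically only shows up after the browser (or the embedded WebKit instance) is restarted and the page is re-rendered.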