Hey, thanks for the good turnout this morning. When I originally looked at my time slot I was like, oh wow, that's crappy, you know, Sunday morning at 10 a.m.? Everybody at DefCon's gonna be at church. I'm glad to be pleasantly surprised that you're all here. So with that being said, we're going to be talking to you about some research that has been percolating in my mind for about a year, and then I introduced Marce to it. I've been doing web application security since around 2004, and I've worked with every conceivable application security technology from a black box perspective, and the one thing that I have noticed is that the technology universally has the same problems that Whisker had, that the earliest spider technology had, that the most rudimentary form crawler and fuzzer has. There's a common set of underlying weaknesses that have never really been fixed by the development community or by any commercial vendor. So we've created a methodology and a system for exposing those problems. It can be used in a variety of ways, and we're going to talk to you about that.

The big idea, though, the main overview: is there anybody out here who's ever gotten a headache from a false positive? You've run some commercial scanner and you've had so many false positives that it hurt. The idea of reverse benchmarking is to come up with a system that allows us to begin to expose the flaws and faults in the mechanics of web application scanning technologies without violating the end user license agreements. Those little things that you click through have actually caused a significant amount of grief for researchers, because you can't just debug the application: it's against the law, of course, to begin opening it up and exposing its object code or looking at how it does its tests, and it's certainly against the law to discuss these things publicly. What's neat about this methodology is that it's totally legal. You simply use the application in the right kind of situation, and those faults percolate to the top. I would mention, just before we move on, that we've started a community initiative and a nonprofit. It's at reversebenchmarking.org; it's also the ORB project. We're also going to be working with Dinis Cruz and with SiteGenerator to get our reverse benchmarking methodology built into that. All right, Marce, you want to take it away?
Marce Luck. I've been in security since, like, '99, I think. Yeah, I've worked for a lot of places, no place twice, and now the people I work for don't let me actually say I work for them. But continuing on with the rest of our speech: that was the introduction by Tom, and then we're going to talk about some general concepts in the web application security scanning space, particularly black box scanners. We'll talk about the problem of false positives, which most of the people in here are probably somewhat familiar with. Then we'll introduce reverse benchmarking and how that's somewhat different from benchmarking or baselining, and go into a little more depth on that. We'll talk about ten common false positive types, then further research, and also some of the statistics that we came up with.

Here's an example of an end user license agreement. If you'll notice, it says specifically that you cannot disclose how it works, and that's the part that we think is creating a chilling effect, along with the fact that there's actually been very little research on this. The idea of reverse benchmarking, while some people have certainly thought of it, no one has actually executed on it. I know OWASP, with the SiteGenerator idea, and Dinis sent out emails, you know, like 2005, starting to scratch at this surface, and it's definitely a problem that's been known since the first of the web application security scanners rolled off the assembly line. I'm going to pass back to Tom for him to talk about his shuriken.

Well, I'm going to skip the shuriken, but I always read this Sun Tzu quote everywhere I go, so I'm just going to read it here. You know, it's the obligatory kung fu Sun Tzu quote: whoever is first in the field and awaits the coming of the enemy will be fresh for the fight, and whoever is second in the field will have to hasten to battle. Now I think when it comes to reverse benchmarking there are going to be some organizations that are first in the field, and that's really what we want. We want people to cooperate and coordinate, because we want a community initiative that makes things better. Ultimately, though, there might be some resistance, some vendors and organizations that give us flak or maybe even try to make what we're doing illegal. In that case, great, I'll just take off my pants and call it the Tom clause. But at the end of the day we're going to push on, because really I'm interested in how we assess and improve the quality of security software overall. We look at things like functionality, ergonomics, how many clicks through the GUI; these are things people buy a security software product for. Feature sets, bling, whether you like the way it looks. But ultimately, at the end of the day, as users we have to deal with accuracy, with false positives, and that's what reverse benchmarking focuses on as a unit of study.

So there are a few concepts. First off, black box technologies are the kinds of scanners that you install at one point and use to scan and assess remotely. The black box analogy is of an opaque barrier: you can't see the inner workings of the piece of software. People do this in a way today, and the common technique is a bake-off. Has anybody out there ever had a scanner bake-off at your company, or just run comparisons? Well, a bake-off is sort of the opposite of what reverse benchmarking is: a bake-off is a positive type of
benchmarking, where you're after a comparison of which technology did better, which found more and had less noise. Reverse benchmarking is a way of soliciting false positives, of causing massive false positives in any technology where there's a propensity for those things to get out of control. So your goal is not to see who did better but, in a way, to see who does the worst. Marce?

All right, here's a little graph we put together to talk about the criteria people have used when talking about vulnerabilities. In the different product lines, vulnerabilities are sometimes understood in different ways: you have the IDS line of thinking, the network security scanner line of thinking, and the web application security line of thinking. But to frame this talk and the research, these are the four quadrants you can have when you're looking for a vulnerability. A positive detection is where the scanner finds a real vulnerability that is there. A false positive, which we're focused on, is where the vulnerability is not there but the scanner reports it anyway. That's the focus, and as the space matures and web application security scanners become more developed, we can start looking at the higher quality problems, like the false negative case, where there is in fact a vulnerability but the scanner doesn't detect it. That's the mystical part of the industry right now: we don't know what we're actually missing. Our approach is that we'll first be able to look at what false positives are there, and once we can get the false positive rates down in the tools, then we'll be able to move on to the higher quality problem of finding more legitimate vulnerabilities. That's a great point, and it's the massive false sense of security you get from something that isn't really digging into your application. Yeah, no, absolutely. No, that's right.
There is nobody, and I mean, we just wanted to put it there because it is relevant. Yeah, so Dinis just mentioned that there aren't any security technologies that will look at an application and tell you a vulnerability is not present; rather, the technologies that exist are all focused on positive types of detection. Technologies that worked on the false negative and false positive side could even lean toward certifying an application, say against a particular vulnerability type: I have looked at you with the following criteria and detail, and you do not have this vulnerability. That's a much stronger statement than what is currently being made. Ultimately we want to phase this research into the study of false negatives, but that will be down the line. That would be the ultimate goal of any benchmarking tool.

We've spent some time on these slides; I'm not necessarily going to pick up the pace, but I am going to say a little bit less. This is just on positive benchmarking. The idea is, typically, that you figure out what percentage accuracy the tools had: scanner Foo found 8 out of the 10 vulnerabilities that we know about, therefore its accuracy was 80%. That is what people typically do with positive benchmarking, and then they whip out their numbers and decide what to do on the basis of those numbers. The problem is that this methodology for evaluating web application security technologies is limited. There are a lot of factors, like selection bias: do you really know what vulnerabilities exist in the sample application you're studying? You might think you do at first, until you get the vulnerability data. At that point it becomes confusing and the process just sort of breaks down. You also have to interpret the data: vendors can tune against a particular application, so you think you've found a technology that's just stellar against your test set, and then you realize, oh wait, they wrote specific rules for exactly what I was testing against. Now, SiteGenerator was designed to mitigate that tuning aspect, and reverse benchmarking is designed to go underneath that radar, to make something you can't tune against, because we can always create a new fuzz set that triggers your false positives. The best example there would be WebGoat. Yeah, WebGoat, people tuning against it.

So what is reverse benchmarking? It's just designed to kick a scanner's ass, and we're going to trademark that. The point, though, is that because I work for a technology vendor, I'm not approaching this from my own vendor standpoint. That is why I wanted to create a community project, an open project. That way it's out of my hands, because it can't be objective as long as Tom Stracener is doing it; when the community does it as a community, then it becomes an objective methodology. Now, I'll still participate in this organization as a member of my own company, and that's how we'd like other vendors to participate too, but that's the idea. Yeah, are there any vendors here? Watchfire, SPI Dynamics, Acunetix? Oh hey, what's up. It depends on the false positives. Yeah, absolutely, but that's a good point, and there's also the idea of semantic differences, right? You can pick an obscure corner case that causes false positives, but how relevant is that to, say, a production environment? It depends on how you're using the methodology, but we need to work out issues
like that. We're definitely focusing on catching the lowest hanging fruit, right, stuff that should obviously have been weeded out in some type of QA process. In the essence of full disclosure, I used to work at Cenzic as well, overlapped a little bit with Tom, so I know there are definitely some QA gaps in the industry in general. Our focus is catching that low hanging fruit, so we're not focused on coming up with the obscure corner cases Tom was talking about, just on stuff that is completely wrong.

So once again, to get everyone on the same page, here's what you do when you want to set up a reverse benchmarking environment, whether you're creating it yourself, and we'll talk a little bit about how you could create one yourself. Ultimately you have a web application scanning technology, and then you have a reverse benchmarking target: a web application with reverse benchmarking capability. It contains what you might think of as trigger signatures and strings and strange architectures. You can imagine, say, a JavaScript road test that's designed to confuse any type of crawling mechanism and make it believe there are URIs or URLs that don't exist; there's a sketch of that kind of page below. You can build trigger mechanisms into this that make the scanner believe vulnerabilities exist. Ultimately you scan this application and get a report, and if you design the application right, the report can even tell you what types of false positives, in a taxonomic sense, have been discovered. Obviously that's going to take a lot of research on the part of the community, first to figure out a good taxonomy for all the general and generic false positive types, but ultimately you're going to enumerate and then categorize the false positives. It will reveal broken or vacuous signatures: if something has a detection signature that's just the string "200", it's going to keep being triggered, until the year, you know, 2010, so it will produce that false positive for the next three years. Something like reverse benchmarking can lay that bare really quickly. It will reveal real semantic flaws in categorization, and it will give you an idea of whether there are systemic architectural problems in the technology. When you go through the JavaScript road test type functionality and then into the scanning, you will see where things begin to break down, and you'll see the limits of the technology.
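Neither the slides nor the talk include the road-test page itself, so what follows is only a rough sketch of the kind of trigger being described, written in PHP to match the era of the sample app; every file name and path here is invented for illustration:

```php
<?php
// road_test.php -- hypothetical reverse-benchmarking trigger page (sketch).
// The JavaScript below fabricates links to URIs that do not exist on the
// server. A crawler that scrapes string literals out of script blocks, or
// that executes the script and harvests the DOM, will "discover" phantom
// resources and hand them to the fuzzer as real attack surface.
?>
<html>
<body>
<h1>Nothing to see here</h1>
<script type="text/javascript">
  // Build phantom paths at runtime, so a regex-based crawler sees fragments
  // and a DOM-aware crawler sees links to resources that were never there.
  var dirs = ["admin", "backup", "cgi-bin", "private"];
  for (var i = 0; i < dirs.length; i++) {
    document.write('<a href="/' + dirs[i] + '/index_' + i + '.pl">x</a>');
  }
  // A string literal that never becomes a link at all, but that
  // literal-scraping spiders will still queue as a URL:
  var decoy = "/secret/passwords.bak";
</script>
</body>
</html>
```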
Marce, you want to talk about the trends, time wise, of where things tend to extend? Well, I don't really want to belabor this slide. The general idea is that web applications since, you know, '99, 2000 have been progressively getting more complex, and the scanners haven't really kept pace. They've definitely made some increases, but most of the changes in the scanners have been GUI focused, or about how fast they can go through their signature base; the signature base and the detection technology just haven't kept up with the GUI development, nor have the numbers they use to categorize severity. One of my friends who does manual penetration testing was lamenting to me, he says, it's getting harder to do my job every year, I'm having to do more work, not less, and it's just because technology is growing faster than the technology we use to perform assessments. That's one of the reasons an environmental stressor like reverse benchmarking can help improve technology over the long run: it's an equal and fair thing that will also allow the community to understand the problem and educate developers. That way, one day when developers are writing new security rules, they'll realize it's essential to put a 404 check into a scanner, right, and that a 302 redirect should not be considered evidence of a vulnerability. We'll show you some of these false positive types when we get into them. I'm going to skip ahead a little bit.

Just to give people an idea: my day to day is a lot of black box testing of various web apps for a large financial, and it's not uncommon to get, you know, a 5000 page report that has a significant number of false positives that have to be manually validated. That takes at least 40 hours of every week, just going through that stuff. So one of my motivations for doing this is the self-serving one: hey, maybe I could work less at some point in the future.

So I thought we'd warp ahead and look at the actual types of false positives. These are real examples of false positives, though we're not showing who had them or whose they were. Here's a good example of what we call a partial match problem. As we go forward as a community we can settle on different terminology or extra false positive types, but I'm calling this a partial match problem because the detection signature was literally a 200, just "200". So any time a GET request was issued for a .pl.bak file, that "200" was matching the date in the response, and of course it will keep matching until the year 2010. So this false positive will be around for the next three years, and it would have shown up in every assessment report ever run by that technology. The idea is that it's good to know these things, because you can weed them out as a worker; if you're using these technologies, knowing what the false positive types are can make your life easier. You can flag them.

Parameter echoing is a big one, and here I've put up my very elite PHP script. You don't get much more elite than this; you didn't see this at Black Hat. Something simple like this, which just echoes junk into a textarea, will really show you a high degree of the semantic level problems with these technologies. And let me give you an idea: there is a cross-site scripting vulnerability in this script. It's echoing the input into the textarea, and it's exploitable if you break out with a closing textarea tag. However, you will get hundreds of different reported cross-site scripting vulnerabilities that are bogus. In other words, semantically, the name of the finding may be a double quote, front tick, backslash cross-site scripting attack that "works". Well, that's a vacuous result, because everything "works": the delimiters are irrelevant when you encounter a part of the application where the information is being echoed back in a non-executable form. Areas of the application that just spew data back cause false positives, because in many cases one little character that you would need to make the payload actually executable may be missing, or it may be injected into a JavaScript function, causing a train wreck, and the data is just not relevant. So, parameter echoing. To give you just one example: if you were creating your own little sample application, sticking something like this in there, with a page behind it containing some common strings, would show you just how susceptible some of these technologies are.
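The "elite PHP script" slide isn't reproduced in this transcript; a minimal reconstruction of what's described, a page that reflects a parameter unfiltered inside a textarea, might look like this (file and parameter names are guesses):

```php
<?php
// echo_trap.php -- sketch of the "parameter echoing" trigger described above.
// Input is reflected, unfiltered, inside a <textarea>. The only payloads that
// are genuinely executable are ones containing a closing </textarea> tag;
// quote/backtick/backslash-delimited XSS probes are inert in this context,
// so any scanner that reports them is producing a semantic false positive.
$q = isset($_GET['q']) ? $_GET['q'] : '';
?>
<html>
<body>
<form action="echo_trap.php" method="get">
  <textarea name="q"><?php echo $q; /* deliberately unfiltered */ ?></textarea>
  <input type="submit" value="Search" />
</form>
</body>
</html>
```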
So here's a case of what we call mistaken identity. This is a good one, because there are so many open source bulletin boards and forums and blogs, and everything has become so inbred and cobbled together, that you can literally get situations where you run a security assessment tool, it finds a Perl script, and the test was for search.pl, so it tells you, well, this is the Alibaba search buffer overflow. But the fact is, search.pl matches hundreds of applications out there, so you don't even know what the result is. You'll also find this when scanning, say, bulletin boards where there's cross-site scripting in a comment area: until you inspect the application you don't know if it's real, because there have been hundreds of different applications with the same vulnerability, and the name they give you may not even remotely match the technology you're using. Any time the identity is confused, there's more work for you to do on the back end, so this is a species of false positive.

The other problems we see are simple ambiguity, right: the conditions on the string that's used to detect something just don't go far enough, so just because the scanner found, say, a SQL error, it reports a full-blown SQL injection vulnerability. In other cases some feature of the application gets picked out and the reported vulnerability resembles nothing like what was looked for. These are issues that make reporting confusing and time consuming, so studying and categorizing them will be helpful.

Marce, you want to go ahead with response timing? This is a really big section, because there are a lot of tests for, say, SQL injection that use a WAITFOR DELAY mechanism: they inject a SQL command that makes the database count to a hundred and then respond. Well, you can get applications that are just slow, and then you see these SQL mechanisms misfiring like crazy and assigning vulnerabilities to you. So in a reverse benchmarking target you can just create a portion of the application that is slow, make the timing different or gradable, and see how many SQL errors pop up. This works with blind SQL injection as well. Or just use the portable web app and get that functionality for free.

Now, the custom 404. This should be ancient; it goes back to the very first scanners of 1998 and '99, written in Perl and put up on rootshell. The problem is, if you do not check for a clean 404 message, if you are simply keying off the presence of a 200 OK status message as evidence that a vulnerability exists, then a web application with a custom 404 page, where any time you look for a resource that isn't there you get redirected to a friendly page that says hi, file not found, but the underlying status codes are usually a 302 followed by a 200 OK, will cause scanners that don't have this check to go nuts. Now, that may seem so old that no technology in the world would still do it today. Well, there are in fact commercial technologies on the market today that perform their file scanning without these mechanisms, and it's a cause of rampant false positives. We're going to look at some real data here shortly; both of these triggers are sketched below.
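As with the other slides, the target code isn't shown in the talk; here is a hedged sketch of the two triggers just described: an artificially slow page for timing-based SQL injection checks, and a friendly custom 404 that ends every miss in a 200 OK. File names and the Apache wiring are illustrative assumptions:

```php
<?php
// slow.php -- trigger for time-based (WAITFOR DELAY / blind) SQLi checks.
// The page simply takes several seconds to answer every request. A scanner
// that infers "injection succeeded" purely from response latency will
// attribute the delay to its own payload and report SQL injection.
sleep(8);   // longer than a typical WAITFOR-style detection threshold
echo "<html><body>Report ready.</body></html>";
```

```php
<?php
// notfound.php -- friendly "custom 404" page. Assume Apache is wired with
//   ErrorDocument 404 /redirect.php
// where redirect.php issues header('Location: /notfound.php', true, 302);
// Net effect: every request for a missing file ends in 302 -> 200 OK.
// A scanner keying on "200 OK means the file exists" will then report every
// backup file in its wordlist as present.
header('HTTP/1.1 200 OK');
echo "<html><body>Hi! File not found. Try our search page.</body></html>";
```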
Why don't we segue over into that? Marce, you can talk about the data segment. Well, just as an introduction: Tom originally wrote the application we used as the target. We basically set up that application; it had a number of tests, some of which we've covered. The slide deck will also be up on reversebenchmarking.org, probably later this week, after we get back to our respective home bases. We took four popular black box web application scanners, ran them against the target with their default policies, and to get these results I took the output and put it into high level buckets. Some of the high level buckets and such are fairly arbitrary, but we're not specifically saying which scanners did this or that, and we're not giving out any of that data, so no one should really be concerned.

This is just the total false positives for the four different scanners, as a percentage of the whole. You can see one of them did really badly, and the other ones are somewhat similar, so there have been some changes in the community, and some of the products are at a higher state of development, as you can see. Yeah, just to point that out again: this slide shows that of the total number of false positives we generated, 92% came from one of the commercial technologies. One of the reasons the amount was so disparate was that it did not have a 404 check, so it found five to six thousand files that just didn't exist: every backup file in the world, and every way you could possibly rename them, is in your cgi-bin. Now, if you take that data out, they're all sort of doing equally badly, so this isn't meant to say that there are three technologies that are stellar and one that's just awful; really, one just had a very, very noisy problem that they should fix. Well, even controlling for that, it would still have had considerably more.

Okay, here are scanner one's false positives. If you look at the different categories I put in, there's path manipulation, which covered path traversal or path disclosure types of false positives. There was command injection, Windows and Unix, which probably a lot of people are familiar with. And then all of the scanners had cross-site scripting false positives, which were triggered simply by putting a script tag with "cross-site scripting" or "XSS" on an otherwise blank page; they all fired on that. One thing I would add that's interesting, and it shows there is improvement going on: you might think cross-site scripting would be the most prevalent false positive if you had an echo mechanism like our little elite PHP script, but in fact in this application it was only 7%, and the SQL injection false positives we could generate were just 2%. Command injection and path manipulation, though, were a combined 72% of the results. That shows you the areas of security testing where the testing procedures need to be tightened: you could implement mechanisms that, during a crawl, look for the presence of the strings you're going to key on for detection and rule those out in a sort of pre-qualifying run; there's a sketch of that kind of check below. That's how technology improves. No one's doing it yet; this just shows how reverse benchmarking could open doors and actually inspire people to code differently.
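The talk gives no code for that pre-qualifying run, so this is only a toy sketch of the idea: before keying a finding on a detection string, check whether the string already appears in the unattacked baseline response. The URL and signature are placeholders, and it assumes PHP's allow_url_fopen is enabled:

```php
<?php
// prequalify.php -- sketch of the "pre-qualifying run" idea from the talk.
// If an error string or echo pattern is already present in the page without
// any attack payload, it cannot later serve as evidence of a vulnerability.
function already_present($url, $needle) {
    $baseline = file_get_contents($url);     // fetch the unattacked response
    return strpos($baseline, $needle) !== false;
}

$target      = 'http://example.test/page.php?q=hello';          // placeholder
$sqlErrorSig = 'You have an error in your SQL syntax';           // placeholder

if (already_present($target, $sqlErrorSig)) {
    // Suppress (or downgrade) any SQLi finding keyed on this string.
    echo "signature pre-exists in baseline; rule it out for this page\n";
}
```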
With SQL injection, those were basically error strings from SQL Server that were just sitting in textareas on a static HTML page; that's what generated those. File disclosure was any type of file issue, mostly the 404 kinds of problems Tom was talking about, where the scanner thought it found a vulnerable file that it didn't. The known vulnerabilities categorization was basically any of the semantic, mistaken identity types, where it said Emily Forums 1.0 had a vulnerability in PHP-whatever. And misconfigurations were mostly things to do with the web servers. With scanner two, you can see that the majority of the false positives were path manipulation, and then, interestingly, file disclosure as well. So as you can see, there's a dispersion of problems across the various scanners: they each have signatures that can be fooled fairly easily, with very simple problems, the low hanging fruit we were talking about. With scanner three, the bulk of them were actually misconfigurations, things it believed about a web server or a version of some type of scripting software that actually wasn't there, and file disclosure was also a problem. Scanner four had the same problem as scanner three. These are just to give you an idea; the main takeaway is that each of them still has most of these issues, though some have less functionality and don't necessarily even test for some of them. Was it SQL injection or something? Scanner four had a big problem with SQL injection as well: its SQL injection technology was just pattern matching, so the fact that these SQL error strings were in the HTML messed with it more than it did some of the other ones. That's kind of just the... We can let them download it from the web, right? Yeah, we're going to put these slides up and you can draw whatever conclusions you want.

This talk is mostly about getting this idea out, trying to get some community support, and getting people to start submitting false positive scenarios and false positive types so we can have a bigger set of tests and come up with better road-testing applications. We've also talked about building a taxonomy of false positives, because until we truly understand false positives in a more robust, scientific sense, I think engineers are going to have a hard time, and the web application security space is going to continue to be more of an art than a science. Putting some rigor around that is also some of what we're looking to do. For further research, we want to improve the benchmarking target, add more tests like I said, and improve the testing methodology, because right now there are so many different signatures thrown out by the application scanners that coming up with the buckets is somewhat arbitrary, and I did that late last night, so it's far from perfect.

I would also mention that there's a sort of dual level here, because dealing with the application spider, and what causes the spider to enter erroneous states, get hung in infinite loops, or find things that don't exist, is really key. For instance, if you just take an HTML page that looks exactly like a directory listing, you can find out which of the spiders will begin following the links on that page, thinking they're entering new directories, when they're just following HTML links that double back onto themselves; there's a sketch of such a page below. You can create strange architectures that will expose the faults in spiders, so as vulnerability scanning develops, having a side that's specific to road testing and generating false positives in spiders is good. SiteGenerator has aspects of this, because you can dynamically generate pages in JavaScript and Flash, and so I think there'll be more room for fruitful interaction there.
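The decoy listing itself isn't shown in the talk; assuming an Apache-style index as the thing being mimicked, a static page like this sketch would do, with every "directory" link looping back to the page itself:

```php
<?php
// fake_index.php -- static HTML that merely *looks* like an Apache directory
// listing. The "subdirectory" links all route back to this same page, so a
// spider that treats the layout as a real index walks in circles, inventing
// directories that do not exist.
?>
<html>
<head><title>Index of /data</title></head>
<body>
<h1>Index of /data</h1>
<pre>
<a href="fake_index.php?d=archive">archive/</a>      12-Mar-2007 09:14    -
<a href="fake_index.php?d=backup">backup/</a>        12-Mar-2007 09:14    -
<a href="fake_index.php?d=private">private/</a>      12-Mar-2007 09:14    -
</pre>
</body>
</html>
```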
I mean, where were we at? Okay, all right, that's strange, I thought I had ten more pages to go. Well, Tom, put that back up there. Just to clarify: the testing target that we used, and the testing that we did, didn't include any of the spider evaluations, so that's one of the other areas where focus is needed, and, like Tom said, SiteGenerator. We also just want consumers of these products to be more educated, and able to make decisions based on how accurate and robust a web application scanner is, as opposed to the marketing and the GUI. Okay, since we have a few minutes, he's going to pull up the sample app and show some of the tests. Okay, and we'll also take questions. It is not with attribution; I mean, we think it's legal because there's no specific attribution. Dinis Cruz, everybody. I was just asking, is it legal under the EULA to actually disclose information about what they're not finding? Because it would be an interesting loophole, right: if the EULA doesn't say you can't disclose what the tool doesn't find, but it says you can't disclose the results, would it be covered by that? Although from the OWASP side, what we will actually ask, officially, is that each of the vendors give us a license. We were talking about this a couple of days ago, and I think what we'll do is give the vendors one chance to comment on the results, just in case you got something really wrong. So you do the thing, you send the vendors, hey, here are the results we have, is there anything we missed pretty stupidly, then you run the thing again and you publish the results. And we hope there's no vendor stupid enough to try to tune the tool to match the patterns, because that would be a big problem. Thanks, Dinis. Anyone else have a question, or are you ready, Tom?

Yeah, actually, I want to say a few things about the kind of junk we put into the sample app. First off, just having an area which echoes back things that look like a directory listing, things that look like the output of netstat -an; of course your default boot.ini, where you'll have your obligatory boot loader, then some dates and directories. Then you have things like the Unix or Linux id command, where on the page following the form it just tells you uid=0. It's things like this, put into the application, that cause trouble for the detection mechanisms; there's a sketch of that kind of page below. As we begin to adapt and develop more sophisticated testing procedures, that's where these methodologies actually improve technology going forward. So even if disclosure never happens, or even if there isn't a showdown, it's not our goal to make anyone look bad. But it is our goal to expose false positives in a way that helps the community think about and understand the problem better: understand a taxonomy of false positive types, help developers writing code go, ooh, avoid these ten things, and ultimately make the technology better for everyone. Now, with OWASP and Dinis we may be participating in some events and things like this, but the ultimate goal of our methodology is research oriented: to help the understanding of false positives, and to get people from all organizations to contribute so that we can understand the problem better. Now, any other questions?
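A rough sketch of that kind of junk page, using the strings Tom just listed (id output, netstat -an, a boot.ini fragment); the surrounding markup and exact contents are invented for illustration:

```php
<?php
// junk_strings.php -- sketch of the "junk" pages described above: static
// text that merely resembles the output of sensitive commands and files.
// Nothing was executed and no real file is disclosed, but scanners whose
// command-injection and file-disclosure signatures are bare substring
// matches will flag each block as a confirmed finding.
?>
<html><body><pre>
uid=0(root) gid=0(root) groups=0(root)

Active Connections
  Proto  Local Address          Foreign Address        State
  TCP    192.168.1.10:445       0.0.0.0:0              LISTENING

[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
</pre></body></html>
```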
Yeah. No, it's not available right now; it's a little too ghetto to give out, that's my attitude on it. But yeah, we're going to work with SiteGenerator to come up with some tests, and we've thought of coming up with maybe a top ten of false positive categories and then categorizing different tests that test for those. Tom, you want to go back to, like, the ten different false positive types that we kind of rushed through? Oh, did you take that slide out? No. But anyway, yeah, here, I'll walk over. Okay. Not going to work; the cord only has so much length, but...

Yeah, there are definitely the custom 404 error page checks that the different vendors have developed, and with custom configuration, when you know your applications, you can certainly weed some of that out. In fairness to all the vendors, and to have a baseline, we didn't change anything from the vendors' defaults, right, because if we'd been whittling around with all the various settings, we could have biased it. At least the way our thinking went: just use everybody's default policies, default scan against the same target, and that would be the closest to a pure kind of test. Let me add on to this too: we weren't trying to do any official benchmark or showdown, and we don't put any stock in our numbers; that's why we didn't even tell you who was who. Our goal in getting the results that we did was just to show you that if you take existing technologies and do to them what we're talking about here, you get useful and interesting results, and those are results that other people can then do showdowns or comparisons with, right? Right, that's right, and see, that's where working with you would be a big help, because reversebenchmarking.org doesn't plan to get into the business of comparing or publishing comparisons officially, but we would work with you and OWASP to carry that out. Yeah, ultimately, users tend to, you know, like in the marketing literature, it's how many clicks until the user can set this off. So I don't think many of those custom configurations are actually used by the vast majority of users of the products. Maybe I'm wrong. Okay, well, there's an alternative opinion. So you're saying the majority of the users of the products are actually very skilled, as opposed to the user base not being very skilled? What happens is there's really a big range of skill among security teams. Usually they know what they're doing; they're usually technical enough that even if they don't fully understand it at first, they'll get it. There may be a few exceptions, but usually the people who run these tools, you know, when you go to a scanning company, usually the guys know what they're doing.
They might not be security experts, but they're not going to break anything. Well, that was the easiest way to do the testing, right? Yeah, and, sorry to cut you off: having some idea of where hours are spent by people, and how much time is spent on this, is definitely useful, I think. Maybe some academic would like to explore that, working with a vendor and having some type of pool of users. You could have a control set that was, you know, the clicky, zero configuration kind of pool, as opposed to an expert, power user type of pool, and do the evaluation across those. But I think the precondition for that would be some type of reverse benchmarking or benchmarking app, like SiteGenerator, where the scanners can't be configured ahead of time to go one for one against its tests: having dynamic generation, having both positive benchmarking and reverse benchmarking types of tests. So yeah, this talk is just to get the ball rolling and get people thinking about this kind of stuff, because ultimately I think there's a lot of time wasted; I know from personal experience, a lot of time goes into working through false positives. Even with the various types of tuning, using multiple different scanners, there's still a high rate of false positives. I mean, it depends on a lot of things; there are a lot of different variables that we can't necessarily control for right now, and they might still be very hard to control for in the future, especially with the pace of web application technology and development, as web services come online, Ajax, and stuff like that. The complexity is just going up. Also, well, you had a question in the front?
Yeah. Yeah, by the way, we'll be going to the breakout room after this; if you have more questions and want to sit around and chat, we'll be heading over there after the talk. Yeah, that's a good point. Definitely, there are different classes of users, and the audit function, or compliance function, uses these tools as well. Enterprise technologies will also recapitulate this cycle. Generally, depending on who did the configuration or how many servers are being scanned, there's not a lot of opportunity to insert yourself into the process and make changes. It just depends on the functionality that's present, but you can easily find false positives being more of a problem in situations where the technology is configured by one team and then run over your whole network, where they're not looking at each segment, or each specific server, and going over the reports. Well, even where there's no lack of motivation and the people have the appropriate skill set, there's still the fact that this takes a lot of time to do, and if you don't know the application, when you hit the ground running and you're scanning something you don't know, you're going to take multiple days just figuring out how the application works. And it's still the case that manual pen testing of a web application is more effective than any of the scanners. No one's published specific, real results on that that I've seen, but that's one of the big things in the industry that no one really talks about: an educated, skilled security consultant is still more bang for your buck than people just running commercial tools. But there's the time function of things: you only have two weeks, or three weeks, to do something; there are costs associated with it. And if we can get the cost of a false positive down, or the number of false positives down, then people will be able to focus on other types of problems and ultimately be more effective. And with third party reviews, you can potentially get more effective results from them, and more bang for the buck, because using commercial tools is a necessity; at least in the world I live in, there's no other way to do it, it just doesn't scale to do all manual testing. Using the combination of the two is certainly the way I approach work: I generally use commercial tools to do a first pass, a kind of benchmarking, then I go through the false positives and catch the low hanging fruit with the commercial scanners, and then come back, given time, and look for more advanced things with just a proxy and raw tools. We generally don't ever have time, nor are we given source code, so we do everything black box, and that's, I think, in some ways more challenging than some of the alternatives, but it doesn't require the level of knowledge that source code analysis and things like that do. So in general, the assessment industry is going to have to come up with ways to deal with this, because all the tools, black box, gray box, white box, however you call them, are still not as good as a human. Which is good for us, right? We get paid to do it. But having more effective tools,
where the humans can spend less time you know fighting the tools instead of fighting You know vulnerabilities or whatever it's what we're looking for so thanks everybody Yeah thanks everyone