Okay, so hi. Thank you for coming to my talk, or our talk. This is a talk entitled Traps of Gold. My name is Andrew Wilson. I work for Trustwave SpiderLabs. I compete in capture the flag with these guys, I've trained in martial arts for quite some time with these guys, and this is the family that supports me to make sure I can be here and give talks like this with you. So this talk is, in popular parlance, what some people would call offensive countermeasures. Offensive countermeasures as defined by the PaulDotCom group include things like annoyance, attribution, and then attack. Based on where you live inside that threshold, it's probably to your benefit to talk to a legal advisor about whether some of the things we're going to show you today are viable in the environment you work in. Some of it is straightforward enough that you probably don't need permission, and other things you definitely want legal counsel for. This is proof-of-concept stuff to go from there. My name is Michael Brooks and I attack software. These are all the CVE numbers I've accumulated over the years. That CVE right there, CVE 0049, is my highest severity metric. The Department of Homeland Security issues severity metrics for the most serious vulnerabilities, and this one received a severity metric of 25.2, which is in the top 500 of all time. I got it as part of the Mozilla bug bounty program, along with their highest bounty of $3,000 and this sweet t-shirt. But that's not why I'm here today. The reason I'm here today is that we have a problem. Our approach to security is flawed, and one of the problems I found is in a product called PHPIDS. This is the wrong fucking slide. Sorry. Okay. PHPIDS is making an incorrect assumption.
And the assumption is this. What's nice about this intrusion detection system is that it's embedded in your application, so you can deploy it in places you normally couldn't, such as a GoDaddy shared hosting account. But it's making an incorrect assumption in its security system, and that is that attacks can never be repetitive. This is the vulnerable piece of code. Now, we rely on web application firewalls. If you think of a secure website, one of the first things you say is: well, you have a WAF, right? But WAFs can be bypassed, and a common weakness in WAFs is the preprocessor. What you're looking at right now is a method that's called on all input. It runs a regular expression that says: match any character (the dot), capture at least two of them, and then match that capture repeated at least 32 more times, 33 repetitions in total. If it finds a string repeating 33 times, it replaces the whole thing with the letter x. The significance is that I can hide any payload from the PHPIDS rule sets by repeating it 33 times: the preprocessor collapses it before the rules ever see it. The effect is that I bypassed PHPIDS entirely. Absolutely every rule set is bypassed. But then I went a step further. Not only could I bypass it, I could intentionally trigger a rule set in order to populate its flat-file logging system with PHP code. The significance of that is that once I have PHP code on the local file system, I can turn any local file include vulnerability in the application into remote code execution. So I said a lot, but the point is that not only did I bypass the web application firewall entirely, I also made the application less secure. There's a great quote from Bruce Schneier: complexity is the worst enemy of security. So if you're able to add complexity to a system and actually make it more secure, well, that's quite a hack. That is not easy.
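To make the flaw concrete, here is a minimal sketch of that preprocessor logic, reimplemented in Python for illustration (the original is a PHP `preg_replace` inside PHPIDS's converter; this is an assumption-laden re-creation, not the actual source):

```python
import re

# Re-creation of the flawed PHPIDS-style preprocessor: any chunk of
# two or more characters, repeated 33 or more times in total, is
# collapsed to a single "x" BEFORE the detection rules ever run.
REPETITION = re.compile(r'(.{2,}?)\1{32,}', re.DOTALL)

def preprocess(value: str) -> str:
    """Collapse long repetitions, as the vulnerable converter did."""
    return REPETITION.sub('x', value)

payload = "<script>alert(1)</script>"

# A single copy passes through untouched and would hit the XSS rules...
print(preprocess(payload))

# ...but 33 repeated copies collapse to "x", so the rules see nothing.
print(preprocess(payload * 33))   # -> x
```

The bypass is exactly the mismatch between what the rules inspect (the collapsed string) and what the application ultimately receives (the raw repeated payload).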
And here, let me demo it for you. That wasn't the only thing I was looking at. Today we're seeing a new trend, and that is XSS filters in web browsers. Internet Explorer was the first to introduce this, but other browsers are following suit. In Firefox there's the NoScript plugin, which has an anti-XSS filter, and Chrome is also beta-testing its own XSS filter. It's safe to say that soon all major web browsers will have XSS filters, so the era of very simple XSS exploitation is starting to go away. But really these general-purpose security systems, and it's not just web application firewalls or the XSS filters in browsers but also things like ASLR, create this kind of water balloon effect: the more they try to clamp down on the application, the more the vulnerabilities that are present start bulging out in new and interesting ways. And although Internet Explorer is loading, it takes a while. We were thinking about putting some hold music in here for you guys. Anyway, a bit of background. On the CD, I go into great detail about bypassing PHPIDS in one of my papers. I highly recommend reading it; I go over a lot of my attack methodology. After I submitted the paper to PHPIDS, the developers were ecstatic. They were really happy about it, and I'm actually now a member of the development team. I submitted a similar paper to Microsoft about Internet Explorer. Basically, I found a problem with the way Internet Explorer was handling UTF-7 characters. It turns out the XSS filter works by looking at outgoing requests, specifically for less-than and greater-than symbols and quote marks. If you can create a payload that doesn't contain any of those characters, then it's possible to execute code.
I found a way to execute JavaScript, rather. It turns out you can use UTF-7 encoding. Now, UTF-7 has a history: it was designed for SMTP, not for HTTP at all. In fact, no browser except Internet Explorer supports UTF-7 for web pages. But that's beside the point. The real problem here is that their XSS filter doesn't account for it. But okay, hold on. I said UTF-7 is not for HTTP, right? So that means no website is going to be serving it. Except there's a way: you can change the content type of a page using a CRLF injection, HTTP response splitting. And bam, it allows you to get an alert box. Sorry, the demo just crashed. Technical difficulties. Memory. All right. I apologize. I was going to show you a great masterpiece: bypassing both PHPIDS's built-in rule set for XSS and Internet Explorer's XSS filter in the exact same attack. It is possible to bypass both filters, on the browser and the server, and ultimately execute code. So it doesn't matter that we're adding all of these new filters if the fundamental problem remains: they can be bypassed, they can be fooled. So we need another approach. Screw slides, we can do it from memory. Okay. So why talk about all of this? What's the point of explaining a bypass for PHPIDS and a bypass for Internet Explorer? These are supposed to be defensive products that we're relying on for the security of applications, and as Michael was pointing out, this is the stuff we're going to be relying on more and more as we get into the future. So you saw a slide there for just two seconds before we crashed, about frustration. Does anybody in the room earnestly think the security development approach is working? We see the news every day.
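To illustrate why UTF-7 slips past a filter that only watches for angle brackets and quotes, here is a small Python sketch that builds the classic payload with the filtered characters expressed as UTF-7 base64 blocks (the helper function is ours, written for this illustration):

```python
import base64

def utf7_b64(text: str) -> str:
    """Encode a run of characters as a UTF-7 base64 block (+...-)."""
    raw = text.encode('utf-16-be')            # UTF-7 base64 is over UTF-16BE
    return '+' + base64.b64encode(raw).decode('ascii').rstrip('=') + '-'

# Build <script>alert(1)</script> so that none of the characters the
# outgoing-request filter watches for (< > ") ever appear literally.
payload = (utf7_b64('<') + 'script' + utf7_b64('>') + 'alert(1)'
           + utf7_b64('<') + '/script' + utf7_b64('>'))

print(payload)   # +ADw-script+AD4-alert(1)+ADw-/script+AD4-

# A UTF-7-aware consumer decodes it right back to the real markup:
print(payload.encode('ascii').decode('utf-7'))
```

The filter sees only letters, digits, `+` and `-`; a browser that has been tricked (via response splitting) into treating the page as UTF-7 decodes it into live script.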
Some major company has a data breach, not just a small one but a really large one, and it hits people in pretty negative, impactful ways. Everybody's getting hit with that. And we feel the ultimate reason is that our approach to dealing with security has been to treat it as a quality issue. It's a bug. When you look at the books in popular use today, stuff like Microsoft's Security Development Lifecycle, Writing Secure Code, and some of the Rugged software movement material, we treat security issues literally as if they were a quality concern, and that's how we approach it. So we took strategies to mitigate security issues like they were bugs, like they were quality concerns, and that starts with patch management. Patch management takes the approach that says: you find a bug, it's in production, we've got to fix it. So you write new code that fixes the abuse scenario, and that way you can go ahead and make it safe. But that tends to come with some baggage. The first problem is that if you have a bug in a production system, it's available for people to exploit. And if it's a known exploit, you run into this patch window lifecycle, where Microsoft puts out a patch on Patch Tuesday, it's reverse engineered on Wednesday, and anybody who hasn't applied the update in that window is still incredibly vulnerable. So there are definitely some issues with that approach. And then you run into a secondary issue where the patch itself may not actually fix the problem; it might be a temporary fix. There's also another problem: really, the problems with IE's XSS filter are trivial to fix. It would take an afternoon. But they didn't even recognize it as a vulnerability. They won't even fix it. It's a constant problem.
Back in 2004, there was an interesting vulnerability released in IE called the IE drag-and-drop vulnerability. In the attack, the victim picks up a carrot and drags it, just as if it were a game. The only problem is that it's not a carrot. It's actually an executable on a remote share, and you're dragging it into your startup folder. You could get a remote shell on Internet Explorer for two years, and Microsoft said: that's not a vulnerability. You've got to be kidding me. Ultimately, they did patch it. And it wasn't until four years later that the term clickjacking was even coined. Recognizing that there's a problem is an important step. So patch management obviously was not ideal, and everybody kind of knows that. Anybody who's done regular development doesn't want bugs in production, because bugs in production is no bueno. The problem is where we go from there. Here, let's just play it from here. To patch management. Oh, and the computer crashed again. So I guess we're going to do this whole thing from memory. That's fine, no big deal. It is a Mac; apparently it has not been watching enough TV. That's my theory. Okay, so screw my laptop. So, the SDLC. The SDLC came out as a measure to say: hey, bugs in production is definitely not the way you want to go about this. Instead, you deal with it proactively. You build it into your development lifecycle. It becomes something you have to deal with earlier. You start defining it, you come up with quality gates, you do threat models, you work inside the system and make sure it works for your dev cycle. And at the end of the day, it's a process of refinement.
You try to reduce known vulnerabilities inside the application. I'm a huge fan of Microsoft's SDLC. Despite the fact that it misses stuff, I think it's the way people ought to be looking at writing software. It's super mature, ten years or so of experience, lots of money, lots of manpower, and perhaps even more importantly, it has executive buy-in that says you can't release insecure software. So it's a great approach, a fantastic approach. But you still have Patch Tuesday. You still have bugs that get out into the system. So we've got to question what advantage this has, and how tenable it is as an end product for us. The final thing we throw out there as our solution to security is more bad software. I call it defects in defense. If we put more software like web application firewalls, or protections like antivirus or regular firewalls, in front of these things, they're supposed to stop bad guys from doing bad things. But as we just showed in the context of our actual attack, you can bypass those things. When we rely on stuff like that, it creates a situation for us that's pretty disadvantageous. It's not ideal. That's how we end up with the TSA, right? You get groped for free in the airport, which is nice, admittedly, but the question is whether it makes you any safer. And what about the backscatter security scanners? We spend tens of millions of dollars on a security system that can be defeated with pancakes. Really, are we really doing this? And this is at least the second time that breakfast has been used in an attack, the first of course being the Cap'n Crunch whistle. So clearly more research needs to be done in the area of breakfast-based exploitation. I'm just putting that out there.
So the question, then. This talk is not about all the stuff that's broken. I want to point out that these are difficulties we face. It's the reality of writing software, and it's stuff we need to continue to do. That's just security hygiene. It's the sort of stuff you do: you brush your teeth because you don't want bad breath, and ideally it makes you a better person. Even refinement is good too, but none of these approaches actually stop you from getting punched in the face, nor do they really prepare you for it. Having clean teeth isn't a defensive countermeasure, in my opinion. So we really think the answer here is that we need to start looking at strategies by which we can fight back against people. We want to kick their ass. When I was working at a company a while ago, I used to tell developers they've got to build their applications so they can take a punch. I'm going to change my statement: I think you need to build applications that punch people back. That needs to be the way we move down this road. So how do we do that? How do we build in systems for that? We leverage technology that already exists, PHPIDS being one of those. We use honeypot and honeynet technologies, and then we write exploits that take advantage of the fact that the software attacking you is vulnerable to all the same things we talked about before: it has bugs, it has weaknesses, it has problems. There's a phenomenal line from Richard Bejtlich's Twitter feed: it's not just that we have these glass houses, it's that there are people throwing bricks at them. And that's what we need to deal with: the brick throwers, hacking against our systems and attacking these things. This is a human problem that manifests itself through technology.
Check this out. This is good. It works nicely on mine. That was a nice idea, unless you can fix the mirroring for me too. We'll just move on. So they have risk too. This is a human problem, and because it's a human problem, that's exactly where we attack people: we chase them in the places where they're human. Bad guys who are doing bad things against you have things like ego and bias. They think they're going to get in, and when they find results, they're going to try to exploit those results, because they're basing it on prior experience and prior knowledge. They have weaknesses in the sense that the tools they're using are imperfect as well. And we'll just go from here. So we need to start focusing on strategies that take advantage of that. How do I move it down? Dude? All right, sweet. So we're about here. That's where we attack them. When we look at how we're going to do that, we're going to leverage stuff other people have done, and that includes IDS systems, honeypots, and exploits, and we're going to put those together in a fashion that lets us trap people inside our systems: to create better attribution, to shut down their tools so they ignore particular content areas, or in certain cases to just shut them down entirely. So, a thing about lying. We're worried about China hacking into us, right? We're worried about them getting a leg up in business, but that's weak. China is being weak. If they have to rely on breaking into you to get a leg up, what happens if, when they hack into you, you give them information that you want them to find? You use that against them; they steal a secret that you want them to have. Something to think about. If they can social engineer us, why can't we social engineer them? Now, people might say that security hygiene is itself a way of fighting back. They might say that.
And I would classify that as more of a war of attrition. When you look at how people fight, how they compete in combat, there are really two strategies: an attrition-based model, and a maneuver model, which is kind of modern-day guerrilla warfare combined with some other things. In a war of attrition, the idea is that I'm going to gather as many resources as I possibly can, and whoever has the most arms wins. If they have four nuclear warheads, I've got to get ten. And when they get ten, I've got to come up with a better nuclear warhead, so then I'm the winner. That approach is expensive because it costs to build it up, it costs to maintain it, and if you actually go toe to toe with these people, it's pretty deadly when you start talking in real, tangible terms of human life. But that's not actually how the bad guys are attacking us. They've taken an approach, whether conscious or unconscious, of maneuverability. It's like this: our good guys, our defensive line, want to create a football team. So we get the very best football players we can find, we give them the best food, we work out with them with the best equipment, we do the best training. They're the biggest and the baddest, and they're going to shut down the offense. When they show up to play the game, they're there to win. But in this case, the other team has poisoned their food, they're sleeping with their girlfriends on the side, and before the game they've already broken into their houses and stolen all their money. That's how the bad guys are attacking. They've set the stage so that all the things you think are to your advantage end up being your weakness.
They're not going to play your game, because your game is designed so you can win. So they play their game, and they make you play their game as a means to shut you down. And so if we look for strategies, I think the people we want to look to are people who are actually fighting in that capacity, and in this case we've based a model off of the United States Marine Corps. Maneuver warfare, just as a historical reference, comes from a gentleman primarily known as John Boyd. I don't know if you're familiar with the OODA loop, but he's the guy behind that, and he's definitely behind maneuver warfare. Maneuver warfare, as the Corps' doctrine says, aims to shatter the cohesion of our opponents so they can't make decisions in a timely manner, while we gain strategic advantage and basically obliterate them. We're going to use their stuff against them. We're going to stack the deck in our own favor. This strategy is based on three major components: the first being ambiguity, the second being deception, and the third ultimately being tempo. Ambiguity is the idea that if there's more than one way to accomplish a task, we should try to find a route that makes it very unobvious what we're actually doing. If we have a destination and there are four different ways I might get there, then from a pure resourcing perspective, if you're trying to gather intel about how somebody does something, you need four times the resources to monitor all of the possible routes, because presumably you don't know which one they're taking. That's the value of ambiguity. But that's not how we build apps. We build our applications like they're billboards. We're proud of the fact that we wrote it in Java, or we're proud of the CMS we built it on, or we're proud of all the developers who were involved. That's why we leave dev comments all over the place.
And we make it as if: hey, not only am I going to this destination, but here's how I'm going to get there, here's who I'm taking along with me, and here's all the gear I'm going to be packing. Some of the ways we see that are server banners, where we tell people what we're running. Oftentimes you don't need those; they're not of any value. And when you start measuring that against things like the Shodan project, or people doing Google dorks when they find an exploit in a CMS or something you might be using, those are pretty compelling reasons not to sit around telling people that stuff. File extensions: your browser doesn't care about file extensions. If you send HTML back, for the most part your browser is fine. So unless you have a use case with different mappings for how the file extension comes back, that's mostly a server-side processing issue, and in most cases you can completely disable it and not say: hey, I've got PHP, or hey, I'm running ASP.NET, or hey, I'm running Ruby. Those are unnecessary. And then finally, default files. I was working on a CMS a couple of weeks ago, and it had some default control examples for developers saying: hey, this is how you write software using our stuff. It's installed by default, and that has its advantages. But the problem is that these control examples were bound to the top node of the CMS, so they could look at every single part of the application with unauthenticated access. So from a default control example that was left enabled, you could completely bypass the majority of the security in the entire CMS, off of a default file.
These are most often unnecessary, and in fact these are some of the things that tools like Joomscan or CMS Explorer, or any of the tools built around fingerprinting, are going to use to identify how you've built your application, what plugins you've got enabled, and how they might go about attacking them. So if knowing is half the battle, you should shut up. That's what this is about: just don't be so obvious about everything you're doing. And that's the first step. The next step is deception, and this is where we get into the fun stuff. Deception is about lying. We're going to convince people that we're doing things we're not in fact doing at all. Instead of saying, hey, I'm going to this destination, and everybody plans and gets situated for that, don't go to that destination. Set them up: it's a trap, or maybe you set things up so that if they go to that destination, they get ambushed when they arrive. There are a couple of ways we might go about doing that. Reduce the things they can know, and lie about everything else. We can do that by increasing the noise; we can blatantly lie about stuff. If I had a working computer to show you: I've been able to trigger every single vulnerability that's identified by major security tools like Nikto, for instance. I can trigger all 5,400 vulnerabilities in it. If you scan my site, I can tell you every single thing is valid. So how do you figure out which of those findings is real if you're trying to scan my system and identify components in it? In PHPIDS, I added a system of triggers. So when a particular rule set is hit, let's say it's a blind SQL injection test and they're trying to get us to sleep for 30 seconds.
Well, when I see an incoming request, PHPIDS flags it, and then I look at it and say: how long do they want me to sleep for? I pull it out with a regular expression and sleep for that period of time. This fools every scanner. Another thing: what happens if they're trying to use directory traversal to grab /etc/passwd? Well, we'll just print out that file. That's fine. It's a fake; it's not the actual file. The same goes for Windows files, like trying to get win.ini. The point is that these tools are easy to fool. They're trusting. They're not planning on someone lying to them, so they're easy to manipulate. I actually have a slide in the deck that doesn't crash saying this isn't to pick on Nikto; Nikto was just an example of a tool we used to do this. We've been able to trigger false positives on just about every major scanner, commercial or not, in the industry today. We can lie to them. We can subvert their ability to make cognizant decisions, because ultimately they have this pseudo-responsibility of being safe. They don't want to shut you down, which means they're not going to exploit the stuff they find. And if they don't exploit it, they're just collecting evidence and saying: hey, it kind of sort of looks like this. So you might as well lie about the evidence. A secondary issue, kind of a corollary, is how these things work. Oftentimes they contain developer mistakes. If you put together a scanner or tool, a 404 often gets used as a null check: I hit a web page, and if I get a 404, it means there's no content, so why bother scanning it? Well, the problem is that your browser doesn't actually care what status code comes back. It'll render the content anyway.
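The trigger idea described above can be sketched in a few lines. This is a hypothetical Python re-creation, not the speakers' PHP implementation: a function sits behind the IDS and, for known probe patterns, fabricates exactly the evidence the scanner is hoping to see.

```python
import re
import time

# Fabricated bait content for directory-traversal probes.
FAKE_PASSWD = "root:x:0:0:root:/root:/bin/bash\n"

SLEEP_RE = re.compile(r'sleep\s*\(\s*(\d+)\s*\)', re.IGNORECASE)
TRAVERSAL_RE = re.compile(r'(?:\.\./)+.*etc/passwd')

def deceive(param: str):
    """Return a fake response for known attack patterns, else None."""
    m = SLEEP_RE.search(param)
    if m:
        # Blind SQL injection probe: sleep exactly as long as requested,
        # so time-based detection reports a (false) positive.
        time.sleep(int(m.group(1)))
        return ''
    if TRAVERSAL_RE.search(param):
        # Directory traversal probe: hand back a fake /etc/passwd.
        return FAKE_PASSWD
    return None   # not a recognized probe; handle normally
```

The scanner's own success criteria (a delayed response, a passwd-shaped file) become the levers for lying to it.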
So one of the ways you can evade content discovery is to return 404s for all your 200s. It makes most scanners miss everything by default; they can't identify anything inside the site. It's actually really trippy: take a popular scanning tool that everybody uses for regular work and try to spider off the index, and the entire website disappears, because as it finds things it says 404, 404, 404. You can't rely on the results. The pages don't get included when you try to do more advanced analysis, because as far as the tool is concerned, the content doesn't exist at all. Now, that's not necessarily going to fool people. That's just a design strategy that says: these tools are running automation against us, which in and of itself is pretty bad. If you think of the Imperva study that just came out, you're getting scanned once every two minutes, so shutting down the scanners' ability to do that would probably be to your advantage. So, people. This is a people problem. And as Mike was pointing out, some of the lies are a lot better than others. When you get into forensics and you're trying to understand what's happening inside an application, the only way you're really going to understand it is to start exploiting it and getting information back out. And if you can create a scenario where, like he was talking about, you're getting back the files you're looking for, you're getting back the blind SQL injection behavior you're expecting, everything is working along the testing route you've already prepared yourself for, why would you believe it's not valid? Why would you give up and say, hey, maybe something's wrong? Personally, I think I'd spend a really long time trying to figure out what I was doing wrong before thinking: hey, maybe they're lying to me.
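The 404-for-200 trick is small enough to show end to end. A minimal sketch using only Python's standard library (the talk doesn't prescribe an implementation; this is one way to do it): serve real content, but stamp every response with a 404 status. Browsers render the body regardless of the code, while many scanners treat 404 as "nothing here" and discard the page.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

class NotFoundHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Perfectly real content</body></html>"
        self.send_response(404)   # lie about the status code...
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)    # ...but serve the real page anyway

    def log_message(self, *args):  # keep the demo quiet
        pass

def start_server(port=0):
    """Start the deceptive server on a background thread."""
    srv = HTTPServer(("127.0.0.1", port), NotFoundHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv
```

In practice you would only do this for pages you don't want in a crawler's index; the point is that HTTP status codes are scanner-facing metadata, not something the browser needs to agree with.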
But then you can take it a step further. Why can't I seed my application with tripwires? Why can't I create form fields inside my app that do absolutely nothing except tell me, if you tamper with them, that you're tampering with stuff? Then you put that inside a secondary database that's isolated, just like you would a honeypot, and you can actually let them exploit it. You can let them run the full broad spectrum, completely isolated from your production environment. And while they're doing this, you're building attribution. You're building a case against them. You know at this point it's probably not a scanner anymore, it's probably somebody exploiting your system, and if you want to build that attribution for prosecution later, you've created better information, better intel, about what they're doing against you. And that leads into tempo. Tempo is about initiative. When a lot of people think of pace or tempo, they think speed, and maneuver warfare does have some concepts based in speed: if I can overwhelm you by putting forth a greater effort before you can start actively making real decisions, I'm pretty likely to win. Anybody who competes in games like chess or go, or in boxing: you never win if you play their game. You can't win if you play their game. You have to take the initiative and keep it the entire way through, which means you can't rely on reaction. You have to rely on awareness and then proper decision-making. There was a study a couple of years ago that took junior tennis players and advanced tennis players and measured reaction speeds to see who was faster, to draw some comparisons between them. And they found that the difference was actually fairly nominal in terms of overall reaction.
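The tripwire form field can be sketched simply. Everything here is hypothetical illustration: the decoy field name is made up, and the "isolated store" is just a list standing in for the separate honeypot database the talk describes. No legitimate user or browser ever fills in the decoy, so any non-empty value means deliberate tampering.

```python
# Hypothetical decoy field name; it appears in the form markup but is
# never used by the application and should always arrive empty.
TRIPWIRE_FIELD = "account_debug_id"

def check_tripwire(form: dict, attacker_log: list) -> bool:
    """Return True (and record the event) if the decoy was touched."""
    value = form.get(TRIPWIRE_FIELD, "")
    if value:
        # Route the evidence to an isolated store, never production:
        # the submitted value itself is attribution material.
        attacker_log.append({"field": TRIPWIRE_FIELD, "value": value})
        return True
    return False
```

A scanner fuzzing every parameter, or a human probing for SQL injection, trips this on the first pass, long before they reach anything real.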
But where the juniors were failing and the expert players were excelling is that the junior player would wait until the ball was hit before making decisions about where to go, whereas the senior players were watching where the shoulder was moving, how the hips were turning, how the arm was coming up. They were getting intel earlier in the process so they could make better decisions. They weren't relying on reaction; they weren't trying to catch up. They were changing the pace to their own advantage and using it to out-decide people. That's something we need to do. If we build a system like we're talking about, with IDS systems inside it creating a bunch of false positives, effectively embedded honeynets, you can create a perceived attack surface that's completely different from the actual attack surface of your system. As people go down that route, you've already gained visibility into the fact that they're doing bad things against you. If you feed this into a SIEM, or ideally a project that lets you make better decisions with it, you can then decide how you want to respond. You can kick them out of your system, you can shut them down, or you can potentially attack them, as we'll get to. So, yeah. That's where I think the OWASP AppSensor project, the stuff Michael Coates is working on, actually has a lot of value: the application has to be embedded in the context of awareness of what's going on inside it, which is why we chose PHPIDS as our base model, because it lives inside the application. One thing you can do when you're living inside an application is see how it's reacting. So one way to shut down blind SQL injection tests is to sleep. But what about error-based detection?
So we could give them fake error messages, but what if they trigger a real error? What happens if they trigger a MySQL error in our application, or even worse, an eval error, where they're trying to evaluate PHP code? When we get to that level, we can shut down. We know they've broken something critical in the application, so we can kill the application: write a kill bit, and if that kill bit exists, do not run. And yes, OK, this turns it into a denial of service, right? But that's a hell of a lot better than remote code execution, and you can go back and fix it. We're not playing their game. We know you've gotten too far, and we can shut down. So how do we put this all together? That's the real question. And we love it when a plan comes together, of course. We talked about misdirection with the 404s. We talked about shutting down tools and scanners, completely invalidating their results. In some cases, as we were hitting scanners, we could crash the scanner remotely by accident. With at least one or two major scanners we've hit with this, we've actually been able to stop the scanner from working completely, so we could increase our awareness of what was going on. But the real question is: can we attack people with the scanner, or through the scanner? And I think, yes, we can. All right. What you're looking at is a very underpatched Windows XP VM running Acunetix. So I want to be careful here, because I definitely don't want the headline to read "zero-day in Acunetix." This attack is not really about Acunetix. This is an attack based on the fact that Acunetix, like many other web scanners on the market, needs to parse the pages it crawls, and needs to do that well.
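The kill-bit idea from a moment ago fits in a few lines. This is a minimal Python sketch; the flag-file path and function names are made up for illustration:

```python
# Sketch of a "kill bit": when the app detects a critical event (a real
# eval or SQL error triggered by hostile input), it drops a flag file
# and refuses to serve anything until an operator investigates,
# trading possible remote code execution for a controlled denial of
# service.

import os
import sys
import tempfile

KILL_BIT = os.path.join(tempfile.gettempdir(), "app.killbit")  # illustrative path

def trip_kill_bit(reason):
    """Write the kill bit with a note about why it was tripped."""
    with open(KILL_BIT, "w") as f:
        f.write(reason)

def guard():
    """Call at the top of every request; refuse to run if tripped."""
    if os.path.exists(KILL_BIT):
        sys.exit("application disabled: kill bit present")
```

Restoring service is then a deliberate human act: read the reason, fix the hole, delete the flag file.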
The only real way to do that is to use an embedded browser, or the browser that's on the system, because you need to execute the JavaScript and render the HTML to get a clear picture. For instance, little things: if a page makes AJAX requests and content comes back dynamically, and all you're doing is an HTTP GET and then parsing the results back out, then because there's no dynamic execution environment you'll never see any of that content, right? So commercial-grade scanners, particularly advanced scanners, good scanners quite frankly, are going to be using an embedded browser as the mechanism to gather intel about what's going on. And you should attack them exactly in that spot. So let's pop this. Ah. That kind of pop. All right. OK, so what I was actually doing was pointing the scanner at Metasploit. And... hopefully... and now there's a shell. You'll notice it popped inside Web Vulnerability Scanner 7. So it popped just fine. Now, really, what's happening here? We heard about the Zeus botnet and its source code being leaked. These black hats have these great tools to attack the web browser; well, what about using those tools to defend ourselves? What about having Zeus installed on your production server, with a robots.txt entry saying, hey, Disallow: don't go to /zeus? Because if you do, you're going to get owned. A web vulnerability scanner is going to ignore that. It's going to see a Disallow and go there on purpose. And in that case, it gets owned. Interesting side note: I tried a number of vulnerabilities. The first thing I tried was the .ANI animated cursor vulnerability to pop the scanner, and it didn't work. So one thing to note is that a lot of these scanners aren't executing the graphical parts. So maybe some of the image-based attacks, like the new SVG-based attacks or the OpenGL attacks on web browsers, won't work.
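The robots.txt bait described above is tiny. A sketch, with an illustrative path:

```
# robots.txt on the bait server (path is illustrative)
# A human never browses here; only a crawler that deliberately
# visits disallowed paths will find the exploit page behind it.
User-agent: *
Disallow: /zeus/
```

Legitimate crawlers honor the Disallow; an attack scanner treats it as a hint about where the interesting stuff lives, and walks straight into the trap.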
Those are graphics-based, and if you're running a browser headless within a scanner, it's not going to be executing that complexity, so that's not really part of the attack surface. So maybe it has a smaller attack surface than your average web browser, but it's still huge. In this attack right here, we're using an ActiveX exploit. And come on, ActiveX has been a festering wound in Internet Explorer since the beginning. So it is a valid avenue of attack against the scanners that rely on IE. Yeah, so there you go. We took over somebody's box while they were attempting to scan us. Can you switch me back? Now, you definitely don't want to put this in a place that's blatantly obvious and publicly accessible, because that creates obvious legal problems. And as we talked about earlier, there are legal ramifications, and I don't want to undercut that, right? There is some case law suggesting this might actually be a valid approach under particular circumstances, in particular United States v. Heckenkamp, where a network administrator took over the box of a person who was in their mail server, on the grounds that this was an emergency, I had to respond, this was the best course of action available. They used the information to get attribution, to identify the gentleman who had broken into the machine, and that evidence was admitted in court. The court accepted it, and he went to jail, because he had signed a use policy saying "I won't do that," and in the course of his duties the network administrator is tasked with protecting his systems in the best way he possibly can, and in this case the best way to do that was to take the box over.
That doesn't mean you get free rein to sit around and leave vulnerable-browser exploits lying in wait where they can hit anybody, because accidental attacks would probably be bad too. But if you put something like this on a web server hosted alongside your mail server, or a print server, something that people really ought not to be in, that's not public, that's maybe an internal process, now you have a better case. But again, consult your lawyer, right? So as a recap: you need to stop acting like security is a broken egg that you can put band-aids on. You can't think of it this way; it's not a tenable position. You need to start thinking, "I'm going to kick your ass," right? You need to think like this guy, because rooster or not, I'm actually a little afraid of him too. This isn't somebody you want to mess with. This is somebody who looks like, if you start playing games with me, I'm going to take you down. That's how we should be treating security: stop thinking purely in terms of vulnerabilities and think, how can we shut you down, how can we take that back, how can we regain our pride, right? And that means we should fight back. That's what we need to do. So that's it for us. Any questions?