Thank you to everyone who showed up at 10 a.m. on Sunday morning, so give yourselves a round of applause, because you're not completely obliterated. You're not still dropping from Thursday, like many of us are. So to start off, we're going to explain what OPFOR is and what that even means. A little bit of my background, not to go too deep: I'm a former U.S. Army Warrant Officer. I dealt with what's called OPFOR, okay, or opposing forces. I used to help train teams prior to their deployment to help them prepare for the battlefield. So being a forensic investigator, I thought, what better way to get better at forensic investigations than to work with my quote-unquote enemy, which happens to be Tim, who's also a member of SpiderLabs, who is a pentester. So the same way on the battlefield we would train infantry and artillery, we train ourselves within SpiderLabs so that he understands the indicators of compromise that he leaves behind by, you know, conducting his attacks, and I see real-world forensic indicators of compromise from attacks that he launches. And just as kind of a side note, right, I'm not a proponent of any one forensic utility. I know when people say, oh, forensics, I use EnCase. That's not forensics. That's understanding a piece of software, right? Knowing how to use EnCase or FTK no more makes you a forensic investigator than knowing how to use Microsoft Word makes you Stephen King, okay? It's just a tool, and it just does whatever you tell it to. So forensics is truly being an independent finder of the facts, understanding how those facts are laid out, and then how to interpret them, right? So again, opposing forces is really something that we've stolen from the military and kind of used internally to get better at what we do and to truly understand what indicators of compromise are left behind in real-world attacks, right? And it works both ways. 
When Tim sees what indicators of compromise his attacks leave behind, he can go, wow, that was a really freaking noisy attack. Is there a better way for me to accomplish the same goal? Not necessarily with the same tools, and maybe he's got to write a different tool, but his end result, right, will be the same, with fewer indicators of compromise left behind, which may or may not be important to our clients, but at least we give them the opportunity, right? And it's bringing that mentality into the attack and defense communities, and it's an evolving cycle. It's not something we do one time and go, hey, we're great, we're done, right? It's something you're constantly doing. There are always new attacks. There are always new vulnerabilities. There are always new things that we need to see so that we can get better at what we do, right? So here's our plan of attack. We're going to talk about something called Sniper Forensics. It's something that we've developed in SpiderLabs over the last few years in opposition to shotgun forensics, and we'll give some background on that. And then, with some real-world pen testing events and examples, we'll do a proof of concept. We'll show how it actually works. Right, no live demos. I don't want to get anyone all excited about live demos. It's 10 in the morning for us, too, and we're kind of Vegas'd out, too. So we took screenshots. And then what some of the post-mortem activities and stuff are. So, state of the industry right now. Incident response and forensics has traditionally not been about live analysis. It's been about dead-box forensics. We're trying to get away from that because, honestly, that methodology sucks. It's 20-year-old methodology, and it was really cool and cutting-edge 20 years ago, and it's not anymore, right? How many people in here have some kind of smart device? Smartphones, iPods, iPads, Android tablets, something like that. Are there any spinning drives in there? No. 
It's going to be all cloud-based, right? So looking at the direction technology's moving, we're not going to be dealing with spinning hard drives anymore. You're not going to be able to pull those out, stick them in a write blocker, image them, and do post-mortem analysis. You're going to be imaging the cloud. So good luck with that. Let me know how that goes. So you're going to be doing log analysis. You're going to be doing RAM analysis. You're going to be doing volatile data analysis. That is where technology's moving. So instead of being stuck in the quagmire of 20-year-old technology, we need to move forward and look at the direction things are going. And that's really what we're working on. So I'm going to pass over to Tim now. So good morning, DEF CON. I'm Tim, and from my point of view, the current state of penetration testing has some issues. I think, first of all, that we are exploit-happy. And what I mean by that is that we're focused on exploits that involve memory corruption bugs. And I want to re-appropriate the terminology there. So I think if you look at how people talk about this stuff in the industry right now, when people use the word exploit, they're really just talking about the kinds of attacks that involve throwing shellcode at stuff. But think about the real-world use of that term and all the other things that exploits encompass. I want to re-appropriate the word exploit for all that other shit. And so I'm using the term sploit to kind of minimize and push to the corner the stuff that involves throwing shellcode at shit. And then, more importantly though, I think that the penetration testing community is losing its focus on, or its connection to, real attack patterns, how attacks actually occur in the wild. And so I think if we look at how we scope penetration tests, how we perform them, and how we document them, in each of those cases we are drifting away from real attack patterns. 
And I want to switch things around back to what I'm calling real-world penetration testing. And so, more on that later. But next, Chris is going to give us an intro to Sniper Forensics. Okay, so some of you may have seen, if you're in the forensic community, you may have seen or heard of Sniper Forensics. I actually wrote it on a plane. So it's kind of like a message in a bottle written in the back of a bus. So I kind of wrote it in the back of a plane, but it makes sense. And just a little bit of background. Comic nerds and computer nerds kind of flock together and are kind of the same people. So one of my best friends is a comic book artist, and I wanted to give him a shout-out for drawing this for me. This is the official Sniper Forensics logo, drawn by a gentleman named Dan Christensen, who's a fantastic comic artist. So if you're a comic nerd, please check out Gen-Z and stuff. Shameless plug, done. Okay, so Sniper Forensics is, again, the concept of going after the data that's necessary. And we use the term indicators of compromise for that. It's not going after a bunch of crap that you probably don't need. You don't need full disk images to see what's happening in memory. You don't need full disk images to create a forensic timeline. You have to know what the data is that you're going after and why you need it. And this is why I don't like point-and-click monkey forensics: because you don't know what's happening in the background. You load it into EnCase, you click Sweep Case, and you ask, what did that just do? Well, I'm generating output. That's awesome. What kind of output are you generating, why are you generating it, and how is it doing it? I don't know. So open source tools, in my opinion, are so much better, because I get what's happening with the ones and zeroes. And I'm making that decision, not a developer for a software company that a sales guy told me I needed. It's me as the engineer going, here's what I want to see. 
So we have, you hear about the APT, and that's awesome marketing by Mandiant. So if anyone here is from Mandiant, great marketing for APT. And I think that was born out of the AFOSI, actually. So anyway, we have the OPT, the organic persistent threat, because attackers don't attack something and go, damn, that didn't work, and then walk away, right? It's a constantly evolving attack surface, and the attackers are constantly going after stuff, and if it doesn't work, they're not packing up shop and walking away. They're trying new things, trying new exploits, and Microsoft is certainly helping that by having so many, you know, vulnerabilities embedded within their code, with applications as well. So it's a constantly moving, organic threat. They're highly motivated, highly funded, and organized. The days of the bad guys being kids down the street, like, y'all, I'm from Oklahoma, so if I say y'all, it's okay, I'm allowed, you know, kids tagging for the heck of it to say, oh, look what I did, that's not the case anymore. The majority of cases that we work with local, state, and federal law enforcement are dealing with organized crime. Hacking has become a high-dollar crime. It's a syndicate. There are components to it. There are moving parts just like any business model, and they really understand return on investment, and they're really good at it, right? Not necessarily the cleanest attacks in the world, but the end result is the accomplishment of something that can be extracted and monetized, and we see it all the time. I think last year it was a $250 billion industry just in credit card theft, right? Not counting the theft of PHI or PII or other types of information that, no kidding, has street value like drugs do. So it's really pretty cool to see how that moves. Additionally, it's a laser versus a big rock, right? I can kill you by smashing you in the head with a rock. It's effective, right? It does its job. 
Versus, if I shoot you with a laser beam from space, you're still going to be dead, but the methods that I used, and how clean and efficient they are, are exponentially better, right? So that's another analogy I like for Sniper Forensics, and target selection, right? I can look at whatever data I think the bad guys are going to go after, right? And you all know as pen testers, you don't pen test for the sake of pen testing, right? When I was a pen tester, my customers would give me a target and say, go get this. See if you can get to that data over there. And they'd put me, you know, seven miles away, three different subnets away, and you have to jump and pivot, but that was the end goal. So who cares how I got through it, because the end goal was that I extracted that data that could then be monetized. So that's where we start. And then we move outwards, because that's the logical sequence of an attack, right? And then you get the best return on the investment, both for the attackers and for us as investigators. And then obviously pivot attacks lead to deeper, you know, penetrations, which is just fun to say, right? So advanced attacks need an equally advanced investigation approach, and that's what Sniper Forensics is, right? We don't image everything. I've been on cases, no kidding, both when I was in the military and as a civilian, where the lead case agent comes in and says, okay, image everything. Like, what is everything? He said, any server, any system, image them all. I've been on cases where I've taken over 250 forensic images. What the hell am I going to do with 250 forensic images of servers? I'm going to load them into EnCase, cross my fingers, and hope to find evil, right? No, it's a terrible idea, right? That's old school, and it relies on something we call autopilot forensics. Click, open book, start reading, right? Again, that's a horrible way of conducting an investigation. 
And in the real world, you would never see a police officer come onto a crime scene and conduct an investigation like that, okay? It just doesn't work. And an additional component: I was talking to a police officer friend of mine, saying, look, what about tools that are, you know, court certified, or vetted in court? People always say, well, EnCase is court certified. Well, it's not. Tools cannot be certified. Investigators can be, and do hold certifications, right? Because it's the investigator doing the investigation, not the tool. And saying that one tool is court certified versus another is like saying, if you take crime scene photos with a Nikon, those are accepted, but if you use a Canon, no, we can't accept that, right? It doesn't make any sense. So Sniper Forensics uses logic, go figure. Most of us in this room are kind of outside-the-box, stomp-on-it, burn-the-ashes thinkers anyway. But we're all logical, right? There's a logical path of progression in the way our thought process works. So applying sound logic using Locard's exchange principle, Occam's razor, and something we've dubbed the Alexiou principle is really how we go at that data, right? We extract only what needs to be extracted. I don't take 250 images, because that's not where the target data resides, right? I go after the target. I allow the data to provide answers. I can't tell you how many investigations I've worked on where, before any evidence is analyzed, the investigator walks in and he's got an idea of what he thinks happened. So all the evidence that he gets, he crams into this box of what he thinks took place, and he ignores everything that doesn't fit his idea of what happened. And again, that's terrible science, right? That's not how we conduct experiments or investigations. We report on what was done, and like math class, right? You show your work and you provide your answers. 
So the way we've broken this down for non-technical audiences, which I know you're not, but just hear me out, because it'll give you a solid understanding of how we approach an investigation, right? There are three components; we call them the breach triad. Think of a bank robbery, right? Old school, you know, black and white movies. You've got the bank robber. He's got to do three things. He's got to break into the bank. He's got to get his white bag with the dollar sign on it, right? Stuff all the money in it, throw it over his shoulder, and make his getaway. Seems pretty easy, right? Infiltration, aggregation, exfiltration. Cyber attacks are no different. You've got to gain access to the target network. You've got to identify the data of value and aggregate it somehow, and you've got to get it from a system controlled by the target to a system controlled by the attacker. Very simple. So when we approach an investigation, that's how we approach it. How did the bad guys get in? What did they jack? How did they get out? Very simple. So the guiding principles, if you don't know, are Locard's exchange principle. Everyone's seen CSI, right? Everyone's heard Gil Grissom talk about Locard's exchange principle. Occam's razor was by William of Occam. And a gentleman named Michael Alexiou, who I used to work for when I was with the X-Force at ISS, developed the third one, but he never documented it. So I asked him if I could document it, and he said yes, so that's where that came from. So basically, Edmond Locard was the founder of forensics in our world. He was kind of the father of the modern forensic sciences. And he said, look, use deductive reasoning, right? It's very simple. There are answers for things, and you just have to follow those paths logically. So, Locard's exchange principle kind of boiled down: has anyone ever seen the glitter test, right? 
You put glitter on your hands, and you shake someone's hand, and now they have glitter on their hand, and you have glitter on yours, and everyone else shakes hands, and the whole room is full of glitter? No one's ever seen that? Okay, well, if you put glitter on your hand and you shake someone's hand, you know, rinse, repeat. So that's kind of the concept. If I touch your shirt, I have fibers from your shirt on my hand. You have oil from my hand on your shirt. Okay? So think about how that applies to computers. If you log in interactively over Remote Desktop, you leave a type 10 logon in the Windows event logs, right? Or if you log in through the command shell, is it a different type of indicator of compromise? Well, yes. If you launch a program, do you leave residual traces behind? Yes. Okay, there is no way around it. You can reduce the amount that you leave behind, but it's always going to be there, right? So that's Locard's exchange principle. The other one's Occam's razor, right? In the Army, we called this the KISS principle. Anyone ever heard that? Keep it simple, stupid, right? The answer with the least number of assumptions is probably correct. So, you having crappy passwords and getting owned, or someone writing 0-day code on the fly: which is more likely? Well, bad passwords require fewer assumptions, right? So that's Occam's razor in a nutshell. Now, the Alexiou principle is really pretty cool. And in my case notes, when we conduct investigations, we actually have this broken down. So: what question are you trying to answer? You're there for a reason. You're conducting an investigation for a reason. Why? What data is the customer afraid was extracted, or what happened to the victim? What data do you need to answer that question? I don't need a mail server image to determine if credit card data was stolen from a database server. So why would I take an image of a mail server? It makes no sense. So that's not the data I need. 
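The Locard example above is easy to make concrete. As a rough, hypothetical sketch (not a tool from the talk), here's how you might scan Windows Security events exported as XML, e.g. via `wevtutil qe Security /f:xml`, for 4624 logon events and pull out the logon type; type 10 is the RemoteInteractive (RDP) logon mentioned above. The sample event and the user name in it are fabricated for illustration.

```python
import re

# Map the numeric 4624 logon types we care about to readable labels.
LOGON_TYPES = {"2": "Interactive", "3": "Network", "10": "RemoteInteractive (RDP)"}

def find_logons(xml_text):
    """Return (user, logon-type) pairs for every 4624 event in the XML."""
    hits = []
    for event in re.findall(r"<Event.*?</Event>", xml_text, re.S):
        if "<EventID>4624</EventID>" not in event:
            continue
        m = re.search(r"<Data Name=['\"]LogonType['\"]>(\d+)</Data>", event)
        u = re.search(r"<Data Name=['\"]TargetUserName['\"]>([^<]*)</Data>", event)
        if m:
            hits.append((u.group(1) if u else "?",
                         LOGON_TYPES.get(m.group(1), m.group(1))))
    return hits

# Fabricated sample record standing in for real wevtutil output:
sample = ("<Event><System><EventID>4624</EventID></System><EventData>"
          "<Data Name='TargetUserName'>haxor</Data>"
          "<Data Name='LogonType'>10</Data></EventData></Event>")
print(find_logons(sample))  # [('haxor', 'RemoteInteractive (RDP)')]
```

The point isn't the parser; it's that the trace exists at all. Every interactive logon writes one of these records, so the only question is whether anyone looks.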
How do I extract and analyze that data? Running Sweep Case on it is not going to tell me if there's credit card data on it. It's not going to tell me if a SQL injection took place. It's not going to tell me if RFI was present, or PHP shells were present. I'm doing what makes sense, right? And then, what does the data tell you? Very simple. So I'm going to introduce what I call real-world penetration testing, though this term is kind of out there in the wild. First, what I mean by real world is, well, we have to remember that a penetration test is a model of something else. And its value is related to how close or far away that thing is from reality. So by real world, I don't mean, like, pen testing like the real pen testers do it. I mean pen testing like the real attackers do it. So let's think for a bit about how black hats go about choosing their methods. Well, first let's think about how white hats do it, because, you know, the assumption here is that we're white hats. So how do we choose exploits? Well, we put metrics on stuff. We look at vulnerabilities and we measure them according to various things. There have been a number of standards over the years. Different vendors, different industries have come up with lots of systems for measuring, you know, the risk of vulnerabilities. But, you know, we've come a long way. We've all grown up, and now there's one standard to rule them all, CVSS, the Common Vulnerability Scoring System, which I'm sure you're all familiar with. Basically, you've got measurements of things like: is the vulnerability remote or local? Is it simple or complicated? Does it require authentication? And what kind of impact does it have on the target system? Right? But I looked at kind of an exercise I conducted with my teammates over the last couple of years, looking at the kinds of attacks we use in penetration tests and which ones are most successful, which ones give us the best bang for the buck. 
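Since CVSS keeps coming up here, a quick sketch of what its arithmetic actually looks like may help. This is the CVSS version 2 base equation (the version current at the time of the talk) implemented as a small Python function; the weights come from the published v2 specification, but treat this as an illustration rather than a reference implementation.

```python
# CVSSv2 weight tables from the spec: access vector, access complexity,
# authentication, and confidentiality/integrity/availability impact.
AV = {"local": 0.395, "adjacent": 0.646, "network": 1.0}
AC = {"high": 0.35, "medium": 0.61, "low": 0.71}
AU = {"multiple": 0.45, "single": 0.56, "none": 0.704}
CIA = {"none": 0.0, "partial": 0.275, "complete": 0.660}

def cvss2_base(av, ac, au, c, i, a):
    """CVSSv2 base score: combine exploitability and C/I/A impact."""
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

# MS08-067 (AV:N/AC:L/Au:N/C:C/I:C/A:C) scores the maximum:
print(cvss2_base("network", "low", "none", "complete", "complete", "complete"))  # 10.0
```

Notice that every input is a property of one vulnerability on one host, which is exactly the "atomic" limitation discussed below: there is no slot anywhere in this formula for chaining, stealth, or reusability.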
And then kind of integrating into that, you know, what I see going on in the real world from the incident response community in terms of what attackers are really doing. Kind of look at that data, and I think these values bubble up to the top of, you know, if I were a black hat, these are the things I'd be focused on. Safety, meaning: is this attack safe to the target? I conduct it in such a way that I'm not going to knock the target over. That gives me stealth, because if you knock the target over, that's going to ring alarms. But just as importantly, it gives me re-usability. I can attack the same target over and over again. So that's, I think, crucial. And power, I mean, what kind of leverage does this attack give me over the target? Invisibility is simply stealth. Again, we want to get in and out and not get caught. And frequency, meaning we want to spend our time attacking commonly found systems, or a vulnerability that's common in the wild, and not some strange one-off system. So one thing to notice, though, if you look at these metrics, is that none of them appear in any form in CVSS. So that was one of the things that struck me when I first put this data together: there seems to be this disconnect between how, as an attacker, I go about choosing exploits, and how defenders go about choosing which things to defend against. It seems like we would want those things to line up. So to reiterate, the white hat metrics are based on vulnerabilities, and the black hat methods are based on attacks. It's kind of about whether your mindset is attack or defense. I think, though, there's, again, another language thing going on here. If you spend a lot of time in CVE data, I think you can get warped into this view that a vulnerability is equivalent to something that can be patched. But this is kind of related to what I was talking about with sploits and exploits. I think that there are lots of vulnerabilities out there that can't be patched. 
Here's a trivial example. You can't patch stupidity. You can exploit stupidity, so therefore there are vulnerabilities out there that can't be patched. I think another difference, when you contrast these things, is if you look at CVE and the stuff that's scored in CVSS, it seems well attuned to public-facing vulnerabilities, and the stuff that's public-facing is scored a lot higher for kind of obvious reasons, but I think it tends to push internal-only vulnerabilities down the stack. I think that's wrong. And even more importantly, though, the white hat metrics are atomic. They're kind of atomic by definition. If you go and, like, read the rules on how to do CVSS scoring, you can't put two different vulnerabilities together and give them one CVSS score; it's just not built to support that. But that's kind of the bread and butter of what attackers do: they exploit this little thing, and then this little thing, and then this little thing, and maybe it's a bunch of low-risk vulnerabilities, but you chain them all together and you get some comprehensive systemic compromise. Just a quick sidebar here. I think CVSS needs some work. It's actually undergoing a process right now to get to CVSS version 3. But here are some things I think it needs to better support. So the basis of CVSS and CVE data needs to, you know, cover stuff that's obviously a vulnerability but that's not being scored. So just to give you a trivial example: ARP cache poisoning. That's one of my favorite vulnerabilities. I use it, you know, every fricking day, and it doesn't have a CVE. It's been a CVE candidate since 1999, but it's never advanced past that. So that's an example of a vulnerability that's in the big bubble but not in the little bubble. Why is that? I think the most critical attack surface is internal, so I want to see scoring that prioritizes that and brings attention to that. 
And I want to see a scoring system that allows us to put vulnerabilities together and, you know, put some kind of conglomerate score on them, so we actually line up where attackers are going with what defenders react to. So, just to quickly move through this stuff so we can get to more of the meat, I think our new goals are going to be based on stealth and safety. If your focus is stealth, you kind of get safety for free, and I think that's cool. And the demonstration of multi-step attacks. I don't necessarily mean, I think classically your mind's going to go to pivoting. Well, pivoting's cool, and yeah, it's important sometimes, but I think it's somewhat blown out of proportion. I'm talking simply about using multiple vulnerabilities in succession to achieve greater exploitation. And then, less importantly, blended methods, the demonstration of data exfiltration, and potentially even the use of black hat tools and methods. So just to summarize: here's the old goal, here's the new goal. So black hats don't heart shells, I don't think. I think they heart, well, RSA, secrets and product designs and credit card numbers and things that can be monetized. So one of the things that you get when you focus on the data, where the money is, instead of on how many shells you can pop is, I mean, you're going to be more efficient. You're going to do less stuff to get more data. So by having this safety-first, stealth-first focus, you're going to put the focus on the data, where I think it belongs. And if you're doing joint exercises with incident responders, you're going to give them realistic data to analyze. 
And as a kind of workaday penetration tester, most important for me is: if I can focus on safety first, and I can convince my clients of the principle of safety first, I can actually set realistic scopes on my tests and not be restricted from looking at certain systems, or restricted from looking at certain network layers, and all that kind of shit. I mean, that's all our fault, I think, ultimately, because over the years we've fucked things up and we've knocked systems over and pissed our clients off, and so naturally they want to restrict where we go. But I think we can fix that. So let me just quickly cover, you know, I'm not the first person to talk about trying to push pen testing toward more real-world methods. I mean, I've been heavily influenced by a lot of folks, not the least of which are these guys here, HD Moore and Valsmith and others, going back to DEF CON 15 if not earlier. I'm not going to summarize what they talked about, but, you know, go find these presentations if you're not familiar with them, read the white papers. And here's some more recent stuff. If you look at this history of dialogue around real-world penetration testing, there seems to be a focus on things like binary exploits versus other kinds of exploits, which I referred to earlier. So, you know, the importance of zero-days. Are they critical? Are they irrelevant? Well, that's all kind of interesting, but what I want to highlight as the fundamental focus of real-world penetration testing is the focus on stealth and safety. So let's take a quick look at what someone who's not applying these methods looks like. We're going to run a couple of scenarios past you. We've got the lazy pen tester, and then the real-world pen tester. We'll look at things first, real quickly, from kind of a network point of view, and then Chris is going to take a deeper dive with a host-level analysis. So, of course, the lazy pen tester is none other than Pentest John. 
And just so that I don't have just a list of attack types, I went ahead and threw up a network IDS, configured it with some default signatures, and ran some of this stuff, just to start to scratch the surface of what a deeper analysis of this might look like from the network layer. So here's a trivial example, Nmap, and of course Nmap is going to trigger a bunch of alerts. You'll hear me say later, I'm kind of down on the whole port scanning thing, and this is one reason why. Triggering alerts, yes, but also, if you've been doing this for a long time, I'm sure you've run into systems that actually fall over when they get scanned. And why? I mean, why are you doing that? Well, so here's another one, specific to a particular protocol. This is SMB, taking advantage of NetBIOS null sessions, or SMB null sessions, and enumerating lots of stuff, maybe from a domain controller, like users and groups and password policies and shit. And, you know, they're just low-level alerts, but it's going to potentially trigger a lot of them. Another example, password brute forcing, trying to guess, you know, weak passwords, in this case against SMB again. In this case, you're not going to see a lot of alerts, but password failure alerts tend to get high severity. So, a few alerts, but they're red. Again, not what you want. And then here's a sploit, right? Classic example: Metasploit, MS08-067. And again, lots of red events, so I don't like these. So what does it look like from the other side? Right, okay, so what we're going to look at now is what the lazy pen tester is going to leave behind. Now, understand the caveat that this was a lab, right? I knew what Tim was, well, I knew when he was going to attack, so there's a certain degree of unrealism going into this, but you should get what we're trying to infer. So the first thing I used was a tool called Winalysis. Has anyone ever used Winalysis? 
It's an awesome tool. It's not supported anymore, but what it does is it basically takes a snapshot of your system on a Windows box, and then you run malware against it, or run an attack or whatever, you run it again, and it shows you the deltas. It's a quick and dirty way to find out what happened quickly. So as you can see, I've got some pretty big numbers there. I've got some red stuff and some squares, right? So basically what that means is all of those are changes to the system, right? So Tim did what he did, and I've got 180 critical registry warnings. I've got some activity on the file system, and I've got some activity with users. So immediately I can start looking at what he did and how noisy his attack was. I don't know if that works on my system or not. Okay, well, it sucks. But what that is, is I'm able to show that that is RDP being enabled, right? I see it through, this is from a forensic timeline, and if you're not familiar with them, there's a really great tool called log2timeline by Kristinn Gudjonsson that will allow you to take the master file table, event logs, and registry last write times, throw them all into one big body file, create a forensic timeline with that, and then perform different types of searches, chronological searches, keyword searches. It's a really, really great way for us to look at data en masse in a really tight, compact way. Additionally, it kind of throws time stomping out the window. I wrote a blog post called Time Stomping Is for Suckers. So if you really want to know how easy it is to detect timestamp modification, go read my blog post on The Digital Standard, where I explain it in depth, so I'm not going to go into it here. I'm just going to show you that RDP has been enabled. RDP, Remote Desktop Protocol, remote administration: there's my infiltration mechanism. So, I did not draw that with a crayon, just in case you're wondering. That's a tool called ZoomIt. 
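The snapshot-and-delta idea Winalysis implements is simple enough to sketch. The following is a minimal, hypothetical Python version, not the actual tool: hash every file under a directory before and after the suspect activity, then diff the two snapshots to see what was created, deleted, or modified.

```python
import hashlib
import os
import tempfile

def snapshot(root):
    """Map every file path under root to a hash of its contents."""
    state = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                state[path] = hashlib.sha256(f.read()).hexdigest()
    return state

def delta(before, after):
    """Winalysis-style report: what appeared, vanished, or changed."""
    return {
        "created":  sorted(set(after) - set(before)),
        "deleted":  sorted(set(before) - set(after)),
        "modified": sorted(p for p in before
                           if p in after and before[p] != after[p]),
    }

# Tiny demo on a throwaway directory standing in for a live system:
root = tempfile.mkdtemp()
with open(os.path.join(root, "a.txt"), "w") as f:
    f.write("one")
before = snapshot(root)
with open(os.path.join(root, "a.txt"), "w") as f:
    f.write("two")                       # modify an existing file
with open(os.path.join(root, "b.txt"), "w") as f:
    f.write("new")                       # drop a new file
changes = delta(before, snapshot(root))
print(changes["created"], changes["modified"])
```

A real tool also snapshots the registry, users, services, and shares, which is exactly where the 180 critical registry warnings above came from, but the mechanism is the same diff.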
It's pretty cool for presentations, but I circled some additional activity. So I see the SAM hive being modified, and I see a new entry being created in the SAM hive within the Windows registry. So for all the Windowsy people: new user. Wow, that's awesome. Good job, hacker. Thanks for creating me a new user. And you called it hacker, H-A-X-0-R, which is kind of easy to spot. Now, I have never actually seen a user ID called hacker in the wild. What they're usually named, when an attacker tries to create himself a backdoor, is something like a system account. The Internet user account. Something that looks kind of official, because we know all end users are what? They're stupid, right? And so unfortunately, they don't catch something like that. And realistically, how many end users know how to parse the SAM hive, look at the last write times in the registry, and see when an account was created? Probably not many. So we do see this user being created, from the registry hives. And we can see, through the forensic timeline, exactly when this took place. So now we call this the window of intrusion. So now, through my timeline and through my analysis, I've got about a, oh, I don't know, 5-minute period in which the attack actually took place. I'm not taking massive forensic images. I'm not looking at images and images and images and massive amounts of data. I'm looking through a timeline, and then through my volatile data, at about a 10-minute period. And I was able to do all that very, very quickly. So one of the things I did is, in my forensic timeline, I knew I had a user ID called hacker. And so I grepped through that forensic timeline, I did a grep, what did I do, I did a grep -i for hacker, and then I piped that to wc -l to get a full count of how many files actually had that string in them. And there were 406. 
So through Tim's creation of that user ID, Windows made 406 modifications to existing files or created new files. Then I narrowed that to only look for files that had been born. Birth is an attribute, the B in the MACB attributes on an NTFS file system: new files on the system. 133 new files from the creation of a single user ID. Very, very noisy. Very easy for us to pick up on. So if you're trying to employ stealth, that's not the way to do it, because it leaves behind over 530 residual trace data elements. Pretty cool.

Okay, so the other thing I can see through my forensic timeline: remember, I said there are different ways to search through this data. There's keyword searching, which works if I have keywords to go off of. What I like more is chronological searching. When I have a window of intrusion, I can look at all activity in that window. I can add registry last write times, I can add event logs, and what's cool about log2timeline is that there are like 30 different supported log file formats, and if you find one that's not there, you can email Kristinn and he'll add it for you. So it's a really, really great tool. So I can add all of this log data and then say: show me everything that took place in this ten-minute time frame. And I can look at all the activity. So I see this download of ipscan-win32-3.0-beta6.exe. Does everyone know what that is? Angry IP Scanner. Again, it's not just that downloading the IP scanner is noisy; downloading any utility onto the system is going to leave residual trace data, and not only in the Internet access logs showing it taking place. Say he didn't use HTTP, he used FTP. What's that going to create? The execution of ftp.exe is going to create a prefetch file. So I know that he executed FTP prior to the file appearing on the system with a new birth date.
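Two of the narrowing steps just described can be sketched with a few lines each, using invented sample data. First, the "born files only" filter: log2timeline-style rows carry a MACB string, and B marks an NTFS birth (creation) timestamp. Second, prefetch: Windows names each prefetch file EXENAME-HASH.pf, so the filename alone tells you what was executed (the hash values below are made up).

```python
# Step 1: keep only rows whose MACB string includes a birth ('B') record.
# The rows are invented for the sketch.
rows = [
    {"macb": "MACB", "name": r"C:\Users\hax0r\NTUSER.DAT"},       # new file
    {"macb": "MAC.", "name": r"C:\Windows\System32\config\SAM"},  # modified only
    {"macb": "...B", "name": r"C:\Users\hax0r\ntuser.dat.LOG"},   # new file
]
born = [r["name"] for r in rows if "B" in r["macb"]]
print(len(born))  # -> 2

# Step 2: recover the executed program name from a prefetch filename.
def prefetch_exe(pf_name):
    """'FTP.EXE-1B2C3D4E.pf' -> 'FTP.EXE' (hash portion here is fictional)."""
    stem = pf_name.rsplit(".pf", 1)[0]
    return stem.rsplit("-", 1)[0]

print(prefetch_exe("FTP.EXE-1B2C3D4E.pf"))  # -> FTP.EXE
```

Neither step needs a forensic suite; they are just filters over artifacts the operating system creates on its own.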
All of that sequentially: like Tim said, a small thing here and a small thing here and a small thing here; chain those together, and I've got a story of what took place that's really pretty good. So I can see that that file was downloaded, and then I can grep through that and see file execution as well. Because the first time a file is executed on a Windows system, what gets created? Prefetch, right? I can parse out prefetch data, I can see the last time that file was executed; it really gives me a great deal of information. So by him being sloppy, kind of intentionally, but to show what can take place and what kind of residual data elements you leave behind by not employing stealth, maybe as a pen tester you can now say to yourself, wow, I didn't know I created that much data behind what I was doing. Maybe I can employ a better method that's not going to leave gobs and gobs and gobs of data behind. Or we can all look at that and go, wow, I didn't know that much data was left behind; that gives me something to look at when I'm forming my window of intrusion on a specific investigation.

So now we've got a new scenario. Okay, let's reset and run the scenario again from the perspective of a real-world pen tester at the controls. Like I said, in my view of real-world pen testing, port scanning is passe; it's just not cool. So what are you going to do instead? Well, the first thing you're going to do is totally passive network analysis. This screen here is just an example of how to do that. This is a Metasploit module called PIG. It was written by a colleague, Ryan Linn, who presented on it at DEF CON 19. It's just a way of monitoring broadcast and multicast traffic, and if it's got a protocol parser for the traffic, it will suck that data in and populate elements of the Metasploit database, so that you can potentially take those things up into the next layers or stages of the attack. But you're identifying hosts on the network, right?
And maybe what kind of hosts they are. And you're not generating any packets at all, so of course you're leaving no trace. That reminds me of a sidebar discussion we could have later in Q&A about detecting promiscuous mode; that's really a red herring. So if we wanted to take that to the next level, we could start getting a little bit active. We can get active at layer two, do our ARP cache poisoning, and start man-in-the-middling traffic on the local network. And we can tie that into the kind of analysis we could do through PIG, and now we're learning new things, because we're not just looking at broadcast traffic; we're looking at unicast traffic destined to other people that we are intercepting. So now we're learning more about the network. Who else is on it? What kinds of protocols are flowing over it?

In the screenshot here, the stuff at the bottom is detects from ArpAlert, just to give you an example of what it might look like if there happened to be a sensor on the network paying attention to ARP traffic. Really, in the real world, I've seen ARP defenses in maybe two percent of the clients I've worked with. The tool at the top is part of a SpiderLabs utility called thicknet. vamp is the part that actually does the ARP spoofing, but the thicknet module is what you would use to intercept and take over, in particular, Oracle and Microsoft SQL Server database transactions, so that you can inject your own commands into the database streams and take over databases. It's really cool. So that's one way we could take it to the next level and start getting active at layers above layer two: take over sessions. Another way to do it is through NetBIOS Name Service response spoofing. So this is a quick little video of an attack that is just so stupid easy. The top third shows Wireshark filtering on UDP port 137.
The middle window shows where I'm going to launch Metasploit and run the modules that actually do the response spoofing. And the bottom third is Windows Explorer, where someone is going to enter an address that doesn't resolve into the address bar. And I think I can manage to click play here somewhere. So once Metasploit launches, the victim in Windows Explorer starts entering an address, and as soon as they hit whack whack foobar, when they hit the third whack, you immediately see the request come through Wireshark at the top. There's foobar getting entered, the final whack, and boom, request; boom, hash. I mean, that's how fast it happens. And the user didn't even hit Enter. They didn't even go to some bogus site. Maybe they see the typo and correct it, and they're none the wiser. At this point, though, you have hashes. It's one of the easiest ways you can acquire a hash, and the important part is that now you can go native, and that's the goal. You want to impersonate real user traffic on the network as quickly as possible, because then everything you leave behind is going to be really hard to distinguish from everything else on the normal network. So, for example, you can start querying other servers with the credentials you have and find all kinds of useful data about the rest of the network, or you can interact and run arbitrary commands. Maybe you can run gsecdump and grab hashes, but that might be dangerous; you're going to want to figure out whether antivirus is on the target first and do some analysis of that. What I like to do first is grab netstat data from the targets I've compromised, and again, without the use of port scanning, I'm now building a very detailed profile of everything that's on the network and what kinds of services are deployed, and I go from there. Okay, so we're going to smoke through this last example, because we've got about seven minutes to get off the stage.
Okay, so in the next example, Tim used a stealthy approach. He was able to execute winexe, which is kind of like PsExec, and it didn't leave any residual traces on the target system except for prefetch files. He went from creating over 500 files to accomplishing the same goal with a different methodology and a different tool to execute commands on the system natively, leaving me two prefetch files to look at, which was awesome, versus 530-plus additional files. The other thing this did was create entries in the event logs, so provided we're lucky enough to have event logs in an investigation, which any investigator will tell you is about a third of the time, we see the service entering the running state and we see the stop control. So, going from over 500 entries to fewer than 10. Okay, really good. Additionally, I can see that the NTUSER.DAT file was accessed when he executed the command as the remote user. Yes, that is Sterling Archer, in case you were wondering. We see the host process being executed, from the prefetch, which is what enabled winexe to run, and then gsecdump.exe was kicked off. I see the prefetch for that, and there go my hashes. Tim just went native that fast.

So in conclusion: attack number one touched more than 400 files, created 133 new ones, and showed indicators of compromise indicating downloads and program execution. Very noisy, very sloppy, very easy to find. Attack number two accessed a single system with a single user ID, creating only two prefetch files and a few entries in the event logs, but not a whole lot. His attack surface for me to investigate was so much smaller, and he still accomplished the same goal through an exponentially better attack. Okay, so to summarize what I've been saying about real-world pen testing, here are my top five ways to get caught. Number five: create new user accounts. I don't see the point in that. Don't do it. Number four: use password brute forcers.
Lock accounts and otherwise trigger or announce your presence. Number three: push shellcode or port scans past a network monitor, or in general, use port scanning. Number two: let antivirus see your password dumper. Don't do that. Number one: crash your target. We didn't give a demo of that, but it's a common problem. So in summary: do the least you can to get the job done, and get better at it.

Now, what we've really been demoing here is what you can think of as an infosec mashup. We've got these different domains, and we're trying to put them together to train each other and make ourselves better. So we've got this space in the middle, and we want to grow it. What other ways can we think of to do that? So predominantly, JTF missions, joint task force missions: what do they actually look like? What do exploits really look like in memory? What do they really look like on a disk? You're not going to find that with EnCase. You're not going to find it with FTK. You're only going to find it by doing real-time analysis in labs with your friends in the pen test community. And then MREs. Everyone's had those. They're delicious. Meals Ready to Eat: 15,000 calories, bind you up for four days. Great stuff. So we have Incidents Ready to Eat: a virtual machine that Tim can attack. It's got all of my detection tools built into it, and it's got the tools he needs to conduct his exploits. So we can do that, extract the data, and analyze what it's going to look like, then take that information and move it outwards. And then future projects, right, further analysis of common penetration techniques. Actually, I was referring to how we've shipped these virtual training labs out to customers. We can build a training lab for folks that want incident response training and give them a pre-hacked virtual lab to work in. Other future projects: we need to do further analysis of common methods, and we need to develop stealthier attacks.
And then that pushes the bar for the other side of the community to develop better response and detection capabilities. I have some places I want to go in terms of pen test research, but I'm not ready to share those yet; there are some things I'm thinking about along those lines. And that's all we have time for today. Thank you very much for being here at 10 a.m.