Good morning, everyone, and welcome to the Village Talks. It is now my pleasure to introduce the very first talk sponsored by the Packet Hacking Village. It is my honor, but I'd rather not actually speak about this gentleman, because all I need to say is this: it is absolutely my pleasure to introduce to you D9. That's his handle. He will be speaking on Pwning the Pwners with MindWare. D9, it's all you. All right. Thanks for getting up early and showing up for the first talk. I'm probably going to start a little late. But this subject, anybody have any background on it? Well, I'm hoping that you're here because you looked at the abstract. This subject is kind of interesting, and I'll get into it a little bit more, but the concept of mindware is really an interesting thought. The theory came out of someone who has no computer background at all. He's actually a psychologist. And what he's talking about is programming of the brain, particularly how that programming affects your ability to make decisions. I'm going to talk a little bit about leveraging that theory and how we can use it to help the defenders out there in the house. Any defenders in the crowd? All right, defenders. Any attackers or ops? Yeah, OK. So this talk is about you, man, and how the defenders are going to pwn you or own you. We'll see if that actually happens. I would like some discussion at the very end of this talk, so I'm going to leave some time at the end, because I'm looking for some feedback. This is early R&D, research that has not really been done yet but has been conceptualized. It's going to take over four years to do the groundwork with the human subjects and the computer modeling to actually build the tool that I'm about to talk to you about. So that's the introduction and background. Any questions or comments before I jump into the presentation?
And for those of you out there, this is not a hacking technical talk. It's actually more of a theoretical talk about how human beings act and how we can leverage those kinds of things. So I would say this is a pretty atypical talk for DEF CON, and hats off to the Packet Hacking Village for being brave enough to put me on the stage to talk about something that's not really technical. All right, so I'm going to get going. So who am I? Well, for one, I am a bureaucrat. I am not a hacker. I spent 41 years in the government. Three operational deployments, one to Pakistan and Afghanistan at the end of my career, because I'm not retiring as a bureaucrat; I need to go out as a warfighter. So: retired colonel, member of the Senior Executive Service, if anybody knows what that is. That's a general officer, flag officer level rank. I was a civilian in the Pentagon. This gentleman up here is a Fed too, so I might as well point him out. So if you're playing that game, there he is. I directed training and education for the Department of Defense. I led their Force of the Future working group. I led their cyber workforce development and recruitment efforts as well. I have a doctorate from the University of Pennsylvania. The focus was on hackers, and it really was about the sociological and psychological aspects of what makes a hacker extraordinary. We can talk a little bit about that along the way too. And now I'm an independent AI and cyber researcher for a small R&D company. We're a 501(c)(3). I'm not going to mention the name of the company, but that's what I do now for a living. So why do I think that this approach, and we'll talk about what this approach is, is viable? The interesting thing is, in the defensive community, how much emphasis do you guys place on really understanding the attacker? Yeah, right? A lot of the solutions on the defensive side are focused on the technology aspects. But the weakest link in an attack is what? Their techniques? No, it's the person.
It's the individual. It's the human. So this initiative is really about: how do I go after the human? To hell with the technical stuff, let's mess with the human attacker. How many people have heard of the term cyber psychology? One. Anybody else? Just because I read it on the slide? OK. It's a new term; I first heard it within the last year. My dissertation was in 2018. I did cyber psychology work, and I didn't know it. But it's a new term, and you can see what it's about; I'll define it later in the presentation. So again, like I said, this presentation is academically grounded. It lays out a conceptual R&D design, and it leverages cyber psychology, AI, and game theory to build a defensive tool to attack the attacker. So there's some foundational research. One of them is called the Tularosa study. Anybody familiar with this study? Oh, look at that. What do you think about it? What was that? Yeah, I can't hear you; it's too loud in here. The Tularosa study was done by the Department of Defense and the Department of Energy. It's the first study that looked at a very large group of proficient red teamers and tried to mess with their heads. There were 130 participants. It was mostly manual, so they would put them into part-task exercises, hacking kinds of tasks, and ask them to work through them. And then they would plant deception, or no deception, and tell them that maybe there was deception. So it caused a lot of confusion and frustration and friction on the part of the attackers, because sometimes they're like, is it real or not real? Am I being messed with or not messed with? And it really slowed them down when it came to their ability to achieve the objectives they were asked to achieve. So that's one study. The second one, and there's at least two, but this is one that I highlight, is on decision-making biases in cyber attackers. Are there very specific sets of biases? We'll talk a little bit about what those are.
The Department of Defense conducted this research. It studied an initial list of biases that affect attackers, and the outcome was a list of those kinds of biases. We'll talk about those also. There's another study that developed a list of over 80 such biases. And these are important because these are the things we're gonna go after when an attacker shows up in our network. We're gonna leverage these biases. So can we turn the prey into the predator? Defensive people, are you happy about that? Do you wanna be a wolf or a lamb? So this study really is about saying: guys, stop crouching in the corner, waiting for somebody to do something to you. Take an offensive stance, go after the attacker. Don't wait for them to act; control the situation. Make them feel they're in control, but you're really the one doing it. The other thing that's kind of interesting is that defenders have home field advantage. You know your network. The attacker may or may not. Yet we don't recognize that as an advantage from a defensive perspective. And because you know your network, there are things you can do inside of it to affect the ability of that hacker to be successful. And then, can we automate this approach and use game theory to double down on what we're doing to the attacker? I'm gonna use a very familiar framework, the five phases of a hack. I'm not gonna go into them; I think almost everyone in this room understands what those are. Any questions so far? All right. So the building block concepts. There's the definition of cyber psychology. It's a brand new field; I would say it's probably a couple years old, maybe two or three. But it's really about understanding the human behavior of attackers and how that human behavior can be used to manipulate the attacker, or the defender for that matter. One could call it social engineering. I wouldn't exactly categorize it as social engineering; I think it's more nuanced than that.
Some of the components: first, cognitive human bias. This is a subconscious process by which one makes a decision. Many times, and we'll talk about it, you don't know why you made that decision; you just make it. That reveals a vulnerability, because those kinds of automatic, lizard-brain, emotion-based decisions are vulnerabilities. The other thing we need to talk about is this concept of biosensors and triggers. The biosensor is used to detect the cognitive vulnerability in the attacker, and we'll talk about what those are, and I'll take some suggestions from people here. And then the triggers are the things that, once I know what that vulnerability is, let me manipulate my network or my host operating environment to take advantage of those vulnerabilities and suck the attacker into the things that I want them to look at, as opposed to the things that they want to look at. So I'm gonna hit a little theory. I hope I don't put any of you guys to sleep, but this is a really interesting theory, and it breaks the way we make decisions up into three separate minds. The first one is the autonomous mind. That's on autopilot. You don't really think much about the decisions you make using that part of your brain because it's rote. I mean, this is a really interesting term here, right? It comes from over-learning and practice. So when you drive a car, do you think about actually moving the steering wheel and putting your foot on the brake or the gas? No, it's learned behavior. You do it automatically. And that's a physical example, but essentially what we're saying is the brain does the same thing. There are a lot of decisions it makes on autopilot. That's a vulnerability. The next level is really about the critical and intelligence part of the way we make decisions. The algorithmic mind is what you measure when you measure IQ. It's your ability to do analytical and logical reasoning.
But interestingly enough, on top of that is something called the reflective mind. This mind is about your beliefs. It's about your opinions. It's about what you've learned through culture, or something called habitus, right? So it's learned behavior, and it's reflective in nature, meaning that you think about and understand why you have the beliefs that you have. That is what is measured when you take a critical thinking test, right? So it's not the intelligence part; it's your ability to reason using that part of your brain. Those three things are important because of the way in which they affect us making decisions. Now, when you look at this chart, this is interesting, right? Sometimes, when you're faced with a situation where you need to make a decision, you just make a response. You just respond, using that automatic part of your brain. However, the algorithmic mind has the ability to override, right? How many times have you experienced an emotional event and, instead of having the emotional outburst, you decided not to do that? That is the algorithmic mind overriding your emotions, right? That's an important capability. Sometimes we just let our emotions take over and we make this response, but sometimes we go, hey, wait a minute, maybe I should stop and think about what I'm about to say or do before I do it. That's this override function. The reflective mind also has an override capability, because the problem with the algorithmic mind is that sometimes it's lazy, right? It'll do a little bit of logical reasoning and then go, okay, I got it. How many times have we done that? We'll talk about cognitive misers. So the reflective mind then allows your beliefs and some of your opinions to get inserted into the creation of alternate approaches to the way you make a decision and, boom, out it goes, you have a response.
So that is really important, because I'm gonna leverage some of those vulnerabilities, particularly the ones that occur down here, or that occur in this area, or where we can contaminate your mindware up here in your opinions and beliefs. So let's talk about the cognitive miser. How many of you people really enjoy doing really hard thinking? Anybody? Yeah, and by the way, that is a trait of a lot of people in this business. That kind of thinking is something they enjoy doing. But even in that process, do you want to work harder than you need to? That's the question. So this cognitive miser concept is really about: do I really want to work that hard for the gain that I'm looking to get? So your brain goes, here, let me take some shortcuts. Not fully automatic shortcuts, but shortcuts. So sometimes the miser will fail to override the autonomous mind and you'll just make a decision. Sometimes it'll go into a more reflective mode or a more logical mode, but still with some kind of focus that maybe overrides other data points in the environment, and you make another decision that, again, is maybe sub-optimized but allows you to not have to put too much effort forth. And then finally, we talked about this default to the autonomous mind, where it just takes over. Now this next one is really interesting, because this is about the higher levels of thinking that I talked about, the algorithmic brain and the reflective mind. This is a big problem: mindware contamination. The things that are happening in social media could be categorized by some as mindware contamination. They're going after your beliefs and understandings of things, and they're contaminating the ability of your reflective mind to really use good data to make decisions. That's also a vulnerability, and if you look at what's going on in social media, you can see there are a lot of people that understand that. Mindware gaps are where you don't really know things about your brain that maybe you should know.
So again, you make decisions with gaps in your knowledge and your understanding of things. Any questions or comments there? All right, so that's enough theory. So how can I use this as a tool? How can I use the framework I just talked about as a defender's tool? Well, in the recon and scan phase, can I draw the attacker into the attack surface by, say, planting known vulnerabilities? So there's some honeypot kinds of things here, but could I go, hey, I'm gonna put a vulnerability out there for the bad guy, or gal, to see, in hopes that they're gonna use one of these biases and go, yeah, I got them. I'm in control. So if you look here, there are a couple of great biases, right? This one here, representativeness, is about using a model without understanding the probability of that model actually producing results. So you can go, I believe this is gonna be the answer, but the probability of that actually being true is one in 100 or one in 100,000. But it doesn't matter, because you're like, well, I like that model. So that's a bias. So if I show you something that goes, hey, this Apache server is vulnerable, they haven't patched it, and you go, I got them. Do you? We'll see. Illusion of control: overestimating your abilities, believing that you are in control at all times. I was a fighter pilot. We suffered from something called the delusion of determinacy. What does that mean? It means that I went out and did something really, really dangerous, and I believed I could do that really dangerous thing because I could control the dangerous environment. Can I control the dangerous environment? Not really, but I told myself in my reflective mind, I'm in control, the airplane's not flying me, and I'm gonna kill all the bad guys, and they're not gonna kill me. And you have to tell yourself that in combat, because otherwise it will take you over.
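To make the bias-to-bait idea concrete, here is a minimal Python sketch of how a defender might catalog the biases just described (representativeness, illusion of control, confirmation bias, sunk cost) against planted signals. The data structure, field names, and trigger descriptions are all hypothetical illustrations, not part of the speaker's actual tooling:

```python
# Hypothetical sketch: mapping attacker cognitive biases to deceptive
# "triggers" a defender could plant. Names and contents are illustrative.
from dataclasses import dataclass

@dataclass
class BiasTrigger:
    bias: str              # cognitive bias to exploit
    signal: str            # what the defender plants or displays
    intended_effect: str   # how the attacker is expected to respond

BIAS_CATALOG = [
    BiasTrigger("representativeness",
                "decoy Apache server with an apparently unpatched CVE banner",
                "attacker assumes the familiar model applies and commits"),
    BiasTrigger("illusion_of_control",
                "easy early wins (weak credentials, open shares)",
                "attacker overestimates control and takes more risk"),
    BiasTrigger("confirmation_bias",
                "artifacts matching the attacker's prior recon findings",
                "outlier evidence of deception gets discounted"),
    BiasTrigger("sunk_cost",
                "multi-step obstacle course with partial progress markers",
                "attacker keeps investing rather than backing out"),
]

def triggers_for(bias: str):
    """Return the planted signals that target a given bias."""
    return [t.signal for t in BIAS_CATALOG if t.bias == bias]
```

A real system would populate this catalog from the bias studies the talk cites; the point here is only that "bias, planted signal, expected response" is a natural unit to automate over.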
It's called the delusion of determinacy, and it is a delusion, but it's one that you need to cope with the environment you find yourself in. Confirmation bias: not recognizing the outlier data, or other data that makes sense, and going, I got this, man. This is the way it always is, and I see it again, so it must be right. I can leverage that. Taking greater risk when you perceive increased safety, that also is something I can manipulate, right? So I can go, oh yeah, it's okay, come on in. I got you, man, you're good. So you take more risk than you probably should. The next one is something that came out of my research, and it's definitely a core soft-skill trait of high-end hackers: there is a requirement for structure and understanding. But the other interesting thing my research showed is that there's also a high level of risk acceptance, a pairing you don't normally find in human beings. If you're someone that is very process-oriented, you're not likely to take high risks. But in your community, the opposite is true. You prize structure and process, yet you also are willing to take great risks and have a tendency to be a bit of a rebel. So look around: how many rebels in the room? I'm one, for sure, okay? And then that sunk cost fallacy: I gotta double down on this, man, I got this. Even though it's costing me a lot of pain, I'm gonna keep going, because I got this. Okay. The other thing that you have to understand, to draw them in, is you have to know a little something about the interests of the APT or attacker, and you have to understand their TTPs. So what are their standard TTPs? How do they normally prosecute a target? Can I use that to my advantage by sucking them into an area, an attack surface, that I want them to come after? And then, is there any prior profiling or footprinting of that particular attacker? All those things go into the calculus of how I draw them into my honeypot.
Once they come in, once I've drawn them in using those kinds of generalized things, I'm now gonna put them through an obstacle course. That obstacle course is designed to put them through part-task exercises that allow me to suss out their cognitive vulnerabilities. Are they conducive to loss aversion? Are they conducive to representativeness bias? Are they conducive to the sunk cost fallacy? Once I understand those things by putting them through this obstacle course, now I can manipulate the structure of the network to channel them to where I want them to go. I can also use that to develop these things called cyber psychology informed defenses, which are combinations of sensors and triggers that I can put into a routine to go after the attacker once they've gone through my obstacle course and I've identified what their vulnerabilities are. The other thing I'm using is hypergames. Anybody familiar with what a hypergame is? A hypergame occurs when one player doesn't fully understand all the strategies of the game. So the logic of that is: as the defender, I understand the logic of the game. The attacker doesn't know that I'm pwning them. So that model is an outstanding game logic approach when one side doesn't know the rules as well as the other. So this is an example, a system overview, of how I would build this tool. The attacker comes in and does a recon and scan. They find a vulnerability that I planted in there intentionally. And again, it goes back to: what are the generalized biases that I can take advantage of? What are the enemy TTPs that I can take advantage of? What prior profiling is there? What is the cultural habitus of the profile of the person that I think is trying to attack me? And how do I suck them into this particular approach? Once they get in, I'm just going to use this decoy Apache server.
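The hypergame idea, where the attacker optimizes against the game they perceive while the defender knows the true game, can be sketched with toy payoff tables. The strategy names and payoff values below are made-up illustrations, not anything from the actual research:

```python
# Minimal hypergame sketch (illustrative payoffs): the attacker best-responds
# to the game they *perceive*; the true game includes the deception they
# don't know about. Keys are (defender_strategy, attacker_strategy).
TRUE_PAYOFF_ATTACKER = {          # what the attacker actually gets
    ("harden", "exploit"): -1, ("harden", "withdraw"): 0,
    ("deceive", "exploit"): -5, ("deceive", "withdraw"): 0,
}
PERCEIVED_PAYOFF_ATTACKER = {     # what the attacker *believes* they'll get
    ("harden", "exploit"): -1, ("harden", "withdraw"): 0,
    ("deceive", "exploit"): +3,  # the planted "vulnerability" looks lucrative
    ("deceive", "withdraw"): 0,
}

def attacker_best_response(defender_move, payoffs):
    """Pick the attacker action that maximizes the given payoff table."""
    return max(("exploit", "withdraw"),
               key=lambda a: payoffs[(defender_move, a)])

# The attacker reasons over the perceived game and walks into the trap:
move = attacker_best_response("deceive", PERCEIVED_PAYOFF_ATTACKER)
actual_payoff = TRUE_PAYOFF_ATTACKER[("deceive", move)]
```

The asymmetry is the whole point: under the true table the rational move against "deceive" is to withdraw, but the attacker, seeing only the perceived table, chooses "exploit" and eats the penalty.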
And the mission here is: an APT has been asked to go after sensitive data about a stealth fighter on a contractor's site. So they're coming after me. I don't have to draw them in too hard; I know they're coming, and I kind of know who the APTs are that might want to do that. So how do I put something out there that's discoverable in their reconnaissance and scanning that draws them into a point of attack that I'd like to see them attack? Once they're in, I would use some type of intrusion token. So I'm not going to rely on the IDS. I'm going to use some type of intrusion token that trips and tells me they're in the network. And once I know they're in the network, then I channel them off into my obstacle course, and I put them through a series of exercises to suss out those cognitive biases that we just talked about. And once I know what those cognitive biases are, customized to that particular attacker, I can now start to channel them into the things that I want them to go after. The thing that's really important in here, and you can see it here, is that there's a game controller and reasoner, there's a set of cognitive models, and there's a set of these psychologically informed defenses that all feed my game reasoner and provide feedback into the cognitive models that I've developed. And it's going to take about three years of human research to develop both this data bank and that one. That's what we're looking at right now. But once I have that developed, then my game controller and reasoner controls and rheostats how I apply those psychological defenses, based upon the feedback that I'm getting as they proceed through the phases of reconnaissance, the phases of a hack, and also what I've learned about them as individuals. Now, the interesting thing is, sometimes people come in scripted, right? So I have to break that script. I have to get a human intervention to occur so that I can go after that vulnerability. So that's gonna be a hard point.
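A planted intrusion token of the kind described, one that trips independently of the IDS, might look like the following sketch. The decoy path, token format, and the idea of calling `observe` from whatever chokepoints you control (web logs, auth hooks, DNS) are assumptions for illustration:

```python
# Sketch of a planted intrusion token (honeytoken): any use of the decoy
# value trips an alert independent of the IDS. Paths and token values
# are made up for illustration.
import logging
import secrets

class TokenTripwire:
    def __init__(self):
        self.tokens = {}            # token value -> decoy location it was planted in
        self.alerts = []            # (location, context) pairs for tripped tokens

    def plant(self, decoy_location: str) -> str:
        """Mint a token to embed in a decoy file, credential, or config."""
        token = secrets.token_hex(8)
        self.tokens[token] = decoy_location
        return token

    def observe(self, value: str, context: str) -> bool:
        """Call from any chokepoint; True means the attacker is inside."""
        if value in self.tokens:
            self.alerts.append((self.tokens[value], context))
            logging.warning("intrusion token tripped at %s (%s)",
                            self.tokens[value], context)
            return True             # now channel them into the obstacle course
        return False

tripwire = TokenTripwire()
tok = tripwire.plant("/srv/decoy/apache/config_backup.tgz")
```

The design choice matches the talk: the token never fires on legitimate traffic because nothing legitimate ever touches the decoy, so a single trip is high-confidence evidence of intrusion.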
I'll take some feedback on how I might do that. The second thing is, a lot of times with a lower-end APT, the human on the keyboard is the junior operator, and when they run into something tough, they're gonna bring in someone that's more experienced. So the game controller and reasoner and my obstacle course have to be able to understand that the APT just upped their level of competence, and that the vulnerabilities that I thought about exploiting no longer exist for that individual because it's a different person. So that needs to be built into the capability as well. In this particular scenario, there's also a defensive display that tells defenders what's going on, and this is all automated. So you just follow it. As a defender, you're following what it is doing. You can intervene if you want to, but for the most part, this thing is using AI and this game engine to control what's happening to the attacker. As they move laterally in the system and achieve their actions on objectives, there are a couple of things we could do. One of them is we can poison the data, so they get something that looks realistic, but for the most part it's poisoned, and at their level, they won't be able to determine that. So that requires them to exfiltrate bad data, take a couple of weeks to analyze it, go, we got screwed, and start all over again, because the objective here is not to stop the attack. It's to deny or degrade it, to put a cyber penalty on their ability to operate in your network while the defender remains in control of what they're doing. Another thing we've talked about is there's a way in which you can actually show them the real data, but when they go to save it and exfiltrate it, it goes to a garbage file and they exfiltrate garbage. Again, those things will not be discovered until after they've had a chance to analyze what they took from your network. So that's basically how this system is conceptualized from an operational view. Now let's go to some questions and comments.
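The data-poisoning idea, serving records that look realistic and only fail under later offline analysis, could be sketched as a deterministic perturbation of the sensitive fields. The field names, jitter range, and salting scheme below are all illustrative assumptions, not the actual mechanism:

```python
# Sketch: serve records that look plausible but are quietly poisoned, so
# exfiltrated data fails only after offline analysis. Field names and the
# perturbation rule are illustrative.
import hashlib

def _jitter(key: str, lo=0.9, hi=1.1) -> float:
    """Deterministic per-field multiplier, so repeated pulls stay consistent
    and the attacker sees no telltale variation between downloads."""
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16) % 10_000
    return lo + (hi - lo) * h / 9_999

def poison_record(record: dict, secret_fields: set, salt="site-salt") -> dict:
    """Return a copy with numeric secret fields skewed by a hidden factor."""
    out = dict(record)
    for f in secret_fields:
        if isinstance(out.get(f), (int, float)):
            out[f] = round(out[f] * _jitter(f"{salt}:{record['id']}:{f}"), 3)
    return out

# Hypothetical "stealth fighter" document a decoy server might hand over:
doc = {"id": "wing-spec-042", "span_m": 13.56, "rcs_dbsm": -40.0,
       "title": "airframe summary"}
leaked = poison_record(doc, {"span_m", "rcs_dbsm"})
```

Determinism matters here: a hash-keyed skew means two exfiltration attempts yield identical data, so the attacker can't detect the poisoning by diffing repeated pulls.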
So I'll open it up to you. If you could come up to the mic, I think that mic's hot. Come up to the mic and ask your question, and then let's talk about it. So the first question I have here for you: do you think this has implications for future cyber defense? Is this an approach that makes sense to you? Anybody? I'm getting head nods. You got a question? Cool. What do you got? Sure, I'll ask a question. So for certain types of organizations, like if you're a Lockheed or somebody like that and you have really detailed knowledge of what an attacker might be going for, that works, but most organizations are not in that position. For most organizations, attackers don't attack organizations, they attack vulnerabilities, and then they figure out, well, who is attached to this vulnerability and how can I exploit them? Right? So that's slightly different if you have, you know, a stealth fighter or something at stake, but a lot of medium to small businesses don't have that. Are there tools or ideas in here that are helpful to someone who is going to be attacked by the mob, not by, you know, James Bond? Yeah. So that's a good question. And I will tell you, the customer in this particular case was definitely focused on APTs, definitely focused on the kinds of things that APTs are interested in. It wasn't generally thinking about, well, what if I'm a script kiddie just coming in here and trying everything I can to mess with it and steal something? That's not exactly the threat they were talking about. I do think there are some things you can do. A script kiddie is kind of hard, because they're not really doing much thinking, are they? They're just following a script. They don't know what the hell they're doing. They're just following the script and seeing what happens.
I guess, but if those script kiddies are, for example, Russian malware operators, typically what you'll see is that an attack starts with a script kiddie; that's your initial access person, but they're going to broker that access off to someone else who maybe does have very clear motivations, but you don't necessarily know what they're after or why. If it's ransomware, that's pretty obvious. And some of those ransomware operators are very, very advanced, and their motivation is known, but the psychology of that organization is different from the psychology of the person who infiltrated you. I would agree with that. So I think there's a couple of things. One, that's why the obstacle course is there, because it's an opportunity for you to observe the attacker. What is the attacker doing? What things are they looking at? That's the cognitive model part of this. So if you're a ransomware person, that behavior is modeled in here. If it sees that profile, if it sees what would be the typical kind of TTP, the tactics, techniques, and procedures of someone doing ransomware, then that cognitive model talks to the game controller and pulls those defenses from the logic bank. This is part of the secret sauce. I mean, building those cyber cognitive computational models is not easy. There are some databases that you can use. I think the public-facing ones aren't very good, but there are obviously some others that could be leveraged to inform this cognitive model. So: here's the typical behavior of somebody doing ransomware, all right, likely ransomware, let's try something. And then this game controller and reasoner says, well, how did that work? Did I do good? Did I do bad? Are they taking my bait or not taking my bait?
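The observe-classify-respond loop just described, a cognitive model matching observed TTPs and a game controller pulling defenses from a logic bank and reweighting on feedback, might be sketched as follows. The profiles, the defenses, and the toy classification rule are hypothetical placeholders for the real models the talk says are still years of research away:

```python
# Sketch of the game controller / reasoner loop: classify the attacker's
# behavior against a cognitive model, pull a defense from the logic bank,
# then reweight based on whether they took the bait. All names are
# hypothetical placeholders.
import random

LOGIC_BANK = {
    "ransomware": ["expose decoy backup server", "plant fake domain-admin creds"],
    "espionage":  ["stage poisoned design documents", "slow-walk file access"],
}

class GameReasoner:
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        # weight per (profile, defense); higher = more likely to be replayed
        self.weights = {(p, d): 1.0 for p, ds in LOGIC_BANK.items() for d in ds}

    def classify(self, observed_ttps: set) -> str:
        # Toy rule standing in for a cognitive model: assume ransomware
        # actors enumerate backups early.
        return "ransomware" if "enumerate_backups" in observed_ttps else "espionage"

    def choose_defense(self, profile: str) -> str:
        options = LOGIC_BANK[profile]
        w = [self.weights[(profile, d)] for d in options]
        return self.rng.choices(options, weights=w, k=1)[0]

    def feedback(self, profile: str, defense: str, took_bait: bool):
        # "Did I do good? Did I do bad?" -- reinforce or decay the defense.
        self.weights[(profile, defense)] *= 1.5 if took_bait else 0.5
```

Even this toy version shows why the loop handles the "operator swap" problem raised earlier: when the observed TTPs change mid-intrusion, `classify` re-profiles and the controller starts drawing from a different slice of the logic bank.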
So I do think your point is valid, and I do think this will need some tweaking to be a general defensive kind of tool across, you know, a corporate space where maybe you don't know exactly what they're after. But a lot of times it's either corporate espionage or ransomware, so money. I mean, there are only a few motives. There are also some attackers that, just because it's there, want to attack your system. That's a harder person to deal with, because they're just trying to say, well, can I mess with these people? Good questions. Thank you. So we talked a little bit about the viability of the approach. Anybody else? Anyone want to raise what I call the bovine bandera flag and go, ah, no way, man, you're out to lunch, this isn't going to work? Anybody in the crowd think that? Did I make sense? The way I think about this process is that a lot of times people do social engineering based on intuition, right? It's like taking a professional football scout and using that person's intuition about talent, versus taking the Moneyball route that says, hey, man, I can put some logic and reasoning and some algorithms behind who I choose. I think this is the same thing: rather than just leaving it to random social engineering intuition, I can put some science behind it. And I think that science is pretty powerful, and that's kind of the punchline of my talk: guys, we can put some science against this. If you understand how humans think and have a good model of that, then how do you identify vulnerabilities and then leverage those vulnerabilities? That's the hypothesis here. I think we have a couple of years to go before we prove that this can really work or not work, and there's a lot of human experimentation that needs to take place to prove that this really works.
Like I said, there's early work that says there's merit, but not enough data, in my opinion, to really build a tool like this. And I wanted to bring this to this audience to see what you guys think about it. So any other comments or questions? I mean, the last one: it's going to be countered. So if you know I'm messing with you, what would be your strategy, as an attacker, to counter what I'm doing? Ignore it, right? Ignore it. But you ignore it at your own risk, right? Sir? So from a community standpoint, an open source standpoint, a... Can you put the mic a little closer? Yeah, this is a live problem, man. So from a community standpoint or an open source standpoint, defender versus attacker, gamifying it globally: could we as a community do this? Could we as a community make a product or a project that would have value for everybody and still be effective against attackers who are aware of it? So if what I heard you say is that, as a community, if you put this in play, is it something that you think has traction, right? Is that what you're asking? Yeah, basically, if somebody knows they're being gamed and there's a product where they can look under the hood, is it still effective? It's a good question, which is why I put in here: I think once this gets put into play, we are going to have to up our sophistication, right? The simple biases that I talked about, particularly as you go through the obstacle course, you go, why the hell... What is this? Why am I having to go through these hoops to get to my objective? So I look at it from a psychological standpoint, right? Let's say you're dating somebody and they're cheating on you, but you have no concept that they're cheating on you. You're inherently more trusting, more open, more accepting. However, once you have knowledge that a particular game is being played, your biases change, and your ability to be manipulated has changed.
So I guess that's kind of why I'm asking. Yeah, so in some ways, what we saw in the Tularosa study is that the attackers didn't know when they were being messed with, which created doubt, self-doubt, about what they were doing. And you can play the game the other way too: say you have a deception tactic in here when you don't have one. You still cause the same friction, because now the person... particularly, like we talked about, with this illusion of control bias that is pretty common in this community, right? Now all of a sudden that control is being questioned. It puts friction into the system. And that's really what this is designed to do. It's about friction and frustration, imposing a cyber penalty on the attacker, not making it so damn easy, and also putting the defender in a more offensive stance. In my dissertation, I interviewed someone who, when an attacker would do an initial scan and see it was a Microsoft system, the next time she saw that attacker in the network, she would purposely send them a Linux prompt. Their freaking heads would explode. They're like, what the hell happened? What's wrong with my tools? Look, I just did this. It's a Microsoft system. Why the hell am I getting a Linux prompt? Right? So she would mess with them. This was 2017, 2018, and she was already doing that. As a defender, she was already taking an offensive stance to mess with the attacker in a really interesting way. All right, I'm getting the hook. You've been a great audience. I'm really happy with the feedback I've got, and thanks very much. Take care.