Well, I'd like for you to welcome Andy Applebaum. He's going to give a presentation here and instruct us on how to stop, drop, and assess your SOC.

Cool, thanks. Can you guys hear me okay? Am I coming out okay? Louder, okay, I'll try to talk louder. So my name is Andy, and I work at MITRE. Are you guys familiar with MITRE? Awesome, then I don't have to explain what MITRE is, which I always bungle. I work on MITRE's ATT&CK framework. Are you guys also familiar with ATT&CK? That's good to hear, because I have slides on ATT&CK, but not too many, so I'm glad. I'm going to be talking about a methodology you can use to assess your SOC using ATT&CK as a scorecard.

To lead off: traditionally, when we talk about network defense, we have this tendency to treat our network as a castle and focus on the perimeter. That's not as true today as it was a few years ago, but there's certainly a mentality that says: I need to focus on my network perimeter, that's all that matters, and if I patch and remediate all these vulnerabilities, no one's ever going to get in. The reality is that's not true. Adversaries will always find a way to get in; there's always going to be something going wrong. At the end of the day, if you're really only looking at the walls of your network, you're going to miss a lot.

I think most of you might be familiar with the pyramid of pain. Are you guys familiar with it? That sounds like a yes, some head nods. It's a representation of things that are hard for an adversary to change versus things that are easy for us to detect. Hash values are really easy for us to detect, but they're very easy for an adversary to change. IP addresses are similarly easy.
Adversaries just move to different infrastructure. Then come domain names, and network and host artifacts. At the top of the pyramid are adversary behaviors — tactics, techniques, and procedures. These are the hardest things for adversaries to change when they attack our networks, and if we start looking for adversaries by looking for their TTPs, we're going to do better at actually finding them. That's where ATT&CK comes in: we're moving away from hash values and IP addresses, and instead we're taxonomizing what attackers are actually doing at the behavioral level.

Here's kind of what ATT&CK is. I just want to point out it's globally accessible, which means it's on the internet at attack.mitre.org. You can go there right now, at any point in time. It's totally free; we put it out there, so please use it. ATT&CK is amazing.

There are a lot of hard questions you want to ask when you're implementing defenses in your network. The first is: how do I actually move up the pyramid of pain and implement TTP-based detection? You also might want to ask: how effective is my defense? It's one thing to just throw tools into your network, but it's another to say, here are the tools in my network, and here's how effective they are at detecting things. You might also want to ask what your detection coverage is against, say, APT28 or APT29, since we've all been reading the news. You might wonder: here's this APT, they're active, they're targeting people — how do my defenses stack up against the things they're doing? And as you're instrumenting sensors on your network, you might want to ask: is this data I'm collecting, all these logs I'm forwarding into Splunk, actually helping me? Are they useful? What can I detect by using them?
Then the last thing you might want to ask — and this isn't a full enumeration — is: this new product from this vendor that's getting all the buzz, is it actually going to benefit my network? If I implement it, is it going to provide some new capability I wouldn't otherwise have? These are all questions we hope you can address using ATT&CK, helping you move from that perimeter-based model to something more holistic, where you understand more about your network.

So I'm going to talk a little bit about ATT&CK itself. I'll try to keep it brief, since it seems like most of you have heard of it. Traditional SOC defense focuses on the left-of-exploit kill chain phases: reconnaissance, weaponization, delivery, and mainly exploitation. ATT&CK for Enterprise — and I'm going to mainly just say ATT&CK — focuses on the right-of-exploit kill chain phases. It breaks down control, execute, and maintain into high-level adversary tactical goals: things like initial access, persistence, privilege escalation, lateral movement, exfiltration, and credential access — that's a fun one, I like that one a lot. So instead of those high-level kill chain phases, we have these adversary tactical goals. And in the ATT&CK model itself, beyond just enumerating the tactical goals, we also talk about techniques: the actual behaviors the adversary executes to achieve those goals. Along with the techniques, we enumerate groups — the threat actors — with links to the techniques we've seen them use in publicly available threat reporting. Lastly, we include software in there as well.
That includes built-in utilities, because a lot of adversaries love living off the land, as well as custom malware, all linked to the techniques they're able to execute.

This is a quick snapshot of the ATT&CK for Enterprise matrix. It's grown a lot over the years, and I'm going to have different matrices throughout this talk — I think mine are a little dated; I don't have initial access in there — but this is, I think, the one that was most current as of April 2018. At the top level we have the tactics: each column is a tactic. One of the downsides of having this model expand over time is that it's harder and harder to read on a slide. Within each tactic you have the techniques the adversary uses to achieve that tactical goal. Here's an example at the top: you have scheduled task, and that gets blown up in the ATT&CK framework itself. We have a description, we have examples, a little bit of information about the technique in the actual model, as well as specific technique implementations linking out to the threat actor groups and the software that can execute them. All of this is available at attack.mitre.org.

I'm not going to go too deep into it, but there are four key points I like to make about ATT&CK. The first is that the framework is grounded in real data from cyber incidents. Everything is backed by either common red team knowledge or publicly available threat reporting. That's one of the key differentiators of ATT&CK: we're not just enumerating theoretical attacks we've read about in papers; we're really talking about the things that adversaries do execute and that we've seen them executing. Second, ATT&CK enables you to pivot between your red team and your blue team.
I'm hopefully going to talk a little more about that later, but it basically gives you a common language that both your red team and your blue team can speak as they work in your network. Third — this is my favorite — ATT&CK decouples the problem of understanding what adversaries are doing from the solution, the defensive thing you'd want to do. We've just gone and said: here are all these adversary behaviors, here are all the things they're doing. You can go figure out what you want to do about them. Do you want to focus on detection? On remediation? On mitigation? ATT&CK is agnostic to that; it just focuses on what the adversary is doing. Lastly, ATT&CK helps transform your thinking by focusing on post-exploit adversary behavior. This goes back to the castle model: no longer are we saying, here are my walls, my perimeter, no vulnerabilities, I'm safe. This says: no, the adversary is going to do some post-exploit stuff, and if you start looking for those things, you're going to increase your overall security.

ATT&CK is great, but how do we actually use it? One of the things I like to say is that ATT&CK sits at the intersection of four key use cases: threat intelligence, measuring defenses, detection and hunting, and security engineering. Let me dive into them. Detection and hunting — that's really talking about SOC teams, your detection team, focusing on the detection aspect. Hunting falls in here: developing analytics, tooling configurations, as well as how the analyst looks for things. That's one of the key use cases, really focusing on the detection point of view. I'll give an example in a later slide.
Pen testing — or maybe more accurately, red teaming — is another big use case, and that's measuring defenses. If you're using ATT&CK, you can say, here's what I think my defenses are, and your red team can go in and say, here's what your defenses were actually able to detect. One use case I like to highlight here is that with ATT&CK, you can have your red team conduct engagements that emulate known adversaries, because of the reporting data we built off of, where we say: here's this adversary, here are the things we've seen them do. One of the nice things is that ATT&CK helps each of these use cases inform the others. Here, measuring defenses can inform your detection and hunting by telling you: hey, you're missing these things when you're running your detections and doing your hunting.

Cyber threat intelligence is a huge use case for ATT&CK. There are a lot of cool things you can do with CTI and ATT&CK. One of them is ingesting and sharing behaviors for situational awareness. Instead of sharing a file hash or an IP address, you might want to share the behaviors you've seen associated with a threat actor. Maybe you do link some file hashes and other things in there as well, but really it's about sharing those behaviors so you have a better understanding of what adversaries are actually doing. I have a nice slide about this: identifying and mapping the changing threat landscape — tracking how adversaries are modifying their behaviors. Maybe two years ago we saw an adversary use one set of TTPs, and today we're seeing them use a different set.
We can start keeping tabs on what the trends are, and maybe even forecast what adversaries might do in the future. CTI is really great because it helps inform measuring defenses: your CTI team can tell your red team, here's what we think our threat actors are doing — the guys we should really care about — so go emulate those threat actors; don't do random things, focus on the adversaries we think are going to target our networks. And they can inform your detection team from a similar perspective: here are the adversaries we're worried about — are we secure against them or not?

The last one is security engineering. The big use case for ATT&CK here is informing strategic decisions to prioritize your investments. A better way to say that might be: using ATT&CK to guide how you architect your network — what sensors and tools you deploy, what logs you collect. ATT&CK can help you navigate where you should be looking and what you should be doing.

This is probably one of my favorite things to talk about with ATT&CK: a notional defensive gap chart. The idea is that we can take ATT&CK, use it as a matrix, and diagram which techniques we have high confidence we're going to detect, medium confidence, or low confidence. It's very simplistic. You can obviously use quantitative methods — say, I think I'm going to detect credential dumping at a 20 and scheduled task at a 30 — and start assigning weights and all sorts of fun metrics. But here it's really simple: we're just going to have a color-coded chart that says, here's what I think my defensive coverage looks like.
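As a rough sketch of what such a scorecard can look like as data — the technique names below are real ATT&CK entries, but the confidence ratings are invented for illustration:

```python
# Notional defensive gap "scorecard": technique names come from ATT&CK,
# but the confidence ratings here are made-up examples.
CONFIDENCE = {"low": 0, "medium": 1, "high": 2}

coverage = {
    "Credential Dumping": "low",
    "Scheduled Task": "medium",
    "Remote File Copy": "high",
    "System Information Discovery": "low",
}

def gaps(coverage, threshold="medium"):
    """List techniques scored below the given confidence threshold."""
    cutoff = CONFIDENCE[threshold]
    return sorted(t for t, c in coverage.items() if CONFIDENCE[c] < cutoff)

print(gaps(coverage))  # ['Credential Dumping', 'System Information Discovery']
```

From here, the color-coded matrix is just a rendering of this mapping, and the quantitative variant is the same structure with numeric weights instead of three labels.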
What's great about ATT&CK is you can visualize all these TTPs, all these behaviors, in one single snapshot, and I'm going to talk more about this use case in a bit. Branching off a little: another nice visualization is for threat intel. This is a chart — I don't know if you can read it — where in pink are all the techniques we had attributed to APT28 in ATT&CK. We only had six techniques, and I think this was from 2016 or so, an older version of ATT&CK. After some threat reporting — I think this was about a year ago — we saw 14 new techniques in publicly available threat reporting. This goes back to the idea of tracking adversaries and watching how they might be modifying their behaviors. It's biased by publicly available threat reporting, but you're tracking the threats, seeing what they're doing, seeing how they might be changing.

Hunting is another good one. This could be hunting or purple teaming — really, red and blue working together. ATT&CK provides a common language, and here's a simple example with the matrix view: okay, new service, and from new service as a red teamer I jump to credential dumping, and after credential dumping I jump to account discovery. You're walking through what the red and purple teams are doing in a way that's accessible to the blue team, because they're both speaking the same language.

The last one, security engineering, is another fun one. This is visualized by a tool we have that's, again, free and publicly available — it's called CARET.
CARET visualizes the ATT&CK groups; the techniques those groups have been seen to execute; the analytics we have in a data repository, mapped back to the techniques those analytics can detect; the data sources those analytics need to run; and then the sensors that map to those data sources in the data model. The idea is, if I can expand my sensor models — thinking commercially available tools, other potential sources of information — I can start drawing a graph like this and use it to prioritize what I'm doing as I architect my network and choose what to do.

So we have this matrix, and we have this giant question mark: what do we actually do? How do we bring this into our environment? If I'm living somewhere where all I'm doing right now is looking at the perimeter — I just have all these perimeter defenses — how do I get started with ATT&CK? One of the things I think would be great to do is conduct an ATT&CK assessment. I'm going to jump backwards for a second: we can talk a lot about ATT&CK, but if I can come up with this coverage chart, it can help inform everything else I'm doing. So how do we actually come up with this chart? How do we figure out where our gaps are, where our strengths are, where we're in the middle? Where do we jump off from?
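To make the graph idea concrete, here's a minimal sketch of that chain — sensor to data source to analytic to technique. All the sensor and analytic names here are placeholders, not actual CARET content:

```python
# Toy version of the CARET-style data model: walk sensor -> data source ->
# analytic -> technique to see what your deployed sensors let you detect.
# Sensor and analytic names are placeholders; techniques are ATT&CK names.
SENSOR_PROVIDES = {
    "endpoint-agent": {"process monitoring"},
    "netflow-collector": {"netflow"},
}
ANALYTIC_NEEDS = {
    "suspicious-service-install": {"process monitoring"},
    "smb-write-volume": {"netflow"},
}
ANALYTIC_DETECTS = {
    "suspicious-service-install": {"New Service"},
    "smb-write-volume": {"Windows Admin Shares"},
}

def detectable(deployed):
    """Techniques covered by analytics whose data sources are all available."""
    available = set().union(*(SENSOR_PROVIDES[s] for s in deployed))
    covered = set()
    for analytic, needs in ANALYTIC_NEEDS.items():
        if needs <= available:  # every required data source is present
            covered |= ANALYTIC_DETECTS[analytic]
    return covered

print(detectable({"endpoint-agent"}))  # {'New Service'}
```

Adding a candidate sensor to `SENSOR_PROVIDES` and re-running is exactly the "would this new tool buy me anything" question from earlier.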
This is where assessments come in. Yes, we can do red teaming, and that can do a lot of this for you, but this idea is something a little softer — more of an approximation, a first glance, just something to give you a starting point so you can prioritize everything else you're doing. The assessment is a four-phase approach. The first phase, at the highest level, is just discussion: you figure out what you're looking for, what's in scope, what's out of scope, and make sure that everybody in the SOC, from the CISO down, is on the same page — they understand what's going on and what the expectations are; we're not going in there executing attacks, we're estimating what the coverage might be. Once everything is set and you have a schedule and a good rhythm, you're going to want to actually analyze things. There you look mainly at documentation and the sensors you've implemented — things like CONOPS and standard operating procedures. Do you have a document that describes the data sources you're collecting, how you stand up a new host on the network, all the tools you're running, what everybody is doing? You're getting all that documentation and bringing it together. If you're in the SOC, it's kind of easy to know what you're doing, but if you have many different people, you really need to bring everything together to figure it out.
The next phase is to talk to people and actually interview them. It's great to get documentation — you might say, oh, we're running all these tools, we have all these analytics — but people are often doing things a little differently than what's claimed in writing. Things you might want to look for here are known coverage gaps: you might talk to people who say, hey, we struggle with this, or, we're really good at this. Maybe some people are more familiar with ATT&CK and some less, and they can speak to that. Or you might just hear general things like: we kind of struggle with detecting discovery. Anything that helps you understand what your strengths and weaknesses are. Another good one is past performance: if you have past successes or failures, you can go off those and see what you did.

The last phase is processing everything together: we have all this stuff, and now we need to bring it into one complete picture — and you don't just want that coverage chart, you also want a plan for what to do after you have it.

There are two key points I like to make here. The first is that this process is designed to minimize stakeholder involvement. You don't want to overburden the SOC personnel with the analysis — spending a year saying, all right, sit down with me for three hours a day and tell me what you're doing — because then no one's doing their job. The idea is really to focus on the analysis phase. You don't want to be saying, all right, walk me through your day-to-day, because once you start doing that, it's just too much involvement.
At the same time, you want to maximize your usable results, and that means you do have to talk to people — you can't just analyze the documentation, because that only tells you so much. The other key thing — and I'll talk more about this later — is that it's one thing to just say, hey, here are your gaps. But then what? I'm missing these things — what do I actually do about it? Where should I go from here? You don't want to stop there; you also want to come up with a plan for making things better, and that's a big part of this as well.

A little more on analyzing documentation: you're looking for things like tooling, processes, procedures, and methods, and for how tools are actually used — that's hard to get from documentation alone, but if you can, that's great. Analytics provide empirical details on what is or is not detected: it's easy to look at an analytic that's a few lines long — I don't know how many lines an average analytic is — and say, okay, I think this is going to detect this technique or that technique. Specific tools tend to use detection methods that map directly to ATT&CK. I'll give an example later with registry-based detection methods: if a tool says it can detect things that modify the registry, okay, then I've got medium confidence it can detect those registry techniques. The goal here is really to understand how the SOC operates before you start interviewing people, so you're not overburdening them. If you can focus your interviews, you're going to do much better than going in blanket — all right, let's get to know each other, I'm going to ask you some boilerplate questions. If you do the
analysis beforehand, then you're going to know exactly what you should be looking for.

When you're interviewing, you're really meeting with the security teams to understand their general readiness, and I like to bucket this into three main categories. The first is known coverage gaps: if the people already know all the gaps, your job's going to be really easy. Sometimes people will know one or two, or maybe you ask them, hey, what do you think of WMI? And they say, oh, you know what, I think we can catch that — or, we ran a red team exercise and we weren't able to catch that. The next big bucket is general evidence: general blind spots, general things being missed. Lastly, tooling and method details: how does the team operate, and how do they use and configure their tools? Are tools being deployed off the shelf, or customized a little? That will change how you evaluate how a tool is being used.

When you're processing results, you tend to have four big buckets of things. The first is interview results — combined empirical and general evidence, all mapped back to ATT&CK. The next is data analysis, which you can also map back to the ATT&CK model. Then tooling and sensors — same thing, just another big bucket. And then SOC procedures, which help you understand how the analytics and tools are used.

That's kind of vague, so I'm going to walk through a couple of examples. This isn't a real assessment — just some snapshot pictures of the matrix that give an idea of what you'd be looking for. There's a link down here to CrowdStrike: they have a product called Falcon, and there's a mapping of what its ATT&CK coverage is against a specific APT — the APT3 evaluation in particular.
If you're assessing an environment that's running this tool, this is really easy, because the mapping is already available for you. It provides a nice little snapshot: things in green were detected; things in yellow were detection capability gaps. Some things weren't tested because they were out of scope, and other things weren't tested for other reasons. So this is great if it's available — and another common situation is you've already done an assessment of a tool yourself: hey, I've seen this tool before, I know what its ATT&CK coverage looks like, I don't need to do it again. That's a great place to start.

Here's the example I mentioned about registry-based defenses. You might have a tool that monitors the registry — it just kind of does stuff with the registry — and you're not really sure how good its coverage is. But you might have medium confidence that it covers any of these techniques that fiddle with the registry, and those are techniques you probably shouldn't be focusing on if you're trying to remediate gaps. So if you look at the data sources the tools are watching, you can figure out where that tool's coverage strengths and weaknesses lie.

I say "analytics" a lot, so here's a sample analytic. This is just some pseudo-code — you can see it's searching for destination port 445 where the protocol is an SMB write request. You might look at your environment, see a bunch of analytics, and get code like this.
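A runnable version of that pseudo-code might look like the following — the field names are assumptions about a generic flow-log schema, not any particular product's:

```python
# Toy version of the SMB-write analytic from the slide: flag flows with
# destination port 445 and an SMB write request. Field names are assumed.
def smb_write_events(flows):
    return [f for f in flows
            if f.get("dest_port") == 445 and f.get("proto") == "smb_write_request"]

flows = [
    {"src": "10.0.0.5", "dest_port": 445, "proto": "smb_write_request"},
    {"src": "10.0.0.5", "dest_port": 443, "proto": "tls"},
]
print(len(smb_write_events(flows)))  # 1
```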
You might say, okay, this looks like it's looking for an SMB write request, and if you think a little more about it: an SMB write request can detect remote file copy pretty well — that'll catch it most of the time. Windows admin shares — that's a moderate level of coverage, call it medium. And valid accounts — that'll provide some coverage of that technique too. If you can do this for all your analytics, that can help you understand what your coverage is. As a note, I've cheated here: that analytic is actually from MITRE's Cyber Analytics Repository. We have a repository of analytics at car.mitre.org, so please feel free to go there and look at some.

If you take all the analytics and put them in one matrix view, you get a picture of what the overall analytic coverage is. Here I've taken five different analytics and created a coverage map saying, okay, these are all the things those five analytics can detect. More often than not, when you analyze all the analytics that are running, some or many of them will hopefully map to the ATT&CK framework, but others might not — it's a bit of a creative process figuring out which of them map to the matrix and which don't.

Here's a very simple example of what you might expect when interviewing personnel — I've cheated again and done something very basic; you'd probably get more interesting answers from real people. In this example, we interviewed them and they said: we have mediocre success with things going across the perimeter — decent coverage, not high confidence, but maybe medium. So I said, okay, anything that goes across the network, anything that really goes at or across the perimeter, I'm just going to mark as medium confidence of detection.
It's very simple and straightforward. You can do more interesting things: you might talk to people and they say, we struggle to detect discovery, and you highlight that; or, we're okay with perimeter-based detection, and you mark that.

So I've given you these data sources — how do we bring it all together? We start with one of the tools, then add another: here I've taken that Falcon coverage and added the registry tool, so you get all the things they both can detect and the things they both miss, brought together. It's the same with analytics: you can see the coverage expand as you bring each of these in. At some point I'm just building — everything's nice and increasing all the time, so if I find coverage, I just add it. Sometimes you might find one source saying, I do have coverage, and another saying, I don't, and in some cases you might want to prioritize the evidence that says you don't have that coverage. Then when you bring in the interview results, here's what the end picture looks like. One thing I'd highlight: the coverage before this slide showed no confidence — really low confidence — in exfiltration, because one tool said we have no confidence there and nothing else claimed any. But when we interviewed them, they had medium confidence they could detect these things going across the perimeter, so I might say, okay, maybe we have medium confidence there rather than none. And that's a pretty simple example.

So it's not enough to just say, here are all your gaps — what do you actually do? How do you go about that? Essentially my answer is that you need a prioritization plan: you need to focus on remediating specific gaps. As a simple example, the question is: if I have all these things in white, all these low-confidence things, which ones should I focus on? One idea is to focus on those that are more commonly used.
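A sketch of that roll-up, using the simple "coverage only ever increases" rule from the talk — each technique keeps the highest confidence any source reported. The technique names are real ATT&CK entries, but the scores echo the exfiltration example and are otherwise invented:

```python
# Merge coverage evidence from tools, analytics, and interviews by keeping
# the highest confidence any source reports per technique.
RANK = {"none": 0, "low": 1, "medium": 2, "high": 3}

def merge(*sources):
    merged = {}
    for source in sources:
        for tech, conf in source.items():
            if RANK[conf] > RANK[merged.get(tech, "none")]:
                merged[tech] = conf
    return merged

tool_coverage = {"Remote File Copy": "high",
                 "Exfiltration Over Alternative Protocol": "none"}
interview_coverage = {"Exfiltration Over Alternative Protocol": "medium"}
print(merge(tool_coverage, interview_coverage))
```

Note this deliberately ignores the harder case mentioned above, where conflicting evidence might make you downgrade a technique instead of taking the maximum.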
This is a notional chart — it's very old — but the techniques that are highlighted more are the ones more commonly seen. It's a little notional, but credential dumping, file and directory discovery, registry run keys / start folder — these are all techniques that are pretty commonly seen. If anyone's interested in this slide: we have something called the ATT&CK Navigator — free and publicly available — that takes in layer files, and we have a layer file with better data, so talk to me afterwards and I can tell you more. So that's one approach: focus on the techniques that are more commonly used; obviously this is biased by whatever data is currently in the ATT&CK model.

Another approach is focusing on specific groups. Here I've taken APT28 and Deep Panda: APT28 in blue, Deep Panda in yellow, and techniques that both of them execute in green. I'm saying I want to focus on both of these threat actors, so if I'm coming up with a prioritization plan, I'm going to focus on the techniques that both of them execute. Obviously you can go another way and say, I'm going to focus on the techniques APT28 executes, as opposed to the techniques Deep Panda executes.

When you're done, hopefully you'll have a prioritized coverage map. This again is notional, but you highlight a technique here and there and say, these are the ones we really need to go for right away — this is where I'm going to get the biggest bang for my buck.
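The group-overlap idea reduces to simple set operations — the technique sets below are abbreviated examples, not the full ATT&CK mappings for either group:

```python
# Prioritize low-confidence techniques that both groups have been seen to use.
# These sets are abbreviated examples, not full ATT&CK group mappings.
apt28 = {"Credential Dumping", "Registry Run Keys / Start Folder", "Input Capture"}
deep_panda = {"Credential Dumping", "Registry Run Keys / Start Folder", "PowerShell"}
low_confidence = {"Credential Dumping", "PowerShell", "Process Discovery"}

both = apt28 & deep_panda                 # techniques shared by both groups
priority = sorted(both & low_confidence)  # shared AND currently a gap
print(priority)  # ['Credential Dumping']
```

Swapping `apt28 & deep_panda` for just `apt28` gives the single-group variant mentioned above.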
Once the assessment is done, you come up with a remediation plan. I've got a lot of words here, but the main thing you want to do afterwards is really implement an ATT&CK mindset. You want to move away from that perimeter-focused, pre-exploit, no-one's-ever-going-to-get-in thinking, and say, okay, people might get in, and we should focus on having a threat-based awareness and a threat-based methodology in our SOC. Some of the things you might want to do: improving coverage by acting on the coverage map — pretty straightforward, just increasing your coverage. Having increased awareness of your defensive gaps is really good: if you have that on a day-to-day basis, you have more awareness of the kinds of things you should be looking for. Then verification is important, because what I've talked about is a bit of an approximation — you want to go in there with a red team.

Now, some use cases you can pursue after an assessment. The first is developing analytics — I don't know if any of you were at BSides, because my colleagues presented some of these slides already, so this might be duplicative. I've said the word "analytics" a lot. Analytics are great, but there's a spectrum between analytics and indicators — I don't talk to this slide too well — analytics tend to look for suspicious behavior, as opposed to the known-malicious things indicators match on. They have more false positives, they're broader, and you tend to have a lower quantity of them than you have of indicators. They're still really useful, but you really have to target them when you're developing them. The general recommendation is: if you want to go with an ATT&CK assessment, figure out what your coverage is, start somewhere, pick a technique — ideally from the remediation plan you have — and focus on one of those techniques. The example here is
bypassing user account control, and I think this is an old slide deck and I had more there. Anyway, when you're developing an analytic, the first thing you should do is really read the ATT&CK page and understand the technique you're trying to target. You want to look at the references for how it's used, think from an adversary perspective, and try to separate legitimate usage from malicious usage, and that's going to be a big one, because a lot of the things we have in ATT&CK are things that can also be done legitimately. Trying it out is also very important: you don't want to just throw analytics at your network, you should implement them and refine them and say, okay, here are the false positives, here are the false negatives. Then writing and iterating is important: you write your first search, then you narrow your false positives and iterate. The big hope here is that you start with your initial coverage matrix, which is great, but after developing some analytics you can increase your coverage. Instead of going in and doing another assessment from scratch, you can take your initial assessment, understand what your analytics are detecting, and update your coverage chart based on what those analytics are actually detecting. Another use case is adversary emulation. That's kind of red-team-y-ish, but I love the coverage maps and the heat maps, I think they're great, they're awesome places to start, but they paint a somewhat incomplete picture. The reality is, if I say this technique is green, I can't just walk away, clap my hands, and say no one's ever going to use this technique. You really need to go in there and test it: is this technique really one I can detect, is an adversary really not going to get away with executing it? And ATT&CK techniques have many different ways of being executed, and this is different for each technique. Some of them, like credential dumping, have lots of ways you can do credential dumping, but reading the
bash history, I guess there are different ways you can read the bash history too, but fewer. So coverage maps paint a great picture initially, but they're kind of incomplete, and the best way to move beyond a coverage map is to use adversary emulation, which you might also call threat-based red teaming. Here you actually go and execute real techniques on your network and verify whatever coverage you have: I think this is green, okay, red team, go emulate an adversary and execute that technique, and we'll see if you get away with it. And ATT&CK is great here, because it not only provides that common language to talk with the red team, it also provides structure for what the red team should be doing, because we have that mapping back to the groups. So when using ATT&CK for adversary emulation, there are four big things. First is scope: ATT&CK can help you understand what the scope of the red team exercise should look like. You might not want to just execute everything, you might want to execute only the specific things that the adversary is executing. Communication, I've mentioned that one a few times. Repetition is also important: if you're running a red team exercise and the red team just does whatever they want each time, it's going to be hard to understand how your network has changed over time and how your red team might be changing. By using ATT&CK to structure the adversary emulation, you can say, okay, here's what this exercise looked like a month ago, two months ago, three months ago, and compare how you've been doing over time. The last thing is measurement, since now you can say, I caught ten techniques and missed five; the ten I caught, those were low-hanging fruit, and those other five are the ones I really need to focus on. I'm not going to dive into the details too much, but I will say we've developed adversary emulation plans, and this kind of walks
through how you can emulate APT3. This works at the technique level, and we have a few things at the procedural level in there as well; there's a lot of cool stuff there. The big thing here is, if you do want to use adversary emulation, it's great to either use an existing emulation plan or come up with your own, to actually tell your red teamers, hey, these are the things you should be doing, and to provide structure to the actual exercise. The result looks a little bit like this, yet another view of the matrix, but you'll get some sort of coverage map which says we caught these things, we missed these things, we maybe could have caught these things better. It'll help you map back to your original coverage and say, hey, I've actually validated this gap or this strength. And once you've done that initial assessment, you don't just stop there, you keep doing it. A lot of these techniques have many different ways they can be executed, so you update your analytics: if you run the technique and you catch it, that's great, but if you run the technique again with a different implementation and you miss it, now you should update what your coverage looks like. By repeating this process you can slowly improve your coverage. I'm just about closing. I didn't talk about it too much, but CTI is a huge thing for ATT&CK. Part of what we're hoping for is more threat-informed defenses: you take in CTI, you describe things in ATT&CK, you feed that into your realistic threat model, and you also push it down into your intelligence-driven adversary emulation plans to help you structure those in a more realistic way based on your CTI, and all of that feeds into an ever-improving and well-validated defense. I've talked a lot about ATT&CK, and there's a lot more to ATT&CK than what I've been covering. I've mainly been talking about Enterprise ATT&CK, which covers Windows, Linux,
and Mac. We also have Mobile ATT&CK, another framework that's again available at attack.mitre.org, as well as PRE-ATT&CK, which does that same kind of tactic and technique enumeration for left-of-exploit behaviors, and this is just a quick view, you can see PRE-ATT&CK on the left. We have lots of resources around ATT&CK. First is that publicly available ATT&CK knowledge base, attack.mitre.org. We've also recently converted everything in ATT&CK into STIX format, so now you can work with ATT&CK dynamically. I hope someone likes that, because I don't know about anyone else, but I actually had a web scraper that was scraping the wiki, and that wasn't fun, but now it's all in STIX, so you can do all sorts of cool things. I mentioned the adversary emulation plans, that's another thing we have. We're working on more automated adversary emulation: we have a project called CALDERA which attempts to automate the adversary emulation process end to end for adversary assessments; we have an open source version available online, and I can talk at length about that too, CALDERA is awesome and it's built around ATT&CK. I mentioned CAR as well, the Cyber Analytics Repository: that's a repository of analytics that map back to ATT&CK techniques. The last one is the ATT&CK Navigator visualization tool. If anyone else has tried to visualize stuff with ATT&CK, it can be challenging with Excel and PowerPoint and matrices and other diagrams. We now have a tool that's available online, again open source, that allows you to visualize all sorts of cool stuff: we can do heat maps in all sorts of different ways, gradients, scores, hiding techniques, showing different techniques, emphasizing techniques. We recently added a feature where you can export the layer you're working on as an Excel spreadsheet, if you do want to work it into a PowerPoint, so that's all cool. And then links and contacts: there's tons of stuff, I'm not going to go through each. I'm Andy,
although I play the Queen's Gambit now. ATT&CK, lots of things ATT&CK, we're very active on Twitter. The Cyber Analytics Repository, the emulation plans, CALDERA. I didn't talk about it, but we have something else called CASCADE, which kind of automates a threat hunting process and builds from the Cyber Analytics Repository; that one's also open source, so please take a look at that. Then there's a lot of stuff on CTI at the end. And the last thing I just want to say is MITRE is awesome, we're a not-for-profit organization, and we are hiring, so if people think ATT&CK is cool... Anyway, that's it. If anybody has any questions I'd be happy to take them. I also have stickers.