I am super excited to present Mathieu Saulnier. He is a security enthusiast who has held numerous positions as a consultant within several of Quebec's largest institutions. For the last six years, he has specialized in blue teaming, content creation, threat hunting, and mentorship. He is a senior security architect at Bell Canada, one of Canada's largest carriers. In the last 12 months he has given talks at the GoSec conference in Montreal, Geekfest in Toronto, and BSides Charm in Baltimore, and he has been accepted to speak at BSides Las Vegas in August. Please give him a hand. Welcome.

Thank you. Welcome to this talk, called The SOC Counter ATT&CK. My name is Mathieu Saulnier. You can find me on Twitter at ScoobyMTL. A little bit about me: I have worked in infosec since 2000, and I have been a blue team member for the last seven years. I am a senior security architect at Bell Canada, where my roles are adversary detection team lead and threat hunting team lead. I have also been a DEF CON Blue Team Village volunteer since its creation last year. You might have noticed that I am French-Canadian, so I will sometimes not pronounce the H at the beginning of words or the S at the end of words. I am also a big Star Wars fan, so if you have seen any of my other presentations, you know this one is also Star Wars themed. Even the title is like a bad double Google translation: from The Empire Strikes Back, to L'Empire contre-attaque, to The SOC Counter ATT&CK.

A little bit about the agenda for today: we are going to go over an overview of ATT&CK, dive into the ATT&CK Navigator, explain what a preliminary assessment using ATT&CK is, and then talk about ATT&CK and some open source projects that can help you get started.

As I mentioned, first the overview. What is ATT&CK? ATT&CK stands for Adversarial Tactics, Techniques, and Common Knowledge. It is a knowledge base of adversary behaviors, or TTPs. TTP stands for tactics, techniques, and procedures. A tactic is the "why": the adversary's tactical objective, the reason for performing an action, such as credential access or persisting on your system. A technique is the "how": how the adversary achieves that tactical objective. For example, they will dump credentials or they will brute force. A procedure is the exact way of using those techniques, for example using Mimikatz to read LSASS, or an exact command line such as cat /etc/passwd on Linux to access the password file. ATT&CK focuses on real attacks, so it does not contain every attack under the sun. It is not a bible of everything that exists; it focuses on what has been seen in the wild being used by adversaries. It also provides a common language for red teamers, blue teamers, the community, and vendors, so when they talk about a technique, they all know exactly what they are talking about. And finally, it is open source, community driven, and actively maintained: there are quarterly releases of new content. If you want to learn more about what ATT&CK is, there is an excellent talk from last year's BSides Las Vegas called ATT&CKing the Status Quo, by Katie Nickels and John Wunder.

The Pyramid of Pain. This is a simple diagram that shows the relationship between the types of indicators you might use to detect adversary activities and how much pain it will cause adversaries if you deny them those indicators. You don't need to take pictures; the last slide will contain a link where you can download the whole presentation.
As you can see, we start at the bottom and go up to domain names. Those are fairly simple for defenders to detect, but they are also very simple for adversaries to modify. When we get to network artifacts or host artifacts, such as a user agent string or a function name in a program or a script, those become annoying for adversaries to change. When we start targeting their tools, it becomes challenging. For example, if the adversary is using a Meterpreter shell from Metasploit and you have 100% detection on that, you will force them to move to another framework such as Cobalt Strike or Empire. And if you focus on detecting TTPs, it becomes tough. If you deny them everything that does pass-the-hash for lateral movement, for example, they need to find a new technique in order to move laterally. Same thing if you monitor your registry properly: if you can watch all the run keys in your environment, it becomes tough for them and they have to choose a new method of persistence. This is exactly what ATT&CK aims for: the highest level of the Pyramid of Pain. If you want to learn more about the Pyramid of Pain, this is the link to the original post from 2013 by David Bianco.

There are multiple ATT&CK matrices: there is PRE-ATT&CK, there is enterprise ATT&CK, and there is mobile ATT&CK, which focuses on iOS and Android devices. In this talk we will focus on enterprise. This is a representation of PRE-ATT&CK and the ATT&CK framework, where the column in blue is what we call initial access and the two in red are recon and weaponize, which are in PRE-ATT&CK. As I mentioned, we are going to focus on enterprise ATT&CK, the second graph in this presentation. Here is a representation of PRE-ATT&CK and ATT&CK mapped onto the Lockheed Martin Cyber Kill Chain. You can see that PRE-ATT&CK focuses on recon and weaponize, as I mentioned, whereas enterprise focuses on deliver, exploit, control, execute, and maintain.

A little bit about the history of ATT&CK. It was created in September 2013 out of a need to systematically categorize adversary behaviors. It used to enumerate and categorize adversary TTPs only against Windows; now it also covers macOS and Linux, plus iOS and Android devices in the mobile matrix. It was released to the public in May 2015, when it contained nine tactics and 96 techniques. As of April 30th, there are 12 tactics and 244 techniques. When I wrote these slides, there were 11 tactics and 233 techniques, so you can see that it evolves rapidly. An example of that: when I first started looking at the ATT&CK framework from MITRE, Kerberoasting was not in it. It started being used by attackers, so it was added to the matrix. ATT&CK also has its own dedicated conference, called ATT&CKcon. It ran for two days in October 2018, it was live streamed on YouTube, and you can still go to this website and watch the 20 presentations that were given. A little bit of a scoop: there will be another edition this year, October 28th to October 30th. It should still be streamed on YouTube, so you can book those dates and train from the comfort of your couch or your office, depending on what you prefer.

Now we are going to talk about the ATT&CK Navigator. Here you have the ATT&CK Navigator. Across the top you can see all the tactics, and below them you have all the techniques. There are 244 of them, as I mentioned, so you are not expected to know them all.
You can see, when I move my mouse, that some techniques appear under more than one tactic. For example, if you do not know what a BITS job is and you want some information, you can right-click and click on View Technique. I am going to switch to the other tab, because I am not connected to the Internet; I don't trust any of you. When you go there, you will see some background information on the technique. You will see the ID, which is a T followed by four digits; every other project that refers to MITRE ATT&CK will use this ID. Then you can see which tactics the technique belongs to, the platforms it applies to, the permissions that are required in order to execute the technique, and, very important for defenders, the data sources that you can use to actually detect it. If we go a little bit further down in the example, you will see a list of threat actors and malware families that have been known to use that technique. Then you will have some information about mitigation, and in the detection section you will often see the exact procedures that can be used to leverage the technique, or to detect it.

Let's get back to the Navigator. Another interesting thing about the Navigator, if you are into threat modeling and you want to follow threat actors, is that you can use the drop-down list here. For example, if you know that APT28 targets your enterprise, you can select all of its techniques and give them a color using the background color; you can say, for example, light blue is APT28. If you are more into other things, maybe you know you have been infected by Agent Tesla and you want to make sure that you detect it: you can select it and choose another color, maybe yellow. You can save those layers here in JSON format and load them easily by clicking on the little plus sign, opening an existing layer, and uploading it from local or from a URL. Yay, no internet. Anyway, I was almost done. There is a legend at the bottom that is fairly important to use, because in three months you might not remember what the colors mean. So use the legend.

Back to the presentation. How do we get started with those 244 techniques? I recommend starting with a preliminary assessment of all of them. In order to do that, you need to select some basic questions that apply to your enterprise. Here is a list of potential questions that may or may not apply, so you need to choose. For example, you can start with logs: no logs, no detection. What is the complexity of building the detection for this technique? Are we talking about monitoring one file or one registry key, or are we talking about something that is going to generate lots of false positives and might need machine learning and a lot of whitelisting? That is one thing to consider. Then there is severity: is this high, medium, or low impact for my enterprise? What is the probability of being hit by that technique? If it focuses on macOS and you only have a few of those, the probability is very low. Do you have any dependencies on other teams in your enterprise? This might not apply to small companies, but in large companies, where you might have a Linux admin, a Windows admin, an IIS admin, an Apache admin, an Nginx admin, a MySQL admin, an Oracle admin, and so on, it can get very complicated to get all of those nice people around the same table in order to collect all the logs you need to build a detection. So that is one thing to consider. Targets.
Again, if you are a small enterprise, maybe this does not apply and your scope is everything. If you are in a large enterprise, maybe you want to focus on databases, for example, or web servers, or workstations. So what does the technique target, and is that my priority right now? And finally, is there any open source project that can help me detect this technique?

About data sources: of course, the number of data sources that you have is very important. You need to know which data sources you are currently collecting and storing in your SIEM, or in any other application where you can create your rules and detections, and you need to focus on getting the ones that you don't have. For example, Sysmon can be used to detect roughly 70% of the techniques in the matrix. So if you do not have Sysmon, it is probably one of the first things that you should go after: installing Sysmon and collecting its logs. And if you are not a Sysmon person, there are other, similar tools; we will talk about one of them later. You also need to plan your log retention and return on investment. What I mean by that is that your incident response team will need logs, and they might not need every log source for the same amount of time. For example, they might want to retain firewall logs for 30 days, whereas they might want to retain Sysmon logs for 90 days, 6 months, or a year. So you need to know how much it costs your enterprise to store those logs in hot, warm, and cold storage: what is the price per gigabyte for your organization? Then you can plan your retention around that. If you want another way to get started with data sources, and your enterprise is fairly mature, there was an excellent talk called Quantify Your Hunt: Not Your Parents' Red Teaming, by Devon Kerr and Roberto Rodriguez, presented both at BSides Charm and at the SANS Threat Hunting Summit. Those are the two links for those presentations.

Once you have decided on your questions, you need to assign points or weights to each question that you have selected, so you can score each TTP and assign it a color in the Navigator or in whatever other way you track things. Then, when you have your scores, you can use a script to apply those colors in your JSON layer file automatically (see the sketch below). You can do it manually, but doing 244 techniques by hand is quite tedious. This is an example of the questions you might select, with their weights: how many log sources do I have for this technique, with a 35% weight; what is the probability of being targeted by this, maybe 30%; does this target Linux servers, because my DMZ is full of Linux servers and that is what my corporation wants me to focus on; and is there any open source project that can help me speed up the development of the detection, with a 10% weight. Then, with the points, you can assign a color and a development priority. This is just an example; you can use anything else.

One thing I forgot to mention when I was doing the Navigator demo is that you can have as many layers as you want, so you don't need to cram everything into one layer. You can have one layer for everything that you want to track and show to your management. So this can be another view: when you are done with the preliminary assessment, it should look something like this, with each technique colored. If you don't have a legend, as I mentioned, it is very hard to tell what each color means.
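To make the scoring and coloring step above concrete, here is a minimal sketch in Python of what such a script could look like. It is only an illustration under assumptions: the question weights, the per-technique answers, and the color thresholds are made up, and the layer fields (name, domain, techniques, techniqueID, color, legendItems) reflect the ATT&CK Navigator layer JSON format as generally documented, so check them against the Navigator version you actually use.

```python
import json

# Hypothetical weights for the preliminary-assessment questions (examples only).
WEIGHTS = {"log_sources": 0.35, "probability": 0.30, "targets_priority": 0.25, "open_source": 0.10}

# Hypothetical answers per technique, each normalized to 0..1.
# Technique IDs use the pre-sub-technique numbering from the era of this talk.
answers = {
    "T1086": {"log_sources": 1.0, "probability": 1.0, "targets_priority": 0.5, "open_source": 1.0},  # PowerShell
    "T1003": {"log_sources": 0.5, "probability": 0.8, "targets_priority": 1.0, "open_source": 1.0},  # Credential Dumping
    "T1200": {"log_sources": 0.0, "probability": 0.2, "targets_priority": 0.0, "open_source": 0.0},  # Hardware Additions
}

def score(technique_answers):
    """Weighted sum of the question answers, between 0 and 1."""
    return sum(WEIGHTS[q] * v for q, v in technique_answers.items())

def color_for(s):
    """Map a score to a priority color; thresholds and colors are arbitrary examples."""
    if s >= 0.7:
        return "#8ec843"   # green
    if s >= 0.4:
        return "#ffe766"   # yellow
    return "#ff6666"       # red

layer = {
    "name": "Preliminary assessment Q1",
    "versions": {"layer": "4.4"},      # adjust to the layer format of your Navigator release
    "domain": "enterprise-attack",     # older Navigator versions used "mitre-enterprise"
    "techniques": [
        {"techniqueID": tid, "score": round(score(a), 2), "color": color_for(score(a))}
        for tid, a in answers.items()
    ],
    # Always include a legend, or in three months nobody will remember what the colors mean.
    "legendItems": [
        {"label": "High priority / good data", "color": "#8ec843"},
        {"label": "Partial", "color": "#ffe766"},
        {"label": "Missing data sources", "color": "#ff6666"},
    ],
}

with open("preliminary_assessment.json", "w") as f:
    json.dump(layer, f, indent=2)
```

A file produced this way can then be loaded back into the Navigator through the "open existing layer, upload from local" option shown in the demo above.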
Green might mean that you have everything and that this is your priority, or the other way around: that green is low priority and red is high priority. With no legend, we have no idea. Then you can build another layer and track your coverage and progress. Here, for example, you see that five techniques are selected. We have Hardware Additions in orange, which for us might mean that we only have dashboards or hunting capabilities, and the green ones might be the ones where we have some detection. You see there is a light green, which might mean we have only one rule, and a darker green, which might mean we have more than one rule. We have one in red, which might mean that we only have the logs, but nothing else. Again, you choose your colors, but make sure to make a legend. This could be our Q2 version: now we see that there are 15 techniques that we cover. Very visual. Your management will understand that; they will understand where their money goes. You can see here that Hardware Additions went from orange to green, because maybe, with all the hunting that we did, we now have a detection rule that is sent to our SOC. And you can see that we have improved our detection.

When you have layers like this, it is extremely easy to answer management questions. On March 19, Red Canary released their Threat Detection Report. This is the top 10 techniques that they have seen being used by adversaries in the wild. With the layer, you can answer your management in five minutes about your coverage: we knew in five minutes that we could detect 90% of those techniques. The next sprint, we started working on the one that we couldn't, and now we have 100% visibility on the top 10. The report goes up to the top 40, but I think the top 10 is the most important part. And as you can see, PowerShell is the most widely used technique.

After that, you need to know your enemies. How do you do that? You do that through threat modeling. There is a workshop about threat modeling happening at the same time as this talk, but thank you for coming to my talk instead. Threat modeling: you wouldn't attack a retailer for the same reasons that you would attack a media company, for example. If the threat actor is after a retailer, they might be after credit cards, they might want to reroute some goods to an address they control, or they might want to change prices before a shipment goes out. If a threat actor is attacking a media company, they might be after a journalist's sources, maybe to pay them to change their mind, or to kill them, depending on the country you live in. They might want to change the meaning of an article that has already been published, or they might want to publish their propaganda on a credible media outlet. Once you know which vertical you are operating in, you can start looking for threat actors that are actively targeting your vertical. When you know which threat actors, you can start following threat researchers who focus on those threat actors, and that will give you a list of the TTPs those actors are using, which can be used, again, to prioritize which TTPs you want to detect first.

Now, metrics and KPIs. If attackers think in graphs, management thinks in metrics. There are good metrics and there are bad metrics, or pitfalls. Good metrics are a little bit like what we saw earlier: showing monthly progression, showing your coverage, prioritizing your data source ingestion, and distinguishing alerting versus hunting.
Alerting, for me, is when you send a ticket to your SOC that someone will investigate. Hunting might be a dashboard, a query that you run ad hoc or on a regular basis, or a report that you get where you pick out the entries that look suspicious to you. You might also want to represent whether you have a single detection or multiple detections for a technique, either by shades of green or by your score.

Then there are the bad metrics, or pitfalls. One of them is assuming that all TTPs are equal. As we saw in the slide from the Red Canary report, PowerShell is extremely broad: you can do 75% of the techniques using PowerShell. Whereas Bash History, for example, is only one file that you need to monitor in every user's home directory. So they are not equal at all. Another is falling for coverage versus depth. If your team is scored only on the number of TTPs they have a detection for, they might be enticed to build very weak detections for each technique, and that will leave you vulnerable to a lot of bypasses. Again, taking PowerShell as an example: if you only build detection for file downloads, then you are still vulnerable to reflective DLL loading, to obfuscation, and so on. There are many things you can miss. It is also important to remember that some of the TTPs in the matrix are not meant for alerting; they are more for context when you do incident response. Think about file deletion or file creation: if you make a detection rule for that, turn it into an alert, and send it to your SOC, they won't be happy. They will get flooded, they will probably hate MITRE ATT&CK forever, and they will hate you. Another pitfall is not counting the non-ATT&CK detections that you have built. Again, ATT&CK is not a bible; it does not contain every technique that exists. For example, persistence through SSH authorized_keys is not in the matrix, but it is something you might consider building detection for, and reporting that you actually do detect that technique. It might be added in a later version. Another pitfall is converting every rule from an open source project into a rule or an alert. This will also flood your SOC; again, they will lose faith in the ATT&CK framework, they will lose faith in your process, and they will lose faith in you. We will talk about that a little bit more in a few slides. If you want to learn more about the pitfalls of ATT&CK, there is a good talk that was presented at ATT&CKcon last year called Five Ways to Screw Up Your Security Program with ATT&CK.

Now we are going to talk about ATT&CK and open source. The first project is Sigma, from Florian Roth, which is a generic signature format for SIEM systems. Rules are YAML files, and there is a converter you can use if you have Elastic, Splunk, ArcSight, QRadar, or many other SIEMs (a rough sketch of what such a rule looks like is shown a little further below). This is the coverage that the project has; as you can see, there is a lot. Again, if you convert all of them into alerts: not good. A better approach is to go by batch. Select a number of techniques that you can actually soak for a week to a month; maybe you go with five, I don't know the right number, but let's say five. When you are done and you have high confidence that they don't generate too many false positives, you put them into production and focus on the next five. Then there is Sysmon Modular, from Olaf Hartong, which is a Sysmon configuration file where all the detections are mapped to MITRE ATT&CK. This is the coverage of Sysmon Modular; as you can see, Sysmon covers a lot of the techniques.
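Since no actual rule is shown on stage, here is a rough sketch of what a Sigma-style rule could look like, written as a Python dict and dumped to YAML to stay in one language with the earlier example. The top-level field names (title, tags, logsource, detection, condition, level) follow the publicly documented Sigma rule format; the specific run-key detection, the Sysmon event ID mapping, and the attack.persistence / attack.t1060 tags are illustrative values chosen for this sketch, not a rule copied from the Sigma repository.

```python
import yaml  # PyYAML

# Illustrative Sigma-style rule: flag values written under the classic
# ...\CurrentVersion\Run registry keys (Registry Run Keys persistence, T1060
# in the pre-sub-technique ATT&CK numbering used at the time of this talk).
rule = {
    "title": "Registry Run Key Persistence (illustrative)",
    "status": "experimental",
    "description": "New value written under a CurrentVersion\\Run key.",
    "tags": ["attack.persistence", "attack.t1060"],
    "logsource": {"product": "windows", "service": "sysmon"},
    "detection": {
        "selection": {
            "EventID": 13,  # Sysmon: registry value set
            "TargetObject|contains": "\\CurrentVersion\\Run",
        },
        "condition": "selection",
    },
    "falsepositives": ["Legitimate software registering autoruns at install time"],
    "level": "medium",
}

print(yaml.safe_dump(rule, sort_keys=False))
```

From a YAML file shaped like this, the converter mentioned above can generate the query syntax for whichever SIEM you run, which is what makes the batch-by-batch rollout he describes practical.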
If you are more of an osquery type of person, do not worry: Filippo Mottini has your back with his osquery-to-ATT&CK mapping project. This is the list of techniques that it covers. Olaf Hartong strikes again with his Splunk app called ThreatHunting. Remember the name: threat hunting, not alerting, which is very different. And if you are not using Splunk, do not worry. It is very simple to open any of the files and locate the search line; you will see that this particular search looks for Sysmon Event ID 1 and a specific process name or command line. You can convert that to any SIEM you have, as long as you have Sysmon in your logs, and build a detection using what is in the search. This is the coverage of that project.

Now, when you have built some detections, maybe you want to test them. Red Canary built the Atomic Red Team project, which is a set of small and highly portable detection tests mapped to MITRE ATT&CK. You can also chain them together into what they call chain reactions. It is a very well maintained project, made of YAML files that you can use. They have a Slack channel where you can discuss detection, and they also do Atomic Friday almost every month, a webinar at lunchtime. This is the coverage, and I want to thank Casey Smith and Tony Lambert for providing it to me on Monday, when I told them that their old layer was not working. Thank you, guys. There is another project similar to Atomic Red Team called Red Team Automation, from Endgame. It is almost the same thing, but it is built in Python and is slightly less maintained. Commercial EDR vendors have also started using MITRE ATT&CK to showcase their detections. Here is the list; it is not an endorsement on my part or my company's in any sense. These are the vendors that were evaluated in Round 1, and these are the ones being evaluated in Round 2.

In conclusion, here are the key takeaways for you guys and girls. ATT&CK addresses the highest level of the Pyramid of Pain. You should start with a preliminary assessment of all the techniques; this way you will know the matrix and you will have a clear picture of where you should start. To do a successful preliminary assessment, you need to choose the right questions for your organization, define scores that make sense for you, and track your progress somewhere, such as the ATT&CK Navigator, so you can show your management what you have been doing. Try to leverage open source projects to improve your coverage and speed up your detection. Do not confuse alerting with hunting, reporting, and forensics. And you can use ATT&CK to select vendors that cover techniques you cannot detect with what you currently have. If you want another idea of how to start, there was an excellent talk at the Blue Team Village last year called Stop, Drop, and Assess Your SOC, by Andy Applebaum, who works for MITRE. This is the link to his presentation.

Finally, some thank yous. I want to thank Grifter, Piratek, and Daniel Bohannon for inspiring me to get on stage and talk to you. Closer to home, I want to thank LD, aka Lauren, a good friend of mine; he has also been a great inspiration and always delivers very good talks. And Olivier Bilodeau as well; I think he is a great guy in the community, and people like him make us stronger as a community. I also want to thank the MITRE Corporation for releasing ATT&CK as an open source project and making the community a safer place.
And finally, I want to thank Bell, my employer, for giving me the time to prepare and come here to present, and for sending me to different cities to present. As promised, here is the last slide: you have the link to the deck if you are interested. I think I have about one minute left, so if you have questions, go ahead; otherwise I will be around today and tomorrow and you can come see me. I will be happy to talk with you.

Thank you so much, Mathieu. Big round of applause, please.