We are extremely happy with the turnout here. It's pretty fantastic. The one thing we do need to note, and everyone is abiding by it right now, is pretty much the only rule: we need to keep the fire doors and egress points clear for the fire marshals. Other than that, it's awesome to see this kind of turnout for the first ever Blue Team Village. I'm Dev Noel, and we have some amazing talks lined up for you today, or all weekend really, so thank you so much. Apparently we hear that people want to start early, so we'll do a little introduction here. Thank you for coming to the Blue Team Village. I'd like to introduce Div, and she's going to talk about automating DFIR.

Hey everyone, you've made it to Friday at DEF CON and you're here, so yay for all of you. We're talking about automating DFIR. Quick disclaimer: the following presentation contains my thoughts, ideas, and opinions. They do not represent those of my current or past employers. Any events, characters, and firms depicted in the course of this presentation are purely fictitious; any similarity to actual events, characters, and firms is merely coincidental, and Deadpool is a trademark of Marvel Characters.

So, since we've got that out of the way, who am I? I'm Div. I'm an incident responder and forensicator, and I've been doing this for about six years now. Currently I'm on the blue team, aka threat response, at Uber. Prior to that I was at Autodesk, ADNT, DirecTV. My personal journey in incident response and forensics started about six years ago when I got into my master's in information assurance at Northeastern. So go Huskies. You can reach me via Twitter, I actually have a blog, and you can email me at div at divops.org. So with that, let's get started. We'll get started like all great Marvel movies do: from the end. Why am I talking about this? I have been having this conversation for a long time within the blue team community, and I thought it's about time we had it in the open.
And I had these great articles that I read just recently, and I would totally suggest that everyone reads them. All the links to these articles are also on my blog, so totally give them a look. This is why we're having this conversation: we don't need to have these conversations behind closed doors anymore. We have a Blue Team Village.

So let's see what we're going to talk about. I'm going to talk about automation and its journey as I have seen it over my six-year career, and how it's evolved. We'll do a little bit of retrospection on the major trends that I have seen come out of it. We'll talk about the areas best suited to automation, then the few areas where I feel analysis takes the forefront, and then we'll wrap up with an open discussion. I'd love to hear everyone's thoughts about it during the talk, after it, or even afterwards; I'll be available.

So let's get started. When I started six years ago, it was really cool. I walked into this awesome forensics lab and there was a big toolkit that was ready to go. It had all these cables, it had write blockers, it had everything. We could go on site, go to a data center, and image everything: hundreds and hundreds of laptops and other stuff. It was really cool. I was really excited. I was like, okay, let's do this. Slowly that got boring, and a lot of new tools came out. We could do network acquisitions on the fly. We had EnCase Enterprise, we had FTK, we had other awesome tools that came out. And if that wasn't enough, we've now gone from there to actually doing all of this in the cloud. That's a huge leap. We've gone from doing full memory acquisition, and I remember when I started, SANS was evangelizing triage as the next new thing, which was really cool. We didn't need to do full acquisitions; we could get specific artifacts, maybe just memory, registry, plists, logs, and we could figure out what was going on. So that was cool.
Slowly I realized that I didn't want to check those boxes in EnCase and FTK every single time I did a case. So we wrote live response scripts. That, I think, was where automation actually took off in the blue team world. We also created awesome analysis workstations. Everyone's used SIFT, we've used REMnux; it's all out there, already prepackaged for all of us to use. We've also come a long way from there into category-specific tooling. We have stuff for email security now. We have stuff for endpoint detection and response, EDR. We have very specific tools in the market, and that's really great. When I started, vendors wouldn't play well with each other; we had to buy a particular vendor's whole stack. I'm pretty sure everyone's gone through this pain. Now with APIs, we can actually integrate this great vendor tool with that other great vendor tool and get all our alerts together. We can orchestrate all our alerts and our processes. This has made us really efficient.

When I started off, we were building one of the most mature security operations centers, and from there we've come a great, long way to having threat intel and security engineering as their own verticals within incident response. Most importantly, our hiring strategies have also changed. I remember applying for a digital forensics analyst job that was, you know, maybe just Windows- or Mac- or Linux-specific forensics. Now we actually have jobs which are very toolset-specific. You have to be great at automation, great at scripting, good at email security, maybe just EDR as your niche, or next-gen firewalls, NGFW. Everything's out there. So we've come a long way with hiring as well, and that has helped us groom ourselves into the more specific kinds of jobs we want to do.

So in retrospect, I've actually seen three major trends come out. The first being: we've started optimizing run-of-the-mill alerts.
No longer do analysts want to work on adware or potentially-unwanted-program kinds of alerts. We don't want to work on phishing. We want to work on the cool stuff. So we've started automating all that. The second: we started bringing in third-party retainers, Mandiant, CrowdStrike, et cetera, for big data exposures, when we have targeted campaigns that are specific to us or our industry. Not only for due diligence, but also to give us extra headcount on our team during those long exposures and long hours of work. And most importantly, I've seen this major trend: we've started investing in buzzword-specific tooling. By buzzword I mean AI, machine learning, EDR, NGFW, email security, whatever is the next new thing that everyone at RSA, all the vendor booths, are talking about; we've started actually investing in that. What we haven't been investing in is the expertise to actually utilize those tools. Those really expensive tools are pretty much not going to do their full job if we don't have someone who can utilize them to the fullest.

So I'll move on to the areas best suited to automation. I actually had this great conversation with my friend in automation, and he said 90% of their customers buy orchestration and automation tools to automate phishing. So we'll take that as an example. A phishing email comes in. What does the orchestration tool do? It kicks off a bunch of workflows. Maybe it sends all the hashes, URLs, IP addresses, et cetera, to VirusTotal or whatever you want to use, automates that process, and brings all that data back. It sends the sample off to a sandbox that you've set up, maybe Cuckoo or your in-house one, for static and behavioral analysis on that specific sample you're seeing in your environment, and it brings all the IOCs, indicators of compromise, that the sandbox produces back to your environment. It determines the scope of infection: how many people actually got this email in the environment? What did they do?
Are they all on the finance team? Are they all on the marketing team? What is going on here? It also goes back and looks at: have these users actually been infected before? Do we see any historical cases or tickets for these people? What has been the infection vector before? What is the extent of the current infection, and how did we contain it? Have we seen it before? Most importantly, it also kicks off what is, to me, one of the most important workflows: it creates the ticket for all of us, so we don't have to do it manually, and it assigns it to the on-call engineer. Once the on-call engineer comes in, they have all this information at hand already to make an informed decision on what to do next. Maybe we're blocking the IOCs, maybe we're notifying the users, maybe we're kicking off whatever our containment strategies are. So this actually got automated and made it really easy for all of us. We are no longer creating manual tickets, we are no longer doing these manual lookups.

So to recap: we initiated case workflows using automation. We provided correlation across data sources, IOCs, open-source intelligence around them, even closed-source intelligence around them from our threat streams, et cetera. We mapped historical case data; we went back and checked what was happening, what we can do with this, how we can best contain it. We created this huge data pool on the back end, which we can search within seconds. And we also used this data pool to highlight anomalies. Maybe this is normal user behavior in our environment, but this particular case is an anomaly. We're no longer having analysts look through every case with all that analyst fatigue going on. We're looking at just the anomaly, and now we have more time to spend on it and to contain it. So that's really great. However, automation only goes as far as putting tried and tested playbooks into action.
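The phishing workflow recapped above can be sketched end to end. This is a hedged illustration, not any vendor's actual playbook: every function name here is a hypothetical stand-in, and the enrichment, sandbox, mailbox-search, and ticketing steps are passed in as callables that a real orchestration tool (Demisto, Phantom, etc.) would supply.

```python
import hashlib

def extract_iocs(email):
    """Pull hashes, URLs, and the sender IP out of a suspect email.
    The email dict shape is an illustrative assumption."""
    iocs = {"hashes": [], "urls": email.get("urls", []),
            "ips": [email.get("sender_ip")]}
    for attachment in email.get("attachments", []):
        iocs["hashes"].append(hashlib.sha256(attachment).hexdigest())
    return iocs

def triage(email, lookup, sandbox, search_mailboxes, open_ticket):
    """Run the flow described in the talk: enrich, detonate, scope, ticket."""
    iocs = extract_iocs(email)
    reputation = {h: lookup(h) for h in iocs["hashes"]}     # e.g. VirusTotal
    verdicts = [sandbox(a) for a in email.get("attachments", [])]
    recipients = search_mailboxes(iocs["urls"])             # scope of infection
    return open_ticket({
        "iocs": iocs, "reputation": reputation,
        "sandbox": verdicts, "affected_users": recipients,
        "assignee": "on-call",                              # hand off to on-call
    })
```

In practice each callable maps to a connector in the orchestration product; the point is that by the time the ticket reaches the on-call engineer, the enrichment and scoping are already attached.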
It's there for those mundane, boring, repetitive tasks that most analysts now don't want to do. It's there to automate phishing, to make our lives easy, to give us more time to do research and work on complex cases. That's what we really, really want to do. Something really important that I learned from my friend in automation is that only 30% of their customers actually have a process and a playbook in place to automate. And without that playbook, our orchestration and automation technologies aren't really going to do anything; they're not going to build that process for us. What we need to keep our eyes on is maturing our process. Also, another really striking statistic, and I was really alarmed by this: 90% of the customers not only want to automate phishing, they want to stop there. Let's discuss why this is happening. Let's see where I think automation is doing really, really well, and where I think we should stop with it and have analysts do their work.

So where do humans really fit in? We're great at looking at the bigger picture. We're great at asking: okay, this is one alert, where does it fit? What is this environment doing? Can I move from this system that's infected to another one? Is this actually the initial infection vector? Is this patient zero, or did we get infected from somewhere else? How would someone laterally move in this environment? How is it set up? Can it get to our corp? Can it get to our AWS? Can it get to our other cloud environments? Where is this going to take us? We're great at analyzing those net-new alerts that none of these automation tools, open-source or closed-source intel, are going to give us. We need to be able to take that malware, or that code that we're seeing, and reverse engineer it to figure out what it's doing when we're being targeted. Maybe nobody's seen this before. Maybe it's company-specific or industry-specific. We need humans to do that.
We're great at weeding out false positives versus true positives, and that is also very environment-, industry-, and company-specific. We need to figure out the intent behind these indicators of compromise. What is the attacker going after? Or is this just a script kiddie who was scanning our network and decided to get in? Maybe a new CVE got released and we're not affected because we're patched, or we just wish we were patched. Also, we're great at complex case correlation. Anyone who's been in the industry recently, or even for a long time, will know that as humans we jump jobs. We go in and try to understand the environment, the new environments that our companies are setting up, where our crown jewels are, and what the numerous attack vectors are to actually get to those crown jewels. We're great at figuring out all the ways that all these attackers and red teamers at DEF CON are going to try to get into our environment. We need humans for that. Not only do we figure out how, but we also figure out how to respond and contain. Depending on the kill chain, where is the attacker? Are we at weaponization? Are we just at reconnaissance? Have they already deployed their code? Have they started data exfiltration? Depending on where we are in the kill chain, we're going to respond accordingly, and we need a blue team to do that. We need analysts to do that: how do we respond, and what phase are we in? So that's where I think blue teamers do best. We cannot replace our analysts.

So with that, I'm going to wrap up, and I'm going to close with a few things that I would love to discuss with you. I think automation and human intervention need to be balanced. We shouldn't look at automation the way a few folks in management do; maybe you've seen that before, where they think automation is going to come in and replace their workforce. That's not what automation is here for. It's here to make your workforce better.
It's here to make your team look at all these awesome alerts that are coming out that nobody's looking at, not the run-of-the-mill alerts anymore. What are the work streams we're going to consider when we're actually considering a new job in DFIR? What are we giving back to the community? We should be looking at scripting and automation as part of our job rather than fighting it. There are a lot of analysts fighting automation because they think it's going to take our jobs away. Automation is here to help us do our jobs better. I think we should look at incident response, forensics, threat intel, and security engineering as their own verticals within the blue team, and maybe look at our career progression as moving laterally into these verticals. I was fighting this myself too. I started as an analyst, and now I've actually moved into a scripting and automation kind of role, because I want to learn more about this. I want to see how we can make all these alerts better. So move into these awesome verticals that we're setting up, and that'll make our jobs easier. And lastly, automation is not here to take our jobs away. It's here to help us grow as individuals. It's here to have our tier ones start out and get better at what they've learned in their training, to see if the tool can actually validate what they've seen before, so their hypotheses get confirmed by the tool. That's what automation is here for. It's here to help us train. It's here to make us better. With that, now that you're armed and dangerous with all this, I'm going to open up for questions or even a discussion.

Okay, the question was: where am I getting the data to automate all this? Is it through pentests and/or other sources? I would suggest using every single source that we can. The most optimal way to do this would be taking our pentest data and our vulnerability scan data, and making sure we're automating that.
We're putting it into the orchestration system so we know which systems are vulnerable, what they're vulnerable to, and what will happen when we actually see an incident come in. Not only that, we should obviously put monitoring in place on all these systems, not just say, oh, this is legacy, so it's going to go away, we're not going to automate this, we're not going to put any monitoring on it. Because that legacy system is the one that's going to create the biggest incident we can think of. It's going to create the biggest problem we're going to see. So I'd say use all the sources you can get. Work in tandem with the red team, with the vulnerability assessment folks, the pentesters on your team, product security. Get all the data that you can, not only from your tools but in-house from the other analysts and engineers that are there. Every single piece of data you can get will help. I cannot emphasize enough how many times I've worked an incident on something we've seen in the past, that had already come up in a vulnerability scan, and we'd done nothing about it. So I'd say keep all that historical case data, keep all that scan data, just to make your point: we've seen this before, we've alerted on this before, and this is what we need to make better. We need to automate that process.

So what would I pick to, I guess, demonstrate automation at its best? I've used phishing because everyone wants to automate phishing. We can use it for a lot of malware alerts too. I've seen really well-crafted playbooks; I wish I could discuss them in detail. But I think the best thing to do here is kick off all those work streams, because we don't want to go in and do all that ourselves. It's easy, it's clean, we're not sending all our data out there. We're making these searches, but we're not doing them from our corporate network, so there's no attribution.
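The point above about feeding vulnerability-scan data into the orchestration system can be sketched as a simple join: tag each incoming incident whose host already appeared in a scan, so the analyst immediately sees "we've alerted on this before." The field names here ("host", "cve") are illustrative assumptions, not any particular scanner's schema.

```python
def flag_known_vulnerable(incidents, vuln_scans):
    """Enrich incidents with prior vulnerability-scan findings for the
    same host, so known-vulnerable (often legacy) systems stand out."""
    vulnerable_hosts = {scan["host"]: scan["cve"] for scan in vuln_scans}
    for incident in incidents:
        cve = vulnerable_hosts.get(incident["host"])
        incident["known_vulnerable"] = cve is not None   # seen in a scan before
        incident["prior_finding"] = cve                  # which CVE, if any
    return incidents
```

In a real deployment the two inputs would come from the scanner's export and the ticketing system's API rather than in-memory lists, but the correlation logic is this small.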
And all of that, connecting all these APIs, having a VirusTotal API, et cetera, plugged into our work stream, is only giving us more sanitized data. Whether you use it for malware, for hunting down really sophisticated adversaries, or for phishing, do whatever your caseload is actually high on right now, and then you can automate the other processes to make everything much easier for you. I hope that helps. I'll take the question in the front first and I'll come back to you guys.

Okay, so how do I make sure I'm not missing things? That's the question. That's a great one, and that's where your analysts come in. We still have to look at all the data that the automation technology is going to give us, all the IOCs that are coming in. IOCs are going to degrade. We're going to have to assign a confidence rating to them. We're going to have to say, okay, maybe this domain was infected before and now it's not anymore, so we need to degrade its confidence. These things change. TTPs are going to change. We're going to have to track our attackers; that's where threat intel comes into place. We need to see what the confidence rating, severity rating, and risk rating of this particular IOC or alert is, in tandem as well as individually. And yes, a lot of regression testing goes into it. A lot of ticket analysis after the fact goes into it. A lot of weeding out false positives versus true positives on the current case goes into it. So I feel like for every ticket we get, we need to ask: is this data actually enough, or do we need to enrich it more?

Okay, how do I balance doing my normal job with learning all these new tools? That's a tricky one. It takes a while, and I've learned this the hard way. I've been completely overwhelmed trying to do everything on my plate. So I think take it slow. Start with doing your workflow.
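Coming back to the point about IOCs degrading: one common way to sketch it is exponential decay of a confidence score, where the score halves every half-life since the indicator was last observed. This is a hedged model, not any threat-intel platform's actual formula, and `half_life_days` is an assumed tuning knob.

```python
from datetime import datetime, timedelta

def decayed_confidence(initial, last_seen, now, half_life_days=30.0):
    """Halve an IOC's confidence for every half_life_days since it was
    last seen, so stale indicators automatically score lower."""
    age_days = (now - last_seen).total_seconds() / 86400.0
    return initial * 0.5 ** (age_days / half_life_days)
```

An analyst or threat-intel feed can still override the score (the once-infected, now-clean domain from the answer above), but decay gives a sane default ordering when nobody has re-reviewed the indicator.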
Build out a really strong team is what I would say. One person is not an incident response team. We need a whole team of experts; we need everyone in all the four verticals that I'm talking about, so we can actually concentrate on each process and each work stream that we're dealing with. And in your downtime, I'd say take at least four hours, or like a Friday, to learn a new tool. I actually listen to the Forensic Lunch by David Cowen; if anyone's not familiar with it, you should totally check it out. That's a really good way. There are Twitter feeds and a lot of articles out there. Keep yourself abreast, go to training. I'm not going to say go to all the training out there, but pick and choose what you're interested in and kind of steer your career into that. It takes time, it takes effort, it takes a long time to learn scripting and all those things, but it's totally worth it. Yes, please go ahead. I'll come back to you.

Great question. So what was the evaluation process for selecting an orchestration tool? That is a very complex question. I've done this a couple of times over now, and each place that I've been at has had different selection criteria. It depends on what you're trying to achieve, how big your team is, and what the tool does. Does it orchestrate with all the current technology you have? Maybe you have a particular firewall vendor or a particular EDR solution you want it to integrate with. And how does it do when your tickets come in with all that? I would say do a thorough POC. I'm not going to say that I like this tool over that tool, but a certain set of tools fits a certain environment, and a different kind of tool fits a different, more mature environment. The tools are out there, and vendors are really, really responsive these days. I've worked with Demisto, I've worked with Phantom, I've worked with a bunch of other vendors; they really want your feedback, they want to help us make this better.
When I was working with Demisto, I asked them to help me build out this forensics process because they didn't have that, and in almost six months they turned around and created this forensics piece in Demisto. So they're really responsive. Pick and choose your battles, try to see the false positives, see how your threat intel feeds feed into all this, and how the orchestration tool presents everything to you. Are you comfortable with it? If not, maybe change everything around to how you would want to see it on the dashboard. How is it going to help you? It's all there to help you. The tool is not going to do anything on its own; it's going to do whatever you want it to do. Question in front? Sure.

So, artificial intelligence: how do you train and retrain the models, and is there a good example of this? I don't really have a good example; I'm going to be honest about that. I've seen it being used really well when you're looking through IOCs, indicators of compromise, how they degrade from one place to the other, and how the model understands those anomalies. So I've seen it do really well at saying, oh, we've seen this user behavior before. Maybe this is tax season, and this is an anomaly because we trained our users and nobody clicked on it, but this person did. Or this person's on PTO, and we're still seeing VPN connections coming from their computer. But looking at those models to actually employ in your work stream is a little difficult. A lot of tools say that you can use them really well; I haven't really seen it effectively used yet. If anyone has, I would love to talk to you, but yeah, I haven't really seen it used amazingly well. I do know that anomaly detection is one of the major examples of using artificial intelligence and machine learning. Whether I've seen it optimized to the fullest to help me, I have my thoughts on that. Any more questions? I'll come back. Yes, go ahead.
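The PTO example in that answer doesn't even need a trained model; it can be sketched as a plain rule, which is a useful baseline before reaching for machine learning. The record shapes (a login's "user" and "date", a per-user set of PTO dates) are illustrative assumptions.

```python
def anomalous_vpn_logins(vpn_logins, pto_calendar):
    """Flag VPN sessions from users who are recorded as out on PTO that
    day: a simple rule-based stand-in for an anomaly-detection model."""
    return [login for login in vpn_logins
            if login["date"] in pto_calendar.get(login["user"], set())]
```

If a simple rule like this catches the case, it is also far easier to explain in a ticket than a model score, which matters when an analyst has to act on it.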
Okay, have I been asked to turn off automation, and how have I dealt with that? I've been fortunate enough not to have faced that. There were conservative ideas about automation when it started, I guess; some things automation does well, and it took me a long while to understand where to go with it. Initially I thought we could do all this on our own. Why do we need automation? But I don't really like creating tickets. I don't want to go and spend 15 minutes looking all these things up. I don't want to do that; I want it given to me. So to help you, if you really want to make that case: the way I made it was by looking at the response time, from when the ticket came in to when we actually resolved it. If that can be significantly lowered, that is your use case for automation. I hope that helped. Go ahead.

Okay. So the question is, how do I see it working? Is it easier to take an analyst and train them to be an engineer, or to take an engineer and train them to be an analyst? I think those two things are completely different. People who have been in the industry for a while and have done analysis have built this really great mindset, and that is what we contribute to the industry. There are engineers out there who are really passionate about automation and different technologies, and about the analysis behind the alerts; they just want to make things easier for the analysts. So if you're actually considering moving from one of these roles to the other, I'm not saying it's not possible, but it'll take a lot of hard work. If you're an engineer who wants to build the analyst mentality: take a bit of training, try to build that mentality, try to get into the environment, work with your analysts and see what kinds of alerts are coming in, how their thought process works, how they know what kinds of artifacts they're looking for, and how they actually devise these containment and response strategies.
And if you're an analyst looking to get into, I guess, engineering or scripting kinds of roles, there's a really great book, the Python Digital Forensics Cookbook, and I think that'll help you get started. Python is doing really well in the DFIR community, so I'd say start there. Once you get into it, there are a lot of other resources you can move on to. I hope that helps. Nothing is impossible; you can go from analyst to engineer or engineer to analyst. Any more questions?