Hi everyone, my name is Dan Gunter, and today I'm here to present "Consider the Data Source: a journey through an industrial attack." A quick disclaimer up front: there are a few companies, brands, and trademarks mentioned in here; the views are completely my own, and all information is used at your own risk.

Every information security journey really starts with preparation, and when you think about it, preparation is the calm before the storm, before you're dealing with an event or an intrusion. During preparation there are tons of different controls you can put in place: things like host and network monitoring and data centralization, the standard things we're used to. There's offensive analysis and adversary simulation: red teaming, penetration testing, and all the similar services there. And there's the process side too, because security is people, process, and technology. All of this makes up the range of activities you can do during preparation. But at some point, stuff is going to happen, and when stuff does happen, preparation is still worth it. You have to be ready, though: there's a lot you can plan for, there are things you can model, there are approaches and scenarios you can run through, but there's never certainty that they'll cover the full scope of what you think they will. The quote on the slide is from a while back, but it's held true whether the fight is digital or physical: you can plan for that first encounter, but once you get past it there will always be unknowns and details, and those will catch you by surprise.
And so, as you're planning for that first iteration, it's easy to say: okay, I'm going to put all of my eggs in this basket, really focus on this one control and just stick with that, because whatever I do to prepare doesn't matter. There's a risk, though, to not constantly improving. Yes, there are risks and costs to action — you're sometimes risking your reputation, and you could be risking your job if you get breached and someone gets fired over it — but in the long run they are far less than the risks of not doing anything, particularly in our space. There have been industrial intrusions for a while; on the screen is really just a handful of even the ones we know about. There are a lot of intrusions we likely don't know about, either because companies kept them quiet for reputation, or honestly because they were never detected.

So stuff can happen, and sometimes literal stuff can happen. Here's an example that people who have been around the community for a while talk about, but others don't know: back in 2000, in Maroochy Shire, Australia, there was an insider threat. This person wasn't happy because he wasn't hired for a position; there was some employment drama. With his technical knowledge and abilities, he ended up conducting a series of wireless RF attacks against the public sewage system, and it ended up spilling sewage in a lot of places.
He ended up in kind of a knife fight between him and the operators on site, and we'll talk a little later about some of the outcomes of that. What happened is that he had the equipment and the knowledge to access this system remotely: he was driving up to different sites and connecting wirelessly. If you read through the court documents, there was some information he had to know to actually connect, so his inside knowledge did help. What's interesting is the knife-fight portion: over this network he was using the legitimate applications to generate the traffic, so it would appear that the traffic was coming from site 14. He configured his software to pretend to be site 14 and mess with the pumps. An operator noticed, hey, this isn't right, someone might be messing with us, so they changed the real station to site 3 — and then noticed that the actual plant, now site 3, was trying to do its job while this rogue site 14 was still off in the corner. What's interesting there is that this is really the first known example of an operator actually detecting these events.

This is all to say that stuff literally happens in this domain, and it has for a while: 2000 was 21 years ago, do the simple math. Response is inevitable, and I'm not saying that to sound hopeless. I'm saying that in today's world there's not an "if" — you don't control the if, and you don't control the when. What you can control, if you're on the defense side, is how fast you detect and how fast you respond. So we're going to focus on those two areas as we talk through the attack. There are many models for modeling attacks.
One of the popular ones — one of the hot ones today, and the one we're going to use — is MITRE ATT&CK. I'm sure most of you are familiar with it; for those who aren't, MITRE ATT&CK breaks down adversary tactics and techniques into a series of buckets. MITRE has taken past events, modeled them out, and clustered the activity and intent across the whole attack cycle. Definitely Google it and look it up — it's a pretty big knowledge base you can use.

What recently happened, specific to the industrial community, was the Triton evaluation. For those not familiar with TRISIS and Triton: there was a breach into a Middle Eastern oil and gas site, targeting a safety system, a year or two back, maybe three. MITRE runs these ATT&CK Evaluations where they bring in different vendor products and run the same scenario through all of them. There's technically no winner; what they do is publish the results to show, here's this central MITRE-created scenario, and here's what each of the tools saw. They don't declare a winner — it's up to people to decide how each product did. What MITRE did here is emulate the behaviors known from that intrusion into the Middle Eastern oil and gas site. And they did this with real hardware: they set up the control system — it was a Rockwell control system in a burner management environment; the graphic you see there is from the MITRE site — and they set up the control system components there.
They did simulate some of the physical process. Obviously they weren't producing barrels of oil, I don't think, but they had the real software connected to a simulator to get as accurate as you can within a reasonable budget while still exercising the security tools. What you see there is that bingo card — again, ATT&CK being that collection of tactics and techniques. Green is what they exercised, and we'll talk through several of these shortly. As you can see, they picked a portion of the ATT&CK tactics and techniques to run against this environment.

So let's get into the scenario they emulated, which again was based on that TRISIS flow; the link at the bottom of the slide walks through how they built it. We're going to start from the attacker point of view. As a defender you might catch the initial compromise, the route in, or chances are you'll first see something later on, but we're going with the attacker perspective just because it's more chronological. One of the early things they emulated was an attacker using valid credentials that they stole — maybe they spearphished them, maybe they got them through a watering hole or other means. With those, the attacker jumped from the enterprise side into the industrial side of the plant using RDP over TCP 3389. Pretty common: it's still enabled at a lot of sites, and a lot of people still allow it through the firewall, even though it isn't standard operating behavior. What they did with this access was a program upload and saving some files, and we'll talk about that a bit more.
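As a minimal illustration of catching that enterprise-to-ICS RDP hop in flow data — the zone CIDRs and flow tuples here are invented for the sketch, not from the eval:

```python
# Hypothetical sketch: flag RDP (TCP/3389) flows that cross from the
# enterprise zone into the ICS zone. The zone CIDRs are invented examples.
import ipaddress

ENTERPRISE = ipaddress.ip_network("10.1.0.0/16")
ICS = ipaddress.ip_network("10.2.0.0/16")

def crosses_into_ics(flow):
    """flow: (src_ip, dst_ip, dst_port, proto) five-tuple summary."""
    src, dst, dport, proto = flow
    return (proto == "tcp" and dport == 3389
            and ipaddress.ip_address(src) in ENTERPRISE
            and ipaddress.ip_address(dst) in ICS)

flows = [
    ("10.1.5.20", "10.2.0.9", 3389, "tcp"),   # corp workstation -> ICS host
    ("10.2.0.9", "10.2.0.44", 44818, "tcp"),  # normal in-zone EtherNet/IP
]
alerts = [f for f in flows if crosses_into_ics(f)]
```

A plain port-and-zone check like this is exactly the kind of baseline the next section pokes holes in: it tells you a hop happened, not whether it was legitimate.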
One of the first things to point out as we look at this, and at the importance of data: right out of the gate you might say, hey, going from one network trust zone to another — maybe I can just catch this with baselines, right? It's RDP. And that's true, baselines can be helpful, but it depends on the thresholds. Do you get a lot of RDP from box A to box B? If box A has no reason to talk to box B, then yeah, that's a really good baseline. But if box A is an engineering station on the corporate network and box B is actually the jump server inside, it's harder to just say, matter of fact, "an engineer talked to a jump host, okay, cool." Was it done out of business hours? Was it done when that engineer was off shift? You have to consider what actually goes into that baseline and how you build it. Time of day is actually really useful, and if you can find a way to get your shift schedule in there, you can get better baselines. What I mean by getting your shift schedule in is automating it: if you try to do it by hand you're going to have problems scaling, but if you can improve your baselines by importing a CSV of your operator schedules, you can do better. Baselines can be futile otherwise, because if the attacker is acting within that mathematical baseline, within the logic of the baseline, you're going to struggle. And another important thing to say here — we'll see this again later — is that host sources can provide details that network sources can't see. That's a really important takeaway from the MITRE ATT&CK eval as we push through, and definitely something you should investigate, both in MITRE's results and as you evaluate products and look at what people are doing.
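A minimal sketch of that shift-schedule idea — the CSV layout, usernames, and times are all invented for illustration, not any real product's format:

```python
# Sketch: import a CSV of operator shifts and flag interactive activity
# attributed to an engineer outside their shift. Layout is an assumption.
import csv, io
from datetime import datetime

SHIFT_CSV = """user,shift_start,shift_end
engineer1,06:00,18:00
engineer2,18:00,06:00
"""

def load_shifts(text):
    shifts = {}
    for row in csv.DictReader(io.StringIO(text)):
        shifts[row["user"]] = (row["shift_start"], row["shift_end"])
    return shifts

def on_shift(shifts, user, when):
    start, end = shifts[user]
    t = when.strftime("%H:%M")
    if start <= end:                      # day shift
        return start <= t < end
    return t >= start or t < end          # night shift wraps past midnight

shifts = load_shifts(SHIFT_CSV)
event_time = datetime(2021, 3, 2, 2, 30)  # 02:30 RDP logon as engineer1
suspicious = not on_shift(shifts, "engineer1", event_time)
```

The point isn't the code, it's the enrichment: the same RDP event that passes a pure traffic baseline fails a baseline that knows who should be working at 02:30.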
When we talk about data sources today, we're going to talk about three types; this is how we're bucketing them. First, processed network data. This is a lot of what you see when you look at the industrial market: passive network analysis. It includes NetFlow-style summarization — your five-tuple of source and destination IP and port, plus protocol — down into deeper protocol summarization, where a vendor or open-source project decided, let's extract this specific field in the protocol and write logic on it. That's why it's the "processed" side: someone decided which fields to pull out. The efficiency of processed network data is in analysis and storage. There's only so much processing power available, especially as you ruggedize a box and push it forward into a plant; you often can't have racks and racks of servers looking at a single tap point. And at some point you have to cut the data down in size, because storage can also take up quite a few racks if you keep everything. The con here is encryption: with 3389, or with TLS and HTTPS, if the data is encrypted you can see the fact of communication, but it's going to limit you later if you want to see what was communicated and the context inside it.
There's also protocol support. As we said for summarization, it depends on how deep the extraction goes and which fields the logic is written on. Protocol support is really important, especially in the industrial field, where you're dealing with very proprietary protocols — some that are known about but whose whole spec might not be, and some protocols and products that you don't even know about and vendors don't talk about much.

Processed host data is the next type. This is what comes off your endpoint agent collection: things like Windows host logs, AV logs, endpoint protection logs, application logs — really anything you can pull off the endpoint. It's "processed" because it's a log source: there might be filtering in there, just like you filter on the network side, to extract the data of interest. The pro here is that it's not network events — and I don't mean that to sound like a vague inverse of network data. You begin to see things like a process's command line, which you're not going to see on the network, plus other file and process metadata, the kind of internals you'd get from tools like Sysinternals. The cons: as the number of hosts grows, you have a maintenance tail to deal with, and managing your configuration — even just verifying it — can be a challenge. And just as there are things an attacker can do to mess with the data you collect from a tap point, with host data, if the attacker is on the host, indicator removal is its own actual MITRE ATT&CK for ICS technique. So with host data you do have to consider: is there rootkitting involved, is my hook compromised somewhere?
Is the data I'm getting the true data, or could it be manipulated? The third source we'll use is raw host and network data. Unlike its processed friends, this is your packet capture, your disk and memory images. The pro is that it's not filter-limited: you're not getting the opinion of the open-source project or the vendor about what to keep. The con — beyond the processing capacity and disk space needed — is that you need a mature analysis pipeline, because the more data you bring in, the more you have to be ready to deal with it efficiently. That's the trade-off of not dealing with filtered data; the real pro, again, is that there are a lot more analysis opportunities as you dig in.

So, going further into the MITRE eval, starting with initial compromise: the takeaway here is that more than network data is often needed to dissect behavior. The criteria down at the bottom — the criteria to score on this MITRE point — was seeing TCP 3389. With processed network data, that's something I can see: the TCP, and potentially UDP, traffic. And I might see the username with RDP: you can get the RDP cookie, which sometimes includes the username, depending on whether the network tool you're using actually extracts it, and you have to see a certain part of the RDP handshake to get it. So I can answer the first part of the scoring, but to score completely you also needed the second part: tying it to the specific process, with the username, which may or may not be present on the wire. This is where network data helps but you need the host data, to say: here's the process information, here's the logon record with the username. You'll see some of that in the results. The third source there is raw host and network data.
You can of course pull this out of memory, out of disk, out of the pcap — it's going to be there. But again, you're either going to be dealing with a tool that processes it for you efficiently, or it might take you a bit of time, and if you're doing it manually you're going to have to keep the number of boxes you're working on small.

Moving past initial compromise, we start to talk about persistence. I'm not going to go through every step of the ATT&CK eval; we chose just the few steps we wanted to talk about to bring out some salient points. For employing persistence, what was done in the eval was installing a scheduled task, and what the scheduled task did was initiate a reverse shell — an SSH reverse shell over port 445. Why? To get past firewalls: if a firewall is not application-aware and is just filtering on port number, this gets past it, because they were trying to disguise it as SMB traffic, and we'll talk about that in a minute. Hop into the first criterion: a scheduled task was created, it's not legitimate, and it was imported into the task scheduler. To score on this in the MITRE ATT&CK eval, you would not have gotten the point if you were just going off processed network data. This is where I think MITRE actually does a pretty good job, because you can look through the results and see who's actually processing host data. The link at the bottom should point you straight to this employ-persistence step, where you can see how the different vendors did. In this case, to score here: who's looking at Sysmon file-create events? A lot of the vendors that scored on this were using Sysmon file create, or they were doing some type of command-line analysis.
Command-line analysis works because the Windows event logs do give you the command line; all of the competitors that scored here scored out of the Windows event logs. This is where raw host and network data comes in, but it raises a question, and it really brings through the point: what if no network data is generated by the tactic? Definitely something to consider. Also consider: if all you're responding with is network data, how do you correlate this persistence to the network alerts you got on either side of this tactic — maybe from initial compromise, maybe from a later step? The approach I offer, and the approach MITRE showed, is: definitely consider those Windows event logs. What I'm showing here is that low collection diversity creates blind spots. Like we said, to correlate the whole stream of the attack you're going to need both network and host data, and you're going to need the processing capability to analyze them.

Moving on. We talked about that reverse shell being created, going over port 445, and passing the firewall rules. This is something you might see on the network side, because there is an app-layer mismatch. One of the eval participants caught it via a Suricata rule. You might catch it just by saying, hey, I'm looking at the protocol going over this port and it doesn't look like SMB — it looks like SSH. That might catch it at the processed network data level. On the host side, the Windows event logs — specifically, what was scored here was event 4672 — that's where the participants found it. Again, you can do it with raw host and network data; there's a lot you can do there.
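The app-layer mismatch idea — SSH disguised as SMB on port 445 — can be sketched as a simple banner check. This is a toy illustration, not a Suricata rule: the payload bytes are invented, and real detection would reassemble the TCP stream first.

```python
# Sketch of an application-layer mismatch check: traffic on TCP/445 should
# start with an SMB/NetBIOS header, while SSH endpoints announce themselves
# with an ASCII "SSH-" version banner.
def classify_payload(payload: bytes) -> str:
    if payload.startswith(b"SSH-"):
        return "ssh"
    # SMB2 magic (0xFE 'S' 'M' 'B') or SMB1 magic (0xFF 'S' 'M' 'B'),
    # typically after a 4-byte NetBIOS session header
    if b"\xfeSMB" in payload[:8] or b"\xffSMB" in payload[:8]:
        return "smb"
    return "unknown"

def mismatch(dst_port: int, payload: bytes) -> bool:
    expected = {445: "smb", 22: "ssh"}
    seen = classify_payload(payload)
    return dst_port in expected and seen != "unknown" and seen != expected[dst_port]

# A reverse shell disguising SSH as SMB by riding port 445:
print(mismatch(445, b"SSH-2.0-OpenSSH_7.9\r\n"))  # -> True
```

As the next section points out, a check like this can be a weak signal on its own: what matters is your threshold for mismatches and what you correlate them with.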
Again, the same scaling issues apply, but this is where details matter. This might be a weak signal: you get a whole lot of app-layer mismatches on your network side, and only in hindsight can you see that this one was part of persistence. You have to be ready to answer: what's my threshold for app-layer mismatches if I'm only using network data? If I'm using both network and host data, I can say, hey, there's a mismatch — and there's this really weird command line down here. That might be a better threshold, where you can say this clearly isn't right.

Another example where details matter is collection and discovery. Here the attacker used a custom executable for network scanning. 44818 is a port associated with Rockwell; for those not familiar, what they were doing was identifying devices talking EtherNet/IP, which is the protocol these devices speak, and they did it over multiple hours to limit detection. In processed network data — again, back to the baseline-threshold discussion — the baseline threshold might have helped you here. The other interesting thing to point out, a little TRISIS knowledge and a place where MITRE did a good job emulating TRISIS: the attackers used what's called PyInstaller (py2exe is a similar tool) — basically a way to turn Python scripts into cross-platform-capable binaries. If you look at the file metadata going over the wire when a PyInstaller-built binary is transferred, there are YARA rules and Suricata rules out there to look for these flying over the network, so in the processed network data space that's definitely something to look for.
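As one sketch of that PyInstaller angle: PyInstaller-built binaries embed a recognizable "MEI..." archive cookie you can scan files or reassembled streams for. The magic value below is my assumption from the PyInstaller source; verify it against the version you care about before relying on it.

```python
# Sketch: scan a blob (carved file or reassembled transfer) for the
# PyInstaller archive cookie. The magic constant is an assumption to
# verify against the PyInstaller source for your target version.
PYINSTALLER_MAGIC = b"MEI\x0c\x0b\x0a\x0b\x0e"

def looks_like_pyinstaller(data: bytes) -> bool:
    return PYINSTALLER_MAGIC in data

sample = b"\x00" * 64 + PYINSTALLER_MAGIC + b"\x00" * 16  # stand-in file tail
print(looks_like_pyinstaller(sample))  # -> True
```

In practice you'd fold a check like this into YARA or your file-extraction pipeline rather than hand-rolling it, but the signal is the same.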
And that's a detail that certainly matters. On the host data side, this is one where command lines can be super important. You might actually baseline command-line deviations: in this case the executable carried a name you'd accept as a known-good binary, and you know the command lines normally run with it. When the command line changes a bit and the network behavior changes, that might be a more advanced threshold you decide to go look at. Down at the raw host and network data level, it's the same deal: if you pull a hard drive, a disk image, or files out of memory and start running your rules against them — or against the running process — you might catch this tactic, in raw disk as well. But this is another case where details really do matter, because here's a binary name that I accept — but wait, it's PyInstaller. Where is PyInstaller actually used? There are industrial vendors out there using Python, but I don't know of many using PyInstaller or py2exe at this point. That's where those deep baselines give you opportunities to apply detail.

As we move on: details don't just matter at the network level. Getting into expanding access, the attacker did SCP over port 2223 to move some tools. When we talk about processed network data, our challenge here is that the data is encrypted. There is the protocol mismatch again — if you understand enough of the protocol going over port 2223, you might catch it that way — but you're not going to see the file contents, so rules keyed on content won't work as well. On the processed host data side, and this was interesting to me in the ATT&CK eval, there were a ton of file creation events.
I know what you might be thinking about that — hold on. The file creation events are there, and if you look through the results you'll see a lot of them scored; on the raw side you could do this via file metadata. But back to those file creation events: depending on the size of your network, the capability of your collection stack, and your ability to analyze it, file creation events are definitely interesting — but you have to filter them, because the next question is: can you track every file and registry write on this system without being flooded? It's certainly valuable to bring those logs in and to log them, but you need baselines and thresholds that say, hey, wait, a file write shouldn't happen here — maybe it's a write to a weirdly named directory, maybe it's a write to somewhere like C:\Windows\System32, which could definitely be a weird file write. You need context like that, because tracking every file and registry write won't scale; you'll flood yourself out. What is important — and we've said this a few times — is that as you turn up your automation and dig your baselines deeper, you're going to need context. Scale requires context. Some of it might come from the industry level, some of it might be specific down to your plant, but at the end of the day, detecting a sophisticated adversary like the one seen in TRISIS at scale requires context. And that context might not just be technical context.

So we move into infecting the safety system control logic. In this case, the adversary initiated a program upload action.
For those not familiar, program uploads and downloads are actually a little counterintuitive to people new to the industry. An upload goes from the programmable logic device to your computer — and as you can see here, they did it between the safety PLC and the safety engineering workstation (that's what the WS is). A download is the opposite: it pushes the program logic from the engineering workstation to the programmable logic controller. In this case we have two boxes that should be communicating: the engineering workstation is probably within the established trust threshold to be talking to a PLC. In processed network data you're going to see this file upload — at least in this case you will. The protocol Rockwell uses here is CIP; you can buy the spec, and there are also versions of it that float around the internet. It's a known protocol, though not every device uses it the same way for file upload. One advantage of some of the vendors is that they actually do dissect some of these very proprietary protocols — a lot of the major vendors have support here — so in processed network data you might see evidence of the proprietary file-upload command. Again, it might not trigger a threshold, because these are computers that should talk. But maybe you have multiple thresholds, maybe this happened in the middle of the night when engineers aren't there, maybe you're doing more correlation. Also helpful again — and where people scored on this — are the Windows event logs with the process information. And when you get down into it, some of this does start to involve getting into packet captures of the industrial traffic.
Depending on the protocol, you might need reverse-engineering skills: you might need to figure out what the protocol actually looks like and do things like writing a Lua dissector for Wireshark if you're trying to do this manually. But the point to prove here is that outside validation might be required. An easier path is to just ask: hey, engineering team, we saw a program upload go between these boxes, and it seemed kind of weird — did you do this? They might say, oh, no, we weren't there. You can double-check that with closed-circuit TV if it's a monitored site, with door access readers, or with the Windows event logs of who logged in when, and trace down whether someone on vacation supposedly did it — either their account is compromised or it wasn't them. The point is that outside validation might be required to prove these out with certainty.

Moving into the last two steps we'll talk about today: protocol depth doesn't matter until it does. When we get into disabling the safety function, we are dealing with very specific operations against very specific tags — data fields — within the device. In processed network data, the adversary in this case is initiating a Write Tag action: writing to a very specific part of that device's memory with a very specific value. To get down to this level, you have to do deep packet inspection of the protocol — in this case CIP and EtherNet/IP. Again, it's important to understand the protocols in your environment and what you actually have visibility over, because even if you take a big product line like Honeywell Experion, Emerson DeltaV, or Emerson Ovation, coverage claims alone don't tell you much.
There are a lot of protocols in each of those product lines, so just saying "we cover this product line" isn't enough. You need to know: is this the protocol that does my program uploads? Is this the protocol that writes tags to the device? And even what's enabled on the device matters — I've worked with devices where both EtherNet/IP and Modbus were enabled, so even if you're watching EtherNet/IP over here, I can do it via Modbus over there, and if you're not watching, you're not going to see it. Understanding the deep packet inspection needed to see these tag writes is very, very important. This is a case where processed host data isn't going to help you. If some of the vendors' applications log through the Windows API, you might be able to go into the Windows event logs — the application log, or logs the software writes — and find it there. You can certainly do this with raw packet capture, but again you're going to need either a dissector or a bit of reverse-engineering skill, depending on the protocol and how the tool was written. Protocol depth does matter.

Finally, the last step we'll talk about. In the ICS eval this was on a burner management network, and the tag writes toward the end are actually how MITRE emulated the effect-on-target phase. There was another group of tag writes at the end — very specific changes to the devices. What we see here is air damper settings and cascade control removal. Again, processed network data shines here, assuming you have that deep packet inspection and can see the tag data. And what's interesting, when you go through the MITRE eval results on their site: MITRE includes screenshots of what the vendors showed in their UIs.
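To make the tag-write inspection idea concrete, here's a heavily simplified sketch. It assumes an upstream parser has already unwrapped the EtherNet/IP encapsulation and handed us a CIP service code plus a decoded tag name; the service code value and the tag names are illustrative assumptions, not a full protocol implementation.

```python
# Heavily simplified sketch of deep packet inspection for tag writes.
# 0x4D is the Write Tag service in Rockwell's Logix object services
# (treat as an assumption to verify); tag names are invented examples.
WRITE_TAG = 0x4D
SENSITIVE_TAGS = {"BurnerMgmt.AirDamper", "BurnerMgmt.CascadeEnable"}

def flag_cip_request(service: int, tag: str) -> bool:
    return service == WRITE_TAG and tag in SENSITIVE_TAGS

requests = [
    (0x4C, "BurnerMgmt.AirDamper"),   # read: routine polling, expected
    (0x4D, "BurnerMgmt.AirDamper"),   # write to a safety-relevant tag
]
alerts = [r for r in requests if flag_cip_request(*r)]
print(len(alerts))  # -> 1
```

The hard part in real life is everything this sketch assumes away: the EtherNet/IP framing, CIP routing, and the symbolic tag path encoding, which is exactly why protocol depth in your tooling matters.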
It's definitely interesting, because one of the things you can do is see which vendors were actually able to show the values that were written and the tag names they were written to. That's something to consider when you look at the MITRE results, or when you run your own evaluation or process. And even if you're not evaluating a product, to see this type of impact you're going to need to get down to this level — whatever you're home-rolling, you need to be ready to do this. You could also get there with static and dynamic analysis: throw it in a sandbox, throw it on your digital twin or range, or on a real device if you have one. The protocol depth at this point really does matter. At first you might say, well, why would I care about this specific tag? And again, this goes back to filtering well: "I'm going to filter out tag names, because if I track every tag there's no way I'll store it all." That's right — right up to the point where you need it, like in this case, where the protocol detail starts to matter.

Something I want to wrap up with, something I want you to leave with: when you look at the ICS ATT&CK eval, it's really easy to enter it with hindsight bias, and hindsight bias makes surprises vanish. What we see on the right of the slide is the SCADA system faults at Maroochy Shire, that sewage plant that overflowed. In a normal state, before that January-February period, the site averaged about two to four alarms a day. When the insider started the wireless attacks, they went from two to four events a day to 10, 15, 20 — you can see a spike over 40 there.
In addition, they were having pumps that weren't turning on when they should, and pumps that were turning on when they shouldn't. There was a lot of weirdness. And if you look in there, the unknown system faults are in red, running from about January to the end of April. At the time there was a lot of weird going on, and there wasn't any correlation that this was an insider connecting in and causing those faults. Only when that operator started getting suspicious and digging in were they able to say, okay, we know this guy, we know he has the knowledge. Ultimately they ended up suspecting him, they hired private investigators to follow him, and they found out he was actually parking near the site, so they sent the police, arrested him, and found the equipment in his car. He was sentenced, I think, to two years in jail and something like a 13 or 15 thousand dollar fine. It's a prime example: with hindsight bias it's easy to say, oh, my threshold would have been there. But in cases like this, no, it really was the operator that saved the day, and it was a non-technical trigger that even caused the due diligence to look at it. It's important to consider, as you look across this, where those alerts might come in, and which alerts actually get you to figuring out where the event starts and how to respond to it. Because at the end of the day, success depends on detection of the chain of events. You might say, well, we'll start with network, because with network I just have to find one of the links in the chain. And that's true: you might find some links of the chain just going network, and you might find some links of the chain just going disk.
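The Maroochy alarm-rate jump described above, from a baseline of two to four faults a day up past 40, is the kind of spike a simple statistical baseline could flag. Here is a minimal sketch; the trailing-window mean-plus-three-sigma threshold is my own assumption for illustration, not anything the plant actually used.

```python
# Sketch: flag days where fault counts spike far above a rolling baseline,
# like the jump from 2-4 faults/day to 10-40 at Maroochy Shire.
# The mean + 3*stdev threshold over a trailing window is an assumption.
from statistics import mean, stdev

def spike_days(daily_counts, window=14, sigmas=3.0):
    """Return indices of days whose count exceeds the trailing baseline."""
    flagged = []
    for i in range(window, len(daily_counts)):
        base = daily_counts[i - window:i]
        threshold = mean(base) + sigmas * stdev(base)
        if daily_counts[i] > threshold:
            flagged.append(i)
    return flagged

# 14 quiet days (2-4 faults), then the attack period
counts = [2, 3, 4, 2, 3, 2, 4, 3, 2, 3, 4, 2, 3, 3, 15, 22, 41]
print(spike_days(counts))  # -> [14, 15, 16]
```

Of course, as the story shows, a threshold alone wasn't what caught the insider; the operator's suspicion was the real trigger, which is why baselines complement people rather than replace them.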
But you have to have enough of the chain of events to properly isolate, properly contain, and eradicate, because if you don't, ultimately you won't actually secure your network. And the correlation of events, right: if you can correlate events together, like we saw, there were multiple steps where both host and network data were needed. If you can't correlate, you're really going to struggle. The "right" data is going to put the dumpster fire out quicker, and I put "right" in quotes because there's no solid definition of what the right data is; there's a lot of difference between networks and between plants. But not having the right data can limit your detection entirely. If you don't have the data, well, you can't collect data from the past that you didn't collect at the time. You might have some metadata or other options, but you can't go back in time and get data you never got. So it's really important to consider both the host and network data that you use. For improving your success conditions, we offer six recommendations. First, what do you know about your environment and your visibility? How reliant are you on host-only or network-only data? How deep are you going in the protocols? How well do you know your devices, how they behave, and how your operators work in the plant? Understanding the people, process, and technology, the full scope of everything you do, is really important, because it's really hard to set good baselines if you don't understand yourself. You can work with a lot of people to try to do it, but ultimately you're going to know more about yourself than they will. And then it goes into constantly improving: how quickly can you dig into those data sources if you need to? If you find out you can't dig into them quickly, you need to go back and fix that, or figure out how to solve it.
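The host-plus-network correlation point above can be sketched very simply: pair a host event, such as an engineering-workstation process launch, with network events, such as tag writes, that land inside a short time window. All of the event fields, hostnames, and tag names below are illustrative assumptions, as is the five-minute window.

```python
# Sketch: correlate host events (e.g. a program-upload tool launching)
# with network events (e.g. tag writes) seen close together in time.
# Hostnames, IPs, tag names, and the window size are all illustrative.
from datetime import datetime, timedelta

host_events = [
    {"ts": datetime(2021, 8, 5, 10, 0, 12), "host": "eng-ws01",
     "detail": "upload tool launched"},
]
net_events = [
    {"ts": datetime(2021, 8, 5, 10, 0, 40), "src": "10.0.0.5",
     "detail": "ENIP tag write: BurnerMgmt.AirDamper"},
    {"ts": datetime(2021, 8, 5, 14, 3, 1), "src": "10.0.0.9",
     "detail": "Modbus write to register 40001"},
]

def correlate(host_evts, net_evts, window=timedelta(minutes=5)):
    """Pair each host event with net events inside +/- window."""
    pairs = []
    for h in host_evts:
        for n in net_evts:
            if abs(n["ts"] - h["ts"]) <= window:
                pairs.append((h["detail"], n["detail"]))
    return pairs

print(correlate(host_events, net_events))
# -> [('upload tool launched', 'ENIP tag write: BurnerMgmt.AirDamper')]
```

Real correlation engines key on more than timestamps, of course, but even this toy version shows why losing either the host side or the network side breaks the chain.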
If, going through the MITRE ICS Eval results, you say, oh yeah, we would not have seen that, or hey, there are good opportunities here, definitely use those preparation-level findings to improve before the event actually happens. Redundant sources, right: we talked about host logs, the risk being that someone can mess with them, but you need to have redundant data sources to give yourself that failure margin. I've been multiple places where, during an incident or before an incident, someone lost their entire Splunk stack or their entire SIEM. If you lose your SIEM, or if you lose part of what you're responding with or part of your visibility, you need to address that; you need to maintain situational awareness even through a failure. The process side is designed for failure, to a point; you need to design your security side for failure too. Collecting the right data, like we said: you need to understand your collection limits. You might have a very high-throughput bandwidth channel, or you might have collection challenges because it's out in the field and you can only use rugged options that limit the processing or collection power you have. You need to understand what the right data is for you, both for how you analyze data and for the realities of your operations.
You need to understand your tool limits: how do you use your products, right? Every product is great when it's being sold to you, but how are you actually using it for analysis, how does that make you better, and how does it fit into that overall strategy? And then challenging your status quo: look at your baselines, constantly validate and check them. Use tabletop exercises, use red team assessments, use the smart operators. We've had cases where we work with customers and, just in a conversation, asking "well, how would you do this?", the whole chain of events and the whole chain of success started with a five-minute conversation, because they know the status quo of the environment better than anyone. Always challenge the status quo, both in your technical stack and in your baselines and how you do things. So, takeaways here. Preparation matters: it's not a panacea, but it does accelerate response, and it gets you to that faster mean time to detection and mean time to response, which is important. But you also need to prepare for when you have to act. Silver bullets can breed complacency: it's important to validate that your tools actually do what you think they do, and that you can use them in the way they're being advertised to you. Response is inevitable: you don't choose if and when you respond, but you can control the speed. As we said, data diversity matters: we saw multiple cases where both the host and network side were needed to get to the conclusion, or in the eval's case, to actually score the point. Context is key. Tracking every file write? You won't. If it's a small network you might be able to do that; if it's a big network, you're going to have a big storage bill, so if you're putting that up in AWS it's going to be a big AWS bill.
You're going to have a big data lifecycle around your data: okay, let's store file writes for this subset of data, let's store command lines for that subset, and we'll store them for this long and in this area. You can go all day with the design on that, but at the end of the day what's important is to not flood an analyst, while also making sure that your analytics, if they're automated, are able to run over that data. Analysis depth doesn't matter until it does: we talked about both the file context and the network context mattering. Analyzing sophisticated attackers is going to challenge some of your collection and analysis limits. Like we saw in the impact stage of the MITRE ATT&CK eval, you had to dig deeper and deeper into the ENIP protocol. And as you take this into the production world it's going to be a challenge, because again you're dealing with proprietary protocols, and it depends on how often tags are written and how loud the network is. You go into a refinery and it's not just one data bus, there are two data buses, and there are even more complicated networks out there. Your analysis depth and your breadth of analysis all matter, and you should constantly be understanding and stretching those limits. So I want to say thanks for attending the talk. It was really great; this is actually my first DEF CON talk, and it's great to do it with ICS Village. If you want to get in touch, that's our Twitter handle, and if you want to work together, feel free to reach out. There's always fun stuff going on, so again, thanks, and have a great rest of the day.