Hello, everybody. Thank you for taking the time to listen today. My name is Otis Alexander. I'm a cybersecurity engineer at MITRE, and I lead the ATT&CK Evaluations effort for ICS. Today I'll be talking about the importance of detection context in regard to our recently released results for MITRE Engenuity's ATT&CK Evaluations for ICS.

As background, this is ATT&CK Evaluations' first move into a new evaluation domain, industrial control systems. We just completed our first round of independent evaluations for ICS, and the results were released last month on the 19th. The evaluations examined how cybersecurity products from five vendors, Armis, Claroty, Dragos, III, and Microsoft, were able to detect TTPs associated with the Triton malware.

Also as background, the goals of ATT&CK Evaluations are to improve organizations' defenses against known adversary behavior by empowering end users with objective insights into how the participating products can be used, providing transparency around the true capabilities of these products, and driving vendors to enhance their capabilities. It's important to note that these evaluations don't result in a winner. We're not declaring a winner, and they're not meant to be a competitive analysis. We're simply documenting what we observed in regard to detections. Across vendors, there's no singular way that we know of to analyze, rank, or rate solutions, so we look at how each vendor approaches threat defense within the context of ATT&CK.

For the adversary emulation in this evaluation, we emulated TTPs associated with the well-known Triton attack, and we launched that emulation against an evaluation environment, our lab, which functions as a burner management system.
We launched 100 sub-steps against that environment, and the real goal was to disable safety functions so that an unsafe state could be induced. The unsafe state we were trying to induce in this burner management system was a release of gas without the system tripping at all. Once we got enough gas into the simulated facility, we ignited that gas to cause destruction to the infrastructure.

It's important to note that many of the things we did were just standard actions leveraged against the environment. The adversary emulation consisted of a lot of remote access actions. From the ICS perspective, it also consisted of things like status requests to PLCs, online edits to change PLC configurations, and even forcing points to try to drive the process into a bad state. If you want to learn more about what we did, I encourage you to go to the ATT&CK Evaluations site and look at our operational flow, which walks through the individual steps.

Each participating vendor sent us a physical appliance with their solution installed for us to deploy in our lab, and they all received network traffic distributed to them by a network aggregator. In addition, Windows event logs were centrally collected and then forwarded via syslog to any of the vendors that had the capability to collect them with the appliance they sent us. We also offered an opportunity to actively poll the PLCs for configuration changes. This is a feature some of the vendors have, and we wanted to see how it worked, so for those that offer it to their customer base, we offered the chance to do that, but only at the end of execution, so that the traffic to the PLCs didn't interfere with the network traffic going to everybody else.

An important concept here is detection categories, and it's important because each of these vendors has their own way of describing detections.
They have their own windows in their UI where they collect these things; it may be called an alerting window, a detection window, or notifications. A lot of times there's also a back end which collects the atomic data being parsed from the particular data source they're ingesting. So it's important for us to be able to abstract that data so we can talk about it in a similar way across the vendors, and that's really what detection categories are.

Across the top we have our main detections. N/A just means that the particular participant didn't have a solution to collect a particular data source. None means that we didn't see any detection at all. Telemetry is minimally processed data with little to no context associated with it. General, Tactic, and Technique represent context that the vendor product has added to the detection to explain why it's malicious. We also include two modifiers: Config Change and Correlated. Config Change is added to a detection, at a point in time, to show that the vendor made a change to their product; they have to come to us and let us know what that change is, and if we approve it, we add it to the corresponding sub-step. Correlated, which is going to be an important topic coming up, is added to a detection to show that it was associated with another detection or other data that had been seen; it adds more context to a detection so that you can better understand surrounding events.

So let's talk about that a little bit: how we can add more context to particular detections by leveraging correlation, even correlation across data sources. As I said before, the adversary emulation was built from a lot of standard actions, and these actions are not malicious on their own.
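As a rough illustration of how detection results can be abstracted across vendors, the categories and modifiers described above could be encoded like this (a minimal sketch; the class and field names are my own, not part of any evaluation tooling):

```python
from dataclasses import dataclass, field
from enum import Enum

# Main detection categories described in the talk.
class Detection(Enum):
    NA = "N/A"               # vendor had no capability for this data source
    NONE = "None"            # no detection observed at all
    TELEMETRY = "Telemetry"  # minimally processed data, little to no context
    GENERAL = "General"      # flagged as malicious, but no tactic/technique context
    TACTIC = "Tactic"        # context ties the detection to an ATT&CK tactic
    TECHNIQUE = "Technique"  # context ties the detection to an ATT&CK technique

@dataclass
class SubStepResult:
    substep: str                 # e.g. "4B1" (sub-step naming from the talk)
    detection: Detection
    # Modifiers from the talk: "Config Change" and "Correlated".
    modifiers: set = field(default_factory=set)

# Example: a telemetry-only detection that was later correlated with other data.
result = SubStepResult("4B2", Detection.TELEMETRY, {"Correlated"})
print(result.substep, result.detection.value, result.modifiers)
```

The point of the abstraction is that every vendor's alerts, notifications, and back-end telemetry map onto the same small vocabulary, so results can be compared in a similar way across products.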
You really have to provide context to understand that they're malicious. RDP, SSH, SFTP: if these are things you do in your environment, they're just standard actions that happen. Status requests in the CIP protocol, online edits, force points, the standard actions you see in control systems are not malicious in and of themselves, so it's important to add context so that an analyst can understand that this is something that really needs to be looked at or responded to.

To show that an action is bad, we need some other context, some more information: maybe that this particular action will negatively impact the control system, or that it should not have been performed at a particular time, by a particular person, or at all in this environment. Another thing we can do is use correlation to tie it to other malicious activity it's related to, and therefore get more context about it potentially being bad.

What we're really looking for is a more holistic view of detections. What we see a lot is telemetry and singular analytic detections, which all increase visibility in an environment, but alone they don't always provide enough context for an analyst to know that they're bad or that they should be investigated. Correlation can really be used to tie these standard actions to malicious ones.

During execution, while we were collecting evidence and processing results, we did see correlation, but only in regard to singular data sources, so Windows events or network traffic, but not both at the same time. To give some examples of what we saw: we saw Windows host-based data sources tied to related events associated with the execution of malicious programs, usually consolidated to a single asset, something that happened on one asset.
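The single-asset, host-based correlation described here can be sketched as building a parent/child tree from Windows process-creation events (a toy example; the event records and field names are made up for illustration, not any vendor's data model):

```python
# Hypothetical process-creation events from a single asset.
events = [
    {"pid": 100, "ppid": 1,   "image": "explorer.exe"},
    {"pid": 200, "ppid": 100, "image": "cmd.exe"},
    {"pid": 300, "ppid": 200, "image": "attacker_tool.exe"},
]

def build_tree(events, root_ppid=1, depth=0):
    """Return indented lines showing each process under its parent,
    so an analyst can read the sequence of events at a glance."""
    lines = []
    for e in events:
        if e["ppid"] == root_ppid:
            lines.append("  " * depth + e["image"])
            lines.extend(build_tree(events, e["pid"], depth + 1))
    return lines

print("\n".join(build_tree(events)))
# explorer.exe
#   cmd.exe
#     attacker_tool.exe
```

Grouping events by asset and parent process like this is what turns isolated telemetry into a readable sequence, which is the kind of single-source correlation the evaluated products did show.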
This kind of correlation was good because it built a bigger story around a single action: we saw related events and got more context about what happened over a particular time period, or how a parent process links to other processes in a tree structure of events, so that an analyst can really understand a sequence of events. The other thing we saw was events pulled together in a story format, more of a narrative, talking about network sessions that may relate to each other, or other information that provided root-cause analysis to really explain why something may have been happening. Both of these are great examples of correlation over a single data source, but what we wanted to see was more narrative about what was happening across data sources, tying some of this together so that we have more context as analysts.

So I've provided a couple of examples of things that could potentially be done to provide more correlation across data sources. If you look at our adversary emulation, it's broken out into sub-steps, and each sub-step has criteria: something we're looking for in a detection to say that it counts as evidence of that particular step of the emulation. Some of our sub-steps are related to each other, usually reflected in the numbering, but overall they represent a similar action. For instance, 4B1, 4B2, and 4C1 all represent a similar goal of the adversary, a similar action. 4B1 is evidence that the SNMP client executable is not legitimate.
This particular executable is masquerading as something SNMP-related, but it's truly not. 4B2 is evidence of an established network connection over TCP port 445 between the adversary machine and the control engineering workstation. This is an outbound SSH tunnel request through the firewall; what we see here is that the SNMP client executable is actually plink under the hood, and it's creating this outbound SSH request. 4C1 is evidence of an established network connection over TCP port 3389 between the adversary machine and the control EWS.

If we look at all of these sub-steps, we see that they have different data sources: for the first we're looking for a Windows event, for the next we're looking at network traffic for evidence of a network session, and for the third we're looking for a Windows event again. What we saw for 4B1 might be an alert about masquerading that ties together certain related processes, correlating them to tell a story about how those processes were being executed. But what we did not see was the Windows event related to the network traffic. On the right-hand side is a narrative of the kind of cross-data-source correlation we would like to see: behavior was observed indicating network restriction bypass through RDP tunneling, based on the SNMP client executable, which is actually plink, being used to create an SSH tunnel over port 445. We don't expect this exact sentence to be stated within a particular platform, but we'd like something that leverages information from all three of these sub-steps and correlates it into a coherent story, tied together so that the analyst knows these things are related and that these standard actions are tied to something malicious, in this case the masquerading SNMP client.
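One way to picture the cross-data-source correlation described above is a time-windowed join between host events and network flows. This is a simplified, hypothetical sketch: the record fields, the filename `SNMPclient.exe`, and the 30-second window are illustrative assumptions, not any vendor's implementation.

```python
from datetime import datetime, timedelta

# Hypothetical Windows host event: a masquerading binary was executed.
win_events = [
    {"ts": datetime(2021, 7, 1, 9, 0, 5), "host": "EWS1",
     "image": "SNMPclient.exe",             # illustrative filename
     "note": "binary is actually plink (masquerading)"},
]

# Hypothetical network flow: an outbound session over TCP port 445.
net_flows = [
    {"ts": datetime(2021, 7, 1, 9, 0, 7), "src": "EWS1",
     "dst": "ADVERSARY", "dport": 445, "proto": "tcp"},
]

def correlate(win_events, net_flows, window=timedelta(seconds=30)):
    """Pair each suspicious process event with flows that originate from
    the same host within `window` of the process execution."""
    stories = []
    for w in win_events:
        for f in net_flows:
            if f["src"] == w["host"] and abs(f["ts"] - w["ts"]) <= window:
                stories.append(
                    f"{w['image']} on {w['host']} ({w['note']}) opened "
                    f"{f['proto']}/{f['dport']} to {f['dst']}"
                )
    return stories

for story in correlate(win_events, net_flows):
    print(story)
```

Real products would of course use richer join keys than host and time, but even this simple join turns two pieces of standalone telemetry into a single statement an analyst can act on.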
I have another example here, involving CIP communication and correlation across these types of events. These span sub-steps 19B1, 20A1, and 20A2. 19B1 is evidence that newly created files, extracted from a zip into the temp directory, are not legitimate, so we're dealing with masquerading again. 20A1 is evidence that the RSLogix 5000 executable, which was recently extracted from that zip, was executed on the safety engineering workstation; this may also include things like the parameters associated with that executable. 20A2 is evidence of an adversary-initiated Get Attribute Single CIP request for the Status attribute of a PLC.

So again we have three separate sub-steps, each with its own criteria, and they have differing data sources in the middle column: the first is associated with Windows events, as is the second, but for the third we may be looking for network traffic. Again, we saw correlation for Windows events, associated with masquerading or scripting and things like that; we saw event trees being created and events tied together based on certain criteria; but what we did not see was Windows events related to network traffic. On the right-hand side we again have correlation across the data sources, what we would expect to see in terms of tying these events together: behavior was observed indicating that a newly created file, the RSLogix 5000 executable, is masquerading as an Allen-Bradley executable and issued a CIP request for the Status attribute over the network. So now we have this standard action again, one we may see a lot in an environment, this CIP request for a Status attribute from a PLC, which may or may not be malicious; we don't know. But combined with something that relates in time, or by some other criteria, to this newly created file masquerading as an Allen-Bradley file, now we can build a story about
why this may be malicious, and it takes on a whole new meaning to an analyst looking at the screen. By leveraging these different data sources and tying them together, we can actually tell a better story about why standard actions may warrant more investigation.

So this really is our call to vendors: provide more context to these detections. We saw a lot of good context provided in regard to singular analytic detections, and we saw correlation across singular data sources, but we didn't see the ability to correlate detections across multiple data sources. We gave some examples of host-based and network-based data sources, but there are also PLC-based data sources, that is, the capability of your platform to actively poll a PLC to get configuration information. All of these things could be correlated in some form or fashion to tell a bigger story about what's going on in these environments. This will not only improve an analyst's understanding of the activity, it will also help make sense of standard actions that may go ignored or be misunderstood in these environments without more context, and it really defines why a standard action may be malicious. So we encourage vendors to see how you can leverage correlation across these data sources to provide more context to analysts.

For those who are looking at the results, we encourage you to really dig deep into them to understand how these vendors are adding context to detections, and what they're presenting to you as an analyst in the context of this particular evaluation. That's really important for understanding what you may want to look at as an analyst versus things that may not provide as much context, so we think it's very important to dig into the results to learn more.

That's all I have. Feel free to reach out to me. I'll also be answering questions on Discord, so if you have any questions about the evals, the process, or some of the things that we learned, please join me there for a question-and-answer session. Thank you.