Hey, everybody. My name is Austin Scott, and I'm going to kick this off here. I'm talking about purple teaming in ICS networks. On the agenda today, we'll start with some basic definitions to set the stage, so that we're all on the same page about the different assessment types and categories. People have different ideas of what they are, so I'll share mine. Then we'll get into some of the ICS-specific challenges we face doing assessments in these sensitive environments. Then we'll get into story time: I'll share a couple of stories from the field, with the names of the companies removed for their protection, and talk about some of my personal experience in these networks. First, a little bit about me. I work for a company called Dragos, and we are a software platform company. Our product, the Dragos Platform, provides passive monitoring and visibility into ICS networks. The product's secret sauce is its threat-based analytics, which are curated by our intel team through our WorldView reports, so our analytics are focused on the actual threats that have been seen, recorded, and reported in the field. The other part of our secret sauce is the playbooks: when an alert comes in or a threat is detected, what do you do next? Part of my job is writing a lot of the playbooks that tell the operator of the Dragos Platform what steps to take to triage and what the different technologies are that they may find. What's DNP3? What's a new master station? Things like that. As these alerts come up, there's a lot of rich, ICS-specific detail embedded in those playbooks, so people know what to do next. And I've been working in the industry for 16 years now.
I started my career in software development, building SCADA products for Schneider Electric on their software team, and then migrated into integration work. I used to do PLC programming and that kind of thing in the field. Then I moved into cybersecurity for industrial control systems, and I've never really looked back. So I've got a lot of field experience, not only doing cybersecurity assessments but actually doing system integration and wonderful things like that. Let's jump right into the definitions. I'm sure a lot of you are familiar with these terms, but just so we're all on the same page: cybersecurity assessments are typically categorized into shades of boxes. White box means all the data is shared upfront or during the assessment. Black box means none of the data is shared: it's a black box, you can't see into it. And gray box is somewhere in between white and black. We find that gray box testing simulates an activity group that has had access to an environment for an extended amount of time, which is typically referred to as the activity group's dwell time. In this industry we see that when an activity group gets in, they hang out for a while and collect a lot of information pertinent to the environment before they mature enough to carry out ICS attacks. So gray box testing is what we lean towards when we only have a one-week assessment; we don't have months to collect all that information. Now for ICS assessment types. These are common terms, but I want to introduce them here and talk about how they differ from their IT assessment cousins. An ICS vulnerability assessment is a more passive review of documentation, with opportunistic sampling of data, to provide an overall view of the cyber risk of an environment.
An ICS penetration test is a more active white box or gray box assessment, where we're testing for vulnerabilities and actually trying to exploit them to prove that the risk exists in the environment. A red team assessment is typically more of a black box adversary simulation: we go in without giving the IT SOC or the ICS security team a heads-up, and we test their detection and the effectiveness of their security controls. And finally, purple team is a combination of red team and blue team: red team on the offensive side, blue team on the defensive side, in a more collaborative approach to assessments where they work together towards the common goal of reducing cyber risk. What's particularly unique about ICS purple teams is that the blue team usually comprises not only cybersecurity folks but also the engineering team, the site operations team, and other personnel who actively use the ICS environment and are familiar with it. So let's talk a little about the role of the blue team in an ICS purple team engagement. The blue team provides pertinent information to the red team to help them progress through the network more quickly. This simulates the dwell time an activity group would typically have in an environment. We only have a week to do this assessment, so having six months to collect all that information organically just isn't an option. Having the blue team provide hints, information, diagrams, IP addresses, and even credentials at times helps us maneuver through the network more quickly and gives us a real opportunity to test their defense and detection mechanisms. And the red team's role is to communicate as they're enumerating, attacking, doing network pivots, and escalating privileges.
And most importantly, to assist the blue team in troubleshooting their detection capabilities and coming up with recommendations for improving their detection and logging in real time. That's where we really see a lot of the value: the opportunity to be that adversary while they tune those settings as they go. So let's jump into some of the ICS-specific assessment challenges. There are some differences between an ICS assessment and its IT cousin, and one of the most important is safety and reliability. Safety and reliability are paramount in these ICS environments. There's often a strong safety culture that we need to adhere to and be aligned with, and any behavior that deviates from it could get a contractor banned or barred from that site permanently. So we have to be really aware of that safety culture and watch out for infractions, small or major. Things that can get a contractor kicked out: improper PPE, not having the right gear for the environment you're working in; going into unauthorized or restricted areas; speeding in the parking lot; not holding handrails; going into Class I, Division 1 areas with traces of explosive gases, where bringing in equipment that isn't rated for the environment could pose a real risk to the plant. Those are some of the safety issues we're concerned about. And touching anything in the ICS environment without permission from the operators or the operations team is a big no-no, whether it's an oil and gas environment or food and bev; you can't touch anything without permission, and doing so could get you barred from the site pretty quickly. On the reliability side, the site operations and engineering teams are very sensitive to the reliability of the system.
Putting the ICS into any unknown or unrecoverable state can be dangerous to the people on site, can be very damaging to the equipment, and can cause costly outages and downtime. Sometimes an hour of downtime is measured in millions of dollars, so it's something site personnel are very sensitive to. Quite often people's pay, their bonuses for the year, are tied to things like safety performance and reliability performance; if you put that in jeopardy, you're not going to be very popular at the site. So performing any action that has even a remote possibility of tripping the process is out of the question. Most sites don't allow us to add any packets to the network; any kind of active information gathering is often out of bounds for us, especially in the Level 1 and Level 2 areas of the Purdue model, the lower areas where the controllers, PLCs, and SCADA equipment live. So we have to use more passive methods of collecting data that don't introduce any risk into the environment: pulling data manually from these environments, doing pcap collection from SPAN ports, things like that which are very passive and won't introduce risk. And even if you're quite confident you're not going to cause an outage, or that your scanning won't create an issue: if the site has any outages or problems while you're there putting packets on that network, they're definitely going to blame you. You're going to take the fall. So you've got to be really careful about putting anything on that network. Why would we test ICS networks at all, if it's so dangerous? Well, it's important to do the testing because activity groups are actively targeting these environments. We see it; we've got the intel to back that up.
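To give a sense of how hands-off that pcap-based collection is, here's a minimal Python sketch (my own illustration, not a Dragos tool) that validates and summarizes the global header of a libpcap file pulled from a SPAN port, without ever touching the network itself:

```python
import struct

# libpcap global header (24 bytes): magic, version major/minor,
# timezone offset, sigfigs, snaplen, link-layer type.
PCAP_MAGIC_LE = 0xA1B2C3D4

def parse_pcap_header(buf: bytes) -> dict:
    """Parse the 24-byte libpcap global header of a capture file."""
    if len(buf) < 24:
        raise ValueError("truncated pcap header")
    magic, vmaj, vmin, _tz, _sig, snaplen, linktype = struct.unpack(
        "<IHHiIII", buf[:24]
    )
    if magic != PCAP_MAGIC_LE:
        raise ValueError("not a little-endian pcap file")
    return {"version": (vmaj, vmin), "snaplen": snaplen, "linktype": linktype}

# Synthetic header standing in for a file collected off a SPAN port:
header = struct.pack("<IHHiIII", PCAP_MAGIC_LE, 2, 4, 0, 0, 65535, 1)
info = parse_pcap_header(header)
```

The point of the sketch is the workflow: the capture happens on a mirror port, and all the analysis is done offline on the resulting file, so nothing is injected into the control network.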
So it makes sense for us to actively assess these environments and see what those adversary activity groups would run into if they tried to breach them. It does require careful planning and experience working in these environments, and we need to be constantly communicating with the operations folks to be successful. Often we have to find creative ways of avoiding putting packets on the network, like setting up lab equipment, testing in a training environment, using virtualization, or even testing during an outage when the site isn't running. Next point: specialized equipment. Each ICS environment is unique; it's engineered and built to solve a particular problem, and the technologies you'll find on one site won't necessarily be the technologies you'll find on another. There are a lot of very specific engineering tools, protocols, wireless systems, and OT technologies unique to ICS that you won't find in IT environments at all, so understanding them and having experience with them is very important to your success. Almost always we'll do what we call a crown jewel assessment, where we identify the critical assets that can really impact the business or put people's lives at risk, and we focus our engagement on protecting, or trying to reach, those crown jewel assets. An example from an oil and gas site: if you find a critical vulnerability in some disposal well and they've got 30 disposal wells at the site, no one's really going to care. But if you find a critical vulnerability in their custody transfer meter, which is like the cash register of that oil and gas site, they're going to be a lot more concerned about someone interfering with that piece of equipment. Communication is also critical. Operators will want to know exactly what you're doing at any given time.
Quite often you'll be doing the assessment with an operator over either shoulder, breathing down your neck. It's a bit unnerving. Before you do anything to the network, or try anything or touch anything, you always have to ask permission and keep that constant communication and transparency with the operators, so they're comfortable with what you're doing and can make sure, if there's any risk, that the process is in a safe state and that an interruption won't cause any kind of catastrophic failure. And finally, the last challenge is cultural. There's long been a bit of friction between the engineering teams and the IT teams. Quite often in the past, IT teams would come in with their Windows security patches, roll those puppies out, and things would go down. So there's a bit of distrust between these groups, and trying to bridge that cultural gap can be an issue. As I said, whenever you're on site trying to work in this operational environment, it can be quite a challenge with operators standing over either shoulder. I always find you get about half as much work done on site as you normally would, just because of people asking questions, interrupting, and wanting to know what's going on. So if you think you're going to get a certain amount done, basically cut that in half; everything takes longer on site. Let's jump into a tale of two ICS assessments. We're going to talk about two particular engagements and the two different approaches that were taken. The first was a pure red team assessment at an energy company that was very capable, with a 24/7 SOC, endpoint protection, and ICS monitoring; the objective was to pivot into any ICS environment, any ICS component, and there were dozens of them.
The other was a purple team assessment at an energy company that had just completed a multi-million-dollar cybersecurity program; the objective was for us to pivot into one particular control network. During the red team assessment, our initial foothold was the corporate network, and our goal was to breach the ICS network. We did some initial enumeration and found an open file share on the corporate network with some credentials in it, like the good old Excel spreadsheet with the ICS passwords and all that. We found a dual-homed historian server doing data replication between the ICS environment of one of their plants and the corporate environment through an SSH tunnel. We were able to gain access to that Windows machine and find the SSH credentials it was using to replicate data with a file copy between the ICS network and the IT network, sharing that valuable data with the corporate folks. With those SSH credentials we were able to create a remote desktop tunnel into the OT environment. The only port open was SSH, port 22, so we pivoted through it with remote desktop. Once inside that ICS environment, we found they were reusing really weak credentials; the same admin username and password were everywhere. There was credential reuse galore. From there we were able to remote desktop, and remote desktop, and remote desktop again to reach different SCADA endpoints and take pictures of the actual process running on the operator workstations, things like that. And we kept expecting the SOC to give us a call or detect these things, but they never did. There was no detection, no logging; nothing was flagged by the ICS team or the 24/7 SOC. They didn't see it at all.
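To make that pivot concrete, tunneling RDP over the historian's one open SSH port boils down to a local port forward. Here's a small sketch (hypothetical hostnames, usernames, and ports; an illustration of the technique, not the exact commands from the engagement):

```python
def rdp_over_ssh_cmd(ssh_user: str, historian: str, ot_target: str,
                     local_port: int = 13389) -> list:
    """Build an OpenSSH command line that forwards a local port through a
    dual-homed historian to RDP (TCP 3389) on a host inside the OT network."""
    forward = f"{local_port}:{ot_target}:3389"   # local:remote_host:remote_port
    return ["ssh", "-N", "-L", forward, f"{ssh_user}@{historian}"]

# With this tunnel up, an RDP client pointed at localhost:13389 would land
# on the OT host, even though only port 22 is open through the boundary.
cmd = rdp_over_ssh_cmd("svc_replication", "historian.corp.example", "10.10.1.5")
```

The design point is that the single permitted service (SSH) carries an arbitrary second protocol, which is why a lone open port on a dual-homed host is still a meaningful boundary risk.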
So we wrote up our findings and provided them to the customer, and because they weren't really engaged in the process, even a couple of years later, when we heard about another assessment that went through, they still had the same findings. It was still in the same kind of position it was when we did the assessment. I think the lesson learned here is that active engagement, working with the blue team at a more integrated level with that purple team approach, probably would have created a different outcome in this environment. People would have been more engaged, that communication would have happened, people would have understood the risk at different levels in the organization, and maybe the issue would have been solved and remediated down the line. So let's jump into the purple team assessment that we did. This was a bit of a different scenario. We were physically at the plant, and we had no access. They said, just get in any way you can into the plant environment and see if you can get into the inner workings, the balance-of-plant environment, because in the Purdue model there were multiple layers we needed to pivot through. Our initial foothold was MouseJack: wireless Logitech and Dell mice can act as keyboards, and you can inject keystrokes into them. We were able to get a reverse shell through a MouseJack vector right through the plant manager; he happened to be using a Logitech wireless mouse, and we got a nice shell back from his machine. From his machine, we were able to enumerate the plant's Active Directory environment and found they had some misconfigurations with LAPS, the Local Administrator Password Solution. Thanks, Leslie.
LAPS is the solution Microsoft rolled out to reduce the risk of pass-the-hash attacks by rotating the local administrator password on all the Windows endpoints. But it's a central repository of those passwords stored in AD, and if you don't provision the permissions on those passwords properly, anyone who knows where to look can just reach in and pull them out of LAPS. They hadn't provisioned some of the passwords properly, and we found some juicy ones in there. One was for a backup service running across their entire network, and it gave us basically admin access to the corporate side of their plant, which allowed us to find a hypervisor server. They had replicated Active Directory environments at each of their plants, running in a Windows hypervisor environment, so once we had admin access to that hypervisor, we were able to find the virtual machine running Active Directory, export that VM, and just grab a dump of all the password hashes from that AD environment. Then we were able to pivot into the DMZ. From there, they had it really locked down: there was only one port open from their DMZ into the balance of plant, and it was a tunnel using an OPC product. We couldn't progress any further; it wasn't regular OPC, it was a tunneled, encrypted OPC that we couldn't breach. So we had success up to that point. We proved that they had locked down their environment, but we did find a lot of things along the way, and we were actively engaging with their blue team. We had the SOC up on a bridge the whole time. We were talking with their team, and as we'd run different attacks or enumerations, they'd say, okay, do that again.
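For reference, reading a LAPS password when the attribute ACLs are misprovisioned is just an LDAP attribute read against the computer object. Here's a minimal sketch of the search parameters involved (the helper function and computer name are my own illustration; the `ms-Mcs-AdmPwd` attribute is where legacy LAPS stores the cleartext password):

```python
# Legacy Microsoft LAPS stores the current local admin password here:
LAPS_ATTR = "ms-Mcs-AdmPwd"

def laps_search_params(computer_name: str) -> dict:
    """LDAP search parameters for reading a computer's LAPS password.
    With correct ACLs, only authorized principals can read ms-Mcs-AdmPwd;
    when the ACLs are misprovisioned, any domain user who knows where to
    look can pull it straight out of AD."""
    return {
        "search_filter": f"(&(objectClass=computer)(sAMAccountName={computer_name}$))",
        "attributes": [LAPS_ATTR, "ms-Mcs-AdmPwdExpirationTime"],
    }

params = laps_search_params("BACKUP01")
```

In an actual engagement these parameters would be fed to an LDAP client bound as an ordinary domain user; if the password comes back, the ACLs are broken. That is the misconfiguration, not a flaw in LAPS itself.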
They'd tweak a rule a little and see if they could detect it this time; we'd try again, and they'd try to detect it. So we were helping them hone their detection rules and their logging, their event logging and Windows logging, while the assessment was ongoing. They were getting that value as things went on, and we were able to help them reduce risk while the engagement was underway. So, in summary, the purple team advantage: it really does reduce the risk in these engagements. You're constantly in communication with the operations folks; they're often with you in the room, you've got an open bridge, you're talking to the operators, and that open channel really helps you communicate what you're doing and reduce the risk to the operating asset. It also helps improve their defenses. They can test and tweak their detections in real time, and we can replay our attacks and enumeration methods, or even do a packet capture for them to work on later. We can also use a lot of the tools we see adversary groups actively using, like Pupy, compiled Python, different in-memory techniques, Mimikatz, custom compiles, things like that, so we can test their mettle against what we're really seeing in the field. And we're building better relationships. We're leveraging that OT and IT team knowledge and building a bridge between two groups that don't always get along, so we have the opportunity to work together and take advantage of that knowledge. These people work in the plant every day; they understand their system extremely well, so in a week's time we can do what would take an adversary or activity group months of reconnaissance to pull off. And we do reduce risk. We're addressing the four ICS-specific challenges I talked about earlier.
It addresses the safety and reliability issues we often encounter; we have that ongoing communication, which is important; and we have the operators' and engineers' specialized knowledge of the industrial equipment. I've worked with a lot of different environments, a lot of different PLCs and SCADA systems, but I'm always finding new ones; everything out there is often different and unique. And it helps build that cultural trust between these two groups. And that's the end of my presentation. Any questions? Yes, sir. I wondered, in your earlier slides when you were talking about your assessments, what standards you represent. So outside of NERC CIP or some of the other utility stuff, when you're not dealing with utilities, what standards are you assessing against? Quite often we'll assess against their corporate or ICS standard, if they've stood one up that they're holding their environment to. It's not really fair to compare an industrial environment to a standard that the site has no idea of. It's really up to the company: if they're not regulated under NERC CIP or something like that, and they don't have those expectations, then it's not fair to say, well, you should comply with this or that standard. We'll do whatever the customer wants; typically we'll use NIST 800-82, or even 800-53, or ISA/IEC 62443, things like that, depending on what the customer is interested in aligning with. But it's almost unfair if the site isn't aware of a standard, if they haven't had the expectation to align with it, and a lot of them haven't. There are a bunch of different standards out there, so it's up to the customer; we'll work with whatever. We've got people on our team who are on the steering committees of these standards, so they know them very well, and we can always fall back on and leverage that knowledge as required in these assessments.
Any other questions? Yes, sir. That's a very good question, and it was a point of debate at S4 this year. In my opinion, and maybe it's unpopular, I don't know, the Purdue model is not a cybersecurity model; it's an architecture model. You shouldn't try to build your site to it. Every site is different and unique and has different challenges, so you should try to address the cyber risk in your environment and not necessarily try to conform to the Purdue model. It makes sense if your site kind of aligns with it, but a lot of them don't quite fit into that weird little niche. Great question though, thank you. Yes, sir. That's a really great question. I actually had another presentation at the end of Black Hat with Elasticsearch on that topic, talking about how we use bash and batch files to dump a lot of that information from Linux and Windows endpoints, and then ingest it into Elasticsearch so we can do dashboarding and reporting against those endpoints. We will opportunistically use Rapid7 and Nessus and Nmap, but that's not always an option, so if we have to physically plug in and pull data, that's what we'll do: run some batch scripts, pull that data down into text files, and use a Python parser to cut it all up and throw it into Elastic to do the magic. Okay, now I'm all out of time, but thank you, everyone, for attending. Great to see so many familiar faces.
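To give a flavor of that batch-file-to-Elasticsearch pipeline, here's a cut-down Python sketch (illustrative only; the function names, sample fields, and index name are my own, not the actual tooling) that turns a key/value text dump from an endpoint collection script into documents formatted for the Elasticsearch `_bulk` API:

```python
import json

def parse_dump(text: str, hostname: str) -> dict:
    """Turn 'Key : Value' lines from a collection script into one flat doc."""
    doc = {"host": hostname}
    for line in text.splitlines():
        if ":" not in line:
            continue  # skip banners and blank lines
        key, _, value = line.partition(":")
        doc[key.strip().lower().replace(" ", "_")] = value.strip()
    return doc

def to_bulk(docs, index="ics-endpoints") -> str:
    """Render docs in Elasticsearch _bulk format: action line, then source line."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

# Sample output resembling what a collection script might dump to a text file:
sample = "OS Name : Microsoft Windows Server 2016\nDomain : plant.example\n"
doc = parse_dump(sample, "HIST01")
bulk = to_bulk([doc])
```

The resulting newline-delimited string can be POSTed to the cluster's `_bulk` endpoint, after which the fields are available for the dashboards and reports described above.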