Welcome, ladies and gentlemen. We're glad that you've joined us for this year's cybersecurity RIC session. I'm Jim Beardsley, the Acting Deputy Director of the Division of Physical and Cybersecurity Policy in the NRC's Office of Nuclear Security and Incident Response. We're very excited about this year's session focused on the emerging threats to industrial control systems. This topic is very relevant to the current state of cybersecurity in virtually every industry, including nuclear power. Before we get started, I want to make a couple of comments about the NRC's Cybersecurity Oversight Program. In 2021, the staff completed the inspections of the nuclear power industry's full implementation of its cybersecurity programs. The staff found with reasonable assurance that industry understands the cybersecurity regulatory requirements and has implemented their cybersecurity plans. Also in 2021, the staff developed a new cybersecurity inspection program that will focus on industry's ongoing implementation of their cybersecurity programs. Each licensee will be inspected every two years, and those inspections started last month. The staff have developed a revision to our cybersecurity regulatory guide, and it's available for public comment. The revision is designated as Draft Guide 5061 and can be found in the NRC's public document repository, the Agencywide Documents Access and Management System, or ADAMS. Now on to the session. We have a fantastic lineup of speakers covering diverse perspectives on the subject of cybersecurity threats to industrial control systems. I'll introduce the panel members before we get started. There are more detailed bios for each of them available through the RIC 2022 platform. If you have questions for the panelists during the session, please enter them into the RIC platform using the Q&A tab. If you have a specific panel member you wish to address a question to, please note that in your entry. Our first speaker will be Jacob Benjamin.
Jacob is the Director of Professional Services at Dragos, Inc. Second, we will have Joel Max. Joel is the Industrial Control Systems Lead in the Federal Bureau of Investigation's Cyber Division. Our third speaker will be Bob Anderson. Bob is a Nuclear Security Consultant at the Department of Energy's Idaho National Laboratory. Finally, we have Mark Bristow. Mark is the Branch Chief for Cyber Defense Coordination in the Threat Hunting Division of the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency. Mark may or may not be able to join us. We're still waiting for him to arrive. If he does not, we will move right to Q&A after the first three speakers. Hey, Jim, I'm here. Oh, Mark is here. Sorry, he had some tech issues. Sorry, Mark. Mark will be the fourth of our speakers, and so we'll kick it off with Jacob. Jacob, go ahead. Thanks. So I was looking to see, okay, are the slides up? Yes, you're good to go. Okay. I don't see the slides, but that's okay. So I wanted to talk about emerging ICS threats, and the first thing to do is to talk about threat groups. So Dragos collects and analyzes information on cyber intrusions and attempts to compromise ICS networks. We create profiles of known groups targeting ICS environments so we can focus on how they operate. Then we establish robust analytics with comprehensive data around their actions, their capabilities, and their intentions. Currently we track 18 ICS threat activity groups, two of which have shown interest in or targeted nuclear. So if you'll skip ahead, I think two slides, they'll catch up to me. Perfect. Thank you. So as I said, we're going to highlight two groups here, and these two are the ones that are known to have shown interest in or specifically targeted nuclear. The first one is Wassonite. In October 2019, Dragos identified the adversary Wassonite targeting the Kudankulam Nuclear Power Plant in India.
Later intelligence research, combined with public announcements from that plant, confirmed that the adversaries had breached their IT network. Dragos also identified a pattern of activity associated with the same tactics, tools, and techniques spanning multiple ICS entities. That also included electric generation, nuclear energy, manufacturing, and space-centric research. So we also have Dymalloy. Dymalloy operations include deep ICS environment information gathering, operator credentials, and industrial process details, using tools like Goodor, DorShel, and Mimikatz. Geography-wise, Dymalloy targets Turkey, Europe, and the United States. It has links to groups that other companies track in their intel, right? So Dymalloy is linked to Dragonfly 2.0, or Berserk Bear. Dymalloy successfully obtained HMI screenshots while conducting reconnaissance in targeted operational networks, and this was documented in a US-CERT alert in March 2018. So Dymalloy's victimology includes US electric generation and nuclear. As you can see here, we kind of have these trading card profiles that highlight the geography, the victimology, and some of their common TTPs. So I've compared these two here, and they share some TTPs that I wanted to highlight. Both groups use spearphishing attachments to gain initial access, and both groups leverage valid accounts to laterally move across the network. And both groups also use credential harvesting and privilege escalation, and to do that, they leverage the tool Mimikatz. So if you go to the next slide. We recently released our fifth annual Year in Review report on ICS cyber threats, vulnerabilities, assessments, and incident response observations. And here are some key findings from our proactive service engagements. This data set includes engagements from many industrial infrastructure sectors, not limited to nuclear. So it has, you know, electric, oil and gas, food and agriculture, pharma, chemical, transportation. It's got a number of them.
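The shared-TTP comparison Jacob walks through (spearphishing attachments, valid accounts, credential harvesting with Mimikatz) can be expressed as a simple set comparison. A minimal sketch; the group names and shared techniques come from the talk, but the ATT&CK-style technique IDs attached to them are illustrative assumptions, not part of the Dragos profiles:

```python
# Sketch: comparing two threat-activity-group TTP profiles as sets.
# The technique IDs below are illustrative assumptions added for the example.
WASSONITE = {
    "T1566.001 Spearphishing Attachment",
    "T1078 Valid Accounts",
    "T1003 OS Credential Dumping (Mimikatz)",
    "T1105 Ingress Tool Transfer",
}
DYMALLOY = {
    "T1566.001 Spearphishing Attachment",
    "T1078 Valid Accounts",
    "T1003 OS Credential Dumping (Mimikatz)",
    "T1113 Screen Capture (HMI screenshots)",
}

shared = WASSONITE & DYMALLOY            # TTPs common to both groups
dymalloy_only = DYMALLOY - WASSONITE     # e.g. the HMI screen capture

print(sorted(shared))
```

This kind of overlap is why defenders can build one detection (say, for Mimikatz-style credential dumping) that covers multiple tracked groups at once.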
So let me highlight visibility here. The greatest and most common recommendation that Dragos has is to increase visibility of the OT networks. Visibility is critical for network security, and it facilitates the prioritization of future improvements. Additionally, streamlining detection and response capabilities is really only possible with an increased understanding of the network and its components. And that can only be done with visibility. So the specific numbers here that we're looking at: 86% of our service customers had limited or no visibility into their ICS environment. Now, poor security perimeters, external connections to ICS, and shared IT and OT credentials are common to the OT industry in general, but not to nuclear. Nuclear, as you're well aware, has strong network security perimeters, hard breaks, and segmentation, but plants do still suffer from poor visibility. And I'll talk more about this in the next slide. So if you'll go ahead and go to that. One more time. Thanks. Perfect. So one of the questions I had when we were preparing this panel, they asked me to speak on why visibility is still needed, even with the implementation of a data diode. A harsh truth is that prevention is ideal, but it's not guaranteed. And preventative countermeasures like segmentation atrophy over time. Things like spreadsheets and drawings become obsolete quickly, and they almost never resemble the true state of the network. So visibility provides that site picture for how the processes and workflows are actually executed, and it leads to that understanding of normal in the OT environment. It also leads to continuous validation that those preventative controls are working, that there aren't any bypasses of the data diode. And so, like I said, visibility leads to understanding normal in your OT environment.
Knowing normal adds that valuable context to accelerate the defender's situational awareness and things like change detection. I do want to highlight here a little bit what we consider limited visibility, because it's easy to ask what that means. We considered a finding limited visibility if the asset owner was only monitoring the IT-to-OT boundary, if they had an incomplete or inaccurate asset inventory, if they were doing limited or manual log collection, or if they were monitoring OT communication but without OT protocol dissection. Without these items, defenders are blind to critical network traffic that's happening on the operational network. So that leads me to: how do you get full visibility? What does that mean? Full visibility is achieved when network and device logs are centralized and can be correlated across various network segments with traffic analysis and asset inventories. And one key thing about visibility: without visibility, most threat intelligence is unactionable. You can't do anything about it. You have this alert for this vulnerability, but you don't know if you have that asset, or you don't know if you have that asset in that configuration. But threat intelligence itself is very necessary, right? You need it to provide the context for when you do detect something, to understand whether it's a false positive, or, in the trickiest situation, to determine whether it's a process anomaly or a cyber attack. And to do that, you need the context that comes from threat intelligence. So threat intelligence and visibility both feed into being able to detect things early, with the appropriate context that allows a defender to rapidly select and execute the correct playbook, which will then expedite response, containment, and remediation efforts. So overall, to wrap it up in a sentence or two: why do you need visibility when you have a data diode?
Visibility leads to early detection, which leads to quicker recoveries and ultimately minimizes your financial, operational, and even safety impacts. Thanks. So that's what I have for my slides, Jim. Thanks, Jacob. Why don't we move on to Joel. Thank you. So today I'm going to talk a little bit about how the FBI views the threat to the nuclear sector and the industry. Then I'll spend most of my time on how to engage with the FBI, and answer any questions you may have on how to work through an incident with the FBI, both ahead of time and during an incident. So, kind of to level set: when we look at the threat landscape, China and Russia are the two that rise to the forefront, which is to nobody's surprise, especially when we look at their ICS attack capabilities; they have a lot of resources to throw at that. And I think what we're seeing across the landscape is that there's more and more capability being developed around the world as adversaries look to hold different sectors of infrastructure at risk, to include the nuclear sector. And so when we've met with industry and talked about the threat, I think that's pretty straightforward. Now, when we look at what these adversaries are looking for, they go after a lot of stuff that's pretty easy to get in the open source, especially through social engineering: going on LinkedIn and finding out who works at what companies, and the makes and models of the things that you may use in your OT networks. They're looking for sensitive internal documents, things that can help them better understand your network and the physical layout of your facilities, really looking at every aspect of how to find that one weak link. Because as we all know in cyber, it's so difficult to protect every part of the fence line, so to speak. And just like Jacob was talking about with having visibility, the FBI's number one recommendation to industry is to have a good baseline.
But you can't have a good baseline of network activity if you don't know what's in your network, you know, what assets you have. And so understanding that baseline and having it set ahead of time is so critical, because whether it's China or Russia or other actors, they're doing things like using legitimate credentials, the stuff that we just heard about, and that's much harder to detect unless you have that baseline and you understand how traffic is moving in your network, what users are logged on, and what that looks like specifically. So we really continue to recommend trying to get that baseline, because so many of the other recommendations that come out of the government for ICS really aren't applicable to nuclear, just because of the legacy systems and the way it's architected. It's just not the same as many other sectors. And it's not as easy to go back and retroactively fix a lot of things. Which kind of leads me to the portion on working with the FBI. We really recommend that industry works ahead of time, not only in your continuity planning but in working through which local FBI field office is nearest to you. Each of our 56 field offices has a cyber squad that's specially trained to handle incident response to critical infrastructure. And I know Mark's here to talk a little bit about what CISA does, but we work very closely with CISA to provide that kind of proactive engagement with industry. That way, everything can be discussed ahead of time, so that you're having those conversations before the incident, not after the incident's happened. Because the last thing you want is to be in a situation like Colonial Pipeline, where you're trying to put the pieces together while the whole world is watching.
So if you can engage with the FBI early and talk about what your plans are, that helps us figure out how we can best help you, and it also gives you a sense of what incident response looks like. When we come in to do things like evidence gathering and intelligence collection, working with CISA, who is doing a lot of that incident response, you know what that looks like and who the personnel are going to be. From a legal perspective, it's always helpful to have general counsel engaged early to look at those MOUs and what they might look like. So I think that really is a key measure of success for entities that we've seen victimized: those that have had a previous relationship with us have had a much smoother process. It just makes it a little easier on everybody. From that standpoint, the best way to connect with the FBI is at the local level. And that's not just the 56 field offices here in the United States; we have a number of offices overseas, our legal attaché offices. So whether you have a presence here in the United States, or you have partners overseas or in other locations, you can engage with the FBI almost anywhere in the world. And as a part of that, we work hand in hand with CISA to figure out what that solution looks like for you, to move that intelligence from classified to unclassified channels when we can, through things like private industry notifications and flashes, and joint cybersecurity advisories with CISA and NSA and others. And there we work, when we get that intelligence, to push it out to industry a bit faster than we have in the past. So that's really, in a nutshell, how the FBI views the threat and how we like to work with industry ahead of an incident, hopefully.
And if you do have that bad day and you're trying to work through the incident, we're happy to work with you through that as well, to hopefully make the process as easy as possible. And ultimately, from a law enforcement perspective, to bring to justice those that are going after your systems, your people, and your proprietary information. Those are the types of things that we're interested in doing. So I think that's it for me. I'm happy to answer questions once we get to that point. Thank you, Joel. And now we'll go on to Bob Anderson. All right. Thank you, Mr. Chairman. If we pull up, OK, we've got the first slide up there. I want to talk about a couple of areas that I've been working on in the last several years that are kind of near and dear to me, and that is cybersecurity for advanced reactors, and cyber-informed engineering. If we go to the next slide and take a look at what the cybersecurity landscape, or the digital footprint, might look like with advanced reactors: we know that there's going to be extensive use of digital technology. We know that new I&C smart sensors are being developed, and we assume that there will be some built-in diagnostics as part of that. This may bring some new protocols that we might have to worry about from a cybersecurity perspective. We're expecting to see more wireless technology. We know that field programmable gate arrays are being suggested for safety systems a lot, especially within the small modular reactor and microreactor worlds. And then, of course, with the extensive use of digital technology, we expect to see more frequent upgrades of software and firmware and patches. What does that look like? What processes are going to be in place? And how do we protect against those more frequent upgrades? And then we also expect that there is going to be some sort of smart grid interaction, with intermittent power, whether that be wind or solar or something like that, which will probably produce a different type of operations. And then we hear a lot about digital twins. And, oh by the way, what does that even mean? I'm not even sure I know what that means today. I've heard a lot of different definitions of what it's supposed to be and what it's supposed to do. Maybe it's going to help us run some models and simulations to gain efficiencies or things like that. But again, we're hearing these words in the industry right now. When it comes to remote operations and remote communications, what's that going to look like? How do we protect those? Those may look a lot different when you have microreactors that may be far away from a control room, potentially. We know that there's going to be a push for autonomous operations to reduce the people footprint. There might be some different modes of operation, including load following, and the use of artificial intelligence and machine learning. I think one of the biggest concerns around that is: what are the boundaries for use in the nuclear industry? What are the use cases for AI and ML? I would hope that we never really control the reactor with artificial intelligence. I would hate to wake up the next day as an operator and find out that the AI learned something new overnight and has a new way to control. So these are some things that we need to be careful about from a cybersecurity perspective. Maybe there's some increased monitoring as well. And then supply chain. I think we all understand that supply chain is huge right now. There are a lot of issues surrounding that, especially when you look at an SMR vendor themselves: they are part of the supply chain, they have a lot of sub-suppliers, and they probably source a lot of international systems and components.
And so it's really a potential international problem that we have to solve on the supply chain. But then, when you go to the next slide: how will the regulators really address advanced reactor cybersecurity? Is the guidance going to be prescriptive? Are we moving towards performance-based guidance? I heard a lot in the last couple of days about risk-informed guidance and regulation. Or will it be a hybrid of the two? What's that going to look like in the end? From an international perspective, what about harmonization of regulation? I know that vendors want to sell their products worldwide. Is there a way to talk to regulators from other countries to make sure that whatever regulation and guidance comes out is harmonized in some form or another, so that there's not a lot of rework on the vendors themselves? What about design basis threat updates? I would imagine that with the high digital footprint, we may want to update our DBTs more often. We always see new tactics, techniques, and procedures from the adversary, so there are different capabilities and intents and motivations. What's that going to look like, especially for SMRs or microreactors? And then we come down to a main topic with staffing qualifications, and not just for competent authorities, but operators too. What's that going to look like from a cybersecurity perspective? How do we educate everybody in the nuclear industry about cybersecurity? And that really leads me into the second area that I want to talk about on the next slide, and that is cyber-informed engineering. This is an area that the Idaho National Laboratory and the Department of Energy have been working on for several years. Jacob Benjamin, who used to work for the INL, was a contributor to this as well. It's a cyber-by-design concept. And this is really looking at cybersecurity as a foundational element of risk management, especially for functions supported by digital technology.
So what does that mean, exactly? It means we want to make sure that we reduce the risk from a cyber attack, right? Can we mitigate it? Can we eliminate things? This whole concept is focused on engineers and technical support. So it's an engineering approach, not an information technology approach. We're looking for opportunities where we could potentially engineer out some of the cyber risk. And maybe through that process we get some cost savings, but ultimately we want to look for applying more effective cybersecurity approaches. The last thing we want to do is wait until the design is completely done and then find out that we really can't bolt on any cybersecurity controls, when we could have done it earlier in the design phases. And again, part of this concept is looking at the entire systems engineering lifecycle. If we can get small modular reactor and microreactor vendors looking at their conceptual stages from a cybersecurity perspective, then maybe there are some areas where the cybersecurity specialist can get involved and help steer toward designs that may be less vulnerable to cyber attack. And of course, we're really trying to stimulate an engineering culture of security here, similar to what we already have with safety. And if you go to the next slide: in 2020, the National Defense Authorization Act directed the Department of Energy to develop a national cyber-informed engineering strategy. I was part of the technical team that helped put this together. And it's really cybersecurity as a foundational element of risk management for functions aided by digital technology. And again, this is for the entire systems engineering lifecycle. Coming out of that strategy, we have five strategic pillars. One of them is awareness. What is cyber-informed engineering? Who's it for? How does it work? What conditions do we use it in, et cetera. It's really about getting the awareness of CIE out there to the community.
Number two is education. How do we educate the young people coming out of the universities? Can we get into the universities and help them provide some curriculum on cyber-informed engineering? And how do we educate, of course, the existing fleet of engineers that are out there in the energy world? Then development: that's actually developing the products and the applications for CIE. What are the tools that we're going to need? What are the processes and procedures that go along with it, so somebody can take it and actually use it, and it's not just a conceptual idea in the sky? That's what the third one is about. And then, of course, we take those tools and processes and we apply them to the current energy infrastructure, and not only the current one, but also future R&D energy systems, which small modular reactors and microreactors fit right into. So if you want to know more, we have a website on cyber-informed engineering that's currently hosted by the INL. And if you want more information, if you go to the next slide, you can talk to Cheri Caddy, who's the Senior Advisor for Cybersecurity for DOE's CESER organization, the Office of Cybersecurity, Energy Security, and Emergency Response. Or you can email me and I will get you the answers one way or the other. But those are the two main areas I wanted to talk about that I've been dealing with a lot recently. I also work with the International Atomic Energy Agency a lot. I just got back a couple of weeks ago from a technical meeting on small modular reactors and microreactors, and there was a lot of great information that came out of that, that hopefully the agency can use to help answer questions for a lot of the member states. And with that, I'll give it back to Mr. Chairman. Thanks, Bob. And now, to hit cleanup, how about Mark Bristow? My favorite spot in the lineup, to include right before lunch; those are always the best times.
So thank you, and apologies for my technical difficulties getting us going. There are just a couple of things that I want to hit on, because I really want to leave some time for Q&A, and quite frankly, my colleagues here on the panel have really hit a lot of the same points that I want to make, but I want to touch on a few things. So the first thing: hi, Mark Bristow. Sorry, I didn't introduce myself a second time. I'm the Branch Chief for Cyber Defense Coordination in CISA. And I want to talk about three things in particular. The first one is, if you're not familiar with CISA and with working with us, what Threat Hunting's role is here. Similar to what Joel was talking about with the FBI, ultimately CISA is here to support the nuclear industry, as well as all the critical infrastructure industries, in identifying and remediating vulnerabilities and identifying and remediating cybersecurity incidents, and ultimately to help bolster the cybersecurity posture of the United States from a national security perspective. And I don't think I need to explain to this audience why this community is critical to overall national security. That goes without saying. But working with CISA is something that some of you have done. I can't actually see all the people that are on this call, but I know that some of you have worked with us and some of you haven't, right? So what's it like to work with CISA? We're a little bit of a different agency. Our perspective is that we're here to support critical infrastructure and national security. And so we have a lot of resources that can be made available, but may not be, right? And that's the biggest misconception I think people have about CISA: that we will be there to save you. And while we'd love to be there to save everybody, we can't save everyone, right?
Really, our role is to surge toward the most nationally critical incidents or issues that are occurring, right? So CISA is definitely here to help support, to be the backstop on a really, really bad day. But we aren't going to be your day-to-day cybersecurity team. That's why you have your own cybersecurity team. That's why you should have your own resources. That's why you should work with your industry partners that can provide those services. We are here to provide that kind of national-level support. And that's something that I think is often misconstrued about where CISA's role is in this entire exercise: we're really there to mostly focus on those types of nation-state threats. And we're often working hand-in-hand with our partners at the FBI; Joel and I know each other well. The FBI is pursuing the adversary, and they're there for the cost-imposition portion, which is an important part of how we deal with cybersecurity issues broadly. And then we're there to help with what we call asset response, helping the asset get better, or get on the right path, in the near term. So, you know, we are here to support, and we do a lot of broad messaging and such. And when you do have a particularly bad day, we are here for you. We shouldn't necessarily be your first call, but we are here to help. And we especially love when we get tip information from the private sector that can enrich what we understand about what the adversary is doing out there in cyberspace. So I just want to set that record straight: we're a little bit different of a federal agency. There is one thing that I want to talk about that Jacob mentioned during his piece, which I thought was important, and that's visibility, right?
So, and Jacob mentioned this, so I'm just going to double-tap it: it would be best to stop the bad guys from getting into your environment. That would be what we would all like, right? But given the diversity of vulnerability, the diversity of risk, and the complexity of some of these systems, that is not going to be successful 100% of the time. There's just no way it's ever going to be 100% effective. And really, detection is the next piece, right? And this is an area where we also see a lot of asset owner-operators that don't have the detection capabilities they need in their corporate networks and their control systems networks, and can't tie the two things together, right? Because from my experience, and I've been doing incident response in control systems for, I don't know, 12, 13 years now, most of the intrusions that we see come in through the corporate network, right? And so oftentimes you're going to have some reflections that you're going to be able to see in the initial intrusion phase that come through that vector before they actually get to the control systems, right? Because at the control systems level, we really want to keep them out of there, right? I'd rather detect them when they're a little bit further away, at the level 4 assets; I think that would be in everyone's best interest, right? And so making sure you have good detection engineering, detection technologies, and strategies in place is really, really critical. And it's also really critical to ensure that you have behavioral and analytic techniques involved there, right? I would love to go through one meeting in my life where no one asks about indicators, because indicators, while they can be helpful and they have a role and a place, are usually the last thing I'm thinking about, right? I'm much more interested in looking at adversarial tactics, right?
You know: they move toward certain types of assets in the control environment; they try and go after the administrative user accounts; or certain actors like to target executive assistants, because they tend to have a lot of access to mailboxes and files and systems, in order to facilitate things, right? And so those types of tactics and those types of behaviors that the adversary has are things that we can fingerprint on, right? And I think we got a really great example a couple of years ago (man, it feels like a couple of years ago; it was only like 12, 13 months) in the supply chain compromise that was SolarWinds. And I don't call it just SolarWinds, because it wasn't just SolarWinds, number one. And number two, the adversary was using multiple intrusion techniques, not just supply chain compromise. So it was bigger than that. But I think that case was really instructive on behavioral analytic techniques, right? Because at the end of the day, for identifying that you'd had a compromise in that particular case, yes, there were some indicators, but the adversary very, very quickly changed them. They had one kind of callback domain, which was helpful, but in many of the responses that we did, the adversary quickly went to bespoke, per-victim infrastructure for command and control. And so IP addresses and domains and observables from there were useless to share with other organizations. It just wasn't going to help, right? And we've spent a lot of time coalescing around information sharing, by which we often mean indicator sharing, and that is a very fragile approach. Our adversaries have picked up on the fact that we've done that, and they've adapted; indicators are a lot more severable than they used to be.
And so by building your detection engineering on more of a tactics-based concept, you're going to be much more successful at catching the things you don't already know to look for. I'm not saying don't look for indicators; you should totally look for indicators, because you want to catch that low-hanging fruit. But this is an industry that, if somebody wants to hold it at risk, is definitely a little bit more weighty than holding at risk a small water utility somewhere, right? Someone hacking into nuclear reactors or nuclear fuel cycle facilities, that's a horse of a different color from an adversary perspective. And you're more likely to come up against the A game than that small water utility would be, as an example, right? And so focusing on behaviors and tactics is going to be a much more useful way to catch that zero-day type exploitation activity, right? And looking at user behavior patterns. Why is Bob logging in from two places at once? Hey, let's have the help desk call Bob and find out if he's really in two places at the same time. It might be true; it happens to me all the time. But anyway, it's really about behaviors. And I will mention one thing that I know some of you work with us on. One of the things that CISA is trying to do is get that national perspective on behaviors, right? We're trying to better understand what our adversaries' capabilities, plans and intents are and how well positioned they are to hold US assets nationally at risk. And one of the ways that we are doing that is getting some additional visibility, in two different ways. The first is something that we call CyberSentry. It's a platform that's turning into a full program. We've been piloting it for a little while with some nuclear asset owner-operators. I won't name who they are. You can ask your friends and see if they'll admit to it.
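The "Bob logging in from two places at once" behavior check described here could be sketched very roughly as follows. This is an illustrative sketch only; the event fields, the five-minute window, and the function name are all assumptions, not any vendor's actual detection logic.

```python
from datetime import datetime, timedelta

def find_impossible_logins(events, window=timedelta(minutes=5)):
    """Flag accounts authenticating from two different source networks
    within a window too short for one person to plausibly do both.
    events: list of dicts with 'user', 'src_net', and 'time' (datetime).
    """
    alerts = []
    by_user = {}
    for ev in sorted(events, key=lambda e: e["time"]):
        for prev in by_user.get(ev["user"], []):
            # Same account, different network, too close together in time.
            if (ev["time"] - prev["time"] <= window
                    and ev["src_net"] != prev["src_net"]):
                alerts.append((ev["user"], prev["src_net"], ev["src_net"]))
        by_user.setdefault(ev["user"], []).append(ev)
    return alerts
```

In practice this kind of rule would feed a help-desk verification workflow rather than an automatic block, since (as the panelist jokes) simultaneous logins are sometimes legitimate.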
But basically what we've done is put a rather extensive sensing suite that the government provided into the control system environment and into the corporate network environment, on the inside of the firewall, to be able to get telemetry from both those places and look at it. And we're using CyberSentry to cross-correlate across multiple sectors in very, very deep ways, right? It's not a program that's for everybody, but it is something that we are now talking about more and expanding to provide an overwatch capability into your environments. Again, it's not something that's for every asset owner-operator, but going back to my first point, it is absolutely in addition to what you're already doing for your security. It's not supposed to replace your security. It's an overwatch, right? But it allows us at CISA to use some more bespoke analytic techniques, and those sorts of things that we can't necessarily do at scale in a sharing way, to look for those types of things, right? And it's been rather successful in catching some interesting adversary activity that I could go into in a very different context. The second thing that we're doing from a visibility perspective is partnering. So lately CISA has been working with a variety of managed security providers or vendors. Jacob is from Dragos, and we announced recently that they were the first one that we've actually gotten through the hoop on this, but we are working with those managed security providers to get access to query some of their telemetry and some of the data that they have, right? Because at CISA, cybersecurity is kind of the thing we do, and that gives us lots of deep insight, but we can't put it everywhere, right?
And so one of the ways that we're trying to scale is working with managed security providers like Dragos on the OT side, but then also others that I can't mention because we haven't actually finished the process yet. It's not just them; we're working with several other vendors along similar lines on that telemetry exchange, so that we at CISA can get a better sense for what's actually happening across multiple critical infrastructures domestically. So that visibility piece is super important for you, and it's super important for us, just kind of different visibility. The last thing that I wanna touch on before I hand the mic back and we get into the Q&A, I just gotta address it, right? May you live in interesting times, right? What's going on over in Ukraine, as someone who's been there on a number of occasions, it's heartbreaking, and as a person, it's a little tough to watch in some instances. But it is important for this community to understand that just because it's a third or so of the way around the world doesn't mean it can't come home, right? One thing that CISA has been really trying to espouse, and I know our colleagues at the FBI have been doing this as well, is that it is in times of heightened tensions that adversaries start to look for other tools to influence the geopolitical situation, right? And so if you have an adversary who wants to keep America out of a war, they're gonna start to look for tools to do that, and some of those tools could be things like going after critical infrastructure in the homeland. That's a rational thought for some threat actors, right? And so this is a great time to think about your security posture as it relates to potential nation-state intrusions and whether or not you are comfortable with the level of risk you are managing in your environment. Now, it is not a time to panic. I can't stress that enough.
The war that's happening over in Eastern Europe does not suddenly make more hours in the day. If you weren't doing good logging yesterday, starting a logging program is a great idea now, but you're not gonna implement it in 24 hours. Don't burn out. But it's a good time to reflect: hey, if someone wanted to use bespoke capabilities against my environment, would I notice? How would I notice? Have I tested these hypotheses? Have I actually done a hunt? Are they already here, right? These types of geopolitical tensions are a great reminder that we do live in a broader world, and sometimes that broader world wishes to do us harm, and it's kind of our job to blunt that harm and, if we can't stop it, at least minimize its impacts and detect it early, right? And so the thought I'll leave you with is just to take 30 seconds to think about your risk posture: if the best possible adversary that you can conceive of was coming at you, would you know, right? That's a really great question to ask. Anyway, with that, I'll let you ask questions. So I'll kick it back over to you, because I think you're gonna moderate. Thanks, Mark. So at this point, we have a number of questions we're gonna ask the panel. Some of them are specific; some are for the whole panel. The first one is for the whole panel, and I'll ask Bob to kick it off. The question is, what do you see as near-term or over-the-horizon threats to industrial control systems or to the nuclear sector, and how might they differ from what we've seen in the past? Go ahead, Bob. So I would say initially, supply chain is obviously one of the top ones that we have been focusing on. There are so many aspects to supply chain that it is really difficult to nail down from a cybersecurity perspective.
But what I really wanna shift to, and I feel this pretty strongly, is that with the silver tsunami that's happening in the nuclear industry, I really think the people are one of the biggest areas that we need to focus on. And again, not to beat my own drum here, but cyber-informed engineering and other cyber-by-design processes, and the different education that's available to engineering types and others on cybersecurity, is gonna be imperative in the future. And so I would give those two equal weight. Okay, any other thoughts? I think one thing the FBI has seen in the short term is just the increasing use of ransomware. This isn't new to anybody, but I think what is concerning to us when we look at some of these threat groups is that there are specific ransomware variants that are not targeted solely at ICS systems but have aspects that can affect control networks. And so as these actors become more aggressive and go after different targets, that's one area I would be concerned about. Not necessarily even because of an intentional attack on your OT network, but as something that spilled over. We've seen that quite a bit in the water sector: a number of incidents that started on the IT side and just spilled over into OT due to poor segmentation, resulting in facilities having to go to manual operations. So looking at that as a threat is really important, especially when a lot of these groups are not affiliated with a particular nation-state, and so they're able to operate in a space where it's much harder for the US government to impose risk and consequences on them. Thanks, Joel. Other thoughts? Yeah, so I wanna second the silver tsunami comment. I think a number of sectors are having that same crisis of expertise.
And that is something that, if I was listing the risks to most critical infrastructure sectors, would probably be at the top or near the top of every single sector's list. As a younger but feeling-old kind of guy, I guess I'm looking forward to being part of that problem. But yeah, it's definitely a big thing. It's been my observation that you've got a lot of operations engineers in their 50s and 60s, then a huge gap, and then some folks in maybe the 20s and 30s range. The nuclear sector may be a little bit of an outlier there, but there's just not a lot in the middle, which is I think part of the challenge. The thing that I see, and it's not a risk right now, and I know the NRC is working on this, is that the nuclear sector used to be one of the sectors that I quite frankly stressed less about than some of the other ones. And part of the reason was that, quite frankly, while I definitely care deeply when business networks and that kind of stuff get compromised, at the end of the day, most nuclear power facilities operate on electromechanical controls and things that ultimately don't have a lot of cyber risk. And so for the part of the infrastructure that, from my chair at CISA, I've got to make sure still works, I didn't have as much risk as I did in other sectors with more pervasive digital controls. But as digital controls become more prevalent in the nuclear sector, that is going to change that equation. And it's not that I don't have confidence that it will be done well. That's not what I'm trying to say at all. But during any type of transformational time, bringing in new risk vectors is something that has to be done very thoughtfully and very carefully. And this will effectively bring in some new risk vectors that were not really widely present in nuclear.
It used to be, people would ask me, like, can you hack nuclear power? And no, it's really not a thing, right? With a wrench? Sure. But now there are some things. And so I think that while many asset owner-operators have robust cybersecurity programs, there's just going to be more risk on the table than there was before as these transitions happen. And that's going to be something that the industry as a whole is really going to have to work with. Thanks, Mark. Any other thoughts on the first question, or can we go to the second one? I have a thought on the first question. And it's a bit of an echo of the two previous comments: it's really two different problems that together make the problem worse, right? So you've got the silver tsunami, and then you've got the rapid digitalization. And when those two things are happening at the same time and something goes wrong, the expertise isn't there anymore to go back to the manual controls and things like that. That would be a worry. And then, in general, a little bit on the horizon is just attacking nuclear for its own sake: not necessarily thinking you're actually going to get in or get through, but attacking the reputation. Combined with the silver tsunami, people will lose faith: do we really have this under control? And then we have to start thinking about where we're going to get those percentages of our power generation, right? That's one part of it. And then combined with that is this sort of recent disregard for safety, right? At one point you would think it would be out of the realm of possibility that somebody would specifically target a safety system to hurt people. And we saw this with TRISIS in 2017 in the Middle East. Now, that wasn't nuclear, but it was an oil and gas facility, right?
We saw this disregard for safety there, and in similar recent events we're seeing similar situations, right? So that, combined with everything else, is certainly a threat I'm thinking about on the horizon. Thanks, Jacob. So our next question is for Jacob to start, and then we'll see if anyone else has any thoughts. Jacob, you indicated in the 2021 key findings from the front line that 77% had poor segmentation. Can you speak to the lessons learned or suggested improvements for segmentation? Absolutely. So on segmentation as a whole, this specific metric is mostly talking about facilities that don't have hard breaks like nuclear has. And so for those organizations, I would talk about implementing something similar to a hard break. I'm not saying everybody needs a hard break, but validate that segmentation. I'm a big fan of doing it through software-defined networks. I think they're easy to maintain. I think they're wonderful. I think Bob will probably like this answer, and this is specific to nuclear: I love the IAEA approach of zones and levels, zones within the levels. So one thing you see often in nuclear is we have the four network levels, and then oftentimes you get into one of the network levels and you can communicate to anything within that level. There aren't any horizontal controls, if you will, and many places lack access control lists and things like that within the level. And so you have to kind of assume that whole level is compromised if that level gets compromised, right? We'd love to see more horizontal protections in addition to those vertical breaks between the network levels. So I hope that helps. Any other thoughts? That's a really important point, and thanks for bringing it up: there's nothing that exploitation engineers like more than, once they get in, being able to move anywhere they want inside that environment, right?
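The "zones within levels" idea, with horizontal access controls in addition to the vertical breaks between levels, could be expressed as a deny-by-default flow policy. This is a minimal sketch; the zone names and allowed flows are hypothetical, not from any plant's actual defensive architecture.

```python
# Explicitly allowed (source zone, destination zone) pairs within a level.
# Anything not listed is denied, which constrains lateral movement.
ALLOWED_FLOWS = {
    ("hmi_zone", "plc_zone"),        # operators send commands to controllers
    ("historian_zone", "plc_zone"),  # historian polls process data
}

def flow_permitted(src_zone, dst_zone):
    """Deny by default: only explicitly allowed zone-to-zone flows pass."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS
```

Under a policy like this, compromising one zone no longer means the whole level must be assumed compromised, because peer zones only accept the flows the policy names.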
And so lateral movement is definitely something that you need to be able to not only detect, but also create segmentation against. I hate to use this term, but it's kind of zero trust; I always call it real-time authentication and authorization, but that doesn't sound as good. Those types of principles, though, are really valuable in locking down a network and constraining an adversary, so if they get in, they can only go so far. That's a really great point. I would add, back to Jacob's comment about levels and zones, that we need to keep in mind that when we look at security levels, it's all about the functions that we're trying to protect and those control systems that provide those functions. And so I see it as a graded approach, where we really want to make sure that those critical functions are protected at the highest level, and then we work down from there on those functions that may be less important, because we all have limited resources and we can't protect everything at the highest level. So yeah, I really agree that levels and zones are really important as part of that segmentation. Thanks. So our next question was specifically targeted at Joel, so we'll start with Joel and see where it goes from there. Joel, we often read about cyber attacks, but rarely do we hear about hackers being caught or convicted and punished. Are hackers getting arrested? So the short answer is yes and no. When it comes to the cyber criminal side of the house, we've had much better success getting handcuffs on bad guys. The truth of it is that when we're talking about our nation-state adversaries like China and Russia, it's much more difficult to actually arrest somebody and bring them to justice that way. However, one of the pillars within the FBI's Cyber Division is about imposing risk and consequences.
And so in the cases where we can't effect an arrest when we would like to, we try to impose that cost in other ways. So for example, in the last year we released indictments against a number of GRU officers for a spate of different attacks, to include Ukraine 2015 and 2016. And these types of indictments are useful in that these individuals are sanctioned. They can't travel for fear of being potentially extradited if they leave Russia or a similar country. It also eliminates the prospect of getting work in the West and other places. We've seen this with China recently with some of the indictments related to the COVID vaccine research. So the FBI and its partners at the Treasury Department and our international partners have really been working to impose costs that way. Of note, we have on a few occasions gotten lucky and been able to get some extraditions, but it really is difficult nowadays when you're talking about a state actor. Thanks. Okay, our fourth question, this is for everyone, so if you wanna jump in, let me know. Do you see any best practices in other sectors that the nuclear sector might want to leverage in its cyber defense? I guess I'll go first, just to bring up a point I mentioned earlier, and I think everyone on the panel has mentioned so far: we're really in an age of needing a good asset inventory and a good baseline of your system, because you're trying to understand all the different native functionalities that you may use in your networks and how adversaries can use those back against you. I think we've seen, at least in some of our engagements with industry, some folks doing some really innovative work on baselining and understanding network activity, user activity, things of that nature.
So I think that really is one of the keys that we've seen. Okay, other thoughts? I have one as well. It's difficult, because for a long time I always considered nuclear to be a part of the electric sector, but I guess sometimes we're separate and sometimes we're not, right? And the electric sector is doing such a great job with information sharing. The E-ISAC, and things like what you alluded to earlier with our Neighborhood Keeper, where they can send anonymized analytics to government agencies or to others within the program as well. It just makes it more of a team, right? It's not just the small co-op that's battling a nation-state; it's all of us together, right? And that helps us scale. So I would call out information sharing in a technical sense, because nuclear has always done a great job of sharing information with each other in a non-technical way, right? Like the forums, those have been wonderful. But specifically the analytic sharing that we're talking about with Neighborhood Keeper. I would say that, and actually within data centers too, we've been doing a lot of building automation assessments, and they have really robust and centralized logging that's very easy to then go through and do hunts on. So I'm a big fan of that, and I would like to see it more in the nuclear sector. Okay. My point would be, and I think actually nuclear is doing a lot of things right, especially segmentation; that big-block segmentation has been the kind of standard that I point to for other sectors. But one place where I think more focus could go is identity. Identity is the new perimeter. And I think that especially as we move to more as-a-service provided capabilities, things like identity start to become more important than assets.
And I have not seen as much focus in the nuclear sector on identity analytics and identity management as I have in other sectors. So that's one area where I think nuclear could stand to step up a little bit. I would just add, too, on the information sharing part, that maybe a little more threat-type information sharing would be helpful, because that's very, very closed within the nuclear industry. I remember several years ago at the IAEA we were talking about how to increase some of that threat information sharing, whether through a neutral broker for everybody or not, and it just was very difficult; there are too many issues that surround that. So yeah, I just think there should be some way to share threat information better. Okay. Yeah, I do know that there are a lot of big government efforts to look at sharing information, working with industry, working with IT companies. That doesn't necessarily help in the OT environment, but we have to start somewhere. And as that starts to mature, sharing that with the nuclear industry is something that would be very important. It's something that the NRC is keeping an eye on as well. Okay, next question. This was targeted at Bob, but anyone can speak to it after that. It has been asserted that the use of field programmable gate arrays, FPGAs, will lead to increased security postures in nuclear power plants. Can you provide some brief thoughts on that? Yeah. So for sure it increases security, but it's not a hundred percent a silver bullet. We still have to deal with different types of FPGAs. Some of them actually incorporate CPU processing on them, so it really depends what type of FPGA you're using. Our lab has found some vulnerabilities within FPGAs. It may be a little more difficult to exploit those, but a few of them do exist. And then you've also got data streams that you have to worry about that get propagated into the FPGAs.
So there are some areas that need to be looked at, but it definitely does provide an extra level of protection. Okay. Other thoughts? Yeah, I think that, again, FPGA technology is one that can bring a lot of value, but it creates new exploitation pathways as well. And again, it's all about the context in which things are deployed, because before, a set of circuits is only going to do one thing, right? Physics kind of says it's only going to do that thing; you can degrade or destroy it, but you can't make a circuit designed to tell time cook your breakfast. That's not going to work, right? But you can take an FPGA designed to tell time and make it cook your breakfast. I made an FPGA in school that ran a robot that cooked food; it was a project in class, right? And so because they're programmable and reprogrammable, it starts to become a conversation about the management backplane and how programmable they are, just like our PLCs and our RTUs, right? If an adversary can get to the logic, they can change it. They're great because we can reprogram them, so we're going to use them, and we're going to make it possible to reprogram them on the fly to increase efficiency, which means they get put on a network, which means they can then get reprogrammed by anybody who has the appropriate credentials in that environment, which goes back to my comment about identity being the new perimeter. So I actually love FPGAs, don't get me wrong. And I'm a big fan of the KISS concept; falling back to analog actually might be something really useful to do in a lot of contexts. But again, we just gotta make sure that we don't fall into some of the same traps that we've fallen into previously. All right. So our next question, this is for everyone.
What role can non-IT/OT staff take in detecting and responding to cyber incidents? So I can start off, if you don't mind. We see a lot of this because we do a number of incident response tabletop exercises all across the industry, right? And so we see cross-organization help. Sometimes it's just bridging the gap between IT and OT and making sure everybody's in the same room. Sometimes it's creating a RACI matrix and making sure everybody's roles and responsibilities are thought out and known, that we have a defined incident commander, and that we're following the documentation of our incident response plan. And some of it might be just making sure you have an incident response plan. I know all of you do, because it's required in the reg guide. But some industries don't always have that. And so just testing that incident response plan and going through it, you don't necessarily have to be an IT person or an OT person to do that. Ideally, you're involving others; you're involving the entire organization. And I think Bob's got a follow-up on this as well, so I'll pass it to Bob. Yeah, I would just say, you know, who uses cell phones out there? Like, everybody. I'm seeing all the hands going up right now, right? Do you use a computer? Of course we use computers, right? So I think education is huge, and it's for everyone. It's not just for the IT/OT folks or technical people; it's even your administrators who do a lot of that pre-processing for a lot of us within our email systems and all. So I really would push that education on cybersecurity is huge for every organization. Okay. I'd like to add, too, that I think having a really fulsome discussion with your executive management is critical, whether that's your C-suite or your board or folks that are in the position to, one, give you resources, but two, also be the decision makers when you're in an incident.
We recently worked with a private industry partner on a tabletop where their board of directors worked through a ransomware incident: who are we gonna call? What are the procedures? What does that look like exactly? And I think giving them a little bit of education about what that looks like, from all the way down at the line level up to that executive level, is absolutely paramount. Okay. So the next question circles back to something that's been mentioned a couple of times: supply chain, and strategies for dealing with the increasing role of the supply chain in cyber defense. Given that there's often a sense that it's beyond the operator's control, what recommendations would you have for an operator trying to shore up their supply chain cyber defense? Go ahead, Bob. Oh, maybe I can start first, because I work a lot with the IAEA. They just put out a technical document called Computer Security Approaches to Reduce Cyber Risks in the Nuclear Supply Chain. And so I would highly recommend that people go find that and download it and see if it might be a really useful tool for them. I would echo that; definitely check that out. One thing I wanted to say: something we saw a lot as we were responding to SolarWinds last year, on a number of those active cases, is that oftentimes the asset owner couldn't answer, we were unable to answer, the hypothesis of whether they were compromised, because they didn't have the data. They didn't have the logs. You needed three months' worth of data to identify whether it had been exploited, and they only had seven days' worth of rolling logs, right? And so understand that you need that information, especially with the timeline for supply chain attacks being much longer, sometimes nine months, 12 months out.
And so you need longer windows of logs to be able to go hunt and see those things through. So one thing we're always mentioning is building out a collection management framework. That's basically a table, a dossier, of the devices on your network: the data that they have, how important that data is, what questions you might try to answer with it, and then the follow-up, right? Like, I check this firewall to get this answer, and then my follow-up is the registry of this device over here. Just having all of that ahead of time really can speed up the incident response, so hopefully you can shrink that window and be able to answer your hypotheses. Thanks. Other thoughts on supply chain? All right, next question. We'll load it up. Okay, so this one is, I think, for everybody. I'm gonna take a first crack at it and then I'll ask Mark to jump in. The question is, is it possible for the NRC to get private industry notifications for incidents that may involve nuclear reactors? And if so, how would we go about getting on that list? Well, first of all, the nuclear power industry has a regulatory requirement to report any cyber attack on their safety, security, or emergency preparedness systems to the NRC, and even possible cyber attacks. If the NRC received one of those reports, we would turn it around as quickly as we could and inform industry. So that's a service that we will provide. We also rely on the sector risk management agency for the nuclear sector, which is part of DHS CISA, to share the alerts that come out of the cybersecurity part of CISA. And if there's any entity in the nuclear industry that is not part of that communication stream from the sector risk management agency, we strongly recommend that you get on that list. And if you don't know how to do that, please reach out to us and we'll help you. Mark, any thoughts? Yeah. So on the second part, absolutely.
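A collection management framework like the one described, a table of data sources, the questions they can answer, and their retention, can also be used proactively to find visibility gaps before an incident. This sketch is illustrative only; the source names, fields, and retention figures are assumptions, not any real plant's inventory.

```python
# Each entry records a data source, what hypotheses it can answer,
# and how many days of history it keeps.
CMF = [
    {"source": "perimeter_firewall",
     "answers": ["inbound access attempts"], "retention_days": 90},
    {"source": "plc_event_log",
     "answers": ["controller logic changes"], "retention_days": 7},
]

def coverage_gaps(cmf, needed_days):
    """Return sources whose retention is too short to answer a hypothesis
    needing `needed_days` of lookback (e.g. a slow supply chain intrusion
    that may require months of history)."""
    return [e["source"] for e in cmf if e["retention_days"] < needed_days]
```

Running a check like this ahead of time surfaces exactly the problem the panelist describes: seven days of rolling logs cannot answer a question that needs three months of history.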
We put out a variety of alerts, not always public. Oftentimes we have some security controls in place around them, so we use the TLP, the Traffic Light Protocol, standard for articulating that. It might be TLP:GREEN or TLP:AMBER, if you know what that is; just go to us-cert.gov/tlp and there's an explanation. But when it's more sensitive, we will put it out on our mailing lists. You can reach out to CISA at Central@cisa.gov to get signed up for those alerts and get on the right mailing lists and those sorts of things. Or you can reach out to your NRC point of contact, or if you have a CISA point of contact, because we do run the SRMA for nuclear, they can also do that for you. But yeah, there are definitely lists to get on. I will say, broadly speaking, we strive to put as much out as we can as TLP:WHITE, or public, because we realize that our lists are never gonna hit all the right people that need the information. And so we strive to hit the public surface as much as humanly possible. But in some instances, due to sensitivities, it's just not possible. Anyone in this industry can be signed up for the TLP:AMBER stuff, though; that's not a problem. Okay, any other thoughts? I know this is sort of specific, but anyone else? Okay. So the next question came in through the system, and we rephrased it a little bit. We may have already talked to this to some extent, but what are your thoughts on protecting from zero-day attacks? I mean, it depends on the attack, right? But in general, this goes back to something Mark was talking about before, and that's not putting all your eggs in the IOC basket, right? So focus on more resilient threat detections, focus on things that are hard for the adversary to change. I think it was David Bianco who created this pyramid of pain, right? And so you've got indicators, you've got domains that they need to change.
So whether it's changing the hashes or the files, the domains they're using, the infrastructure, eventually you get to a point where it's very difficult for them. They're always gonna want to do initial access, grab credentials, pivot off that device to another device, and call home from that device. That's not changing. That might change once every two years, as opposed to a month or a week; some of these indicators are really only good for a very short amount of time. So I would say focusing on those resilient behaviors is gonna help you with the zero day attack, as well as making sure that you don't have a prevention-only mentality, and for a long time I felt nuclear did, where you hide behind the data diode and you don't wanna do anything else. The data diode is not a silver bullet, right? At some point in time you have to make sure that you have other controls. It's not just prevention; you're also doing the recovery and response and detection pieces, sort of like the NIST framework, where there's more than just protect. Make sure that you can facilitate all of those and not put everything in the prevention basket. Understand that prevention is ideal but not guaranteed, and you will need to be able to detect and respond. I'll pass it to Bob; I think he's gonna stand up. Yeah, I would also come back to the people answer. If you don't have people looking at the logs, looking at your alerts, and looking at the data traffic, things can go by that you may not see, depending on how good the zero day is. So again, it comes back to the people doing the jobs that they need to do. There are a lot of tools out there, obviously, to help sift through all that traffic. But if you have people who are trained and educated that are looking for some of those suspicious patterns, I think that's just as important as all the technical stuff. And you need visibility to know what normal looks like.
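The pyramid-of-pain idea discussed above, favoring detections on behaviors the adversary can't easily change over brittle indicators, could be sketched as a toy contrast like the following. The event fields, hash values, and rule logic here are invented for illustration and aren't from any panelist's tooling:

```python
# Toy contrast: a brittle hash IOC match versus a behavioral detection.
# All field names and values below are hypothetical.

KNOWN_BAD_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}  # trivially changed by recompiling

def ioc_match(event: dict) -> bool:
    """Bottom of the pyramid: breaks as soon as the attacker rebuilds the file."""
    return event.get("sha") in KNOWN_BAD_HASHES

def behavior_match(event: dict) -> bool:
    """Higher up the pyramid: flags credential dumping followed by an outbound
    'call home' from the same host, regardless of what binary was used."""
    return (event.get("action") == "lsass_memory_read"
            and bool(event.get("outbound_new_destination", False)))

# Attacker recompiled the tool, so the hash is new -- but the behavior persists.
event = {"sha": "ffffffff", "action": "lsass_memory_read",
         "outbound_new_destination": True}
print(ioc_match(event), behavior_match(event))
```

The point of the sketch is only that the behavioral rule keys on something the adversary changes rarely (their tradecraft), while the hash rule keys on something they can change in minutes.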
So when abnormal happens, you can tell: is this a process anomaly or is this a cyber event? You know, hey, this is weird, my PI Historian is sending commands directly to a PLC. That's strange and that probably shouldn't happen, right? Knowing that, and then keying in on that, is definitely key. Yeah, absolutely. I think there's a really important point there. "That's weird" is probably the scariest thing you can hear from one of your analysts. Okay, why is it doing that? Everyone always thinks that when cyber attacks happen, klaxons go off and those types of things, right? And really it's some analyst going, that doesn't make sense, I don't understand that. At least, that's what's broken open some of the most consequential cases that I've worked. But again, I just want to drive this point home: you can only see weird if you know what normal looks like. And you can only know what normal looks like if you have good visibility, right? Jacob mentioned having a collection management framework, understanding where your visibility is and where it's not. Those are things you need to pre-position, and that's stuff you can do now, right? Can you defend yourself from a zero day? Maybe, maybe not. But what you can control is whether you position yourself for success, right? So making sure that you have logging, that you have a SIEM, that you have well-trained analysts: these are all the prerequisites to repelling unique or bespoke attacks. And I want to give a quick shout out to CISA. There's a really great resource that CISA publishes every year. It looks at the top 10 vulnerabilities that we see exploited. You would think that advanced actors are using some sort of exquisite tool, when oftentimes they're using the most readily available exploit. So I would encourage everyone to go check that out.
There are also a number of resources tied to each of those exploits in terms of mitigation. Yeah, thanks Joel for the shout out. And I'll just drop a little extra on that one. So we recently started putting out this thing called the KEV, the Known Exploited Vulnerabilities catalog, and for those in the federal space, they're required to fix those vulnerabilities first. We put out, I think, 91 vulnerabilities on the KEV at the tail end of last week, and some of them went back to, I think, 2004, because they were being actively exploited, like, now. So I can't stress enough the point Joel brought up, which was a great, perfect point: you think nation-state and high-order actors are only using the new hotness. You don't get extra points for the cool hack. They still have bosses and spreadsheets and budgets too. So if a 2004 vulnerability will make it work, that's what they're gonna use, and that's what we're still seeing in some environments. I would just echo that one of the number one credential dumping tools that we still see our activity groups use all the time, I brought it up earlier today, is Mimikatz. Mimikatz was released in 2007, still works, still super valuable. That's not changing, right? Okay, good discussion. Next question. This is for everyone. What are your thoughts on risk informing the selection of digital assets that need to be protected, or subsequently the level of protection for the assets? So maybe I'll start on that. We have our design basis documents. We have our tech specs. We have our safety systems and safety-related systems. Of course, those would all be things that you would want to look at first, and then you work yourself down to the systems that may influence some of those safety systems, and work yourself backwards. But I think there's a lot of documentation that already exists out there on trying to identify what those most critical functions are.
Jacob? Yeah, just kind of echoing what you said, right? You can't protect everything the same. It seems unreasonable that I would protect a chart recorder at the same level I would protect a safety system. That doesn't make sense, right? So definitely make sure that we're applying controls as appropriate. We have limited resources and things like that. And as you said, there are a number of publications out there on how to go about identifying that. But definitely, a risk-informed approach is the way to go. Yeah, I completely agree. You cannot protect all the things. It's just not a doable thing, so you have to prioritize. But remember to prioritize like an attacker, right? One mistake I often see people make when trying to make risk-informed prioritization decisions is they're only thinking about what's important to the process, which is important. But you've also got to think about what the attacker's kill chain looks like and what type of techniques, again, I'm going back to TTPs, the attacker's tactics, techniques, and procedures, the adversaries are using to get to those assets, right? So your chart recorder might not, in and of itself, be particularly important. But if it's a pivot point into the environment, where compromising it can give you access, it's maybe a little more important than it looks, right? This is where I love using the MITRE ATT&CK and MITRE ICS ATT&CK lexicon, overlaying it onto your collection management framework, so you can see what tactics are being used by adversaries broadly and then ask, can we see those tactics? And then that will start to show you what assets in your environment actually are critical to the attacker's kill chain, right?
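The scoring idea sketched above, ranking assets by what matters to the attacker's kill chain as well as what matters to the process, might look something like this in miniature. The asset names and scores are entirely hypothetical, chosen only to echo the chart recorder example:

```python
# Hypothetical asset inventory: each asset gets a 0-3 score for process
# criticality and a 0-3 score for value in an attacker's kill chain.
assets = {
    "safety_plc":     {"process": 3, "attack_path": 1},
    "chart_recorder": {"process": 0, "attack_path": 3},  # pivot point into the environment
    "hmi":            {"process": 2, "attack_path": 2},
}

def priority(asset: dict) -> int:
    # Take the max: an asset matters if EITHER the process view
    # OR the attacker's-path view says it matters.
    return max(asset["process"], asset["attack_path"])

ranked = sorted(assets, key=lambda name: priority(assets[name]), reverse=True)
print(ranked)
```

Note that the chart recorder, which scores zero on process importance alone, ranks near the top once the attacker's-path dimension is included; that's the Venn-diagram point in one line of arithmetic.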
Because it's not just what's important to the process, it's also what's important to the attacker's kill chain, and that's a Venn diagram, not a circle. So you gotta take both things into account. All right, the next question is a little more open-ended. As new licensees, small modular reactors and advanced reactors, look at incorporating technologies such as wireless networks or autonomous control, what recommendations would you have? So I'm assuming there are a lot of engineers on this call, right? I'm just gonna put that out there. Nuclear engineers, but everyone takes basic EE courses at some point, right? I'm a computer engineer by education. So, one over D squared, right? Energy dissipates over distance squared. It dissipates, but it doesn't go away, right? Wireless can be hacked from space, just so we're all super clear. So anything you put over a wireless signal, just understand that there is no way to guarantee that signal is private, that it can't be denied, that it can't be intercepted, or that its integrity can be maintained. Now you can put things on top of it that can maybe increase that security level, but wireless goes on forever, right? One over D squared never actually hits zero. And so that's something to really think about. And in more true DCS-type environments, where you don't have a large physical footprint, fiber doesn't cost all that much. And you can't hack light from space, not inside a little glass tube. I'll just follow up. I did a number of research projects on these ideas while I was at Idaho National Lab. When it comes to autonomous control, that's a big yikes for me, right? I'm with Mark on the space hacking there.
However, when it comes to plant optimization, so, pulling data to help with preventive maintenance and things like that on non-safety systems, on systems that are maybe important for efficiency but not important for safety, I think that's fine. I will say, just roughly, and I haven't looked at it in a couple of years now, but when we were looking at it before, there was one plant where implementing wireless in the plant was something like $50 million, but implementing a DAS LTE overlay was maybe $1 or $2 million. So there are significant cost savings there, right? So there are reasons to look at it, but I would hesitate to do anything on the safety side or the control side. I would be okay with some kind of controlled plant-efficiency application that way. I would just add that as far as autonomous operation, baby steps would be a good thing. What we automate, or do that automated control on, is another thing we wanna look at. And that kind of goes with wireless too. Maybe there are some places where wireless makes sense, but there are certainly other critical functions where maybe it's not such a good idea. So that's it. Okay, other thoughts? All right, I have one more question. Let's see here. It sort of plays off some of the discussion we had earlier, but I'll go for it. Can you talk about the importance of hardening devices in order to facilitate understanding and identifying normal traffic and behavior? Yeah, so understanding normal is probably one of the most important things you can do. It is the least sexy thing you can do in cybersecurity, but it is the most important thing, right? It is part of the set of brush your teeth and eat your vegetables, and I could use a little bit more vegetables. But if you don't know what's supposed to be happening in your environment, you're not gonna be successful.
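One minimal way to make "know what's supposed to be happening" concrete is a baseline of expected conversation tuples, with anything outside the baseline flagged for an analyst. The device names and tuples below are hypothetical, chosen to mirror the historian-to-PLC example from earlier in the discussion:

```python
# Hypothetical baseline of allowed (source, destination, operation) tuples.
# In a real plant this would be learned from a traffic capture over time,
# not typed in by hand.
BASELINE = {
    ("plc_1", "historian", "read"),      # historian reads process data: normal
    ("eng_ws", "plc_1", "program"),      # engineering workstation programs PLC: normal
}

def is_normal(src: str, dst: str, op: str) -> bool:
    """True if this conversation matches the learned baseline."""
    return (src, dst, op) in BASELINE

# A historian sending commands directly to a PLC falls outside the baseline
# and should surface as a "that's weird" alert for an analyst to chase.
print(is_normal("historian", "plc_1", "write"))
```

OT traffic is repetitive enough (the same devices talking the same protocols for years) that even a baseline this crude catches the "historian suddenly writing to a PLC" class of anomaly.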
Another thing: some analysis that NSA, FBI, and DHS, it was ICS-CERT then, put out a couple of years ago, and every time we go back and check it's still true, was the Seven Steps to Effectively Defend Industrial Control Systems. When we did that initial analysis, one of the things that was a little shocking to me was, when we looked at root cause analysis of incidents that happened broadly across control systems environments, application allow listing, formerly known as application whitelisting, was actually the number one control that would have stopped the most things. And while it is by no means a silver bullet, just like multi-factor authentication, a super useful tool but not a silver bullet, it is a hardening step that requires you to understand your baseline and what applications and software need to run on your system, and then make your system only run those things. And then when an adversary comes in and tries to bring in malware or other code, don't get me wrong, it doesn't stop everything, especially around identity, it doesn't help with that, but it stops a vast number of different intrusion capabilities. And if you've ever gotten an email from me, at the bottom of my email it says, how'd you frustrate the adversary today? It's my tagline. It will definitely frustrate your adversary if they can't execute anything new on your system. So hardening and baselining kind of go hand in hand, and are of paramount importance. On the flip side of that, from some of the lessons I've seen in the field: I've seen situations where the way it's configured, the vendor almost does it the opposite way, where the SCADA software, that folder and everything in it, is actually excluded from the AV that's installed on there. And so it makes a great staging point for the pen test malware that we use for things, right?
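The allow-listing control described above, "make your system only run those things," reduces to a very small idea, sketched here with a hash-based allowlist. The binaries are stand-in byte strings and the check is a toy, not how any particular product implements it:

```python
import hashlib

# Hypothetical allowlist built from the baselined system image: the hashes
# of every binary that is supposed to run on this machine, and nothing else.
ALLOWLIST = {hashlib.sha256(b"approved_scada_binary").hexdigest()}

def may_execute(file_bytes: bytes) -> bool:
    """Permit execution only for binaries whose hash is in the baseline."""
    return hashlib.sha256(file_bytes).hexdigest() in ALLOWLIST

print(may_execute(b"approved_scada_binary"))   # baselined software runs
print(may_execute(b"attacker_dropped_tool"))   # anything new is refused
```

This is exactly why the control frustrates an adversary: dropping a new tool on the box buys them nothing if the box refuses to execute anything outside its baseline.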
And even then, when we're looking at group policy hardening and things like that, oftentimes you'll see some privileges that are there that don't need to be there. One of my recent favorites was SeImpersonate. You can use the Juicy Potato exploit, one of my favorites, to immediately get SYSTEM-level access, not just administrator but SYSTEM level. And I've seen it: we would look through different networks, and several of the computers on a network didn't have that, but we found one that did, and then we were able to exploit it and then, obviously, stage malware in the AV-excluded directory there. So I would caution that we do have to get vendors on board with this approach, because a lot of times the asset owners' hands are tied and they're not allowed to configure and harden the systems the way that we would like them to be. Getting the vendors on board to help us with that is a big step. I would just like to add a reminder that with operational technology, it's pretty easy to know what our traffic looks like. It doesn't change very often; I mean, we have these systems running for 20-plus years. So yeah, we can absolutely whitelist or do whatever we need to do, because we understand what the traffic is. It's a lot harder on the IT side. And the other comment I would make on hardening is, for me, that's just a cyber hygiene issue. Everybody should be hardening their systems, no matter whether it's the OT or the IT side. Thanks. Any final thoughts? Okay. I'd like to thank the session participants and all those who logged in to hear the discussion. We hope you found the subject matter interesting and have a greater appreciation for the potential challenges of cybersecurity for industrial control systems. I'd also like to thank the NRC's RIC team and the contractors that made this session possible. It went very smoothly and we really appreciate it.
Finally, I'd like to thank my support team from the cybersecurity branch: Tammy Rivera, Brian Yip, and all the other branch members who participated, came up with the idea for the session, and helped us put it together. Tammy and Brian in particular did a lot of legwork, made a lot of phone calls, and sent me 100 emails making sure I was paying attention to what I needed to do. Thanks everyone. With that, I'll close out our 2022 cyber RIC session and wish you all a fine evening.