So hello everyone, I'm Sarah, and I'm one of the co-organizers of the Top 20 Secure PLC Coding Practices project. Vivek Ponnada, my co-organizer, and I will give you an introduction to our project today: why it's important, why we did it, and how we did it. The Secure PLC Coding Practices project is about writing secure coding practices for PLCs, that is, for programmable logic controllers. It was sparked, or inspired, by Jake Brodsky, who gave a presentation at S4 2020 in Miami. The presentation was on secure PLC programming, obviously: hints, tips, and tricks on how someone who has been working in the field for many years programs a PLC to be more reliable and, in the end, more secure. The video is available on YouTube and has become a classic by now. After this talk, a couple of people came together and said we need to make something out of it, because one of Jake's main statements was: secure PLC programming? Really, no one learns this at school. What he meant was that the engineers who program PLCs do not learn anything about security at school in the first place, but they also don't learn anything about how to securely program a PLC. The simple reason for this is that there is nothing to teach. Unlike normal IT software, where we have plenty of secure coding principles and practices, published by Microsoft, published by universities, that are normal for programmers to learn, for PLCs there are no secure programming practices at all. So after that talk, a couple of people said, okay, let's change that. Dale Peterson, who organizes S4, said, okay, let's create something from the community, from engineers, for engineers, and let's try to define how to securely program a PLC.
And the reason why it was important is pretty obvious: we have all known for at least a decade, maybe even longer, that PLCs are vulnerable. There was Project Basecamp, which showed PLC vulnerabilities a decade ago, and a lot of work has followed. For virtually all PLC vendors, the PLCs are insecure by design, and that hasn't always been considered a bad thing, because some of these PLC "vulnerabilities" are actually PLC features. The PLCs were never built or programmed with security in mind, so it's no surprise that they're vulnerable. And all these advisories, all this news about new vulnerabilities found in PLCs, keep coming, leaving engineers more or less in despair, or at least with nowhere to start. The vulnerabilities keep coming, but we only develop ideas for fixing individual vulnerabilities; we don't really develop anything that makes PLCs more secure in general. So there are many problems. We all know PLCs are somehow the Achilles heel of a plant, one of the most vulnerable parts of a plant, and we don't really have anything in our hands that we can give to engineers and say: if you do it like this, it gets better, or at least it doesn't get worse, and you don't build in all these vulnerabilities. And there's one big argument, one big understanding in the industry, saying: PLCs and security just don't fit. PLCs are simply not made for security; they are insecure by design. Also, many of the standard, well-known secure coding practices we know from other software are not even applicable to PLCs, and we don't know how to do the equivalent for PLCs. There is much truth in that: it really doesn't make sense to simply take the secure programming practices you know from IT and try to implement them on a PLC in the same way. That won't work.
If you ask a fish to climb a tree the same way a monkey does, well, it's just not made for it; it doesn't make much sense for it to climb a tree at all. But that doesn't mean that PLCs are not made for security after all. We just need to rethink how to achieve security for PLCs using their characteristics. Some people, many IT people, would say: well, obviously a PLC is just too dumb for security. And I can understand where that comes from, because in a way a PLC is dumb. You could describe everything a PLC does with: well, you have one job. Or, to be a bit more fair, you have four jobs, but that's it. A PLC has a recurring scan cycle that it executes over and over again. First, it reads inputs from sensors. In this example we have a fluid level sensor; the PLC reads where the fluid level is, maybe water or oil or anything, through an input interface, processes it in its CPU, and stores it in its memory. Second, it loads the program from memory and executes the logic, and that logic looks very different from a normal computer program. Third, it may communicate with some kind of external component, like an HMI or a programming laptop. And fourth, it writes outputs: if the fluid level here is too low, it might command the pump to get more fluid in there. These four steps it repeats endlessly. That's the scan cycle, and it runs always the same way and always in the same time. That's it. So a PLC has certain characteristics. It's a process expert: it communicates directly with the process and knows what the process is about. It doesn't have to do a lot of fancy anomaly analysis, because its one job, its very job and its only job, is to know the process, to keep track of certain process values and keep them on track.
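As an illustration of those four jobs, the scan cycle can be sketched in a few lines of Python. This is a simplified model only, not any vendor's runtime; the level thresholds and names like `execute_logic` are made up for the example:

```python
# Simplified model of a PLC scan cycle: read inputs, execute logic,
# communicate, write outputs, repeated endlessly at a fixed pace.
# All names and thresholds here are illustrative, not from any vendor.

LOW_LEVEL = 20.0   # below this level, fluid must be pumped in
HIGH_LEVEL = 80.0  # above this level, the pump can stop

def execute_logic(level, pump_on):
    """The 'program' the PLC runs each cycle: simple hysteresis control."""
    if level < LOW_LEVEL:
        return True           # level too low: command the pump on
    if level > HIGH_LEVEL:
        return False          # level high enough: command the pump off
    return pump_on            # in between: keep the current state

def scan_cycle(level, pump_on, hmi_log=None):
    """One pass of the scan cycle; a real PLC repeats this forever."""
    sensor_value = level                            # 1. read inputs
    pump_on = execute_logic(sensor_value, pump_on)  # 2. execute the logic
    if hmi_log is not None:                         # 3. communicate (e.g. HMI)
        hmi_log.append((sensor_value, pump_on))
    return pump_on                                  # 4. write outputs
```

A real PLC repeats exactly this loop, within a guaranteed time, for years on end.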
A PLC mostly has limited resources, because it has a very designated task that doesn't change much, and you don't need all the resources in the world for it. But in exchange, it's very important that it's reliable: if you give it an input, it can absolutely guarantee that the output comes within a certain time. That's what we call real-time capable. The pump absolutely relies on the PLC giving it that output. A PLC also needs to be deterministic: when you put the same input in, it always returns the same output. And all the IT people know that this is totally not normal for typical devices and IT networks. So a PLC really is not dumb; it's just different. It has one job, or four jobs, and it's really, really good at doing these jobs. This is the setup that we need to live with and accept, and in which we need to look for strengths that we can use for security benefits. So, what we wondered at the beginning of our project was: what does it mean, after all, to securely program a PLC? We all know security is important, we all know that PLCs are vulnerable, and we want to make them more secure. So what does a securely programmed PLC mean, and what is a secure PLC? These were the two leading questions that we tried to answer with the Top 20 Secure PLC Coding Practices list, or for which we at least tried to give a first basis for discussion. To answer these questions, I'll give you a short introduction to how this list looks, how it's structured, and what our assumptions were, and then we'll give you a deep dive into a couple of practices so you get a better feeling for them. So, the Top 20 list. You can download it for free from the project website, which we'll show at the end. It has the most permissive license, so you can do virtually anything with it; we want it to spread to engineers.
Anyway, if you download it, you first look at two pages, and these two pages contain all 20 practices that we propose. They all have a title, they all have a short description, and they have a number, and that's it. The number, it's important to note, is not a ranking: the practice with number one is not more important in any way than the practice with number 20. If you keep on reading, there are about 40 more pages with details on all the secure coding practices. Here is an example of what the details look like, and the example practice is "Validate inputs based on physical plausibility," which is one of my favorite practices to explain, because it really is a practice that uses the characteristics, the superpowers, that come naturally to a PLC, things a PLC is as naturally good at as a fish is at swimming. That's of course because the PLC, as we said, is a process expert: it's really good at knowing which process values are normal, good, and acceptable. If, for example, you have a gate, and the gate takes a certain time to open physically, it always takes five seconds until it's open, because it physically can't be done more quickly. If the PLC then gets a value from a sensor which says the gate is closed, but it's only been half a second since the gate started to close, then the PLC knows something is wrong. At least it can know; you can ask the PLC about that. So that is something you can use to validate PLC inputs, and it's something very unusual, because it's not a way we could validate an input for normal software: normal software usually doesn't have physical inputs, but the PLC does. So there are cases where you really don't need a fancy security monitoring tool, and you don't need AI-based anomaly detection or anything; you just need a PLC whose job it is to know the process.
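The gate example can be sketched as a small plausibility check. This is a sketch only; the five-second travel time and the function names are assumptions taken from the example in the talk:

```python
# Plausibility check for a gate position input (illustrative values):
# the gate physically needs five seconds to close, so a 'closed' signal
# arriving earlier cannot be real.

GATE_TRAVEL_TIME_S = 5.0

def gate_input_plausible(reports_closed, seconds_since_close_command):
    """False means the input is physically impossible: a sensor fault
    or a manipulated value, and the PLC should raise an alert."""
    if reports_closed and seconds_since_close_command < GATE_TRAVEL_TIME_S:
        return False
    return True
```

The check needs no external tooling at all; it only uses what the PLC already knows about the physics of its own process.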
For this practice, we've got the title and the short description that basically summarizes what I just said, and then for each practice we have a security objective, which summarizes quickly, as a tag if you will, the main security objective that the practice fulfills. That can be integrity, and a lot of the time it is integrity: of PLC logic, of I/O values, of PLC variables. In this case, it's integrity of I/O values. But it can also be something like hardening, resilience, or monitoring; we'll look at that in a minute. Then we also define a target group, because not all practices can be implemented by the same target group. We use the ISA/IEC 62443 roles; you may know them: product suppliers, integration and maintenance service providers, and asset owners. The truth is that many practices, always depending on who programs the PLC of course, cannot really be implemented by asset owners, because a lot of the programming, especially low-level programming, happens on the product supplier and integration service provider side. Then we've got a section called guidance, into which really everything goes that helps implement the practice: screenshots, cooking recipes for different makes or models of PLCs, background information that you need to know in order to implement the practice. We've got a section called examples, which can contain implementation scenarios, scenario examples for certain industries or products, or examples of what could happen if a practice is not implemented, which is important for understanding the security benefit. Talking of the security benefit: that's another very important section we have for each of our PLC programming practices, the "why" section. Why is this programming practice important? Of course, these are secure PLC programming practices.
So all of these practices are beneficial for security, obviously. But then, and this is a very important point to make, so let it sink in: a lot of these practices also have benefits for reliability, and some also have benefits for maintenance, for example. Take this example here: validating inputs based on physical plausibility is of course good for security, because you can detect whether an attacker has manipulated a value or tampered with a sensor, and you can raise an alert. But you can also use the validation for reliability purposes: if something doesn't work, if something may be broken mechanically, the same feature can give you a hint that something like that is wrong. So the statement that we often read, that there is always a trade-off between security and comfort, between security and operability, between security and usability and operations, is actually not always true, at least not for these secure PLC coding practices. For the practices we have here, there is not only a security benefit but also a benefit that falls into the realm of operations. And that's a very important point to make, because in OT, security that keeps the processes and the PLC characteristics in mind is often about reducing complexity, about better monitoring, about better structuring of code, about better documentation, and all of these are things that mostly do not stand in the way of usability, like long passwords would; they really help operations. Lastly, we also have references to existing standards and frameworks, and those serve two purposes.
First, we reference attacks, weaknesses, or vulnerabilities that a certain practice could prevent; in the cases shown here, these are MITRE ATT&CK techniques or MITRE CWEs. Second, we reference security requirements that the practice under consideration could help fulfill; in these cases, ISA/IEC 62443 requirements. And that could be extended. Now a quick look at the security objectives: what can we expect from securely programmed PLCs? Looking at the objectives, we see a pretty clear structure: most of our 20 secure coding practices have integrity benefits. That's actually not a surprise, because what a PLC does is make sure that process values and I/O values stay on track, and the most important threat scenario you could imagine for a PLC is, of course, tampering with its integrity. The other big part is monitoring, because, as we said before, a big issue with PLCs, and also a big benefit of PLCs, is that they are process experts. They know the process so much better than all your fancy security tools could learn it in a long time. So for some monitoring purposes, PLCs can actually do that kind of work. And they can also help with resilience and hardening; we have a few practices for these. One very important point to make: securely programming a PLC is most likely not the first step you will take in a security program, and it's not the most important one. And no, these practices would not have prevented Stuxnet, at least not alone. But the whole goal of all this is simply to turn the PLC, which has always been one of the most vulnerable components in a plant, the Achilles heel of a plant, into at least one more layer of security: a last line of defense that stands like a bodyguard in front of your most important processes in a plant.
The last two questions that we often hear when we talk about secure PLC coding practices: first, why is a certain practice not on the list? The most common answer is probably: because it's not in scope. This was a community project; we had about 900 people registered on the collaboration platform who worked on this and submitted potential PLC coding practices, and we had to limit the scope at some point. We had to make sure that our top 20 secure PLC coding practices actually stayed coding practices. So we have the scope that's outlined here: everything that includes changes to the PLC itself is in scope. Everything that involves architecture, HMI programming, network or I/O device requirements, or documentation was not in scope; that's a potential second list that we could be doing, "Top 20 Secure PLC Environment Practices" as a working title. The second question that we get a lot of the time is: why is this practice on the list at all, it's so basic? There are actually three cases in which someone would say it's too basic. The first: a security person says the practice is really too basic; all security people know about it. Well, a simple answer for why it's on the list anyway is that security people don't program PLCs. It doesn't help if security people know about it; PLC programmers need to know about it, and they are the target group of our list. Also, security people may know about the practice, but do they know about its implementation on a PLC? The requirement may be basic, but not its implementation. And then we also wanted to do something against the myth that says the security practices may be simple, but PLCs are simply too dumb to implement them. That is true for some practices, but it's not true for others, and we wanted to put the practices that can be implemented on a PLC explicitly on the list.
So that's the first case. The second: a PLC programmer says, well, that's so basic, I've been doing that for years; every PLC programmer knows about it. That may be true. They may know that it's a good practice, but they may not know that it in fact also has a security benefit, and that they should also do it for security reasons. That becomes important when you want to talk about what a secure PLC is, of course, but it also becomes important because it adds one more reason why you should follow a certain practice. If you decide not to do it because it's "just" for efficiency reasons, and efficiency is not that important right now, it matters that PLC programmers know that the practice they may be leaving out also has security benefits, and, the other way around, that they have a security problem if they don't implement it. And then, lastly, there are practices where people will just say: it's too basic, everyone has been doing this for years anyway. But if everyone has been doing a certain practice to securely program PLCs for years, I hope we can agree that it absolutely belongs in a top 20 secure PLC coding list; that's why it's on there. And also, there are always rookies, always newbies, always new people who do not fall under "everyone" and who do not know it yet. And I want to remind you of our two leading questions that I mentioned in the beginning: we wanted to put together a list that defines what it means to securely program a PLC, and a list that answers the question of what a secure PLC is. So being basic really is one of our list's purposes. So, now that we've hopefully answered most of the questions that could arise when looking at the list, and you have an idea of what it's all about, I hope you're ready.
I hope you're ready for the deep dive into a couple of practices, which my co-organizer will introduce to you now. The stage is yours.

Hey, ICS Village folks at DEF CON, hope you're enjoying the conference so far. I'm here with Sarah Fluchs to talk about secure coding practices for PLCs. A little bit about me: 23-plus years of experience in industrial control systems, and I've been doing ICS/OT cybersecurity for the past eight-plus years. My background is in communications engineering, and I have an MBA in finance and the GICSP certification from SANS. I started off as a controls and instrumentation technician, calibrating valves and transmitters and configuring PLCs, and became a field engineer. I worked on managed services for turbine control systems around the world for utility and oil and gas customers. Then I went back to business school, and after that I became a sales and business development manager. Currently I serve as a service manager for my company's Canada fleet, covering all the utility and oil and gas customers within Canada. My contact information is below, LinkedIn and Twitter.

All right, now that Sarah has given you the background, let's take a deep dive into a few practices to learn more. Let's pick something straightforward for our first one. Practice 13 says: disable unneeded and unused communication ports and protocols. The controllers that we are used to generally support multiple communication protocols, because they are used in various applications. Now, most of these protocols are unfortunately enabled by default, even if you're not using them, Telnet or FTP for example. You have to actively disable them if you're not using them.
Now, the best-practice recommendation is to develop a data flow diagram that clearly shows all the required communications: which ports are required to be open, what the logical network segmentation looks like, and which protocols are used, so that it's clear why certain ports and protocols are in use and why the others aren't. Every additional protocol that's enabled adds to the PLC's attack surface, and attackers can't use a port or a protocol that is disabled. You can typically alert on something being enabled, or perhaps a download to the controller is needed before that particular protocol can be enabled, and that gives you ways to find out when something is going wrong. In addition, perhaps you have a network sensor or a firewall that can detect if a particular protocol is in use, and if you have previously disabled that protocol, you can get an alert that way. But even natively, there are ways to find out if a certain disabled protocol or port has been enabled. By following this practice of disabling unused ports and protocols, you also reduce the potential for malformed traffic to affect the PLC. Most PLCs don't really do well with malformed traffic, and if a particular protocol is not in use, any malformed traffic from a malicious actor on that protocol is not going to affect the PLC. Following the practice also reduces the overall complexity, because what's not there, or not enabled, does not need to be maintained, administered, or updated. These days you hear a lot about SBOMs, software bills of materials. The idea is that when a vulnerability is found in a network stack from your vendor, you investigate whether it's a relevant threat for your application and then follow the mitigation path. However, if those ports or protocols are not in use, that simply reduces the attack surface and the risk. Next up is practice 5.
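To make the idea concrete, the comparison between the documented data flows and what is actually open on the controller can be sketched like this. The port numbers and the function name are illustrative assumptions, not from the Top 20 document:

```python
# Compare the ports documented in the data flow diagram against the
# ports actually observed open on the controller. Port numbers here
# are illustrative (e.g. only Modbus/TCP on 502 is documented).

DOCUMENTED_PORTS = {502}

def undocumented_ports(observed_open_ports):
    """Ports that are open but have no justification in the diagram;
    each one is attack surface that should be disabled or documented."""
    return sorted(set(observed_open_ports) - DOCUMENTED_PORTS)
```

Anything this check returns, say Telnet on 23 or FTP on 21, is exactly the kind of default-enabled protocol the practice says to switch off.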
Again, another pretty straightforward one: use cryptographic hashes, or checksums, as integrity checks for the PLC code. Some PLCs have a built-in checksum mechanism, and if that's available, by all means write the checksum to a register, log it, historize it, and alarm or alert when it changes, so you can verify the integrity of the code in the PLC in a pretty straightforward manner. Most PLCs, however, do not have the processing capacity to generate or check hashes. In that case, you can use the EWS, the engineering workstation, to generate the hash or checksum of the project file or software, and when you upload the binary back from the PLC, you can compare it within the EWS and verify the integrity that way. Knowing whether the PLC code has been tampered with is essential for a few things. Number one, it helps you notice a compromise: the integrity of the code is suspect if the hash or checksum from the PLC is not what you expect it to be. Also, after a compromise, the file, the binary, that you have verified with this hash check can help you get the PLC safe to operate again, because you can download the binary that you know is good and then run with it. And finally, it's also a means to verify that the PLC is still running the code approved by the integrator or the manufacturer, because sometimes that's necessary for warranty purposes, or perhaps for compliance, where you have to run a particular version of code and confirm it by using this integrity check. Going a little deeper into some of the more interesting ones: practice 3, leave operational logic in the PLC wherever feasible. On the right side, you see the HMI here and the PLC here. This practice says: don't put a lot of code in the HMI; put all the operational code in the PLC. The operator visualization software in the HMI these days has a lot of coding capability.
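The EWS-side comparison can be sketched with Python's standard `hashlib`. This is a sketch of the idea only; real project files, upload mechanisms, and checksum registers are vendor-specific:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hash of a PLC program binary, computed on the engineering workstation."""
    return hashlib.sha256(data).hexdigest()

def code_integrity_ok(known_good_hash: str, uploaded_binary: bytes) -> bool:
    """Compare the binary uploaded back from the PLC against the
    known-good hash recorded when the code was approved."""
    return sha256_of(uploaded_binary) == known_good_hash
```

The known-good hash is recorded once, when the approved code is downloaded; any later mismatch means the code on the PLC is no longer the code you approved.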
Initially that capability was there to add a few alarms ad hoc, maybe change some limits for those alarms, or give some lower-level access to an operator, to get some kind of code in there without necessarily touching the PLC or downloading to the controller. Over time, however, some programmers started utilizing the HMI code because it's just easy to work with sometimes. But calculating values like totalizers is going to be a lot more accurate in the PLC than in the HMI, because the PLC is much closer to the field; the I/O is connected to the PLC. The latency in communication between the HMI and the PLC is a big deal, because the HMI is typically polling once per second, or as needed. So if the HMI is not getting enough updates, it's not in a good position to run a totalizer or a timer, compared to what the PLC can do. Then, rebooting an HMI shouldn't cause any process upset; it shouldn't affect PLC operation. However, if you use the HMI for these calculations, values, and totalizers, that's a problem, because if you reboot the HMI, those get reset. Similarly, if you're doing Windows patching or taking a backup, those periods of unavailability of the HMI might affect those values, which you do not want. So anything in a safety or protection layer is better off being handled by the PLC. As an example, enable or disable actions: I've seen cases where the operator clicks on the enable button and that click is visualized to the operator as if the action has been performed, and only then is the command sent to the PLC. But what if the PLC cannot execute that command? What if the PLC has some forced items that stop it from executing? Those are not visible to the operator, because this code is in the HMI. Similarly, if you have a time delay for a motor restart and you put that in the HMI code, what happens if there are comms issues or other problems?
If the HMI cannot calculate the timer properly, the motor might restart when it's not safe, or not start when you want it to. Force signals are typically visible only in the PLC and not in the HMI, so you can't really put safeguards in the HMI code the way you can in the PLC. Similarly, you get inconsistent visualization or status when the HMI values are not properly mapped back to the PLC: if you configure something new, some new points, some new alarms, but they're not represented in the PLC, then you have an inconsistency, and the PLC doesn't even know that these other things exist. And finally, consistency in code maintenance, audit, and change management: if you split the code into two different places, these all become difficult. Furthermore, if you unify your code in the PLC, you're reducing the attack surface. You can all imagine scenarios where the HMI is more exposed, maybe public facing, maybe on the internet, and someone is able to access it and move the mouse around. We've seen that, for example, in the Florida water treatment incident in Oldsmar, where the attacker was even able to enter values much higher than what the water should contain. You can imagine those kinds of attack scenarios at the HMI level. However, if the code is in the PLC, there's a lot less risk, because the threat actor needs to be a lot more sophisticated and a lot more focused to proceed with an attack, versus randomly moving the mouse around. Next up is practice 11, which is: instrument for plausibility checks. I've given an example here so you can picture some of the scenarios I'm going to talk about: we have a couple of tanks, some pumps, and some instruments. This practice is about comparing integrated and time-independent measurements to confirm that the measurement you're seeing is accurate. For example, with a metering pump and a tank level: the volumetric change in the tank should equal, or be proportional to, the integrated flow.
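That level-versus-integrated-flow comparison can be sketched as a simple volume balance. The sampling interval, the names, and the tolerance are illustrative assumptions:

```python
# Volume balance: the tank level change should match the integrated
# flow. Sampling interval, names, and tolerance are illustrative.

def integrate_flow(flow_readings, dt_seconds):
    """Total volume implied by periodic flow readings (e.g. m^3/s)."""
    return sum(flow_readings) * dt_seconds

def level_change_plausible(volume_from_level, flow_readings, dt_seconds,
                           rel_tolerance=0.05):
    """True if the level-derived volume change agrees with the
    integrated flow within the tolerance; False means an implausible,
    possibly manipulated, measurement."""
    integrated = integrate_flow(flow_readings, dt_seconds)
    allowed = rel_tolerance * max(abs(integrated), 1.0)
    return abs(volume_from_level - integrated) <= allowed
```

Two independent measurements of the same physical quantity, and the PLC only has to check that they agree.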
If it's not, raise an alert. Similarly, for a burner in a boiler, the added heat from the fuel should equal, or be proportional to, the temperature increase; if it's not, raise an alert. The idea is to compare different measurement sources, measuring the same phenomenon in different ways. Another example: a compressor stall is typically indicated by a reversal of flow, but you also have vibrations, so those two are measuring the same phenomenon in different ways. And it doesn't necessarily have to be different sensors; it can also be the same value coming in over two different communication channels. Maybe the PLC passes the value to the DCS over its network communication, and the same measurement also goes as a 4-20 mA signal to the DCS or some other equipment, so you can compare the two. If it so happens that whoever manipulated one channel did not manipulate the other, you will get a warning that something is wrong. So the idea behind this practice is to facilitate monitoring for manipulated values, as long as not all sensors are manipulated at the same time; it's just another level of manipulation protection in a PLC where none existed before. It also prevents the acceptance of wrong measurements, or identifies them, like we talked about before: if the tank level suddenly shows much higher than it should in correlation with the volume computed from the flow meter, you can identify that some measurement is wrong. You're also able to rule out physical causes of failures more quickly: if you only believed the tank level, you might think that a particular valve has failed, but if you compare it with the flow, you know the valve is okay, because with that flow you could not have this level. All right, next up is practice 7, which is: validate and alert for paired inputs and outputs. Paired inputs and outputs are those that are physically not able to occur at the same time.
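The paired-signal rule can be sketched with a valve's open and closed limit switches. The names are illustrative; a real implementation would also latch an alarm:

```python
def paired_signals_ok(open_signal, closed_signal):
    """A valve cannot report open and closed at the same time; both
    asserted means an instrument failure or forced/manipulated inputs.
    Both de-asserted is fine: the valve may simply be mid-travel."""
    return not (open_signal and closed_signal)
```

One line of logic, but it catches both a failed limit switch and an attacker clumsily forcing inputs.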
For example, a motor can be running or not running: it can be started, or it can be stopped, but not both at the same time. Similarly, a conveyor belt can run forward or in reverse, but not both at the same time. And of course, a valve can be open or closed, but not both at the same time. So paired signals cannot be asserted at the same time unless there is a failure. For example, the open limit switch is working fine, but the closed switch failed and also shows closed. That's an instrument failure, or it's malicious activity, where someone is trying to force a bunch of things and also happens to force both the open and the closed signal, or forces the closed signal where the valve is actually open. Some additional recommendations as part of this practice: configure start and stop as distinct outputs, as long as the MCC is capable of receiving those outputs, instead of a single output that is toggled on and off. Rapidly toggling two distinct outputs, especially in the right sequence, is very difficult in a PLC, versus just one output that you can rapidly switch on and off. And also consider adding a timer for restart after a stop is issued; this again helps avoid rapid toggling of the start/stop signals, whether due to errors or due to bad intent. And finally, let's take a look at practice 16, which is: summarize PLC cycle times and trend them on the HMI. The cycle time is the time it takes to compute each iteration of the logic in the PLC, and these cycle times should be fairly constant unless there are changes to the environment, the PLC logic, or the process. In the example here, you see time over here and the cycle times. There's some expected fluctuation, but every so often you get a spike, which can be malicious. Unusual cycle time changes can be an indication that the PLC logic has changed, that some code has been added.
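A minimal alerting rule on top of trended cycle times might look like this. The acceptable band here is an illustrative assumption; in practice it comes from the control and process engineers who know the application:

```python
def cycle_time_alerts(cycle_times_ms, low_ms=1.0, high_ms=3.0):
    """Indices of scan cycles whose duration leaves the acceptable
    band. The band (here 1-3 ms) is an illustrative assumption; in
    practice it is set per application by the control and process
    engineers."""
    return [i for i, t in enumerate(cycle_times_ms)
            if not (low_ms <= t <= high_ms)]
```

Unlike the hardware watchdog, which only stops the CPU at an extreme maximum, this band flags spikes that stay well below the watchdog limit.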
Visualizing these values over time can alert you to these anomalies, because let's say you have a threshold somewhere up here: none of these spikes triggered it, but visualizing them in this manner can help you understand that something is going on. Now, many PLCs have maximum cycle time monitoring at the hardware level. So in this graph you have the cycle number against the cycle time, and if the cycle time exceeds a maximum value, the CPU usually issues a stop. At this maximum, at number five, the CPU would stop. Attackers know this, so they typically don't add that much code; they keep the code lean so that the threshold is never reached. However, if you define tighter boundaries, acceptable thresholds, then one and three would be okay, but anything above would raise an alert. These boundaries would be based on the application and on an understanding of the process, with the PLC control engineer working together with the process engineer to find the thresholds. The idea behind these thresholds is that you get alerts if and when malicious code is added and the PLC cycle time is higher than it should be. All right, now I hand it over to Sarah to talk further about the outlook for the project. So, yeah, that's me again, and actually my last words are about the outlook: what do we do with this project in the future? Because obviously we're not done yet. We've worked on this for about a year, we've put our first document out, and we still have a lot of things that we need to do. The important thing to know is that there's a core team that meets every couple of weeks and that has a few things on its to-do list. One is that we are thinking about doing top 20 secure PLC environment practices. We're also thinking about building a playground.
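The cycle-time thresholds discussed a moment ago can be sketched as a simple band check. This is an illustrative assumption of how one might flag scans for trending on the HMI; the band and the hard maximum are made-up values, not vendor defaults.

```python
# Cycle-time anomaly check: alert when a scan time leaves the band the
# control and process engineers agreed on, well below the CPU's hard
# watchdog maximum that would trip a stop. All values are illustrative.

HARD_MAX_MS = 5.0       # CPU issues a stop above this (hardware watchdog)
BAND_MS = (1.0, 3.0)    # acceptable band defined for this application

def cycle_time_alerts(cycle_times_ms):
    """Return the indices of scans whose cycle time falls outside the band."""
    low, high = BAND_MS
    return [i for i, t in enumerate(cycle_times_ms) if not (low <= t <= high)]

# Lean malicious code often stays under HARD_MAX_MS, so only the tighter
# band catches it; trending the values on the HMI makes spikes visible
# even when no threshold fires.
samples = [1.8, 1.9, 2.0, 4.2, 1.9]
print(cycle_time_alerts(samples))  # [3]: the spike at sample 3
```

The point is the two-tier design: the hardware watchdog protects the CPU, while the tighter application band protects against stealthy logic additions.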
So, building virtual PLCs where you can try out the practices virtually, with demonstrators that you could download or try out online. Of course, we're also thinking about improving the practices and publishing a version 2.0 based on the comments that come in. If you go to our website, which I'll show you in a minute, you can download a comment form and leave us any comment you want, and they're really all welcome. We want to improve these practices. A comment can be: this practice doesn't make sense. It can be: I implemented this, and this could be a good example. It can be: I don't understand this, you could explain it better this way. Or it can be: I've got a completely new practice that needs to be on that list and that I want to propose. We've got a template for that. We're also working on translations into other languages, because obviously we want engineers to adopt the list, and adopting something that is in your own language is just way easier. And we're thinking about a training template for the PLC coding practices, and about how to include them in procurement specifications, because that's one easy application that could be of a lot of use for asset owners. That's the core team. Then we've got commenters; as I said before, everyone is really invited to comment. It's not a lot of work, and we're looking forward to talking about your comments. We've got supporters who build trainings, who write articles and record podcasts. And we're thinking about collaborations, which is an important part. For example, we're collaborating with MITRE CWE in order to maybe add a module for PLCs to CWE, because it doesn't have anything like this yet. But really, and that's one of the most important points that I want to make at the end: we want to get this list seen by engineers, used by engineers, and trained for engineers.
We want to get this list out there, and we're really looking into how to best reach engineers, because the security bubble knows about this, but we need engineers to know about it. So we're looking for engineers' associations. We're really interested in what PLC vendors say, and we're really interested in PLC user groups. If you have contacts there, contact us. We can get these slides there, we can get a presentation organized, we can get trainings organized, we can get you in there. We'll collaborate with you in order to bring this knowledge into your communities, locally and regionally. We have people all around the world in the project who are happy to do that. And lastly, what's very important on this slide are the green bubbles. This is not a closed club. It's a community project, and we really want and need everyone who's interested and passionate about making PLCs more secure. You don't have to be a PLC programmer, you don't have to be a security expert. If you're passionate enough, we can totally use your knowledge and your engagement. In each of these groups there's a lot of space for taking up new people, and we welcome you with open arms. Lastly, here's the project website with all our contact details. If you want to contribute, if you want to be part of any of the teams or of the core team, shoot an email to PLC security at amritsia.de. We've got a Twitter account, and you can follow both Vivek and me on Twitter and on LinkedIn. There's also a comment form on that website that you can download and send to that email address. We would be very happy to hear from you. I hope you learned something about secure PLCs today, and I hope you continue enjoying DEF CON. Thank you.