I'm Aaron Grealius. I'm a security researcher with Grimm, on the cyber-physical systems team. We work on improving the security of embedded devices everywhere. I've also got a long background in embedded software development on critical systems, in telecom, aerospace, medical, and a bunch of different fields. My name is Tim Brun. I run the cyber-physical systems team here at Grimm. I've been at Grimm for about five years now, hacking all sorts of different types of embedded systems, from aerospace to automotive, heavy trucking, medical. We're here today to talk about where cybersecurity meets aviation regulations. I'm going to start by talking about the current status of cybersecurity regulation, and more about how software development happens in aerospace. The current process that we have descends from how airlines have historically approached the topic of safety. Airlines have historically approached safety from a purely mechanical standpoint: this part will last for about this long in airplanes, so it has to be replaced every so often. There are maintenance procedures and airworthiness directives that talk about how often these parts need to be replaced, based on how long it takes them to fail. And there are things that they test. Anywhere you have stresses on a metal joint, you'll have testing done, both visual inspection and other types of testing depending on circumstances, to see if it's starting to fail, if cracks are starting to form in those areas. But it's really designed around this concept that once you make a part, it behaves in some sort of defined way that doesn't change. Once you design it, do all the fairly rigorous testing that they do in the aerospace industry, and install it, it will change in ways that are defined essentially by the physics of the situation: stresses on metal, opening and closing of latches, and so on. Parts have defined physical characteristics, and you can define maintenance procedures that will detect all of these potential issues on the airplane. These maintenance procedures are defined to replace things that need to be replaced periodically. In the commercial aerospace world, these maintenance procedures will be followed; the whole system relies on that happening, and it does happen. And they do it in places like this. They have drawers with labels, places for every bit, every socket, every tool, and it's access controlled. So you know that when you do your maintenance procedure, there's a procedure for using this size socket to remove these bolts, and you put it back when you're done, so you don't wind up leaving a socket in an airplane. They have these fairly strictly defined procedures that you follow in order to make sure that all this maintenance is done correctly and done completely. So that's the history of where we came from. As far as the current regulations that drive the software development process for modern aviation software, that's regulated, at least in this country and many others, by something called DO-178C. We're not going to go into this too deeply, but knowing how things work right now and how levels of safety are defined will help us think about cybersecurity and how to go forward with it. This is developed by RTCA, which stands for Radio Technical Commission for Aeronautics, which I never actually remember until I look it up. It's not super important; it's just an organization that develops a bunch of aviation standards.
And again, these are adopted by the FAA in the United States, Transport Canada, and the European Aviation Safety Agency, EASA. So what is DO-178C? I like to summarize it this way: you basically tell the FAA, or rather a DER, a Designated Engineering Representative who stands in for the FAA, how you plan on designing, creating, and testing your software. Then you go and follow your plan and do those things, and then you have to prove that you followed your plan. That's it in a nutshell. Now, there's a lot more paperwork involved. There are a lot of checklists you have to go through to make sure that your plan actually identifies all of the areas of concern that DO-178C points out. And if the DER thinks that anything is lacking in your plan, they will tell you and help you make sure it gets revised appropriately. That's the way things go forward right now. And as you can see, it's a very waterfall-based model where you do all of your design up front: you create a design, you create a spec, you build the actual product that implements the design, and you do testing on it to make sure that the product you designed actually meets the requirements it was supposed to fulfill. That's a good point. The way this works in practice is extremely waterfall. There have been attempts to figure out how to fit agile methods into this, but it is very much plan, do, prove, pretty much in those steps. And this again comes from that historical background of making mostly mechanical components, because that is how you design them. You don't usually do iterative development on, you know, a wing. I mean, if there are issues you'll go back and fix them, of course, but there is no fail-fast in making a lot of these mechanical components. So it's trying to shoehorn software development into how they make mechanical components for these aircraft. Right. And as anybody who's been in software development for a while knows, waterfall has been around for a long time. DO-178C categorizes different levels of software according to the potential consequences of a failure of that software, levels A through E. Basically, level A means that something catastrophic would happen if it fails. Level E means that nothing is going to happen safety-wise. You know, maybe your passengers can't watch movies, right? That would be level E software. Level A is also the highest cost, which is probably unsurprising. The last numbers I could find show that it's estimated to be about $100 per line of code for level A software. Now, not all software on an airplane is level A; it's usually kept to as small a percentage as possible because of the cost. But a Boeing 787, for example, is estimated to have between 6 and 7 million lines of code. So even if it's a small percentage, the certification of the software alone drives a great deal of cost.
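To put rough, purely illustrative numbers on that: if, say, 5% of a 787's roughly 6.5 million lines of code were level A at $100 per line, that slice alone would run on the order of 0.05 x 6,500,000 x $100, or about $32.5 million. The 5% figure is invented for the example; only the per-line cost and the total line count come from the estimates above.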
Basically, these go down through the different levels. Software that is rated level A will usually have autonomous control, which means that there's no way to intervene if something fails, hence why it's so expensive and so time-consuming to develop. Then, going down, level B would be something where something hazardous could happen, but level B will typically allow for pilot intervention. So it might be something that significantly increases the workload or the challenge of flying the aircraft, but it would be something that can be addressed by pilot training. The difference between level A and level B is usually loss of life: a level A failure is likely to lead to loss of life, while a level B failure is not likely to lead to loss of life or severe injury; injuries sustained from a level B failure should be minimal. And then it goes on down from there: C and D are major or minor impacts to the flight of the aircraft. Entertainment systems, by contrast, are going to be level E. Lower levels mean fewer regulations and fewer checklists, essentially; as you go down in level, you have to do less in that proof step of DO-178C. Code is expensive. So this software, especially if it's level B or level A, is very unlikely to have unused code or unnecessary code in it, like a shell on the system. Actually, at DO-178C level A, you're not allowed to have dead code in your design at all. Because it's so expensive to develop the code, there's usually a minimal amount of code on these systems. There is extensive testing done for the higher certification levels, or at least a significant quantity of testing. Pilots are trained in how to use the software, and the systems are designed for redundancy and safety. So if you do have a failure, there should be a redundant system to back it up, unless you have a 737 MAX, but we won't talk about that right now. There are very, very limited interfaces on these flight-critical systems. They use buses like ARINC 429. ARINC 429 is a UART-style bus where you're limited to sending 32 bits of data at a time, so there's not really room for a buffer overflow here; you don't really have longer messages on top of ARINC 429 (there's a rough sketch of the word layout below). These interfaces are very strictly defined and fairly limited, and provide very limited functionality beyond just sending the necessary critical data back and forth on the bus. Yeah, there's a great deal of emphasis on determinism and reliability.
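To make that concrete, here's a minimal sketch of how a single ARINC 429 word breaks down. The field layout (label, SDI, data, SSM, parity) is the standard one; the macro names are ours, purely for illustration.

```c
#include <stdint.h>

/* One ARINC 429 word is exactly 32 bits. There is no length field and
 * no variable-size framing at this layer, so a receiver never has to
 * trust a sender-supplied length. */
typedef uint32_t arinc429_word;

#define A429_LABEL(w)  ((w) & 0xFFu)             /* bits 1-8:   label (octal parameter ID)    */
#define A429_SDI(w)    (((w) >> 8)  & 0x3u)      /* bits 9-10:  source/destination identifier */
#define A429_DATA(w)   (((w) >> 10) & 0x7FFFFu)  /* bits 11-29: 19 bits of payload            */
#define A429_SSM(w)    (((w) >> 29) & 0x3u)      /* bits 30-31: sign/status matrix            */
#define A429_PARITY(w) (((w) >> 31) & 0x1u)      /* bit 32:     odd parity bit                */
```

Nineteen bits of payload per word is all you get; anything larger has to be split across words by a strictly defined protocol, which is part of why these buses are so hard to overflow.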
So, there are some bad things related to the current way software is designed in aerospace. As we said, both Tim and I have worked in aerospace, on the development side of things and also on the cybersecurity testing side of things. There were many things I found frustrating when I was on the development side, and this is a piece of that list right here. Data validation means guarding against accidental data modification. The way software is made robust in aviation systems is by guarding against things like cosmic-ray bit flips, single-event upsets. For that, simple methods like CRCs and checksums are sufficient. Also, the hardware this software runs on is typically slower and older, but more reliable, and therefore modern data validation methods would be very time-consuming for the hardware involved. There's very little code to guard against unexpected scenarios. If every single line of code costs approximately $100, probably more nowadays than when I found that statistic, then you're not going to have a bunch of code guarding against things that the system designers say won't happen. In fact, as Tim said, dead code, code that is never expected to be executed, is not allowed under DO-178C. And there's a lot of complacency. Because of the level of testing involved, because of the amount of work that goes into designing the software, there's a prevailing point of view in the industry that the way we do things, the redundancies that we design into the system, protect us from bad things. And it does protect against single-event upsets. It doesn't protect against determined and directed attacks. One piece of software I was working on a while ago, I had just a straight-up buffer overflow attack against it. And I had to fight fairly hard to actually get it fixed, because the incoming data that hit the function with the buffer overflow could only come from a trusted source that would never send more than a certain amount of data. And if there was some sort of, like Aaron says, single-event upset, a random bit flip that caused the length field to be longer than it was supposed to be, there's a CRC check that would have failed, and they wouldn't have processed the data at all. So they felt that they were completely protected against this scenario, because they just don't have this concept of an adversarial attacker with knowledge of the system being able to get data into this function. They believe that the only way for data to get to this function is from a source that they trust not to send data that's going to cause a buffer overflow.
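A hypothetical reconstruction of that pattern (this is not the actual avionics code; crc_ok() stands in for whatever integrity check the real system used) might look something like this:

```c
#include <stdint.h>
#include <string.h>

#define MAX_PAYLOAD 64u

/* Assumed integrity check: returns nonzero if the CRC over the whole
 * message is valid. This defeats random bit flips, but an attacker who
 * crafts the message simply computes a valid CRC over a hostile length. */
extern int crc_ok(const uint8_t *msg, uint16_t msg_len);

static uint8_t payload_buf[MAX_PAYLOAD];

int handle_message(const uint8_t *msg, uint16_t msg_len)
{
    if (msg_len < 2)
        return -1;

    /* Sender-supplied length field, first two bytes of the message. */
    uint16_t declared_len = (uint16_t)((msg[0] << 8) | msg[1]);

    if (!crc_ok(msg, msg_len))
        return -1;   /* catches accidental corruption only */

    /* VULNERABLE: declared_len is trusted because "the source is
     * trusted"; nothing stops it from exceeding MAX_PAYLOAD. */
    memcpy(payload_buf, &msg[2], declared_len);
    return 0;
}
```

The fix being resisted was a one-line bounds check, something like `if (declared_len > MAX_PAYLOAD) return -1;` before the memcpy: cheap insurance even when the spec says the input can never be that long.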
So Tim's kind of jumping ahead there. The last bullet point on this one is that software is only tested against expected flight configurations. You say, this is what we're going to do for this flight, and that's what gets tested, because testing is expensive. There are a lot of tests performed as part of this software certification process, and you don't want to run more than necessary. What that means, though, is that aviation software, while extremely robust in expected flight configurations, can be extremely brittle when it is put into conditions that were not expected to happen. Some other things here: ground maintenance devices, the things that actually program the different flight devices, are not DO-178C rated, because they're not flying, right? They just stay on the ground while the airplane flies. They're covered under some different guidelines about tool development, but they're not DAL certified; it's something different. Maintenance software: on each box, there's typically a component that will help reprogram that device. That's considered to be deactivated, because when the airplane is in flight, you have a signal that says whether the wheels are on the ground or not. Sometimes it's called weight-on-wheels, sometimes it's called weight-off-wheels; make sure you read your documentation very carefully. Whatever that signal indicates, it's used in the startup process to make sure that the reprogramming software component doesn't run, so the component is considered to be deactivated. There's also frequently a physical switch on the piece of hardware that you have to set to some sort of boot mode, maintenance mode, programming mode. So not only does the weight-on-wheels signal have to be there, you also have to have this switch set in the activated position to boot the piece of hardware into maintenance mode. And the plane won't fly if that piece of hardware is in maintenance mode or programming mode; you have to switch it back to flight mode before the plane will take off. Right. And sometimes the programming mode will be built into the wiring harnesses themselves, so when you plug in the harness for reprogramming a box, the harness has a wire jumper that forces that mode. So, there are some current cybersecurity documents out there. DO-326 outlines the activities that should take place at the system and airframe levels. These are the types of security activities you should do if you're developing either a system for an airplane or the entire airplane, the various testing activities and testing documents that you should create, and how to create them. DO-356 goes a little bit more into the nuts and bolts of what testing should look like. Something I like about DO-356 is that it makes provisions for the more ad hoc, pen-test-style testing that hackers tend to do: the adversarial approach of scanning different systems, trying to find some chink in the armor that you can then exploit to get into the system. Most testing that you see, such as DO-178C-style testing, is extremely formalized. Every test procedure is written down: you do this set of procedures, this is your test, there are conditions for the test to pass, and if the passing conditions aren't met, the test fails. DO-356 does provide provisions for the more ad hoc style of testing that can be much more useful for security testing, provisions for things like fuzzing, which doesn't really have an analog in DO-178C-style testing; you can't really formalize a fuzzing procedure that way.
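For a sense of why fuzzing resists that kind of formalization, here's a minimal mutation-fuzzing loop as a sketch. parse_message() is a hypothetical entry point in the unit under test, and the seed bytes are made up; the pass criterion is "it never crashes or violates an assertion," not a scripted expected output, which is exactly what doesn't fit the written-procedure model.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical parser from the unit under test. */
extern int parse_message(const uint8_t *buf, size_t len);

int main(void)
{
    /* A known-good message captured from the bus (illustrative bytes). */
    const uint8_t seed[] = { 0x10, 0x04, 0xDE, 0xAD, 0xBE, 0xEF };
    uint8_t buf[sizeof seed];

    for (unsigned long iter = 0; iter < 1000000UL; iter++) {
        memcpy(buf, seed, sizeof seed);
        /* Flip one random bit somewhere in the message... */
        buf[rand() % sizeof buf] ^= (uint8_t)(1u << (rand() % 8));
        /* ...and see whether the parser survives. A crash, hang, or
         * failed assertion here is a finding; there is no per-input
         * "expected result" to write into a test procedure. */
        (void)parse_message(buf, sizeof buf);
    }
    return 0;
}
```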
So that's where we are right now. We've got some concerns with the way aviation software is currently developed. So what do we do about that? Well, Tim and I have a plan. First, as with anything, we've got to figure out what our actual goals are: what needs to be secured, which devices need to be secured, and what measures we can use to reach those goals. It's all very well and good to say, secure the system. But if you don't actually tell people how to go about doing that, you're going to end up with a large number of ways of doing it, some of which may not be very effective. So, the problem up front here: everything is hackable, right? Given enough access, motivation, time, and money, you can get into any type of device. You give us a piece of hardware, and we can get into it. There are a lot of measures you can use to make it more difficult, but with a large enough playground, you can get into anything you want. So, for goals here, we need to assume that the attacker has all the necessary information and resources to do what they want to do. If we're going to design an aircraft to be secure, we need to start from the assumption that the adversary knows what they need to know to get into it. We need to make sure we have some way to prove that the software on a particular component is 100% correct; if you cannot prove that, then it's going to be impossible to secure an aircraft. We need to prevent accidental transmission of anything malicious: if you know that everything is secure at some point in time, you need to protect it going forward. And we need to put multiple barriers in place between all these different components. We know that, given enough time and money and so on, anybody can get into a device if they want to. But it is possible to make an attack practically impossible if you limit some of those things. So if you know, when an airplane is on the ground, that it has all the correct software and configurations in all the different parts that are on the aircraft, then an attacker would only have the limited time frame of the actual flight itself to perform an attack. So that's what we're aiming for. We're also aiming to get the idea into aviation that a software component can go from good to bad very quickly. Just one exploit, and all of a sudden this device that was working completely fine is no longer working completely fine. It's not based on some statistical degradation over time; it can just be good and then be bad. So, to define what is important to secure, here's my list. We have things that are important for flying the aircraft, and then we have things that are not important for flying the aircraft. These are the two device categories. DO-178C defines five different levels of how critical software is and the consequences of its failure; for us, there are really only two categories, important or not. Because if you leave something that has only partial measures in place on the aircraft, then that is a potential point that an attacker could use to pivot to other parts of the aircraft. Flight devices are any component that's used to operate the aircraft, or any device that communicates with those components. We've seen instances where you may have a level E hardware device that communicates in some way with even a level A hardware device. This is allowed because that communication has been strictly defined and tested on the level A software side to make sure that it rejects any bad inputs, and oftentimes that checking is very robust against the bad inputs they anticipated. But from a security context, anywhere that you can touch something important from also needs to be secured, because a device with a lower DO-178C DAL classification can maybe in some way disrupt a higher-level, more critical device. This is something you have to think about from the security perspective: an attacker using this as a potential pivot to get onto a more critical system. Also, any device or software that interacts with these components, whether in flight or on the ground, counts. All of your ground maintenance software now needs to be considered, from a security standpoint, essentially the same as a flight device, because these are things that are designed to be able to modify the behavior of the pieces of hardware that are responsible for directly controlling the flight of the aircraft. And I think that's an important point of what we're defining here: the things that we consider flight devices include the things on the ground that program those devices, because those are very critical to making sure that the correct software, and only the correct software, is programmed onto these components. One thing we also talked about is components of different levels talking together on the same bus. One very common level A software item on recent aircraft is an operating system running on a particular device. There is a standard called ARINC 653, which defines a partitioned operating system; it's effectively like virtual machines, so software of multiple different criticality levels can run on the same component. As we all know in security, though, virtual machine escapes are definitely a thing.
So if there was an attacker who was able to get into a lower-level partition, they might be able to pivot from there into the operating system itself, or into a more critical partition. So those are our devices. How do we go about securing them? Well, we've got a couple of different things here. There's adversarial testing. I chose the words "adversarial testing" deliberately; I don't want to say pen testing, because pen testing means different things to different people. This is the idea of testing the system as an attacker: you're no longer just strictly checking against a specification, you're actually looking at it from the angle of, if I'm an attacker, how could I leverage this system to do something? How could I use this protocol to do something? All the things that we security professionals do in our jobs, now applied to an airplane. Another method is a sort of pre-flight software and configuration authentication: if, before every flight, there was a way to go and check that the system is actually running the software you think it's running, a cryptographically secure method of validating that the software that's running is the software that's supposed to be running, and that your configuration is correct. Other methods could include physically incompatible interfaces on flight systems, so that you can't just plug your iPad into a critical flight system; the interfaces on these flight systems just shouldn't speak commodity protocols like USB or Bluetooth or Wi-Fi. We'll talk about this a little bit later. We also need to talk about air gaps. Anybody in ICS is definitely laughing a little bit right now, but on an airplane, with a strictly controlled configuration, air gaps could certainly be part of the solution for making sure that you can't jump from the non-flight-critical portion of the aircraft to the flight-critical portion, at least as a passenger sitting in the plane. For adversarial testing: review and test all interfaces for vulnerabilities. You can't just blindly trust the external inputs into the system, even if they should only be coming from a system that's also been tested at the same level and should only be producing correct outputs for you to consume. You still can't trust this, because you can't always trust that the other system hasn't been compromised in some way. I like to give an example from automotive for that. Say you have one component that tells you the brake is pressed, and you make sure that the thing that actually generates the brake-pressed signal works perfectly and can't be hacked, and the thing that interprets that signal works perfectly and can't be hacked. That's all well and good, but if you then have an entertainment system on the same bus, and that sends a message saying the brake is pressed, well, now you can't trust your inputs, can you?
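To illustrate that, here's a minimal sketch using Linux SocketCAN. The frame ID 0x123 and the one-byte "brake pressed" payload are invented for the example; the point is that on a shared broadcast bus, the receiver has no way to know which node actually transmitted the frame.

```c
#include <linux/can.h>
#include <linux/can/raw.h>
#include <net/if.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Open a raw CAN socket on interface "can0". */
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);

    struct ifreq ifr;
    strncpy(ifr.ifr_name, "can0", sizeof ifr.ifr_name);
    ioctl(s, SIOCGIFINDEX, &ifr);

    struct sockaddr_can addr = { 0 };
    addr.can_family  = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(s, (struct sockaddr *)&addr, sizeof addr);

    /* Transmit the same frame the real brake module would send.
     * Classic CAN frames carry no sender identity and no
     * authentication, so any node on the bus (say, a compromised
     * entertainment system) can do this. */
    struct can_frame frame = { 0 };
    frame.can_id  = 0x123;   /* hypothetical "brake status" ID */
    frame.can_dlc = 1;
    frame.data[0] = 0x01;    /* "brake pressed" */
    write(s, &frame, sizeof frame);

    close(s);
    return 0;
}
```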
Yeah, and test your protocol stacks and middleware for vulnerabilities. I mean, we've gotten a little bit better, but there have been lots of vulnerabilities found in TCP/IP stacks and other network stacks over the years. Also, review your source and compiled code for vulnerabilities, and make sure you're doing things like static analysis on the code; not a perfect solution, but it does catch some things. Secure boot is, I think, a fairly critical part of making sure that we can actually trust the software running on these systems. Having a good secure boot platform set up goes a very long way toward preventing an attacker from modifying the operating system or the software that runs on the platform, because any modification should be detected. And, of course, having multiple layers of protection: defense in depth has always been one of the strongest things you can do for the security posture of basically anything. The topic of secure boot is one that could be talked about for a long time, and different people have different definitions of what secure boot means. At a simple level, what we mean right now is just that the hardware has the ability to use cryptographically secure methods to ensure that only authorized software and components are installed and operating on that device, along with a way to store the keys that is immutable. Right, and also, if somebody gets in and pulls apart one piece of hardware, they should not be able to break secure boot for all of the devices. Right, something that uses asymmetric cryptography and secure hashing, per device. So, pre-flight software authentication. We talked about this a little bit earlier. We want to make sure, before takeoff, that everything on the aircraft is 100% correct, secure, and authentic. This would be some centralized way to query every single device and find out what software and configuration are running on it. Something like this already partially exists: before an aircraft flies, the people who fly it already work from a list of the software versions that should be running on each component of the aircraft. So this isn't as outlandish as it sounds. You should use cryptographically secure algorithms: no CRCs, no checksums; you should be using something like SHA-2, SHA-3, or better. And the protocol, the way the centralized device queries every single component, should be a challenge-response protocol. The protocol should also be tested adversarially to make sure it cannot be spoofed, and we need to make sure we can prevent replay attacks. So if there is some malicious software, some malicious component on a box, it should be incapable of falsifying the authenticity of that software. You also want to validate the entire range of memory and flash on each device. You want to make sure an attacker can't just set the length to zero, and you don't want correct software and correct configuration on either side of a mystery component in the middle, sitting in a gap that just isn't checked because it wasn't thought to be important, right? You need to make sure that the entire range is validated.
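A minimal sketch of the component-side piece of that, assuming a SHA-256 implementation is available on the box (the function names, the nonce size, and the flash_base()/flash_size() helpers are all invented for illustration; this isn't taken from any existing avionics standard):

```c
#include <stddef.h>
#include <stdint.h>

/* Assumed available on the component; placeholder context type. */
typedef struct { uint8_t opaque[128]; } sha256_ctx;
extern void sha256_init(sha256_ctx *c);
extern void sha256_update(sha256_ctx *c, const uint8_t *data, size_t len);
extern void sha256_final(sha256_ctx *c, uint8_t digest[32]);

/* Assumed descriptors covering the ENTIRE flash range: code, config,
 * and any "unused" gaps, so nothing can hide in an unchecked region. */
extern const uint8_t *flash_base(void);
extern size_t flash_size(void);

/* Answer one attestation challenge from the centralized checker.
 * The checker sends a fresh random nonce with every query, so a
 * compromised box can't replay a previously recorded answer; and
 * because the hash covers the whole range, it can't "set the length
 * to zero" or skip the region where a payload is hiding. The checker
 * computes the same hash over its golden image and compares. */
void answer_attestation_challenge(const uint8_t nonce[32],
                                  uint8_t response[32])
{
    sha256_ctx c;
    sha256_init(&c);
    sha256_update(&c, nonce, 32);
    sha256_update(&c, flash_base(), flash_size());
    sha256_final(&c, response);
}
```

One honest caveat: a fully compromised box could keep a pristine copy of the original image and hash that instead, which is why this is one barrier among several (secure boot, adversarial testing) rather than the whole answer.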
Yeah, physically incompatible interfaces. We're seeing flight crews with iPad devices that serve as their electronic flight bag, holding the flight information. One of the ways to help prevent malicious software from being loaded onto these flight-critical systems is to make sure that there are physically incompatible interfaces between the non-flight systems and the flight-capable systems. So your flight-capable systems shouldn't have USB, they shouldn't have Bluetooth, they shouldn't have Wi-Fi; they shouldn't have these standardized protocols that everything speaks. Instead, they should use specialized hard-wired connections like the connectors we have a picture of here, which are very common in aerospace. That just makes it a lot harder for the flight crew to plug their iPad into a flight-critical system. And if you look at these connectors, one of the important things about them is that they've actually got different keys. There are usually a bunch of different pins, so you have all the necessary signals for a particular connector, but there are also a lot of different varieties of keys: some have five slots, some have four, with different widths and different positions. The whole idea is to make sure that you cannot connect a component incorrectly. This is already what's done on an aircraft for connecting all the different flight components, and it's something we say is necessary when connecting anything to a flight system. So, air gaps. As Tim said, everybody who is in ICS is probably laughing that we put that up there. But the whole point is that an aircraft doesn't change. If it changes, it changes because the design of the aircraft changes; it changes because there was an actual documented and tested reason for doing so. So, in designing the system, you should make sure that the flight systems and the non-flight systems cannot talk at all. They should be physically separate. As I said, virtual machine escapes are definitely a thing, so they should not be running on the same box, and they should not even be able to talk on the same bus. You don't want to have to test everything on an aircraft, right? You want to try to trim down what you need to test, because testing is expensive. So, something like an entertainment system, maybe somebody doesn't want to go through testing that. Well, that's okay, if you can make sure that the entertainment system can never, ever talk to a flight system, which means no common buses. It means that the only thing connecting that entertainment system to the flight components is power. If a crew needs to disconnect such a component, they would need to be able to toggle the power to that device instead of actually sending it a message to shut down. So, some examples of what we've got here. We're going to go through three different categories of systems and talk about how we would apply these things to ensure that they're secure. This covers pretty much all the different types of devices that you would have on your aircraft: maintenance devices, passenger-accessible systems, the things that passengers can get access to, and so on. So, maintenance devices. These are the things that program the different components when the aircraft is on the ground. They're usually a fancy laptop, maybe a ruggedized laptop with some sort of waterproof case, with special cables and extra interfaces to talk to the aircraft parts. These need to be adversarially tested; that should be unsurprising to hear by now. Communications with the different flight devices should require full authentication, and there should also be a plan and a method to periodically validate that the device itself has not been compromised.
As I said, they're typically a regular laptop or some sort of PC-type device. So any of the standard ways of securing a PC or a laptop should be used on these devices: they should have auditing enabled, and they should have all the different security protections in place. Programming the other devices typically happens over the aircraft buses. There is an ARINC standard for this as well, although it's optional and not every device implements it. It's called ARINC 615, for data loading, and it can happen over multiple types of device buses, such as ARINC 429. The maintenance devices will then usually talk over that. It's kind of a double-edged sword. On one hand, it means you've got a standard way of doing things. On the other hand, it means that if somebody implements ARINC 615, you could potentially walk up and program the device with an unauthenticated tool, which is why communication with the flight devices should require full, back-and-forth, challenge-response authentication on both ends. The flight device should make sure that the device programming it is who it says it is, and vice versa. Full mutual authentication, that's the term.
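Here's a rough sketch of what that handshake could look like, assuming a pre-provisioned shared key and an hmac_sha256() primitive (both assumptions; ARINC 615 itself doesn't define any of this, and a real design would more likely use asymmetric keys):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Assumed primitives: an HMAC-SHA-256 implementation and a
 * cryptographically secure random source. */
extern void hmac_sha256(const uint8_t key[32],
                        const uint8_t *msg, size_t msg_len,
                        uint8_t mac[32]);
extern void get_random(uint8_t *buf, size_t len);

/* Runs the same way on both ends (data loader and flight device),
 * given transport callbacks for the bus in use and distinct role
 * bytes, e.g. 'L' for loader and 'D' for device. Returns 0 only if
 * the peer proved knowledge of the shared key over a nonce WE chose,
 * which is what blocks replaying an old session. */
int mutual_auth(const uint8_t key[32], uint8_t my_role, uint8_t peer_role,
                int (*send)(const uint8_t *, size_t),
                int (*recv)(uint8_t *, size_t))
{
    uint8_t my_nonce[16], peer_nonce[16];
    uint8_t msg[1 + 16], mac[32], expected[32];

    /* 1. Both sides exchange fresh random nonces. */
    get_random(my_nonce, sizeof my_nonce);
    if (send(my_nonce, sizeof my_nonce) < 0) return -1;
    if (recv(peer_nonce, sizeof peer_nonce) < 0) return -1;

    /* 2. Prove we hold the key: MAC our role byte plus the peer's
     * nonce. Binding the role in keeps an attacker from reflecting
     * our own answer back at us. */
    msg[0] = my_role;
    memcpy(&msg[1], peer_nonce, sizeof peer_nonce);
    hmac_sha256(key, msg, sizeof msg, mac);
    if (send(mac, sizeof mac) < 0) return -1;

    /* 3. Demand the same proof over our nonce, and verify it. */
    if (recv(mac, sizeof mac) < 0) return -1;
    msg[0] = peer_role;
    memcpy(&msg[1], my_nonce, sizeof my_nonce);
    hmac_sha256(key, msg, sizeof msg, expected);
    return memcmp(mac, expected, sizeof expected) == 0 ? 0 : -1;
}
```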
So, passenger-accessible systems. These should either be incapable of connecting to flight systems (not physically connected in any way, not physically sharing any piece of hardware, and ideally not even speaking the same sorts of protocols, so they couldn't even accidentally be connected together; you can't just plug USB into ARINC 429 and have it work), or they need to be, same as everything else, adversarially tested, and they must not have external connectivity that a normal passenger would be able to just plug into and use. If you have something like a USB port for charging, it should only be for charging: only the power lines should be connected, not the data lines, so it isn't capable of passing data back and forth. The one exception is if the entertainment system is incapable of being connected to flight systems; then you could have a USB audio interface on it. But that's the only case where that would be allowed, and you would need to make sure that device cannot talk to anything else. The whole goal here is to make sure that nothing can be connected to the flight systems that shouldn't be, whether accidentally or through somebody dropping a bad cable somewhere near the airport. If somebody is going to try to attack an aircraft during that small, limited window of the flight itself, we want to make sure that it is incredibly obvious that they're doing something they shouldn't be doing. Just plugging in a USB device looks very benign. But if they're starting to rip up panels and trying to find the ARINC 429 bus somewhere down there, well, that should hopefully be obvious to some of the other passengers or the crew, and corrective measures could take place. That's the whole goal: we want to make sure it can't be done accidentally, or on purpose without it being obvious. The flight systems themselves, we've talked about these quite a bit already. These need to be adversarially tested, of course, and we need to make sure that there are multiple layers of protection in the way that anybody trying to attack the system would need to get through. No consumer interfaces. So on all those flight systems, you need to make sure there's no USB in there, there's no Wi-Fi in there. So, iPads: there's the electronic flight bag thing now, and obviously there's an appeal in terms of how easy it is to update. You don't have to carry around stacks and stacks of paper, and it's an easy way to store a lot of things. But that means you've now got a device in the cockpit, which is considered a flight-critical system, which is a consumer device. And that gives me some concerns. So I think there needs to be some thought on how to secure that type of system. Maybe it just needs to not be an iPad; maybe it can be a special-purpose device. There are plenty of things out there with an LCD that can store maps and have a program on them. No consumer interfaces, yes: no USB, so you couldn't connect it where it's not supposed to go; you don't want somebody accidentally plugging in a USB device. So, just as a quick summary here: these are the things that we want to achieve, and I think we have, with what we've put in place. We still have to assume, up front, that our attacker has everything and knows everything. We've talked about providing a way to make sure that the systems can be validated to be correct, that the software and configurations on the different components are correct before the aircraft takes off; we want to limit that potential attack window. We've talked about the ways we can prevent accidental transmission of something malicious: physical air gaps, physical disconnects, physically incompatible interfaces. And again, the goal is to put multiple barriers in place that an attacker would need to get through. Yeah. And there are two device categories: there's stuff that can affect flight, and stuff that can't affect flight. And "can't affect flight" in this case means full physical isolation from anything flight critical. Methods: we've talked about these at length, and adversarial testing, I believe, is a big key part of this. One thing that we didn't really talk about is design standards; that's something that could be addressed as well: making sure that the software, as it's developed, complies with best-practice recommendations for developing secure software, in terms of which functions are allowed and such. That's another potential method. Here we were looking more at fitting things on top of what's there already. But we still do have a few problems that we need to solve, so we're not going to pretend that we solved everything here. We'd like to think that we solved a significant part of aviation cybersecurity just in this talk, but there are a few things that we need to think about going forward. Supply chain security is still a concern. Obviously, as I said, anything can be hacked. While devices are sitting in storage, an attacker could go in and modify them; they've got plenty of time to figure out how they work and either modify the software or make physical modifications to the hardware. How to make sure that doesn't happen, or can't happen, is a challenge. RF communications is another area that we didn't really address. The airplane has a fair bit of RF communication: radio, ADS-B, weather, et cetera.
That is also something that wasn't addressed by what we presented here in this talk. Yeah, I mean, the flight crew, the pilots' training, is a significant part of that: if there are issues with communications between the aircraft and the ground, that's something where training can kick in to deal with the situation. But it would improve matters if there were a way to secure those communications. Something else we might want to think about in the future: how much change would require something to be retested with more adversarial testing? What level of change, what type of change? DO-178C defines that for software, where certain amounts of change are considered minor fixes and don't require a complete redesign of the system, but some level of change does require a complete redesign and retesting of the system. So that's something that would need to be looked at in the future. And then there are also the maintenance devices themselves. We talked about how maintenance devices are used to program all the different components on an aircraft, which means they would be used to program both flight systems and these air-gapped non-flight systems. So maybe those need a slightly different category. You could put together a committee meeting and argue about lots of this stuff for a while if you want to. But I think what we've given here is a fairly complete approach to addressing aviation cybersecurity. Any questions? We'll be in the Discord for any questions.