So, have you ever called IT only to be told, have you tried turning it off and on again? Today we'll be talking about pilots. Usually a pilot is called the pilot in command, and we expect a pilot to be just that: in command. But today's pilots are turning more and more into computer operators and have less and less actual hands-on flying ability. So now imagine you're the pilot flying a gigantic computer at 30,000 feet with 200 souls behind you, only to be told by IT, have you tried turning it off and on again? So I would like to welcome Bernd Sieker, who is a systems engineer and an aviation accident analyst. He specializes in reverse engineering and is developing formal methods for the development of safety-critical systems. And he will enlighten us about problems in aviation automation, because apparently every pilot has uttered the words, what's it doing now? Yes, thank you. Thank you very much. First I'd like to learn a bit about the audience. How many of you here in the hall today are pilots? Oh, quite a few. Commercial pilots? Far fewer. ATP, anyone? Yeah, there's one, I heard one; I can't see. Okay. So some of you will know some of this stuff; I hope there's a bit of new material for everyone. Let's get right into it. What the announcer said was a bit of nice folklore. It's not completely true, but there's a little bit of truth to it. What I'm going to talk about is automation in aircraft. The idea is often, as he said, that it's just a computer and the pilot doesn't have to do anything. There's a saying that in modern airplanes there will only be one pilot and a dog: the pilot is there to feed the dog, and the dog is there to bite the pilot if he touches anything. That's not quite how it is yet. I'll talk a little bit about the analysis method that we use to analyze accidents, mostly in aviation, but not only there.
And then I'll tell you a short tale of two throttles, or two thrust levers, as they are sometimes called, and also talk about human pilots, how they cope with failures, or don't, as the case may be. And I haven't seen a lot of other talks here about self-driving cars, although they are now becoming a very big thing, so I'll touch on that briefly and offer a tentative conclusion. I can't see very far into the future, so I'm not sure if I'm right about that. So what is automation in airplanes? The most obvious thing is the automated flight controls, on every airliner and on many small airplanes these days. There used to be a requirement for a simple autopilot even on small private single-engine airplanes if you wanted to fly under instrument flight rules. That has been relaxed somewhat now, but many small planes still have them. So there are three levels of flight controls. The first one is manual flight, where the pilot moves the flight controls and the airplane does what it's told. Then there's the simple autopilot, where the pilot just sets airspeed, altitude, climb rate, things like that. And there are managed modes, where there's a more sophisticated computer which has knowledge about the whole flight, with waypoints and altitudes. And there are other automated systems, not only the flight controls. Spoilers have to extend on the ground to help slow down the aircraft. The high-lift devices are automated. Radios may be auto-tuned. There's the computer that controls the engines, the full-authority digital engine control. There are things like cabin pressurization, and many other small subsystems are automated, as they are in cars these days. So what is automation not? It is not yet, except for a very few specialized drones, a self-flying aircraft. The pilot in command is still in command at all times. You can turn off the automation; you can hand-fly the aircraft at any time if you want to. And barring any serious errors, which are extremely rare in commercial aircraft.
The airplane does what it's told. The autopilot really doesn't have any decision-making capability, except at the very lowest level, deciding on a bank angle to make the right turn, things like that. It is also not a panacea for any errors that the pilot can make. You can still fly a highly computerized modern aircraft into the side of a mountain if you want to. Some military aircraft actually have systems that will prevent you from flying into a mountain, if they're active, or if you've passed out, but airliners don't, at this time. And of course the pilot in command still bears the ultimate responsibility for the safe conduct of the flight. So, as I said briefly, manual flying is just stick and rudder. You move the stick and your rudder pedals, and the airplane moves the control surfaces: mechanically on small airplanes, hydraulically assisted, or even computer-assisted on some airliners, on most modern airliners, which is called fly-by-wire. You may have heard about that. Then there are the simple autopilot modes, where you directly select a heading and the airplane flies that heading. And the managed modes, as I said before, where you have a sophisticated flight management system, which in turn sets headings and climb rates and things like that on the autopilot proper. These systems are not super reliable. They can be thrown off by many things, and mostly they turn off when there's any small error in any of the subsystems, in any of the various input values: airspeed, altitude, engine power, anything. If any of those have invalid readings, it'll turn off, and the pilots have to assume command in that case. The automatics cannot handle basically anything unexpected. Most air data sensors are installed in triplicate, so if only one of them disagrees, the other two are usually taken as valid. But if all three disagree, then the system just says, I don't know what the speed is anymore, and all the automatics drop out.
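The triplex voting just described might be sketched like this (my own toy illustration, not any certified avionics code; the five-knot tolerance is an invented number):

```python
def vote_airspeed(readings, tolerance=5.0):
    """Triplex voting: given three airspeed readings in knots, return an
    agreed value, or None if no two sensors agree within tolerance."""
    a, b, c = readings
    # Look for any pair that agrees; the odd sensor out is simply ignored.
    for x, y in ((a, b), (a, c), (b, c)):
        if abs(x - y) <= tolerance:
            return (x + y) / 2
    # Total disagreement: "I don't know what the speed is anymore" --
    # the automatics drop out and the pilots take over.
    return None

# One faulty sensor is outvoted by the other two:
print(vote_airspeed([250.1, 249.8, 180.0]))  # averages the agreeing pair
# All three disagree: no consensus, the automation disconnects:
print(vote_airspeed([250.0, 210.0, 180.0]))  # None
```

Real air data systems are considerably more involved, but the basic shape, two-out-of-three agreement or give up, is the point here.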
And most of the computer-assisted manual flying is also turned off in that case. So, this is very briefly the method that we have developed at the University of Bielefeld under Professor Ladkin for analyzing accidents. It's called Why-Because Analysis. It uses a formal notion of causality, the counterfactual test, and with it you can make a very nice graph of an accident; they're usually bigger than this one. It gives a more or less objective criterion for causality: when different people with some experience in the domain make Why-Because Graphs of an accident, the graphs usually come out very similar to each other. So, there's a lot of automation on modern airplanes, and it's quite hard to get it right. One of the reasons is that, unlike in many situations in cars or in rail vehicles, there is no default safe state. You can't just turn everything off and stop by the roadside. So the engineers always have to plan for the many eventualities of what can happen in the air, and decide what, given a certain set of circumstances, is the safest state for the airplane to be in. That is not always unambiguous; it's a very hard decision to make, and sometimes they get it wrong. And sometimes you get into a situation where, for the set of measured values the system sees, the decision that is correct in almost all circumstances happens to be the wrong one: the computers get the same inputs, take the designed action, and that action leads toward an accident. Those cases are very few and very rare, but these things can happen. So, a few of the decisions that the engineers have to make when designing the automation in airplanes: what to do if things fail, if certain individual things fail, if a combination of things fail; little motors fail, sensors fail, some actuators fail, a hydraulic system fails, anything like that. What to do in that case with the remaining systems?
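The counterfactual test at the heart of Why-Because Analysis can be illustrated with a toy model (entirely my own sketch, not the WBA tooling; the factor names and the outcome function are invented for illustration):

```python
def counterfactual_test(outcome_fn, factors, candidate):
    """Counterfactual test (sketch): a factor is a necessary causal
    factor for an outcome if the outcome occurred, but would not have
    occurred in the nearest world where only that factor is absent."""
    if not outcome_fn(factors):
        return False                    # no outcome, nothing to explain
    nearest_world = dict(factors)
    nearest_world[candidate] = False    # remove just this one factor
    return not outcome_fn(nearest_world)

# Toy model of a runway overrun: it happens only if the spoilers fail
# to deploy AND the runway is wet (both factor names are invented).
def overrun(f):
    return f["no_spoilers"] and f["wet_runway"]

facts = {"no_spoilers": True, "wet_runway": True}
print(counterfactual_test(overrun, facts, "no_spoilers"))  # True: causal
print(counterfactual_test(overrun, facts, "wet_runway"))   # True: also causal
```

In a real Why-Because Graph both factors would appear as necessary causal factors of the overrun node, which is why the graphs come out similar no matter who draws them: the test, not taste, decides what goes in.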
And what to tell the pilots? Naively, you might assume the pilot wants to know about everything that is broken, every little valve, every little system on the airplane. But if a lot goes wrong at the same time, then a decision has to be taken: which of the things that have gone wrong are the most important for the flight crew to know? And that's not trivial at all; it can very easily lead to sensory saturation of the pilots, so they don't know what is what anymore, because alarms are blaring from all sides and there are lots and lots of displays that they have to watch. And so certain error messages are suppressed in certain stages of the flight, so as not to overwhelm the pilots. And some functions are essential to have on the ground. For example the wing spoilers, those big flaps on top of the wings that come up after touchdown, are important on landing to dump the lift so the airplane doesn't jump up again. Because it touches down still at a speed at which it could fly; at least airliners do, for small airplanes it's a bit different, but airliners touch down well above their minimum flying speed, so they need some means to make sure they don't lift off again. They still do sometimes, but not very often. The spoilers destroy most of the lift, so deploying them in the air close to the ground is extremely dangerous. So the computer has to be absolutely certain, so to speak, that the aircraft is on the ground when it gives the command to deploy the ground spoilers. If it does that a few seconds too early, when the airplane is still 100 meters above the ground, that will likely be a fatal accident. So in most jet airliners, not in all propeller-driven airplanes, but in almost all jet airliners, there is automatic thrust management.
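Before moving on to thrust: the kind of ground-spoiler condition just described might look roughly like this (a hypothetical sketch; real aircraft use the type-specific logic from their manuals, and every name and threshold here is invented):

```python
def ground_spoilers_deploy(armed, left_gear_compressed, right_gear_compressed,
                           wheel_speed_kt, thrust_levers_at_idle):
    """Hypothetical ground-spoiler deployment condition. The point is the
    asymmetry: deploying in the air close to the ground is likely fatal,
    so several independent 'we are on the ground' cues are all required."""
    weight_on_wheels = left_gear_compressed and right_gear_compressed
    wheels_spun_up = wheel_speed_kt > 72   # invented spin-up threshold
    return (armed and thrust_levers_at_idle
            and weight_on_wheels and wheels_spun_up)

# Normal landing: every cue agrees -> deploy.
print(ground_spoilers_deploy(True, True, True, 120, True))   # True
# Still airborne: no deployment, even though the system is armed.
print(ground_spoilers_deploy(True, False, False, 0, True))   # False
# On the ground, but one thrust lever not at idle: also no deployment.
print(ground_spoilers_deploy(True, True, True, 120, False))  # False
```

Note the third case: the conservative logic that protects you in the air can also withhold the spoilers on the ground when the inputs conflict, which is exactly the kind of trade-off the engineers have to weigh.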
So the computer does not only control where the nose of the airplane points, but also how much power the engines produce. And there are two different, one might call them, philosophies between the two major airframers. Boeing, and most others too, use back-driven throttles: the computer sets the thrust and moves the thrust levers to match the commanded thrust setting. Airbus has a different system where the thrust levers remain in one position throughout the entire flight; basically after takeoff, when thrust is reduced, they remain in one position for the main climb and cruise and descent and everything, and the computer tells the engines directly which thrust to produce. And there's an argument about which one of the systems is better, but I'll show you three accidents in which the thrust lever system played a role. So the first one has a little video. You will see, I think from two different camera perspectives, two airplanes of the same class landing. They are small airliners, 150 to 200 people. The first one is a normal landing; it's already pretty slow, takes its time. And the next one is the accident flight. It's on the same day, only minutes apart, at the same airport. And you can see that one has slowed down and the other one is still going very fast. So there's the first one, and there's the second one. And as you can imagine, that didn't end well. It was one of the worst aviation accidents, maybe still the worst today in Brazil, where nearly 200 people died. And as you can see, this is a readout of the digital flight data recorder, and the first two lines are the interesting ones. It says TLA, which is thrust lever angle. Normally what happens on landing is that, just before touchdown, the pilot pulls both thrust levers to idle.
The engine thrust goes down to idle, the airplane touches down, and the pilot engages reverse thrust, spoilers, brakes, everything, to slow down. What happened in this case is that the pilot moved only one of the thrust levers to idle and left the other where it was, and put the one thrust lever into reverse, but not the other. That led to the computer getting conflicting information about whether the pilots actually wanted to land or not. So it didn't deploy the automatic wheel brakes, it didn't deploy the spoilers, and it gave reverse thrust on only one engine. So that went pretty badly. And some people said, well, with tactile feedback from the thrust levers, if the pilots had been used to that, they would have noticed earlier. We can't really be sure, because the pilots also died in the accident, but some people made a case that moving thrust levers would have been a lot better in this case. So, is that always better? Here is another throttle-related accident, and this time it was a Boeing 737 at Amsterdam Schiphol Airport. There was a small technical malfunction which caused the computers to think the airplane was actually eight feet underground; that was the reading the radio altimeter gave, due to the way it failed. And so the system said, oh, I'm below 30 feet, I have to reduce the thrust to idle. And that's what it did, although the airplane was still a couple of hundred feet up. The pilots didn't notice early enough and let the speed decay; the wing stalled, the airplane crashed, and nine people died. It was only a moderately hard crash, so most people actually survived. But there was still a problem. And given the way the autothrottle system works, in this case, if the thrust levers had been static, this wouldn't have happened, because the pilots would have pushed the thrust levers above a certain detent and the system wouldn't have reduced the thrust automatically again. So it's very hard to say which system is better in total.
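The Schiphol failure mode boils down to a few lines (my simplification of the accident chain, not Boeing's actual logic; the 30-foot threshold follows the talk, and the mode names are illustrative):

```python
RETARD_BELOW_FT = 30  # flare threshold per the talk; illustrative value

def autothrottle_mode(radio_altitude_ft, engaged=True):
    """Sketch of the failure mode: the autothrottle trusts the radio
    altimeter, so a faulty reading of -8 ft looks like the flare has
    begun, and thrust is pulled to idle while the airplane is in fact
    still a couple of hundred feet up."""
    if engaged and radio_altitude_ft < RETARD_BELOW_FT:
        return "RETARD"   # thrust commanded to idle for landing
    return "SPEED"        # thrust holds the selected airspeed

print(autothrottle_mode(400))   # SPEED: normal approach
print(autothrottle_mode(-8))    # RETARD: faulty reading, premature idle
```

The single sensor feeding the decision is the weak point: with no cross-check against the other altitude sources, a plausible-looking bad value drives a perfectly "correct" but fatal mode change.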
You can count the accidents, maybe, in which it played a role, but there are so few, really less than a handful in each case, that they're not statistically significant. So you can't really say by statistics alone which system is better than the other. They both have their own problems, and this is one of those decisions, as engineers, for which you really can't make a decisive argument; so one manufacturer chooses one and the other chooses the other. And there's another one, Asiana Flight 214 in San Francisco. Many of you may remember that. Only three people were killed in this one, because the airplane really burned out only after the crash, after everyone had evacuated. The autothrottles didn't work as expected in this case. The pilots thought, oh, the autothrottles will hold the speed, we don't have to worry about that. As far as I remember, there were five pilots in the cockpit. And when finally someone noticed and pushed the throttles forward, it was already too late. The engines take their time to spool up; the legal requirement is that they may take up to eight seconds to spool up from idle to the power necessary to go around. And there wasn't enough time for that, because after the engines have spooled up, the airplane also still has to accelerate to get back to flying speed. So in this case, again, the wing stalled, the airplane crashed just short of the runway, and three people died. And this third case was even one where nothing was wrong with the airplane; you could argue it was a design flaw, but everything was working as designed. People who are going to fly the aircraft have to learn how the system works, learn everything about it, hopefully. So some more training may perhaps be part of the answer; system knowledge is one thing. Crew resource management has been a big thing in recent decades: the pilot in command is not a dictator on the airplane.
He has to listen to the others, to the other pilot, even though he has the ultimate authority in decisions. So, do pilots always screw up if the automation fails, or if other systems fail? No, luckily not. In these cases it wasn't really the automation, but there are two cases which I will briefly mention. Chesley Sullenberger, everybody knows about him, the movie has just come out: the ditching in the Hudson. A superb pilot, great decision-making, finding the biggest flat surface in the area to put it down on. And Peter Burkill, on the other hand: who here knew about Peter Burkill? Right, a few. He was the one who saved about as many people as Sullenberger when, on approach to London Heathrow, both engines lost thrust, most of their thrust anyway. And he managed to put it down within the airport perimeter, but short of the runway. It was a crash landing, the airplane was destroyed, but nobody died. So that was a pretty good outcome. So, airplanes are one thing; another thing is cars. Does anyone here have a self-driving car, or at least lane assist or something? Not many, I see; many people don't trust these newfangled systems, I guess. One of the big differences is that pilots who are going to fly a highly automated aircraft have to take a long training course, beyond their pilot's license, to learn the specifics of operating that specific aircraft. And maintenance is very highly controlled and regulated; that's another thing. And the thing is, for cars in general, if something's wrong with the engine, you can just pull over to the right and stop, in most cases. Cars cannot just take off and take evasive action in the third dimension. And there are lots and lots of obstacles on the ground: trees, cars, people, houses, everything. Whereas the air is mostly empty. Not entirely: mid-air collisions do happen, but they are very, very few. And the automatic systems in the self-driving or autonomous cars that we have today require constant monitoring.
And if the systems work too well, then drivers may actually forget about them, think they are perfect, and let their attention wander. Pilots are sometimes prone to that as well. But the thing is that in cruise flight, if the automatics drop out, the pilots have on the order of minutes to react, or at least several seconds. Whereas in a road car, if the automatics drop out while you're in a curve, you have fractions of a second to save the car, with the current state of the technology. Some of you have probably heard about the trolley problem, or trolleyology, as it's sometimes called. It basically boils down to this: a fully autonomous car, a highly automated car, may eventually have to make the decision between killing the occupants and killing people on the road. And I think that is fundamentally an unsolvable ethical problem, one that we cannot just leave to the engineers or the car manufacturers to decide, say, that the occupants are always more important than people on the road. What if there's only one person in the car and there's a crowd on the road, and you have to decide between steering the car into a tree, killing the sole occupant, and killing the several people in front of the car? These are situations that may actually happen. So I really can't see what the right answer to that is, if there is one; maybe there isn't one. Some engineers have actually suggested that making a random decision in that case is the answer. I'm not too sure about that either. But whichever decision the software takes at that moment, people will die, and it will take the blame either way. And we don't know yet how that's going to turn out in front of the courts. So, automation is hard to get right, and in some cases, such as self-driving cars, it may be impossible to get absolutely right. Which state is the safest for the systems to be in, and at what time? Who knows? It's very, very hard to get right, even in comparatively limited systems such as airplanes.
And what to display to the operators, and when? In many cases it would help the pilots a lot, when the automation drops out, to know intimate details of how the system works internally. Airbus has some logic diagrams in their pilot's handbook, but they are labeled "for info", which means they're not required for any exams; they're just interesting to know. But in the case of the logic for the extension of the ground spoilers, it's quite helpful to know exactly which conditions have to be satisfied for the ground spoilers to deploy. Some of these problems, I think, cannot be left to engineers and scientists alone. We need psychologists, and maybe sociologists, people who know about the human psyche, who know how people think, how they react, how they process information, to make good engineering design decisions and build safer systems. And as I said, some of the fundamental ethical problems may turn out to remain unsolvable. Thank you. I think we have a little bit of time for questions. Yes, we actually do. We have some time for questions, and we're going to start with the internet, if there are any questions. No, there are not. Then it's microphone number three. Yes, you mentioned the ethical problem of the decision-making, the trolley problem. Whenever this comes up regarding automated driving systems, whether it be flight control or car driving, I always get a little bit mad when philosophers come up with that. There is one decisive decision you can make, and that is that the whole thing should act predictably. Especially in road traffic it is of the utmost importance that all participants behave predictably; swerving out of your lane is the most dangerous thing you can do. And if you have to make this decision, people say you have to make a decision, then I say, no, there is a definitive safe state: drive with enough distance to the guy in front of you. Don't tailgate. Don't speed up, because if you're a regular driver... Hello, please ask your question.
OK, so the question is, why are people always saying it's not ethically decidable? If just keeping enough distance would solve our problems, that would be fine. But cars are not the only participants in traffic. There are people, right? And they can just jump in front of a car. That is not predictable. Yeah, you can require people to behave predictably; good luck with that. I would like to counter that. OK, unfortunately there's not much room for discussion right now. Microphone number two, please ask a concise question. OK, let me try. You said about automation in airplanes that whenever there is a small malfunction, the autopilot will disconnect and expect the pilot to fix the situation, right? That is my understanding. Not at the smallest problem, but at some, yes. OK, but it is my understanding that the pilots are still expected to follow procedures and not make random gut decisions, in most cases. So my question is, do you have statistics on in how many cases the standard procedures were actually not applicable, and in how many of those cases the pilots actually managed to save the flight? No, I'm not aware of any such statistics. And one of the problems with that is that, in general, the flight data recorder is only read out when there was an accident, and it is strictly off limits in all other circumstances. Some airplanes have a quick-access data recorder which airlines can read routinely, but only anonymized. And I don't think the airlines publish statistics about that. OK, last question, microphone number four, please. Yeah, I just want to bring this back to the IT security part. What I find very good about the way accidents are handled in aviation is that the report is completely public. So if you want to read about the Challenger catastrophe, you can actually read all the technical details and all the things that happened; all that information is there, and the way it was addressed.
Now the question is, why is this not happening in the IT sector, where clearly millions of people are affected, and somehow we haven't reached a stage where the data and the analysis of the data are public so we can all learn from it and get better, as has been the way in aviation? I think the short answer is, excuse me. I think the short answer is: because there is no legal requirement, and if there weren't one for accident reports to be published, then many airlines wouldn't do it. But why? Because it's embarrassing if you have an accident. That's basically it, I think. OK, very last question, microphone number one, please. Hi. One of the reasons we have automation in aircraft in the first place is to reduce pilot workload, where too high a pilot workload is a major cause of accidents. It seems like one of the issues we're talking about here is that, in a situation where something's gone wrong, the presence of that automation and the need to understand it means you've got an even higher pilot workload in that situation, the question of "what is it doing now?". What is the industry's approach to that effect, and what do you think about it? I think the traditional approach is to just pile on more automation, so that if that fails, the pilot has an even higher workload. But the current trend is that manufacturers and airlines are going back, very, very slowly, to letting the pilots hand-fly more often. For a long time the mantra was: use automation whenever possible, the highest level of automation that is appropriate for the situation. So only the takeoff and touchdown were flown by hand. And now it is very often: use the appropriate level of automation. And that means, if there's not a very high workload and not a lot of traffic, then hand-fly the approach, for example, to keep in good practice and to maintain proficiency for all situations, hopefully. Thank you. Please give a warm hand of applause for Bernd Sieker.