So, I'll just get started. This talk was originally announced as being in English. Is there anyone who doesn't speak German? Alright, then I'll try to do it in English, if that's fine with the rest of you. The slides are in German, I apologize for that. I talk more than is on the slides anyway, so I'll try to do it in English. No problem. There doesn't seem to be a Herald for this talk, so I have to introduce myself, I guess. My name is Bernd Sieker, and I'm talking about autopilots in aviation and autopilots in cars: automated driving, autonomous driving, self-driving cars. What can go wrong with autopilots in airplanes? And what can also go wrong with autopilots in self-driving cars? We don't know a lot about that yet, because there aren't that many. But there have been a lot of problems with autopilots in airplanes. Oh, sorry, it's advancing by itself. Okay, I'll start with this one, I hope it doesn't go on automatically. You already know everything, so maybe you can go now, because you've seen everything. So, let me adjust this. Okay, then let's start. This is a brief outline; most of you, I guess, can read German. So, I'll talk about automation in means of transport generally. Today it will mostly be airplanes and cars. There is a lot of automation in trains. There are actually self-driving trains on some limited lines; some Paris metro lines are completely driverless. And there's the Bay Area Rapid Transit in San Francisco, which is also driverless, and has been for about 10 years or so. But today I'm not talking about trains, just planes and cars. I'll try to talk for about 45 minutes, and then there'll be 15 minutes for questions at the end, of which I hope there are many. All right, so automation, what's it about? There are different levels. The lowest level, of course, is that we do all the driving or all the flying ourselves. My car has hardly any automation.
I think it has an anti-lock braking system, and it has an electronic stability program, and it has cruise control, but a dumb cruise control, not an adaptive one. So, basically, I have to drive by myself. Most small airplanes are also hand-flown all the time. Then there's partial automation, in which the car does some things automatically, such as lane changes or adaptive cruise control, things like that. And then there's what the PR departments of the companies like to call autonomous driving or self-driving cars or driverless cars: cars which can perform some tasks autonomously, which can, for example, drive on the highway for extended periods of time, but not in the city. So that's partial automation, and then there's full automation. I guess some drones, some quadcopters, can do fully autonomous, pre-programmed flights. Airplanes can fly a lot of a trip automatically, but they are not really autonomous systems; the pilots always have to tell them, quite specifically in some cases, what to do. And of course for cars, that's the ultimate dream, I guess: a driverless car that stops at your front door, you get in, tell it where to go, and it will deliver you to another front door, completely without supervision and without intervention. So, there are some differences in automation between airplanes and cars. The environment in which they operate is very different. For airplanes, the environment is very highly regulated. The airspace is very highly regulated: in some parts of the airspace, in some countries, you can fly almost freely; in other parts you can only fly with clearance from air traffic control. So, somehow the sound has changed, I don't know, is it better or worse? It's better, good, thank you. Whereas for cars, the environment is almost uncontrolled. You can just get in your car and drive anywhere, with few exceptions. If you like, and have the car for it, you can even go off-road.
Maintenance and operation are also very highly regulated for airplanes. For cars, it is quite different from country to country. There are some countries where it is hardly regulated at all, who can drive a car or who can repair it; in other countries it is more highly regulated. In Germany, every car still has to go to a technical check-up every two years; that is mandated by the government. And as far as training and education go, the difference is similar again: very highly regulated for airplanes, and somewhat regulated for cars. But once you have your driving license, in most countries you never need to take an exam again, and you can drive until you drop dead or whatever. And another thing is that the automation in aircraft doesn't need to be super reliable, because there are always pilots there. And it regularly does things wrong: drops out, or arrives at the wrong altitude, or busts the altitude, flies higher than intended, things like that, or just stops for any reason. Airplanes also never take off by themselves; the take-off of an airplane is always done by the pilot. On the other hand, for cars the requirements on reliability are extremely high, if we are talking about higher levels of automation where the car basically drives itself and the driver is no longer necessary. But airplanes aren't there, and it doesn't seem like they are going towards full autonomy anytime soon. So, I asked this at the Chaos Communication Congress a year and a half ago: are there any pilots here today? Private pilot, commercial pilot, ATP, I was asking. Single-engine instrument, that's good. So, when flying an airplane, there are usually three levels on which you can fly it. If you have an automated airplane with an autopilot, you can fly manually, just moving the yoke or the stick. And you have the simple autopilot: you just tell it to go in a certain direction or hold a certain altitude, and it will do that.
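To give a feel for that simple autopilot mode: holding a selected altitude is, at heart, just a feedback loop. This is a toy sketch, not real avionics logic; the gains, units, and function signature are all made up for illustration.

```python
# Toy altitude-hold loop: a proportional-derivative controller that
# turns the altitude error and the current climb rate into an
# elevator command. Gains are illustrative, not from any real system.
def altitude_hold(selected_ft, current_ft, climb_rate_fpm,
                  kp=0.001, kd=0.0005):
    """Return an elevator command clamped to [-1, 1] (positive = nose up)."""
    error = selected_ft - current_ft
    command = kp * error - kd * climb_rate_fpm
    return max(-1.0, min(1.0, command))

# 500 ft below the selected altitude, not yet climbing: nose up.
print(altitude_hold(35000, 34500, 0))  # 0.5
# On altitude and level: no correction.
print(altitude_hold(35000, 35000, 0))  # 0.0
```

The derivative term damps the correction as the climb rate builds up, so the airplane captures the altitude smoothly instead of overshooting; real autopilots are far more sophisticated, but the principle is the same.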
And there is the so-called managed mode, where you can pre-program an entire flight with waypoints and altitude restrictions, and it will fly that. Then there are various automated subsystems, but it's the same in a car, basically, and some systems may come on automatically. The engine control is usually done by a digital computer these days, in airplanes as in cars. Another big difference is that a lot of the automation in airplanes depends very heavily on external infrastructure. That is true to some extent for cars, if they use GPS for navigation, but for airplanes there is a lot of terrestrial infrastructure that is required: not only the airports, but radio beacons and instrument landing systems and things like that. So an autopilot doesn't really have any more information than the pilot. It doesn't decide anything; the pilot tells the autopilot what to do, and then the autopilot basically moves the control surfaces and flies the airplane. And the autopilot also can never relieve the pilot of his responsibility. In an airplane there is always exactly one pilot in command, and he bears the ultimate responsibility for the safety of the flight, and he needs to do whatever is necessary to ensure the safety of all on board. And I don't see commercial airliners flying with fewer than two pilots any time soon. I think some airlines would like to do it, but I don't think it's going to happen soon. There are some small commercial jets which can fly with a single pilot, even commercially with passengers, but over a certain size, and unless they have a special certification, there always have to be two pilots. Of course there are a lot of advantages to automation. Automation can often react much quicker, and in many cases it does have more inputs, especially in cars: it will have infrared vision and radar and LiDAR and ultrasonic sensors. Airplanes usually don't have a lot more sensors than the pilot.
There's a weather radar, but that is usually not used for flight control or for the autopilot; it is only interpreted by the pilot. And of course the autopilot can do monotonous tasks, such as just keeping the altitude and the course in cruise flight, which usually goes on for several hours without really anything to do. The autopilot cannot be distracted by most things. So I'll show you a brief video of one of the more impressive things that an autopilot does. Let's see if I can get that full screen. I hope there will be some sound. No, the sound is important. It's not coming out of this one. Is that really the best way to do it? It should work the other way, but it didn't. I'll try it this way. It's not super important, but it's quite interesting to hear, because you can hear the altitude callouts from the flight warning computer. And that's the view out of the front window. Did you notice? The screen was only gray, but the video was actually running, and it counted down the altitude: 50, 30, 20 feet above ground. And you could hear the touchdown before you could actually see the runway. That's Autoland, and that is something that human pilots can't do and are not allowed to do. Okay, I hope it's working now. So that's something. I imagine it's quite disconcerting to look out the front window, not see anything, and just feel the touchdown. You have to trust that the computer is going to do it right. One thing that can happen if automated systems are too reliable is that the pilots, or the drivers, as it may be, will come to rely on the automation. Imagine a car that doesn't fail, on average, for 100,000 km. You can drive for several years and never see a failure, and you come to believe that the automation is perfect. Until one day it does have a failure and drops out and doesn't do what it's supposed to do; it doesn't turn, or whatever.
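To put a rough number on that complacency effect, here is the back-of-the-envelope arithmetic; the annual mileage figure is my assumption, not from the talk.

```python
# If the automation fails on average once every 100,000 km, and a
# typical driver covers 15,000 km per year (an assumed figure), then
# the average driver goes many years without ever seeing a failure.
mean_km_between_failures = 100_000
km_per_year = 15_000  # assumed typical annual mileage

years_between_failures = mean_km_between_failures / km_per_year
print(round(years_between_failures, 1))  # roughly 6.7 years per failure
```

Long enough, in other words, for a driver to stop expecting the failure that eventually comes.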
And so it may actually be the case that quite unreliable automation is, at the moment, a lot safer than very reliable automation, until we can get to extremely high reliability. I don't know when that is going to happen. But this is perhaps one thing that has, paradoxically, kept aviation so very safe: the automation isn't foolproof. All pilots know that at any time the autopilot can drop out or be turned off, and they have to hand-fly the aircraft. In the automatic landing in low visibility that I just showed, the autopilot can drop out at any point. There is dual redundancy, which means that if one autopilot drops out, the aircraft can still land. But if something else fails, if the instrument landing system, the ground beacon that guides the airplane down, fails, then the only thing the pilot is allowed to do is initiate a go-around, climb away, and not land. A human is not allowed to land in those conditions. That may change with modern artificial vision systems: some business jets already have infrared vision systems that are superimposed on the windscreen, and they may use that as a backup system if the autopilot fails, but not as a primary system for landing. So, it's never easy to get automation exactly right. And there are some cases in which the automation did something that the pilot didn't expect and was surprised by, or the human pilot made a small error, or a small technical system didn't work as expected or as designed, in some cases with catastrophic consequences. One of the design decisions that has to be taken when designing automation and cockpit systems is what needs to be done if things go wrong. And the first level is: is there a redundant subsystem that can take over? Airliners always have two autopilots; sometimes they have three independent computers.
Fly-by-wire airliners have several; Airbus typically has seven flight control computers, a single one of which is sufficient to get the airplane safely back on the ground. With less comfort than if all seven work, but it'll suffice to land. And the next fallback level is, of course, to hand control back to the human. In airplanes that happens frequently, so pilots know how to do it and know what to do, and they are frequently retrained, at least every six months. I just read that Qantas checks its pilots seven times a year, so they are very thorough in that. Another decision is, if things go wrong, and if many things go wrong, which sometimes happens in aircraft and probably in cars too, the engineers designing the system need to decide, together with human-interface experts and psychologists and all kinds of other people, which information to show to the pilot at what time. It may be that there are 30 or 40 error messages waiting in the queue, and during landing you don't want to show all of them to the pilot, only the ones that he needs to know at that point. And that is a very tough decision to take when designing such a system: what to display at which point. Some things are important to know, some things less so. And I have a few stories about accidents that were caused by problems with the automatic thrust system. And they are very different systems. Boeing and most of the rest of the world have thrust levers that move with the automatic thrust control: the computer decides to reduce thrust, and you can see the thrust levers moving back a bit. And Airbus, in its fly-by-wire aircraft, uses static thrust levers, which are not back-driven. The pilot sets them to a certain position, which means automatic thrust control, and then the computer regulates and controls the thrust, but the thrust levers stay in the same position. That may seem like a dumb idea, but we're not exactly sure.
There have been accidents with both systems, in which the system was involved. And here's another video; in this one the sound isn't important, so I'll try to get it full screen. This is São Paulo, Congonhas Airport. This is a normal landing, probably also an Airbus A320. You notice it's slowing down; it's going pretty slow. And this one is the accident aircraft. You immediately notice that it hasn't slowed down as it should. Here's the normal one again, and in the background the accident aircraft. I think the normal one takes about 13 seconds or so to move through the picture. And this is the accident aircraft: it takes about 3 seconds. It didn't slow down, and it exited the end of the runway still going 100 knots, about 185 km per hour, shot over a big six-lane road and crashed into a building next to a fuel station. And it burned, and 200 people died. So what happened? This is an excerpt from the digital flight data recorder of the accident flight. You can see these two lines are the left and the right thrust lever, and you can see that the pilot pulled one back. This is idle, and if you pull it further back, that's reverse thrust. He pulled one back into reverse thrust and left the other one sitting in forward thrust, in normal forward thrust, the position where it had been the entire flight. It has never been found out exactly why he did this. It may have something to do with a procedure that you had to use if one of the thrust reversers was broken and deactivated, which was the case on this flight. It was deactivated, and the procedure was to pull both thrust levers to idle and select reverse only on the good reverser, not on the other one. What he actually did: maybe he remembered only to use reverse thrust on the good engine, and he forgot to pull the other thrust lever back entirely. So it still had significant forward thrust on one engine and reverse thrust on the other engine.
And when the thrust levers are not both at idle or in reverse, various other automatic systems don't work: the ground spoilers, which destroy lift and help brake the aircraft at higher speeds, and the automatic wheel brakes. And it took the pilots 11 seconds to figure out that they weren't braking at all, neither aerodynamically nor with the wheel brakes. That was far too late. If they had realized it immediately and used full manual braking on the wheel brakes, they would probably just have stopped on the runway. But this was a little thing that the pilot did, and the aircraft didn't really tell him what was going on. He noticed he didn't have any ground spoilers extending, but other than that, they were completely left in the dark about what was going on. So this slide is basically what I already told you, just in German. And this had perhaps something to do with the static thrust levers, because just before touchdown the computers reduced thrust to idle automatically, but the thrust levers stay in the forward position. And then on the Airbus there's a callout that says "retard, retard", which means pull the thrust levers back, because we're not using forward thrust anymore anyway. So maybe in a system with moving thrust levers this wouldn't have happened. But there are other cases. Here's another accident. It wasn't quite as catastrophic; it was only nine dead. In this case there was also a small technical malfunction on the aircraft: one of the radar altimeters stopped working. And that radar altimeter is coupled to the automatic thrust control for landing, which means that if it senses that the aircraft is lower than 27 feet above the ground, it reduces thrust to idle. And the malfunction in this case meant that the signal was reflected directly from the transmitter to the receiver, and so it indicated minus 8 feet.
So the computer said: oh, we're below 27 feet, reduce thrust to idle, while they were in fact still 200 feet up. The pilots noticed and pushed the thrust levers forward again, but that doesn't disable the automatic thrust control, so the computers retarded the thrust levers again, this time unnoticed by the pilots. The speed decayed, they went into a stall, lost control, and crashed just a few hundred meters short of the runway. And this particular case shows why you can't say that either system is actually a lot better or worse: this particular scenario wouldn't have happened with the Airbus-style thrust levers. They would have just shoved them forward, and if you push them forward of the normal auto-thrust position, you are in full manual thrust control. So it's really hard to say which system, if any, is better; they are just different. And here's another one, also to do with thrust control. In that case the automatic thrust was disabled and didn't turn itself back on, although the crew thought it would. So, as in Amsterdam, the speed decayed, they went into a full stall and crashed. One of the tragic side notes on this one is that one of the three people who were killed may actually have survived being ejected from the aircraft, but was then run over by a rescue vehicle. It is clear that she was run over by the rescue vehicle, but it's not quite clear if she was dead already or was killed by that additional accident. There was so much foam already covering everything that the driver didn't see her. So what happened in these cases? In two cases the aircraft had small technical defects. The third one was in San Francisco; that was Asiana, I think. The aircraft in that case was perfectly okay, and there were even four or five people in the cockpit, because some of the pilots were being checked out.
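The two automatic-thrust behaviors from the São Paulo and Amsterdam accidents above can be sketched roughly like this. This is an illustrative simplification, not real avionics code; the thresholds, units, and function names are mine.

```python
# Sketch of the two accident mechanisms described above.
RETARD_HEIGHT_FT = 27  # below this radio altitude, thrust goes to idle

def autothrottle_thrust(radio_alt_ft, commanded_thrust):
    """Amsterdam case: any radio-altimeter reading below 27 ft (here
    the faulty -8 ft) retards thrust to idle, even though the aircraft
    was actually still at 200 ft."""
    return 0.0 if radio_alt_ft < RETARD_HEIGHT_FT else commanded_thrust

def ground_deceleration_armed(left_lever, right_lever, idle=0.0):
    """Sao Paulo case: ground spoilers and autobrakes only deploy when
    BOTH levers are at idle or in reverse (reverse is negative here)."""
    return left_lever <= idle and right_lever <= idle

print(autothrottle_thrust(-8.0, 0.6))        # 0.0: faulty reading retards thrust
print(ground_deceleration_armed(-0.5, 0.7))  # False: one lever left forward
```

The point of the sketch is that each condition is individually sensible, and the accidents happened in the corner cases: a sensor feeding a plausible-but-wrong value into the first, and a single lever position silently disarming the second.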
But they didn't realize that the auto-throttle didn't work as they thought it would. Maybe there's not enough training in some airlines, maybe there's poor crew resource management, because one of the check pilots actually told the captain, the pilot in command, that the airspeed was lower than it should be, but there was no reaction, and when he finally pushed the thrust levers forward again to go around, five seconds before the crash, it was too late to make a significant difference. I have a small side note here about pilots who did the right thing when something major went technically wrong with the airplane. You probably all remember Captain Sullenberger, who ditched his Airbus A320 in the Hudson River after Canada geese disabled both his engines; he completely lost thrust shortly after takeoff. Simulations later found that he might have been able to return to a solid runway, but equally likely, or with some different probability, he would have crashed into a residential area, and so ditching in the Hudson River and saving everyone in the end turned out to be the right choice. And there's another flight that was also rescued by good airmanship: British Airways Flight 38 lost thrust on both engines on short final approach, and by excellent energy management with what he had left, retracting the flaps one step, the pilot managed to land on the airport, but not on the runway: it crash-landed on the grass and stopped just short of the runway. In this case, too, everyone survived, a few people with injuries, but despite several tons of fuel spilling onto the airport there was no fire. If it had ignited, it would have looked very different, but in this case everyone survived as well. I briefly talked about the fact that it's a hard decision for the engineers and designers what to show to the pilots, and the extreme case was Qantas Flight 32. Who remembers that? Yeah, of course pilots will remember that.
An Airbus A380 from Qantas suffered what is called an uncontained engine failure, and if you read the report, the language there is actually very interesting. One of the big compressor discs or turbine discs, I can't remember which, is a big solid piece of metal to which the compressor or fan blades are attached; some of them are as heavy as maybe 50 kg, so it's a really big piece of metal. And the report says that the turbine disc was "liberated": it actually broke apart at very high speed, and three big chunks of it exited the engine. They're basically like artillery projectiles; they would go into the fuselage and exit the other side without noticeably slowing down. It was very lucky that the cabin wasn't hit, so no one was hurt in this case. But the Airbus aircraft are very highly computerized, like any modern airliner, and for at least 50 minutes one crew member was going through all the error messages about failed systems. Some pilot wrote that 21 of the 22 major systems had malfunctions on this aircraft, so hardly anything was working as it should. One engine couldn't be controlled anymore, there were fuel leaks, electric control lines were severed, hydraulic lines were severed; they basically lost half of all control surfaces, so it was still just flyable. But in the end, by going through all the error messages, the crew had a very good picture of what was still working and what the aircraft was still capable of. And so they managed to calculate the required landing distance and found, by manually recalculating the too-conservative estimate of the computers, that they could land on the available runway with 100 meters to spare. So that's what they did. They had to approach very fast, because a lot of the high-lift devices weren't working, but in the end they brought it down in one piece, or what was left of it. So we're still a long way away from that with cars, and it's also not very useful in a car to tell the driver what particular aspect of which particular
specific subsystem has failed, because normal drivers aren't engineers and wouldn't understand it anyway; it takes a very good engineering background to understand what the aircraft is doing. But even so, aircraft have automation in only a very limited way. It can do some things very well: it can land with no visibility, it can cruise along comfortably and very smoothly, change altitude, and things like that. But there always has to be a human monitoring the system, ready to take over whenever necessary, and it also can be disabled at any point. Unlike in cars, in an airplane there's usually a few seconds of time: if the computer says, oh, I'm turning off the autopilot, the airplane isn't going to do anything if it's in cruise flight, it's just going to continue flying, so there's a bit of time. There has always been more than one pilot in airliners, and, as I said, they have a better education and better training than car drivers. But also, the aircraft as such, the aerodynamic thing that flies, needs to continue flying, whereas with a car, in most situations you'll be able to stop by the side of the road and take a good look at what has gone wrong. The airplane at least has to go on to the next suitable airport. So, who has seen this before? Is anyone into car automation?
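As a quick reference for the levels discussed next, the six SAE automation levels can be sketched as an enum; the names here are my paraphrases, not the official SAE J3016 wording.

```python
# The six SAE levels of driving automation, paraphrased.
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0           # ABS, plain cruise control: doesn't count
    DRIVER_ASSISTANCE = 1       # one dimension, e.g. steering-only self-parking
    PARTIAL_AUTOMATION = 2      # driver must supervise at all times
    CONDITIONAL_AUTOMATION = 3  # driver may disengage, must take over on request
    HIGH_AUTOMATION = 4         # self-driving within a defined domain, with fallback
    FULL_AUTOMATION = 5         # any road, any weather, no driver at all

print(int(SAELevel.PARTIAL_AUTOMATION))  # 2: where today's production cars are
```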
So, the automobile industry has defined five, or rather six, levels of automation. Up until a few years ago, almost all cars were at level zero. The dumb systems that we have, like simple cruise control, ABS, things like that, don't really count. Then we have some simple automation systems, like adaptive cruise control, lane-change assist, things like that, and self-parking cars, because in most of these systems the driver still has to operate the accelerator and the brake, and the car will only turn the steering wheel; that's a one-dimensional assistance function, or automated function. And all the cars that today say they can drive autonomously, and that are sold to customers, are at level two, no more than that today. There are some level-four cars in testing; there's no level-five car in testing yet, and in my estimate it's going to take a long time until we see level five. So there's some level four in testing, and most of it is level two. And as such, the Tesla Autopilot is actually aptly named, because it operates on a level comparable to an aircraft autopilot: in a very specific domain it can do some things autonomously, but if it has to do something else entirely, it hands control back to the driver. And that's what you saw on my title page: if you enable the Autopilot on a Tesla, it shows this little blue box that says "hold steering wheel". In fact, the Tesla Autopilot will remain engaged if you remove your hands from the steering wheel; for several minutes it will go on driving if you don't touch the steering wheel, but if you do that for too long, eventually it will say: you didn't heed my advice to touch the steering wheel, so I will now disable the Autopilot for the rest of the trip. So yeah, there's some level four in testing. But how safe are current cars with automated driving features? We really can't tell; maybe from a very small number of accidents. And some manufacturers claim that they have driven n million kilometers; I think Waymo, or Google by any other name, said that they have now driven
four million kilometers autonomously without driver intervention, or so they say. They have to report every driver intervention, but in fact they don't report every driver intervention, because they think that they don't need to report a driver intervention if their simulations show that the automatic system would have been able to cope even though the driver disabled it. So they say they don't need to report that, because they could have done it with the automatic system. And if you compare the statistics with manually driven cars, human-driven cars, it's also a bit biased, because practically all automatic cars are new, or at most a few years old, so it may also have something to do with the state of repair, and with how careful drivers are with a new toy. And most of the miles or kilometers driven with the automatic systems enabled will have been on highways and overland, much less in the city, and cities are much, much more challenging. It's going to be a long time until cars can drive completely autonomously through city traffic; if you've ever driven in Rome, that's apparently extremely challenging. So how safe are they going to be in the future? Well, we really don't know either. Obviously, the next levels will be three and four. As I said, there are some level-four cars in testing. Level four means that in a very specifically defined environment, for example on a highway, the car can drive autonomously, and the driver is not required to supervise the automatic system. And in level four it also means there is an automated fallback system, which can at least bring the car into what is called a minimum risk state, which almost always will be driving to the side of the road and stopping. So if the car knows that it's coming to the exit, to the intersection, at which the defined environment in which it is supposed to operate will end, it will alert the driver, saying: you have to take over in 10 seconds, or in 15 seconds, or whatever. And in level three the driver must take over then; that is part
of the derived requirements, so to speak, for the driver. So it's an assumption at level three that the driver will be ready to take over after a request for intervention; they use lots of strangely defined terms in the SAE document. And the capabilities of the automated systems at level three and level four are basically the same, but level three works on the assumption that the driver is ready to take over after a brief warning time, and level four has no such requirement; level four has automated fallback routines that allow it to put the car into the minimum risk state, which may be just stopping by the side of the road or something like that. So most people I have talked to in the automotive industry say that level three is unachievable in practice: we will not do level three, we will do level four, because it can't really work on that assumption. It will be very hard to prove that the driver will always be ready to take over, because while the car is driving at level three under full automatic control, the driver is allowed to do something else, can watch a movie or play cards with his passenger or whatever, but needs to be able to take over after some time. And the other thing is: how long before the actually required takeover does the car have to tell the driver? In 15 seconds? At 150 km/h on the highway, 15 seconds covers a lot of distance, more than 600 meters. So the car must have the ability to look that far into the future, to look around the bend. So the consensus really seems to be that level three doesn't work in practice. And level five, that's just what I said, that's what we know from science-fiction movies: the car that has no driver, that maybe doesn't even have a steering wheel; it's just a cab that you summon when you want to go somewhere. And level five really stipulates that it must work like that everywhere, under any road conditions, under any weather conditions. It doesn't mean it has to complete the entire trip under conditions in which a human driver couldn't drive either, or when there really is no information about its surroundings, for example in a blizzard, something like that; that is the exception. But other than that: dry weather, rain, storm, fog, all conditions. We don't really see that as achievable in the near future; maybe someday. So how safe do cars need to be? This is from an actual industry standard: the tolerable risk is the risk that, based on current societal values, will be accepted in a given condition. So it's very, very vague, and it's going to be a huge problem if we're going to develop self-driving or autonomous cars that we're going to sell all over the world, because this actually depends a lot on the society in which you're going to sell them. There may be societies which accept much lower standards of safety, as long as they are better than current human-driver performance, while in some countries they may have to be a lot better than human drivers before we're going to accept them. But it's going to be hard to measure current societal values. And one other thing is that, traditionally, for safety-critical systems we have been doing formal requirements and a formal design specification, formal verification if possible, such as for the flight control system, the fly-by-wire control system, on airliners, or even more so on fast jets, because fast fighter jets are aerodynamically unstable, and if the flight control fails, it will literally rip the wings off within 200 milliseconds or something like that. So that has to work at all times; there's no way around that. For airliners it's not quite as critical: they'll fly stably for maybe half a minute or so before it starts decaying; they'll continue to fly, more or less. And for self-driving cars, we don't even really know what the requirements are, and the environment in which they are going to operate is so complex that the only chance that all manufacturers see is deep learning, or by any other name, artificial
intelligence, whatever you may call it, CNNs, whatever. And you can't objectively verify that the code and all the parameters actually fulfill the requirements; well, we don't have requirements anyway. There are probably some boundary conditions which have to hold for safe operation, but proving that a neural network will actually fulfill those is an unsolved problem. Tesla already said that in the latest software version not a single line of code will have been written by a human: there's just the basic machine-learning structure, and everything that the car is going to do is done by machine learning.

It's quite likely that automated cars will be a lot safer than human drivers on average, quite a bit actually, if it's done right. But there will be accidents caused by automated driving systems that a human would not have had, that could have been prevented by a human, and that's going to be one of the big obstacles to acceptance, I think. Even if we save ten hypothetical, unspecified people somewhere, that won't be a big consolation if a relative of mine is killed by a self-driving car. And I have no way to predict how societal values will evolve around this, so it's really hard to tell; maybe just a little bit better will already be accepted by society in general. I think the press is going to play a big role in that acceptance, or the lack of it. But we'll see how it works out; it's going to be an exciting future. Thank you very much.

Are there any questions? I can't hear you. Are there any questions, in English?

I missed the beginning a little bit; I guess you are a pilot, or something in that direction. I don't know how many of you have heard that in Japan there will be an automated taxi service. Do you know this?

You mean air taxis? Air taxis, maybe.

So my question is: in Japan they are crazy for technology. At which of these six levels are these taxi services?

Is that just on a rail, or is it on a special track? If it's on a rail, I don't know whether the automotive levels really apply, but something that's
running on a rail driverless is probably going to be level four.

You know the taxi service that Japan will use? In 2020 there will be the Olympic Games, and they want to use maximally automated cars, but these are going to run on a rail.

I don't know that system. So, no more questions? Yes?

What about emergency vehicles, for example police or firefighters? Is there maybe a driver-assistance system or automatic driving for emergency runs, or something like that, planned for the future?

Not that I know of. Okay, yeah, go ahead.

Okay: if there is no defined way to predict the behavior of a program, are there any ideas how you could reliably verify the performance of a self-learning network?

That's the big question, right. One idea is that there's a wrapper around it that limits certain parameters to provably safe bounds, but how exactly that is going to work is still not quite clear. It's also very hard to do, because if you make it a verifiably safe program, you're probably going to restrict the deep-learning algorithms too far, and they can't really work to their full potential. So the short answer is: no, we don't know how to do that, beyond extremely long testing and living with the occasional crash. Yeah? Oh, yes.

Thank you for the presentation. When something happens in a plane, or when a car crashes today, there is some legal liability; usually somebody goes to jail. When an autonomous vehicle crashes or kills somebody these days, it's kind of nobody's fault; maybe the company pays some reparations, but that's about it, pretty much. There is nowhere near the amount of actual human liability involved in autonomous driving today. Do you see this changing?

Yeah, I could have talked about that for another hour, I guess. Normally the first liability lies with the driver. Okay, here we don't have a driver, so it's probably the manufacturer who is liable. But I don't know that anything like that has happened yet, or has gone to the courts yet, so we don't really know. There will probably be
something like, not exactly a certification, but you need to prove to the legislator that the car you're bringing to market is somehow not going to harm people; that's part of European legislation. So the manufacturer would probably be the first one liable; how that plays out in practice, in the courts, I don't know.

Hello, I'm sorry, I missed maybe the first five or ten minutes, so please tell me: you mentioned IEC 61508. On the other hand, there's ISO 26262, the automotive standard for cars up to three and a half tons; the difference between the two standards is that ISO 26262 is the automotive standard for the process of developing a vehicle. So are you somehow familiar with what's going on, how they want to specify this whole, let's call it, automation paradox that is currently being pushed to the market? Because they want to outsource everything, they want to be very specific with their OEMs, so I think they're trying to create facts before they actually have a standard for this, right?

Yeah, it's going to be very interesting. I'm not that familiar with ISO 26262, so I can't tell you anything specific, but I'm sure they're meeting and trying to incorporate that into the standard. If I know anything about standardization meetings, though, it's going to take a long time.

So, any more questions? Okay, then, well, thank you very much for listening.
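The "safety wrapper" idea mentioned in the Q&A, a verifiable envelope that clamps the outputs of an opaque learned controller, can be sketched roughly as follows. This is a minimal illustration, not anything from the talk: all functions, numbers, and bounds here are hypothetical, and the open problem the speaker describes is precisely that real bounds must be both provably safe and loose enough not to cripple the learned policy.

```python
# Minimal sketch of a runtime "safety wrapper" around an opaque
# learned controller. All names and numbers are hypothetical.

def learned_controller(speed_mps: float, gap_m: float) -> float:
    """Stand-in for an opaque neural-network policy.
    Returns a commanded acceleration in m/s^2."""
    # Pretend the network sometimes commands something unsafe.
    return 2.0 if gap_m > 10.0 else 1.5

def safe_bounds(speed_mps: float, gap_m: float) -> tuple[float, float]:
    """Verifiable envelope: the allowed acceleration range.
    Toy rule: never accelerate if the gap to the vehicle ahead is
    shorter than the distance covered in 2 s at the current speed."""
    min_accel = -8.0  # full braking is always allowed
    max_accel = 1.0 if gap_m > 2.0 * speed_mps else 0.0
    return min_accel, max_accel

def wrapped_controller(speed_mps: float, gap_m: float) -> float:
    """Clamp the learned command into the verified envelope."""
    cmd = learned_controller(speed_mps, gap_m)
    lo, hi = safe_bounds(speed_mps, gap_m)
    return max(lo, min(hi, cmd))

# At 30 m/s with only a 20 m gap the raw network would accelerate,
# but the wrapper forbids it:
print(wrapped_controller(30.0, 20.0))  # → 0.0
```

The wrapper itself is small and rule-based, so it can in principle be formally verified even though the network cannot; the unresolved question raised in the talk is how to choose such envelopes without restricting the learned policy too far.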