If somebody steals your Tesla, does that make it an Edison? A few weeks ago, a Tesla Model S using its new autopilot mode, a sort of adaptive cruise control that steers and brakes the car without driver input, drove into a semi-truck at full speed, unfortunately killing the driver. There were a few extenuating circumstances that made the accident less of a big deal in the media than it might otherwise have been. But oddly, whenever autonomous vehicles and safety are mentioned in the same news article, many people begin discussing a peculiar meme that's become popular in the last few years, and this case was no exception.

The idea, popularized by Cal Poly philosophy professor Patrick Lin, is that self-driving cars require an unprecedented consideration of ethics, something engineers and designers have never encountered before. The argument goes something like this: if your self-driving car is in a situation where it must either kill its occupant, say by driving into a telephone pole, or several pedestrians by driving up onto the sidewalk, then it must choose one. It must have some algorithm to decide which is the more moral option and then execute it, and the deaths that result are on the hands of the programmers of that algorithm.

Now, if you've ever watched THUNK before, you'd probably guess that I'd be into this sort of thing. Philosophy, technology, the ethical and practical ramifications of cutting-edge, futuristic stuff, a real-life trolley problem: that's kind of my jam. But there's something that bugs me about this discussion, some underlying assumptions you need in order to have it at all that I don't think are particularly well-founded.

First, let's talk about some basics of engineering. At its core, engineering is just intelligent planning for making stuff, the key word being "intelligent." A common adage is that an engineer can build for a nickel what any fool can build for a buck. For example, if someone wanted to build a bridge between Manhattan and Brooklyn, even if they had no mathematical training at all, no knowledge of architecture or materials science or anything, they could probably come up with some way to do it, something along the lines of: pick up rocks, schlep them to the coast, dump them in the water, repeat until you can walk across. Of course, that would take a really, really long time and be super expensive. Engineering is figuring out how to get the most mileage out of the least stuff: how to take a project that would require millions of tons of rock and thousands of people maybe a century to finish and instead, with a few thousand tons of material and some ingenuity, build something in 14 years that works just as well.

The goal is to build something that doesn't just work, operating efficiently and effectively at the job it's intended to do, but that just barely works. That might sound a little scary, but as long as the engineer's requirements are satisfied by their design, as long as something that's supposed to support 500 pounds actually supports 500 pounds and is never used inappropriately to support a greater weight, then the only difference between that and something designed to support 5 million pounds is the amount of material wasted. Of course, you don't want something to break just because somebody was one pound over the weight rating, and accidents do happen. So just in case their math doesn't match the real world exactly, engineers use something called a safety factor: a simple number that they multiply all their final answers by to make the end product that much more reliable.
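To make that concrete, here's a minimal sketch of how a safety factor enters a design calculation. The numbers and names are made up for illustration; they don't come from any real design standard.

```python
# Minimal sketch of a safety factor at work (illustrative numbers only).

expected_max_load_lbs = 500   # heaviest load the designer expects in normal use
safety_factor = 8.0           # typical of unpredictable loading, e.g. bridges

# Every part is sized so its rated capacity is the expected load
# multiplied by the safety factor.
required_capacity_lbs = expected_max_load_lbs * safety_factor

print(f"Rate every part for at least {required_capacity_lbs:.0f} lbs")
# -> Rate every part for at least 4000 lbs
```

In other words, the safety factor just scales the worst case the designer expects; it doesn't change what counts as the worst case.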
For very tightly controlled materials in situations where you really need to keep weight down, like in aircraft, factors of safety range from just over one to around two and a half for the more important components. For less rigorously controlled materials that might see things like harsh weather or unpredictable loading conditions, like buildings or bridges, safety factors of eight or even higher are possible. That means they're designed to hold eight times the maximum load the designer expects them to see in normal operation; every single nut, bolt, and beam in them is rated to handle eight times the expected stress.

The most obvious way to look at a safety factor is as a definition of the conditions where it's safe to use some technology, but it's also, implicitly, a definition of the conditions where it's not safe to use it, a boundary where the engineer has decided, "If you use it here, I can't guarantee your safety." If one were of a more clickbaity frame of mind, one might phrase this as "Why Bridges Must Be Built to Kill." I mean, if you decide to exceed its weight rating by more than a factor of eight, a bridge with a safety factor of eight is absolutely built to kill you. In fact, almost every piece of technology you interact with every day is built to kill you in some situation well outside the parameters it was designed to handle safely. Your phone is built to kill you if it's struck by lightning while you're outside playing Pokemon Go in the rain. Your car's cruise control is built to kill you if you steer over a cliff. That's not anything to be alarmed by; it's just how engineering works, by deciding how far a given design will go to keep the user safe.

So the idea that autonomous vehicles might be built to kill you in some unlikely situations? Actually, not that big of a deal. They're being designed to keep you safe in the vast majority of scenarios, and if you're unlucky enough to get struck by a meteor, well, meteor avoidance just isn't in the scope of that design.

Now let's talk about another one of the underlying assumptions of this idea: that this particular issue is really important for designers and engineers compared to the other design choices they usually make. I happen to work in design, and I enjoy thinking about ethics; with even a little bit of thought, it's easy to recognize that every design decision I make has some ethical component. Should I specify materials that are easier to find locally and support local businesses? Does the shape I've given something make it dangerous to handle? Is my documentation going to leave somebody feeling helpless? These might sound like relatively minor considerations compared to autonomous vehicles deciding who lives and who dies, but for designers in large companies, the decisions they make can have a huge effect on a massive number of people, many more than would be affected by the one-in-a-million scenarios envisioned by AV ethicists. For example, when Apple designers decided to use a particular kind of plastic in their smartphone screens, they didn't consider that the cheapest way of manufacturing that plastic was by using benzene and n-hexane, a carcinogen and a neurotoxin. When they outsourced the manufacture of those screens to a low-cost bidder in China, they didn't consider that hundreds of workers might be exposed to those chemicals.
Since then, Apple has reformed its policies to forbid the use of either chemical, but that was after several million screens were made, and this is just one example of how design choices can have drastic, far-reaching moral consequences. Recyclability of materials, product longevity, labor conditions, carbon emissions, depletion of natural resources: large companies make design decisions daily that have global moral effects. Compared to what autonomous vehicles might do in bizarre situations, there are probably much larger fish to fry. Which leads me to my last critique of this discussion.

Patrick Lin does an amazing job dissecting the myriad ethical implications of various automated responses to an unavoidable fatal collision, but he sort of hand-waves how an autonomous vehicle would conclude that a given collision is unavoidable in the first place. Now, full disclaimer: there's a lot of room for improvement in autonomous vehicle software, so maybe in some far-flung future it might theoretically be possible for a self-driving algorithm to make explicit ethical decisions the way Lin suggests, but if so, it would have to be radically different from any such software currently being developed. Every autonomous vehicle algorithm available today more or less mirrors the priorities of human drivers: stay in your lane, observe traffic laws, brake carefully, steer around obstructions, stop if there's something big in the way. When put into dangerous or emergency situations, they're uniformly designed to just try really hard not to hit anything (there's a toy sketch of what that looks like at the end of this section).

Even that is a remarkably difficult computational task. Speed, traction, road conditions, the positions and velocities of all the surrounding objects: it's a huge amount of data to crunch even in optimal conditions, let alone in a split-second emergency. Deciding that a fatal accident is truly unavoidable would require predicting the future positions and trajectories of all surrounding objects, and their likely responses to any given maneuver, which skyrockets the complexity of the problem to truly insane levels. Throw in the relative utilitarian cost of every human nearby, and you're gonna need a long, long time to solve it. Time which you don't have. Every microsecond spent on that massive calculation is precious processing time that might be better used figuring out how to just not hit anything. That doesn't seem like a reasonable trade-off.

Now again, Moore's Law can be surprising, but I think it's much more likely that self-driving algorithms will always be built more or less the way they're built now, using their computational power to avoid accidents, rather than waiting for some opportunity to crunch some crazy numbers, throw their little silicon hands up in the air, and say, "Ah, it's hopeless. Who's gotta die?"

Regardless of all these objections, the least ethical response to this meme that I can imagine would be delaying the implementation of self-driving algorithms of any sort while we debate largely implausible hypothetical scenarios. 30,000 people die every year in automotive accidents in the United States, and many more than that are permanently injured. Getting distracted, impatient, inattentive, tired people out of the driver's seat and replacing them with even slightly less error-prone software is of far greater moral importance.
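As promised, here's a toy sketch of that "just try really hard not to hit anything" priority. It's invented purely for illustration and looks nothing like production autonomous vehicle software: estimate a time to collision for each tracked object and react to the most imminent threat, with no utilitarian weighing of lives anywhere in the loop.

```python
# Toy illustration (not real AV software) of a purely reactive priority:
# find the most imminent threat and avoid it. Nothing here enumerates
# outcomes or weighs lives; even this simplified loop has to run fast.

from dataclasses import dataclass

@dataclass
class TrackedObject:
    label: str
    distance_m: float         # gap between us and the object, in meters
    closing_speed_mps: float  # how fast that gap is shrinking (<= 0 means it's opening)

def time_to_collision(obj: TrackedObject) -> float:
    """Seconds until impact if nothing changes; infinity if we're not closing."""
    if obj.closing_speed_mps <= 0:
        return float("inf")
    return obj.distance_m / obj.closing_speed_mps

def emergency_response(objects: list[TrackedObject], brake_threshold_s: float = 2.0) -> str:
    """React to the nearest threat in time, or hold course if there isn't one."""
    threat = min(objects, key=time_to_collision, default=None)
    if threat is None or time_to_collision(threat) > brake_threshold_s:
        return "maintain course"
    return f"brake hard and steer clear of {threat.label}"

print(emergency_response([
    TrackedObject("semi-truck", distance_m=30.0, closing_speed_mps=25.0),
    TrackedObject("parked car", distance_m=12.0, closing_speed_mps=0.0),
]))
# -> brake hard and steer clear of semi-truck
```

Even a real version of this, with actual perception and vehicle dynamics, already strains the available time budget; bolting on a second layer that simulates every possible maneuver and scores the moral cost of each outcome would only make that worse.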
Of course, the problems being raised here, of codifying our morals in algorithms, are still very important questions that deserve thoughtful consideration and discussion, but maybe self-driving cars aren't the best context for that discussion. What is the right context? Well, Google DeepMind is getting scarily good at a ton of stuff. Maybe we should be talking about what happens when we inevitably hand control of our government over to it.

Do you think the ethics of autonomous vehicles deserves the attention it's getting? Please leave a comment below and let me know what you think. And a quick reminder: THUNK episode 100 is coming up, so if you have any questions you'd like me to answer, please leave a comment or send me an email. Thank you very much for watching. Don't forget to blah, blah, subscribe, blah, share, and don't stop thunking.