How safe should self-driving cars be? This question is becoming increasingly important as companies like Waymo, Uber and others test their self-driving vehicles on public roads, but answering it turns out to be more complicated than it might first seem.

Around the world, something like 1.3 million people are killed each year by vehicles. If self-driving cars could put a dent in this, it would be a great achievement. Many of these deaths are down to human error. In contrast, self-driving cars don't drink, they don't text and they don't fall asleep at the wheel. Reducing deaths and injuries from these causes shouldn't be too hard for self-driving car manufacturers, but it'll only count if their technology is so good that it doesn't end up killing or injuring people in other ways.

Here, manufacturers are going to have to work hard to ensure their cars don't fall into human-like failings such as misreading road conditions or missing important safety cues. But they're also going to have other challenges to grapple with: sensors that fail to detect pedestrians, algorithms that misinterpret what the car sees, or even machine brains that just plain make bad decisions. Because of these new and as-yet poorly understood risks, manufacturers are going to have to take great care to ensure their vehicles are acceptably safe. But this still leaves us with the question: how safe is safe enough?

Human drivers are a good place to start here, and where better to look than Arizona, where many of these vehicles are currently being tested? It's also, sadly, where the first pedestrian was killed by a self-driving car. In 2016, 952 people were killed in vehicle crashes in Arizona, and many of these deaths were down to human weaknesses. For instance, 21% of the drivers involved in these crashes were impaired by alcohol, drugs or medication, and 2% by illness, sleep or fatigue. Imagine what these figures would look like if 30% of the vehicles on the road were driving themselves.
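The shape of that thought experiment can be sketched with some back-of-envelope arithmetic. This is a minimal sketch, assuming that impairment- and fatigue-related deaths shrink in proportion to the share of vehicles that drive themselves; it counts only the 21% + 2% of crashes quoted above, so it's a conservative floor rather than a full human-error tally.

```python
# Rough estimate: lives saved per year if a share of Arizona's fleet
# were self-driving and impairment/fatigue-related deaths shrank in
# proportion. Figures are the 2016 Arizona numbers quoted above; the
# proportional-scaling assumption is illustrative, not from the source.

deaths_2016 = 952             # vehicle-crash deaths in Arizona, 2016
impaired_share = 0.21 + 0.02  # alcohol/drugs/medication + illness/sleep/fatigue
self_driving_share = 0.30     # hypothetical share of self-driving vehicles

lives_saved = deaths_2016 * impaired_share * self_driving_share
print(round(lives_saved))     # prints 66
```

On this narrow count, roughly 66 lives a year; folding in other human-error categories (speeding, distraction and so on) would push the estimate higher.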
There's a chance that around 90 lives a year could be saved, as long as the self-driving technology was completely safe. But what if it wasn't? What if these autonomous vehicles also ended up killing some people who otherwise would still be alive? In this case, how do we work out what acceptably safe means?

One way to approach this is to use the number of fatalities per billion miles traveled. In urban Arizona, 518 people were killed by cars in 2016, or around 11 deaths for every billion miles traveled in the state. This is equivalent to around a 1 in 13,000 chance of being killed by a vehicle each year if you live somewhere like Phoenix. Maybe this should be the benchmark for self-driving cars, or maybe a slightly lower number, say 8 deaths per billion miles traveled.

This sounds like a reasonable target. The only problem is that it's not easy to prove if and when it's been reached. According to research from the RAND Corporation, if a company had a fleet of 100 vehicles driving around the clock and wanted to show a 20% improvement in car-related deaths somewhere like Phoenix, it would take around 15 and a half years of testing with no fatalities to achieve this. Things look even worse if a tighter standard is used: a 20% improvement in pedestrian deaths for self-driving cars compared to human drivers. In that case, it would take closer to 53 years of death-free testing with 100 vehicles on the road.

These sorts of timelines are clearly unworkable for demonstrating safety. Ironically, they're so long because relatively few people are killed by cars compared to the number of miles driven in the US, and so it takes a long time to build up statistically significant data. An alternative approach is to rely more on performance standards that ensure self-driving cars are designed for safety under the toughest of conditions, and to monitor for early warnings of potential failures, ideally before anyone's killed.
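Timelines of this kind can be reproduced, at least approximately, with a standard statistical shortcut: the "rule of three," under which observing zero fatalities over N miles bounds the true fatality rate below roughly 3/N with 95% confidence. In the sketch below, the fleet assumptions (100 vehicles running around the clock at an average of 25 mph) and the urban pedestrian fatality rate of about 3.2 deaths per billion miles are illustrative inputs, not figures from the text.

```python
# How long must a fleet drive fatality-free to demonstrate a 20%
# improvement over human drivers? Uses the "rule of three": zero
# events in N miles caps the rate at ~3/N with 95% confidence.
# Fleet assumptions (100 vehicles, 24/7, 25 mph average) and the
# pedestrian-only rate (~3.2 per billion miles) are illustrative.

def years_of_testing(human_rate_per_mile, improvement=0.20,
                     vehicles=100, avg_mph=25):
    target_rate = (1 - improvement) * human_rate_per_mile
    miles_needed = 3 / target_rate                  # rule of three
    miles_per_year = vehicles * avg_mph * 24 * 365  # round-the-clock fleet
    return miles_needed / miles_per_year

all_deaths = years_of_testing(11e-9)   # 11 deaths per billion miles overall
ped_deaths = years_of_testing(3.2e-9)  # assumed pedestrian-only rate

print(f"{all_deaths:.1f} years")  # prints 15.6 years
print(f"{ped_deaths:.1f} years")  # prints 53.5 years
```

With these inputs the sketch lands close to the quoted figures (around 15 and a half years, and around 53 years), which is why such small per-mile fatality rates make statistical proof of safety so slow.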
And because this is an evolving technology, such performance standards should evolve along with it. This would help develop the technology safely while not slowing it down unnecessarily. But on the way there will inevitably be crashes, and this is where regulators and manufacturers need to start working with local communities, so that together they can work out what being acceptably safe actually means.

For instance, it may be decided that testing around residential areas is OK, but not around schools. Or it may be agreed that testing where pedestrians are likely to be paying less attention (college campuses, for instance) is not such a good idea. On the other hand, it may turn out that local communities are pretty risk tolerant when they see the potential benefits; you never know until you ask.

The bottom line is that, while self-driving cars could make roads safer if the technology is developed responsibly, it's up to manufacturers, regulators and everyone else affected by them to decide how safe is safe enough, and what they're willing to risk to realize the benefits the technology offers.