Robots fighting wars. Science fiction? Not anymore. If machines, not humans, are making life-and-death decisions, how can wars be fought humanely and responsibly? Humanity is confronted with a grave future: the rise of autonomous weapons.

Autonomous weapons are those that select and attack targets without human intervention. After the initial launch or activation, it's the weapon system itself that self-initiates the attack. It's not science fiction at all. In fact, it's already in use. The world is in a new arms race. In just 12 countries, there are over 130 military systems that can autonomously track targets. Systems that are armed. They include air defense systems that fire when an incoming projectile is detected, loitering munitions which hover in the sky searching a specific area for pre-selected categories of targets, and sentry weapons at military borders which use cameras and thermal imaging to identify human targets. It's a pretty far cry from a soldier manning a checkpoint.

Militaries are not turning to robotics, and increasingly autonomous robotics, because they think it's cool. They're doing it for very good military reasons. These systems can take in greater amounts of information than a human could, make sense of it quicker than a human could, and be deployed to areas that might not be accessible for a human, or might be too risky or too costly.

In theory, any remote-controlled robotic weapon in the air, on land or at sea could be adapted to strike autonomously. And even though humans do oversee the pull of the trigger now, that could change overnight, because autonomous killing is not a technical issue. It's a legal and ethical one.

We've been here before. At the beginning of the last century, tanks, air warfare and long-range missiles felt like science fiction, but they became all too real. With their use came new challenges to applying the rules of war, which require warring parties to balance military necessity with the interests of humanity.
These ideas are enshrined in international humanitarian law. In fact, it was the International Committee of the Red Cross that pushed for the creation and universal adoption of these rules, starting with the very first Geneva Convention in 1864. These rules have remained flexible enough to encompass new developments in weaponry, staying as relevant today as ever. But these laws were created by humans, for humans, to protect other humans.

So can a machine follow the rules of war? Well, that's really the wrong question, because humans apply the law and machines just carry out functions. The key issue is that humans must keep enough control to make the legal judgments. Machines lack human cognition, judgment and the ability to understand context.

You can see the parallels with how we deal with pets. A dog is an autonomous system. If the dog bites someone, we ask who owns that dog, who takes responsibility for that dog. Did they train that dog to behave that way?

That's why the International Committee of the Red Cross advocates that governments come together, set limits on autonomy in weapons, and ensure compliance with international humanitarian law. The good news is that the ICRC has done this work for over a century. They've navigated landmines and cluster munitions, chemical weapons and nuclear bombs. And they know that without human control over life-and-death decisions, there will be grave consequences for civilians and combatants. That's a future no one wants to see.