It's very important to distinguish facts from science fiction. What I mean by that is that today there exist weapon systems with autonomy in their ability to select and attack targets, but they're usually confined to very limited roles. They're usually fixed defensive systems capable of shooting down missiles or rockets, that is, objects. The issue arises in relation to possible future developments in autonomy in weapon systems, where you may have weapons with a far greater degree of freedom of action and freedom of movement, operating in complex, dynamic environments, including against human targets.

In terms of the legal questions: can a weapon system that operates autonomously respect the rules of international humanitarian law? In particular, can a weapon on its own distinguish a tank from a school bus, or a soldier from a civilian?

The development of autonomous weapon systems has profound implications for the future of warfare, and indeed for humanity. The central question is the potential loss of human control over the selection and attack of targets, that is, over the use of force, and in particular the use of lethal force on the battlefield. Beyond that, there's a profound moral question: is it right? Do the dictates of public conscience allow a machine to make life-and-death decisions on the battlefield?

States at the moment are unsure what exactly needs to be banned, if anything. In fact, there's no real agreement on banning autonomous weapons. The debate is really around what kind of limits, if any, should apply to these weapon systems. For the ICRC's part, we're not calling for a ban, and we're not calling for a moratorium, but we're asking states to consider the legal, ethical, and societal issues raised by autonomous weapons, that is, the issues raised by the loss of human control over the use of force.