Thank you, Peter. It's wonderful to be back at New America and nice to be here for Arizona State University. We have a wonderful panel and not a lot of time, so we're going to get right into it, and I'm going to be very short on the introductions. To my left is Tom Malinowski, Assistant Secretary at the Department of State for Democracy, Human Rights, and Labor. You may have known him in Washington circles as a director at Human Rights Watch here. He has a distinguished record of speaking out on human rights topics and writing about them; we're going to see what he looks like in his new dress today, and we'll push him a little bit. To his left is Ryan Goodman, an old colleague of mine from NYU Law School. Ryan wears many hats: he's not only at the law school, he's in the sociology department as well as the political science department at NYU. So he's vastly knowledgeable on a lot of topics, but most relevant here, he's the co-editor-in-chief of Just Security, the most amazing blog you can imagine about national security issues, rule of law issues, and law in general as it applies to the world we're living in today. If you don't know it, you should get to know it. To his left is Daniel Rothenberg, who's probably been introduced a thousand times here already, but I'll add that in addition to being the co-director of this wonderful Future of War project with New America, he's also, with Peter, the editor of a book that's just coming out, Drone Wars, and I think you should all read it; a lot of stellar people have contributed essays. So we're here to talk about human rights today: the future of human rights within the context of this Future of War question.
Each of our panelists is going to talk to you about a different aspect of this, but I think the basic question is: how much have changes in the future of war altered the way we understand human rights, and what are the challenges, ones we may not really have thought about, that are going to perplex us as we go forward? And I would say, echoing one of the speakers this morning, we're already here. So the question is not just the future, but what right now is in our face, and will be even more so in the future, when it comes to human rights dilemmas and human rights challenges. We're going to begin with Tom Malinowski.

Thank you, and thanks everybody. Karen, you asked me to talk about new technologies and their implications. So I asked my staff what they thought I should talk about, and they gave me a list that included exoskeletons that increase human strength, aquasuits that emulate the locomotion of dolphins, invisibility cloaks, and so on. And I thought, no, we can't reveal the stuff we're working on at the Human Rights Bureau at the State Department. So I'm not going to talk about those. Instead I chose a topic that I've been thinking about for some time, including before taking this job, and that is very much a subject of discussion in and out of government: the issue of autonomy, lethal autonomy in particular. I'm not going to be revelatory in my new suit, I'm not going to make news, but I did want to talk a little bit about why I think it's interesting and important. The debate on autonomy is very different from the debate that's most in the public eye, the debate about drones. The current generation of drones raises very interesting, important debates, but they're not really new. A drone, in its current version, is really an extension of a human being. It may separate the human being physically from the battlefield, but not from his or her choices.
The human operator still makes the ultimate choice about whether or not to kill. But we are moving inexorably in the direction of technology that could, if we chose to employ it this way, make the decision to kill or not to kill itself, based on the fantastical advances we are seeing in artificial intelligence and in the collection of data, and the potential marriage of those two advances. Now, there are a lot of potential military uses for this technology that I think would be completely non-controversial and that we will probably see in our lifetimes. You can imagine unmanned, fully autonomous systems doing things like putting out fires, surveilling a battle space, search and rescue, minesweeping, or humanitarian relief: going into a dangerous war zone, using the targeting technology to identify civilians who need help, and then delivering humanitarian relief, and so on. One could also imagine uses of kinetic force by autonomous systems that might not be entirely controversial, such as static defense against incoming objects, missiles, things that are not manned by human beings. The tough question is whether weapon systems on the battlefield should be able to make the autonomous decision to target and kill human beings. That is the question we face, and I think we need to understand, number one, that such systems would have obvious military advantages on the battlefield, particularly the ability to make decisions much more quickly than any human being ever could. Because they confer those advantages, I think their development is inevitable absent any conscious decision not to develop them, given the pace of technological advances; and if other nations develop them, then we might be in a position where we feel we have to counter and compete in some way. So how does the US government address this question right now?
As many of you may know, there is a current policy outlined in a DOD directive on autonomy in weapon systems, and the basic principle is that we want, at this stage, to ensure what the directive refers to as an appropriate level of human judgment. The directive talks about three different kinds of autonomous systems. Semi-autonomous systems, which can only execute targeting decisions that have been made by a human operator, are in principle allowed to use lethal force. Then there are human-supervised autonomous weapon systems. These are largely autonomous systems, but a human operator is aware of what the machine is doing and has the opportunity to intervene, even at the last minute, if he or she decides that unacceptable harm may ensue; under our current policy these systems could be used to apply kinetic force against objects, but not lethal force against human beings. Then we have the category of fully autonomous systems, where decisions at every stage are made by the machine and its computer, and under the current policy such machines would not be able to apply lethal or kinetic force. So it's a pretty restrictive policy that basically embraces the principle of a man in the loop: a human being must make the decision to kill another human being. But I would stress that this is a policy based on our sense of the practical, legal, and ethical limitations of the current technology. We have not yet, as a government, as a country, made the fundamental decision, the ultimate decision, about whether such weapons should be built in the future. Should they be prohibited in some way? If not, should they be regulated? What is the answer to that ultimate question? We feel we have time to come up with the answers, but they are very, very important, and eventually they're going to have to be settled. So how do we think about them?
Again, I'm not going to weigh in on one side or the other today, but I want to lay out some of the factors, some of the considerations. Some of them are legal ones. Can such weapons be built and programmed in a way that satisfies the requirements of international humanitarian law? Maybe the answer to that is yes at some point; people debate that. But even if the answer is yes, that's not all we need to consider, because ultimately there are going to be policy questions as well. Ultimately the question is: will we be better off or worse off if such weapons become ubiquitous on the battlefield? And here there are arguments in favor of such systems even from a humanitarian point of view. Some people will argue that a robot on the battlefield will never experience fear or anger or any of the emotions that sometimes lead human soldiers to commit war crimes and other terrible abuses. On the other hand, one can argue that some of those benefits could be obtained just as easily by using remote-controlled weapons of the sort we currently have and will continue to develop, in which the weapon itself goes into the danger space: the human operator is not in danger, but the human operator still makes the ultimate decision. And there are obviously a number of other practical, ethical, moral, and philosophical questions. Could a machine ever be programmed to make the kinds of complex ethical decisions that a human soldier has to make on a constantly changing battlefield? Could a programmer foresee in advance all of the different contingencies that the machine he or she programs would have to confront? Are we comfortable with machines making decisions about proportionality, which ultimately entail judgments about the relative value of a human life, even if a machine could be programmed to make those decisions in the same way a JAG officer might? Who would be accountable if a machine makes a mistake? And there is one question that I ask as someone who focuses on human rights.
What would happen if such machines were to proliferate into the hands of authoritarian governments? Imagine a future Assad with an army of autonomous police officers or soldiers. That dictator would have something no dictator has ever had in history: enforcers who will never refuse an order. Those questions incline many people to support a preemptive ban, or, to put it more precisely, a rule that takes the man-in-the-loop principle and makes it law. But then one has to contend with the practical question of whether a legal ban would be enforceable and practical, and some will argue that if we aim instead to regulate these weapons, we may actually get more benefit. So those are the questions. We do not yet have the answers to them, nor, I think, have we answered as a government or a society an even more fundamental question, which applies not just to autonomous weapons but to autonomous everything, self-driving cars and all the other things out there: what is the right relationship between human beings and machines as this technology races ahead? Eventually, I think it's fair to say, machines will be able to do everything better than human beings can. Everything. But does it follow necessarily that we should then cede everything to them, every single decision? Right now machines can fly planes more safely than human beings can; we haven't made the decision to cede that to them. So the debate for me is not just about machine autonomy. It's ultimately about human autonomy, and how much of that we're willing to give up. Thanks.
So that's quite thought-provoking, because even though you've concluded that machines will do everything better than human beings, the real issue going forward is: who are we as human beings, how much do we want to delegate to these machines, where is it that machines' judgment is actually better than ours, and what do we want to preserve about ourselves? One way to think about that, beyond policy, is the law: how these different technologies, and the world we're finding ourselves in now and in the future, need the law to set parameters as we go forward and figure out the policies. So maybe, Ryan, you could weigh in here.

Sure. What I thought I would talk about is the next frontier, since we're talking about future war: what it might mean when we're on a post-war footing. The idea here takes a bit of a riff off of what Harold Koh spoke about yesterday, which is how we can ever end a perpetual war and get to another frontier. What I want to think about is what legal regimes apply in that context, which might also obviously involve autonomous weapons and other uses of kinetic force, lethal actions taken in a kind of war-fighting mode that aren't actually operating in what is traditionally thought of as an armed conflict. So that's the question. And even though this has the sense of a post-war scenario, for example after al-Qaeda's leadership has been decimated and the Taliban has reached some kind of equilibrium, what I also want to suggest is that we're already kind of there; there are aspects of the post-war that we're already dealing with, because there might be other terrorist groups that arise that just don't reach the threshold of an armed conflict or the context of a war. So that's the question: introducing those kinds of contexts, what legal regimes might apply? I think there has already been quite a bit of thinking done about
this inside the US military, but not really outside, and it is incumbent upon us to do a lot of international heavy lifting so that when we get to more instances of the post-war, we're ready to have that conversation, because I just don't think we're there yet. There's a piece published by Michael Adams, the deputy legal counsel to the Chairman of the Joint Chiefs of Staff, in the Harvard National Security Journal, calling it jus extra bellum. He talks about these zones in which we're in a post-war-type setting, or a pre-war-type setting, dealing with non-state-actor terrorists, and says it's actually kind of a free-for-all, in that there are no legal regimes that necessarily apply except for the law of self-defense. Jus in bello, humanitarian law, wouldn't apply to autonomous weapons operating extraterritorially, and human rights law, according to the US government, would not apply extraterritorially when the United States uses lethal force in other countries. Bobby Chesney has also written about this; he actually calls it "postwar," and the wake-up moment in Bobby's article is to say, for those of you who think you want to return to a pre-9/11 model: the pre-9/11 model was not law enforcement, it was kinetic action and lethal action. For example, there was Reagan's presidential finding for the CIA to engage in targeted killings of Hezbollah's leaders, and the Washington Post just recently reported that in 2008 the CIA and Mossad supposedly engaged in the car bombing of the head of Hezbollah's international operations. Each of those took place outside of armed conflict, so humanitarian law doesn't apply, and extraterritorially, so human rights law apparently doesn't apply. That's what I mean by this kind of free zone, and how we think about it. The idea is that they're all operating only under the international law standard of self-defense, the nation having the right to use force to defend itself from attack. So what I want to suggest is,
first, that we're already here, that the future is here in a certain sense, because the president started to take us off a war footing on May 23, 2013, not just by making a speech at NDU but also by promulgating a new set of rules for targeted action outside of areas of active hostilities. Outside of areas of active hostilities, we raise the threshold of restrictions that apply, so that these actions are more regulated, more like peacetime and less like a battlefield. In some sense I even think it's preparing us for a soft landing in the post-war, because it's a midway stage to getting to that point. In fact, in the national security strategy that was just issued earlier this month, and in the State of the Union, the president says quite clearly that we will take unilateral action when there is a continuing, imminent threat. That's the model: the continuing-imminent-threat model that President Reagan used in the early 1980s, which has traveled all the way through to today. But under that model the United States can use lethal action against terrorist threats even outside of an armed conflict, and at one point the first strikes against the Khorasan Group were justified in that kind of framework. I think there are other, better frameworks for justifying it, but that's one instance of it. So there are three issues I thought I would raise about the post-war and the continuing-imminent-threat model. The first is how the model actually operates, and the second is the legal regimes that apply to it. In terms of how it operates, the big question is what the word "imminent" means in "continuing, imminent threat." The question is, I think, being confused in part because of the Department of Justice's white paper that was leaked, which concerned the case of the US citizen Anwar al-Awlaki, who was located in Yemen. The DOJ's white paper said that the US could take action against him because he was an
imminent threat even though he was only plotting strikes against the United States. The notion of imminence there, to my mind, does a great disservice to the English language: it's not an immediate or about-to-happen event, it's quite far off in the future. But that's actually okay if we don't make a category mistake, and just think about it as a targeting rule in wartime. In wartime you don't need to wait for your opponent to be in an imminent stage of attack; you can kill them in their barracks while they're asleep, or while they're plotting the next strike they might carry out in the future. So in some sense the elongated imminence standard is not anathema to humanitarian law; it's almost an act of grace, because the law doesn't require that you wait until your opponent is about to strike before you strike them. But if we do make the category mistake and think of it purely in terms of self-defense, meaning the United States is able to take action against a group it has not yet entered into a conflict with, that's a different story. The baseline is that under international law you can only act in self-defense if you've been subject to an armed attack or an armed attack is on the way. The quintessential example given is Japan and Pearl Harbor: if the aircraft carriers were on their way, the United States didn't have to absorb the hit; it could respond in that moment. But if that's the baseline, then elongated imminence, where some individuals are just preparing for an attack down the line, is a real stretch. If we try to separate those two, it's useful to think about John Brennan's speech at Harvard Law School in 2011, in which he said, quote, "over time an increasing number of our international counterterrorism partners have begun to recognize that the traditional conception of what constitutes an imminent attack should be broadened." I doubt it. I doubt it, based on the baseline under international law
for self-defense; it would be a heavy lift to get international partners to agree to that. I think the fact that they can't be named, that they have to just be referred to as "international partners," kind of suggests that it's not exactly a legitimate standard they're willing to step out on. I'm not sure what the word "increasing" means; maybe that's from two to five, which would be exponential growth. So there's ambiguity there, and I also doubt that the international community is ready to absorb that standard with respect to non-state actors. The move by John Brennan was, in some sense, to suggest: look, the previous administration tried to use a notion of elongated imminence to deal with Saddam Hussein; the international community rejected that, and the US position has pretty much turned a corner on it, but that's because other states thought they might be on the receiving end of that kind of justification; with non-state actors they don't necessarily feel that way, it's a kind of common enemy. I don't think so, partly because of Syria, and you can see it there. Syria is in some ways not a good case, because Assad is Assad and the Syrian government is a rogue regime, but the US use of force in Syria gives an indication that it's not just about non-state actors, because it justifies the use of force in other countries that are unwilling or unable to quell the threat from those non-state actors. And you can see it: even though there are many international partners of the US that are with us in the airstrikes in Iraq, they're not engaged in the airstrikes in Syria. France, the Netherlands, and Australia have all in fact said that part of the reason they're not engaged in those airstrikes is international law concerns. The UK is also not engaged in airstrikes in Syria, and Cameron was not willing to put that to his parliament; the only question there was whether they would engage in airstrikes in Iraq. So I'm doubtful about how much the international
community is on board, which might be, even if one is purely interested in maximizing US national interests, a concern that we have a long road to travel before we can get to that point. My last two points are about which legal regimes apply. Unlike Bobby Chesney and Mike Adams, I disagree with the idea that international humanitarian law doesn't apply. As an historical matter, and I know we had the panel this morning about what history teaches, the Reagan administration in fact did apply international humanitarian law to some of the very lethal actions that Bobby Chesney analyzes. Take Operation El Dorado Canyon, the 1986 strike on Tripoli, which Bobby cites as an example: in fact Hays Parks, one of the major legal advisors to the US military, has spoken about the internal workings of the administration at the time, in which Reagan said we will apply the laws of war, because we're dealing with terrorists and we want to distinguish ourselves from the terrorists and from state sponsors of terrorism. Also, the CIA memo cited as the legal memorandum justifying Reagan's presidential finding to take out Hezbollah leaders, according to Bob Woodward's book, interestingly refers to the minimization of civilian casualties. It's intriguing: at the least it's an ambiguous historical record, suggesting that our nation does have more of a tradition, a history, of trying to abide by certain kinds of humanitarian safeguards, fundamental floors, when we engage in these actions even outside of armed conflict or the war scenario in which we might otherwise think about them. The last point is the application of human rights law. Anybody who's been studying or observing US representations before UN bodies over the last year, year and a half, will know the US position is basically that international human rights treaties do not apply extraterritorially; that if they were to apply extraterritorially, then the law of armed conflict would displace human
rights under the international legal rule of lex specialis; that if it didn't displace international human rights and they applied extraterritorially, then they would only apply in a situation of effective control, for civil and political rights; and that even if they did apply in a situation of effective control, which on the United States government's position lethal actions are not, then it wouldn't be an arbitrary deprivation of life. So it doesn't apply because humanitarian law kicks it out; it doesn't apply because it doesn't apply extraterritorially; it doesn't apply because the United States doesn't exercise effective control in lethal operations; and it doesn't apply because it's not an arbitrary deprivation of life. I just want to tackle one of those here: the extraterritorial application. That is the current US position with respect to the treaties, but it's actually not the US position with respect to customary international law. If you look at the handbooks that the US military produces for its own service personnel, the international law handbook says, not basically but explicitly, that customary international human rights law, even uncodified, unwritten rules, the fundamental guarantees of human rights, does apply extraterritorially, which is an important caveat. Mike Adams, in his analysis of jus extra bellum, makes the claim that the US position is that human rights law doesn't apply, but he just cites treaties; that's a separate source of authority from the independent source, the customary rules, which do apply. And the last point is just to suggest that there are other traditions of US foreign policy in which we ourselves have applied human rights norms in evaluating other countries' military operations in foreign lands, and what I want to suggest is that we have to think about how we maximize and reconcile those different traditions. So, just to leave it at that: the United States took a very strong
position when the USSR entered Afghanistan, and we condemned them for their human rights practices at the UN. We did the same when Indonesia was in East Timor; we did the same when Milosevic was in Bosnia-Herzegovina; we did the same when Russia went back into Georgia. We applied human rights under customary international law to their military operations in foreign countries. I think that's a tradition that we're going to have to try to sort through in the post-war setting.

Between the two of you, the amount of work that's going to have to be done, seriously, I think Washington is going to be very busy for a long time. So whatever you say, make sure you make it seem achievable in the moment. One of the things that's clear in both these discussions is how what we know, how we collect information, and how we use the information we collect help determine how you're going to categorize the response. You talked about it on the ground; you talked about it in the law. How do you think about this?

I'd like to bring up a point that links the two presentations, which has to do with the use of data. Data and information have always been a part of warfare; obviously intelligence is core to all military operations. But in some ways the future of war is now, in the sense that we're already going through a data transformation, what we could even call data-driven warfare, where the massive amounts of digital information gathered as a part of war fighting, but also as a part of all sorts of ordinary commercial practices and all sorts of elements of our regular lives, can feed into the management and use of force and the understanding of worlds apart from our own. That sounds a little bit abstract, but it isn't really so abstract when we think of, for example, what drones really do that is revolutionary, or at least transformative: they are mainly intelligence, surveillance, and
reconnaissance platforms. They can hold missiles, but what they provide is the capacity to gather an enormous amount of information over the areas they surveil. And that information isn't solitary; it's unified with human intelligence and signals intelligence, and it allows for the picturing and imaging of targeting operations, but also all sorts of other operations that begin to take us to a place we really haven't yet been, or whose implications we at least don't fully recognize. So think of all the massive amounts of data gathered in your regular lives, and think of how military practices now gather data of various kinds. Take biomedical data: think of the biomedical data gathered from those who've been detained by US forces in Iraq and Afghanistan. This plays into the time-limit issue that was raised, and into automation issues, in that this is digital material; it's stored forever; it's not clear under what legal regimes and policy mechanisms it is gathered, and it's even less clear under whose authority that material can be continually processed and analyzed, forever. And if this is the beginning of the process, imagine where it will take us. So, to get to the title of this panel, what human rights will be most at risk in future wars? I'd submit to you that one core human right is the right to life. But intimately bound to the right to life, and to the entire human rights system, is something that's not often referenced in discussions of warfare, or in more technical discussions of legal regimes: the concept of dignity. And I submit to you that the mass gathering of data in digital form, and its correlation through current and future mechanisms, presents an extraordinary affront to human dignity that we don't yet even have the language to process.

Tom, can I ask you to follow up on that a little bit? How does that fit into your notion of human
versus non-human, which is not to say inhuman?

Actually, I think we're saying the same thing. I think at the end of my talk I invented a new human right, the right to autonomy. Maybe it's not such a new concept, because of course our autonomy is taken from us if we're imprisoned, when our ability to make decisions is taken from us in that old-fashioned way. I guess I'm suggesting, or asking, whether that right is implicated in the decisions we are going to make about autonomous machines, whether on the battlefield or in other applications. And I think that right to autonomy is encompassed in the notion of dignity, which is the baseline principle of human rights.

Right, but the trade-off of how we went down this road is important, at least in what you are focusing on: the idea that Americans and others who use these technologies will incur fewer deaths. You said the right to life, and so there's a sense of preserving human beings from being killed in combat, in a war; that's one of the places we start. So are we weighing very unusual human rights against each other here? One is autonomy, dignity; and what it's coming into conflict with is the removal of human beings from the actual conflict.

Right. So I think in terms of autonomous warfare, we don't know yet the answer to the question of whether the deployment of such systems will lead to more or less civilian death; that's a debate people are having. But let's posit that we reach a point where, under some or many circumstances, the deployment of a very smart, artificially intelligent system arguably leads to fewer civilians being killed. I'm not certain that that will settle the debate for many people. That's why I offered up the proportionality example. It seems to me that it will be very possible for a programmed machine to make proportionality judgments, because they are
made, even by human beings, in a fairly mathematical way: what is the military value of a target, and how many civilians will die? You can write a program for that. But imagine it was your mother or father or son or daughter who was killed on the battlefield, and you wanted to know why, and somebody came to you and said: a military officer, advised by lawyers, made a really difficult human judgment, and here's why we decided that that risk of collateral damage was worth the ultimate target. You might never accept that, but at least you can understand that a human being grappled with it. If someone were to tell you a machine made that judgment, even if the machine made it in exactly the same way, mathematically, as the human JAG officer would have advised it be made, would we accept that as easily? I think that gets back to this more amorphous question of dignity, and of something that humans do better.

What they do better is take the responsibility and be able to tell the story.

Yes, that's right.

Even if the results may be mathematically right.

Right. That's interesting. Ryan, and then we're going to turn to questions: how far behind are we legally in grappling with these kinds of... oh, we're not going to turn to questions. Ryan, you're going to answer this question, and that will be the end. How far behind are we legally in dealing with these really rather philosophical issues? Putting them in either law or policy seems to operate on a much lower level; these are cosmic issues. What does the law have to do to catch up with this?

Sure. I think you'd get two different answers to the question depending on who you ask. One answer is that the law has a certain kind of flexibility in it, just as the US Constitution has flexibility to deal with
certain types of technologies today that were unimaginable when it was written, so that you could have an existing legal framework, with some fundamental concerns or considerations about privacy, liberty, and autonomy as a part of privacy, that might just need to be translated to the modern-day context. That's one answer. The other answer might be that these developments really outstrip even what the framers of those original codes anticipated and the kinds of trade-offs they were considering; these are very different kinds of trade-offs.

Okay, so we're out of time, that's what I'm told, so we can't take your questions. Please join me in thanking the panel for taking the time to be here.