The first question really is, what are we talking about? What are new technologies? Since most of us here are lawyers, and we don't really know how to operate our own iPhones (well, that's maybe a bit exaggerated), we do have our one technical expert here who really knows how these machines work, how these different technologies function and how they're likely to evolve; probably the one person who can realistically distinguish what is likely to become reality in the future from what is likely to remain science fiction, and also where we are too optimistic about technology and where perhaps we're too pessimistic. Noel, could you shed some light on this? What are new technologies, broadly? What are we talking about here?

I'm good with technology. It's very nice to be here surrounded by lawyers, you know, it's great fun. I think you all know what I mean. I'll just speak very, very briefly about some of the new technologies, but one thing to say at the very beginning is that we're going to compartmentalize them here, but they're not really separate: they all interact together, and we don't know how that will pan out. The first one, I suppose, is cyber warfare, which you must all have heard about. Cyber warfare means that I attack your computer from my computer, or anybody attacks your computer from another computer. Now, it's been happening all the time: 90% of British companies have been cyber attacked. That's not cyber warfare, it's espionage. One company will go after another company by logging into its computers in some devious way. It's not too hard. As we say in computer science, and I'm a computer scientist, the only secure computer is one that hasn't been hacked yet. So they're all hackable.
We have no way of stopping that, except that in the UK, for instance, there is a little island where a former PhD student of mine works; they don't allow any connection to the internet whatsoever, and they're not allowed to have disks or anything else. That's sort of secure, but not necessarily completely. So defending against cyber warfare is very difficult. Attackers can go after your infrastructure: one of the big worries in the United States, for instance, is that someone will manipulate the motors on the Hoover Dam, and then thousands and thousands of people could die. That's the kind of thing that could happen. And President Obama has said that any cyber attack he will respond to with kinetic force. So it's a very dangerous thing. What's more dangerous for me from the legal point of view, and I hope we'll explore this a bit more, is that when we talk about cyberspace, that term came from science fiction, from William Gibson, and it's one metaphor for what we're talking about. As soon as you start talking about cyberspace, you start thinking about territory and so on; if you talked about it as phone lines, you might have different legal problems. The other thing to say is that it's not just about the idea of hacking into people's machines; we also have things like cyber logic bombs. I can leave a logic bomb in your computer and set it to go off in three years' time. Now, the big problem with cyber warfare, especially legally, is that forensics is almost impossible, because you write your little piece of code, and that gets distributed to thousands and thousands of computers, gets spread to thousands more, and all comes back together at some point. So forensics, trying to work out where it came from, is almost impossible, and that's a real problem, because someone in Taiwan, for instance, could make it look like it was China attacking.
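The attribution problem just described can be illustrated with a toy simulation. This is only a sketch, not a model of real attack traffic: the host names, the chain length, and the `relay_attack` function are all invented for illustration.

```python
import random

def relay_attack(true_origin, compromised_hosts, hops=4, seed=1):
    """Toy model: a payload is bounced through a chain of
    compromised intermediaries before reaching the victim."""
    rng = random.Random(seed)
    path = [true_origin]
    for _ in range(hops):
        path.append(rng.choice(compromised_hosts))
    return path

# Invented example hosts in various countries.
hosts = ["de-02", "tw-09", "us-17", "cn-04", "br-11"]
path = relay_attack("attacker", hosts)

# The victim's logs only ever show the final hop, so the apparent
# source can be any compromised machine, anywhere in the world.
print("true origin:    ", path[0])
print("apparent source:", path[-1])
```

The point is only that the victim observes `path[-1]`, never `path[0]`; with real attacks relayed through thousands of compromised machines, working backwards along the chain is what makes forensics so hard, and what makes false-flag attacks possible.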
And so that's where it gets very, very dangerous, and I don't know what we're going to do about that at all. The next one I've got here (I've been given a list, by the way; that's why I'm looking down) is remotely controlled platforms and devices. You all know about these; that's what we call drones, but it goes further than that. You can remotely control anything nowadays, and you have been able to since Tesla demonstrated his first remote-control boat in 1898, I think it was, and you've all got remote controls for your televisions. One of the big things is that it creates a great geographic distance between the attacker and the target. Also, and this is where it all mixes up, when you're using remote control you've got some sort of radio connection between two devices, and that can be hacked. So I can use cyber warfare to hack into it. You can spoof a drone, for instance, as has been shown. A drone, as you know, will use GPS to navigate, just like your iPhone. Someone from the University of Texas last year showed that he could generate a GPS signal on the ground that was much, much stronger than the GPS signal from a satellite, and so he could make the drone think (I'm using the word "think" loosely here; drones don't think, but I don't know what other word to use) that it was somewhere it wasn't, and therefore you could drive it into a building. So that's hackable as well; that's one of the problems. Then autonomous weapon systems, and I have to be very careful here, because that's one I could rant on about for hours. I'll just say what the term means. An autonomous weapon system is one that, once it's activated, can select its own target and attack it without any human intervention. So the machine is finding the target, the machine is selecting the target, and it's attacking the target and killing it itself.
So for instance, if I said to a machine, go and kill anyone in this room wearing a red shirt (I don't really see anybody wearing a red shirt, maybe a red coat), it's possible that it could do that. That's what an autonomous machine is, and people think it's morally wrong for the decision to kill people to be delegated to a machine. The other problem is that they're not very good at it; they're not very good at discriminating. I could certainly build a machine that would find anybody with a red shirt and shoot them, or whatever I wanted, but it would be very difficult for me to build a machine that could tell the difference between a civilian and a soldier, for instance, so that's a big problem there. Now, nanotechnology really goes outside my remit. I'm not a nanotechnologist, but I know about nanorobots. There aren't any, and I don't know when there are ever going to be any, because when you get down to the nano scale you're talking about, for instance, a robot not much bigger than a molecule of water, and that gives it a lot of problems moving around, of course. How does it get through your bloodstream or your water? When people talk about nanorobots, they're actually talking about micro-robots, and they're using nanomaterials to build those micro-robots. So, for instance, little sensors. Imagine a nano-camera, can everybody imagine a nano-camera? Something so tiny that you could probably get a few hundred thousand of them in a teacup. Well, you can't imagine a nano-camera, and this is where science fiction comes in, because people talk about having a nano-camera for spying, but of course a nano-camera is smaller than a wavelength of light, and therefore it can't work. You have to be bigger than a wavelength of light to be a camera. But you can build lots of sensors with nanomaterials. The other thing about nanomaterials is that you can get nano-dust, which acts like biological warfare, but it's not biological.
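The wavelength argument can be checked with simple arithmetic. The sketch below is a deliberate simplification of the diffraction limit, and the device sizes are hypothetical round numbers, but it shows why a truly nano-scale camera cannot form an image of visible light while a micro-scale one can.

```python
# Visible light spans roughly 380-750 nm. As a rough rule, a
# conventional imaging device needs an aperture at least comparable
# to the wavelength it collects; below that, diffraction prevents
# any image forming. (Illustrative simplification, not a full
# optics model.)

VISIBLE_NM = (380, 750)

def can_image(aperture_nm, wavelength_nm):
    """Crude test: the aperture must be at least one wavelength across."""
    return aperture_nm >= wavelength_nm

nano_camera_nm = 50        # hypothetical "nano-camera", 50 nm across
micro_camera_nm = 100_000  # hypothetical 0.1 mm micro-camera

for wavelength in VISIBLE_NM:
    print(f"{wavelength} nm light: "
          f"nano-camera ok? {can_image(nano_camera_nm, wavelength)}, "
          f"micro-camera ok? {can_image(micro_camera_nm, wavelength)}")
```

On these illustrative figures, a 50 nm device is smaller than every visible wavelength, so it fails for the whole visible range, whereas the 0.1 mm micro-camera comfortably passes; that is the sense in which "nano-cameras" are really micro-cameras built with nanomaterials.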
Where nanotechnology can be very useful in warfare is in building brand new materials that are very, very hard. I've done robot competitions where robots attack each other and cut each other up, and if you use really detailed nanomaterials you can make them very, very strong, so you can create great armour. That's one of the things about it. Explosive power: you can increase explosive power really dramatically using nanomaterials. I'm not too worried about nanorobots yet, but let's not get into that. The last thing, and this is very related to nano as well, is human enhancement, another technology. We've also got synthetic biology, and people really worry about that because you build new organisms with synthetic biology. At the moment, all you can build is something like a virus, which isn't really alive in the first place but is a carrier for genetic material. So you could maybe build a virus; I mean, the polio virus has been reconstructed. Now, would that be called biological warfare, given that it's not actually biological? I'm not sure, you see. So that's a problem there. The last one, then: human enhancement and deterioration. Now, human enhancement is kind of worrying, and you'll have read about students in the United States taking... thank you, what was it again? Ritalin. Ritalin, that's right. Are you an American student by any chance? No more said about that, then. Well, it's supposed to be a cognitive enhancement drug, so if you want to know how cognitive enhancement works, you'd think that would be the first place to look. But I used to present a radio programme, and I interviewed the top people in this area to find out in what way it enhances, and it turns out it doesn't enhance you cognitively at all. It's a kind of amphetamine, and what it does is give you a lot more attention.
So if you're really stupid it doesn't help at all, and if you're really clever it doesn't help at all, but if you're somewhere in the middle it can help a lot, because you can work a lot more hours. That's the point: it keeps you awake and it helps your concentration. I mean, it's really designed for kids with attention deficit disorder. This kind of enhancement has been used by militaries for a very, very long time; fighter pilots have been taking amphetamines for a long time to keep awake, so it's that kind of enhancement. But a lot of other people talk about other kinds of enhancement, for instance chips that you put in your brain, and people think, wouldn't it be wonderful if I could get a mathematics chip and just plug it into my brain, and wow, I can now do all sorts of mathematics. Unfortunately, it doesn't really work that way. We're a long way behind on this, but some people like to fantasize about it and discuss it a lot, thinking that with plug-ins we'll be able to enhance this skill or that skill. But the thing is (sorry, I'm also a psychologist), when you look at psychology, one of our great strengths, and you may be surprised at me saying this, is our limited memory. The human mind works in a very strange way: through having a limited memory, we're able to abstract a lot more. Take the patient called S, studied by the Russian psychologist Luria. S had a perfect memory; in fact, he had to develop techniques for getting rid of things. If you showed him a thousand numbers, he memorized them instantly. Luria saw him walking around a restaurant serving several hundred people, and he never had to write anything down. He never forgot anything. But he led a very strange life, because he could not abstract at all.
And that is why, as you get older, when you get to my age, you get much better philosophically, because you can't remember much of the detail of the world. So cognitive enhancement is not just a matter of sticking things in there. You might fill your head with all sorts of maths modules, if you could possibly do that, which you can't; but then you'd be a complete drongo that couldn't abstract or think at all.

Thank you very much. I think it was useful to get this overview of these various technologies, and to see that, although some of it is science fiction, some of it really is here or is coming, and is likely to become more and more important in the reality of warfare. Bill, I know you've written a book about the evolution of warfare, which certainly considers some of these technologies and how they may impact the way in which wars will be conducted in the future. Could you give us a bit of a summary, without going too much into the legal questions already, of how this will change the way wars will be fought?

First of all, thank you very much for the invitation to be here. It's a great honour to be talking in front of an audience which is so large and so obviously committed to the subject. Yes indeed, what does all of this mean for the future? Now, I have to confess that I crafted the thoughts I was going to offer without hearing the breadth of what it is that Noel has put before us. But one of the things that I think you can potentially identify in the future is the idea that the battle space is going to become de-personalised, relatively speaking. After all, if you think about cyber, it's a process that involves taking people away from the area where the effect is going to be had, at least the people who are initiating that effect.
If you think about drones, that's all about having the operator of the drone at a remove from the location where the events are taking place. If you think about automation of attack, that's all about the machine carrying out its own automated activity without an individual needing to be there to pull the levers and press the buttons on the machine itself. And if you think about autonomy, that's all about people creating a logic which the machine then employs and delivers without an individual needing even to be involved in much of that activity at the time when it occurs. So we can identify de-personalisation as one theme, if you like. Then I think we could identify civilianisation as another theme. States are finding it financially, fiscally if you like, advantageous to take activities out of the hands of members of the armed forces and to place those activities in the hands of civilians, because in a number of states you'll find the capitation rate for a civilian is significantly lower than the corresponding capitation rate for somebody in the armed forces. And then you'll find that much of the new technology we're talking about in relation to armed conflict is technology where the expertise in using it is typically in the hands of civilians. Take those two factors together and you have potentially another emerging theme, it seems to me: the civilianisation of the battlefield. I'm not arguing that the battlefield will become exclusively civilian, absolutely not; just that there is a tendency towards greater civilianisation. Now, remoteness in attack, conducting attacks from a distance, is nothing new. You can go back to Homer's time, to the ancient Greeks, to see discussion of the use of early catapults and crossbows and bows, certainly. But on the other hand, you could argue that remoteness is developing in a way which is arguably radically different.
Now, if we're talking about autonomy in attack, about machines being pre-programmed to engage other machines which in turn respond, a machine-versus-machine brand of warfare if you like, you would, I think, reasonably ask yourselves: what does that actually prove? What would such hostilities actually demonstrate? And is it, if you'll forgive the expression, warfare as we know it, or indeed warfare as we particularly want to know it? Is establishing technical superiority a goal in itself worthy of warfare? That's a quasi-ethical question that I think we would do well to start to get our heads round. For me, the partly legal, partly ethical question is whether all of this adds up to a qualitative shift from the past. You see, if you were to go back 105 or 110 years, I am sure there would be people, possibly in smaller numbers, sitting around discussing the advent of air warfare and thinking that it constituted a radical shift from anything that had gone before, such that new law was necessary; and yet the themes of the law persisted. Is what we are witnessing now, and foreseeably going to witness, going to represent the sort of quantum shift which will be acknowledged in 30 or 40 years' time as having been a radical, qualitative shift in the nature of the conduct of warfare? I don't know, but I think that is the question we need to ask. And I wonder whether machines deciding on who or what is to be attacked would constitute such a quantum shift. I suspect it would. But on the way towards that, you have an awful lot of minor enhancements of currently available technology which wouldn't, in my view, radically change things, where what you're in fact bringing in is greater degrees of automation short of autonomy in attack, i.e.
you're automating functions without getting to the point where it is the machine itself that decides, of its own motion and applying its own discretion, who or what is to be engaged. But then I ask myself whether the distinction principle, which is so core to the international law that we know and value, is sensible in its application to warfare characterised by some of the themes which I think we can foresee. Take widespread deception of the sort you can foreseeably imagine in the context of cyber warfare: in my view, future cyber warfare is going to see deception operations on a scale which hitherto we have not known. What about anonymity, the sort of anonymity which is a peculiar feature of cyber warfare? How is the principle of distinction going to survive that? What about the sheer absence of personnel, other than victims, from the area of warfare, of the sort I described earlier on? What about the increasing use of civilians in the fight? So you're using civilians, you're undertaking attacks which foreseeably are going to disproportionately affect civilians, and you are depersonalising the battle space. How does all of that square with the continuing relevance of the principle of distinction, under which, remember, it is military objectives and combatants on which attacks ought exclusively to be focused? Even notions of conflict, it seems to me, are becoming increasingly blurred, because you have situations in the modern context where an adversary is a group, and it's armed, but it communicates exclusively using the internet, using smartphone technology, things of that nature. There's no evidence of, for example, a command structure or a degree of organisation such as is required to characterise the group as an organised armed group. So, self-evidently, the law of armed conflict in relation to non-international armed conflicts is not going to apply. How are we going to deal with that sort of situation?
How are we going to deal with the blurring of the spectrum of conflict which, I think, is partly being created by some of these new communications and interaction technologies we are increasingly seeing? But, and my final point is this, new technologies include old technologies. Wars will continue to be fought using the rifle, the spear, the knife, the mortar, the rocket, all the traditional ghastly tools of warfare. So whatever rules we bring in have to be capable of addressing an increasingly broad spectrum of technology in conflict, and that, I think, is the challenge that faces all of us when we're addressing this particularly difficult issue.