Those are a few tough acts to follow, so thanks to my fellow panelists. This is a good segue, though I will take a slightly separate tack. One of the papers this talk builds on is called a "view from 40,000 feet": not even a bird's-eye view, but a very high-level view of the changes that technology in general, and AI in particular, could bring to the global legal order. A lot of this is an attempt to sketch out a research agenda, so these are strong claims, lightly held, offered for discussion.

I want to start with a poem, and it begins: "Technology. I used to hope for a breakthrough, and now I wonder what into." That reflects a real apprehension: there is a lot of hope for what technology can do for us and for the world, but also a pervasive unease, which we have of course seen a lot of over the last couple of days. What does that mean?

Let me briefly scope and define AI, because there has been a lot of discussion over AI and intelligence and what that means. The last survey I saw found 72 definitions of intelligence, which does not seem very useful and can often be misleading about what we mean when we talk about AI. I would rather define it functionally: AI is a general-purpose technology that can be embedded into, or distributed across, platforms, cloud networks, existing software systems, or administrative systems. Why do we use it? To aid, substitute for, or improve on human performance in areas like accuracy, speed, or skill. And why can we do this?
Because AI, meaning today's machine learning systems, is good at narrow tasks. We do not have general AI that can do anything, but we have many narrowly trained systems that are good at specific tasks, and those narrow tasks are domain-general. If you can improve on data classification or optimization, you can use that whether you are in banking, in the military, or in energy management, and that is why AI can be used in so many sectors with so much utility.

So in this paper I look at what happens when something that can do data classification, recognition, prediction, optimization, and autonomous operation is eventually put to uses that are genuinely disruptive, in the sense that they generate global or transboundary impacts, or that are erosive. We have seen discussions of computational propaganda, of cyber warfare systems, and of systems that destabilize prevailing balances of power or nuclear deterrence. There is now also a lot of discussion over using AI systems to inform, and to stack the deck in, international negotiations: last summer the Chinese Ministry of Foreign Affairs actually started using a negotiation support system that informs negotiation strategies.

Many rightly ask: how do we govern AI, at the national level or, in some of these cases, at the global level? But a preceding question, perhaps, is: what will AI do to the tools of governance that we mean to resort to in governing this technology? How will AI affect the efficacy or reliability of international law? In the first place, it is important to recognize that this is not new.
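To make the domain-generality point concrete, here is a minimal, purely illustrative sketch (not any system referenced in the talk, and all data is synthetic): a single generic classification routine, reused unchanged on data from two unrelated sectors.

```python
# Minimal sketch: one generic classifier, reused across two unrelated "domains".
# All data below is synthetic and illustrative; no real system is implied.

def train_nearest_centroid(samples, labels):
    """Compute one centroid per class from (feature-vector, label) pairs."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        sums.setdefault(y, [0.0] * len(x))
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [s + v for s, v in zip(sums[y], x)]
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def predict(centroids, x):
    """Assign x to the class whose centroid is closest (squared Euclidean)."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda y: dist(centroids[y], x))

# Domain 1: toy bank-fraud data (transaction amount, hour of day).
fraud_model = train_nearest_centroid(
    [[900, 3], [850, 2], [20, 14], [35, 13]], ["fraud", "fraud", "ok", "ok"])

# Domain 2: toy energy-load data (temperature, building occupancy).
load_model = train_nearest_centroid(
    [[30, 80], [28, 90], [10, 5], [12, 8]], ["high", "high", "low", "low"])

print(predict(fraud_model, [880, 1]))  # -> fraud
print(predict(load_model, [29, 85]))   # -> high
```

The routine knows nothing about banking or energy; only the training data changes. That interchangeability is what "narrow tasks, domain-general" means in practice.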
Colin Picker has a study of the long history of technology and international law, in which technological change repeatedly creates the political impetus for legal change, often by producing wars bloodier than anything seen before. You get the Thirty Years' War with gunpowder weaponry and then the Peace of Westphalia; you get the First World War with trench warfare; and again in the Second World War, nuclear weapons. All of this feeds into rationales for major new legal innovations.

But in some cases technology instead undercuts legal regimes. Take submarine warfare, which came onto the scene just as there were international negotiations over an international prize court, meant to adjudicate over ships captured by commanders. Once submarine warfare was under way, the whole practice of capturing ships, and having the space to take the crew aboard and ferry them back to shore, became obsolete.

In this literature you can see the theme that technology changes the legal situation directly: it can create new entities that are not neatly covered by existing legal structures; more commonly, it can enable new behavior that is not neatly covered; or, indirectly, it can shift the incentives or values of the regulated parties, in this case the states in the international legal system.

My road map looks at three ways this can change the legal situation and have legal impact. First, how do these changes make AI a problem for international law: what gaps does it create in existing regimes, and what would legal development to plug those gaps look like? Second, could AI be a substitute for, or an augmentation of, international law, that is, legal displacement?
And third, could AI be a threat to international law, even legal destruction? Destruction may be a strong term; I would have gone with erosion, but then you lose the nice development-displacement-destruction alliteration. So take that with a grain of salt; it is stylistic.

Let me unpack these a little. For legal development, I draw on the scholarship of Lyria Bennett Moses, who has a four-part framework for thinking about new legal situations. First, she says, sometimes technology simply creates a legal gap: there is a need to generate new law. Nuclear weapons were new and not necessarily covered, although perhaps by some extant principles, and we essentially said we need to negotiate a new ban aimed specifically at this technology and nothing else. You could say that politically this is difficult, but it is what is currently being worked on at the CCW in the discussions over lethal autonomous weapons systems: new applications of AI in warfare, with new solutions being tailored to them.

The second category is that technological or socio-technical change, a new behavior or a new entity, creates legal uncertainty: we are no longer sure where or how certain extant concepts apply. Think of concepts like responsibility, command responsibility, control, or attribution. There is a big literature in the autonomous weapons debate over whether such systems are "means or methods" of warfare under the Geneva Conventions' Additional Protocol I, whether the Article 36 weapons review applies, or whether we can still think in terms of the responsibility of actors. This has been the source of major literatures, and we could see it in other spheres as well.
Although Thomas Burri has argued that there is actually quite an extensive body of international case law establishing precedent for state control and attribution, so in principle all we need are judgments saying that this also applies to "killer robots". How quickly those judgments come is another question, but the legal development that is needed seems possible within the tools of international law.

The third category is wrong scope. Suddenly we have a new entity or a new behavior that either is not covered by the law when we think it obviously should be, or, the other way around, is covered by the law in a way that makes us say: hold on, that is not what we meant by those laws. Let me briefly jump ahead to an argument Burri and colleagues have made. There are lots of debates over whether AI systems should have personhood. There is a big literature in moral philosophy over whether they should qualify for rights and on what grounds, and a parallel, more legal literature, setting aside the metaphysics of suffering and consciousness, asking whether, as in the case of a corporation, it would be legally useful to give them rights. Those are interesting and relevant legal debates to have. But the claim here is that, preceding those debates, it has apparently already been possible, using existing company law in a number of states, I believe the US, Germany, Switzerland, and the UK, to incorporate a limited liability company and put an algorithm in charge of it, functionally ascribing legal personhood to an algorithm. It is basically a legal hack: if you can do this, other extant legal provisions, such as ECJ rulings, would mean that once you have established such a company in one EU country, it has to be recognized in the other EU countries.
And that seems to be a case of wrong scope: the existing law suddenly allows you to give personhood to an algorithm, and possibly to evade criminal responsibility, and that is not what we meant at all, so we need to clarify that this is not going to fly.

Finally, there is legal obsolescence in Bennett Moses' typology: certain assumptions that underpinned the existing law are no longer applicable. The first version is that the conduct the law covered has been made obsolete or superseded. The standard examples discussed, laws on telegraphs and carriers, say, simply do not get invoked that often anymore. It is maybe a bit of a stretch, but a good argument here might concern the increasing automation of warfare: as soldiers are taken out of the direct theater of battle, fewer soldiers will be captured, at least in peer-state combat. Of course you still have asymmetric warfare, but even there, if you are flying drones, there seems to be less opportunity to capture soldiers. That might leave the provisions on the treatment of prisoners of war in a state where they are rarely invoked: we only capture one prisoner of war a year, so to speak.

Secondly, some of the justifying assumptions behind the rules may no longer be valid. Bodies of law on labor conditions assume that there are many industries, or types of industries, in which human beings are employed. If current predictions hold, and there is of course a lot of disagreement about this, but some predictions run to 20 to 30 percent unemployment within 30 years, that might shift some of those assumptions.
There is a related discussion over the human right to work, which, to be fair, has always been aspirational: it was never about states having to provide employment, but about not keeping people out of employment. Still, it rests on the assumption that it is possible to give work to everyone, and that may no longer hold. I am going to speed up because I think I am running late.

Finally, enforcing a rule may simply no longer be cost-effective. There have been discussions over deepfakes and fake news in the last few days, as we have seen. One concerning use case: people have used deepfake systems on desktop computers to recreate scenes from Star Wars that previously took massive teams and CGI budgets. If you can generate fake evidence of war crimes or human rights abuses at scale and flood human rights observer agencies with it, you undercut the credibility of their mission, because it becomes easier for other parties to say: we have demonstrated that some of this is fake, so why isn't all of it fake? And even apart from that, you can simply overload such an agency in terms of labor costs, the work of going through all this material and identifying what is authentic and what is not. So enforcing those rules may become less cost-effective, at least in that way.

That creates a number of holes or possible gaps in international law, and we need to think about how to plug them. Secondly, AI as a substitute for international law. Beyond the personhood point, there has been discussion at the domestic level over whether we can automate law, and whether that transfers to the international law level. There are two versions of this argument.
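The overload argument is, at bottom, a cost asymmetry: generating a fake is cheap, vetting one is expensive. A back-of-the-envelope sketch, where every number is a hypothetical assumption chosen only to illustrate the shape of the problem:

```python
# Hypothetical cost asymmetry between generating and vetting fake evidence.
# Every figure below is an illustrative assumption, not a measured number.

gen_cost_per_clip = 5.0        # assumed cost to generate one fake clip (USD)
vet_cost_per_clip = 500.0      # assumed analyst cost to vet one clip (USD)

attacker_budget = 100_000.0    # assumed
agency_budget = 1_000_000.0    # assumed

clips_submitted = attacker_budget / gen_cost_per_clip
clips_vettable = agency_budget / vet_cost_per_clip

print(f"clips submitted: {clips_submitted:.0f}")                    # 20000
print(f"clips vettable:  {clips_vettable:.0f}")                     # 2000
print(f"fraction vetted: {clips_vettable / clips_submitted:.0%}")   # 10%
```

Under these assumed numbers, an attacker with a tenth of the agency's budget still swamps it tenfold, which is the sense in which enforcement stops being cost-effective.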
One is: can you use it in adjudication? But the first is: can you use it to enforce laws and to monitor compliance? That actually seems quite possible in some cases. PAWS is a system used to predict where poachers will go, to help wildlife reserve security forces intercept poaching groups. Sentry is a system used in Syria to give advance warning to civilians: based on what its developers describe as machine learning, it predicts where strikes are likely to arrive, a way of warning civilians in a war zone. So there could be some utility in using this to monitor compliance with treaties, in the same way that we have used nuclear "sniffer" planes or spy satellites to detect compliance.

The other version is the discussion over automated administrative decision-making: can that scale up to the international law level? It seems unlikely. Burri in particular has argued that this works in domestic law because domains like tax law or patent cases are large and homogeneous, so there is enough data. In the international law context, the data sets are too heterogeneous and too unstructured, the process would be vulnerable to data poisoning, and there is of course the legitimacy problem: who is going to accept this? Who is going to code and train the "ICJ AI"? That does not seem likely.

Another version, which I think we discussed in the last few days with Professor Kingsbury, is regulation through infrastructure, or non-normative technological management: can you change the technological environment of states to change their behavior? There are some proposals, like the discussions over geofencing autonomous weapons.
Basically, they would drop out of the sky if they crossed a border. I am not sure that works, because it is easy to get around, many applications do not seem to work that well, and above all you need states to submit to it. It does not seem very promising.

Finally, with one minute left: AI as a threat to international law. There are two versions of that. The soft version: we have found all of these legal gaps and problems that need to be addressed, and it is simply unlikely that international law will have the versatility to adapt to them. That is an efficacy problem, because these problems will go unsolved, and a legitimacy problem, because it will become increasingly visible that law is not solving them, especially since most of the usual tools of international law are not adequately prepared for dealing with a technology like AI, for a range of reasons I am going to skip past.

But most important is the hard argument: AI might facilitate a shift towards unilateralism in general. Some scholars, like Harari, have suggested that AI creates a premium on centralized data processing. In the past, the story goes, democracies outcompeted autocracies because they were decentralized, which is allegedly why we won the Second World War; in the future that will no longer hold, and the differential advantage will go to autocracies. More problematically, and this is a very steep claim, whatever benefits states, especially the major powers like the Security Council's Permanent Five, previously perceived they achieved by engaging in the international legal order, they might come to perceive they can achieve through unilateral use of AI capabilities such as enhanced surveillance or computational propaganda. And yesterday we heard a talk by Dr. Kowal on how social media and AI propaganda might undercut enforcement through reputational sanctioning.
And if that is the case, it is a bit of a problem, because the same leaders in AI development are also the major parties on which international law, at least its non-interference norms, depends. This is not to say that AI unilateralism is the only factor, or the major or determinant factor, challenging the international legal order right now, but it may speed up the decline of multilateralism.

Putting that together, you can organize this in different ways, and there are still a lot of question marks, but this is one way to organize approaches: what problems does AI throw up for international law; is it something law can be automated by or substituted with; are there opportunities to use AI to strengthen international law; or will AI erode the political scaffolding, the normative software?

I want to finish with another poem, because I like poems: "They have seen the last light fill; by day they kneel and pray; but still they turn and gaze upon the face of God today; and God has touched and weaves anew for the lost souls around; and sorrows turn their pale and blue, and comfort is not found." This is by the GPT-2 AI system, the limited-release version; the full version was not released because of worries over how it could be used. So thank you very much.