Good afternoon. So I'm in that wonderful position of being the last person, which is good for you: you only have to stay awake for 15 minutes without visibly falling asleep on me. It's okay as long as you don't snore too loudly. I would like to first acknowledge my co-authors; I've been working on this research with co-authors who are all at NYU. I'm actually based at the University of New South Wales, which is in Sydney, not Wales. I was once asked by an American where New South Wales is, and I said just south of Old Wales, and he seemed to accept that without questioning me. Which, technically, is true: we are south of Wales, just with a hell of a lot better weather. So my job here is to try to tie this all together and look at it through one particular prism. What I think we've all been talking about in this panel is really the role of corporations in society, and the bigger problem we've been discussing for the last two days is how corporations, as non-state actors, fit into our state-centric framework of international law. How does that all fit together? Claire was looking through the particular prism of due diligence as a way of regulating companies. Justin was talking about the role a potential ombudsman could play in doing that. And Sabine was looking at the role of companies in societies undergoing transitional justice. My particular way of tackling this is to look at the very topical issue of social media companies. I'm not so much tackling the thorny issue of Cambridge Analytica. Obviously Cambridge has long been held by us all as a bastion of ethics and virtue, but no longer: you are now all tainted with Cambridge Analytica, even if the firm is in London. Cambridge's name is now forever slurred, I think. I'm not here to slur it further, so I'm not going to address that. I'm looking in particular at the issue of the role of companies in harmful content.
I'm not so much looking at the underhand tactics around privacy, though we'll get into a little of that at the end. What has become very apparent to all of us, and has been somewhat forgotten in the last few weeks of drama around Cambridge Analytica, is the really revolutionary role the Internet has played and its profound impact on human rights. It has had an unimaginable impact on access to education, furthering it in many areas. It had become closely associated with democratisation, at least until Cambridge Analytica. It has given particular prominence to organisations and political movements outside the mainstream that simply wouldn't have had that reach without the Internet. It has also given economic opportunities to millions of people around the world. So when we talk about the problems of late, and about how we might regulate Internet platforms, a lot of the human rights discussion focuses on the right to free speech. Free speech is, of course, enshrined in the International Covenant on Civil and Political Rights. And when we talk about regulating the Internet, what a lot of people leap to straight away is: well, the problem is we'll be limiting free speech. What we sometimes forget is that we already limit free speech in many ways. You can see in the ICCPR and the European Convention that this is a basic right to hold opinions and to seek and receive information, but you can also see that we have for a very long time been clear that it has limits, that it may be restricted in certain areas. The vagueness and the problem, I think, of looking at this through a human rights framework is that there's a lot of ambiguity around those limits: vague terms like national security, public order and morals.
And we've seen in many places, particularly more draconian states, that they might use those to clamp down on free speech. That's the argument people have been making in relation to this particular problem. The particular issue I want to focus on today is the notion that the Internet has become polluted with harmful content, and what we are going to do about it. In this particular paper, we're talking about the potential regulation of two types of harmful content. One is terrorist incitement: using the Internet for things like terrorist recruitment. The other is what we call politically motivated disinformation. Many people call it fake news, but it's hard to use that term now without conjuring up images of Trump, for whom fake news is anything he doesn't like. That's a term that's been used a lot in the media, but we're looking specifically through the lens of what we call politically motivated disinformation. So we're asking what the role should be and who should be regulating this. Is it states? Is it companies? Is it other stakeholders? Is it the job of people like you and me? Do we, as consumers, have some responsibility in relation to this? These two issues, terrorist incitement and politically motivated disinformation, are closely connected, because in both cases social media has been used as a kind of media warfare, to confront audiences in ways that wouldn't otherwise be possible. And they take advantage of the essential features of social media: an enormous audience; something that can adapt very quickly; a capacity to launch new ideas, including untrue ideas, with a real sense of energy and urgency; and a medium which really lacks vigorous oversight.
So in terms of terrorist incitement, one thing we've been looking at is the way groups like ISIS have used social media on a really unprecedented scale, both to recruit new members and to incite violence. Studies done in the last year have estimated there are about 200,000 pro-ISIS messages a day, largely related to recruitment for ISIS or inciting violence in its name. One of the challenges we'll come to when we look at regulation is how we actually define terrorist incitement. Some of the new laws that we'll get to in a moment basically ban speech that constitutes terrorist incitement. But that itself is a vague term. For years states have been trying to define what terrorism is, and we still don't have a comprehensive convention on terrorism, yet now we're putting it in the hands of non-state actors like companies to ban terrorist incitement, when states themselves aren't clear what it is in the first place. That's one of the problems we'll come to. The other issue we're focusing on is this so-called fake news, or politically motivated disinformation. This came to the fore quite dramatically with the 2016 US election. The roles that groups like Cambridge Analytica and Facebook played in that particular election are still unclear. But some studies have looked at how widely news circulated during the US election, and you can see that fake news was actually getting a higher read rate than mainstream media news during that period. So what we want to examine is the role of both companies and governments in trying to put an end to politically motivated, deliberate falsehoods.
This is what we'll spend the rest of our time on, because my interest in this issue is not so much that these problems are out there, but what we do about them. And you'll be glad to hear that I don't have any final answers. One of the arguments being thrown around is the notion that the problem with regulating free speech is censorship creep: once you start to regulate speech, you open the door as to who might do that, and to how governments in particular might apply it wrongly. So the argument runs that instead of regulating speech, the remedy to be applied is more speech. That quote is often attributed to US Supreme Court Justice Brandeis, when he was talking about the role of sunlight, if you like. And some people in this forum have been talking about technology being our saviour: technology here is the problem, but the way to solve it is to allow more technology, and so more speech. Now, I think the problem is that this may have worked, or been more plausible, in the past, but the speed and scale of Internet traffic is now so great that it has eroded the more-speech solution. Today harmful content can spread so widely and quickly that rebuttal becomes ineffective, and the rebuttal is often lost. Those who speak first are those who speak loudest, and we're not convinced that simply giving speech free rein will be the answer to harmful content. The next issue getting a lot of play in this area is the role of government, the role of states, in regulating digital content.
And this is particularly interesting to us as lawyers: how might this be done in a way that is reasonable and that stays within the international human rights framework? We've seen some advances here. The new European Union law, the General Data Protection Regulation, comes into force in May. It will require companies to report data breaches within 72 hours, it will give users of companies like Facebook, Google and Twitter a little more control over how their information is collected, and it provides for fines of up to 4% of a company's revenue. We've also seen the new German law that started this year, which can require companies, on request, to remove hate speech, basically within 24 hours, or potentially face fines as high as 50 million euros. Both of these have some good points, but I also think they carry some real dangers. One is the notion of censorship creep: when states start to regulate in this way, it encourages other states to use this type of regulation for purposes that do not further democracy. We've already seen this in Poland, with its 2016 counter-terrorism law, which allows greater suppression of websites in the name of national security. Even the German law, which links to the criminal code, requires the takedown of speech that assists the formation of terrorist organisations, but also, much more broadly, speech that defames religion. These are really broad categories, and right now what some of these laws do is tie themselves loosely to state law while leaving a lot of discretion to companies and governments as to what content should be restricted.
I also think that a requirement like the German law's, to take down information within 24 hours, will lead companies to err on the side of taking down more information rather than less. Why would they leave it up? If they receive a takedown request and are potentially facing a large fine, what interest do they have in arguing about it? I think there will be more haste in taking down content. The third point is self-regulation. This is a very familiar theme in the business and human rights debate: that we don't need new laws, that companies simply need to self-regulate. And some of what we've discussed today, particularly with Claire and Justin, suggests that what they're responding to is actually a failure of self-regulation. Now, I'm not saying self-regulation is of no use, but I'm not convinced it will solve the issue by itself. Today we saw Facebook announce that it is banning the IRA, not that IRA, the Internet Research Agency, the Russian group, from its pages, which is a very delayed but proactive step by the company, basically saying: we recognise there's a problem, albeit maybe two or three years late. In the last couple of years we've also seen companies like Facebook increase their human fact-checkers, Google adjust its algorithms to bring more human oversight into them, and YouTube look at how it identifies and eliminates violent extremist videos. So we've seen the companies forced into taking proactive measures, if you like, even if they are reacting to issues as they come up. I think the problem with relying purely on self-regulation in this context is basically the business model. A company like Facebook derives 89% of its revenue from advertisers. With that sort of model, the whole model of Facebook is built on selling users' data.
Now, I also think we were blind if we didn't realise, when we bought into Facebook in the first place, leaving the underhand tactics of Cambridge Analytica aside, that the way these companies make money, the reason we get a free service, is that they use our data and sell it to advertisers. That has been apparent for some time; more than five years ago there was a whole debate, which Mark Zuckerberg weighed in on, basically arguing that privacy is dead, that we have willingly forsaken our privacy. What we're starting to see now is an attempt to claw that back in different ways. So where we get to in the end in our research is perhaps a more hybrid approach. Part of it will rely on companies to act, but with much more vigorous oversight of how they do it. And importantly for me, the point about self-regulation is that it shouldn't all be in house. If companies are making these decisions about takedown, most of the people in these companies are not human rights advocates; they're not experts. So they should be involving a broader group of stakeholders in these decisions; companies should be reaching out. For years there was a similar argument about how companies dealt with their supply chains, with the apparel companies. The initial reaction of companies like Nike was: no, we've got this under control, we're going to do it in house. Clearly they didn't, and the social media companies don't either, but I do think they have an accountability and self-evaluation role to play here. I think government interventions will also have a part to play in the research we're looking at, particularly around politically motivated disinformation. I think there's a real role for targeted legislation.
I'm not convinced of the value of something like the German law, but I am convinced of the value of laws that require transparency around political advertising: requiring online firms to act as offline firms do in relation to political advertising must become a reality. There's a bill in the US called the Honest Ads Act, which I think is a start in this direction. So what we need is really smart, targeted legislation, not just a crude, blunt tool that may allow censorship creep to happen. Where we come out, I think, is this hybrid approach: for particular problems, legislation may be useful, but it has to be really targeted at the problem; and we want companies to continue to self-regulate in a way, but with oversight and with the involvement of external stakeholders. I think we're at a point in this particular field where a lot has been written about the moral reckoning of the Silicon Valley giants in the last few weeks. For us, that morality isn't going to be enough. What we've seen is companies coming out with great regret and assurances that it won't happen again, and that by itself isn't going to be the answer. So I think what we need is basically greater transparency in this area, both from governments, if they're requesting information be taken down, and from companies. Our way forward is to think about what the particular problems of these two issues are, and to develop legislation that specifically targets them. Thank you.