OK, good afternoon. My name is Shannon Raj, and I am a legal officer at the Special Tribunal for Lebanon in The Hague, where I work in the Appeals Chamber and the Office of the President. The subject of my presentation today is the weaponization of social media and options for holding social media entities accountable under principles of international criminal law. And I don't use this turn of phrase, "weaponization," lightly, so I'll attempt to justify its use in my presentation. My interest in this topic actually originates from my time at the STL, because I've been really interested in some of the tribunal's decisions on the liability of legal persons, which I'll discuss towards the end of my presentation. In addition, social media and issues of tech are very close to my heart, because I'm originally from the San Francisco Bay Area, so I have a lot of friends and former colleagues who work in social media and the tech industry. Although that does not, as I think will soon be clear to you, prevent me from wanting to find ways to hold some of those entities accountable under international law, where I think it's appropriate. Lastly, I'll just note, for those not familiar, that the title of my presentation actually derives from Facebook's own slogan, which for a long time was "move fast and break things," which in my view is a little bit aggressive and ultimately perhaps fateful. So as has now been widely reported, social media played what the UN called a "determining role" in the violence that transpired against the Rohingya. As many of you know, hundreds of Myanmar's military officials engaged in a massive disinformation and propaganda campaign to support its ethnic cleansing of the Rohingya. They distributed photos of corpses from fake massacres they said were initiated by the Rohingya.
They shared fabricated stories of rape, and they used troll accounts to flood social media with incendiary posts, timing those posts to hit absolute peak viewership. And what was notable to me as I read those reports was the sheer breadth of the campaign, conducted in a really organized fashion by a country's military. And of course, we know that it resulted in the largest forced migration in recent history. Obviously, this is just one small example among many we could pull from regarding the role of social media in international crimes, but it's a useful focal point and case study for what I'm talking about, so I want to keep it in mind as we go through some of these issues today. And on that note, there was one other key fact I wanted to point out regarding Facebook and the Rohingya. In 2015, a group delivered a presentation to Facebook executives at their headquarters in Menlo Park, California, where they literally told the executives that there was a distinct risk of Facebook being to Myanmar what hate radio was to Rwanda in the days preceding the 1994 genocide. So they received warnings. They were alerted to the ways in which the platform was being used in Myanmar. And by all accounts, they didn't act strongly enough or quickly enough. There were excuses offered later, that they didn't have enough manpower, enough translators, and it's outside the scope of this presentation to go into the validity of those excuses. But suffice it to say that they were warned. So while obviously ethnic tensions, xenophobia, and hatred don't originate online, it's important to note the ways that these tensions are magnified and distorted through the lens of social media. There are a few features of social media that I'd like to note here. First, most social media platforms are not simply chronological reflections of what their users post to the internet. That would be one thing.
And I hear that a lot: well, the social media platforms just reflect what we post. But that's not really true. Most platforms today have highly sophisticated algorithms that are meant to keep you engaged on the platform for as long as possible. And this is something that Alan Sears, if you were in one of the morning presentations, touched on in his presentation. What researchers figured out is that by feeding you more extreme content, based on your preferences and your past engagement with the platform, you would stay on longer and engage more. The New York Times recently highlighted this in a discussion of YouTube, explaining that, for those of you familiar with YouTube, it auto-plays another video when one ends. The article described how you would get taken down a rabbit hole: you might start off watching, for example, a video about space and the planets, and five auto-played videos later, you would be fed a video about how going to the moon was a hoax, or something like that. Facebook has similar techniques for keeping you engaged on the site. They call it reducing and amplifying posts. Based on your prior conduct on the site, the things that you like, the pages that you interface with and visit, Facebook might think it's worth amplifying certain types of posts for you, making them appear at the top of your feed or more regularly, and reducing others. So as you know, it's constantly collecting data on you and targeting its content to keep you, specifically, engaged. So what does this mean for atrocity crimes? Well, it means that social media plays a role in driving people towards more extreme content. In addition to fake news being spread, and in addition to fake accounts being created, like those by Myanmar's military, Facebook will specifically target those posts towards certain demographics that it thinks may be vulnerable or susceptible to their messages.
So in the wake of all these reports about the role of social media in fueling violence against the Rohingya, a lot of people started to compare these platforms to news entities that played a role in spreading propaganda and hate speech in past conflicts. One journalist said you can hardly fail to see the parallels between the use of social media in Myanmar and that of hate radio in Rwanda. And I've heard that comparison a lot. But are these entities really analogous to traditional forms of media? Of course, many people get their news from social media, unfortunately, and it is a means of spreading information. But there are some really key differences. And it's worth noting that in his testimony before the US Congress, even Mark Zuckerberg, when asked whether Facebook is a media company, said no, I consider it a technology company. So in order to assess what accountability for social media platforms would look like, I took a look at some of the traditional media cases in ICL on the subject of incitement and hate speech. These are cases prosecuted under the theory of direct and public incitement. A classic one, of course, is the 1946 Nuremberg conviction of Julius Streicher, who was the publisher and editor of Der Stürmer, an anti-Semitic weekly newspaper. According to the International Military Tribunal, he infected the German mind with the virus of anti-Semitism and incited the German people to active persecution. Similarly, in December 2003, the ICTR convicted three media executives in the case involving RTLM, the hate radio station in Rwanda, for their role in inciting bloodshed. It held that editors and publishers have generally been held responsible for the media they control. But in doing so, it really emphasized the importance of intent, the purpose of the communications that they were channeling. So, for example, they looked at the media publisher's purpose in transmitting the material.
So was it for purposes of historical research, disseminating news, or holding public authorities accountable? And the tribunal found that in that case, genocidal intent was evident not only from the actual broadcasts, from the propaganda being spread by RTLM, but also from the individual statements made by each of the accused. And going back to the Streicher case before the IMT, the tribunal similarly found that Streicher himself was clearly a staunch Nazi and a supporter of Hitler's policies, and they used that in their finding of intent. So the analogy from these media cases to social media just isn't a very good fit, because I don't think most of us could plausibly argue that Facebook executives specifically intend to exterminate the Rohingya. It presents a real challenge to any accountability mechanism based on a theory of incitement. So what I argue in my paper, which is still a working paper, is that rather than seeking to hold social media entities accountable under principles of incitement, perhaps a more appropriate option is to reconceptualize social media as a weapon, and to consider these cases as analogous not to media cases, but to arms supplier cases. This would come under a theory of complicity, and a lot of these arms suppliers have been charged with aiding and abetting. Traditionally, the elements of aiding and abetting are providing practical assistance, encouragement, or moral support, which has a substantial effect on the perpetration of the crime, with knowledge that these acts assist in the commission of the crime. And the key you'll note here is the lower intent requirement for aiding and abetting. Unlike incitement, it requires not the intent to assist in the commission of the crime, but just the knowledge that these acts assist in the crime.
So taking our case study of Facebook and the Rohingya, for example: we noted earlier that the UN already determined that the platform had a determining role in the violence, so I certainly think that would meet the requirement of providing practical assistance. And in light of those presentations that were made to the executives at an early stage, it cannot really be denied that the company had knowledge of the way its platform was being used. Now, there are a number of cases we could look to in which weapons suppliers were charged with aiding and abetting, but I just wanted to highlight a few of them. First, the Zyklon B case, before a British military court, concerned the poison gas used for mass extermination at Auschwitz. In that case, the supplier was actually able to evade liability by pleading ignorance of the end use of the product, which I thought was very interesting, because the supplier essentially said: yes, we knew the poison gas was being sent to the concentration camps; yes, we knew about the concentration camps; but we thought it was being used as an insecticide. All those people, you've got to keep the bugs out. So that surprised me, but they were able to evade liability. Of course, even that would stand in contrast to a case like Facebook's, where knowledge of the end use was made quite clear to the company's executives. Second, in the Šainović case at the ICTY, a Serbian commander was accused and convicted of providing weaponry, including tanks, to assist the VJ, and of organizing and equipping its units. He was convicted of aiding and abetting; the trial chamber held that this constituted practical assistance in light of the fight against the KLA and NATO. And of course, in the Charles Taylor case before the Special Court for Sierra Leone, the trial chamber found that Taylor was aware of the RUF's operational strategy, which was to terrorize civilians, and of its intent to commit crimes.
And that, nevertheless, he gave the group guns and ammunition that fueled its terror campaign. So let me just briefly describe some of the key advantages of conceptualizing social media platforms as weapons suppliers. First, as I mentioned, the intent requirement under a principle of aiding and abetting would be lower. In that regard, we're talking about the requirement at the ad hoc and hybrid tribunals; the ICC has a higher intent requirement under the relevant article. At that court, it actually requires that the assistance be provided for the purpose of facilitating the commission of the crime. Some of you will also be aware that there was a line of ICL cases which, for a while, actually imposed a heightened intent requirement for aiding and abetting: they required that the perpetrator commit acts which were specifically directed to assist in the commission of the offense. But that requirement was dropped by the Šainović case, which held that it was in conflict with the prevailing jurisprudence and customary international law. And that now seems to be the consistent view that has emerged; the same conclusion was reached in the Taylor case. Another advantage to prosecuting social media as a weapons supplier is that knowledge is now a lot more difficult to deny. In an article focused on arms suppliers, William Schabas said that, with regard to violations of international law, establishing knowledge of end use today is much less difficult, both because of the scale and nature of the assistance in these cases and because of the intense publicity around them. You have specialized reports, for example from the UN, and of course you also have mainstream media. So he concluded that a court should probably have little difficulty finding that entities like diamond traders, airline pilots, and small arms suppliers would have knowledge of their contribution to a conflict.
And I would argue that the same would hold true for social media. Third, the analogy seems apt because of the lucrative aspect of both arms supplying and social media: they're similarly situated in that their primary intent is generally not the commission of crimes, but the generation of profit. Further, of course, under aiding and abetting liability, no physical presence is required, so an arms supplier need not actually be present at the site of the crime to be held responsible. That is very valuable when we're talking about social media entities, where the click of a keyboard can have ramifications on the other side of the world. Lastly, both traditional weapons suppliers and social media entities may be especially incentivized to alter their behavior if we apply ICL to their conduct. And I'm really optimistic about this, actually, because in light of some of the public backlash, Facebook and other social media platforms have already begun to take steps to alter their algorithms, remove content, employ more translators, and so on. A friend of mine who was previously employed at the ICC was actually recently hired by Facebook, and she moved to California to help them address this backlash. She sent me some documents about what they're doing, all of which are public, and she explained that they're working on creating a public content appeals board, where people could appeal content decisions to an independent body of experts. And those experts would be answerable only to Facebook's users, not to the company. Facebook is now holding workshops around the world, in Singapore, Delhi, Nairobi, Berlin, and elsewhere, to get feedback on issues of human rights and figure out how this board would work. So of course, it's up to you and others to judge whether these steps will be effective or whether they'll be enough.
But my point is just that, in terms of deterrence, when international law is applied to a set of actors, it's not so often that most defendants say: let me reform all my practices, let me take all of this into consideration, so that I can ensure I'm not in violation of international law and human rights standards. So this seems like a really good application of international criminal law jurisprudence in terms of deterrence. The elephant in the room, just briefly, is of course the prosecution of social media companies as entities. And this is something we can talk about more in the Q&A. But as you know, corporate liability was dropped from the Rome Statute, both because of a lack of political will and because of concerns about complementarity. But there are a number of indications to suggest that there's an increasing movement towards the liability of legal persons under ICL, and some of the jurisprudence from the STL is really relevant in this regard. We can talk about it more in the Q&A, but some of those contempt cases refer to a growing number of states criminalizing the acts and conduct of legal persons, and to an emerging shared international understanding of the need to address corporate responsibility. So I'll just conclude by leaving you, of course, with the horrific tragedies out of Christchurch, New Zealand, last week, because this was an attack that was really born of, and made for, the internet. It was streamed on Facebook Live; a post on 8chan directed users to the Facebook stream; video of the shooting was rapidly spread across YouTube and Instagram; and the manifesto was spread throughout Twitter. And so, as one commentator said, we can only have an honest analysis of the sources of this violence if we understand how it grows and spreads. The worst of social media is not a mirror image of us, but really a distortion. He said it's a fun-house mirror that bulges and squeezes and disfigures us in ways that mock our humanity.
So my aim on this subject is simply to try to apply ICL to bring social media more in line with our humanity. And I look forward to your questions. Thank you.