All right, so we're going to kick off with this presentation. We start with a presentation from our guests, and then we have a whole panel of panelists ready to comment and react and tell us what they think about global online speech norms. Our two guests are Kate Klonick, who is an assistant professor at St. John's University Law School and a fellow at Yale's Information Society Project. She holds a JD from Georgetown University and a PhD from Yale Law School. Recently, Kate published a well-known paper in the Harvard Law Review, which we all draw inspiration from, entitled The New Governors: The People, Rules, and Processes Governing Online Speech. More recently, she's been co-authoring another paper together with Thomas Kadri, which they are going to introduce and talk about in their presentation. Thomas Kadri is a PhD candidate at Yale Law School with a JD from Michigan Law School and an MA from the University of St. Andrews in Scotland. His interests span very widely across free speech, privacy, property, torts, criminal law, and the internet. And so I will let them kick off with their presentation. Thank you.

Everyone hear me OK? The mic's working? Great. Oh, one more thing: this event is being livestreamed, so if you need to intervene or ask questions, just bear that in mind. Great. Well, thank you so much for having us, and thanks to the Berkman Klein Center. We're really, really delighted to be here to talk to you about our paper, Facebook v. Sullivan: Building Constitutional Law for Online Speech.

So in the lead-up to the last presidential election, you may remember that there was a big kerfuffle in the news about some statements that then-candidate Donald Trump was making about his proposed Muslim ban. This was obviously big news across the country, but it was also big news within Facebook. And that's because, plausibly at least, the words that Trump was saying, these statements he was making about his desire to have a Muslim ban, violated some of Facebook's own internal rules. And some people within the company were upset about this. They thought: why is it that Donald Trump gets to say these things when, if somebody else said them, they would come down? They would be flagged, and they would be removed under Facebook's internal rules. And in addressing why the company decided to keep these statements that Trump had made up on the platform, Zuckerberg explained it in terms that were quite reminiscent of First Amendment law, for those of you who are familiar with certain First Amendment concepts. He was saying that Trump is a public figure, obviously; he's running for the highest political office in the land. And they were saying that the content of his words was newsworthy. This is a public policy pronouncement that he's making. It's a huge part of his platform for president. And he was using Facebook as a way to get that message out. And so, as hateful and divisive and cruel as some of his comments were, it was newsworthy. It was in the public interest. And Trump, as a public figure, got to have his words stay up on the platform. And there were a lot of people who were really upset about that, both within the company and outside of it. So why do we start with this anecdote? We start with it because it tells us something really quite interesting about the dynamics of online speech today.
And that is that it's no longer the case that the bodies responsible for adjudicating competing claims about harmful speech are the old governors that we know so well: the courts applying First Amendment law and First Amendment principles to torts such as privacy, defamation, and intentional infliction of emotional distress, the so-called communications torts. There's a new player in town, as Kate so wonderfully coined it: the new governors. And so what this paper attempts to do, at least, is to unpack a little bit how these new governors are borrowing from, being inspired by, building off of, riffing off these concepts that have been developed over many, many years by the old governors, by the courts. What we try to do first in our paper is tell a story about the development of the jurisprudence in this area, mostly through Supreme Court cases, but sometimes not, all around these two ideas that came up in the initial Trump question: newsworthiness and public figures. When is it that special consideration should be given to speech, because it maybe concerns a public figure, or is made by a public figure, or the underlying content of the speech is newsworthy? And by drawing out some of the jurisprudence in this area in the courts, and then doing a really deep dive into what Facebook is doing in this area, thanks to some wonderful empirical research that Kate had done and interviews she had conducted with people inside the platforms, we were able to do something of a comparative analysis and say: OK, well, here are some of the ways in which this is really similar, here are some of the ways in which it's different, and here are some of the lessons that we can maybe learn from those similarities and differences. And as we'll talk about in a little bit, among the lessons we draw out of it, there is at least perhaps a partial response to some of these concerns in the form of a new oversight board that you may have heard about, this so-called Supreme Court of Facebook, or the Facebook Oversight Board, which should go into action sometime later this year and will serve as a review board for decisions about what content can and can't be on the platform. So in the future, when President Trump is on the campaign trail next year and makes similar statements, it won't just be up to Mark Zuckerberg or other high-level policymakers to make those decisions behind closed doors. We may actually have some sort of formal oversight process and a bit of transparency about that. So we go into a little bit of that in the paper.

So I'm going to kick off by talking a little bit about some of the First Amendment issues that are at play here, just to give a bit of a primer for those who aren't familiar with some of the cases. Then Kate's going to talk a little bit about what's going on at Facebook, we'll draw out some lessons from that, and then finish, if we have time, with a little bit of comment about this oversight board. Thanks.

So this is the ad that was at issue in the famous, pathbreaking First Amendment case, New York Times v. Sullivan. And this is really a starting point for us, not only for the pun in our title, but as a really inspirational case that is foundational in First Amendment law. We won't go into all of the details here. Many of you, I'm sure, will already be familiar with the case, and you can go read it.
But the main point to take away from it is that the Supreme Court, in trying to understand the limits of defamation, that is, when you can have liability for saying false things about a public figure, drew some constitutional lines. It said: because of the potential chilling effect on free speech if you allow broad liability for defamatory statements made about public figures, we're going to craft this constitutional rule. We're going to set this constitutional balance. And we're going to say that if you're going to win your defamation claim as a public figure, you need to show actual malice, which basically means that you knew that what you were saying was false, or you said it with a reckless disregard for whether it was false or not. And in order to allow space, breathing space, for the First Amendment, we're going to set special rules around public figures and public officials.

And then outside of the defamation context, so we'll go to the next slide, this is from the famous case Time v. Hill, which is not a defamation case, even though it sometimes sounds like one; it's actually a false light privacy case. And similarly in this realm, the Court said: well, as in New York Times v. Sullivan, these privacy torts have the potential to chill free speech quite a lot, and we need to craft certain rules to give speakers breathing room to make mistakes or to err, for the sake of the First Amendment. But here they struck a slightly different balance. They didn't say that it matters whether the plaintiff is a public figure or not. They toyed with that idea. Instead, they said: no, what matters is whether the underlying speech is on a matter of public concern, whether it is newsworthy. Quite what that means, we go into a little more in the paper. But this is where some of these concepts come from.

On to the next slide. Here is Terry Bollea, or Hulk Hogan. Some of you may be familiar with the rather famous case that he brought a few years ago against Gawker, which led to Gawker's demise. This was another privacy tort: public disclosure of private facts. And there, as in the false light case of Hill, the court said: well, yes, there are First Amendment concerns here, and we need to be able to protect speech that is on legitimate matters of public concern, or, again, that is newsworthy, in order to allow breathing space for the First Amendment.

And then we go to the next slide. The third and final tort that we look at, beyond defamation and the privacy torts, is intentional infliction of emotional distress. So this is the famous Campari parody ad about Jerry Falwell, about a time when he was in an outhouse with his mother. And he sued and said: this is so cruel, this is such a horrible ad for you to put up, and I have suffered severe emotional distress as a result. And the Court said: well, that may be true, but in order to allow the First Amendment the space that it needs, we're going to craft these rules again. And here the Court drew the line on the public figure, private figure distinction. It said that Falwell is a public figure and we need room to be able to talk about these people.

But then, if we go to the next slide: only, gosh, I can't remember how many years ago now, maybe eight years ago, I guess, Snyder v. Phelps. This was the case of the Westboro Baptist Church, who picketed outside a soldier's funeral with some absolutely abysmal and horrific signs.
And the father of the slain soldier sued and said: like Falwell, you have inflicted severe emotional distress on me, and I should be able to recover damages for that. And the Court said: we understand that this has caused you distress. Yet nonetheless, the values and the principles of free speech that are enshrined in the First Amendment require that we say no; this was speech on a matter of public concern, made in a public place, and therefore, unfortunately, you cannot recover damages, even though we accept that this was harmful speech, speech that really harmed you. So this is the backdrop of the jurisprudence that laid the foundation, we argue, for some of the decisions that Facebook made using these same terms: public figures and newsworthiness.

So I'm going to take you into why we've been talking about the Supreme Court and the First Amendment. This brings us to the second route that Thomas described earlier for taking down speech we don't like about ourselves: private systems, like Facebook's content moderation system. That system didn't develop in a vacuum. Instead, developing inside American tech companies in Silicon Valley, it developed a lot of the same types of concepts and exceptions that the Court created in setting First Amendment limits on tort liability in cases involving public figures and matters of public concern. But whereas the Court's public figure doctrine grew out of claims of defamation, Facebook's concept of public figures actually emerged from claims of bullying. In 2009, Facebook was facing heavy pressure from anti-cyberbullying advocacy groups to do more to prevent kids from being bullied online. The problem, however, was that the traditional academic definition of bullying seemed impossible to translate to online content moderation. "How do we write a rule about bullying?" recounts one of the original authors of these policies at Facebook. "What is bullying? What do you mean by that? It's not just things that are upsetting. It's defined as a pattern of abusive or harassing, unwanted behavior over time occurring between a higher power and a lower power. And that's not an answer to a problem that resides in the content. You can't determine a power differential from looking at the content. You often cannot even do it from looking at their profiles." So the impossibility of employing a traditional definition of bullying meant that Facebook had to make a choice. It could err on the side of keeping up potentially harmful and bullying content, or it could err on the side of removing all potentially bullying content, even if some of the removed content turned out to be completely benign. And so, faced with the intense pressure of these advocacy groups, the media coverage of cyberbullying, and the speed at which they were being asked to make decisions given the volume of content reported, Facebook opted for the latter approach, but with a caveat: the new presumption in favor of taking down speech reported as bullying would apply only to speech directed at private individuals. So this is a quote from one of the authors of the policies: "What we said was, look, if you can tell us this is about you and you don't like it, and you're a private individual, you're not a public figure, then we'll take it down. Because we can't know whether all the elements of bullying are met.
We just had to make a call to create a default rule for the removal of bullying." Although this author denies borrowing directly from First Amendment public figure doctrine, the justification for creating this exception actually tracks the rationale of Sullivan and subsequent cases quite closely, in treating certain targets of allegedly harmful speech differently on account of their public status and the public interest in their doings. Besides deciding how to draw this line quickly, though, I hope you picked up, as lawyers or lawyers-to-be, that there is an issue here, which is that they're still using this term, public and private figure, which is undefined. So they had to figure out how to define public and private figures quickly, again, to do this at scale. Does anyone have any guess as to what they did? No, not check mark. Close, very close. They decided to do it by running people's names through Google News, to determine who is a public figure in order to decide whether or not to keep up the speech reported as bullying (a rough sketch of this heuristic appears below). This is a whole other nest of things; after I write my next 15 papers, I will someday come back to this idea of the cross-use of platforms by these companies, the way they borrow each other's technology and services to provide their own products. But anyway.

For right now: developing separately from this idea of the public figure, which, again, applied only to bullying, there was a related development in the idea of newsworthiness. And newsworthiness is an exception to everything on Facebook. That means gore, that means nudity, that means hate speech. So that brings us to the story of the Boston Marathon bombing, and I've actually never given this talk in Boston before, so this is a little bit interesting to do. This is not the picture that I'm about to talk about, but at the time, in 2013, after the Boston Marathon bombing, a graphic picture was starting to circulate on Facebook. The image in question was of a man in a wheelchair being wheeled away with one leg ripped open below the knee to reveal a long, bloody bone. The picture had three versions. One was cropped so that the leg was not visible at all. The second was a wide-angle shot, so the leg was visible but less obvious. And the third and most controversial version clearly showed the man's insides on the outside, which is the content moderation term for how they define gore: if you can see insides on the outside, it gets taken down as gore. So despite the image being published in multiple media outlets, site policy dictated that any links to the third version of the picture, or the image itself, must come down. "Philosophically, if we were going to take the position that insides on the outside was our definition of gore, and we didn't allow gore, then just because it happened in Boston didn't change that," remembered one of the team members on call that day. Policy executives at Facebook, however, disagreed, and reinstated all such posts on the grounds of newsworthiness. So for some members of the policy team at Facebook, who had spent years trying to create administrable rules, the imposition of such an exception seemed a radical departure from the company's commitment, at least in the realm of content moderation at the time, to procedural consistency.
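To make the Google News heuristic just described concrete, here is a minimal sketch. It is illustrative only: `news_hits` is a hypothetical stand-in for whatever news-search lookup Facebook actually ran, and every name and threshold below is invented for illustration; the company's real pipeline is not public.

```python
# Illustration only: a toy version of the "run the name through Google
# News" heuristic described above, plus the default bullying-removal rule.

def news_hits(name: str) -> int:
    """Hypothetical stand-in: number of news results for a name.
    A real system would query a news-search service."""
    fake_news_index = {"Donald Trump": 100_000, "Leslie Jones": 25_000}
    return fake_news_index.get(name, 0)

def is_public_figure(name: str, min_hits: int = 1) -> bool:
    # The heuristic as described: any news coverage at all makes you
    # a "public figure" for moderation purposes.
    return news_hits(name) >= min_hits

def handle_bullying_report(subject_name: str, reported_by_subject: bool) -> str:
    """The default rule described earlier: if the subject of the content
    reports it and is a private individual, take it down; public figures
    don't get the presumption."""
    if reported_by_subject and not is_public_figure(subject_name):
        return "remove"
    return "keep"  # in practice, presumably escalated for further review

print(handle_bullying_report("A private classmate", True))  # remove
print(handle_bullying_report("Donald Trump", True))         # keep
```

The point of the sketch is how little nuance the test carries: nothing in it speaks to voluntariness or access to counter-speech, which is exactly the critique Thomas raises shortly.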
That worry echoes one from the courts. In his opinion for the Court in Gertz, which Thomas didn't talk about but which is one of the intervening cases in public figure doctrine, Justice Lewis Powell worried openly about allowing judges to decide on an ad hoc basis which publications address issues of general or public interest and which do not. And similarly at Facebook, many worried that newsworthiness as a standard was extremely problematic. The question is really one of newsworthy to whom, and the answer to that is based on ideas of culture and popularity. The result, some feared, would be a mercurial exception that would privilege American users' views of newsworthiness, to the potential detriment of Facebook's users in other countries.

Great, so drawing those two parts together, what Kate and I then do in the paper is look for some guiding lessons that both platforms and courts might be able to take out of this kind of comparative analysis. Looking carefully at the cases, we found that there were basically three different justifications or rationales behind the public figure doctrine that the courts developed, and then we tried to see how those cashed out in the platform context. So why might you have special rules for speech about public figures? Well, one rationale that the court frequently talks about is that public figures have greater access to channels of counter-speech, right? If somebody says something about you, especially if it's false or defamatory, and you're a public figure, you might think that you have greater access to the means to rebut the lie. The harm against you maybe isn't quite so grave, because you are able to access the mass media and say: that's false, here's why it's false. My reputation won't be as harmed as a private figure's might be by the same sort of speech. So there's this access to channels of counter-speech. The second idea is one of desert, of being deserving of slightly harsher First Amendment rules. This is the idea based on the concept of voluntariness: you have voluntarily put yourself in the public eye, and as a result, you need to take some of the lumps that come with that fame. The court has talked about this as the normative consideration underlying the public figure doctrine. And as we'll maybe talk about, especially in the question and answer, there's some reason to question both of those justifications. Is it the case that public figures now have vastly superior access to rebuttal? And is it the case that they're really more or less deserving of harsher speech rules just by virtue of their fame? And then the third reason why you might have different rules for speech about public figures goes to this idea of newsworthiness. Just by the fact that they are public figures, just by the fact that they have this really important role in our society, the stuff that they do is more likely to concern the public; it's more likely to be of interest to the public. So this idea of newsworthiness and matters of public concern can take on a few different valences, and it's really important to tease those out. At its simplest level, you can think of it as a normative concept or as a descriptive concept. If you're thinking about it as a normative concept, it's really the idea that it's a legitimate matter of public concern, something that the public should be concerned about.
And this allows certain considerations to creep in that say: well, the public may be really interested in this fact, but it's being prurient in its interest, or it's invading somebody's privacy by being interested in this fact. It's going too far. It shouldn't be interested in this fact, even if it is. That's the normative conception of a matter of public concern. But you might think that actually the correct standard should be a descriptive one: if the public is interested in it, then as a matter of freedom of speech, that is sufficient to say that it is a matter of public interest and deserves heightened protection. Who are the courts, for example, to say what we as a public should or shouldn't be interested in? So with those justifications in the background, some of the issues raised by the way Facebook is implementing and setting its rules become, I think, a little clearer. One example is the Google News issue that Kate just talked about, right? That is a purely descriptive concept, and kind of a haphazard one as well. If typing somebody's name into Google News makes them a public figure when it pops up, there's nothing there that says anything about desert, about whether you voluntarily put yourself in the public eye. There's nothing there to say that you have more or less access to counter-speech. There's really no nuance about it at all. And perhaps more problematic in the age of social media, you might show up in that Google News search because you went viral on social media. And I think that leads us to what Kate's going to talk about next.

Yeah, so this gets into why we mentioned Gertz before, which is one of the cases that built on New York Times v. Sullivan. Sullivan was really about public officials, people running for government or already in government positions. Gertz then opened this up into the idea that there are different types of public figures. There's the general purpose public figure, someone who is a public official, or has been in the limelight, or has celebrity. There are limited purpose public figures, people who thrust themselves into the spotlight on a particular issue. And then in a footnote in Gertz, the Court mentioned that there is also, perhaps, the involuntary public figure, but said: we're not going to define that or worry about it exactly right now. And the Court has continued not to, for many, many years. What we argue is that the internet has brought us, for the first time, the true involuntary public figure. Previously, this was such a rare phenomenon in the physical world that the courts have found it in only a few instances, usually involving someone who was the victim of a crime or the perpetrator of a crime in some way. And that brings me to the story that I think is probably the best example I've ever found of a lack of voluntariness, which is Alex from Target. I do not know if any of you remember this. This was back in 2014. On November 2nd, 2014, an anonymous Twitter user tweeted a picture of a Target employee wearing a name tag reading Alex and bagging items behind the cashier. In the next 24 hours, the tweet gained over 1,000 retweets and 2,000 favorites. Over the next day, the hashtag #AlexFromTarget was mentioned more than one million times on Twitter, while the keyword "Alex from Target" was searched over 200,000 times on Google.
Shortly thereafter, Twitter users started an effort to de-anonymize the Alex in the photo. They were successful, and it resulted in the publication of his Twitter handle, at which point he amassed a quarter million followers. So jealous. Two days later, he appeared on the television talk show Ellen. That made me jealous. Death threats, denigrating posts, and fabricated stories about Alex from Target followed shortly thereafter, of course, because it's the internet. So it's hard to argue that Alex from Target, a global celebrity at this point with hundreds of thousands of social media followers, is merely a private individual. Similarly, it's hard to argue that Alex from Target is a voluntary public figure who thrust himself into the vortex of a public issue by bagging groceries at his part-time job after school. Moreover, Alex from Target does not fall into either of the categories of involuntary public figures that have been established in the case law thus far, thin though it is: people who have been victims of crimes or accused of committing them. And so we argue that the internet has eroded some of the traditional reasons for specially protecting speech concerning public figures, reasons based on the assumptions that people become public figures by choice and that, as public figures, they have greater access to channels of rebuttal. These assumptions are becoming increasingly outdated in the digital age, given the dynamics of online virality and notoriety, and given the ubiquity of channels for engaging in counter-speech. And so, we argue, voluntariness kind of has to go, and in its place comes an idea that has a more normative application, which is admittedly messy: the idea of the sympathetic public figure. These are people we can't quite think of as private figures, given the celebrity they have, but for whom, because of the nature of the speech directed at or about them, we want to make a different set of rules.

And are we, do you wanna? Yeah, I'll talk briefly. Well, actually, so, can you do the first two? Yeah, so for the next examples, we'll just go through these really quickly. We have four examples of a range of what might qualify as a sympathetic public figure; we'll just put these on the table. So this is Justine Sacco, who put up this tweet, and it went viral as well. She put it up right before she got on a flight, and by the time she had landed in South Africa, there was a whole kerfuffle about it. She was followed around on her vacation while she was in Africa, lost her job, and was widely shamed as a result. So she is perhaps an example of somebody who is maybe a voluntary public figure; she did something voluntarily to put herself in the public limelight. But I don't think we can say that she reasonably assumed her tweet was going to gain this stardom. So anyway, put a pin in that one. This is the example of Adalia Rose. The child has a very rare condition that causes her to look like she's aging very prematurely. Her mother put up a Facebook page about her as a way of raising awareness about Adalia's condition, and so that people could keep up to date on her progress. Horrifically, she then became the subject of all sorts of horrible conspiracy theories and hateful posts on social media. Again, can we really think of Adalia Rose as some sort of voluntary public figure, even though her guardian thrust her into the public eye in that way? Put a pin in that one as well.
The Covington Catholic boys are another example. They were voluntarily in a public place protesting, engaging in core First Amendment activity, but then things went viral and they were taken out of the private figure realm and thrust into the public figure realm. Do we think that's fair? And then finally, gosh, I'm totally blanking on her name. Leslie Jones, sorry, yeah. Leslie Jones, who, after the Ghostbusters remake came out, was targeted with horrible racist and sexist slurs on Twitter, causing her to leave the platform. And she is a core public figure, right? Under any definition in First Amendment law, she's obviously a public figure. But does that mean she should get different rules on Facebook? Recall that if she is a public figure under Facebook's rules, she can't make use of the cyberbullying protections, at least as they were historically defined. So I think that's basically the core of our paper. The last part of our paper I hope we can talk about with the panel. It's basically the idea that the answer to a lot of these problems is hopefully found in Facebook's new oversight board, or Facebook's Supreme Court, as it's sometimes called: the idea of establishing rules and regulations and a set of norms and values that these content policies can be tied to, borrowing perhaps from what we know about institution-building from the American courts and American history, and creating something that is much more responsive to the freedom of expression that we want to have online. Can we put Zuck and his eagle up, please? Yeah, there we go, good. I just love that image. Okay, thank you. Yeah.

Let's move on and have the other three panelists come up. I'll introduce our three amazing other panelists, and then I'm hoping that perhaps Kate and Thomas can say a few words on the oversight board and what they were planning to say about it in their presentation, and then we'll have reactions and comments from the other three. So for the three panelists: we have Kendra Albert, who is a clinical instructional fellow at the Cyberlaw Clinic at Harvard Law School. They have a JD from HLS. They are interested in a really wide variety of issues across online speech, harassment, critical race and feminist theories, gaming, and the list goes on. They are currently a lecturer at HLS, where they teach an advanced constitutional law class with Professor Martha Minow. So yeah, Kendra Albert. Chinmayi Arun is a fellow at the Berkman Klein Center, where she works specifically on the Berklett Cybersecurity Initiative. Until recently she was an assistant professor at National Law University Delhi, where she also founded and directed the Centre for Communication Governance. Among her numerous interests are the regulation of speech online, platform governance, and questions of corporate social responsibility and digital rights in the global South. So I have a thought that I want to offer and put on the table for the panelists as they offer comments and reactions. The title of this talk, and perhaps one of the reasons why many of you are here, is this notion of constitutionalizing speech platforms, and one of the questions I want to ask, or at least put in the back of our minds, is: why are we using this language of constitutionalism? Does it matter, does it make a difference, or should we just do away with it? Yeah. There we go.
I think I'm up first of the panelists, but maybe I should take a moment to introduce the last panelist before I get started. He needs no introduction, yeah. Oh, JZ never needs an introduction, in any context, whichever JZ you're talking about. So I believe you are the George Bemis Professor of International Law. I love how we're crowdsourcing things. Actually, I'm the word salad professor. So basically, Jonathan Zittrain needs no, absolutely no introduction, but I definitely had an introduction for him, and the introduction is that Jonathan Zittrain is not only the George Bemis Professor of International Law at Harvard Law School; he is also vice dean for library and information resources at HLS, a faculty director at the Berkman Klein Center, and a professor at the Harvard School of Engineering and Applied Sciences and at the Kennedy School. Waiting on the dental school; have not heard back yet. But if Kendra wants to add something more, please go ahead. No, I think that's probably enough.

Well, thank you, Elettra, for the kind introduction, and thank you, Thomas and Kate, for the really fascinating paper. I feel like Elettra was kind to me by saying how much ground my research covers, but I think y'all's paper covers almost as much ground. So it was a challenge to figure out what specifically to think about or respond to. But taking up a little bit of Elettra's question: I think y'all have picked a really difficult topic with the public figure doctrine. As a Georgia district court judge said in a published opinion, defining public figures is much like trying to nail a jellyfish to the wall. Which is at least a fascinating mental image. And I think one of the challenges you encounter, and you talk about at length, is how little SCOTUS, the Supreme Court of the US, has actually said on this subject that's particularly useful to Facebook in making its decisions, to the extent that maybe they were based on a First Amendment framework, but also to other courts, lower courts, policymakers, thinkers. And that sort of problem prompts the same question I would have about the role of the Facebook Oversight Board, or the Facebook Supreme Court, which is to say: I wonder about the framing of thinking that Supreme Courts are the best protectors of these kinds of freedom of expression interests, when so many of these decisions get made much lower in the system, whether by content moderators on the Facebook end, or by judges, district court judges, state court judges, on the judicial one. And so one of the things I want to just throw out there, and I will honestly admit that I have not thought it through as thoroughly as I would like to before saying it on a public livestream, but we live in the future. You identify a number of problems you want to solve, right? Things like the lack of expertise and professional norms in making these decisions; that Google News thing is just such an incredible factoid and a testament to the power of your research.
The way powerful and connected people are more likely to get better treatment. And finally, and I think frankly most importantly, the failures of context: the way in which the American-centrism of the policy team, of the company, makes a huge difference in how these rules are administered, in everything from just not knowing who's famous enough in countries that aren't the US, to things like language, where there have been widespread problems with Facebook not enforcing the rules equitably when they just don't have anyone who speaks a particular language, Google Translate being the other tool that they use, produced by Google and not an adequate substitute. So I think that takes me to my provocative question, which is: why a Supreme Court or an oversight board, and why not a jury?

You go ahead. So I think that's a really great point, and I think it depends very much on what exactly you want this thing to be doing, right? Whether it's a jury or an oversight board. One of the things that I felt strongest about is that Facebook's actual incentives align really well with creating an oversight board to jettison the hard job of creating public policies on what speech stays up or what speech comes down. Having an oversight board basically allows them to separate the powers, to jettison this responsibility, and if people are upset about something being taken down or something staying up, Facebook can say: hey, listen, we didn't do this, we're just following the rules of this oversight board that has, ideally, all of this accountability and legitimacy and all of these other norms of institution-building built in. I think that falls apart as a concept, and unfortunately this is maybe where they're going, in reviewing content appeals as they're generated by every user in the entire world who doesn't like a content moderation decision made about them. I think that is not a good way to think of this oversight board, and things like ad hoc juries might be better means of addressing that problem. But that's a split that, for some reason, they're trying to figure out right now, and I'm really unclear why they want to do the latter at all under the umbrella of the oversight board.

Thank you. I usually begin most of my comments with: Kendra is always right. So I love the juries idea, and maybe what I'll do is open with the ARTICLE 19 proposal for social media councils, which was also a part of David Kaye's report, when he suggested that Facebook might like to come up with a system like this. I just want to put it out there as a potential option for getting more context. So usually when I'm on panels like this, I feel that I'm here to be the international voice, and I go: no, no, I can't speak for the whole world. But in this case, since Facebook has more users in India than anywhere else, I feel very comfortable doing this, just saying. I guess if you start thinking about this outside a US perspective, and I'm sorry, I know this is a little unfair because you've been comparing US law mainly to the Facebook Supreme Court, but the thing is that whatever they come up with applies across the world, and so people like me are worried about it. Facebook has already stated that nothing the Supreme Court does is going to go in opposition to government orders and court orders that emerge from nations.
I noticed that alongside this, they've also said that the oversight board will be responsible to the users of the platform. And I just want to flag, just to put it out there, that sometimes the interests of the users of a platform can be in tension with what their governments want. It's interesting also that they've set up this board in part to respond to criticisms about what they did in Myanmar, and I wonder whether a policy like this is likely to address situations like Myanmar: a small country, not a whole lot of users. And then it takes us back to the question of who offers context, because the government is putting one narrative out there, and you would have to talk to juries or to social media councils to access a different narrative. That, of course, raises the question of where you find people who are independent and fearless enough to tell you what you need to know. The third thing is unrelated, I guess, to my first two points, unless you look at it again as a context question. One of the things that I loved about your paper is that you opened with cyberbullying, which you also introduced here. So it's interesting: when India had its wave of Me Too, I don't want to say allegations, because it's a judgmental word, but Me Too narratives, there was a list, it was called the list of sexual harassers in academia, that was put up on Facebook by a young woman called Raya Sarkar. And that list was removed for cyberbullying. The shitty men in media list? No, no, shitty men in media is different. This is shitty men in academia. Oh, oh, they're everywhere. Yeah, they are. So the list goes down for cyberbullying, right? And it's not clear to me what standard they used, whether it was newsworthiness or whether they just sort of felt instinctively that this was cyberbullying. I'm again putting it out there to suggest that these issues are complicated. To Facebook's credit, the list went back online in a couple of days, but that was because the people advocating for the list were able to access Facebook. I have one more comment to offer on your paper that has nothing to do with context. Maybe it's a question, not a comment, which is: when Facebook deals with newsworthiness, in trying to identify who is a public figure and who's not, is it just sort of, let us trawl the news and see how many hits we get? Or is it an intelligent classification: are people showing up on government websites, are there some sources that Facebook flags as more significant in identifying a public figure? Because if it were to do the latter, I think the involuntary public figure problem might diminish a little.

Yeah, I can speak a little bit to that. The truth is that we don't really know the inside workings of some of those newsworthiness determinations, and that is in part why Kate and I are bullish, in part, on the idea of creating this sort of oversight board that will bring about both more transparency and also, hopefully, some attempt at continuity, right? Some idea of stare decisis, almost, borrowing again from American law, though that's not an American-only concept. Right now, we don't know what's going on behind these closed doors, and the decisions are often made by high-level policymakers. If, as part of this oversight board, there will be reasoned decisions that are published, we think that's a step in the right direction. Will it solve all of the problems that you raised? No, absolutely not, but I think it is a step in the right direction.
Your point about the different laws is obviously hugely concerning to us as well. I am becoming increasingly pessimistic on this point, that we can have a global internet, particularly global speech platforms that have these global policies. Facebook is so dead set on sticking to this line: we are a global community, we are a global community. But I think at some point we really need to ask the hard question of whether that's possible or desirable in practice, even if in theory it might be. There are plenty of circumstances in which the internet is not the borderless world that some of the founders thought it was going to be, and I am no longer convinced that that is a bad thing. There may be times when we do need region-specific policies, things that are in place either to protect users more or to respect certain local laws or customs, and newsworthiness is one of those really delicate areas. Now, will Facebook ever create that kind of system, where it says: all right, we'll apply European privacy and speech law in Europe and American law on the American internet? I don't see them doing that, at least not unless they're dragged into it kicking and screaming. But I do think there are times when we may want to push for that kind of nuance depending on localization. And the last thing I'll say, which goes back a little to Kendra's question on the juries point, since you brought it up as well: we were at a conference a couple of days ago at Yale where there was somebody from Harvard presenting their jury paper. Jenny Fan. Jenny Fan, this idea of juries for social media. And I asked her this question, and I feel like you could easily have asked it of us as well about our paper, or I made this comment, I should say: when it comes to matters of content moderation, I think we do ourselves a disservice, and we don't get as much analytical purchase, if we ask at too high a level of generality whether X will solve content moderation or whether Y is the response to this issue in content moderation. Because so much is going to depend on the type of content moderation that we're talking about. So, will algorithms fix content moderation? I don't even know how to begin answering that question. But will hashing databases be a really good way to make sure that copyright-violating material goes down? Maybe; well then, let's talk about that. Probably not, but let's at least talk about it at that level of nuance. I'm biting my tongue. Yeah, no, no, sorry, it was a rhetorical question, but that's not meant to suggest that I agree with it. But then, will AI be used to make newsworthiness determinations? No, it absolutely can't be. That requires some sort of nuanced human review; at least if we're going to have a sensible concept of newsworthiness, we're probably not going to be able to do that through AI. So just as you asked whether juries are the right response or whether courts are the right response, I think sometimes court-like institutions might be the right response and sometimes juries might be, and we may start getting into a world in content moderation, similar to the legal system, where we have questions of fact determined by juries and questions of law determined by judges. Again, I'm not saying that I'm advocating for that, but I think that might be the way we're heading.
Well, I think it's entirely fitting that this session is taking place under this meme-tastic fresco of Mark looking right out at us, Sam the Eagle kind of distracted, looking at something else: a conflation of new and old, which is of course what your paper is about. And I think that's the question for us: how much is really new here? You're making the case that a lot of it is new and different. And then as you ask the question of, well, how do we process that, it's really hard not to process it without reference to touch points where we can say: well, we at least all used to agree on this; is there some way to import whatever this is to the present circumstance? And there's great value in that, without wanting to fall victim to status-quo-ism, where you just say: well, this is the way it was, so this is the way it has to be. I find myself both hearing from a lot of people, and inhabiting myself, a contradiction in the scandal of the week with Facebook. And I don't know what this week's is; it's only Monday, but we'll find out shortly. It's Tuesday. Oh, well then, do we know what the scandal is? No, that's why I thought it was Monday. And in that, in myself and in others, on the one hand there's the: darn it, why don't these companies take responsibility for what's happening on their platforms? They're just turning a blind eye to things and still cashing in on the value of the platforms. They need to own what's happening. Which sounds kind of right. And on the other hand, it's also: SMH, look what Facebook just did. Here they are throwing around their weight unaccountably. And then Facebook's like, let's have an external advisory board that we're bound by, and they're like, look at the way they're externalizing their responsibility, why can't they just own it? And all of those seem right, and they don't seem commensurable. And I think it's part of the challenge for us to work that stuff through. Just a couple of ideas on that front. The first is to get straight, at each moment, which framework we're bringing to bear on harmful, undesirable speech, which is what we've been focusing on here. Is it, one, a public health framework? This stuff is, quote unquote, viral. It can infect you. If you are exposed to it, you could become radicalized. Once you're radicalized, exposing you to non-radical speech does not tend to cure you, so it's a one-way function. If you have that mindset, that's going to cascade into a bunch of priorities and recommendations and decisions about things. So it's worth owning that mindset if that's where you're at. Another mindset is the rights mindset. It says there are certain utterances that, by the fact of their utterance, deprive people of dignity and therefore should not be uttered. And even if they don't hear it, maybe, they are being deprived of dignity. That's different, I think, from the public health mindset. And of course the rights mindset also includes the right to speak, and figuring out, reading Article 19, which one is supposed to win can be its own trick. And a third is what you'd call a legitimacy framework, which is to say: I don't know what the substantive rule is, but so long as it was decided in a way that I respect, I can abide the decision without having to substitute my judgment for it. Which is where we start saying: well, let's have a jury deal with this; let's constitutionalize it and point to this body of rules that's evolved in an American framework that, at least within America, should be the way to do it.
And after all, the justices decided it, and they're nonpartisan. So those are three frameworks that might help us figure out our contradictions and see what strands we're drawing from what. Now, just maybe three or four issues, each a sentence, that I find myself working through, and that to me re-emphasize how new and different this area potentially is. One: it's easy and tempting to think of this, as in the constitutional law cases, as what gets banned and what gets allowed; that's what they're saying the basis of the Facebook Supreme Court will be. But there's also: what will the thumb on the scale be about? Mark has embraced this idea. He points to this really cool graph showing that as speech on Facebook gets nearer the line of something they would ban, but still short of it, people get more enthusiastic about seeing it and sharing it. So it's like: yay, wow, oh boy. And then as soon as it crosses the line, we delete it and it drops to zero in exposure. That seemed weird to him, and his thought was: maybe it should be a graceful decline, where as you get closer to the line, fewer and fewer people can see it, even though it's not banned (a toy sketch of that idea follows below). This is the shadow banning that people get nervous about. And of course there may be some speech whose value lies precisely in being close to the line; just ask Lenny Bruce, who crossed it many times. And yet that is a way of saying there's a whole bunch of decisions around what to promote and what to push down. Is there a baseline level of virality? That starts to get into the mechanics of whatever framework there is, and my reaction tends to be that there shouldn't just be one framework: how do we blow that up so that no one entity has responsibility on that curational point? A second thing is whether or not these decisions should apply to private messaging. It's one thing to say: it's a public post, all right, people don't like it, let's take it down or let's put a thumb on the scale. But how about just a communication, as we're planning the panel? If, as we're planning the panel, we should violate the Facebook terms of service in our private messaging, should Facebook be able to notice that and take action upon it? Facebook's answer, I think through their content lead, is yes, our terms of service apply everywhere. That is unusual to us; I think that triggers our status quo warnings. A third thing would be ways in which the introduction of chance and fortuity might be helpful. When we have close calls on an issue between people who can't agree, sometimes the way to agree is to flip a coin, and then the outcome, so long as it's an honest coin, is accepted by both parties. And sometimes litigation is seen that way as well. But yeah, or juries. And that's strange. I think we live in an era where we crave more certainty, more information, and we are offered, by the tools, and by "we" I think I mean Facebook, opportunities for intervention and control that are nearly unprecedented. The constitutional law cases can't really reflect that, because most unauthorized gatherings and demonstrations on the Cambridge Common do not have immediately apparating police officers that disperse the crowd. It has to hit a certain threshold before it even draws public attention, which is part of the framework by necessity. The algorithmicization of things effectively makes it so that every possible interaction can be subject to the rule.
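A toy numerical sketch of that graceful-decline idea, for the curious. This is emphatically not Facebook's ranking system: the threshold and curve shape are invented parameters. It only shows distribution shrinking smoothly as a predicted violation score approaches the policy line, rather than holding flat and then cliff-dropping to zero.

```python
def distribution_multiplier(violation_score: float,
                            line: float = 0.9,
                            softness: float = 3.0) -> float:
    """Map a hypothetical policy-violation score in [0, 1] to a multiplier
    on how widely a post gets distributed. At or past `line`, the post is
    removed (multiplier 0); below it, reach decays smoothly as the score
    nears the line instead of holding at 1.0 until a hard cutoff."""
    if violation_score >= line:
        return 0.0
    return (1.0 - violation_score / line) ** (1.0 / softness)

for score in (0.0, 0.5, 0.8, 0.89):
    print(f"score={score:.2f} -> reach x{distribution_multiplier(score):.2f}")
# Prints roughly: x1.00, x0.76, x0.48, x0.22; at 0.90 the post is removed.
```

The design choice being illustrated is exactly what gets called shadow banning: the speech is still up, but the curve quietly decides who sees it.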
I guess there was a neat conference just last weekend at Yale on perfect enforcement, which I'm sorry to have missed, but I hope this was discussed fulsomely there. And it gets into the human-versus-algorithm debates that we're having. I might be more willing to let the algorithm decide if it is simple, even though simplicity makes it wrong in corner cases, and if it has elements of randomness at times, which kind of means you're not always going to win, but there's not going to be a consistent bias either in the most decisive ways. That kind of embrace of chance might be a way of trying to disperse power in the ultimate way, given that we have a surfeit of it and we don't trust anybody to exercise it well. And I recognize what a contradiction that even is, but I think it's one we should inhabit. And the very last point: I wonder how much these discussions would be made simpler if the act of physically threatening someone, not imminently, but, it's like, I don't know how much better I should feel if they're like, I'm coming to kill you and I won't tell you when. Is that imminent? Please give me more detail so I know if you're wronging me. That seems wrong. But if in fact there were real-world consequences for the act of credibly, physically threatening somebody, maybe that would alter the stakes and complications of this conversation. Because it does seem to me that if you do that, and the opportunities to do that are so much more legion than they were before this world came about, it might mean there was an externalization of pressure on that front that traditional law enforcement would know how to handle, if it had the resources to do it in the right jurisdictions and environments.

Do you wanna respond? Oh, and the question is: what do you think of that? Yeah, I do, but do you wanna go ahead? I'll just say one brief thing in response to one of your incredibly rich comments; I really, really appreciated all of those. Just on the issue of down-ranking and issues like that, going back to Elettra's initial question about whether it's helpful to think about this as constitutionalizing, and whether it's helpful to think about this as courts: this is an area where I agree with you it could be problematic, particularly if we have an Anglo-American conception of courts, and a particularly American one of dealing with cases and controversies. And Evelyn Douek, who, I don't know if she's here, there she is, who I haven't met yet but I'm excited to meet, just sent me her amazing paper on the Facebook Oversight Board, and she makes this exact wonderful point about how, if we look at it just in terms of case resolution, or a dispute about harmful speech, then we might miss some of these more systemic issues that don't lend themselves to that kind of framework. And that's where I would agree that maybe constitutionalizing, or courts-based thinking, is not the right way to think about it. So it's a really great point, and her paper is amazing.
Yeah, I guess, I mean, everything that you said, but especially at the beginning, when you were talking about what it is that we demand of Facebook and how we flip back and forth on this. I'm actually writing a piece right now for a magazine; I was given access to the team that was in charge of taking down the Christchurch shooting video in the 48 hours after the attack, this global, follow-the-sun escalations team, and how they did it. And what I think is super interesting about all of this, and about researching it: a few weeks ago, Casey Newton at The Verge published something about the disgustingness of content moderation work, about people having to look through these terrible pictures and this terrible content. And what I think is fascinating about all of these ideas is that Facebook always gets such vitriol for not taking this down, for not taking responsibility for their platform, and I agree with most of that; there have to be these content moderation policies. But what I actually think people are very upset about is that this type of content exists at all, and the frictionlessness of the internet and platforms shows it to us. People have always sucked this much, and we just didn't know about it. A heartening thought. Yeah, I know. Is it? I know, but I really do think there is a level at which, and I don't think it's being talked about nearly enough, because it's very easy to blame a big corporation, there are plenty of people working at Facebook who are trying so hard to take down harmful content, doing this every single day, and there are terrible people putting this stuff back up and thwarting the system at every single moment. And figuring out the right place to put the blame, the right button to push at the right point in the system, is, I think, half the battle here.

OK, so I would have thousands of questions for the panelists, but I want to give the audience a chance to ask questions. We have 40 seconds. No, we actually have another 15 minutes, for those who can stay with us. So, yep, are there mics?
Thank you, this was a great discussion. The name of this event was Constitutionalizing Speech Platforms, plural, and I think it's telling that we've mostly been talking about one platform, Facebook. Facebook is big and powerful and rich; they can afford to have this new Supreme Court, an army of lawyers, and an army of poorly paid people to look at traumatizing videos all day long and take them down. But how does what we've been talking about so far translate to, for example, my fountain pen Slack group, which has several thousand users but is ultimately run by one guy in his spare time on no money? Or, alternatively, to the next CS undergrad who's going to start the next Facebook killer with a team of five to ten people but maybe millions or billions of users?

For my part, I think it may be beneficial to think of a system of mainstream and marginal, knowing that the marginal today can become tomorrow's mainstream, and to expect or hope for, however it might come about, a very different set of practices, and overhead to implement them, depending on where a platform sits. I remember Goatse, and I was not a frequent visitor to that site; if you haven't heard of it, I don't recommend that you type those letters into your browser. But the idea that you would expect every site to have the kind of effort we're describing for a Facebook, and there are not a lot of them, maybe it's Twitter too, it used to be "the Facebook," that unevenness maybe is desirable. It calls to mind the idea that if you want to go off-roading, you should be able to. And talking about the way Kate was just describing how these videos become so ubiquitous: if one gets offered to your two-year-old toddler after three Thomas the Tank Engine videos, and suddenly it's, here's a murder, that's a problem for a mainstream platform. If you have to go in search of it and it's on Pastebin or something, maybe that's okay. I don't know if it's okay from the public health model, but it's okay maybe more from the rights model or from the legitimacy model. So that may help: unevenness as a feature rather than a bug.

I want to just offer a brief answer, which is to say, I don't think you've heard any of us say that we think any of the stuff we're talking about should be legally required, so that's maybe a useful baseline point. But one thing that happens when we talk about small communities, or not-Facebook, not-Twitter, is that folks often underestimate the degree to which these problems are common to them too. Certainly you don't have the same kind of "we need our follow-the-sun 48-hour team to get the Christchurch video taken down." But I will say that if you have a couple-thousand-person Slack and you don't have any moderators, you actually do have a content moderation problem; you may just not know about it. One of the problems, perpetually, is that these conversations take place in terms of the large platforms, thanks to fantastic work by people like Kate and Thomas, but it's also partially because we devalue this work as it applies to smaller platforms that we don't think the smaller platforms need the same thing.

Quote: "Real-world juries are made of involuntary draftees, and we specifically don't want juries made up of people who volunteer to be on juries." How does that translate to the online world, if you propose to have juries?
So this one's my fault, but I want to, actually, you had a response to the last one, do you want to? You sure? Okay. So, spinning out the hypothetical a little bit, I think what might be useful to think about is, well, thanks for coming, guys! Thank you, everyone. Yeah, no, Mark is just controlling the lights. That's just a thumb on the scale: you can still hear the speech, you just don't see it. I don't see any reason why you couldn't do involuntary juries on Facebook the same way you do in real life. The reason that's appealing to me, and I'll stop there because this is not about my jury theory, is that it helps with some of the problems of context that I think are a major problem with content moderation, because you could actually, theoretically at least, pick Facebook users who are from a particular country, or from a particular place, or who maybe have even had particular experiences. The unfortunate fact is that this is facilitated by the amount of data Facebook gathers about us, but we'll set that aside. So that is what could be appealing about it. Imagine voir dire for a Facebook jury: "I object to this dog being on the jury."

Hi. When we talk about using algorithms or AI versus humans to do content moderation, has there been any data about how successful the algorithms or AI are versus human judgment? I know we hear anecdotes about when the algorithm fails, but even if the AI or algorithm does have these failures, what is its success rate versus human content moderation?

Yeah, so I'll just give some basic background on AI and algorithms in the moderation process: there pretty much is none. Something like 95 percent of it is done by humans making decisions after content is flagged by other humans as problematic. There is an automatic process that takes place at the upload of any video or photographic content, in which that content is checked against a hash database. A hash is like a digital fingerprint. In the child pornography realm, for instance, there's a database maintained by NCMEC that keeps the hashes for all of the known child pornography in the universe. And what I want to say about that is that it's not really an algorithm; it is just a one-to-one matching system. It's not doing anything super intelligent, it's just one-to-one matching; it's not machine learning, it's not anything else. The same approach is now used for terrorist content, and it's also used for copyrighted content; Content ID is YouTube's proprietary system for that. But I think your question is basically about how we account for AI not fitting in the system. One of the things about the Christchurch shooting is that humans figured out a way around the hash system. Facebook was putting new hashes on these videos as they came up, and people on 8chan were taking the video, flipping it around, fuzzing it up, finding ways to trick the system out of identifying it. So it was almost as if the human in the loop was specifically trying to screw all the other humans in the loop. It was just a mess. Does that answer your question?
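Since the question was about mechanics, here is a minimal sketch of that one-to-one matching, assuming a plain SHA-256 digest rather than the perceptual hashes (for example, PhotoDNA) platforms actually use for imagery; the database contents and function names are hypothetical. It also shows why the 8chan tactic worked: change a single byte and an exact-match lookup misses the copy.

```python
import hashlib

# Hypothetical known-bad hash database (in reality maintained by bodies
# like NCMEC and queried with perceptual hashes, not plain SHA-256).
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Compute a digital fingerprint of uploaded content."""
    return hashlib.sha256(data).hexdigest()

def blocked_at_upload(data: bytes) -> bool:
    """One-to-one matching: block only on an exact hash hit."""
    return fingerprint(data) in KNOWN_BAD_HASHES

original = b"test"                  # hashes to the known-bad entry above
altered  = b"test "                 # "fuzzed" by a single extra byte

print(blocked_at_upload(original))  # True: exact match, upload blocked
print(blocked_at_upload(altered))   # False: a tiny edit evades exact matching
```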
If I could just add something briefly onto that: at least if Facebook's own released figures are to be believed, it's really interesting to look at the differences between the types of content and how they're catching things. It will probably not surprise you to learn that in the copyright context and the child pornography context, some of these areas Kate was just talking about where there's hashing, Facebook claims upwards of a 95 to 98 percent success rate.

They don't actually release how they're doing that, or how they know it's 98 or 95 percent, which is a huge problem.

Yeah, no, totally. But again, assuming we believe them, or that we think there's something there, at the very least they're admitting that, compared to other types of content, this is easier for them to grab. Then you look at things like hate speech and cyberbullying, and that's in the 40 to 55 percent range. Why is that? Because context is so often much more important in issues related to hate speech and bullying. There's the power-differential point that we talked about earlier in the paper for cyberbullying, and with hate speech, one of the main problems that comes up is coded language. There was a really fascinating study by a couple of Brazilian scholars, I think about a year ago, which found that words we would traditionally associate with things like love and care were more common among white supremacist users than non-white supremacist users. So trying to come up with a detection mechanism that flags bad words and gives advance notice of that type of speech is going to be really difficult. You would need to constantly update it, and there would be so many false positives, if it were done without human review, that it's really difficult to imagine a content moderation system that could sensibly rely on it.

This drags us away a little from your question, but I just wanted to add a postscript to what you're saying: sometimes context can work in a different way. Facebook never used to code caste as race, and so to capture caste, all it took was diversifying their content team; they finally started treating it as racially charged speech.

I have the mic over here; does that mean I get to...
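To make the coded-language problem concrete, here is a toy sketch of naive word-list flagging; the blocklist and example posts are invented, and no real classifier works this simply. The benign post gets flagged while the coded one sails through, which is the false-positive and false-negative squeeze described above.

```python
# Toy sketch of naive keyword flagging. The word list and posts are
# illustrative assumptions only; they reflect no platform's real system.
BLOCKLIST = {"attack", "destroy"}

posts = [
    "We will attack this problem together at the hackathon",  # benign, yet flagged
    "Spread love and care to everyone, brothers",             # coded usage, missed
]

for post in posts:
    tokens = post.lower().split()
    flagged = any(word in tokens for word in BLOCKLIST)
    print(flagged, "|", post)
```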
So thank you all for the wonderful, wonderful panel; you're like a supergroup up there. My question is whether we're looking for procedural solutions to what are substantive and logistical issues, as if looking for the keys under the street lamp. The logistical issue, which you're all aware of, is that to effectively mind the store would mean dramatically increasing the number of moderators, and the expertise they'd need, to who knows what levels. And the substantive issue is that to define speech standards across billions of users, across different cultures, there's not a right answer. We can lift up the hood and look, but there's not a right answer.

Yeah, I think that's absolutely right. Something Kate and I are quite firm on in the paper is calling out Zuckerberg for what I think was probably just a slip. When he speaks extemporaneously, I love it, because it gives us lots of things to talk about and criticize him for, but I don't think he really meant this: initially, when he talked about the oversight board, he said this is a board that's going to be able to define global norms, and we're so excited about that. It's like: good luck. They don't exist, and no board you create could create them. And I think that's exactly right. Trying to separate out what procedural mechanisms like this can do in reality is really important. So when it comes to whether we're looking for the wrong thing: well, we can be looking for different things. One thing Facebook might do through this board is not find global norms, but at least set standards against which its own community standards might be judged. It has said that when it finally creates this board, it's going to issue a charter, and in this charter will be certain Facebook fundamental rights. At the moment they're just a bunch of buzzwords, due process, equality, stuff like that, but presumably, if it's going to be worth anything at all, they're going to add a little more meat to the bone. Then we're not going to get global norms, but we are going to get a sense of what Facebook's values are, the values by which its own content moderation and its own community standards will be judged. That is something that will have a huge substantive effect beyond any sort of procedure. Whether they're the right rules or not, again, that's what we need to hold them accountable for, but it's definitely a part of this story.

Hi, I have two questions; I'll pick one and we'll see if we have time for the second later. It's great that we were just talking about global norms, because that's the general area my question comes from. Facebook, Twitter, most of the bigger content companies we're talking about are based here, and like we said, Facebook is not in a position to create these global norms, but there is very clearly a need for similar kinds of regulation in different parts of the world. I know you don't want to be the international expert here, but you know exactly what I'm talking about. I have to say that I don't have any background in law at all; I'm a computer scientist, so this is a question rooted in ignorance, definitely. Is there any precedent, in other contexts or domains, where, if a company or private entity is not in a position to protect its users in a certain geographical area or context, maybe it shouldn't be allowed to operate in that context or area? Is there any precedent? I don't know; pharmaceuticals come to mind, or clinical domains. Do any of you have insights about where we could look for some guidance, maybe?
That's a question I've been trying very hard to answer with this paper I'm writing, and it's not going well, let me just say. If you think of it in the context of technology companies, it wasn't really law but sort of public shaming: Google withdrew from China, and that triggered, I guess, the Global Network Initiative and other quests for global norms. And the US used to have (I will leave it to the tort law professor to explain in more detail, sorry Jonathan) ATCA, the Alien Tort Claims Act, which intervened when US companies were engaged in certain kinds of human rights violations around the world. I haven't reviewed the case law in enough detail to comment, but as far as I've gone, it appears it would not reach the social media companies unless they were actively harming citizens in other countries. So I agree with you that it is a worrying problem, and I think those of us from countries where we have seen the scale at which human rights violations can be enabled by social media platforms tend to focus on it more. But it's not looking good so far; that's my short and optimistic answer.

Yeah, I wouldn't expect help from the Alien Tort Claims Act either. But it's interesting when we think of platforms that aren't the classic Facebook or Twitter, or any kind of public posting, but, say, WhatsApp, where you have private groups and things getting forwarded, and that leads to rumors spreading. It's interesting to me to see both a pivot by Facebook toward encrypted private group messaging, where the platform is not in a position to know what's going on directly or to monitor algorithmically, which is something that in our quarters has typically been celebrated, the ability to communicate securely, and then to see imposed on top of that a limit of five forwards: you can only forward something so many times, and after the fifth time it expires, it crumbles into dust like a book that's been lent out too many times. It's interesting to see a rule as crude and comprehensive as that as a way of trying to respond to the problem of virality on a network like that.
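A forward cap like that is simple enough to sketch. The version below is hypothetical and implements the rule as the panelist described it, a message that stops being forwardable after five hops; the names and types are invented, not WhatsApp's actual implementation. The key design point: the counter travels with the message, so a client can enforce the cap locally even when end-to-end encryption keeps the platform from reading the content.

```python
from dataclasses import dataclass

FORWARD_LIMIT = 5  # assumed cap, mirroring the "five forwards" rule above

@dataclass
class Message:
    body: str            # possibly ciphertext; the rule never inspects it
    forward_count: int = 0

def forward(msg: Message) -> Message:
    """Return a forwarded copy, or refuse once the message 'crumbles to dust'."""
    if msg.forward_count >= FORWARD_LIMIT:
        raise PermissionError("forward limit reached; message can no longer spread")
    return Message(body=msg.body, forward_count=msg.forward_count + 1)

if __name__ == "__main__":
    msg = Message("rumor")
    for hop in range(1, 7):
        try:
            msg = forward(msg)
            print(f"hop {hop}: forwarded (count={msg.forward_count})")
        except PermissionError as err:
            print(f"hop {hop}: {err}")   # sixth hop is refused
```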
Okay, I actually want to use the last 30 seconds to ask one of my questions, but feel free not to answer. This follows up slightly on what JZ was talking about, the graph that goes down with the progressive content removals, and also on Rob's question about focusing on process rather than substance. One of the arguments I've heard recently is: if you actually deal with the business model, if you deal with the data and the attention economy problems, you will engineer away all of the speech issues. I'm not sure that's a good answer, but I wonder what you think about it, and what it tells us about which specific speech harms need to be tackled outside of this more economic framework.

We don't. Okay. No, it won't solve it. But it does get to questions of concentration of private power, and to the extent that that economy is powered by personal information, which in turn allows for ad syndication networks where a Facebook could hold quite an extensive dossier on you without your ever having visited facebook.com or set up an account there, thinking about ways to diversify that, so that advertisers have other choices about where to place their ads, with a hope of targeting them to the advertiser's satisfaction, would, I think, absolutely have all sorts of second- and third-order effects on the ability to intervene, regulatorily or quasi-regulatorily (which is sort of the whole paper), in patterns of speech. That alone would surely have a huge and maybe salutary impact; I'd hate to try to predict it from afar.

Okay, thank you very much to our five panelists, and please join me. Thank you, Elettra. Thank you.