All right, if you're still grabbing food, feel free just to go ahead and make your way to a seat or a good place to stand; it looks like we're in a nice full room today. Thank you all for joining us today for this panel discussion on a topic that seems to be in the news every week: how to rid ourselves of toxic social media content while still preserving free expression online. I'm Sarah Morris. I'm the newly minted director of New America's Open Technology Institute. This is an exceptionally tough issue. The decisions about what content is moderated or censored have tremendous implications for how internet spaces function as places of discourse, engagement, and advocacy. Over the past few years, tech companies have come under increased pressure to rid their platforms of problematic content such as hate speech, disinformation, and terrorist content and propaganda. And governments outside the U.S., including those in Germany, Australia, and the United Kingdom, have also sought to regulate how companies must conduct content moderation, including imposing time limits for platforms to take down certain postings. And while it's understandable that policymakers want to take action to ensure that certain forms of harmful content have no place on the Internet, in the United States, the First Amendment limits the ability of the U.S. government to regulate speech, including speech online. In addition, government regulation can incentivize platforms to err on the side of taking down content, often resulting in over-broad takedowns and stifling user expression online. As a result, the vast majority of decision making on content moderation is handled by Internet companies. And these companies must strike the right balance between removing certain content to ensure user safety and positive experiences online, while also safeguarding free expression. Over the past few years, OTI has continuously pushed these companies to provide greater transparency and accountability around their content takedown practices so that users can understand how their free expression is being managed by platforms that have turned into de facto speech gatekeepers. Just last year, we released the second edition of our transparency reporting toolkit, which surveyed how 35 global Internet and telecommunications companies were reporting on content takedowns. And earlier this year, Ranking Digital Rights released its own 2019 rankings of major tech companies around the world, which include indicators on freedom of expression that cover corporate practices such as content moderation and transparency reporting. So we're delighted to co-host this event with the First Amendment Coalition, and I'd like to thank them for being an important ally in this space and an invaluable partner in making this panel happen. Without further ado, I'll turn it over to David Snyder from the First Amendment Coalition for his opening remarks, and thank you all again for joining us.

Thank you, Sarah. And thank you to New America and the Open Technology Institute for hosting this important event. My organization, the First Amendment Coalition, is based in the San Francisco Bay Area, and we promote free speech, a free press, and access to government information. So First Amendment is in the name of my group, which raises, at least for me, a question: what does the First Amendment have to do with the topic that we're talking about, which is free speech online? Does it have anything to do with free speech online? And the answer to that is kind of yes and no.
As a strictly legal matter, the First Amendment doesn't really have much to say about most of the dialogue that goes on online, because online means private companies. The First Amendment regulates government, not private companies. It limits what the government can do to restrict expression, but it doesn't limit what private entities can do with respect to censoring speech or controlling speech in any way. But the First Amendment arises out of some fundamental principles that I think are very important in this space. One of those principles has been given the name the marketplace of ideas. The idea behind that being that we, the body politic, are consumers of information, and it's better to have as broad and varied a marketplace as possible so that we can come to the right answer one way or the other. It's a little bit of an idealistic notion, and I'm not sure it ever directly applied. But today, I'm really not sure that the marketplace of ideas applies. Or if it does, I'm not sure that we have a functioning marketplace. We are all completely awash in information and speech. Anybody who goes on Twitter at any time of the day, I think, would question the premise behind the marketplace of ideas, that more speech is almost always better than less speech. I'm not sure the typical Twitter user would necessarily agree with that, because what we see now in terms of censorship isn't directed by the government so much as directed by individuals. It's directed by mobs of people who harass speakers online. These are private individuals, so the First Amendment can't step in there to do anything about it. Ultimately, I think in order to solve what I think many of us would say is a broken or at least halting marketplace of ideas, the government can't be the answer under the First Amendment. I think it's up to the tech companies to do more than they're doing. But more importantly, to explain to people what it is they're doing, and to be transparent about the policies and processes by which they take content down or block users. I may be idealistic, but I think that with greater transparency and with greater public participation in how these platforms regulate speech, we'll come to a place where the hope that underlies the First Amendment, that we would have a robust and meaningful debate, will be more of a reality than it is right now. So I guess that's a good place to hand it over to our moderator, who is Cat Zakrzewski. She is a technology policy reporter at The Washington Post. She previously reported for The Wall Street Journal Pro Venture Capital, and her work has also been published in TechCrunch, The Boston Globe, USA Today, and The Chicago Sun-Times. So with that, please help me welcome Cat, who will kick off our panel.

Thank you. Well, thank you all so much for joining us today. I'm very excited to be here to talk about some of these issues that have really been top of mind for us at The Washington Post in our coverage of tech policy in recent weeks. I write our Technology 202 daily newsletter, and it seems like every day we're thinking about these issues of free expression online, whether we're covering President Trump's social media summit last week, or we're looking at some of the hearings that were happening on the Hill. And so I'm excited to dig into these and some other issues with the panel today. So I'm excited to invite them on stage. First we'll have David, who you just heard from.
David is the Executive Director of the First Amendment Coalition. Then we'll have Sharon Bradford Franklin, and she's the Director of Surveillance and Cybersecurity here at New America's Open Technology Institute. Then I'm joined by Shaarik Zafar, and he is the Public Policy Manager at Facebook. And then Francella Ochillo; she is the Executive Director of Next Century Cities and the former Vice President of Policy at the National Hispanic Media Coalition. Thank you all for being with us today. And just before we kick things off, I wanted to mention that later in the program, we are going to open the floor to the audience for questions. And so I'm really excited that we have a full house today. So I hope as we're talking about these issues, you can be thinking of questions to bring to this great panel. And so just to kick things off, Shaarik, I wanted to ask you a little bit about just the position Facebook is in right now. You recently, earlier this month, released an update to your civil rights audit. Can you just set the stage for us? Tell us a little bit about what led Facebook to conducting that audit and what some of the findings were.

Sure, well, first of all, let me just say how grateful I am to be here and to have this opportunity. For a number of years, I was a civil rights lawyer working on 9-11 backlash issues, so hate crimes against people that look like you and me. Sometimes the victim was Portuguese, not Arab or Muslim. So ignorance combined with bigotry, right? And it's important to know that Facebook is a company that is made up of thousands of employees around the world who are committed to addressing these issues and getting them right. And I took this position because I thought this is an opportunity to work with a relevant platform and help them address a number of challenges. Hate speech, election integrity, civil rights. We're talking about free expression, and there are a number of challenges. The tech community, including Facebook, needs to do a lot better on a number of issues. But I think it's important, when we have this conversation, to remember that these social media platforms have given voice to billions of people, right? I mean, in terms of free expression, people have a voice now that they never did before, and I think we have to pause and reflect on that sometimes. And we're working together on these challenges. We all care about these things. I mean, everybody in this room wants to see what we can do to promote free expression but do it in a way that's responsible. And that's why I think it's just so great that New America and the Washington Post and these organizations are here today. And I'm really grateful. Now, with respect to the civil rights audit, I have a colleague here, Monique Dorsainvil, who has really led the charge for us. And it came after a number of conversations, which actually preceded my time at the company, with a number of really important civil rights actors who raised that, look, there are a number of challenges that the company's facing that maybe the company wasn't taking account of or addressing. Ranging from how the voices of people of color were treated on the platform, to making sure that the way we moderate content is done in a way where you take down hate speech, but you don't take down people who are complaining about hate speech, right? And that requires context, right?
And so we're really grateful that Laura Murphy, who is a prominent civil rights leader, partnered with a very prominent civil rights law firm, Relman, Dane & Colfax, to lead this. And they've done two updates. One update just recently came out on June 30th, and I encourage everybody to read it. Now, there are a number of civil rights issues, but what was important about this latest update is that it focused on three really key areas. The first one was content moderation. So at Facebook, we have community standards. These are the rules of the road, and they're public. They tell you what you can say and, more importantly, the things that we will take down because they violate our standards. This is hate speech, harassment, bullying, things like that. And Laura and her team dug in and did an assessment of how that process was going, and found that Facebook has taken some important steps. So for example, we've recently clarified and strengthened our position on white nationalism. Prior to this, we would take down white supremacism and white separatism, but we allowed white nationalism. And the idea was, well, nationalism is a different category. You have Albanian nationalism, you have Zionism. But working with civil rights leaders and others, we recognized that our assessment wasn't actually correct, and that white supremacism, white nationalism, and white separatism are the same thing, an obvious point probably to many of you. And so we strengthened that. So what was important about the audit is that they recognized that, but also made some recommendations. Right now the current policy prevents any praise, support, or representation of the terms white nationalism, white separatism, and white supremacism. But that's not exactly how these folks actually operate. They'll sometimes do things that are implicit. They won't actually say, I'm a proud white nationalist; they'll do other things. And so they made some really important recommendations that we're thinking about, like what are other steps that we can take? The second area, and I want to be mindful of time, was building civil rights infrastructure, right? And the idea is that, look, you can't just try to do things one at a time. You actually have to build it into the DNA. And our COO, Sheryl Sandberg, has set up a civil rights task force, which she's leading. And we made a commitment to onboard civil rights expertise, including in the areas of voting, et cetera. So the idea is that, look, we gotta get smarter, and that means training. You could be a really smart engineer and be really, really good at programming, but perhaps you haven't had the life experience, or just don't understand what it means to deal with issues of race. And there are basic fundamental principles of rights that you should be mindful of, depending on your position. And lastly, of course, is election interference and the census. And that's just, I mean, I think we were too slow to act in 2016. In 2018, working very closely with our partners in media, civil society, and governments, I think we did a much better job. And I think we're really taking a tremendous amount of time to make sure that we get 2020 right as an election, but also the census. And we're treating the census like an election because it's incredibly important. So I encourage everybody to read the update. It not only tells you where we've been, but more importantly, where we're going.
And it's a very important endeavor, something that we take very seriously from the very top of our organization.

Thank you for that update. And so, Francella, I just wanted to ask you, I mean, from your perspective, you've worked in platform accountability. Is Facebook doing enough on this front with this work? And are they being transparent enough about their progress?

So a couple of caveats about my position here. As Shaarik mentioned, I was one of the civil rights advocates who were very much behind pushing for the Facebook civil rights audit, and also behind encouraging Facebook to just reevaluate its codes of conduct, its community standards, to really just think with a different perspective and to involve more diverse voices in making those assessments. My current organization, we don't work on platform accountability. And so I'm speaking as a civil rights advocate and a digital rights advocate, and a person who is concerned about making sure that especially marginalized communities and communities of color can use online platforms to do all of the amazing things that Shaarik mentioned at the beginning. I think just in looking at it from 50,000 feet, it is important to acknowledge that there are things that are possible with social media platforms and access to the internet that were just completely unimaginable 20 years ago. And it has been a tool for economic and social mobility, for mobilizing people to make democratic change, for getting people to be involved in their local governments, their national governments, and for people to understand and be more compassionate and empathetic about other cultures and things that are happening in places beyond our borders. However, when I think about the responsibility of tech platforms, and what role they've played in allowing hate speech to be born, raised, and essentially strategically coordinated and deployed in hate speech campaigns, I think, yeah, they should have done something sooner. But I am excited that Facebook and other platforms are at a place where they're doing some reflection and saying that there are places where we can do better, and we've taken steps forward. I think that one change that I've observed in Facebook in particular over the past two years is that they've incorporated people from more diverse backgrounds to actually be a part of that internal dialogue, because I did feel like very often, for civil rights advocates from the outside, those calls were met with silence, or the emails were kind of passed around but never really got any sort of adequate response. I think that something good that's come out of the civil rights audit is being able to actually congeal a lot of individual civil rights advocates' voices to say, this is our list of demands, in the way that I feel like conservative advocates are very good at doing. They're very good at their communication strategy. So for example, when we talk about conservative bias, whether or not you believe conservative bias is as real or as widespread as people say it is, I think they've done a great job in positioning and doing the communication strategy on it. That's why you believe it. So I think that it's important, especially for traditionally marginalized voices that are advocating for change, for people who do not have a voice inside the walls of Facebook, Twitter, Google, lots of different places, because Facebook is not the only culpable platform here.
I think that it's important for us to have a really thoughtful dialogue about what we want and to ask the right questions. And one thing, I know that I'm getting ahead of the conversation, I also want to be clear that I do not think that any tech platform can address hate speech individually and think that that's going to be the end of finding the answer. I think that we are going to have to continuously revisit whatever solutions we generate, because the solutions that we come up with today will quickly be inadequate two or three years from now. So we really have to commit to making sure that we have evolving definitions, that we're constantly thinking about the latent effects of hate speech and the things that maybe aren't so apparent on the surface, but just having a commitment to say we want to do better.

And Sharon, I wanted to follow up and ask you on that point. You just mentioned how this isn't just a Facebook problem. Obviously, they own several of the largest social media networks, but this is a problem that affects Twitter, YouTube, many others. Are you seeing meaningful change across the board right now in the industry?

So we're seeing some encouraging steps. We at Open Technology Institute, as Sarah mentioned, have been calling particularly for increased transparency from the platforms about what their practices are. And this is an area, as David outlined in his opening remarks, where we can't really look to the government, at least within the US context, because the First Amendment tells the government you have to tread very carefully with any kind of regulation you want to impose in this space. Our government can't engage in viewpoint discrimination. We have a lot of doctrine from the Supreme Court outlining how carefully the government must tread. So we really do need to look here to the platforms to be accountable to the public and to make rules. They are in a position, as David made very clear, because they're not governed by the First Amendment, to draw these lines, to set rules on what speech will be permitted on their platforms and what won't be. And it's hard. It is absolutely hard to draw those lines, to make sure that you're taking down the awful toxic content that we are seeing continually on so many platforms and not allowing that to thrive, but still preserving free expression and recognizing the context. So that's why one of the things that we've been calling for over and over at OTI is transparency. What are the rules? How are you enforcing them? What do those really mean for users, and in terms of what is actually being taken down? Another thing I can mention, something that we at OTI joined relatively recently, is called the Santa Clara Principles, which a number of organizations put out, calling for numbers, notice, and appeals. So numbers: transparency about the number of posts that are coming down and why, with some granularity there. Notice to users: when your post is taken down, or your account is taken down, you should be aware of that. And a right of appeal. Because this is hard. So one example of context that we like to use is, we all want platforms to take down terrorist propaganda and glorification of violence. That should have no place, but we do wanna make sure that journalists and human rights activists can raise awareness about atrocities. So context really matters, and that is one reason why a robust appeals process is so important for platforms to have. So the platforms are making progress.
We've been trying to track this. We put out an assessment recently of how platforms are doing, particularly with the Santa Clara Principles, and we're seeing some progress, but there's definitely more to be done.

And I think you referred to earlier a moment of reflection. I wanted to ask you, I mean, how has the mindset changed in the tech industry about this? I mean, you mentioned terrorist content, and I remember about five years ago there was a debate going on about whether the companies should be responsible for policing terrorist content on their platforms. It seems like now the companies are working together on that front in some ways and addressing that more actively, but can you talk a little bit about how the companies' positions have evolved in recent years on these issues?

Well, again, this is hard. And what are your definitions of terrorist content? And again, turning to the context. So one of the recent examples we've seen is the awful tragedy in Christchurch, New Zealand this spring. Following on that, I'm sure folks in the audience are all familiar with the Christchurch Call, which was joined by many governments, not ours, and many of the tech platforms, trying to address this difficult conversation: what steps can platforms take to take down content? What can governments do to encourage this? But while we want platforms to move fast to take down something like the Christchurch video, which should have no place and shouldn't be propagated, we also want governments and companies to hit the pause button a bit in figuring out these rules. So Open Technology Institute joined with a number of other civil society organizations in some comments on the Christchurch Call, calling for more incorporation of civil society views in this space, calling for moving a little bit more slowly as we think through what the rules should be, and calling for ensuring that we're preserving free expression and not, in the process, stifling the views of the marginalized communities who are often the victims of the hate speech. So there are some pieces of content that are very easy to identify, like child sexual abuse material, which comes down immediately, and there are a lot of tools that are available for that. But even in this space of terrorist content, definitions can matter, context can matter, and so it's a difficult space that the platforms are in.

And so, David, I wanted to ask you a little bit about that. You just mentioned the Christchurch Call and that the U.S. was not one of the countries that signed on to it. One of the reasons that Trump administration officials cited was concerns about the First Amendment. What do you think about the administration's decision not to join that call with other countries?

Well, let me start at a more basic point, which is that the First Amendment is a substantial limitation on what the U.S. government can do compared to other countries. In other countries, lots of hate speech is banned; in Germany, Nazi symbols are banned. That wouldn't fly under the First Amendment. We have a much more robust and wide-open discourse because of that, and I think our social media platforms are much more robust and wide open because of that. One of the problems is, however, that when these platforms started, and that's very recently in terms of the history of the country for sure, I don't think much thought was put into these issues from the outset. Ideally, you'd want to build a platform that factored these things in.
When do you take something down, and why, and what sort of notice do you provide, and what's the appeal process for that? That didn't happen, and so you had these huge platforms build up, and they sort of addressed problematic or illegal speech in an ad hoc way and often behind closed doors. And so now we're in a position where we're having to kind of backfill these ideas, and there are already billions of users using these platforms in almost every country in the world, so it's an immense challenge. And to some degree I have sympathy with the platforms, because it has taken 200 years of court decisions for First Amendment doctrine to arrive where it is now in this country, and those are all decisions that judges took weeks and months and sometimes years to decide, and they've laid out boundaries for what's protected speech and what's not. It's hard to do that on the fly in 30 seconds. I grant that, so I give the platforms some degree of a pass there. However, going back to this idea that this is something they should have been thinking about 10 years ago or 15 years ago or 20 years ago, I think it's to all of our detriment that these issues weren't discussed and thought through at the beginning.

Yeah, of course. So I think, just to build on something that you mentioned, what concerns me is that when some of these problems were treated in an ad hoc way, I think these problems were treated like a PR problem. That's really what it was. Nobody really got serious about real solutions until it started generating bad PR, and it looked bad, it just didn't feel good. And then those things that didn't feel good erupted into more visible attacks, whether it's on a synagogue or on people, and it just doesn't look good on film. And I think the reality is that for a lot of people who suffer the secondary effects of what happens online, whether or not you participate online, you are very often essentially victimized by the things that are planted, nurtured, and strategically deployed from online platforms. I'll give you a really easy example. When I was in fourth grade, my gym teacher, Mr. Morgan, we were all getting ready for class, and he says, hey, friends, I got a question for you. He said, how do you know when you're dirty? And I was a little bit stunned. I didn't really know what to do. And the crazy part is, Mr. Morgan is very active on a social media platform. He has friends there, he gets to exchange ideas and share jokes, and the truth of the matter is, I don't need to be a member of whatever his platform is to actually be subject to the things that were hardened and planted and nurtured online. And so the thing is, these have very real-life effects. And the reason why I say take it a step further is that it's not just like, oh, it didn't feel good and Mr. Morgan has those beliefs. Whether or not I agree with Mr. Morgan or think that it's heinous that he thought that, the reality is, those are the teachers in your kid's school. Those are loan officers. Those are the people who are hiring managers. Those are the people who have the power to make public policy. And so it just makes me think about how it isn't until it affects you, or somebody in your family, or somebody close to you, that people who are in positions of power start to say, either maybe we should do something, or maybe this is becoming a PR problem that's getting beyond what we can control, so maybe we should react.
So while I give platforms credit for really starting to treat this like a serious issue with serious consequences, I think the truth is that some of them are getting to be 10 and 15 and 20 years old, and it is just recently, in the past two years, that this has become a fire.

What do you think changed in the last two years that made it a fire?

I think the prevalence of everybody having access, of being able, if you don't see something on the news, to replay it on YouTube, has a huge effect. Because the truth is that something happens in New Jersey on local news, then catches fire on cable TV, and then I can replay it over and over again and share it with my friends on my phone or on my laptop. I think that when you have to come face to face with certain very visible incidents, it changes the calculus. I think everybody's like, hold on a second, we got to put out this fire now, not later. And I think also advocates have been less patient about hearing, oh, we're listening, let's have a meeting. They're like, we don't want to just have a meeting, we want to see action. And I think that that's where people started to come together and say, no, we demand that you do X by this date, or else. We will take it to whatever platform to stage a protest or make it more visible. But I think people are just tired of waiting for change, especially voices that have been traditionally silenced.

I wanted to give you a chance to respond to that.

No, look, I think these are really important points. Our CEO, Mark Zuckerberg, said very recently that if the rules for the internet were being written today, they would look very different. And at the same time, he said that, look, it's well past the time for us, Facebook, to make decisions that have such deep societal consequences on our own. And I think it's notable that you typically don't hear the CEO of one of the world's largest companies call for regulation of that company, but that's exactly what he's done. He's talked about regulation in the context of harmful content, in the context of privacy, in the context of elections. And then also, it's a little wonky, but on data portability, which is also important. And I think, look, the world as it is, is that we have these platforms, and billions of people use them. And these issues have come up because they're legitimate. These aren't academic problems, right? These aren't things that we can just admire. These are things where there's a real impact. And so what do we do, right? A few things. First of all, to get that diversity of thought, we update our community standards every few weeks or so. And these range from things that are pretty easy, like child nudity. I don't really think there's a need to debate that; everybody agrees it's awful that that stuff exists, and the machines are pretty good at recognizing it so we can get rid of it. Terrorist content: we're a United States-based company, so we have to abide by the terrorist designation lists that the Department of State and the Department of the Treasury maintain. Our list is actually broader than that. There's no place for terrorism, or right-wing extremism, or white nationalism, or a range of hate groups now. Then there are some issues that are a bit harder. And so what we do is, with every single policy change that we make, every single one, we bring in a range of stakeholders. Someone like you, someone like you. People across the spectrum, to make sure that we get it right.
And that's like 30 or 40 different conversations. And we don't do it, I have to say, for public relations or for corporate social responsibility; we do it because we need the help. Frankly, we need the help. We recognize that while there's tremendous talent in the company on a range of issues, and in the last several years we've gotten more diverse, we've brought in great folks, we have tremendous expertise in civil rights and other areas, we still need the help, right? And so we are incorporating more and more expertise to get these policies right. The last thing is on appeals. It's really hard. The men and women around the world who moderate our platforms have one of the hardest jobs, and they deserve our respect, because, but for them, the stuff that we would see would be truly awful. Their job is to enforce our community standards, right? And so when we develop a standard, we have to make it clear. And we're a global company, so we strive to have a global set of standards. You can't have a standard that just says, okay, there's a particular video, a particular picture, you need to remove that. What we have to get to is a point where we have clear guidance that we can give folks. Now, just to put this in context, we have more and more men and women around the world working on the issue of safety and security, I think upwards of 30,000. We're spending more money on this enterprise than our entire revenue at the time of our IPO. So the amount of revenue we were making back then, we're now spending more than that on safety and security. But even then, because we're doing this content moderation at scale, we don't always get it right. And that's why, to your point, we do have an appeals process. So if your content is removed, you can appeal it, and then a second person will look at it. Now, what we're doing to take this a step further is we recently announced an external oversight board. And the idea is that for some of the toughest issues, in gray areas where the policy may not squarely apply, or there's just a difference of opinion, or it's really hard, we've recognized that Facebook shouldn't ultimately make all the decisions. Yes, we have a responsibility to keep the platform safe, absolutely, and we shouldn't shrink from that. But by the same token, we recognize that we shouldn't have all the power. And so we've stood up, or we're in the process of standing up, excuse me, an external oversight board, which will be independent, right? Which will be deliberative, and which will be diverse. So independent, meaning that they will make decisions that quite likely we will disagree with, and we will abide by them, and those decisions will have the value of precedent, right? They'll be deliberative: these will be people who have expertise in human rights, civil rights. And they'll be diverse, from around the world. And the idea is that this is too important not to bring more aspects of society into this process. So look, should we have moved quicker? Absolutely. Is this very hard? Of course it is, but we have a responsibility to get this right. And we're making investments to make sure that the next time we have this conversation, we're in a better space.

Just wanted to see if you wanted to respond to that. And particularly on the enforcement point, I mean, there's been a lot of reporting about some of the challenges for workers in these content moderation roles, particularly around their pay and psychological services.
Is Facebook deploying these resources effectively?

So I am not in the weeds on how Facebook is handling the very necessary mental health resources for the moderators. That is a hard job. I did want to weigh in on the oversight board piece. We, Open Technology Institute, did actually submit comments when Facebook had the comment period on the oversight board. And I know a lot of organizations have questioned whether this is just window dressing, whether it's even something to support as a process. We took the view of, let's encourage you to make it meaningful. And having personally worked on a US government oversight board before coming to New America, I do believe in the value of independent oversight. And so, on the independence point: I know there hasn't yet been clarity on the extent of the recommendation authority of this board. So you said, if they say you got a decision wrong on a particular piece of content, whether it should come down or not, that you will abide by it. One of the things that we've urged is that this independent entity should also have the ability to make recommendations that will be listened to, of the form: your rule isn't right. When you update your community standards, this community standard isn't sufficiently protective of human rights, isn't sufficiently getting at the suppression of the views of marginalized communities. Being able to look at the rules at that top level is one thing. You did mention looking to diversity. I think that's critically important. So much of moderation by all the platforms, especially when we're talking about US-based companies, is understandably grounded in English as a first language, and it needs to go beyond that. I think there are steps being made; I encourage you to continue and expand those, to make sure that different regional viewpoints, and people with expertise in the linguistics and culture of different regions who can provide that contextual expertise, are included. That is also very critically important.

When we talk about this idea of an oversight board, is there any possibility down the line that there could be one that affects all of the companies?

Well, look, Mark has given a couple of very notable public presentations recently, one at the Aspen Ideas Festival and one in conversation with two notable law professors. And this idea came up: well, look, how can we trust you all? How will you know? And the point that he made was that, look, the first time we make a decision that they disagree with and that we abide by, that's how you know you'll be able to trust us, right? The second, what was your question?

If there could be an entity that oversees all of the major tech companies?

Right, and so, yeah, this also came up. And the idea is that, look, we recognize there's a role for regulation in these areas, absolutely. But that's gonna take some time, and we don't wanna wait for the perfect regulation. That's why we're moving out on this external oversight board. Now, it's just gonna cover our platforms, but the hope absolutely is that at some point this'll be something that's an industry-wide body. Absolutely, because at that point you have some type of consistency across platforms. And so that actually came up in the conversation that Mark was having. You know, again, we're starting, we're gonna try to build it.
But just kind of like the Libra cryptocurrency, which we announced in June: there are a number of folks in that association, we get one vote, and our hope is to have like 100 members, where we still have just one vote. And I think, similarly, with this external oversight board, it's gonna be a Facebook entity, but ideally it would be great if it was something that more companies participated in. I think there would be real value in that. And it-

Can I follow up on that?

Yes, please.

In any other industry, or at least an industry that doesn't arise out of and depend on expression, I think it would be fairly easy for the government to step in and set some guidelines. And the government can serve a really useful role in setting the boundaries of various marketplaces. And it has been successful to varying degrees in regulating all sorts of industries. We've had this social media platform industry that just kind of exploded out of nowhere in almost no time whatsoever, that has as its basis expression. And so there can't be, at least under the way the First Amendment has been interpreted by courts for 150, 200 years, any kind of government body that does that. And from the perspective of somebody who sees a problem with all the terrible, hateful, violent expression on these platforms, maybe they think, well, too bad, we need the government to step in. From my perspective, my organization's perspective, the government absolutely shouldn't be in the business of determining who gets to say what and when. But you know, this isn't as simple a debate or a discussion as it was when there were three or four gatekeepers that determined who got to say what and when to a national audience. In those days, the fear of censorship was that the government would step in and block one of these gatekeepers, and so you wouldn't have access to CBS News, or you wouldn't have access to your local newspaper, or you wouldn't have access to the New York Times. That fear and that model is almost irrelevant now. As I alluded to in my opening remarks, the censorship that we see now is by individuals against individuals. And I don't know that the First Amendment has anything to do about that. We have to rely on the platforms, as we've talked about now for quite some time. One element that we haven't really touched on is, what about the consumers? What about all the people? What about pressure from people, setting aside advocacy groups like mine? The people could speak with their feet and say, I no longer wanna use Facebook. However, where are they gonna go? I mean, there are lots of other platforms to use, but there's not a Pepsi to Facebook's Coke. It's massive. And I'm not advocating for Facebook to be broken up. I'm not even sure what that means or how the government would accomplish it. I'm just highlighting it as an issue: unlike a traditional marketplace where there are competitive alternatives, in many ways there really aren't, at least for some of these platforms. I mean, if you want something that does what Twitter does, good luck. If you want something that does exactly what Facebook does, good luck. So they have a kind of captive audience, and that, I think, suppresses the ability of consumers to speak with their feet and just walk away and stop using the platforms. That would get the platforms' attention if it happened en masse.
So you bring up a really interesting point, which is competition in the industry. And right now there's been a broader conversation in Washington about antitrust, including hearings just this week that Facebook testified at. Sharon, I wanted to ask you, do you think that greater antitrust action against the tech firms in the US would address any of these issues that we're talking about on the panel today related to free expression?

So we've been looking at this more in the context of consumer privacy regulation. And I don't want to take us too much on a tangent from today's panel. New America, and my colleague Eric Mill, hosted a great event on consumer privacy just two days ago here. But I do want to mention it because in that space, and Shaarik alluded to data portability as one piece, consumer privacy is an area where we are looking to Congress to get involved. We know it'll take a while, and that regulation is very important for how the platforms, and all sorts of companies, not just social media platforms, treat our data. And in that context, we are pushing for data portability, something that OTI is developing legislative language on that will be coming out very soon, to make sure that you do have at least the ability to take your data with you and meaningfully use it on another platform. Now, David mentioned there's not really the competition. I mean, we have Instagram, but that's also Facebook, right? So what that would look like is hard to say. Those companies don't exist yet, but yes, that could be a helpful trend to address this issue. So then, if your speech is taken down by one platform, there's another platform for your voice, assuming it's the kind of speech that society does want to allow.

I'm not an antitrust expert at all. I'm not well-versed in antitrust law, but just to make a very quick point from a 30,000-foot view: A, I'm not sure what an antitrust action could do to, quote, unquote, break up Facebook. B, I think it would take an enormously long time, at least if you look at similar actions against Microsoft. I think the Microsoft case took 13 years; for AT&T, I think it took even longer. But more importantly, and I'm speaking twice removed because, again, I'm not an expert on antitrust law, but talking with antitrust lawyers, my understanding is that antitrust law is really not built to handle this industry in particular. You'd have to have some serious amendments or changes to antitrust law to handle this environment. So I know there's been a lot of discussion recently about breaking up the big tech companies. I'm not even sure what that means, and I'm not sure that it's a viable solution to the issue that we're talking about today. But perhaps, perhaps.

So let's define the problem before the solutions. My assumption is, everybody who showed up here, we all want to have an open internet, one that supports free expression. We all want that, right? And then it depends on where you fall on the spectrum in your view of free speech: some people believe the best way to fight bad speech is more speech; some people, particularly those who come from communities that have been victimized, say, look, that's a point of privilege, and hate speech actually harms individuals and entire communities, right? And so we can have that conversation, right? And at Facebook, voice is a fundamental principle.
We want to promote free expression. We recognize we have to balance it with safety and equity, and that's a very hard challenge. When we wake up, my colleagues around the world take that mission very, very seriously, right? But here's the proof point, in terms of separating out different companies within our platform: Twitter is much smaller than us. They deal with these issues. Reddit is much smaller than us. They deal with these issues. All these social media platforms, even ones much smaller than us, deal with these same types of issues, right? When Instagram joined us, one thing we were able to do is plug in our spam filter. We had a really good spam filter, and we applied it to Instagram, and all of a sudden, tons of spam just disappeared, because we had that capacity, right? The amount of money that we're investing in security, as I said, is millions and millions of dollars. And because of that, when it comes to free expression and making sure we're addressing hate speech, there's a capacity to do that, and there's tons of innovation in the context of AI, where now like 99% of terrorist content is taken down before anybody sees it, because a lot of that material is regurgitated. And what we've done, working with our partners in the industry, Google, Twitter, is create what we call hashes, which is something like a fingerprint. So if there's a terrorist image, it has a fingerprint, and it disappears because of the machines. Nudity and sexual exploitation, the computers can technically figure out what's going on there. In the context of hate speech, it's harder, because of context. Is it an activist complaining that somebody called them a slur, right? Or is it actually a hate monger? But they're getting better, right? Just, you know, over time, the computers are getting better at this. And that requires lots of research, and that's something that, you know, a company like Facebook has the ability to do. Without that, I mean, you could have just a bunch of 4chans, right? You'd just have people who are like, anything goes, right? And when it comes to having this proper balance of voice, equity, and safety, I don't think the type of antitrust action that's been talked about would actually move the needle forward. It would actually take it a step back. That said, we think, you know, regulation and oversight are important, and we welcome this conversation, because we recognize that these are not just technology issues; these are, as you alluded to, societal issues.

Francella, I wanted to ask you, I mean, Facebook has called for greater regulation. David's outlined some of the limitations facing government. Do you think Washington lawmakers have done enough to address some of these challenges?

I think Washington lawmakers don't have the capacity to address these challenges. I mean, the reality is that lawmakers are working with staffs whose portfolios are just stretched so thin, they're overwhelmed. They are the ones informing the members of Congress and other policymakers. And truthfully, by the time everybody really gets their hands around the issue of the day, we've moved on to a new issue. And I think that right now, I don't think that the government has the expertise or the capacity to address this issue.
But I think to some extent we're asking the wrong question, because if we're talking about free expression, I don't necessarily think that more government regulation improves anything. I think the truth is that we're agreeing that this is a very amorphous issue. It's something that, you know, 50 years ago, no one could have imagined. And even 20 years ago, when some of these systems were being built, the engineers and the people in those rooms just really couldn't, they were oblivious to some of the social and other impacts that some of their work might have had in terms of empowering the majority and silencing or continuing to oppress minorities. I think that we need to be asking what type of responsibility attaches when you create a product or a service that essentially becomes ubiquitous. When you have something that you're introducing into the market, it's just like when Philip Morris introduced cigarettes, and then you had a large swath of the population in the 60s and 70s saying, oh, this is so cool, this is amazing. And then it was like, oh my gosh, all these people are dying. And it was when people started to actually mobilize and say, hey, my grandma's got cancer, hey, so-and-so is sick, that things moved, before the government. It wasn't the government that moved first, it was the people that moved first. And so the truth was, the government was reacting to, oh, we really do have a problem, all these people are dying. It's not the same outcome, but it's the same setup, because I think to some extent you're introducing this new service into the market, and no one really knows what the far-reaching effects are. I don't know what it's gonna mean if you have people who've been on social media platforms for 50 and 60 years, and some of them continue to get elevated by virtue of having these platforms, and others are continuously marginalized. I don't know what the effects of that are. But I think the reality is, we have companies that are introducing a service into the market. So what is their responsibility to make sure that it's safe? And I think that sometimes, when we think about how we regulate these issues, instead of expecting the government to come up with a solution, I think a lot about how people handle HR issues. It's kind of like when people were upset about how Walmart was handling things with their employees: Walmart looked to other companies and said, what are your best practices? Where can we do better? Where can we collaborate and make it a better environment for the people who are using this? Customers, employees, whatever. And I think to some extent there is an opportunity for the large players in this market to set, essentially, corporate standards: this is gonna be our code of conduct, this is how we create a safe platform. And actually taking the next step and saying, we will not tolerate certain things here. End of story, full stop. Not with a caveat, not couching it and making it not so upsetting to people who are in the majority, but just saying certain things will not be tolerated here, full stop. And creating an industry standard, where maybe, if it's not just issues with Facebook or Google or whatever, maybe we're talking to the smaller companies like Reddit and some of the other dark sides of the internet. And I think that we need to just be honest about the questions that we're asking, and whether we're thinking about what it is that we really want. And I think people want to use these platforms.
They wanna be able to go and enter a safe space where they feel like they get to express themselves, and not make themselves a target by virtue of their position or things that they can't change.

Sharon, do you wanna respond to that? I see you nodding a bit.

No, I agree. I absolutely agree. I was just gonna build on that by saying: in addition to asking the platforms to come up with solutions, and given that we can't turn to regulators in the same way, for the First Amendment reasons, one other thing that we can't do, as a magic silver bullet, is turn completely to technology. So Shaarik was talking about some of the AI tools that have been deployed, and they must be deployed when you're trying to do content moderation at scale, for some discrete categories like the child sexual abuse material, which can fortunately be identified pretty clearly and taken down proactively before it ever appears. Although of course that hasn't solved the problem, those are important tools. But in so many ways, because, as we've all been talking about, context is so important, you have to have what we call a human in the loop. Too many people are saying: you're tech companies, you're smart, just throw technology at it, you should be able to fix this. That's not a silver bullet here, and it is so critically important to continue with human review. And this is also another area where we need more transparency. So in some cases we've had platforms saying, well, how we develop these algorithms, and the training data that we're looking at, those are protected, our trade secrets. And to an extent, we need to be able to have researchers and civil society and other folks test those tools for where they may encode and perpetuate bias. And, for example, the terrorist database that Shaarik mentioned, that's one where we don't know what's in it. And that is an area where, again, everybody agrees we don't want the platforms to serve as a breeding ground for terrorist content, but again, context is very important there, when you're trying to raise awareness about atrocities and so forth, and we don't even know what is in that database. So that's another area where more transparency is important, where humans need to be able to look at this, and the tech tools are a supplement to the humans, not a replacement.

Yeah, no, look, I think this is a very important point. So let me take you through the process. Let's say there's something hateful, or something you object to, on your Facebook feed, all right. So let's say in this instance you've posted something, right, it's objectionable, and the machines caught it, the AI caught it. That is not the end of the story. A human will always look at it and make sure that this is the right decision or not. So except for things like the terrorist stuff, in the context of hate speech, a human will look at it. Now, one thing that we've done really, I think, stems right out of the civil rights audit. This is why the civil rights audit was so important: it recognized that the folks who review our content around the world, you know, they deal with issues of hate speech, of bullying, of nudity, of a range of community standards, right, because our standards are quite wide, and hate speech in particular was a particularly tough nut to crack, right.
And so one recommendation, which we're actually gonna do, is that we're having a cadre of folks who specialize only in hate speech, right. And the idea is that having context, understanding that in a certain country a slur is a slur, while in other contexts it's a sign of endearment, requires some specialization, some cultural competency, right. And so now, on the transparency point, we agree transparency is incredibly important, not only as a fundamental principle, but also because the more we share appropriately, the smarter we get, because of the feedback that we get. And so we've released our transparency report, and that includes our community standards enforcement report. That was, I think, in March of 2019, and it tells you things like, for example: what is the number of government requests that we receive? What were the types of takedowns? How often was our service interrupted because of a government request, right. And we've done it, I was looking at the chart earlier today just to prepare for this, and there's a level of specificity that I think is quite rich, and it's on our newsroom, newsroom.fb.com, just Google it. But the point stands: the more that we can share appropriately, the better. It's as important as a first principle, but it also makes us smarter.

You talked about the importance of the companies themselves developing best practices. Can we trust the companies at this point to do that? I mean, this week we saw Twitter announce a policy that even if a public official were to tweet something that was objectionable, they would label it. Then we saw they had their first major test with President Trump's racist tweets this past week. Do you think, after seeing incidents like that, that these companies can be trusted to enforce these policies?

I think that maybe the part of your question that I take issue with is that I don't think that they are the only actor that has a responsibility in this. So I feel a little bit like, whether I trust them or not, to some extent that doesn't always matter, because I feel like they are one actor in the ecosystem. I think that there are several people who have responsibility. I think that the companies, especially the larger companies with the resources and the expertise, and those who really kind of set the standard for what is acceptable in the industry, have a responsibility to say: this is what's acceptable on our platform, these are the things that we won't tolerate, this is what we want to support. Like saying, yes, we want to support free expression, we want this to be a place where we can share ideas even if you disagree with them. I think that that's a great idea. However, I do think that there is gonna be a day when maybe government will be able to get more involved and reset some definitions. Maybe they'll be able to say, I don't know, maybe we need to revisit what certain statutes mean and what we should do to add to them or change the definitions. And I think that it's important for us to just recognize that the companies are one player, and then government might be here. We have individuals who are using the platform. We have people who are the purveyors of hate, misinformation, and different things like that. There are lots of different actors that have responsibility here. So in general, I don't think that I necessarily have to trust companies to expect them to do the right thing.
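To make concrete the review flow Shaarik describes above, where imagery matching a shared hash comes down automatically and other machine-flagged posts are always routed to a human reviewer, here is a minimal sketch in Python. It assumes exact SHA-256 matching as a simple stand-in for the perceptual hashes (such as PhotoDNA or PDQ) that production systems use, and every name in it is illustrative rather than any platform's actual code.

    import hashlib
    from dataclasses import dataclass

    # Placeholder for a shared industry hash list; real deployments pull
    # fingerprints from a cross-company database rather than hardcoding them.
    KNOWN_BAD_HASHES = set()

    FLAG_THRESHOLD = 0.8  # illustrative confidence cutoff for machine flagging


    @dataclass
    class Decision:
        action: str  # "auto_remove", "human_review", or "leave_up"
        reason: str


    def fingerprint(image_bytes):
        # Exact SHA-256 match for simplicity; production systems use perceptual
        # hashes (PhotoDNA, PDQ) so edited copies of an image still match.
        return hashlib.sha256(image_bytes).hexdigest()


    def toy_hate_speech_score(text):
        # Stand-in for a trained classifier: a crude keyword check, for
        # illustration only. Real models weigh context, not just words.
        blocked_words = {"<slur-1>", "<slur-2>"}
        return 1.0 if any(w in blocked_words for w in text.lower().split()) else 0.0


    def moderate(image_bytes, text):
        # Step 1: known terrorist or child-exploitation imagery matches a shared
        # fingerprint and comes down automatically, before anyone sees it.
        if image_bytes is not None and fingerprint(image_bytes) in KNOWN_BAD_HASHES:
            return Decision("auto_remove", "matched shared hash database")

        # Step 2: everything else the machines flag is routed to a person,
        # because context (an activist quoting a slur versus a hate monger
        # using one) needs human judgment.
        if toy_hate_speech_score(text) >= FLAG_THRESHOLD:
            return Decision("human_review", "classifier flagged possible hate speech")

        return Decision("leave_up", "no signal")

A post that lands in human review and is then removed would feed the appeals process, and eventually the external oversight board, described earlier in the panel.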
Well, I wanted to open up the event to take some questions from the audience. So I see we've got a lot of hands. So, sir, you're right there. My name is Bill Bushka. I have a site called doaskdotell.com, and it has a lot of associated blogs. I've been sort of a citizen journalist ever since I retired from IT, mainframe work, about 20 years ago. I think there's a fundamental question that's going on about who should even be a journalist, as opposed to being an activist, and I think that's affecting the social media platforms. Facebook, for example, prods me to run fundraisers for nonprofits. Anytime I make a political post about anything, they tell me to add a donation box. It's as if I had to prove I could raise money for somebody else before I get the right to speak for myself. I mean, that's very disturbing. I wanted to add also the whole idea, with white supremacy, that some topics are off limits. If you wanna be a journalist, nothing can really be off limits, because otherwise you can't really be objective. So if you use a platform that has limitations on ideas, it's not objective anymore. Now, there's a problem, as Jordan Peterson has pointed out. On the right, we know where the Overton window ends. Racial superiority isn't acceptable. But on the left, we don't know where those boundaries are. If we knew where those boundaries were on the extreme left, you wouldn't have as much political polarization to worry about. It wouldn't be as hard to tell what hate speech is, and so forth. The boundaries are not symmetrical on both sides of the political spectrum, and that's a problem. Also, with Section 230, we're getting to a point where I think you're coming close, particularly because of what's going on in Europe with Article 13 and YouTube, and a lot of other problems with FOSTA, for example, which you mentioned, and there's a new law, the CASE Act, in the Senate, and everything, there's all kinds of things going on that threaten independent speakers. We're getting to a point where you're going to select who gets to be heard, and that gets into social credit. Look, the idea is, do you want people to speak? Excuse me, sorry, I don't mean to cut you off, but we do have a limited time, so just, if you have a direct question for the panel. So can you comment on that, particularly, for example, the prodding people to raise money for nonprofits? That's a good place to start, could you do that? So, sir, thank you for your question. We have a lot of really cool tools, and one of the great, I think, success stories of Facebook is that community-based organizations have been able to raise money, and these aren't only the major organizations like the Red Cross, which are the ones that everybody's heard about; even really small community organizations can have a fundraiser where you're able to raise money. I can have a fundraiser for a charity that I care about, I can just do it, my friends can contribute if it's on my birthday, and it's a really cool tool. So I think that's actually a good thing. But that should not impact your ability to post or to respond or to like a comment, and I think it's something that you certainly, yeah, you don't have to do it, but it's something you can take advantage of. For the other, I mean, I guess I'll leave it at that. I'll jump in on question number two. So I think the one thing that's important is trying to tease out something that you mentioned in your question, about if conservatives knew where the line was, essentially, where it made it uncomfortable.
I think when we're talking about trying to define where the line is, as if there were a line, that line would never be adequately drawn, because I'm sure that conservatives would turn back around and say that it wasn't drawn in the right place. It reminds me of something that happened on, I believe it was, the University of Portland's campus, where there were five conservative students who said, essentially, we are marginalized voices in this pool of liberals. And a student, Emma, did a blog post in response, saying they were calling themselves marginalized voices because they felt they were in a pool of liberal students, and she had this brilliant response when she said, how can you be a marginalized voice when all of the policies and the administration prop up your beliefs? And I think that when we're thinking about who are marginalized voices and who are being silenced on some of these platforms, when you even think about what digital inclusion means, what inclusion means, it means acknowledging that something about your policies, the framework of the thing, was inherently exclusive to start. So if something was inherently exclusive to start, and we can acknowledge that majority voices are usually the voices that are propped up, then that is why people who are the minorities are saying, hold on a second, we deserve certain protections, we deserve certain care, we need to think carefully about how we moderate and who we are actually silencing. So I think it's just important for that to be a part of the way that you're thinking about it. Whether or not you agree with Emma, or who's conservative and who's liberal, I don't really like being characterized as left or right. I think that I'm just a concerned citizen who wants to make sure that good people get an opportunity to speak, and people that I disagree with have a place to speak as well. But I should not be targeted by virtue of the color of my skin or things that I can't change, just because you get a bigger voice on any tech platform at all. Got it. And let's see, you in the back? My name is Roger Cochetti, and I work with equity and venture investors in the technology sector. One of the issues that's been around for a long time, in both the discussion about First Amendment and Fourth Amendment rights, has been how you integrate American values with those of the rest of the world. And for most of that time this was theoretical, but we all know that today there are about 250 million Americans who use the internet; three times that number of Chinese do so, twice that number of Indians do so, and any combination of Japan, Russia, and Brazil would have more internet users than America has. All of the platforms have more non-Americans than Americans using them. So the issue is, if you're going to establish values, how can you integrate them? Unless we just say, listen, me and my buddies have all agreed on this, so that's the way it is; or you just say that everybody in this room or on this panel agrees, and that's the way it is for the world. You face the problem that basic definitions of terrorism, obscenity, and privacy differ quite a bit between China, India, the United States, Russia, Brazil, Japan. So if somebody's going to play God, I mean, how do they do this? Sir, thank you, that's a really good question. I think, fundamentally, the basic question is, what do you want the internet to be?
And do we want to have one internet, or do we want to have a bunch of silos? And one thing I am fearful of is, there is an American approach, and I think the UK and Europe have a similar approach, with some notable differences, but by and large. And then there is, as you alluded to, another model, which is quite repressive, where there is no free expression, and that is arguably being promulgated around the world. And one thing is that if you start breaking up big companies, if we don't adequately protect the right of expression, there is a danger that another model will not only take root, but will thrive. And that's something that a lot of us are very concerned about, those of us who care, like I'm sure everybody in this room does, about human rights, about free expression. Now, we are a global company. We have a global set of standards. And because we were founded in the United States, the First Amendment, while not binding, was certainly instructive. The idea that free expression is a norm that we want to not only espouse, but want to make fundamental to our platform. There are other countries, even in Europe, where they have a different approach, so certain things are banned. And we generally follow the law. The calculation that we make is that if 99% of the conversation is non-problematic, we think it's a fair trade-off to take down specific pieces of content. Even though we wouldn't necessarily do that under US norms, we're willing to do that in, let's say, Germany or France, because overwhelmingly, most of the content is non-problematic. But there are lines that we draw, where we will not follow local customs, because we think that somebody is labeled, for example, a terrorist, when they're really a human rights worker. By the same token, and this is a commitment from the very top of our leadership, we won't put our people's personal data in countries that don't protect human rights. Without naming names, but we will not do that. That said, and this is gonna be one of the challenges, that's why, when we stood up this external oversight board, because of the reasons that you pointed out, we wanted to make sure that we had diversity. Because you could have a North Star of expression, or a North Star of safety, but what that actually looks like in a country that's very, very religious, versus one that's secular, could be very different. And so figuring that out, and having a policy that addresses both, is very difficult, right? But we think that it is possible, and we think that part of the way to do that is to have global standards with appeals, and now with this new board, which will hopefully be very diverse and will be well positioned to tackle some of these issues. But your point's very well taken. I just want to add on a little bit. To the extent that different countries actually have different legal requirements, a lot of the platforms do have ways to enforce something and take something down for certain countries, and use geo-blocking so that you get a different result. So, as was alluded to, in Germany it's illegal to post swastikas, and so companies can block that content for that country, so you can't access it if you're in Germany, but you might be able to access that post, which may have been made from elsewhere, if you're somewhere else. So, to some extent, there can be, at least with regard to complying with the laws that apply in different countries, some ability to enforce differently.
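A minimal sketch of the geo-blocking just described, with hypothetical names throughout: the post stays up globally, but a per-country restriction list determines whether a given viewer can see it.

```python
# Content that is legal in most places but illegal somewhere (e.g.
# swastika imagery in Germany) is withheld only for viewers in that
# country, rather than removed globally.

# Hypothetical map of post IDs to country codes where a legal
# removal demand applies.
COUNTRY_RESTRICTIONS = {
    "post_123": {"DE"},  # restricted in Germany only
}

def visible_to(post_id: str, viewer_country: str) -> bool:
    """Return False if the post is withheld in the viewer's country."""
    return viewer_country not in COUNTRY_RESTRICTIONS.get(post_id, set())

print(visible_to("post_123", "DE"))  # False: blocked for German viewers
print(visible_to("post_123", "US"))  # True: still visible elsewhere
```

David Borden with the group StopTheDrugWar.org.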
Most of the discussion today has had to do with the behavior of individuals or mobs online, hate speech, terrorist incitement. My question has to do with the orchestrated or even automated manipulation of the information stream by governments and political campaigns, for example, the Kremlin, or, in my field, the Duterte campaign and now government in the Philippines. What's being done? What are our options? Do we have the right, from the US, to regulate what companies based here do abroad, or what those who wanna operate here do? So, look, again, another very good question. There's a role for governments there, and there's a role and responsibility for platforms. Mark was talking a few months ago about some of the tools to respond to, for example, a cyber attack; that's above our pay grade. That's something that our government needs to handle. But in terms of behavior that's on our platform, we call it coordinated inauthentic behavior. And what that means is that, for people in, for example, Russia or elsewhere, the activity may be legitimate, and the activity may be something that doesn't violate our standards. So, for example, they may be, on one hand, supporting, through advertising and posts, a rally in favor of immigration reform, right? And by the same token, they'll be promoting another rally across the street that's arguing for increased immigration restrictions, right? Those are both legitimate political viewpoints that don't violate our standards. But because there's a government, or a company, but generally a government, that's inauthentic about what they're doing, they're not being clear, they're using fake accounts, et cetera, we will take that down. Now, in terms of figuring that out, we work mostly with governments around the world, right? We have scaled up significantly. Like I mentioned, the number of folks that are working on this issue is massive, and we have folks who are experts in intelligence, who have really, really deep expertise, and regional expertise, to find this out. And we take things down, and we're quite transparent about it. So there were, I think, some Burmese generals whose activity was kicked off; we were certainly aware of an effort by the Russian government; recently there was an effort by the government of Iran. And this is something that we're very mindful of, and, I should say, particularly mindful of in the context of elections. And not just the 2020 election and the census, which we're treating like an election; elections for Facebook are 24/7, 365 days a year, because, guess what, countries around the globe have elections. We just had one in India, there's one in Brazil, in Indonesia. There's always an election happening somewhere. And this is one thing, coordinated inauthentic behavior, that we're very, very worried about, and we have really scaled up our efforts to address it. And the other point I'd say, just on the election, we have an election war room. You know, we have lawyers, intel specialists, political specialists, people across the various kinds of expertise, who are manning this, because we take these threats very seriously. David, I just wanted to give you a chance to weigh in. You talked a lot about the limitations with government, but is this a different challenge when it comes to election security and foreign influence? Yeah, that's a really interesting question, and I wish I had a detailed and well-informed answer.
I mean, so, sort of talking a little bit off the top of my head, I haven't studied the question of how election law fits in here, but I think the limitations on government involvement in expressive activity present issues for how the government might step in and halt or regulate or somehow alter the sort of thing that Sharik was discussing with the Russian government's involvement in propagating inauthentic speech. Maybe one area that our government could regulate is payment, perhaps, by foreign governments buying ads. And this is something that has been discussed a bit, I know. So, to the extent the Russian government wants to buy ads on Facebook or on Twitter or wherever, that might be an area that's open, since the exchange of currency is not expressive activity under the First Amendment in most contexts, although I think there actually are some instances where it is. So that's what I can think of off the top of my head. It's not an issue that I've delved deeply into. So Dave was referring to the Honest Ads Act, and the good news there is that even though we don't have a legal requirement yet, we're actually complying with it, because we think it's important. So there's a couple of things that we're doing. Right now, if you wanna run an ad, and I distinguish an ad from content, so, you know, I can post whatever I want, but if I wanna run an ad, I have to first register and attest that I am an actual individual, with a mailing address; I have to get registered. And particularly if I'm running an ad on an issue, actually only when I'm running an ad on an issue of national importance, so things like immigration or healthcare or politics or foreign affairs. If I'm running an ad in these areas, I have to be registered, right? And the idea is specifically to get at the problem that we have with potential foreign actors; you don't want them to be able to influence a democratic election. So not only do they have to register first, any ad that's run by them has to say who they are, where they're from, right? And then, on the transparency point that we alluded to earlier, we now have a repository, an archive, of all issue ads. So anybody can go in, you can Google, sorry, you can search and look for all the ads that a particular advertiser has put in place. And that's really important, right, in terms of transparency. Because, you know, we recognize that elections are important, and we learned the lessons from 2016. In the blue shirt by the aisle? Oh, sorry, yeah, sorry, next, sorry about that. Hi there, my name is Lucia. I'm an undergraduate student studying international relations. I wanted to ask about this balance of voice, equity, and safety that you were talking about, particularly in contexts where the stakes are really high. So, for example, if a civil war is taking place in another country where social media platforms are operating, or even, on a smaller scale, in this country when there are violent clashes, whether your calculus or your measurement system changes in how you look at freedom of expression. Absolutely, it does. So, where there's a risk of imminent violence, and particularly, I don't like this term, forgive me, it's a term that's kind of being used, "at-risk countries."
I have a problem with that for reasons that are probably obvious, but in countries where there is a risk of imminent violence, right, where misinformation can actually lead to real-world harm, we will remove it. So our approach to misinformation, broadly speaking, is that there's the belief that tech companies shouldn't be the arbiter of what's true or false, right? Now, that said, we don't want to allow things that are demonstrably false to go viral. So the way that we address misinformation, largely, is, well, first of all, most misinformation comes from fake accounts, and we've closed, let's see, over a million fake accounts. So if you take down the fake accounts through automation, you get at misinformation, right? That's the easiest way to do it. But then what we do is work with third-party fact-checkers, organizations like the Associated Press and other well-respected organizations around the world, and that network has grown, I think it's currently around 40 around the world, and it's still growing. And if one of our third-party fact-checkers says that a particular article or something is false, we'll then down-rank it, right? And so it means that it won't show up in your News Feed. But for the circumstance that you outline, where there's risk, or misinformation that people are coming and they're gonna kidnap children, or something like that has happened, we won't just down-rank it, we'll remove it. Because we recognize that this is not some type of academic problem; this is real safety and security at stake, right? And so we will take action when necessary. And just following up on that, the company is moving more toward privacy and encryption, and obviously WhatsApp is a very popular service. How are you handling this when it comes to WhatsApp? So I think part of the reason Mark wrote this up, and we also talked about this shift to a more privacy-focused service, was because we recognize that, look, there's a value to having a conversation in your living room, right, which is just between us, and then having one in a town square, or whatever it is. And you wanna have both, right? But we recognize this has important law enforcement implications and safety implications and hate speech implications. And part of that is through machines, right? Part of that is through users reporting. And part of it is, I think the reason that we announced it well in advance of actually executing on these things is because we recognize there is a value to having more privacy and having these systems encrypted, but they present so many challenges. So let's get this idea out there in the ecosystem, and as we build it, let's get your voice and your thoughts on this, because it's gonna be hard. Look, there's no question that this is gonna be a tough nut to crack. I think part of the solution is gonna be AI, but not completely. Part of the solution is gonna be users who are privy to these private conversations. And then we're appropriately working with law enforcement, right? But it's gonna be tough. I have to be careful, it's gonna be tough. And I think that's why it's important that we got this idea out in advance, so we have time to get as much input as possible about how to get this right. In the blue shirt. Thank you guys so much for being here today and taking my question, I appreciate it. This specifically relates to the United States and hate speech, as was brought up a lot.
We talked about targeted communities and things like that. Earlier on, it was mentioned that people are tired of waiting, a sentiment that I think a lot of people understand. My question is, what is our end goal? What do we want? Do we want someone to step in and take these things away from our sight? Because that can be done, but it doesn't eliminate the existence of these things. And if we do that, and these people are censored, like you mentioned with the story of the gym teachers, just disgusting remarks, I mean, you don't have to be presented directly with someone's point of view for it to affect you. So I think that maybe that sounds like a very mundane question, but I think that it's important for us to sit back and think, what do we want? Do we want these things to go away from our sight, or do we want them to go away internally? I think that the answer to that question is going to change over time. I think that right now, people want to have tech platforms make an honest effort to make their platforms a place that is welcoming to people of all shades and all points of view and from all different backgrounds. And I think that while that sounds idealistic, the truth of the matter is, we know that if they want something on their platform, they find a way to make it happen. If they want something off of their platform, they find a way to make that happen as well. So it's hard to wait around and accept the "oh, we're working on it, we're not so sure, maybe, it's really hard," when it's something that really doesn't have a direct impact on them, so there's not necessarily an urgency to solve the problem. And in terms of this idea that either we have a better platform or there's this dark world where they've taken everything away and they're in control, I think that there is an in-between. It's this very apocalyptic view that, well, if we get government involved, it's gonna look like this, or if platforms do this, this is what it's gonna look like. I don't think it's one way or the other. I think that we're looking for everybody to accept responsibility for their role, and also for their ability to actually effect change. Because the truth of the matter is, if Google and Facebook say, this is what we're doing, you better believe Twitter and a whole bunch of other platforms are gonna fall in line and start adopting those principles. So I think that that's what we're looking for: people to accept some sort of responsibility, to make it a priority to make it a healthy platform, a place where lots of people feel welcome, even in disagreement, if it's civil discourse, and to also be responsive when they know that they maybe created something that was that uncontrollable, unthinkable monster, and to say, you know what? We didn't get that right, but we're gonna fix it immediately. I was just gonna add a little bit. To your point, if somebody still has hateful views, they still have hateful views even if they're not on a platform. But platforms can give voice to that, expand that, expand the reach. So it still does matter in terms of what is amplified and what is downgraded. And then the flip side of that, of course, as Sharik already mentioned in the context of misinformation, is that they have other tools available to them in addition to taking things down: limiting the reach, downgrading content.
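A minimal sketch of the "limit the reach" tool just mentioned, with hypothetical names and placeholder numbers: a post rated false by a fact-checking partner keeps its base ranking score but is multiplied by a heavy penalty, so it surfaces far lower in the feed instead of being deleted.

```python
# Down-ranking as an alternative to removal: rated-false content is not
# deleted, it just loses most of its distribution in the ranked feed.
from dataclasses import dataclass

FACT_CHECK_PENALTY = 0.1  # hypothetical: rated-false posts keep ~10% of reach

@dataclass
class Post:
    text: str
    relevance: float           # base ranking score from the usual signals
    rated_false: bool = False  # set when a fact-checking partner flags it

def ranking_score(post: Post) -> float:
    return post.relevance * (FACT_CHECK_PENALTY if post.rated_false else 1.0)

feed = [
    Post("ordinary update", relevance=0.6),
    Post("viral hoax", relevance=0.9, rated_false=True),
]
# The hoax had the higher base score but now ranks below the ordinary post.
for p in sorted(feed, key=ranking_score, reverse=True):
    print(f"{ranking_score(p):.2f}  {p.text}")
```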
And there was a problem that some researchers called to the attention of YouTube a while ago, where after various major tragedies, what would pop up as related content for people to see was all the conspiracy theories about how it wasn't real. And they found a way to mitigate that problem and make it less prominent. There are a lot of grays here and a lot of ways to address that. I was just gonna make a modest point, because I think it provides some helpful context: under the First Amendment, there are pretty clear boundaries about whether hate speech is protected or not. It is. There isn't a First Amendment exception for hate speech. And the exceptions that there are are really quite narrow. So the exception for so-called fighting words means a direct threat of physical violence directed at a specific individual. That's not protected under the First Amendment. So the First Amendment is far broader in its protections than what I think people would find acceptable on Facebook and Twitter. But as I alluded to in my opening remarks, Facebook, Twitter, and these other platforms, they're not governed by the First Amendment, and I don't think they necessarily should be. But I do think it's important to keep in mind some of the animating principles behind why the First Amendment's protections are so broad. And this is over-simplifying it, but part of the idea is that it is only through seeing the worst of the worst kinds of speech, again within very broad boundaries, that we get an understanding and acceptance of the fact that it exists and that there are people who promote those ideas. So there is a value, under First Amendment jurisprudence, a social value, in seeing the farthest extreme of hateful views, because those people are out there and we pretend they're not at our own peril. I think the social media context is entirely different, because it's much more personal and it's much more directed at individuals, in ways that public speaking and, well, broadcast is a separate category, information published in newspapers, are not. So I'm not saying that those First Amendment principles should apply in the social media realm. I do think it's important, though, to keep in mind that until 20 years ago in this country, what was considered free speech and what was not was a lot broader than I think we think of it now. Sorry, in the glasses. Thank you. I take issue with some of the terms that the gentleman from Facebook used. These issues are not about an open internet; they're about Facebook's business practices. And the term "community standards" is meant to soften and diffuse what is the real term, which is "terms of service." And Facebook is declining to enforce its terms of service by delegating its responsibilities to oversight boards, external experts, contractors, blue-ribbon committees, and so forth. Zuckerberg thinks he should get a gold star for calling for regulation. It's not at all unusual. Sir, do you have a question? I do have a question. It's not at all unusual for large and successful corporations to call for federal regulation, because it's a barrier to entry, and it enables them to say, our behavior complies with federal regulations. There's a whole host of objectionable and heinous things in the society that fully comply with federal regulations. So my point is, we hear a lot about Facebook taking down this post or downgrading this content.
What we don't hear about is the closing of accounts of real people who violate the terms of service. Why is it that Facebook will not use its prerogative to do that? And I would argue it's because it reduces Facebook's audience size and therefore would be seen as detrimental to its business. Respectfully, sir, we do close accounts. Community standards, terms of service, whatever you wanna call them, we do. If you repeatedly post hate speech, if you're bullying, if you are spamming, if you're engaging in all sorts of behavior that our community standards, our terms of service, proscribe, we will shut down the account. We do it; as I just mentioned, we've shut down millions of fake accounts. Because when people come to Facebook, they wanna have a good experience. It's actually in our business interest to do so. Look, we, like all companies, wanna grow, but if people come to Facebook and there's just tons of spam or tons of hate, or you're being bullied, if you're having a really bad experience, then you're not gonna wanna spend time on Facebook. And look, we have a business interest. It's a service that doesn't cost money, and so we subsist by selling ads. We're very open about that. We just had a recent update on our newsroom explaining our business model, right? And advertisers aren't gonna wanna pay for a service where people aren't happy, right? And so, simply put, we have our community standards and we enforce them. And if you don't abide by them, you're not gonna be on Facebook. Simple as that. And I think we've got time for one more question. So maybe right here in the front. Recently, certain politicians have claimed that digital platforms such as Facebook are enforcing these community standards unfairly, even calling this censorship. So what are your thoughts on how these platforms can move forward with removing harmful and misleading content while navigating the debate over alleged bias and Section 230 protections? I'm happy to take the 230 point. Look, people from a liberal background, a conservative background, no background, and I liked what you said about, don't put me in a box, a lot of people from various perspectives do really well on Facebook and Instagram, and frankly, for that matter, on YouTube and all these other services, because these services have given a voice to people from a range of backgrounds, right? So we wanna be a company that serves everybody, you know, within the guidelines of our community standards, right. On Section 230, we agree with Senator Wyden. He said that Section 230 is both a sword and a shield. What Section 230 of the Communications Decency Act says is that internet providers, not just social media companies, but also Comcast and Verizon, aren't going to be held liable for the content that a third party puts up. What that allows is a couple of things. First of all, it allows us not to have to monitor every single post. If we had to monitor every single post, there's a privacy implication, right? But also, you wouldn't have this big, expansive ecosystem of people sharing things, because everything would slow down. And also, because a knowing requirement doesn't exist, Section 230 allows us to actually enforce our standards, so we can say, okay, look, we're paying attention, and there's hate speech, or there's violence.
We're taking it off; there's terror content, we're taking it off. But for Section 230, companies might want to say, you know what, we don't want to know what's going on, because we don't want a knowing requirement making us legally culpable. So we're just gonna let everything go. And then you're just gonna have a bunch of 4chans, right, where you're gonna have tons and tons of just the worst of the worst, right? And so we think Section 230 is a really, really useful regulation. It's, frankly, helpful to innovation. It's spurred an incredible ecosystem that has led to so many people having access to great services. I just wanna add that the Open Technology Institute also supports Section 230 and is very concerned about various proposals to chip away at that protection. It actually allows for less censorship, because it enables the platforms and others to engage in content moderation in a more nuanced way, looking to context, and to be able to make those decisions that are necessary to enforce terms of service, or community standards, or whatever you wanna call them, and it provides them with the ability to do that in a way where they don't feel they have to err on the side of censorship, which so many of the proposals that are coming out there would actually cause them to do. I just wanted to give our other two panelists a quick chance to weigh in on that. Where do you stand on the debate about whether or not Section 230 should be modified? I actually totally agree with Sharik and Sharon on that, and I think that, in general, I do not see Section 230 as blanket immunity to just let anything you want go on on your platform, but I do think it gives tech platforms the latitude to be able to handle it as they see fit, with more nuance, without leaning on the side of censorship. I guess it's a unified front. Yeah, I mean, it's far from perfect, but it's a lot better than not having it, or having some very significantly modified version of it. So. Got it. Well, I think that's all the time we have for today. Thank you all so much for joining us.