Thanks for joining us. I'm Nate Cardozo, a senior staff attorney here at EFF in San Francisco. If you were as unfortunate as I was and had to sit through ten hours of Mark Zuckerberg's testimony a couple of weeks ago, you probably noticed Zuckerberg saying that Facebook relies more and more on artificial intelligence to flag and remove content. More and more. What does that mean? More is a relative term. More than what?

That's a question we at EFF have been asking for years. How many posts does Facebook remove for violating its terms of service? How many posts do governments ask Facebook to remove for violating its terms of service or other community standards? Until recently, that has been pretty much impossible to determine. Facebook has started opening up a little bit, and part of that is in response to a set of principles released by us at EFF, the ACLU of Northern California, CDT (the Center for Democracy and Technology), New America's Open Technology Institute, and a handful of academics who work on international online freedom of expression.

The Santa Clara Principles came out of a meeting we held in February of this year, down in Santa Clara, surprisingly enough. Eric Goldman, a professor at Santa Clara University, put on a conference called Content Moderation at Scale, and the civil society groups behind the Santa Clara Principles met the day before that conference to talk about what we wanted to push the companies on. This week, we released the set of three principles we think companies should follow. They are: numbers, notice, and appeal.

First, numbers. We think companies should publish the number of posts removed or accounts suspended due to violations of content guidelines, and we think they should break those numbers out pretty granularly. How many removals came from government requests? How many came from private reporting? How many came from being flagged by the companies' own internal AI systems? The numbers should be broken down by country, and they should be broken down by the category of rule violated.

Second, notice. Companies should provide notice to each user whose content is removed, and that notice should say which specific rule the content violated. If you don't know which rule you violated, there's no way you can strive to do better on the platform. There's also no way you can adequately contest what might be a wildly inaccurate removal. Again, we don't have very many numbers here, but a study by ProPublica showed that Facebook's moderators followed their own moderation guidelines less than 50% of the time. So the error rate that human moderators experience is enormous, and of course we can presume that the error rate of early AI moderators will be even higher.

And finally, appeal. Because those error rates are so high, we're asking the companies to give every user who has content removed or an account suspended the opportunity to meaningfully contest that removal or suspension. To date, companies have usually allowed some sort of appeal for permanent account suspensions, and that's about it. If you had a piece of content removed on Facebook, until recently you very rarely even got notice of that removal, much less an opportunity to appeal. And still today, there are categories of content for which Facebook does not allow an appeal.

So that's what we're asking for.
We're asking the companies to be open and transparent about how they moderate speech, about how they perform private censorship of what would otherwise be perfectly legal speech that simply happens to violate their platform policies.