Thank you so much for coming. My name is Kendra Albert. I'm a clinical instructional fellow here at the Berkman Klein Center, and I'm so excited to welcome you to the Custodians of the Internet talk. So just in case you're not familiar with the Berkman standard operating procedure, this talk is being webcast live, hello to those watching on stream, and recorded for posterity, so anything you say can and may be used against you. And then if you want to respond to this talk on Twitter, you can tweet at BKCHarvard or use the hashtag BKCHarvard to respond in real time. And I'll try to keep an eye on it. No promises. So folks who are watching remotely, if you have questions you want to get in, send those along and I'll see if I can circulate them. So it's my great honor, and I have a lot of excitement, to introduce Tarleton Gillespie. Many of you know him, but Tarleton is a principal researcher at Microsoft Research New England in the Social Media Collective research group and an adjunct associate professor in the Cornell Department of Communication. He received his MA and PhD in communication from the University of California at San Diego. He's also the author, in addition to Custodians of the Internet, his new book, which we're here to talk about, of Wired Shut: Copyright and the Shape of Digital Culture. And when I think about Tarleton's work in the space of moderation and platforms, there are two key pieces that I think of, although he's published many. One was back in 2010, called "The Politics of 'Platforms'," which has been really seminal in the field. The other is his more recent article in New Media & Society, "What Is a Flag For?," with Kate Crawford. And if you've read that article and then you read his book, it's clear some of the thinking gets fleshed out in a more comprehensive way in Custodians of the Internet. So please join me in welcoming Tarleton. How's the sound, does it work? Can you hear me? It's okay? Great. Thank you so much, Kendra, and to Berkman for the invitation. Thanks for coming. So what I'd like to do, my understanding of the Berkman talks is that you don't wanna let me roll on for an hour, we wanna have lots of time for Q and A. So what I'd like to do is just give you a glimpse of the central argument of the book. There is a lot in the book that tries to cover how platforms came to be moderators, how they reached the point that they have, how they go about it, the labor involved, the technology involved. So I will kind of skim over that unless there are questions. Oftentimes we have audiences that have very little familiarity and audiences that have lots of familiarity with these questions. So I'm gonna go kind of down the middle and try to make a case for how to think about moderation as a way of thinking about platforms. But I'm happy in the Q and A to either go into some of the details or to connect it to the bigger issues that we're asking about platforms more broadly. So the first thing that I wanna say, and it's maybe obvious to some, but it's important to say it, is that all platforms moderate, period. And I mean a couple of things by that. So all social media platforms have guidelines for the content and behavior on their services: what they allow, what they disallow, what they encourage and what they discourage. All platforms reserve the right to police their sites for what they consider to be unacceptable content and behavior.
All platforms remove some content, and all platforms enact penalties against users who violate those policies, or who violate them on a regular basis. And this is not a new thing; these platforms have done so from the beginning, though certainly how they moderate, and the apparatus in place to do so, has changed quite a bit. And if this wasn't a topic you paid deep attention to, if this was the thing you let go, it might appear as if these are a new set of questions, a flood of news and concerns suggesting that somehow moderation is a new problem for platforms. And certainly the noise and enthusiasm in the press raising these questions, especially for the biggest of the platforms, might suggest that. But it's important to remember that platforms have always been tested by challenging problems, frustrated users, devious users, public and press criticism, and this has been true from the beginning. I've been studying this for a long time, as you can tell, and it's been fascinating to watch the explosion of these questions around specific platforms, around specific kinds of problems. But I still think it's important, and I try in the book to highlight, that moderation has been a concern for platforms from the beginning, even though public attention has grown more recently. And certainly the press was asking questions early on, but I think what we've seen clearly is that the questions being asked now are getting deeper. So it used to be that the questions were largely, hey, why did they do that? Why did Facebook take down that image? Why did Twitter not take down that account? And what we're seeing in the last couple of years is a deeper set of questions: how and why do platforms moderate? How should they moderate? What's the labor involved? Who does the work? How does that happen? What are the economic and political motivations? How is moderation a part of their business, and how does it grapple with the economic imperatives of the platform itself? And then, what are the public ramifications for how platforms moderate and when they fail? And I would suggest that what we've seen recently is a shift in the public sentiment. I've been calling it an implicit contract. So we all sign a contract when we go on a platform for the first time: we click through, we read it or we don't read it. Maybe this audience does; most audiences do not. And that's the contract that is written by the platforms, that lays out their rights and responsibilities, and our rights and responsibilities, on their terms. And what we're seeing, I think, is a growing but ill-articulated implicit contract, a set of expectations from the public itself. What does the public expect platforms to do in the face of the wicked problems that they're facing? And it may be that what we're seeing is the souring of a longstanding hopefulness about digital culture and online community. Digital culture wasn't supposed to look like this. It wasn't supposed to look like Russian trolls. It wasn't supposed to look like rampant harassment. It wasn't supposed to look like terrorist recruiting. And what we're seeing is, in some ways, the remainder after we subtract away the hopes and dreams of Web 2.0. I want to argue that the problems that platforms are struggling with today come in part from a fundamental misunderstanding of what platforms are.
And I think if we look at moderation, take a clear and sober look at it, hold it in our attention, we can see both what we have come to believe about platforms and how platforms in fact work. So there are two key takeaways from the talk, and this is the first one. I want to argue that we have fundamentally misunderstood the centrality and the significance of content moderation. Moderation is essential and constitutional to the functioning of platforms. It is not ancillary. Now, in many ways, over the course of the life of the major platforms we now deal with, platforms have invited us to think of them in a particular way. They've invited us to think of them as open: inviting platforms that facilitate expression. They've invited us to think of them as technical: a shell in which anything can travel, agnostic to the content. And they've invited us to think of them as impartial: hands-off hosts with the kind of information-wants-to-be-free ethos. And even when they're trumpeting beneficial effects like the Arab Spring, somehow that corresponds with them not doing something, with having allowed something to happen and not intervened. And many of these platforms were designed by people who were inspired by, or at the very least hoped to profit from, the freedom that the web had promised. And this meant that, especially in the early days, the platforms needed to disavow content moderation. We didn't hear much about it. It was obscured behind a kind of mythos of open participation: that when you went to these platforms, what you would find is all the content you wanted, all the opportunity to speak, all the people you wanted to chat with, all the sociality you could possibly imagine. And in that mythos, moderation was either nonexistent, it just wasn't part of the description, or it was at the edges: there were always bad actors, and we do our best to clean them up. Or it was fair and benevolent, right? It was done even-handedly. And along with obscuring the notion of moderation and the role of moderation, they also obscured the labor of moderation. So we didn't know much about how moderation was done, where that work was done, who was making those judgments. That process was opaque to most users. And I wanna argue that this is a problem of our cultural imagination. It is very difficult, even in a room full of people like us, who have thought about content moderation or fake news or harassment or whatever our pet topic is, have thought about it for a while and recognized it for a while; the notion of a platform, I think, still doesn't have moderation as a central part of its picture. And I'm wondering if we can sort of forcibly hold moderation to be a key part of what platforms do all the time, not once in a while, not in certain circumstances, not more than they did before, but always and ubiquitously, as a central part of what they do. So while platforms were offering up this mythos of open participation and obscuring content moderation, they were all quietly "discovering" that they needed moderation. I put discovering in scare quotes because, of course, in the early days of the web, community managers understood that there had to be a guiding hand, whatever that guiding hand looked like. But somehow, in the process of building up these increasingly large platforms, that awareness had to happen again. And maybe again and again over time.
For the first 18 months of Facebook's life, they relied on volunteer Harvard undergrads to handle responses to what was a mixture of technical complaints, customer service needs, and what we would call content moderation, complaints about abuse or content. And it wasn't until 18 months after they began that they actually hired someone to play that role. Some platforms learned this a little more slowly. As late as 2013, Twitter, which had promised to respond to every abuse ticket they received, was more than a month backlogged. So as these platforms discovered that they needed to take on some role of moderation, they built up what is now a pretty complex apparatus: how to identify the content that they thought they should take down, how to respond to users' complaints and appeals, how to make sure that that mechanism was within the law and was functional and responsive and timely. This is a very complex undertaking. So it's easy to look at a page of guidelines and say, aha, there's a set of rules that need to be shared and a set of policies that need to be implemented, but there are a lot of elements that go along with this. Rules and guidelines, but also the animating principles, the discussion behind them, the consultation with the legal side, complaint processes, appeals processes, the logistics for review and for judgment, depending on various sorts of human labor, and then, increasingly for some of the largest platforms, algorithmic techniques for detecting, but also for filtering, for queuing, and for reporting. And for the biggest platforms, this process is now enormous. We know that in 2018, just as an example, YouTube has promised 10,000 employees to address content moderation issues, and Facebook has promised 20,000. And just for a point of reference, Facebook currently has about 25,000 employees. So this is an enormous part of what they do. Now, bear in mind, these are not all full-time employees working in Menlo Park. Most of those people are click workers working in other parts of the world, working for third-party companies. So that's just a general sense of scope, but that's an immense commitment, right? In the face of increasing complaints that they were not being responsive enough and not being subtle enough, especially in different parts of the world, different languages, different cultural values. One thing that we've noticed is that we seem to be developing two tiers of platforms. A few of the largest platforms are building up a very complex apparatus for conducting this: whole sets of people, teams of people internally, thousands of people externally who are doing this work. So some of these platforms have gone to a kind of industrial level of moderation, where this is a bureaucratized and complex part of what they do. But that's not true of all platforms. Many platforms are maintaining what some have called a more artisanal level. So Medium has five employees who do this. Pinterest has 14. A lot of times, for a medium-sized platform, there'll be a small team making policy decisions. They'll have tasked engineers with occasionally responding to complaints. They'll farm out material for specific areas. So really, as we think about these structures, and we start to think about the policy concerns about how to address content moderation, we might wanna think about these multiple tiers of approaches.
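To make the shape of that apparatus a little more concrete, here is a minimal sketch, in Python, of the flag-and-review structure just described: reports come in against a fixed vocabulary of reasons, frontline reviewers apply bright-line rules quickly, and ambiguous cases escalate. Every name here (FlagReport, ReviewQueue, the category lists) is invented for illustration; no platform's actual system looks this simple.

```python
from collections import deque
from dataclasses import dataclass, field
from enum import Enum

class Decision(Enum):
    KEEP = "keep"
    REMOVE = "remove"
    ESCALATE = "escalate"  # route to a senior policy team for a one-off judgment

@dataclass
class FlagReport:
    content_id: str
    reason: str      # chosen from a fixed vocabulary, e.g. "hate_speech", "spam"
    reporter_id: str

@dataclass
class ReviewQueue:
    """Flags come in, frontline reviewers make fast keep/remove calls,
    and the hard or novel cases escalate up the chain."""
    pending: deque = field(default_factory=deque)

    def submit(self, report: FlagReport) -> None:
        self.pending.append(report)

    def review(self, decide) -> list:
        outcomes = []
        while self.pending:
            report = self.pending.popleft()
            outcomes.append((report.content_id, decide(report)))
        return outcomes

# Bright-line categories are removed outright; ambiguous ones go up the chain.
BRIGHT_LINE = {"child_exploitation", "terrorist_recruiting"}
AMBIGUOUS = {"hate_speech", "harassment"}

def frontline(report: FlagReport) -> Decision:
    if report.reason in BRIGHT_LINE:
        return Decision.REMOVE
    if report.reason in AMBIGUOUS:
        return Decision.ESCALATE
    return Decision.KEEP

queue = ReviewQueue()
queue.submit(FlagReport("post-123", "hate_speech", "user-9"))
queue.submit(FlagReport("post-456", "spam", "user-2"))
print(queue.review(frontline))
# [('post-123', <Decision.ESCALATE: 'escalate'>), ('post-456', <Decision.KEEP: 'keep'>)]
```

A real pipeline adds machine detection feeding the same queue, sampling for consistency audits across reviewers, and an appeals path; the point of the sketch is only that "a page of guidelines" implies a whole machinery behind it.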
But we're certainly seeing a complex apparatus, and if we think about Facebook or Twitter or Instagram or YouTube as an example, we're talking about an immense process. Facebook has 60-plus people working full-time internally on this, depending on how you count. If you wanna count the legal consultants or the PR people, that number may grow or shrink. And whole teams that are crafting policy, overseeing the mechanisms of enforcement, overseeing the learning process, doing public outreach to different organizations. Many of these platforms bring in outside partners. Some of them have come to Berkman to try to get advice. They bring in political activist groups, linguistic specialists, topic specialists, seeking out advice. But of course, what we know is that thousands of people are being tasked with this work at the frontline level. So clicking yes, no, yes, no, yes, no, escalate, right? Responding to flags, responding to automatically identified content. And if we wanna map out this structure more, we can think about some platforms that still use community managers, whether it's Reddit and its subreddit moderators, whether it's Wikipedia administrators, whether it's the people who are tasked with managing a Facebook group. So many platforms still also farm out some of this work to volunteers. Most platforms rely on flaggers: you and I being encouraged to complain or report stuff that we find offensive or think is harmful or think violates the rules. And there's a whole set of questions that we can ask about exactly who does that, what motivates them, and what portion of the user population they are. Some platforms will encourage flagging by creating gamified structures where the best flaggers get points and privileges. They'll give super-flagging privileges to police organizations or health organizations. And then on some platforms, this is kind of old school, you have to rate your own content, right? So if you're posting your photo on Flickr, you've already indicated whether it's adult or not. So we're all part of the moderation process. With a structure this immense and complex, we start to have questions that go well beyond, did they make the right choice? Or, is this a fair policy? We've got questions about logistics. What happens when you ask this process to work across dozens, hundreds, thousands of people? When you talk to some of the content policy people at the biggest platforms and you get them in a more honest moment, they will acknowledge that it's so immense a problem that it's not even clear that what they're trying to do is be right. Being right is sort of an impossibility, and they recognize that. Or being just is an impossibility. And one of the things they often emphasize is being consistent, right? It's incredibly important. If you have 5,000 people making judgments, then part of the way you judge that system is to ask: if I gave the same image to two of my reviewers, or two pieces of content that were equally offensive, would they get the same sort of judgment, the same consequence? These systems are struggling between this emphasis on consistency, sort of automating, if not making automatic, these processes, right? How do you get thousands of people to make these decisions very quickly and very consistently and very fairly, and not lose their minds in the process?
With the reality that much of it is ad hoc, a lot of it is surprising, a lot of it is escalated and handled in one-off decisions. And in some ways we want that kind of dynamism, right? We recognize that the violations are constantly surprising to these platforms, always exceeding the anticipated. So in some ways, in many ways, moderation is the central commodity that platforms offer. If platforms emerged from the web, both promising to be the best of the web and somehow to improve upon the web, what they were offering was a better experience than the web for information and sociality, and that means a curated experience, an organized experience, an archived experience, and a moderated experience, right? That promise of something other than the chaos of a web chat space is part of what platforms are offering. And not only can platforms not survive without moderation; if we stopped the moderation, we know what would happen, right? Platforms would become cesspools of hate and pornography. But they aren't platforms without it, right? A platform that had no moderation isn't a platform, in principle, in its definition, okay? The second takeaway is that even if you're not particularly concerned with moderation, even if you're not motivated by specific questions of whether platforms should remove or intervene or not, we can use moderation as a kind of prism for better understanding the power of platforms more generally and the ways that they subtly torque public life. So platforms are managing a number of tensions. I want to highlight two of them for the moment. The first one is the one platforms often talk about when they're explaining how they moderate and why they have policy. If you go into the community guidelines pages, which I do, I find the most fascinating thing to be the first paragraph. There's always a first paragraph before they get to the bullet points of what you can and can't do. And that first paragraph is always: we are a platform that does this. It's always in the positive, right? This is a wonderful place to share, to connect, right? And we want you to have a safe and wonderful experience, and we want you to have the freedom to do everything you wanna do. Now, here's 12 things you can never do, right? That paragraph is a really telling one, and oftentimes that paragraph struggles with the tension between speech and community. And that's a shorthand for: how do you allow your users to speak as freely as possible, oftentimes imagined with the kind of spirit of the First Amendment being a part of that, and at the same time somehow make sure that your community has, across the board, a positive experience or a safe experience or a compelling experience or a specific experience, right? And the platforms often talk about it as balancing speech and community. Another way to think about it is the fundamental benefits and dangers of sociality. When we bring ourselves together, there are both things we can accomplish and tensions that are almost impossible to avoid. And then there's a second tension that platforms are balancing: between letting users do the content generation, this was the fundamental economic and procedural offer of platforms, we build the shell, you produce the content, and we'll make sure it gets to where we promised it would go. So letting users generate the content, this is the defense where Facebook says we're not a media company, right? We're not media producers, we're a tech company. We distribute, right?
Users do the work. The tension is between that and achieving a dependable, ad-based revenue stream. I like Tom Malaby's point, where he said that platforms depend on the value of unexpected contributions. It's part of the thrill of platforms. This is virality, right? This is the surprise cultural phenomenon that goes nuts. This is that thing you didn't know you were looking for. And I think he's right, but in some ways, as an ongoing business model, platforms need to tame that unexpectedness while also appearing open, and this is key to their business. So how do you build a business that's premised on users making whatever they're gonna make, but also turn that into something that is somehow reliable, predictable, safe, but also surprising? And these tensions don't resolve themselves, right? Managing these tensions requires that platforms be more than mere conduits. They have to make choices, but they also have to obscure the way that they make choices. Platforms have to take our contributions, freely given, as raw material to assemble into a flow of content that they hope will be engaging and tolerable. And this is moderation. Moderation is the mechanism that tries to manage those tensions. How do you allow an unfettered flow of user-generated information that also produces an engaging and safe community, right? Moderation is the only answer to that question, and it's an imperfect one. But I wanna argue that moderation is just one part of a kind of ongoing calibration of what platforms actually do. And this is where we begin to move away from the things we've understood about traditional media. We've understood gatekeeping, and the choices designed for audiences, and the way traditional media think about demographics, and the way they weigh commercial aims against public aims or news aims or what have you. Social media platforms and other information intermediaries made this offer that said: we don't make those choices, you make those choices. You decide who things go to. You decide when to post. You decide what's exciting. You like things. You favorite stuff. You build a social network. And that leaves less room for making choices in the traditional gatekeeper way, right? But in fact, platforms have developed all sorts of techniques for how they subtly calibrate and guide that flow of information to produce a thing called a feed, or recommended videos, or a trends list, so that the things you encounter are more than random, and more than merely the product of the traces of who you followed and what you liked and who you put into your networks. Moderation is one of them. What are the things that fall out? What are the things that are determined not to belong in that space? The flip side, in some ways, is recommendation. What's the first thing you see? What's the next thing you see? What's the top thing you see? What comes back when you do a search query? Curation I put in there because they still do that, right? They still have featured partners, and the things that show up on the front page if you're not logged in or if there's not enough algorithmic information. There are still editorial and curatorial choices that are offering content up for you. And then there are two more that I've been thinking about more recently. So one is monetization.
Not every platform does this, but some platforms do pay producers, and there's a whole structure behind who gets that pay, what kind of resources go to that, and how that amplifies certain kinds of voices, certain kinds of genres. That's a part of what platforms can do. And then one that many people here are struggling with, I'm calling authorization. If I write a tweet and I want people to hear what I have to say, I have a couple of tactics I can use. I can be clever, maybe. I can put a hashtag in there, so then maybe people will find it beyond the people who already follow me, because they're following that word or the name of an event or the name of a celebrity or whatever. And if I really want to, I can drop a couple dollars and try to promote my tweet or promote my Facebook post and send it out a little further, right? But I know very little about exactly what that's getting me. I don't have access to a set of data tools that say: how can I direct this message to people in this demographic, in this state, at this moment? I can't A/B test my message, get it to different populations, find out who's gonna look at it more, and get that material back. And I can't have a consultant show up and work with me on how to do that effectively, right? But a brand can, and a political campaign can, right? So authorization, the tiered levels of data tools that the platforms offer, is another way that platforms structure and tune what information goes where, in what way, with what priority; there's a sketch of that asymmetry after this passage. So moderation is just a part of this, a key part of it, right? But it's also indicative of the variety of ways in which platforms tune and calibrate all these contributions into something consumable, right? Something that at the same time seems like everything, and in another way seems like exactly what I was looking for, personalized for me; that seems chaotic and surprising, but also somehow fits a set of expectations: that advertisers won't mind it, that data collection can make sense of it, that it won't show up on the front page of the New York Times. This does a couple of things. It creates platforms that work differently than they appear to work, that are often different than what users are promised, right? Where users are not always the primary stakeholders being served. It makes platforms that engage in moderation but must also obscure that moderation beneath these powerful promises of free speech and community and personalization and user empowerment. And they're always looking for ways to moderate without losing users and engagement, and this becomes a tension at certain moments. And it also privileges specific kinds of content and certain kinds of social affiliation: the kinds that produce engagement, the kinds that produce valuable networks of affiliation that appear to map onto organic communities of interest. And I think what we discovered in the last couple of years is that, in doing so, it's made those platforms vulnerable to exploitation. And this is, in some ways, what has moved public concern and press concern. Not just the "we don't allow porn; haha, I'm gonna post porn here anyway." That was the simple problem of moderation: people just refusing to follow the rules, right? And maybe simple versions of harassment look like that too: the rules say treat people respectfully, I know it, and I'm gonna be terrible anyway, right? So deliberate and knowing violations at the obvious level of these rules.
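As a rough illustration of the authorization point above, here is a toy sketch of tiered promotion tools: the difference between what an ordinary user can buy and what a brand or campaign can do. The class names, methods, and numbers are all invented for illustration; they correspond to no actual platform API.

```python
import random
from dataclasses import dataclass

@dataclass
class Campaign:
    message: str
    budget: float

class BasicPromoter:
    """The ordinary-user tier: pay a few dollars, boost the post,
    and learn almost nothing about where it went."""
    def promote(self, campaign: Campaign) -> dict:
        return {"impressions": int(campaign.budget * 100)}  # an opaque result

class BrandPromoter(BasicPromoter):
    """The brand/campaign tier: the same distribution machinery,
    plus demographic targeting and A/B testing layered over it."""
    def target(self, campaign: Campaign, demographic: str, state: str) -> dict:
        result = self.promote(campaign)
        # Detailed analytics come back; here we just simulate an engagement rate.
        result.update({"demographic": demographic, "state": state,
                       "engagement_rate": round(random.uniform(0.01, 0.05), 3)})
        return result

    def ab_test(self, variants: list, demographic: str, state: str) -> Campaign:
        # Run every variant against the same audience and keep the winner.
        scored = [(v, self.target(v, demographic, state)["engagement_rate"])
                  for v in variants]
        return max(scored, key=lambda pair: pair[1])[0]

brand = BrandPromoter()
winner = brand.ab_test([Campaign("slogan A", 500.0), Campaign("slogan B", 500.0)],
                       demographic="18-24", state="Ohio")
print(winner.message)
```

The asymmetry is the point: both tiers feed the same flow of content, but only one tier gets the instruments to see and steer it.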
And what we're seeing now might be a second-order kind of problem, where users are getting very good at simulating exactly the thing the platform wants to circulate, but for problematic ends, right? Simulating the engaging: this is fake news, this is clickbait. Simulating authentic and engaged users: this is bot design. Tactically building what looks like an organic community of interest: this is coordinated harassment, influence networks. And testing the moderation system to find its weaknesses, both conceptual weaknesses and procedural weaknesses. These tactics understand the thing that I'm trying to say about moderation: that moderation is a key part of how platforms work, and that it's just one part of how platforms calibrate, selecting for and selecting against particular kinds of information and particular flows of information, right? They work by better understanding a system that we don't even understand yet. We don't yet have the capacity to understand a complex, constantly changing, global-scale dynamic system that is regulating itself, building tools and teams of people that learn from current practices so that it can regulate new practices that were entirely unanticipated. We still don't have the language and the imaginative conception for thinking about that, but the tactical use and misuse of these platforms is starting to get it. And they're getting it faster than we can sociologically understand it. So what would it mean to rethink platforms and their responsibility if, when we said platforms, we meant a moderating system? If we put moderation back in the center, so that we didn't say "open platforms that, boy, we sure hope don't get misused, so they'd better have boundaries or guardrails," but instead said: these are moderating systems. These are curating systems. These are recommending systems. They are calibrating systems that take in the raw material of user contributions and produce something, whether that's a feed or an archive or a recommended list. What would it look like to imagine responsibility if that was our imaginative understanding of what platforms are? Now, in the midst of all this concern about these second-order, wicked problems that are challenging us, we've had a lot of suggestions about how to improve moderation. We should do it better. We should do it faster. We should do it more accurately. We should put more resources toward it. We should be more sensitive to language and culture. We should deal with specific critical problems more aggressively, terrorism, hate speech, that's the sort of European approach. We should be more transparent about it, report out how it's done and data about the process. We should provide a more robust appeals process. We should follow fundamental sets of values, like a human rights framework. Or maybe this is a problem that we don't want the platforms to deal with: we should handle it through literacy, and criminal penalties for bad actors. Maybe this isn't the platforms' problem at all. I would say that most of those suggestions, while they're certainly valuable and worth considering, and might help a little if they were done well, all of them hold in place the fundamental approach that the platforms have built. And it's an approach that was premised on this notion of doing moderation quietly while promising openness. Which produced a kind of customer service model of moderation: you let us know if there's a problem, we'll take care of it, and then everything works smoothly.
And you don't have to worry yourself about the process or how that happened. It's gonna be taken care of. And that kind of customer service logic has animated how platforms have thought about moderation and how they've built up this apparatus of workers and software and processes and concepts into this mechanism that each of these platforms is now committed to and deploys constantly. All of those approaches moderate on our behalf. And that move that says we'll handle this, we'll moderate the community that you're part of, on your behalf and in your best interest, and we'll do it for you: maybe we're reaching the point where the limits of that notion are showing. We're beginning to see the edge of that approach. In some ways, platforms are eager to moderate, and it has always been a part of what they offer. But they are much more reluctant to govern, and those are two different things. I'm left with a problem, or a series of problems, and maybe I'll just lay them out as what I'm thinking about now. Which is that we have yet to face the true challenges of how to regulate responsive algorithmic systems. We have yet to fully understand what it means to impose rules on a system like this, or what kinds of notions of accountability and intervention would even work in systems that work this way. And there are a couple of problems, right? We need a way of thinking, or of regulation, whether self-regulation or imposed, that's attuned to these complex feedback loops. Users making ongoing contributions. Platforms making ongoing interventions. The interventions being tuned to the ongoing contributions. The contributions retuning themselves to take advantage of the shape of the interventions. Those doubly responsive feedback loops make it a very complicated phenomenon. And what we find is that this produces a very hard-to-anticipate set of equilibria and complex, spun-out phenomena. You might think about election-based fake news as one of those things. It found a weird place where it was both allowed and compensated and rewarded. And so it developed very fast, into a thing where we were surprised by how sophisticated it got, how quickly. And other kinds of things reach an equilibrium with the platform and stabilize. We need new frameworks of obligation that understand the way platforms tune public discourse. Most American regulatory thinking treats them as conduits. And this was built into Section 230, which is the key legal protection that was built to protect conduits who weren't going to intervene, while also allowing them to build up a structure in which they intervened all the time. And so we don't think of them as tuning public discourse. We think of them as allowing public discourse and then having some responsibility to dive in when necessary. We might have to rethink that logic. We need theories of responsibility that recognize the partnerships between humans and non-humans. And we are still very bad at this. Think about what it means to say "Facebook did something." It's not clear what that means. Did the policy do something? Did the software do something? Did the human moderators do something? Or was it in fact a product of the combination of those things? And we are growing increasingly aware of the second-order consequences, not just of the misuse of the platform, but of the proper use of the platform, right?
So if these are in fact systems that encourage engagement, encourage virality, encourage impulse, then they produce certain effects. And that's not their misuse, that's their use. And we have to recognize that as well. There are two tricky challenges that I think are gonna make this question even harder, and I wanna end with those. The first one is that this is a precarious moment. More than ever before, the people who have been thinking very carefully about how complicated these systems are, both as regulatory systems and as technical systems and as social systems, have really begun to recognize how truly subtle and complicated it is to understand: why did I get the results that I did? Why did I get the news feed that I did? Why did something disappear? Why did something stay? Why did something get past moderation? And why did something keep getting gummed up even though the platform actually says they wanna allow it, right? And we're now beginning to develop that language I was talking about, the language that gets at some of these things, these structures of accountability that understand these complex systems. But it's exactly at this moment that the uncertainty of what platforms offer, and the uncertainty of how and why they moderate, can be used as a political weapon, to say there must be bias in that process. And the really challenging thing about that claim is that it's very hard to refute, right? If you and I each get different results. If your photo gets taken down and my posting of the same photo stays up. If you show up tomorrow and the results look different than they did today, right? There are a lot of reasons why that might be. And it requires a lot of generosity toward platforms that have abused our trust in many cases to say, I'm sure it's one of the 17 things that might explain that, none of which are "you're working against my interests," right? That requires a lot of generosity and a lot of faith, at a moment when it's not clear that they've earned it. And it's not surprising, at that moment, that for people who would like to say something's going on back there, someone's pulling levers against us, it's a very easy case to make and very hard to refute, right? And at the same time, there's not just that kind of "maybe Google's biased against us" tactic, but a matching one on the other side, which says: let's lump this mistake, this set of missteps, in with every other problem. This is just like data privacy violations. This is just like research ethics violations. This is just like algorithmic bias. And allow that to pile into a kind of broad attack on the platforms, which I'm not saying is uncalled for, but it treats moderation as just another problem of platforms being bad, and doesn't get at the specificity of what it means to ask an intermediary to moderate on our behalf, how hard that is, and how it can go awry. The second challenge is that we tend to talk about a handful of platforms, and I've done it mostly in this talk as well. We love to talk about Facebook. We love to talk about YouTube and Instagram and a handful of others. We're not talking very often about platforms in other parts of the world. That's one problem. But we're also not talking about the dozens or hundreds of platforms that are facing the same kinds of questions but haven't risen to the stratospheric level that a few of those have, right?
It should not be surprising that the current head of trust and safety at Airbnb used to be the head of content moderation at Facebook, right? Airbnb is a platform. They don't provide the apartments. They try to connect apartment offers with renters, and sometimes those people mistreat each other or misrepresent what they have. Whose responsibility is it? Is it Airbnb's? Is it caveat emptor, buyer beware, deal with it yourself? Those same kinds of questions, of whether we'd want a platform to intervene to make something work correctly, how badly they could do that, whose values they're weighing in doing that, are coming up in all sorts of other kinds of platforms, right? Well beyond platforms like Facebook, YouTube, and Instagram. And this is gonna grow. So there are other kinds of intermediaries that are offering the same kind of thing as a platform. It's a very powerful economic offer that says: we're not gonna be the ones to provide the thing you need. We're not gonna produce the software. We're not gonna produce the tool. We're gonna broker the relationship between you and someone who has it. That's the fundamental offer, whether it's Uber, someone will drive and you wanna go somewhere, or whether it's Twitter, someone has something to say and you wanna hear it, right? Or whether it's VR apps on your headset that the company didn't design; the first time that VR porn shows up, now you've got a question. Or the idea of offering bots attached to your communication networks: the first time someone sets up a Nazi bot on Slack, whose responsibility is it, right? So these questions about moderating on our behalf, who intervenes, and how that can be done well and badly, are only gonna expand, and they're gonna expand up and down the chain of these intermediaries that fill the space that we occupy. Last thought. If we're trying to revamp our thinking about platforms, not just to say moderation is important and raises certain kinds of issues, but that maybe we've thought about platforms incorrectly, and we can reimagine platforms as moderating mechanisms, as calibration tools, and highlight intervention in our understanding of them, that changes what kind of responsibility we would imagine, what kinds of interventions we could make. Maybe one way to radically reimagine platforms is to stop thinking of them as technical accomplishments. They are, of course, right? They are software that accomplished something that we hadn't done before. But what if we thought about them instead as a very particular arrangement of roles and responsibilities? Who makes, who provides, who organizes, who queries, who receives, and who deals with it when that breaks down, in whatever form, right? If we think of that in contrast to a software provider, in contrast to a media provider, it's a very different notion about who exactly carries responsibility for the kinds of inevitable, or at least not surprising, frictions and violations that happen when you try to put people together, pair them up in their own interests, and find that people look to take advantage of that. This is true about all systems that distribute information, that shape participation, and that come together in this exchange model. So if platforms are, in fact, a very particular arrangement of roles and responsibilities, then what platforms do is they don't just distribute content, they distribute responsibility, right? They say: this is ours to do, this is yours to do, and when you're upset, this is what we'll do in response.
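To take that reframing literally for a moment, here is a toy sketch that describes a platform not by its features but by its assignment of duties. The field names and example values are invented to illustrate the idea, not drawn from the book.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ResponsibilityMap:
    """A platform described as an arrangement of roles and responsibilities:
    each field is one of the questions the arrangement answers."""
    who_makes: str       # who produces the content or service
    who_provides: str    # who supplies the infrastructure
    who_organizes: str   # who sorts, ranks, and matches
    who_moderates: str   # who intervenes when something goes wrong
    who_bears_risk: str  # who absorbs the harm when it breaks down

social_network = ResponsibilityMap(
    who_makes="users",
    who_provides="platform",
    who_organizes="platform (feeds, recommendations)",
    who_moderates="platform, outsourced reviewers, flagging users",
    who_bears_risk="largely users",
)

ride_hailing = ResponsibilityMap(
    who_makes="drivers",
    who_provides="drivers (cars) and platform (matching)",
    who_organizes="platform",
    who_moderates="platform (ratings, bans)",
    who_bears_risk="drivers and riders",
)

print(social_network.who_bears_risk)  # -> largely users
```

Comparing the two instances, the who_bears_risk field is where the talk's closing question lands: the arrangement of roles is also a distribution of risk.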
So we're left with the question of, as platforms distribute responsibility in a particular way, are they distributing that responsibility responsibly? I'll stop there, but I'd love to hear your questions. Thank you. The moderation is the commodity I will be providing. And as such, I'm actually going to take a first crack at asking Tarleton a question or two before we open it up to the audience. Please. So thank you so much for your talk and for your book. So I sent Tarleton some questions in advance, but of course his talk provoked so many others that I'm going to totally go off script. So I think in your book and in the talk, you sort of alluded to this, actually, with that slide. Oh, perfect. No, it's good. We don't need the slide. Thank you, though. You're primarily focusing on social media companies, and even platform companies like Airbnb or Uber who are providing services. And I wonder, with the recent backlash, which is probably an interesting word here, but the recent reaction against Gab, which is the sort of alt-right, more free speech, more racist version of Twitter, there has been sort of an effort to go, what I would consider, down the stack: to payment providers, to registrars, to content delivery networks. And I wonder how your moderation-as-the-commodity, or moderation-as-constitutional, applies to those kinds of providers in addition to social media. No, I think that's exactly right. And it's a perfect example of where I was getting to; I actually wrote down Gab in a little corner and I forgot to say it. So as we go down the stack, if we want to imagine it that way, and think about cloud computing providers and web hosting, think about the payment systems that are kind of infrastructural to Gab or to a website or to whoever: each of those providers has a terms of service, and each of those does in fact already articulate rules and expectations about what's on there. Now, there's a big difference between what's there in boilerplate language and what actually gets acted upon. But I remember when I got to Microsoft, and I was like, oh, that doesn't help me, there aren't any platforms that are interesting here. And I started talking to people who were working at Microsoft, and it was like, oh, OneDrive. OneDrive had this huge problem because people were posting really glossy, beautiful terrorist recruitment magazines, keeping them in a public account or a private account, and then sending out the link. And that was a network of distribution. And the thing was, oh, that's a really clever tactic. You don't have to post it in a really obvious place on your Facebook page, you don't have to worry about encryption. It's just kind of held privately, and the information is distributed. And I was like, oh great, how did you deal with that problem? And they said, we have terms of service. We just said that's not what OneDrive does. And that was sort of an uncontroversial version of shutting down speech. Now, they felt it was defensible. I don't think I would disagree. But the kinds of questions that we ask all the time, like what happens when Twitter shuts down a group that might seem terrorist from one point of view, might just be fundamentalist from another: those same questions are happening in other kinds of places that provide information. This moves down into the deeper parts of the stack, for something like Gab or something like the Daily Stormer. And activists are now looking and saying, where can we apply pressure?
The kind of pressure that used to be applied to platforms or to advertisers, you could apply to payment systems or to web hosts or domain registrars. For me, it's kind of like, all of those institutions have to think about the fact that, whether or not they like it, they are in a position that could be considered responsible or even complicit. They have to make a deliberate decision about what it means to do that. One position is, we're gonna be a conduit, right? And that is not no moderation. That is an arrangement of moderation that says: we will be a conduit, except for illegality, when told. But that's not the only choice. And I think we're seeing some places, even things like, you know, Cloudflare saying, I hate this Nazi crap and I'm gonna delete it. I know I shouldn't, and I know it raises all these complex ripples for whether companies like mine can do that, but I'm gonna do it. And it means we're starting to see the managers of those intermediaries starting to say: I don't know if I wanna be a mere host of content, recognizing that my hosting of it amplifies it, right? But as for the implications if those actors start saying no and things just start becoming functionally unavailable, we get that that could pockmark the speech environment in ways that we don't have the tools yet to think about. Thank you. So I'm gonna ask one more question, and then I'm gonna go to the group. So when we've been talking about those companies, we're primarily, as you've mentioned, talking about US companies and sort of US principles; the First Amendment comes up a lot. But it's not just that these companies are founded in the United States, but often that the founders are also, like, white, upper-middle-class, male, I know. Shocking. And they often have particular viewpoints on the importance of what they're doing with their platforms. And I wonder if you can talk a little bit about how that shaped the sort of moderation-as-a-commodity, or the way in which that starting point maybe changed the conversation that you're describing. Yeah, yeah. I think one of the things, if we think about the genesis of some of the platforms that we're now using, that are now global and massive and bureaucratic, something like Facebook as an example, but also the other ones that started up in the valley and seemed like cool apps that were gonna get 10,000 users and be better than some previous app that maybe got 10,000 users: I think a lot of the reason why it was hard for them to recognize the range of problems that they were gonna face was not just a failure of imagination, like, how do you imagine this thing with two billion users? That's a hard thing to imagine, it just is. But it was also that a lot of the things that might have become problems on their sites were handled by the fact that the user communities shared a bunch of norms. So if you were only Ivy League undergrads, the space of discussion was narrower. The sensibility of what was and wasn't okay was a little more bounded. Even if there were people on the site who were like, I wouldn't talk like this, there was a kind of ethos that you could point to. And you weren't likely to get pictures of women breastfeeding. That wasn't gonna come up. And it wasn't gonna be a ripe opportunity for misinformation campaigns the way it later became. So it seemed like the platforms were gonna manage themselves, because they forgot about norms.
And norms were doing an immense amount of work. And for the same ones that were starting off in Silicon Valley, picking up users among a very particular tech community that was trying these things out, there were a whole bunch of shared norms, or perceived norms, that were doing the work of moderating, and they didn't account for that. And they didn't build for it. And they didn't think: as soon as I have 10,000 users, or 20,000 users, or users in a different part of the world or a different language, I can't count on that, so I have to design it into the system. So that is a kind of shared norms of users, which is shared norms of designers, which is not only of a particular place and a particular age and a particular economic position, but is also about whiteness and maleness and youth and Americanness. And so, it's not to take them off the hook, to say, boy, it would have been nice if you'd seen this coming and listened to the people who were saying, holy crap, this is coming. But the failure of imagination is based in who these platforms started with. And in some ways, when we recognize that a commercial provider has reached a point where they have public impact, then in some ways we can say, hey, do better, and we can tell them how badly they're doing. But we also then look to regulation to say: of course you didn't imagine this, right? It wasn't part of your business model. It wasn't part of your vision of the world. I wish you had. But you need to think like other people, or you need to recognize problems that you're not gonna recognize on your own, right? So again, that's not to take them off the hook. I think it is a failure to recognize how much moderation was norms, and then the platforms left those norms behind and had nothing to replace them with. Thank you. So I'm gonna open it up, and I'm gonna just note, I'm not gonna do the intro paragraph about how much I love this space. I'm gonna just say that questions end in question marks, and so I will cut you off if I feel like your question is not question-y, but we'll start with Kathy. You just have to raise a hand there. And I think we're gonna run you a mic, so if you don't mind holding for that. Coming up on you. Question is: from your long experience working in this space, what do you think is the right approach for these companies? As you mentioned, some companies have trust and safety teams, others maybe don't. Oftentimes you have the engineering teams that do the first paragraph of the terms of service, where it's all the positive stuff, and then second-tier people or third parties are the ones that do the actual moderation, so it's like it gets punted elsewhere, and there are all sorts of issues there. So what do you think is the right way for these companies to go about the moderation? Yeah. I have three answers, and I don't want to give them all because it'll take too much time. So the easiest answer, which you could get in more detail from the beginning of the talk, is sort of: do what you're doing, but just do it more thoughtfully and with more people. And I think that's insufficient, but I get why it's being suggested.
I have a radical suggestion in the book that has to do with the idea that platforms built for the wrong dream of the web: that they dreamed of, we could all get on and be authors and have a friction-free communication thing, and they actually did that really well. But there was this other dream, of building your own community and making it sustainable and having the tools to do it, and the platforms didn't build for that. And so I wonder what AI tools could exist that would actually encourage community building and recognize civic values, instead of recognizing what they should sell to me and how to quickly post. But I'll leave that as a tantalizing detail; it's a naive detail. So there's a middle one, which you highlighted in your question, which is that for a long time at these platforms, the content policy team has been a piece of the puzzle, and has been, not isolated, but seen as a different part of what the project or the company is. And I remember talking to people for the interviews and for the research in this book, and they often said, we're often the ones inside the company saying, wait, wait, this is gonna be a problem. And I remember one telling me, they'd have an engineer saying, we developed this great new feature, it's gonna go up in two weeks, you should know about it. And they look at it and they go, do you know how you could stalk your ex-girlfriend with this? And they go, ah, right. So there was a mismatch between, like, 90% of the company that was just gonna build as fast as they could, and build the cool things they thought of, with their own uses in mind and their own presumption of norms and communities of who was gonna use it, and then this pocket of people who were hearing about problems growing and recognizing that these sorts of challenges existed. What I've heard recently is moments like Alex Jones, where people in those companies go, wait, we allow Alex Jones on our platform? And the content moderation people go, yeah, that's what I've been saying. So you're getting pushback from other parts of the companies, and one of the things that I've been saying to the platforms when I talk to them is that education process, of getting the engineering teams and the design teams to really understand both the challenges of moderation, the depth of the problems of moderation, but also maybe the economic costs of moderation. You don't wanna be the next exposé in The Verge, you don't wanna be in the New York Times because you allowed something, and you don't wanna live with yourself if you don't think the platform is doing what its values represent. There may be room inside that discussion for the platforms to see maybe what I'm saying: that moderation is a cohesive element, a ubiquitous element of what platforms are, and having those as separate, like, oh God, we gotta run it by the content policy team again, is part of how you get to distance it until it's a disaster. That's the middle road. So I'm gonna bring in a question from Twitter. Someone was asking if you could talk a little bit about FOSTA and SESTA, and sort of how that's changed content moderation. For folks in the audience who are unfamiliar with FOSTA and SESTA: they are bills, now law, that amended Section 230, which creates broad immunity for platforms from content posted by others, to create potential liability for material related to sex trafficking, but also sex work.
So if you could talk a little bit about how you think that's changed the environment. Yeah, yeah, yeah. I don't know if I can speak to how it's changed things since it happened; I don't know if I've taken the temperature of the people at the platforms since then. But in some ways we're seeing a lot of moves to begin to question the kind of broad protection that platforms enjoy from 230 and from similar statutes, which say that by and large platforms aren't gonna be responsible for what users do and say. And the logic of that made a lot of sense: if you could sue a platform for slander because someone said something somewhere, that defamation case could wreck a platform, or platforms were gonna get very anxious and conservative about this. I've written elsewhere that we've reached a point in the discussion about 230 where we've got people ready to throw it out and people adamantly defending it, and not a lot of space in between. And FOSTA is in some ways a product of that, where it's kind of like: we're really concerned about this problem, so concerned that we probably didn't even recognize the fact that it was being handled in other ways, but we also wanna keep 230 as this kind of sacred object. So we'll just make it a carve-out. 230 will exist, it will still be the liability protection it always was, and then sex trafficking will just be this one weird piece of the puzzle, along with things like child pornography and copyright, which are already kind of not covered by 230. So 230 can kind of roll along, you're not gonna be responsible for harassment, you're not gonna be responsible for pornography, but sex trafficking is gonna be this separate object. I think there are a lot of ways to imagine renovating 230 that aren't the same as throwing it out, that aren't the same as just attaching caveats all the time. When you look back at US media regulation, there are moments where we gave industries a gift, and 230 was a gift. It said: at this moment we want you to grow, so here's this gift that says you're protected from liability for whatever users do. So you could be hands-off if you want to, and you can moderate, and that doesn't make you any more liable for things. So moderate or don't moderate, on any terms you want to, you're good. And it came with no matching obligation, right? When you hand the telephone companies a monopoly, you say: you have a monopoly, but you'd better provide universal service to rural communities, because we get why you wouldn't if it was just left to your economic interests. And 230 came with no matching obligation. Those obligations don't have to be, we'll tell you how to moderate. It could be: share best practices, have an independent council that weighs things, have a public ombudsman that responds to these things, be transparent about the process. So maybe now is the time to say: what are the expectations we could have attached, to say, if you want that legal protection, here's the bare minimum of what you have to do to earn it, and once you do that, then you can enjoy this safety. And if you choose not to, then you're open to lawsuits for things that you didn't respond to. So I find FOSTA a weird one; it's really trying desperately to hold 230 in place, but also address this problem. And I don't know if that sort of biting off caveats is the right way to do that. Great, thank you. Well, we're at, should we, do we have one more?
Oh, okay, sorry, I thought we were wrapping up at one, but we can keep going, this is fantastic. Go ahead. Great, thank you so much. I thought what you were starting to say about civics for the platforms, the little tantalizing detail, was really interesting. How much user appetite do you think there is for even a civic engagement with platforms? Because we've seen a couple of smaller, federated social communities, like Path, shut down; Mastodon hasn't really taken off. There's also a problem with smaller groups, like WhatsApp and Messenger, spreading misinformation even more ominously than a public amplification tool has. So I'm really curious whether something like digital jury duty could even be feasible for platforms. Yeah, yeah, I mean, I think that we've had research for a while about deliberative democracy, about how you structure an arrangement where you bring people together to have some kind of obligation to think about things. That's one model, a jury duty model, where at any point some small group of us are gonna be tasked to help make these decisions and the rest of us will go along our merry way. A different model is handing more of that responsibility to all users and building tools to support that. And immediately as I say that, I hear how naive that sounds. So there are a couple of problems with it. One is, we have that: Reddit is that, right? And we know that Reddit hasn't been a gleaming example of perfect moderation. But Reddit faces the opposite problem. So Facebook is sort of in a position that says, we'll do the moderation for you, just be the ones who let us know what's wrong and we'll deal with it. So much so that if you run a Facebook group, you can kick people off of your group, but if someone flags content in your group, it goes to Facebook, it doesn't go to you, right? So even in the moment where they could have said, this group belongs to you, you're in charge of doing much of the moderation, the actual procedural mechanism doesn't even honor that. They've only just started doing that. Yeah, it's very recent that they started playing with that. Reddit has the opposite, where it's like, yeah, if you're a subreddit manager, you're gonna set all these rules. And the problem was, well, if you have a group that all wants to do something terrible and no one has a problem with that, then you've got a problem, right? So if you wanna share stolen celebrity nude photos and everyone's like, we're cool, then it goes on perfectly. So then Reddit had this problem of, how do you hand most of the authority to community managers but still set a minimum baseline below which you can't sink? And they were relatively reluctant about building up that baseline. They've done so more in the last couple of years. So there's the jury duty model. I can't remember which gaming company did the Tribunal. Was that the? It was League of Legends. So League of Legends had a thing where it was sort of like, can you call gamers into a kind of judgment scenario and let them decide together? In the book, I go back to the 2009 example where Facebook used to have users vote on policy. And if you go back to the press coverage, everyone kind of laughed at Facebook for doing this, but it used to be that when there were changes in Facebook's policy, users had to vote on them, as certain big changes came to pass.
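The Facebook-versus-Reddit contrast above is, at bottom, a routing question: who sees a flag first, and what floor applies no matter what a community tolerates? A minimal sketch of that two-tier arrangement might look like the following Python. All of the rule names, queue names, and structure here are invented for illustration; this is not any platform's actual flag-handling system.

```python
from dataclasses import dataclass, field

# Hypothetical platform-wide floor: violations no community may opt out of.
PLATFORM_BASELINE = {"child_exploitation", "stolen_private_images", "credible_threats"}

@dataclass
class Community:
    name: str
    local_rules: set = field(default_factory=set)   # rules this community chose
    moderators: list = field(default_factory=list)  # its own moderators

@dataclass
class Flag:
    content_id: str
    alleged_violation: str
    community: Community

def route_flag(flag: Flag) -> str:
    """Route a flag: the platform handles baseline violations,
    community moderators handle their own local rules."""
    if flag.alleged_violation in PLATFORM_BASELINE:
        # The floor applies even if the community's norms tolerate the content.
        return "platform_review_queue"
    if flag.alleged_violation in flag.community.local_rules:
        # Local rules go to the people who set them.
        return f"moderators_of_{flag.community.name}"
    # Neither a baseline nor a local rule: nothing to enforce here.
    return "no_action"

# Usage: a community that is fine with sharing stolen photos still gets overridden.
fans = Community("celebfans", local_rules={"spoilers"}, moderators=["mod1"])
print(route_flag(Flag("post_1", "stolen_private_images", fans)))  # platform_review_queue
print(route_flag(Flag("post_2", "spoilers", fans)))               # moderators_of_celebfans
```

The design choice sits in the first branch: a baseline violation escalates to the platform even when local norms would wave it through, which is exactly the stolen-photos case Reddit ran into.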
Facebook had set an amazingly unreasonable threshold for those votes: if 30% of Facebook users participated, then the decision was binding, and if it was less than 30%, then it was just advisory, right? And it never got there. They had some big vote in 2009, basically about whether the privacy protections on Facebook should extend to the newly purchased Instagram. And something like 0.008% of the population voted, and they actually said they wanted more protection on Instagram, and Facebook got to sort of ignore that. And the press was like, that's hilarious. What a dopey thing to think people are gonna do. No one's gonna respond to a vote. And the proof was in the pudding: no one did it, right? And very quickly, they voted to not have voting anymore, right? It seems sad to me. This was 2009, right? Think about how the techniques of advertising targeting and personalization have grown since 2009. If we had taken that moment and said, boy, that was clumsy, what's the next version of that? How do you make it so that it isn't a civics homework task, but something that takes advantage of the flow of what you're doing all the time, right? It reads your signals to pick up your civic values. It interjects certain kinds of judgments in very lean ways and supports those with tools that provide data that would make it very easy for you to make decisions. We have a decade of no development in that direction, so it sounds naive to propose it now, right? Because it stands up against this very built-out structure of, we're gonna moderate for you, and here are the thousands of people that are gonna do it every day. That system would have sounded absurd in 2009. If you said, what if we had 20,000 part-time employees all over the world who were looking at every flagged piece of imagery and making a decision in 10 seconds, and all those decisions were filtering up, and that was gonna govern two billion people, you'd go, that's dumb, that's really dumb, that's never gonna happen, right? So what would the civic version that could have germinated in 2009 look like with a decade of innovation and testing? I don't know. So the most we have is the jury duty examples, community management structures that have their own problems, and then competing small platforms that try to do something different but have to battle against the network effects of, it's really hard to topple Facebook. So I think we've got some barriers to how we would imagine doing that. The last piece that I can imagine is some kind of obligation to explore those things en masse, right? And that sounds hopelessly naive, especially in our current political context. So I will get back to the audience, but I'm gonna invoke moderator's privilege, or curation. I'm gonna perform the curation part of the moderation, and I guess I have authorization to do that anyway. I can just keep going. You have microphone power that the rest of them don't get. Yeah, it's true, it's true, right? So one thing I was really curious about: you've been working in this space for a long time, and reading your work from 2010, I think it holds up very well. But I'm curious whether there are things you've changed your mind about since you started working in the space or since you started writing about this, things you were initially excited about that haven't panned out, or things you were more skeptical about that you're more excited about now? Yeah, yeah, that's a good question.
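To see why that 30% quorum was so unreachable, here is a minimal sketch of the rule as described above: binding above the threshold, advisory below it. The 30% figure and the binding-versus-advisory split come from the talk; the function and its names are hypothetical, not anything Facebook actually ran.

```python
def tally_policy_vote(votes_for: int, votes_against: int,
                      total_users: int, quorum: float = 0.30) -> str:
    """Apply a turnout quorum: the result binds above it, advises below it."""
    turnout = (votes_for + votes_against) / total_users
    outcome = "adopt" if votes_for > votes_against else "reject"
    status = "binding" if turnout >= quorum else "advisory only"
    # Below quorum, the platform is free to treat the result as mere advice.
    return f"{status}: {outcome} (turnout {turnout:.3%})"

# With two billion users, a 30% quorum needs roughly 600 million ballots.
# A turnout of 0.008% produces an advisory result the platform can set aside.
print(tally_policy_vote(votes_for=130_000, votes_against=30_000,
                        total_users=2_000_000_000))
# -> advisory only: adopt (turnout 0.008%)
```

Even a lopsided vote for more protection stays advisory under this rule, which is how a platform gets to ignore it.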
I think in some ways my thinking shifted in a way that a lot of people's has. So much of the concern when I first started thinking about this book, and when people started thinking about moderation in that roughly 2009-to-2013 period, was really focused on what happens when the platforms are overenthusiastic about removing. So it was always, does Facebook especially have a kind of small-c conservative tendency to just take things down that they're nervous about, whether that means nudity that should be acceptable, whether that means political speech in other parts of the world, whether that means whistleblowers, whether that means drag queens. There were all sorts of communities facing that classic question of how other communities run up against a rule that is universally applied, somewhat conservative, somewhat business-interest-driven, and somewhat clumsy as the system gets larger and larger and larger. So the problem was the squelching of speech by the platform, and that problem hasn't gone away. And I certainly was in that mode of, ugh, Facebook has these very American, prudish notions about what you can and can't show, and they're not willing to give people room to make their own decisions about that. And certainly the EFF was really focused on whistleblowers, and speech in other parts of the world, and political activists being silenced by a platform that just said, I don't know what that is, let's just get rid of it. And I do think my whole feeling about it has shifted, in part because now we're dealing with, maybe we always were, now we're aware that we're dealing with platforms that are amplifiers of certain kinds of intervention and the tactical use of that power. So now it's a question of, on the one hand, I still don't want platforms making dopey decisions, and I'm struck, I start the book with the Terror of War example, right? Which is the napalm girl photo, and Facebook taking it down. It's one of those examples where Facebook was doing the kind of, let's remove this thing because it's contentious. Although many people thought it was just automatically removed, a kind of thoughtless exercise, in fact they had established a very specific policy on that photo. So they had thought about it, but they had come to the conclusion that it should be removed because of its child nudity, not kept because of its historical relevance. But I find that now, as we're thinking about coordinated harassment, recruiting, hate speech, and misinformation, what we're dealing with is platforms being unable to recognize that they have contours, and those contours amplify, not everything, but in particular ways, and then as people learn those contours, they can take advantage of them in very harmful ways. So I'm much more on the side of, I want a public reckoning for these platforms. I don't know that moderation is the right model, because moderation was built on the we-better-get-rid-of-the-porn thing, and it still thinks like that, right? Where's the violation? Where's the violation? Get rid of it, get rid of it, get rid of it. And it doesn't think about whether we're producing a systemic mechanism that can be taken advantage of, one that has these kind of curlicue effects that we didn't aim to produce but have nevertheless produced.
So that was a learning curve, and I think we've all been going through a learning curve, or certainly the people concerned with this issue have gone through some version of that learning curve. We've got time for one more. Oh, what if I take both questions and I try to answer them? Oh, I'll talk to you after. Perfect. Thanks so much for your wonderful answer. I really have to say that I did not know that Facebook had voting in the past, so that's an amazing insight. On that, on the voting, but then a little bit more into the future: what about the future of the internet? Because right now the projections are that Web 3.0 is going to be decentralized, right? And that entails, if I understand correctly, that the solution you're proposing, drawing back the veil and then trying to identify who does what, what kind of responsibility falls on which shoulders, how does that stand up to the dawn of the new phase of the internet, so to speak? Yeah, that's a hard thing to predict, but I think the way you approach that question is that we have to imagine what we expect of platforms while recognizing that they continue to exist in an internet ecosystem. So, you know, Gab's a great example. Some people would say, Twitter's got too much power, Facebook's got too much power, they're too centralized, wouldn't it be great if there were like 19 Twitters that all worked a different way? Well, one of them is Gab, right? Gab is one of the 19 Twitters, and it's a vibrant free speech alternative, but it's also reprehensible, right? So as soon as you say, well, there's a certain power in centralization, if Twitter were a responsible actor then we could demand things of them, but as soon as you do that, you create a mechanism that drives people out to other places, right? And maybe that's good, because you've founded different communities, I'm free of those people, but then again, we're seeing a festering of nationalist and misogynistic speech in those environments. So we have ways of thinking about this, and I've heard recently a couple of people making a comparison to public health regulation, right? Where you're only gonna do interventions that happen in certain places, but you have to understand the ecosystem of what occurs, right? If you nudge certain behaviors, then other institutions will shift, so the world you're regulating shifts with your regulation, right? Whether Web 3.0 or some other incarnation of the internet will be structured so differently that current models can't function is hard for me to know. It might be that de-platforming sits at a low enough level that we don't want that to be the environment in which these kinds of questions are raised, but when, you know, Cloudflare says, I can get rid of your domain and make it simply unfindable, the question of what should Twitter do is now rippling down to all these layers. So the worst thing I think we could do is to have a new technical discussion about infrastructure and not recognize that these decisions, these debates, quickly crept out from, is Facebook deleting the right photos, to, at every level of this infrastructure, what do we imagine accountability looks like? Or do we want to refuse accountability? And there's got to be a compelling reason to do that, to treat something as a conduit.
But the conduit position is not a punt; I can't just say conduit and be done. Conduit is a choice, right? It is a very particular arrangement of roles and responsibilities, just as being a thoughtful, responsible platform would be a choice. So it's all the levels of an ecosystem where all of these questions exist. All right, well. A messy answer. Oh, yeah. Join me in thanking Charlton for his great talk. Thank you. Appreciate it.