Welcome to the Berkman Center. My name is Mary Gray. I'm a fellow here at the Berkman Center and a senior researcher at Microsoft Research, and I have the great pleasure of introducing our guest today. Nathan Matias is a third-year fellow here at the Berkman Center and a PhD student at the MIT Media Lab and the Center for Civic Media. He's going to talk with us about some of his ongoing work, both collaborative work and work that he did with Microsoft Research New England over the summer, looking at Reddit moderators and their labor around moderation. Just a couple of reminders: this will be webcast, so if you ask a question or have a comment at the end, please say your name for posterity, and know that this is being recorded. We won't do round robins of introductions, but for the webcast, please make sure to identify yourself, so we'll know who to badger later for follow-up. And without further ado, let me ask us all to welcome Nathan.

Thanks, Mary. And thank you, everyone, for coming, and those of you watching on the internet. Before we start, I should note that there will be some things in this talk that people might find difficult. If you see this slide, you'll know that the slide after it might be something you might choose not to look at. I'll let you know. So 20 years ago, John Perry Barlow declared the independence of cyberspace. He imagined a world where we would restructure human relations and institutions in such a way that we might be able to create a more humane world, and he thought about the different approaches to governance that that might bring about. But 20 years later, we have very real concerns that these new online structures might perpetuate and even extend our capacities for discrimination, harassment, and other kinds of social harms. My goal today is, firstly, to introduce those issues of discrimination and harassment and how they play out online, and secondly, to offer a map of the different ways that citizens are responding to take on these problems. To start, I want to be clear that oppression, as the AORTA collective puts it (this is an anti-oppression group based in California), is a huge problem, and it includes both conscious and unconscious dynamics on multiple fronts: within ourselves as individuals, at a systemic level, and at a cultural level. When we think about oppression, we're thinking about ongoing patterns of unfairness and mistreatment that play out over time, beyond just individual incidents. And if we put Barlow's ideas on this particular map, he's thinking about the ways that restructuring systems might change this set of challenges we have as humanity. But I think that actually, we've seen online all sorts of ways that these cultural and individual, these conscious and unconscious challenges continue to resurface and shape how things work for everyday people on the internet. And here are some examples. Take racial discrimination online. There are all sorts of studies showing that although online markets, like local classifieds or even things like Airbnb, are restructuring how we relate to each other economically, prejudice still exists. And those prejudices within ourselves and in our societies result in situations where black people get fewer offers, are trusted less, and actually get less money when they participate in online markets. And there are ways that the design of our systems can activate that prejudice.
Research by Jason Radford has shown that on the website DonorsChoose, which allows people to do the incredibly pro-social thing of donating resources and materials to schools that need them, simply adding information about the marital status of a teacher activates people's prejudices and has led to a dramatic difference between how women and how male teachers receive donations. On top of that, we've seen similar cases with algorithms, where our own prejudices and systems of discrimination can start training the algorithms that further structure our interactions online. Research by Latanya Sweeney points to the possibility, at least, that algorithms may be learning discrimination as well. And furthermore, there's the possibility that even as we create alternatives to traditional institutions, we might be creating entirely new avenues for discrimination. Early work by Hanna Wallach and me looked at the role of discrimination by online audiences in who gets heard in a society that relies less on editors to decide what goes onto the front page and more on what we each share online. We're seeing possible evidence of cases where online audiences are engaging in discriminatory behavior in whose articles they share; in this particular case, articles by women were receiving fewer shares on social media than articles by men. So discrimination is one of the challenges that we continue to struggle with online despite so many restructurings of how we relate. Harassment is another major issue. In just a second I'll be showing examples of some of the harassment that public intellectual and game critic Anita Sarkeesian has received in the past few years. If you would find this uncomfortable, it's fine to look away for a moment; I'll let you know when the slide has passed. These are the kinds of things that many people online face every day. People who use their voices in the public sphere have become targets of large-scale harassment, in some cases enabled by the internet. I'm moving on from the slide now. But it's not simply a case of words attacking people. Just in the last few weeks, our own Representative Katherine Clark was the victim of a swatting attack, where someone anonymously called a SWAT team to her home. Many of these swattings are coordinated online, when people release the address of someone who they hope will then become the victim of something like this. And there are other ongoing issues that people face: Jessie Daniels has documented them in her work on cyber racism, and Sarah Jeong has dedicated a substantial part of her book to thinking about the connections between online harassment and domestic violence. It's not solely something that people face from anonymous people they've never met; these are issues that are linked with other historical problems of oppression. And it's not just women and minorities. In this research by Pew, led by Maeve Duggan, we actually see many cases where a larger proportion of men than women report experiencing very serious problems like sustained harassment and physical threats online, even as women experience stalking and sexual harassment in greater proportions than men in the United States. And it's an international problem. Groups like Take Back the Tech have started to document international cases of harassment that people are experiencing online.
And if you're interested in exploring further scholarship on these issues, a number of people at the Berkman Center have coordinated to create a resource guide and other literature reviews on these kinds of problems. But let's come back to this question of the iceberg of oppression. I hope I've been able to show that even though the internet may be transforming some things about the systems that shape our collective action and the institutions in our lives, we still struggle with these individual, cultural, conscious, and unconscious forms of oppression every day. As Kate Miltner and Sarah Banet-Weiser put it in their recent paper on networked misogyny, when we focus solely on the technical and legal elements, these structural issues, we can fail to recognize the wider set of cultural oppressions that are going on: these battles over cultural norms, these challenges and contestations over what it means to be a person with dignity who has full rights and opportunities in our world. Normally when we think about these problems, we turn to the verbs of responding to social problems, what James Grimmelmann calls the kinds of interventions that we can take. But in this second part, what I want to ask is actually: who is doing the work? Who are the people responding to these problems? Certainly, there's an important role for government to play, and Danielle Keats Citron has made an incredibly powerful argument for governments to play some role, and certainly there's a huge role for technology platforms to play. In fact, there's very little known about the potentially hundreds of thousands of click-workers who are doing the work of responding to these things when platforms set policies. But there's also a sense in which, for the last 40 years or more, everyday citizens have been bearing the greatest amount of the work and labor associated with this. So in this second part, I'd like to outline four different ways that everyday citizens take action to intervene in and make sense of these social problems that we experience online. To start, we can talk about mutual aid. In 2011, prominent atheist blogger Rebecca Watson faced huge amounts of online harassment after speaking up about sexual harassment within atheist communities and conferences. As a result, a number of feminist atheists created a forum called Atheism Plus, where they could have their own content moderators to defend each other from racism, sexism, and harassment in their community. They were able to use this moderation to defend themselves on that platform, but their attackers realized that many of those same people also had accounts on Twitter, where the affordances for moderation are not quite so protective, and that left them open to mass harassment campaigns. In response, in 2013, those groups started thinking about block lists. A block list can be used by an individual Twitter account to prevent it from receiving tweets from someone they've identified that they don't want to hear from. The idea was that block lists could be extended so that a single list could be shared across multiple people who have a common set of rules and processes for deciding who's on that list, who's off that list, and how to keep people safe on Twitter. That was the first example of shared block lists on Twitter, at least; the idea goes all the way back to The WELL and very early online communities.
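To make the mechanics concrete, here is a minimal sketch in Python of how a shared block list might work. To be clear, this is not the design of Block Together, the Block Bot, or any particular tool discussed here; the class and method names are hypothetical, and the sketch only illustrates the core idea of group-curated blocking with an appeals process.

```python
from dataclasses import dataclass, field

@dataclass
class SharedBlockList:
    """Hypothetical sketch of a group-curated block list.

    A small group of curators reviews reports against shared rules;
    every subscriber automatically inherits the group's decisions.
    """
    curators: set = field(default_factory=set)     # accounts who review reports
    blocked: set = field(default_factory=set)      # accounts currently on the list
    subscribers: set = field(default_factory=set)  # accounts applying the list

    def report(self, target, reviewer):
        # A curator reviews each report before the target is added,
        # rather than additions happening automatically.
        if reviewer in self.curators:
            self.blocked.add(target)

    def appeal(self, target, reviewer):
        # Appeals are also fielded by curators, which is part of what
        # makes this a visible, communal practice.
        if reviewer in self.curators:
            self.blocked.discard(target)

    def is_blocked_for(self, subscriber, account):
        # A subscriber sees the union of the group's blocking decisions.
        return subscriber in self.subscribers and account in self.blocked

# Usage: three people share one list; one curator's decision protects all.
biglist = SharedBlockList(curators={"ana"}, subscribers={"ana", "ben", "cho"})
biglist.report(target="harasser42", reviewer="ana")
print(biglist.is_blocked_for("cho", "harasser42"))  # True
```

The design choice that matters is that blocking decisions pass through curators applying shared rules, rather than each subscriber deciding alone.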
As Stuart Geiger has very aptly put it, these block bots make responding to harassment a more visible and communal practice, where you actually have people who volunteer to go through claims of harassment, decide who's on the list, and field inquiries about whether someone should be taken off it. That prompted a wide range of peer support and mutual aid responses, like Block Together, a technical platform that supports people in creating their own block-list-sharing systems, and the more recent project HeartMob by Hollaback!, which organizes peer support for people who are experiencing online harassment. HeartMob launched just in the last few weeks, and it's designed to be a system where someone who's experiencing harassment, maybe for the first time, can go to get advice, support, and help documenting and responding to the problem. So in this sense, mutual aid is like a shield: people are organizing to protect each other in an emerging situation. But there's another model that I see emerging online, and it's this model of advocacy care. Like mutual aid, people engaging in advocacy care are still dedicated to supporting people who are experiencing these problems, but they're also dedicated to research and advocacy to try to make change at the structural level. One example is something I was privileged to be part of, and we can talk about it further in our discussion. I believe Amy Johnson, who is co-architect of this study's main findings, is also in the room, so I'm sure she'll have plenty to say. This was a project led by an NGO called Women, Action, and the Media (WAM!), who were particularly interested in supporting and better understanding problems of online harassment on Twitter. They ended up with a unique arrangement with Twitter where they were able to receive reports of harassment and forward them on to the company, so they could help people walk through the process of reporting harassment. And there's a larger process that kicked off for the volunteers who supported people through WAM. Amy and I and a whole cast of others, many affiliated with Berkman, were then asked to help them study the data they had collected, with consent, in that process, to better understand what it's like to receive, report, and respond to harassment. We were able to generate findings on who's reporting harassment and on the kinds of harassment people were reporting. We were also able to do statistical analyses on what kinds of harassment Twitter was more or less likely to respond to. In this particular case, we found that at that time Twitter was not very likely to respond to cases of doxxing, those moments where people publish personal addresses and other vectors of attack online. So those are some of the kinds of research we were able to do, and I'm happy to go into further detail later on. And this isn't the only example of that kind of advocacy care. There's the Onlinecensorship.org initiative, which supports people whose content has been removed from sites and advocates on behalf of people who feel they've experienced censorship. And there are also groups like Global Voices Advocacy who do a great amount of this kind of work online. There are some benefits to this approach: you're able to advocate directly to platforms on those things where platforms actually have the purview to act, to better understand the context, and to support people in their unique situations.
You're able to do research and then use that knowledge as a springboard for public policy advocacy. But there are also some risks. There are unstable relationships with platforms: maybe they decide they want to stop working with you, and it's hard to know how that relationship will play out. There's variation in the quality of support; WAM's volunteers were volunteers. And there are real questions to be asked about whether the people doing this kind of advocacy care have the training and support to protect themselves, because there are substantial labor and mental health costs. In our study on WAM, we found that one of the volunteers experienced substantial PTSD and had to drop out of the process. So this is a risky endeavor, in some cases, for the people who engage in it. But it does extend the mutual aid model into an approach that allows you to start addressing these systemic issues. There's another approach, one that I spent my summer at Microsoft Research studying: governance. Wherever we've seen online platforms emerge over the last 40 years, platforms that support wide ranges of cultures and communities, we've often seen people in the middle, moderators, community leaders, people who fill in the cracks of our online relationships, solve problems, and keep communities going. They could be conference hosts on The WELL, moderators on Slashdot, admins on Facebook. There's even something on Xbox called Enforcement United, where volunteers contribute their time to help people facing problems. And on Reddit, there are moderators. Now, Reddit is an incredibly diverse site. Officially, it's a social news site where people share links and have conversations, but there's an incredibly diverse set of cultures and communities on it: everything from book clubs to places where people look for jobs to communities where people share images to communities like Am I the Asshole?, where people post pseudonymously about their own behavior and ask other pseudonymous people, did I behave appropriately in this conversation? And Redditors will tell them whether they think they were the asshole in that conversation or not. There's an incredibly wide-ranging set of cultures, from porn to books to science to politics. And each of these communities has a moderation team, which can range from two people to a thousand. These are the founders, the organizers, the promoters, architects, maintainers, legislators, and enforcers of their communities. And a lot of them work together across communities. In this particular network graph, I've mapped out the ties of common moderation between subreddits in a population of 52,000 subreddits, where the lines mark communities that have shared moderation teams.
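As a rough illustration of how a co-moderation graph like this can be computed, here's a sketch using Python and the networkx library. The input data is a made-up stand-in; actually collecting moderator lists from Reddit's API is a separate step that this sketch omits.

```python
import networkx as nx
from itertools import combinations

# Hypothetical stand-in data: each subreddit mapped to its moderator team.
mod_teams = {
    "r/science": {"alice", "bob"},
    "r/books": {"bob", "carol"},
    "r/writing": {"carol", "dan"},
    "r/jobs": {"erin"},
}

# Draw an edge between two subreddits whenever their moderation teams
# overlap; the edge weight counts the moderators they share.
G = nx.Graph()
G.add_nodes_from(mod_teams)
for a, b in combinations(mod_teams, 2):
    shared = mod_teams[a] & mod_teams[b]
    if shared:
        G.add_edge(a, b, weight=len(shared))

# Communities with no shared moderators remain isolated nodes, like
# r/jobs here; the rest form a connected co-moderation cluster.
print(list(G.edges(data="weight")))
```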
In my qualitative research with moderators, I've heard from moderators who spend hours a day moderating. I've learned about the scheduling and organizing efforts that moderators engage in to keep up with the work of defending their communities from spam, hate speech, and other problems. And I've learned about subreddit networks, where communities band together to have shared rules and governance and spread the load. I've also studied the ways that moderators are pressured to be transparent about their work, about the governance they do when they delete people's comments or otherwise engage in governance work. In one case, just a few weeks ago, the science subreddit started publishing a transparency report to allay concerns from their subscribers that they were abusing their power. And I've also studied what happens when moderators pool their energy and voice to pressure platforms to change how they work, as in the case of the Reddit blackout, where over 2,000 communities shut down in order to collectively bargain with the company and force it to improve the moderation tools it provides to these communities. So this governance role is one that definitely has cultural and systemic implications, and it also involves defending and supporting a particular community. There's also another angle we need to think about when we consider citizen responses, and it's this angle of social change. Looking especially at this iceberg of oppression, when we think about changing the cultural or personal ways that we, in our own behavior and our own societies, behave in unjust ways, that's where we start talking about social change. And here are a few ways that people have tried to do that. Here is a chart, for example, estimated by Jason Radford in 2015, of participation in Wikipedia by men and women, men on the left, women on the right. In the case of Wikipedia, it's not going to be enough simply to try to change the norms; social change also involves expanding participation. And Jason has plotted out projections for different strategies that you might take. One strategy might be for Wikipedia to do better at retaining female editors; if it does that, it might reach gender parity by 2026. Another might be to recruit more women. And there has been a huge amount of work within Wikipedia around a variety of strategies for expanding representation on the site, some of which has been my own. But as Amanda Menking and Ingrid Erickson rightly point out, bridging these gaps is about more than just getting people in the room; it's about addressing these wider cultural and social issues. Another strategy, in addition to participation, is to think about our own biases and discrimination in our own behavior. This is early-stage research that I'm doing on an ongoing basis on discrimination by journalists on Twitter, in who they pay attention to and respond to. This is a chart of the average percentage of women followed per journalist, across 66 different publications in the United States and the UK. And we see here that most journalists on Twitter follow mostly men. So even as we think about Twitter and social media as an incredibly valuable way to expand who gets heard in society, we still have these personal behaviors of homophily and bias that shape who we interact with. In work in collaboration with Sarah Szalavitz, I've been trying to understand the factors that lead to this, personal factors, factors having to do with social norms and the groups we're part of, and to look at whether there are transparency initiatives we could introduce that could shift personal and group behavior on this particular kind of discrimination. This is the FollowBias system that we're testing right now, to see if exposing people to information about who they interact with on Twitter could have an effect.
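As a sketch of the measurement behind a chart like this, here's how one might compute the percentage of women followed per account, assuming you already have each journalist's following list and a (necessarily imperfect) gender inference for each followed account. The data shapes and names here are hypothetical, not the actual FollowBias implementation.

```python
def percent_women_followed(following, inferred_gender):
    """Share of gender-inferred follows that are women, as a percentage.

    Accounts whose gender couldn't be inferred are dropped from the
    denominator, a methodological choice a real study would need to
    justify and report.
    """
    known = [acct for acct in following if inferred_gender.get(acct)]
    if not known:
        return None
    women = sum(1 for acct in known if inferred_gender[acct] == "woman")
    return 100.0 * women / len(known)

# Hypothetical example data.
inferred_gender = {"@a": "woman", "@b": "man", "@c": "man", "@d": None}
journalists = {"reporter_x": ["@a", "@b", "@c", "@d"]}

for name, follows in journalists.items():
    print(name, percent_women_followed(follows, inferred_gender))
# reporter_x 33.3...: one woman among three gender-inferred follows
```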
But it's also important to think about group-level norms and the values that persist within a particular community. This is newly published work by Betsy Levy Paluck, testing anti-bullying interventions across 56 schools in New Jersey, accounting for factors at the individual level, group-level norms, and social networks. I think it's a breakthrough study, particularly because it supported young people to design their own interventions to reduce bullying and social problems in their schools. Often, anti-bullying initiatives are top-down, driven by the priorities and values of adults. But this particular study said: we believe that young people have a role not only in reducing these problems, but in defining the goals, values, and interventions that are important, coming back to this value of citizen power in shaping and responding to the problems that they face. And they found that supporting young people to design their own anti-bullying initiatives had a large effect on disciplinary reports, and that the position of those young people in their social network, how popular or central to the network they were, was also important for predicting the effect of the intervention. If you want to learn more about this, Betsy is actually going to be giving a talk this Wednesday here at Harvard. So that's the social change angle on addressing these things. If the other three approaches are often very concerned with protecting people and communities, it's also important to see the role that citizens play in addressing these wider issues of social change. Now, you might have noticed that in this last part, the verbs have come back in. We've started to ask questions about effects. We've started to talk about change. And as I move from the work I've traditionally done, documenting problems of harassment and discrimination and studying the work of people who are fighting them online, I've found myself thinking more and more about causal questions, especially because the interventions we imagine to address social problems online can sometimes backfire. And my GIF is not playing. There it is. Yay! For example, here's a study by Justin Cheng on the effect of downvoting across four different political news sites. He found that the people whose comments are most downvoted come back and contribute more, that their future posts are of lower quality as perceived by the community, and that they go on to downvote other people more, increasing the levels of acrimony in those communities. So here's a case where we think that this intervention of pooling our time to downvote people could make our communities better, but in these particular communities, the effect was actually the opposite. And so as I move into the last year and a half of my dissertation, I've been thinking about ways that we can support citizens to run their own experiments to estimate the effects of their efforts to address these problems. There are huge challenges there, in terms of the research ethics and also the methodological challenge of finding ways to meaningfully support people to run good experiments that can involve them in helpful and meaningful conversations about what they want to do as a community. Those are the challenges that I'm hoping to tackle in my upcoming PhD.
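To give a sense of what the analysis step of a community-run experiment might look like, here is a minimal sketch: threads are randomly assigned to a treatment (say, posting the community's rules at the top) or a control condition, and the difference in an outcome is tested. Everything here is hypothetical and simplified; a real study would also need power analysis, pre-registration, and careful ethical review.

```python
import random
from statistics import mean
from scipy import stats

random.seed(1)

# Randomly assign 200 hypothetical discussion threads to two conditions.
threads = list(range(200))
random.shuffle(threads)
treatment, control = threads[:100], threads[100:]

def count_removed_comments(thread_ids):
    # Placeholder outcome: in a real study this would come from the
    # community's moderation logs, not simulated numbers.
    return [max(0.0, random.gauss(3.0, 1.5)) for _ in thread_ids]

y_treat = count_removed_comments(treatment)
y_ctrl = count_removed_comments(control)

# Estimate the effect as a difference in means, with Welch's t-test
# (which does not assume equal variances across groups).
effect = mean(y_treat) - mean(y_ctrl)
t_stat, p_value = stats.ttest_ind(y_treat, y_ctrl, equal_var=False)
print(f"estimated effect: {effect:+.2f} removed comments, p = {p_value:.3f}")
```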
So to sum up, I hope I've been able to show some ways in which simply changing the systems of governance we have, and changing how people collaborate and relate to each other online, hasn't necessarily created this more humane world, but also that there are some patterns emerging, as Barlow hoped for, that illustrate ways that even as governments and platforms debate how best to respond, citizens are bearing a huge amount of the work and innovating in remarkable ways to address these longer-term oppressions that we face together as a society. Thank you. And I think we have time for questions; I'm hoping we'll have a rich discussion.

I'm a computer illiterate; I probably ought not to be here or asking this question, but of course, you largely had to exclude from your talk what platforms and government agencies can do to deal with this problem. Am I right in thinking that if the internet is going to operate in the many beneficial ways it does, then to a certain extent these bad things are going to have to be put up with, and we are not going to completely eliminate them? That is the question.

Yeah, I think there are a number of things that governments and platforms are doing. One area where there's been a lot of legal activity is child pornography. That's one thing where many governments, platforms, and citizens have agreed that the harms are so bad that it merits special legal and technical considerations. Platforms have developed a wide range of responses, using machine learning, flagging systems, and human labor in various ways to try to respond to these problems. And those boundaries of what is acceptable and what is not, what we will put into law and what we will leave to platforms and citizens to decide for themselves, those boundaries are under intense negotiation in an ongoing way. There are scholars studying and advocating from various angles; Danielle Citron is one person who's thought deeply about this. And there are also scholars looking directly at that negotiation process, people like Tarleton Gillespie, who has thought about the ways that platform policies allow platforms to position themselves as supporting all of the good things that they want people to recognize them as bringing to the world, while disclaiming responsibility for the bad things that we might not want to see. Those are very live and ongoing conversations, and as we look at things like the new Twitter Trust and Safety Council, we're seeing examples of new arrangements and approaches that people are coming up with to try to work through those questions.

I hope you don't consider this the same question asked again, but what you consider just, I probably don't, and vice versa. How do you draw the lines on what's right and just? That's one. And the other is: I certainly should be allowed to start a group in my house that considers women not as good at math. Shouldn't I be able to start a new Twitter or any other app or site only for people who consider that women are not as good at math?

So this question of where the lines are and whose conversations are acceptable has been a major point of contention for one of the platforms I've spent substantial time studying, Reddit, which on one hand has typically had an approach that is more along the lines of anything goes.
They want to support a wide range of communities having a wide range of conversations. So, for example, for the longest time, communities that distributed non-consensual photography, and various hate speech groups, various neo-Nazi groups, were permitted and thrived on the platform. This past year, the company started to change its policies under pressure from its other consumers and users, its board members, and the public. Instead of saying that there are certain kinds of speech they want to prohibit, they decided to say that they would not allow certain kinds of behavior. And on that basis, they removed several communities entirely from the site and banned all users who were moderators of those communities, which prompted a great backlash. So I think we're seeing people wrestle with the fundamental categories through which they think about these questions. Are we to think about these things as a speech issue and an intellectual freedom issue? Or are we to think about them as issues of behavior and harms? As we see companies evolve in their policymaking, and as we see legislatures come to terms with this, I think this discussion around behavior is going to become even more central.

Nathan, thank you so much for your talk. I'm always really grateful for how wonderful you are at crediting people and thinking deliberately about how to talk about other people's work. So with that in mind, I wanted to ask, I know you've read probably more of this literature than maybe any other human, but certainly more than me. What kinds of things would you like people to study? What kinds of things do you wish people were looking at more, with an eye toward understanding what's not known in the field? Because you've done such a great job of providing an overview.

Thanks, Kendra. I think there's been a lot of focus on trying to understand what online harassment is and who the people who do it are, and there's clearly a huge amount of work still to be done there, because our struggle with definitions, as we've just been discussing, is one of the fundamental struggles: how we define these things and where we draw these lines determines whether we see them as legal questions or otherwise. In parallel with that, I think we've had far too little research on the effects of online harassment and the effects of our efforts to deal with it. For example, many of our conversations about why online harassment matters are based on the assumption that it's a risk to people's exercise of their speech rights. We have this assumption, well grounded in qualitative evidence and in the experiences people face every day, that receiving online harassment has the effect of pushing people out of participating in our democracies. And although that's a compelling argument, we really don't have quantitative evidence to show that this kind of thing is happening on average. So that's one area: documenting these effects on people's lives, whether pushing people out of the public sphere or the effects on their own person and emotions, is an area where I think there's a lot of work to be done. And then also, I think there's a great need to understand the effects of our efforts to respond, both our efforts of care for people who are experiencing these things, and the effects on the people who offer that care, who might themselves face mental health challenges.
And also the effects on the wider cultures and structures and systems and norms that support and propagate these things. So as I move forward with my research, I'm hoping to focus more on those areas. And then, finally, there's a great need for people working on these things to talk to each other more. In our literature reviews, we've found so many different communities where scholars have done great work but haven't necessarily been aware of each other's work. So that's another thing I'm hoping to dedicate time to in the coming years.

I'm Eric, here at the Law School. My question relates to some of these questions, in that it seems like a lot of what you've presented today is either U.S.-focused or maybe Anglo-American. I'm curious about the international perspective, or different approaches in different countries: whether you've looked at all at cultures like the United States that are very free-speech-oriented and almost anything goes, versus maybe something like a European model that's a little more restrictive, or something that's very restrictive, like a Chinese model. Any thoughts on how different countries or governments approach this, and whether there might be lessons there that are a little more universal in character?

So my own work has focused mostly on the U.S. and the U.K. There are scholars who are doing great work trying to build case studies on these international dimensions. Here at the Berkman Center, we have people like Susan Benesch and the Dangerous Speech Project documenting those things, and I mentioned the work of Take Back the Tech, which is also trying to think about these questions internationally. Unfortunately, that's a bit out of my purview. One of the challenges for someone like me, who's more of an engineer and a social scientist, is that often my research is limited by the reach of a particular platform or community that I'm working with. There are some platforms, the Facebooks of the world, the Wikipedias of the world, and to some degree the Reddits, where it will increasingly be possible to do comparative studies that account for differences between languages and cultures, and I'm really excited to explore those kinds of studies as they become more technically possible in the future.

Thanks, Nathan, for a really interesting talk. I had a question about the underlying grouping you have for actors. You had governments, platforms, and citizens, and governments and platforms are very obvious actor categories. But I'm wondering about citizens, and why you chose the word citizen, because you could have chosen user and talked about the commercial, private, corporate relationship that's going on here. But I think you're making a larger argument, and I'm wondering if it's connected back to that Barlow quote you started with, or how it all fits in. If you could talk a little bit about that, that would be great.

Thanks, Amy. I'm really inspired by other movements, in Western history in particular, where people have responded to rapid technological change through citizen efforts, both to support each other and to work for larger structural change. A lot of this comes from inspiration in the work of Elinor Ostrom on citizen governance and the role for citizens in governing common resources in cases of environmental harms.
And there are also other histories, like the histories around the formation of the FDA in the United States, and similar histories in the UK, where you have this vast expansion of productive capacities. I think I have a slide somewhere here of the Poison Squad... Here it is: the Poison Squad in the early 20th-century United States, citizens who volunteered to eat adulterated food, partly to help researchers understand the effects of food with borax and alum and formaldehyde in it, but also partly so that they could advocate for the creation of things like the FDA. And you have these fascinating movements, like "a microscope in every home," around finding ways that citizens can better understand and respond to emerging problems that people are trying to make sense of, that government hasn't yet figured out how to properly govern. So I draw great inspiration from that, and also from my own qualitative work talking to the people who do this work. I've been struck both by the labor that goes into it, labor that supports these platforms, and by the way they bring the same kinds of attitudes and passions that someone might bring to being a crossing guard for their school or serving on the school board, and the ways they come under the same kinds of pressures as someone who plays that civic role in their community. So as I do my fieldwork, especially in the work I'm trying to write up now on Reddit moderators, I've been struck by the fact that while they're certainly doing labor, they're also in this very civic role in their communities.

Is this on? Yeah, you can hear me. Thanks for your talk, Nate. I'm so grateful for your comprehensive, all-embracing vision of these things. My question is maybe a quirky, certainly less formed follow-up to Amy's. I'm curious about the role of moderation in these situations, and as you were talking, I wondered if there's a sort of nascent political theory of moderation, a kind of political anthropology of moderation, from the Reddit community model to the click-workers. I wonder if you could talk some more about moderation: where it comes from, where it's going, and the diversity of approaches to that crucial political fulcrum.

What a great question. I'll try to keep it short. We have had people in these kinds of roles for as long as we've had people relating to each other online, and I'll share a few places to turn rather than give you the comprehensive, all-encompassing answer. There's work by Brian Butler in the 1990s, where he looked at mailing list moderators and the different work that people put into mailing lists and newsgroups; that's where I draw that rich taxonomy of work, the founders, the promoters, the legislators. More recently, there's been a lot of interest in governance on sites like Wikipedia, from folks like Darius, who's in the room here, and Amanda Menking, who looks at the emotional labor of it. We heard here at Berkman just two weeks ago from Aaron Halfaker, who has looked at the role that automated systems play in that. And I think that role for automated systems is something we especially need to pay attention to and get to grips with. There's great work by Stuart Geiger, who I already mentioned, on the role of bots and the ways that we use code to understand how things are playing out in our online communities. Many of the subreddits have millions and millions of participants, and it's frankly impossible for any single person to really understand what's happening in them. So the moderators who engage in that work, sometimes thousands of them for a single community, really rely on automated systems of various kinds, both to understand how things are playing out in their community and to do the work of moderation.
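To give a flavor of what such automated systems can look like, here is a toy rule-based filter in Python. It is not Reddit's AutoModerator or any bot Geiger studied; it just sketches the common pattern of declarative rules whose matches are removed or routed to a human review queue. All rule names and patterns are hypothetical.

```python
import re
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    pattern: str  # regular expression matched against the comment text
    action: str   # "remove" or "queue_for_humans"

# Hypothetical rules; real moderation teams tune these constantly,
# and most matches go to human reviewers rather than straight removal.
RULES = [
    Rule("spam-links", r"https?://\S*example-spam\.com", "remove"),
    Rule("personal-info", r"\b\d{3}-\d{3}-\d{4}\b", "queue_for_humans"),
]

def moderate(comment_text):
    """Return the (rule name, action) pairs a comment triggers."""
    return [
        (rule.name, rule.action)
        for rule in RULES
        if re.search(rule.pattern, comment_text, flags=re.IGNORECASE)
    ]

print(moderate("amazing deals at http://www.example-spam.com today"))
# -> [('spam-links', 'remove')]
print(moderate("call me at 555-867-5309"))
# -> [('personal-info', 'queue_for_humans')]
```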
My small piece of that, in research I'm still wrestling with and trying to write about, focuses on the ways that moderators position their work in terms of the platform and try to maintain a good relationship there; the ways that they try to relate well to their communities, to whom in many cases they're accountable; and the ways that relationships between moderators shape how people become moderators, how they stay moderators, and how their idea of what it means to be a moderator evolves over time. So those are questions I'm actively asking in my research.

Hi, I'm Ron Newman. I happen to be a volunteer moderator for a local LiveJournal community, and somehow I think we've managed to avoid most of the problems outlined here, although I'm not quite sure how we got lucky. But my question is about the perverse effect you mentioned, that downvoting, which to me is essentially crowdsourced moderation, can encourage bad behavior instead of hiding it and making it disappear. Why do you think that is, and what could be done about it?

That's a really important question that needs more research.

I've always thought that with bad ideas here, they can simply get downvoted to death and you don't even see them anymore, because Reddit has said this is worthless; it falls to minus 100, it goes away.

Yeah, absolutely. So one of the most fascinating findings in the world of research on these questions comes from Moira Burke, from 2008; she's now at Facebook. She did a study on, oh, I completely forget which platform, but she wanted to see whether posts that were more positive received more comments, or whether posts that were more negative received more comments. She did an analysis over hundreds of different online communities, and the answer was: it depends. There's all sorts of cultural variation in how different communities work, who's in them, and what their norms are. And so when we look at findings like Cheng's on the effects of downvoting, it's hard to know how well that generalizes. Does that mean that all downvoting systems work that way? Or is there something about those four particular, politically polarized communities that meant that that's how it played out there? There are a variety of ways we might start to explore that, and that's one reason why I'm especially interested in supporting communities to do their own research and potentially pool their findings with other communities, so that we can start to get a sense for what differences between communities lead to such different outcomes for the same kind of intervention.

I've never been able to understand this, and I've thought about it quite a bit. What is it exactly about the cloud, or online or digital communication, that disinhibits people, that permits them, enables them, to say things that they would never say in their analog existence? I mean, is it the relative anonymity? Is it the fact that they're not as identifiable, not as localizable?
Or is it just the absence of eye-to-eye, face-to-face contact with anybody who's reading what they have to say? I think it's extremely important to get at this, because it may be possible to correct one or another of the reasons for this absence of civility without necessarily trampling people's expressive rights.

My response: this is a live and ongoing debate, whether things about the design of our online communications are prompting a unique kind of harassment or incivility. I would turn back to this iceberg of oppression to help make sense of it, and also to that wonderful article by Banet-Weiser and Miltner. Even as there are certainly cases where we've been able to document things in the design of a system that have effects on discrimination, we have to acknowledge that we don't live in a world where people are civil to each other and behave in just ways. We live in a world that's full of all sorts of racism, intergroup conflict, and sexism, and we are struggling with these things as a society in an ongoing way. A great deal of what we see online is simply people behaving as they always behave, and we're seeing it online. So I would agree that it's important to find those places where maybe the design choices we're making are activating or furthering those kinds of problems, but I think we also need to acknowledge that to really address these problems, we need to be thinking in terms of social change.

Well, I'm so glad you had that response, because all I was thinking was that we often don't think about how what's enabled here is the visibility of the kind of vitriol that's often directed at individuals, which, unless you're the target of it, you just don't feel and see. So, for better and for worse, I feel like, to your point, it really renders visible the kinds of tensions that we often navigate in a day-to-day way. My question is around a comment you made earlier about wanting to also provide opportunities for posting material that's been taken down, and what kinds of rubrics and logics you use in the advocacy care work of balancing, and I don't even know if that's the right paradigm for it, but making sense of how to provide opportunities for speaking, a kind of freedom to, alongside the capacity of blocking, a freedom from something that might be hurtful.

So you're asking about the tension between block systems and ideas of, say, the public sphere?

Yeah. And I don't know if you have a specific example of something you've been actively working to make visible that's been taken down, and how to make sense of that as an act of a particular kind of liberation to speak. I wanted to ground it in a specific case, if you have something where you've had to negotiate that. We can probably think of back-and-forths between individuals as cases where that gets negotiated, but when you're participating with a group of people trying to care for a conversation, how does that play out? How do you work with that? I don't know if I'm still being too abstract.

I can give an answer, and I think we can maybe talk later to narrow down that question. There are definitely critiques against, say, the work that WAM did: that by offering support to people who were reporting online harassment, they represented a threat to freedom of speech in America and around the world. And I think it's very easy for people to conflate things here.
This comes back to that need for better definitions and better understanding of the kinds of things that people experience. In our work, once we published the report on what WAM had done, we actually had a fair number of people come to us and say: oh, now we understand what they were trying to do, and we recognize that our fears that you were a threat to speech rights were actually not well grounded. We now see that a lot of what WAM was doing was supporting people who had ongoing issues of harassment, often connected to someone they already had a tie to in their life. I think there are concerns about whether blocking technologies in particular might lead to a world of very carefully crafted filter bubbles, of people not being able to hear other viewpoints. Stuart Geiger has a talk from the Association of Internet Researchers conference where he untangles some of those questions more eloquently than I can. But my answer to those concerns is typically to urge people to look, first, at the specifics of what people are blocking and why they're doing it, and second, at the mechanisms they do or do not have in place for dealing with cases where they may have made mistakes. In the case of the Block Bot, they have an appeals process. Many subreddits have appeals processes, and that's very different from how most online platforms work. Most online platforms don't tell you why they removed your content. They don't give you a chance to appeal, and they certainly don't give you a chance for that appeal to be considered by a collection of peers who deliberate over it. In many cases, whether it's Wikipedia, many communities on Reddit, or things like the Block Bot, those processes of accountability are either in place or being put in place as more and more users advocate for them.

Okay, over there. I have a question about the WAM study. You mentioned during it that you said "with consent," and there's some question as to what consent means in social media research. I was wondering what the consent model is for that, whose consent you get. And I ask that partly because I'm really curious what the response of attackers is to being researched. So I guess that's the consent question, and the follow-up.

Yeah, so in the case of the WAM project, which I should note was entirely designed and implemented by WAM, with our team of researchers coming in partway through: firstly, they advertised that they were doing research, and secondly, they did ask for consent from people who were reporting harassment. Now, as researchers, we knew that these were merely allegations of harassment, and it was incredibly important for us to treat that data with the utmost security and protection for the people who were alleged harassers, because there could be grave consequences if that data were released. This is one of the core challenges facing anyone trying to do evidence-based research, especially large-dataset, evidence-based research, in this space, and a question that Brian Keegan and I have been exploring in our work on the ethics of causal research on problems of online harassment and other moderation questions in communities. Some researchers, not the majority, but some researchers studying hate groups and alleged neo-Nazi groups, claim that they're justified in doing covert research on those groups without asking consent.
Others take different lines, and there's still a very live conversation to be had about just how to balance the risks and benefits to people who are in a conflict online. That's a question I'm actively pursuing as I try to figure out how best to proceed with my own research.

Thanks for the provocative, fascinating, and, it feels, encyclopedic talk, which is a good thing. So platforms have tools and automated systems to deal with these issues, and, call it the advance guard of citizen self-care, the Reddit moderators, or the moderators of big subreddits, have automated tools. What's the equivalent of a microscope in every home? Should we move toward that, and what would it actually look like?

Yeah, I think things like these mutual aid systems and those mutual aid advocacy approaches are the closest parallel I can think of, where with something like a block list, just a group of two or three people could use one of those tools to protect themselves from particular kinds of harassment. But we're very much at an early stage, and much as people in the mid-19th century were debating whether you needed to use chemistry or microscopes or other kinds of technology to observe the adulteration of food, people are still trying to figure out what ways there are to detect, observe, and even describe these problems. When we think about technology, description, and language: simply having the language to talk about something is itself one of the most powerful things. Simply having a form that people fill out is an incredibly powerful tool, and one that is hotly debated for pretty much every form that any organization or platform puts out. And I'd say that that question is probably the one that has most captivated many of the computer scientists I know who are trying to work on this. For example, there's... oh, anyway, we could talk after the talk about three or four universities that have recently started initiatives to think about that question.

Well, thank you, everyone. I really appreciate your questions.