It's my great pleasure to introduce Susan Benesch. Susan is a faculty associate here at the Berkman Center. She teaches at American University's School of International Service. Susan is the founder of the Dangerous Speech Project, which studies inflammatory speech and its capacity to inspire violence, and looks at how to limit the harm of dangerous speech while still protecting freedom of expression. Susan and I and several other fellows from the Berkman Center have been working on a variety of thought experiments, tests, and ideas to probe the principles of freedom of expression and how they apply online, to start thinking about how we can rigorously challenge some of our justifications for freedom of speech through what we can observe online, and to explore those in greater detail. We've talked a lot about pro-speech solutions for speech harms, the efficacy of counter-speech, and lots of other things. So I am totally thrilled and happy to introduce Susan to you all, and she's going to be talking about troll wrestling for beginners: data-driven methods to decrease hatred online. Please join me in welcoming Susan.

Thank you. Shall I sit or stand? It's left to you. Okay, I can flop around either way. Unfortunately, I was about to tell you, I won't demonstrate troll wrestling. We'll have to leave that for another time. But I'd like to start by explaining this very serious title. What is troll wrestling? And why do I say for beginners?

As you all know, hatred, racism, xenophobia, homophobia, misogyny, and violent threats are all rampant online. We're all worried about it, but there is a sense of helplessness in the zeitgeist about this. Research and public discourse so far have focused on what seems to be an uncontrollable plague. We hear, and we say, that the internet foments hatred; that online disinhibition is causing this; or, a simpler and more time-honored line, kids nowadays: there's something wrong with the new generation, or wrong with the internet, or both. Or we hear, and we say, often: don't feed the trolls. Don't feed the trolls is the online world's principal policy response to this very serious and perhaps growing problem.

The other two standard responses, by the way, are the same ones that human authorities, usually but not only governments, have always used to deal with objectionable speech of any kind. Those two other responses are punishment and censorship. With punishment, you go after the speaker. In comparatively mild and fair forms of punishment, the speaker is sometimes prosecuted; in other circumstances, the speaker is punished without rule of law, and sometimes the punishment goes as far as death. And then, of course, we have censorship, which is practiced very vigorously all over the world today, as it has been ever since we began expressing ourselves at all.

These two responses, punishment and censorship, can do very serious damage to freedom of expression. That's the first reason I'm worried about them and wouldn't want to rely on them. Second, and equally important, they don't work very well in many contexts, including, in particular, online. I would suggest to you that punishment and censorship work even less well online than they do offline. The speech of Anwar al-Awlaki, for example, has been vigorously censored in many contexts, including online, and he himself was punished quite severely, with death, as you know.
Yet his speech continues to be seen and heard widely today. Just last week I was talking with someone who studies this in detail, and al-Awlaki's speech is still influencing lots of people online. He continues to inspire hatred and violence, just as he inspired the Boston Marathon bombers. So punishment and censorship, I hope I've convinced you if you didn't already believe it, are inadequate.

That brings us back to don't feed the trolls. What about that? In that maxim there are several, even many, implicit assumptions that call into question the wisdom of relying on it. Let me give you just four of them.

The first is that it works. The idea is that if you ignore a troll (still to be defined, by the way), that person will stop. There's some anecdotal evidence to support this idea, but it's quite limited. It tends to be limited to a single platform, and sometimes to a single exchange between a person and a troll. And as far as I know, there isn't robust data to support the contention that don't feed the trolls is a successful policy response, in other words, that it works in reducing the expression of hatred online. So that's one assumption I think you ought to join me in questioning.

The second is that online hatred is produced, in its vast majority, by people we would call trolls. The few experiments that have been conducted on who is producing racist, hateful, misogynist, et cetera, speech online indicate that, in fact, as much as half of it is produced by non-trolls, by people we would not consider to be trolls. We also have some early data indicating that feeding these non-trolls who speak like trolls can, in fact, be successful, and I'll show you that little bit of early evidence.

The third assumption is that trolls are a homogeneous species: that there is one kind of person that is a troll, and that, therefore, trolls are likely to react in the same way in response to similar treatment. The assumption that if you do something to a troll, the troll will respond in a particular way depends, you must agree, on the idea that a troll is a troll is a troll. And you know how well that worked with roses.

The fourth is not quite an assumption, so I'll call it an observation: don't feed the trolls focuses our attention on the people who are producing hateful speech, whether they are trolls or not. However, it is useful and important to focus, if not instead then at least in addition, on the rest of us: on the audience, on everyone else who now sees and hears so much hateful, objectionable, violent, offensive, frightening speech online. Wouldn't it be useful to think about the effect of all this speech on the rest of us, on the community: the fear, the shame, the silencing, and the shifting of discourse norms that this type of speech produces? So what I want to argue is that the maxim don't feed the trolls concentrates our attention on the trolls themselves and on trying to change their behavior, their ideas, their speech. Research and data collection should also focus on the non-trolls, the audience. We have hardly begun to take the quite extraordinary, I would even say wonderful, opportunity that online communication provides to study possibilities for shifting discourse norms away from this hatred.
Online communication is a remarkable opportunity for a number of reasons. One: you can study the effects of speech on people when the speech is disseminated online, not easily, but much more easily than you can study the effects of speech offline. There are other reasons why it's a remarkable opportunity, but to get to the slides, and then much more importantly to your questions, I'll just say that now I feel I've explained why I call this troll wrestling for beginners, and that is: we are all beginners. There is an exceptional opportunity to study other methods of decreasing hateful expression online, which, as I'll show you, a few people have just begun to take, and I hope to convince at least some of you to expand on those efforts.

So there's this opportunity to diminish hatred with other methods, including what I call counter-speech, that is to say, not ignoring the trolls but instead speaking in response to them. To try to convince you of that, I'm going to make three points, then show you the work that's already been done, and close with some suggestions for next steps, for other work that can be done.

Why do I think this is such a great opportunity? A few more reasons. One: it isn't hatred that's new. The idea that the internet is somehow causing hatred is wrong. The internet perhaps produces a sense of different community norms, or less explicit community norms, or indeed a lack of community, so that people feel disinhibited enough to express hateful ideas, but the internet does not create those ideas. They have always been there. What is so different now is that the rest of us overhear, or of course in most cases over-see, hateful expression. The rest of us are now privy to speech that, pre-internet, we would not have seen or heard.

For example, if the Ku Klux Klan held a rally, they wouldn't have invited those of us here in this room. If men wanted to tell rape jokes to each other in the past, they would have told them in a physical space, within a small community of people, where they felt confident that no one who would disapprove of those jokes would overhear: in a locker room, on a hunting trip. I know I'm using stereotypes, please forgive me, but in those physical spaces the people exchanging that speech knew that, for example, women wouldn't overhear it. That's not to say that racist speech wasn't also shared in physical public space, but lots of it was exchanged in places where those who spoke it and those who heard it knew it wouldn't be overheard. Increasingly, online, that is not the case. That is to say, speech is crossing boundaries between human communities of all kinds in a way that, in all the rest of human history, was impossible. This has become a new feature of human life. It causes a tremendous amount of pain, but it's also an opportunity: to learn to toss speech, lightly perhaps, back across those boundaries, to see what sort of effect it might have. This is something we don't know how to do; not surprisingly, it's a brand new sport, if you like, a brand new effort. People are now exposed to new hateful speech, but they can also be exposed to new constructive and peaceful speech, and to speech expressing social norms to which they have never been exposed in the past.
For example, if I asked you where sixteen-year-olds have traditionally gotten their ideas and opinions: throughout the course of human history, they got them overwhelmingly from the small, highly homogeneous, normative community of people around them. Their parents, their siblings, their peers, their clergy, their neighbors, their soccer coach. A small homogeneous group of people. That is no longer the case. That is an opportunity that, again, has only just barely begun to be exploited.

The second reason I think this is quite an opportunity is that it is far from impossible to shift speech norms within communities. As I said to a journalist yesterday, I'm not suggesting that we all fly to the moon in our pajamas, without a rocket, with just our arms. It's not impossible. It's not even all that outlandish. On the contrary, we have plenty of examples pre-internet. Those of you who have heard me talk about this before, please forgive me, I always give this example. What is the likelihood that an American politician will use the N-word in public, even though not only is that speech not prohibited by law in the United States, any politician has a constitutionally protected right to use the N-word in public? How likely is it? Anybody? Knowingly, believing the microphone is on. Thank you, 0.01%. My students usually say zero, but you're more precise. Okay, it's extremely, extremely unlikely. Fifty or sixty years ago, there were probably places in the United States where you couldn't have gotten elected if you didn't use that word. Is that fair to say? If you want, seventy years, okay. Anybody want to argue with that? So speech norms in that particular case have shifted not only a little, not only a lot, but 180 degrees, entirely. Marriage equality is another example of a very rapid shift in historical terms, and that one did have some help from the online world. That is to say, speech norms within communities can shift, in human terms, very quickly.

The third point is that, as the psychologist Maria Konnikova pointed out very eloquently in a fine New Yorker piece last October, people's behavior shifts dramatically in response to community norms. I'll be able to illustrate this for you in just a moment with some online research as well. As she wrote, one of the most important controls on our behavior is the established norms within any given community. For the most part, she wrote, we act consistently with the space and the situation. This is true of the vast majority of people, even trolls.

Now, as I've been promising, I'll describe a few early efforts to gather data on how to diminish hatred online. I'll suggest some other experiments, and then, of course, I will ask you to suggest some. In Kenya, at the end of 2007, after months of hateful and inflammatory speech online and after an election fell apart, there was a terrible eruption of violence. More than 1,000 people were killed, and more than half a million were displaced from their homes. Many of them have still not been able to return home. Last spring, almost exactly a year ago, on March 4th, 2013, Kenya held its next presidential election after that disaster. As that election loomed, in 2011 and 2012, many of us who love Kenya, including, of course, many Kenyans, began to worry about the prevalence of inflammatory speech in Kenyan discourse, online and off.
And a project was launched called Umati, U-M-A-T-I, which means crowd in Swahili, to scour Kenyan online spaces for hateful speech, inflammatory speech, and what I call dangerous speech. As Andy explained, dangerous speech is a term I've coined to refer to speech that has a special capacity to inspire mass violence, such as what happened in Kenya in 2008. This slide shows some results from the Umati project: unfortunately, a large number of examples of very inflammatory speech collected from Facebook pages, and then such an astonishingly small number collected from tweets that you can perhaps barely even see that side of the slide. By the way, this is data from more than 7,000 total examples that were eventually collected, and the project is continuing. What I'm trying to show you is the gigantic disproportion. We expected a considerable disproportion between Facebook and Twitter for all kinds of reasons, including somewhat different populations on each platform, the comparative expectation of privacy on Facebook but not on Twitter, and so forth, but not this kind of disproportion. This was wild.

So we started looking, at first anecdotally, to see what might explain the comparative absence of inflammatory speech on Twitter, and what we found was lots of counter-speech. For a bit of context: in this period in Kenya, there was so much concern about inflammatory speech that there were many, many appeals in all kinds of media, including, of course, online spaces, but even in graffiti murals painted on the walls of Nairobi, appealing to people to keep the peace, to stay calm, to remember that they're all Kenyans, and not to listen to inflammatory speech. And that's what we ended up finding quite a bit of on Twitter: community responses asking other people to stay calm, in response to inflammatory speech.

So here's an example. One person tweets: all the Luo and Luhya tribes can go to hell, as long as it's the Kikuyu tribe running things. That was, particularly in that context, a very inflammatory thing to say. In response: is this the Kenya we want? You should apologize to KOT, the Kenyans on Twitter. And this is highly compressed; there was lots of other conversation, as you can imagine, but I'm trying to give you a very compact, streamlined sampling of this work. And then we have a response from the original person who tweeted, or at least from the original account: sorry, guys, what I said wasn't right and I take it back. Lesson learnt. Sort of odd behavior for a troll. I hear you saying hmm, and things like that. Of course, I don't know what you're thinking, but we at least were a bit surprised by that.

So we started looking at some other cases of inflammatory and hateful tweets that were met with lots of counter-speech, spontaneous counter-speech, by the way; these are not experimental interventions. You may remember that Nina Davuluri, an American whose parents immigrated from India, was chosen as our latest Miss America. Almost immediately, there were many, many tweets attacking her, attacking her selection as Miss America, and mischaracterizing her as Arab, among other things. So here is one such account, which poured out hateful, racist tweets about Miss America, about Nina Davuluri. Here are the responses. One: one day I hope you'll realize how shameful this tweet is; I hope you realize it tomorrow.
Someone else: your hatred made it onto Sky News, congrats. That points to an incidental but interesting factor we also noticed in Kenya: online speech often gets greatly magnified, particularly in contexts where not everyone is online, when it is reported in the mainstream media. This happens extensively in the United States as well; particularly hateful content online is often reproduced by the mainstream media. That happened in this case. And then we have a somewhat different type of counter-speech: ignorant, illiterate, racist idiot. This should make you aware that just as we can't say troll and refer to one homogeneous species, we can't say counter-speech and be referring to only one type of thing. So one of my own projects going forward is to study counter-speech, to try to understand the different forms it takes, and to develop a kind of taxonomy of counter-speech, with the idea of determining when which forms are useful: when is it successful, in what kind of circumstances, and in response to what sort of speech, produced by whom?

Here's another example of counter-speech: don't just hate her for her skin color, she's an American like anybody else. Response from the original account: I think it's funny what they have to say, but I am not racist. And I didn't realize it would explode like that. So counter-speech has evidently not been very successful, until, a few hours later, he tweets Miss America directly: sorry for being rude and "racist" (he puts racist in quotes, but he still says it) and calling you a Arab. My so far only anecdotal, non-rigorous hypothesis is that trolls have bad grammar, and the counter-speakers somehow tend to do better with their grammar; but as I say, that's only a very anecdotal impression so far. Sorry for being rude and racist and calling you a Arab. Please tweet back so everyone will know it's real. So this is another example of a troll, someone who produced hateful, inflammatory speech, who received counter-speech and seems to have been affected by it, such that he recants and even apologizes, like the example in Kenya. We have lots of others, but I don't want to take the time to show you many of them. Instead, I hope I've already suggested to you that it is worth studying this quite a lot more, to see whether one can glean some knowledge about how to produce this sort of reaction.

I hope you will put this together with what I suggested to you about sixteen-year-olds, who traditionally have gotten their ideas and opinions from a small and homogeneous group of people. This guy might be such an example. In his original tweets, he might have been saying the sorts of things that the rest of us would never have seen or heard from him before he got online. This is what I mean by an opportunity. Now, I hasten to tell you that I don't imagine that we, that anybody, is going to bring around all the hardcore extremists and haters and violent misogynists and racists out there. That is, first of all, not realistic, and secondly, not necessary for shifting community norms. We need to influence some critical mass of people within a particular community, not everyone, and particularly not the outliers. Most people are in the 80 percent or so of the malleable middle who, as Konnikova argued in her New Yorker piece, tend to shift their behavior in response to what they perceive to be community norms.
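To make the taxonomy idea from a moment ago concrete, here is a minimal toy sketch of what a crude first pass at sorting counter-speech into categories might look like in code. The categories and the keyword cues below are illustrative placeholders for the kind of taxonomy being described, not findings or a codebook from this research:

```python
# Toy labeler: sort counter-speech replies into hypothetical categories.
# Categories and cue phrases are illustrative guesses only.
CATEGORIES = {
    "empathy_appeal": ("hope you'll realize", "hope you realize", "imagine how"),
    "shaming": ("shameful", "disgrace", "made it onto"),
    "fact_correction": ("she's an american", "that's not true", "actually"),
    "insult": ("idiot", "ignorant", "illiterate"),
}

def label_counter_speech(reply: str) -> list:
    """Return every category whose cue phrases appear in the reply."""
    text = reply.lower()
    hits = [cat for cat, cues in CATEGORIES.items()
            if any(cue in text for cue in cues)]
    return hits or ["uncategorized"]

# The replies quoted above come out roughly as you would expect:
examples = [
    "One day I hope you'll realize how shameful this tweet is",
    "Your hatred made it onto Sky News, congrats",
    "Ignorant, illiterate, racist idiot",
    "Don't just hate her for her skin color, she's an American like anybody else",
]
for reply in examples:
    print(label_counter_speech(reply))
# ['empathy_appeal', 'shaming'], ['shaming'], ['insult'], ['fact_correction']
```

A real taxonomy would of course be built the other way around, from human coding of collected examples, with something like this only as a downstream convenience for triage.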
There's another source of experimental data, and that is the gaming industry. Some quite interesting experiments have already been done by various gaming companies on various platforms. The most work has been done by Riot Games, which produces League of Legends, which I'm told is the most popular game of all time; in any case, it's exceedingly popular. Jeffrey Lin directed an extensive set of experiments, conducted by a large team he assembled at Riot Games after the company became alarmed at the level and virulence of toxic, hostile, and even violent speech being used by gamers playing League of Legends. The results are quite striking. First, one of the experiments found, to the surprise of this group of researchers, that fully half of the toxic messages were not coming from people who would otherwise have been considered trolls. That is to say, about half of toxic messages were produced by people who otherwise seemed to behave normally the rest of the time. They were, as Lin describes it, apparently having a bad day. Although, as he points out, just one person having a bad day can cause all the other people playing a given game to perceive that the atmosphere in the game at that time is toxic, and that can in turn shift norms. So, first, half of toxic messages did not come from trolls. Second, peer feedback and community-driven sanctions caused quite dramatic changes in player behavior. Other tweaks to the platform made substantial differences in player behavior, and therefore, we can probably conclude, in community norms as the players perceive them. And finally, very small changes in the platform, for example small changes in the language used to prompt players, made substantial differences in their behavior, in some cases dramatically improving it. Even font colors apparently can have a considerable impact on player behavior.

There's another source of, interestingly enough, some of the same conclusions, and that is a team at Facebook that conducts what they call compassion research. Compassion research is work designed to teach Facebook how to get people who use Facebook to behave in more pro-social ways. One of the pieces of work of which Facebook is particularly proud is what they call social reporting. This is changing the flow of prompts you see when you try to flag content on Facebook. For example, you see an image that a friend of yours on Facebook has posted. You don't like it. You try to flag it, or report it to Facebook. Facebook now will prompt you to engage first with the person who posted the content that has offended you, and then the flow tends to prompt you with language that you might use when you communicate with this other user. The language focuses on emotion, so you're encouraged to say something like: could you please take this photograph down? It makes me feel embarrassed, or it hurts my feelings. Facebook, working with a couple of teams of academics at Berkeley and at Yale, has had very successful results with this, and has, again just like Riot Games, found that very small changes in the language used in these prompts can have a dramatic positive impact on user behavior. Now, I have said that this is highly effective and useful, but not in all contexts. That brings us to very different contexts, such as Myanmar, or Burma, where several very important things are true.
One is that Facebook is overwhelmingly the most used social media platform. So much so that a friend of mine who works a lot in Myanmar told me just the other day that if you ask somebody in Myanmar, are you on the internet, often they'll say: yes, I'm on Facebook. In other words, in the eyes of some people, Facebook is the internet. When people search, they search on Facebook. When they look for information, they look for it on Facebook. Facebook is a source of news and information, and therefore has an enormous amount of influence. And unfortunately, at the same time, there has been something perhaps like Kenya in 2007: a rising tide of highly inflammatory speech, offline and online, in particular inciting Buddhists against Muslims in Myanmar, including a group of Muslims called the Rohingya. These terribly ugly Facebook posts are examples of that.

This brings us to a couple of last points, and then I'll finish and turn to questions. Platform architecture can have an enormous impact on responses and on how people behave. For example, the social reporting flow that has proved so useful in some contexts, and that can greatly improve behavior in some contexts, does not work so well in Myanmar. First of all, in a country that is still only emerging from a military dictatorship, the implication of "reporting" is entirely different; it can be terribly pejorative and frightening, and therefore people often hesitate to do it. And secondly, when the platform helpfully asks whether you would like to engage first with the person who posted a message like this, that would often be the last thing you would want to do, since you could be putting your life in danger, or at least think you might be. So the reporting flow that is so useful in one normative context can have a totally different effect in another. Any lessons that can be learned from the sort of research I'm describing may therefore have to be carefully understood within particular human contexts, since, it should be obvious, human and social contexts are extremely different from one another.

Finally, it's worth noting that Facebook was designed for purposes very different from the ones for which it is being used in Myanmar. After all, it was not originally intended as a news source, nor was it intended to take over as a kind of substitute internet. It's also not very good for the expression of dissent. Say someone posts something like that, or something like this; these are authentic posts from Facebook, simply translated from Burmese into English. This is a cover of Time magazine depicting Wirathu, who is probably the single most famous monk in Myanmar at the moment, frequently inciting his followers against Muslims and against the Rohingya. I use it not because it's on Time magazine, but just to give you an example of the sort of person who is, unfortunately, spreading these kinds of messages. If you are a Burmese person, or any person who can read Burmese, who is appalled by some of those messages, and you see them on a Facebook page, how can you respond? You may flag or report to Facebook, but as I've already suggested, there are lots of reasons why you might not do that. What else can you do in response to a post on Facebook? You can't tell me none of you is on Facebook. Well, you can like it, right?
That's the main thing people do in response to posts on Facebook. You can like it, and that's being done in huge numbers: there are tons of likes in response to all three of the posts I have shown you. Of course, liking is not a very good way of expressing dissent, and you can't un-like it. You could comment on it. However, the Facebook page is effectively a private space, because the person whose page it is can immediately delete your comment, or festoon the page with lots and lots of positive comments, which will effectively disappear yours. So the interesting thing is that in this case a private space is functioning as a quasi-public space, and this suggests that it is also important to focus on the characteristics of the architecture of particular platforms and the effects they may have on discourse norms.

So, finally, ideas for more research. I'm happy to tell you that there are lots of anti-hatred efforts online as we speak. I took a couple of months last year and had a wonderful time collecting them. Many of them rely heavily on humor and parody, by the way, and many of them are enormously popular, if we can judge at least by page views. I couldn't find, however, a single systematic effort to study their effect: the extent to which they change anybody's mind or decrease the incidence of hateful speech. So my first suggestion is: let's do that. Study the effect of existing anti-hatred efforts online, including but not limited to counter-speech. There are many different sorts of efforts; I'm happy to discuss those, but I won't take up time with it now. Number two: let's study counter-speech and its effects in other online communities. I've shown you that Riot Games and some other gaming companies have studied the effects of counter-speech and other anti-hatred techniques within gaming communities, but that's only one type of human community. There are lots of others, such as mesh networks, which are also relatively static human communities. And then there are silos, effective silos, such as the one that I have suggested Facebook in Myanmar constitutes. And finally, I've implicitly suggested this already, but I'd like to include it in the list: it's worth studying changes in platform architecture, in various contexts, for the effects they may have on discourse norms within particular communities. So with that, I would like to thank you very much and ask for your questions and ideas.

Historically, the reason people came up with don't feed the trolls was the expectation that starving them of attention causes them to get bored and go somewhere else, or go away.

That's right.

So when do you think that does not work, and why?

I can only guess, since I haven't done proper research to study it, and it's one of my suggestions that this be studied carefully. You said that one of the reasons we began to rely on don't feed the trolls was, I think you said, the expectation, basically, that if somebody keeps speaking and nobody responds and nobody listens, they will fail to achieve their objective, which is to stir up trouble. So, first of all, there's a big difference between an assumption or an expectation and an observation. At the least, it would be good to have lots more observation, and if possible to do it in a systematic way, so that one could draw some real conclusions from it.
For example, is it true that people really fall silent when they are ignored by their targets? Or is it just that they might attack one particular target less when that target ignores them, and instead focus on others? Or do they switch platforms, or come back tomorrow? We know of so many cases of relentless, systematic harassment and abuse of people online that it's difficult to believe that don't feed the trolls works, at least on a collective basis.

The reason it came to be, and I think it started on Usenet, for those who are old enough to remember it, was the feeling that by starving the troll of attention, you avoid disrupting the entire conversation or forum. So basically it's a way to keep things on point and on purpose.

And I want to be clear: I'm not saying that don't feed the trolls doesn't work, or never works, but, number one, it would be very useful to understand when, in which circumstances, and how it works, and secondly, we shouldn't allow it to displace all other possibilities, which in my view should also be studied. Willow?

Also, with don't feed the trolls, as far as attention goes, they're still getting attention, because the audience is wide enough that they're getting positive responses; they're just not getting the negative pushback. So you've just removed any dissent.

That's also a very good point.

Sometimes if you actually identify a person with a troll, they realize they've been outed and they'll actually shy away from the conversation, because they feel like they can't do any damage to you.

Then I would ask you whether that is a person who would stereotypically be understood as a troll. In other words, if you can convince someone to stop with that kind of community shaming, is that really the person you mean when you say troll? After all, at one point in this talk I said I hadn't yet defined troll, and then I never did. So that's another task that is on my list, or that I'd love to shove onto one of yours. Yes?

I'd like to distinguish between people who are deliberately trollish and people who are inadvertently trolls, and the second group are the ones that might be shamed. I mean, all of us in a traffic jam have cussed and sworn, and if all of that were live-tweeted, we would be trolls to the max. But these people are inadvertent trolls: otherwise well-behaved, well-adjusted, normal, nice people who don't want to spread hatred in the world; they're just in a crappy situation, or frustrated. And this links to your last point. The question I have for you is: do you think the system of discussion online is fundamentally unproductive, and leads people to the kind of frustration of a traffic jam? You have these 8,000-comment threads that go nowhere, and it ends when somebody calls you Hitler or cusses. What do you get from a discussion that's 8,000 comments? I'm wondering whether the tool itself is broken.

That's a wonderful question. A partial and inadequate response I can give you is that we have so far observed, in early research on Twitter, that in a situation where there is hateful speech and counter-speech, once the counter-speakers begin using insults or slurs, all the discourse goes down to that level. So, I talked about developing a taxonomy of counter-speech; even without doing that, I can hypothesize that counter-speech is unlikely to be successful if it uses obscenities and slurs.
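As a rough illustration of how one might begin to test that hypothesis systematically, here is a minimal sketch. The tiny word list, the matching rule, and the measurement are all placeholders of mine, not the method actually used in that early Twitter research; a real study would need a vetted lexicon and human coding:

```python
# Toy measurement: once a counter-speaker uses an insult or slur, how much
# of the rest of the thread sinks to the same level?
from typing import List, Optional

INSULT_LEXICON = {"idiot", "moron", "stupid", "ignorant"}  # stand-in list only

def is_degraded(tweet: str) -> bool:
    """Crude check: does the tweet contain any lexicon word?"""
    words = {w.strip(".,!?").lower() for w in tweet.split()}
    return bool(words & INSULT_LEXICON)

def degradation_after_first_insult(thread: List[str]) -> Optional[float]:
    """Share of tweets after the first insulting one that are also insulting.

    Returns None if the thread never degrades, or if the first insulting
    tweet is the last one. A consistently high value across many threads
    would be consistent with the hypothesis that insults drag the whole
    discourse down to their level.
    """
    for i, tweet in enumerate(thread):
        if is_degraded(tweet):
            rest = thread[i + 1:]
            return sum(is_degraded(t) for t in rest) / len(rest) if rest else None
    return None
```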
And as Riot Games and Facebook found when they were tweaking platform architecture, the language used in counter-speech seems to make quite a considerable difference in the positive direction as well. That's not really an answer to your question, but I hope it is at least a correlate.

On the platform potentially being broken: there are different ways to play around with that, and this may be another thing to study, whether on Facebook or on experimental platforms. On Facebook you see the very latest posts, right, and you mentioned that people will just shove them off. But when you look at another platform, say Amazon, it shows the best reviews and the worst reviews. There are platform mechanisms to call out different qualities. So I wonder if there's a lot of space there to explore.

Yes. For example, YouTube has recently changed its options for moderating comments. If you've got a channel, you can moderate comments so that certain comments are favored, or so that anonymous comments are featured much less prominently while comments associated with real identities feature higher, and so forth. So yes, there are some interesting experiments out there, and this is perhaps a partial response now to your question: there are ways of regulating comment spaces in particular, so that it isn't just this fire hose of everything, in which the worst tends to dominate in the minds of some of the people reading. Yeah?

I moderate a political debate space in Singapore, and political debate spaces are like bait for trolls. One of the things we thought would work was removing anonymity, which was YouTube's assumption as well. But as you pointed out, Facebook is the source of a lot of hatred that's associated with people's families and friends; real identities are involved, and there are still extremely vigorous trolls there. So anonymity was not the problem, and removing it was not the solution, which surprised us.

We found the same in Kenya: removing anonymity doesn't necessarily work. Hasit?

You talked about Myanmar. I'm interested in the suggestion that there's a sort of direct causal relationship between people expressing pretty vile sentiments on Facebook and Twitter and actual violence and social problems, especially in a country where internet penetration is so extremely low; by most estimates it's far less than 5%. So does it matter that much that people are venting on Facebook, when the actual causes of the problems are probably a lot more profound?

Yes, thanks very much for the question. First of all, internet penetration at the moment is below 10% in Myanmar. However, there are two companies that have publicly set themselves the goal of providing mobile access to 75% of the population by 2015. One of them is going straight to 3G, and already lots of people are buying inexpensive feature phones from China, the same ones that are increasing the number of people online in countries like Kenya at an exponential rate. So whatever is happening among a relatively small tranche of the population in countries like Myanmar is likely very soon to be happening among a much larger proportion of the population. That's one thing. Secondly, I don't mean to imply that we know that speech online causes violence offline. It may be more of a reflection of the sort of speech that is also being disseminated offline, and that is the case in Myanmar, for example.
The sort of speech that is being distributed hand to hand on CDs, after it is given in physical speeches by monks or by political leaders, is also what is online. And the people who are online are disproportionately influential and educated and have money and so forth. So if they see discourse norms in which it is entirely acceptable, or seems to be acceptable, to say things like the ones I showed you on the slides, that may have quite some influence on them and on the people they talk with.

But most people think that kind of speech is unacceptable; most of us here think it's unacceptable, and I would suggest that people in Myanmar are the same.

Well, there's also a difference between thinking it's unacceptable and feeling that one may safely say so in what is seen to be a public space. It seems to be the case that there are quite a lot of people in Myanmar who disagree with those messages but no longer feel that it is safe to say so. And that represents a shift in discourse norms in a wrong and potentially dangerous direction. So I'm not saying, Hasit, that shifting discourse norms online would necessarily solve the problem, but it might at least slow the worsening of the problem. And at least it seems worth a try. Yep?

Under what circumstances, and with what kind of exposure, do you see examples of counter-speech occurring? One of the things I've been trying to get a handle on is that if you click on almost any article, any political blog, or even a CNN article, you'll see a long list of hate speech going down the page, but not really that much response. It seems like response only occurs in certain types of environments or certain types of media, and I'm wondering if the implication is that there's just a hard threshold of desensitization, that people just don't care. If you look at a sports blog, at the Michael Sam story for example, a football player coming out, you won't see much counter-speech in that sense, or you'll see a little, and you'll see it more from the media than in the comment sections, whatever the position, justified or not. There's a lot more hate speech there that's not responded to. So: under what circumstances?

So, I am also very curious about that, and I have also observed that counter-speech is surprisingly common on some platforms and in some circumstances. In fact, I was surprised to find so much of it on Twitter in response to certain surges of hatred and racism, like the Nina Davuluri case. I don't know why it is so prevalent in some cases and absent in others, and I would love to know. Is it, for example, that people don't think it's likely to have any effect? My guess, and believe me, it's only a guess, I haven't done this yet, is that if we did a poll and asked people, do you think you're likely to have an effect on someone who has tweeted or posted something racist or hateful, most people would guess that they won't be able to accomplish anything with counter-speech. Pardon me?

What if you asked it from the other point of view: would you care if someone posted something hateful? People ignore hate speech; they're kind of desensitized to it, so it's not as big a deal if you see it in the comments. You kind of scroll over it, or it's like: oh, it's troll speech, right?
If I'm on Reddit or on a political site and I'm looking at the comments, I literally just gloss over it. It's not something that sticks out to me.

You may, but there are also lots of other people who haven't succeeded in filtering it out, who are still...

Yeah, I'm kind of curious who those people might be.

Yeah, me too. Thank you. And then we're coming back to this side of the room; I'm sorry, I haven't forgotten about you. Go ahead.

You said initially that in the real world, people would say things in one context that they wouldn't necessarily say in another. In a closed room with people they think agree with them, they might say one thing, and not say it in public. The question I have is that people are posting, to some extent, knowing or expecting a certain community to see it. If you're posting on Facebook, you expect your friends to see it. If you post to a group, you expect just the group to see it. If you're posting on Twitter, you may use a hashtag, so either people who follow you or people who follow that hashtag will see it. And if you're posting a comment on a news website, you know it's going to be seen by whoever. So on some level, I wonder whether people are intentionally posting on sites to be provocative, and also intentionally posting what they don't think will be seen beyond their own community. If so, it's the same old thing, just in an online context.

I think the answer is yes, both, in different circumstances. For example, you saw that the guy who posted the Miss America tweets I showed you was surprised that he got so much response, surprised that his tweet was reported on Sky News. From studying those tweets and quite a few others around that case, including his and the ones in response to him, my sense is that he didn't intend to be speaking to such a broad and varied audience. Not everyone is the most sophisticated user. And that's why I say there's an opportunity: he heard from all kinds of people whom he would never have encountered offline, and eventually he was influenced by those people in what seems to be a very positive way. So again, it's not a solution to hatred in the world, but there might be some quite useful tools that haven't yet been properly studied or discovered. Somebody over here?

You were kind of answering my question as you went along, but I'm wondering: how does all of this function as a form of democracy?

Well, maybe one way of sharpening an answer to your question is to note that in the United States, one of the most common theoretical justifications or explanations for our First Amendment jurisprudence is the remark by Justice Louis Brandeis, in a case decided in 1927, that the remedy for bad speech is good speech, or more speech. By the way, he qualified that remark: he said, if there is enough time to correct the harm, and so forth. But Justice Brandeis gave us a policy response. Interestingly, however, we really don't have much evidence about whether and when and how that remedy is successful. It's our collective intuition that it works pretty well. Because, after all...

There's a huge conflict at the moment over the spread of freedom of speech, or speech in general. But as time goes by, there are more models of democracy, certainly in this culture, and in other places as well. I believe in it.
Well, in this country in particular, but of course not only in this country, freedom of speech is understood to be an essential requirement for democratic life and, in fact, democratic governance. So then we have to find a way to make that work, I would say also in light of the observation that very hateful, offensive, and even threatening speech can silence people. So functional freedom of speech can be affected in many different ways by discourse.

And it shows that there are consequences. You can't yell fire in a crowded theater. Now you have to find a way to do that.

So this is sort of a three-part question; feel free to answer any part of it you want. First, it seems really problematic to have this discussion of trolls and non-trolls without defining either, right? In multiple examples, you said: this seems like a troll response, this doesn't seem like a troll response. So clearly there's some framework you're using in your head that indicates which you think it is, because your responses indicate that. So I'd be very interested to know what it is; even if you don't have a formal definition of troll or non-troll, what criteria are you using to distinguish them? Second, picking up on what the gentleman over here said earlier about the domains within which people would be willing to say certain types of speech: you gave a bunch of examples, like rape jokes maybe being told only among men, and I think historically a lot of these actually point to power differentials; they're not just private domains. It's about who actually has the most power: you can say these things in front of people who have less power, and that's part of enacting your power. So it seems like what might be new online, if anything is, and if it even makes sense to separate online and offline anyway, is that there are different power differentials. And that might actually be a really good thing. And then finally, you mentioned that it was easier to study the effects of speech in online spaces. How do you study the effects of speech, and why would it be easier?

Okay, thank you. Trolls and non-trolls: I'm not sure how useful the term troll is, and so I use it mostly tongue in cheek. But it's still a very thought-provoking question, so I will give it a stab. Trolls are people who are so deeply committed to their racist, xenophobic, hateful, et cetera views that engaging with them is very unlikely to change the way in which they express themselves. And what I'm suggesting, with the data and also with some of the anecdotal evidence I've so far seen online, is that there is a common misperception that the majority of hateful, offensive, and inflammatory speech is produced by just those people. Non-trolls, then, would be other people in whom other factors are at work, such as ignorance, fear, lack of engagement with groups of people different from them, and so forth. That's an attempt to answer the first question. And with respect to the power differentials: yes.
I think that's a very useful observation, and I agree that it can be very good that power relationships are different online than they were, or still are, offline in some cases. So, yes. The last one, remind me? Ah, the effects of speech. Of course, that's also a very large category and can be defined in many different ways. One of the simplest is the capacity of speech to influence the way someone else speaks. So if someone who posted a hateful or angry tweet about Nina Davuluri reads tweets from other people imploring him to change, and then apologizes, then, if he has been affected by that speech online, something which of course we can't assume, maybe his grandmother sat him down in the kitchen and lectured him, but if he has been influenced by speech online, we have been able to observe that by studying tweets. And that's an easy way to find such a person, and ultimately to find lots of such people, with a view to examining how interactions between people online can influence the online behavior of those people.

On the idea of trolls: a lot of people use a different kind of framework, in which trolling is actually a kind of art form, or in which other people are seen as the puppets the troll is playing with.

Yes. You asked me for my definition.

Yeah, but my point is that other people use different ones. When you use that term more broadly, different people might have different ideas. So I think it's really worth specifying exactly what you mean when you say troll, because it's open to misunderstanding, and the types of counter-speech responses you would give to these different kinds are very different.

Point very well taken.

But it also seems, going back to what Hasit said, that even if you can have an impact on the discourse, if you can have discursive effects, that is fantastic in some ways, but we don't know what it might mean in terms of physical behavior, in terms of how people are going to act. I mean, there's this great anecdotal study of mystery writers, who are supposed to have one of the lowest incidences of homicide, presumably because they're writing out all of their angry thoughts. So maybe these people who are venting on Facebook or via Twitter, maybe that's actually preventing violence. And if we see a discursive effect where people stop venting but still feel the same, you've actually done a disservice, right? And, since you also mentioned that you have lots more research you're planning to do, if I could ask one last follow-up question: what are some of the specifics of your research? What are you going to tackle next, and how are you going to determine impact, or whatever your next challenge is?

For example, I'm about to start a two-year project looking for cases on Twitter like the two that I showed you. That is to say, cases in which there has been a surge of hateful speech, with counter-speech responses, in which some of the original accounts have recanted or apologized. We'd like to collect lots of those examples, with a view to studying why, and in what circumstances, counter-speech seems to be successful in influencing people who originally produced hateful or inflammatory tweets. And secondly, we would like very much to study the effect of this counter-speech on the rest of the audience, on the rest of the community.
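Before turning to that second question about audience effects, here is a minimal sketch of how step one of such a project might look: surfacing candidate recanting tweets for human review. The cue phrases are illustrative, not a real codebook, and there is no real collection API named here; `tweets` stands in for whatever pipeline such a project would actually use:

```python
# Toy sketch: surface candidate "recant" tweets for human coders.
RECANT_CUES = (
    "i take it back",
    "what i said wasn't right",
    "sorry for being",
    "i apologize",
)

def looks_like_recant(text: str) -> bool:
    """Crude surface check for apologizing or recanting language."""
    lowered = text.lower()
    return any(cue in lowered for cue in RECANT_CUES)

def candidate_recant_cases(tweets):
    """Yield (user, text) pairs whose text matches a recant cue.

    `tweets` is any iterable of objects with .user and .text attributes,
    produced by some hypothetical upstream search or streaming step. Each
    match would then go to human coders, who reconstruct the full exchange:
    the original hateful tweet, the counter-speech it drew, the recanting.
    """
    for tweet in tweets:
        if looks_like_recant(tweet.text):
            yield tweet.user, tweet.text
```

Keyword matching like this would of course over- and under-collect badly; the point is only that public tweets make the collection step tractable at all, which is part of why the effects of speech are easier to study online.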
What effect is it having on people who are reading these exchanges? We guess that the huge incidence of hateful speech is having an impact on people, although that too is considerably understudied. We'd like to know whether the counter-speech itself is also having an impact on other people: what are the perceptions of these types of exchanges? Twitter, first of all, is very easy to study because it is public, and we're already seeing that there tend to be these sudden spikes, sudden surges of expressed anger and hatred, and then, in quite a few cases, we were surprised to find, a sudden surge of response as well: some of it equally angry and expressed in obscenities, and some of it expressed in more gentle and conciliatory terms. Which brings me back to the point that there are many forms of counter-speech which, as far as I know, haven't been examined much.

There's time for one more brief question. Wait, no, you choose; I can't do that.

Quickly, I have two fast comments and two questions about areas of research. One comment is that I see a lot of hate speech and feel obligated to do something about it, but I don't, because I don't have time to respond to that many issues; there's too much of it. The second comment is that I've seen postings on Facebook that I consider clearly anti-Semitic and have reported them, and Facebook responds that it's not a violation, which is disappointing. The first question I have is: have you, or anybody else you know of, done research on the sort of cyberbullying that has resulted in suicides of teenagers?

Yes, there's a huge field of research on cyberbullying now, because there is such a lot of concern about it, particularly in the United States and generally in the West. That's not to say there isn't any concern about it elsewhere, but it is one of the forms of objectionable speech online that has received the most attention and study in the United States, so yes, there's quite a bit of work on that.

And the second area I want to ask about: you talked a little bit about anonymity. If you consider a continuum from anonymous comments to pseudonymous comments to purported real names to fully authenticated real names, do you know whether anybody has done research on where along that continuum one might have an effect of controlling speech? Not controlling, but...

Do you mean by requiring people not to be anonymous?

Well, yes. I'm saying there's this continuum from anonymous to highly authenticated, and asking whether there is any research that correlates behavior with points along that continuum.

There are quite a few projects looking at the effect of anonymity or quasi-anonymity on online behavior; if you like, after the talk I can give you a couple of references. I don't think, though, that one can make a general blanket statement that, for example, this spot along the continuum will diminish hatred most, because, as I've tried to suggest, different contexts, online as offline, function differently and have different influences on behavior, both of the people posting hatred and of the people reacting to it. Thank you.

Thank you all for coming, and please join me one more time in thanking Susan. Thank you.