So, my talk kind of touches on some of the stuff that I need to mention, but I'll also be discussing some other things as well. So, I hope that everyone is enjoying the conference so far. This event is one of my favorites, and I am thrilled to be here in Malmö at the Swedish conference, the world's best conference. And this is not the first time that I have been to the conference. So, for those of you who were here last year, you may have seen me speak in the other room about internet memes at the collaborative innovation session. And I got to do that because my biggest claim to fame thus far is that I wrote my master's dissertation on lolcats, and one of the outcomes of that was quite a bit of press coverage. And along with that press coverage came a bunch of nasty internet comments. Now, you all just heard Anita speak about her experiences, and using the common vocabulary that pervades our public discourse on this topic, what both of us experienced could be considered online harassment. Now, there are some similarities between the two. For example, many of the critiques that were directed at me were also personal attacks related to my gender. But overall, my experience with what I call online negativity doesn't even come close to hers. It was an entirely different story. And this disconnect, the fact that our very different experiences both fall under the same umbrella term, indicates to me that we need to change the conversation that is happening around this topic. And this is at the heart of what I'm going to speak to you about today. So here's a little bit of fun history for you. The first academic article that addressed online negativity was published in 1984 by researchers at Carnegie Mellon University, and it looked at a decade or so of online behavior. So for those of you who might think that online aggression is a relatively new phenomenon, there's a choice quote that I'd like to share with you.
"Observers of computer networks have noticed uninhibited behavior for years. In the computer subculture, the word 'flaming' refers to the practice of expressing oneself more strongly on the computer than one would in other communication settings. The Defense Communications Agency, which manages the 12-year-old ARPANET, has had to police use of the network bulletin boards by manually screening messages every few days to weed out those deemed in bad taste." So not only was this sort of thing happening back in the 1970s, it was happening when the Internet was still ARPANET. However, it's important to note that I'm not telling you this to argue that online environments are inherently negative or conflictual. In fact, I'm going to be arguing the opposite: even though this problem exists within the online arena, it's actually not an online problem. It's a larger cultural and systemic social problem. So over the past few decades, there have been many attempts to address the issue of online negativity, aggression, and abuse. However, nothing really seems to be working that well, and some people even argue that things are getting worse. I don't think that we have an unsolvable problem on our hands, but I do think that there are three misconceptions surrounding this issue that are impacting our ability to effectively address it: the labeling problem, the anonymity problem, and the technological determinism problem, otherwise known as the "computers are turning us into animals" problem. Each of these issues involves a lack of understanding or misrepresentation of the beast that we are truly facing. When it comes to online harassment, we keep trying to fix the wrong problem, and it's not getting us anywhere. I like to think of it this way: if your house is cold because your window is broken, replacing your boiler is not going to help. Okay, so first up, the labeling problem.
Right now, particularly within the press, there are two terms that we tend to use as catch-all descriptors for antagonistic behaviors in online spaces, and I need to mention them before we go any further: cyberbullying and trolling. Now, cyberbullying and trolling are definitely two categories of behavior that can be extremely problematic for some people. However, not every negative, critical, antagonistic, or hateful speech act can actually be defined as cyberbullying or trolling. In fact, trolling is a label that describes an entire set of behaviors, some of which are very problematic and some of which are not. Even the term online harassment, which we are using as an umbrella descriptor for this session, has a very specific definition. Now, I'm an academic, and we love to get into it when it comes to word choice and defining parameters. I mean, we will write entire papers debating the definition of a single term. However, what I'm talking about here is not a matter of semantics, it's a matter of policy implications. These terms are being used to group together a wide variety of behaviors that are very different animals and require different solutions. When we lump all of these behaviors into one or two categories, it undermines our attempts to protect people who are subject to severe harassment, overreaches and chills speech that should actually be protected, or both. An excellent example of this is section 127 of the United Kingdom's Communications Act 2003. This is a piece of legislation that hasn't quite worked out as intended. The Act penalizes people for posting public electronic communications which are "grossly offensive or of an indecent, obscene or menacing character." The way that this law has been enforced has been referred to as using a steamroller to crack a nut.
In two notable cases, one man was arrested for posting an obvious joke about a bomb threat on Twitter, and another was sentenced to three months' imprisonment after making tasteless and inappropriate jokes about an abducted child on Facebook. Now, I'm not saying that there should not be legislation to protect people from menacing content, far from it. However, there is a massive difference between content that could be interpreted as offensive, such as a gross-out joke reposted from Sickipedia, which was the situation with the man who went to jail for what he posted on Facebook, and content that actually poses a threat or causes harm. Lumping all of that in together, like section 127 does, effectively guts the law because it overreaches. When we refer to specific types of behavior with vague language or attempt to create catch-all laws, we often put ourselves in a position where we fail to address the problem we set out to solve and end up creating new problems in the interim. Next up, the anonymity problem. Anonymity has long been the bête noire of the online antagonism problem. For years, public opinion has held that because people don't have to use their real name online, they feel free to do and say whatever they want, using the cloak of anonymity to carry out their misdeeds. This has led industry leaders like Mark Zuckerberg of Facebook and Eric Schmidt of Google to push for a real name policy, claiming, among other things, that it will increase civility online. In fact, this was one of the talking points that Facebook used when encouraging people to implement Facebook Connect: that since people's comments would be attached to their real names, they wouldn't dare be crude and mean. Now, I don't know if any of you have been on Facebook lately, or on a site where Facebook Connect is being used, but clearly this has not fixed the tone of discourse.
Back in 2009, I was talking to a friend of mine who worked at a blog network. They had just implemented Facebook Connect in their comments section, and I asked him how it was going. His response was: it hasn't really changed anything. People are still dicks. They're just dicks under their real names. There are a few problems with looking to a real name policy as a fix for online aggression. To start, as I just mentioned, people are still jerks under their real names, but the crazy part is that it can even make things worse. Three scholars from Carnegie Mellon published a study last year showing that real name identification actually increased the frequency of expletives in comments for some user demographics. But more importantly, eliminating the ability to communicate anonymously can actually hurt vulnerable populations. It can prevent people from searching for potentially embarrassing health information that they need. And eliminating anonymous communication can even cause the harassment that it's looking to prevent. For example, say that a gay or lesbian person in a socially conservative area was looking to connect with other gay people. In the majority of U.S. states, if they were forced to do that under their real name, effectively outing them, they could be legally fired from their jobs, not to mention being the subject of abuse and harassment. So do people engage in hurtful behavior under the guise of anonymity? Yes. Are they doing it simply because they are anonymous? No. Are there better ways to address this issue without losing out on the many benefits of anonymity? Yes, and I will touch on this in a little bit. So finally, the technological determinism problem, or why moral panics aren't helpful. Technological determinism is a school of thought that operates under the premise that a society's technology drives the development of its social structure and cultural values.
When it comes to online harassment, a very common and very technologically determinist point of view is that computers and the internet are turning us into bad people and that the online environment is causing this problem. The problem with this argument is that it completely ignores the fact that meanness and harassment do not happen in a vacuum. The problem here is us. Technology is not the cause. We are the cause. Technology is simply a platform. Now, this is not to say that technology does not play any role in this. The role that it plays, however, is one of facilitation and enabling. Before the internet became ubiquitous, harassment was localized, which is to say that the geographic restrictions of physical space prevented the types, and particularly the massive scale and instantaneous nature, of the flash mobs and pile-ons that Anita was talking about and that you see online today. The internet does not have borders, which can be great when you want to see the latest cat video from Russia, and really terrible when you are the target of a decentralized or distributed network that is looking to make your life miserable. It's important to point out, however, that the platforms where this behavior is happening are built to encourage the spread and distribution of content. In fact, this is their business model. Facebook, YouTube, and Twitter make their money off of you and your friends seeing as many posts as possible. And the infrastructure that supports good sharing and virality also supports bad sharing and virality. For many media companies, meanness is profitable. Controversy sells. If you look at some of the most popular websites, they are just like sensationalist newspapers that use celebrity and political scandal to make money. Whitney Phillips, who is one of my absolute favorite researchers, has pointed out in her work that the tactics used by trolls actually mirror the behaviors and the rhetorical strategies of the American mainstream media.
So, yes, these platforms do add fuel to the fire. But the actual fire that we're talking about here is the cultural logics and beliefs that underpin all of this behavior. Content is not a virus. It doesn't infect us and use us as hosts. We actively pass this content on, and we often do it because it speaks to us on some level. If offensive behavior is happening, it's happening because those who are engaging in it think that, on some level, this is okay. And that is not an internet problem. We live in a culture, at least in the United States, where women, people of color, and gay, queer, and transgender people are still treated like second-class citizens by law enforcement and our legislative bodies. According to the National Institutes of Health, 50% of university-age women in the United States have experienced some form of sexual assault. Half. And yet we are surprised and shocked when rape is used as a threat in online environments. We need to stop acting like there is a difference between the online and the offline worlds. The internet is merely an infrastructure that undergirds pre-existing offline belief systems. If we want to stop online harassment, we need to change the cultural values that fuel this behavior. So you're probably saying to yourself: okay, this seems pretty hopeless. Social change is slow. What do we do in the interim? The easiest answer to this question is to heavily moderate your community sites. In 2011, cultural critic Anil Dash put up a great blog post in which he argued that websites and portals are responsible for creating civil environments. At this stage in the game, there are some proven strategies for keeping conversation civil in online communities. At its most basic, it involves setting clear rules for your community and having a paid moderation and community management staff to enforce those rules. Of course, when it comes to the larger issues, that's a much harder fix.
If we want teenagers to stop ripping each other to shreds on their Facebook walls, we need to stop ripping celebrities and other public figures to shreds in the press. If we want women to be treated with respect and equality in online environments, we need to do that in our everyday lives by reducing institutionalized misogyny, such as unequal pay and unfair hiring practices. If we want torrents of racist and homophobic epithets to stop filling the comments sections of websites, then we need to treat people of color and gay people with equal protection under the law. And if we as a society can't do that, or don't want to do that, then we have a larger problem on our hands than a bunch of nasty internet comments. Just as we have the web communities that we deserve, we have the society that we deserve. And as Anita said, even though we might not be publishing nasty comments, we are all responsible for making the changes that we want to see, whether that's on the internet or not. Thank you.