Hey, thanks for having me. Yeah, this is also a happy panel, I'm sure, at home at least. It's my pleasure being here. It's my second time here, and it's great meeting such intelligent people doing good work. I'm here with my fellow speakers from academia, and I hope to provide a slightly different perspective; I believe I'm the only reporter in this group.

A little background: in 2016, I was assigned a side beat covering hate crimes and discrimination in the US. I wrote dozens of stories over the years, ranging from mosque bombings to targeted assaults, murders, and other such incidents. That first year, I found myself covering the shooting death of an imam, a Muslim religious leader, in Queens. As I live-tweeted scenes from his funeral, my phone began to light up with notifications. I was getting hateful tweets directed at me and at minorities at large, and it just snowballed from there. If I could get the next slide.

What you see here is one of my earliest encounters with hateful content on the internet. It was shocking at first, but over time, as I continued to report on these issues, it became normal to encounter images and words like this. As I planned for this session, I looked back on all my years of reporting, and I kept coming back to this tweet. I think it can impart what I see as some of the core issues related to hateful content online. It also represents the challenges faced by social media companies like Facebook and Twitter, as well as by ordinary citizens. It's hateful, it's crass, it's alarming, sure. But what makes it so damaging, in my opinion, is that it's public. It's visible to anyone online; in this case we're talking about Twitter. It's the digital equivalent of a poster thrown up on a wall along a busy street.
And even though it's public, this sort of content has, for one reason or another, proven difficult for social media companies to tackle. Just like the challenges of abating disinformation and lies online, hateful content is hard to monitor, catch, or, ideally, squash before it's even viewed, let alone shared, thousands of times over. So questions start arising. How does one police this sort of content? How is this content not given a stage? If I could get the next slide.

The second issue I learned is exemplified by this tweet: the problem of anonymity. Anonymity is what allows much of this hateful content to exist online. The person who tweeted this, hiding behind a Pepe the Frog avatar, is by and large anonymous. That allows them to tweet with, and I'll be kind here, such venom. Part of the reason they feel safe in doing so is precisely because no one knows who they are, which leads to my next point.

With this anonymity comes a lack of accountability. The lack of accountability for a person or group spreading hate online is, at its core, what allows this toxic ecosystem to thrive the way it does today. If this person gets banned, so what? It's a fake profile; they'll just create another. Nothing else happens. Are there any real-life ramifications? Absolutely not. And if you look at some of the most hateful, darkest areas of the internet, you'll find anonymity everywhere. In fact, you'll find platforms that base themselves solely on the premise of anonymity, and obviously here I'm referring to anonymous message boards like 4chan, 8chan, and the like. Time and time again, in my reporting and in the reporting of so many other great journalists, sites like these are often the genesis of much of this hateful rhetoric. These places also happen to be the genesis of the disinformation campaigns you see going on, and of the targeting of certain reporters and news outlets. That leads to more questions. What are the potential outcomes of this hateful content online?
What can it lead to? Perhaps the most disturbing aspect of online hate, to me, is the real-world outcomes, events that transcend the virtual. This can take the form of school bullying, hateful incidents on campus, which are on the rise, the targeting of religious institutions, or, in its worst form, mass shootings. In the mass shooting in El Paso, Texas, last year, the suspect is believed to have posted a manifesto full of hatred toward immigrants right there on one of those message boards. In the New Zealand mosque shootings, the suspect not only live-streamed those killings but also posted his manifesto on a similar message board. If I could have my last slide, please.

This is an article I wrote after those attacks in New Zealand. That shooter's manifesto was filled with hate, but it was also riddled with memes and references that were half serious. There were inside jokes in there, some 70 pages I believe, meant to amuse people who live in this alternate media ecosystem, places like 4chan and 8chan. But the manifesto also revealed itself to have a deeper intent: it was designed to unload a great deal of misinformation. It was literally packed with all sorts of erroneous claims, claims the shooter knew people within his ecosystem would understand to be jokes, but that a layperson, or a reporter covering this for the first time, or a breaking-news reporter, would believe to be true.

So how do we tackle this, and how do we go on from here? More questions. I'm a journalist; I'm asking questions, not answering them. Should news outlets report on hateful content, or does that just give shooters the attention they want? How do social media companies or internet service providers deal with the issue, if they're trying to handle it? How do they do it better? Does the government have a role to play? If so, how much? Where is the line between privacy and preventing the next mass shooting? It's not my job to theorize on these solutions.
I only know that, at the very least, news outlets and reporters, at both the local and national level, need to be cognizant of how to report on incidents like this, and also of how not to report on them. I believe that can only happen through an understanding of the dynamics of modern-day hate speech on the internet. People in the news media must have a basic understanding of that world. And to be honest, hateful content online sits at the nexus of so many other issues, whether it be bullying, as I mentioned, disinformation, politics, outside state actors manipulating our elections, or the demonization of races, religions, or people of certain sexual orientations. So much of it is propagated there. And that's why, as we head into the next election, and as more of people's lives are lived online in ever-increasing ways, large and small, these questions are of the utmost importance. That's also why I look forward to hearing my colleagues' research on this matter; I know they've done a great deal of work on this precise issue. Thank you.