My name is Ceci Correa, and I'm here to talk to you about the psychology of fake news: why people believe fake news, and what we can do as developers, designers, and technologists to create experiences that are more honest and transparent for our users.

I want to start by talking about Orson Welles and his 1938 broadcast of The War of the Worlds. If you were here earlier during the break, you were listening to Orson Welles's rendition of The War of the Worlds, which was broadcast on October 30th as the show's Halloween episode. Now, Orson Welles, being the genius that he was, didn't want to do just any rendition of The War of the Worlds. He wanted to do something different, so he thought: wouldn't it be great to tell this story as a series of breaking news bulletins? It would be a news broadcast, told as if it were happening right now.

The night of the broadcast, they started off with a pretend show: there was music, there was news about the weather, and then came the breaking news that a UFO had landed in New Jersey. And then the panic started. It was reported that because of the broadcast that night, people panicked in the streets, packed their bags, and fled for safety; there were even reports of people attempting suicide because they were so afraid this was really happening. A study that came out in 1940 tried to put numbers to the event, and it estimated that about one million people had listened to the broadcast and been affected by it. After that, this became a cautionary tale about the power of the media, and it has been in textbooks on media and critical studies of the media ever since.

That was then; this is now. I want to play a clip for you, and the first thing I want you to do is close your eyes and just listen to the audio: "On the back end of my presidency, now that it's almost completed, although there are all kinds of issues that I care about, the single most important thing I can do is..." Now, when you listened to this, who did you think it was? Barack Obama? Okay, now watch his mouth: "...because our parties have moved further and further apart and it's harder and harder to find common ground. So when I said in 2004 that there were no red states or blue states, there are the United States of America, I was wrong. On the back end of my presidency, now that it's almost completed, although there are all kinds..."

Here you can see how that clip is being made. They are using facial-manipulation technology to take an existing video of President Barack Obama and move his mouth, and they're combining it with a program from Adobe called Project VoCo: if you feed it about 20 to 40 minutes of someone talking, you can then just type whatever you want them to say, and VoCo will give you an audio track of that person saying those things, in that same voice and that same cadence. If you want to find out more about this, I highly recommend you go to Futureoffakenews.com and listen to the Radiolab episode entitled "Breaking News"; they talk a lot more about this technology.

One more thing I want to point out about that episode, just to talk about ethics: one of the biggest questions they put to the creators of these programs was, are you worried about what people are going to make with this technology? And they were not.
They felt that people would be smart enough to figure it out: if people know this technology exists, they'll have the critical-thinking skills to work out whether something is fake. But what actually happens is that it creates plausible deniability. If you catch someone doing something and you have video evidence of them doing it or talking about it, they can easily say, well, actually, that's a fake video. It creates a society where nobody can believe anything.

So this is the post-truth world. This is where we live right now. We are no longer at a point where we can stop and debate whether we could or whether we should. These things are here right now. They're not going away; we can't turn back time and hide this technology. It's here to stay, and what we need to do is figure out how to address the content that gets generated with it, including content its creators never intended. So how do we do this? We need to understand why people believe fake news in the first place, and to understand that, we need to understand the psychology behind fake news. That takes two things: understanding how beliefs are formed, and understanding how we experience knowledge.

Let's start with beliefs. What I'm going to describe comes from the book Thinking, Fast and Slow, a great book that goes much deeper into how we think. Essentially, we have two modes of thinking. System 1 is fast, automatic, and intuitive; it's like riding a bike with no hands and your eyes closed. These are the things that come to us easily; it's also known as the lizard brain. Then we have System 2, the more analytical part of our mind; that's where the "slow" in thinking slow comes from. We use both of these with beliefs. We use our slow System 2 to form our beliefs, but how we act on those beliefs comes from our automatic lizard brain. So when we react to something based on our beliefs, the reaction is really coming from a feeling.

If we look at two headlines from two different fake news websites, which are actually owned by the same person, we can notice something. First, look at the language: they were clearly written by the same person, because both use the same phrase, "giving the boot". But also, each headline ends by asking you for a feeling: "Are you glad?" "Prepare to be infuriated." Now, this is not a new tactic. It used to be called yellow journalism; it's been around since the 1890s, and it has stuck around because these kinds of headlines work. Sensationalist headlines work because they appeal to our feelings, and we respond with that intuitive lizard brain of ours. The headline asks for a gut, instinctual response, and we give it almost automatically.

So where do feelings fit with beliefs? Again from Thinking, Fast and Slow: when faced with a difficult question, we often answer an easier one instead, because the easier one can be handled by our fast System 1. For example, when asked "how should financial advisors who prey on the elderly be punished?", we might actually answer "how much anger do I feel when I think of financial predators?"
So we answer an easier question instead, and that question is deeply tied to feelings. And we can often confuse feelings for knowledge. Why is that?

Let's talk about knowledge. For this discussion I want to draw on The Knowledge Illusion, another great book, this one about the illusion of knowledge. The authors describe an exercise; you might have seen some of these pictures online. The exercise had three steps. They would ask someone, how much do you know about bikes? And the person would say, I ride a bike every day, I know a lot about bikes. Then they would ask them to draw a bike, and they would fail spectacularly, kind of like that. Then, after going through that exercise, they would ask again: how well do you know bikes? In most instances, people now ranked their knowledge of bikes lower. What happened is that people experienced their lack of knowledge. They hadn't been aware of how little they knew, and if we're not aware of what we don't know, then we simply don't know that we don't know. Right?

This is actually by design. Knowledge is shallow by design, because if we walked around aware of everything we don't know, our brains literally could not function. The way the authors of The Knowledge Illusion put it is that if people had needed to know the ins and outs of metallurgy before ever picking up an axe, the Bronze Age would not have amounted to much. So not only are we unaware of what we don't know, but those strong feelings we react with aren't based on deep understanding either. We might feel like we know a lot about bikes, and we might not realize how little we know until we're confronted with it. And our beliefs deepen as more people believe them together.

To recap, which I think is really interesting: those deep, strong feelings about things don't require us to know much about them. They don't come from deep understanding, but when we believe things as a group, they get reinforced. A lot of what we do is believing together; it gives us that sense of belonging. And when you think about fake news and how it spreads, this is really key, because you start to see how the spread happens. We start with a news story that evokes some sort of emotional response. We immediately react to it, because again, we're instinctually reacting to that feeling. And if that feeling is part of a larger sense of belonging to a community, we're going to go and share the story, which further reinforces the belief, whether it's true or not. And once that information is out there, it's much harder to prove it's fake, because to change someone's mind, they can't think with their instinctual System 1; they have to think with their more analytical mind, and that takes brain power.

So this is how fake news spreads. There was a really good article in The Atlantic recently about the largest study of fake news on social media done to date, by MIT. The quote says: it seems pretty clear that false information outperforms true information, and it might actually have something to do with human nature. So again, if you think about those psychological factors, it's almost as if we're wired to respond a certain way to that type of information.
Based on that data, what they found is that a false news story spreads about six times faster than a true one.

To talk about that, I want to go back to this: The War of the Worlds, 1938. As I said earlier, in 1940, a couple of years after the original broadcast, a scholar published the study that established most of the story we still tell today: that about one million people were affected, thought the invasion was real, and panicked. Now, this is actually not true. And confession: I'm not a CS major or an engineering major; I was a film major. So I'm very familiar with the body of work of Orson Welles, and when I found out this didn't really happen the way I thought, I was really disappointed.

What actually happened is that newspapers at the time were struggling against radio; they were losing a lot of ground to it. So when this broadcast happened, they jumped at the opportunity to criticize radio. That's where those panic headlines came from, but they died down within a day or two. They died down really fast, and it wasn't until that 1940 study claimed a million people were affected that people started talking about a mass hysteria that never really happened. The reason the story may have grown out of proportion is that, as time passed, people who saw those headlines but never heard the broadcast tricked themselves into thinking, yeah, I think I heard it.

Then new data surfaced around 2003, and it turns out the supposed panic was actually tiny. On the very night of the broadcast there was a telephone ratings survey, the kind where they call your house and ask what you're listening to, and only 2% of respondents said they were listening to the Mercury Theatre. None of them described it as a news broadcast. If people really had mistaken the show for the news and believed an invasion was happening, the number describing what they heard as a news broadcast would not have been zero. Not only that, but the stories about people running into the street or even attempting suicide? None of it happened. There are no police reports that corroborate that story.

So again, as time went on, people started to buy into the story. You can see how, especially once that 1940 study put out the figure that a million people had listened and panicked, we started believing it together, and it became a myth. Fake news stories elicit that emotional response. When I heard this story in high school and college, it was so sensationalistic, it really grabbed my attention. You think, wow, I can't believe that happened. And I think that's part of why the myth was perpetuated. So, can we disprove a fake news story once it's out there? Turns out: sort of.
Go back to that exercise of drawing a bike, but instead of asking people to draw a bike, ask them to explain a policy, following the same steps. You ask, how well do you know this policy on a scale from 1 to 10, 10 being the highest? People might say, oh, I'm very familiar: an eight, maybe a nine. Then you say, okay, explain to me step by step how that policy works, and they try and can't. At the end, asked again how well they know the policy, they rank their knowledge lower. So in most cases people can come to realize they don't actually know as much as they thought, but they have to experience it. It turns out, though, that we can't just say great, let's do that everywhere, because people don't like feeling dumb. At the end of that experiment, when people were asked to explain the policy and couldn't, what they remembered was not that they had learned something; they just remembered the feeling of being dumb. So the answer can't simply be: get people to realize they don't know something, get people to realize this thing is fake. To go back to the Mercury Theatre, War of the Worlds example: I was really bummed when I found out my beloved Orson Welles didn't actually have the impact I thought he did. We don't like that feeling.

When I've given versions of this talk before, people often suggest that if we could just get others to see a different viewpoint, we might be able to convince them with facts, figures, et cetera. The Guardian ran a great experiment trying exactly this back in 2016: they asked people with opposing viewpoints to swap Facebook feeds, for, I want to say, maybe 30 days. You would think people might come out of that feeling, wow, I really learned something about people who believe differently than I do. It turns out it backfired. For some of the participants, checking out the other bubble only confirmed their decision to stay in their own. So what we do can potentially backfire.

To quote The Knowledge Illusion again: a good leader must be able to help people realize their ignorance without making them feel stupid. And this is our job. This is what we're tasked with if we want to address the spread of fake news online. I think it takes a three-pronged approach: education, design, and engineering.

Let's talk about education first. In an interview, again with The Guardian, I want to say it was Mitchell Baker of the Mozilla Foundation who talked about the importance of liberal arts education in STEM. To quote her: if we have STEM education without the humanities, or without ethics, or without understanding human behavior, then we are intentionally building the next generation of technologists who do not have even the framework, or the education, or the vocabulary to think about the relationship of STEM to society or humans or life. We need to understand how code intersects with human behavior, privacy, safety, vulnerability, equality. Essentially, what she's saying is that we don't need more STEM in the liberal arts; we need more liberal arts in STEM. And sitting here listening to the other talks in this track, I'm clearly not the only one who thinks that.
Obviously, she said it first, and she's much smarter than I am, but other speakers today have also stressed that we need to talk more about ethics in tech. It needs to be part of our education as technologists. But it's not just about the education of technologists themselves; it's about teaching others, teaching those around us. As an example, when I told my boss I was going to give this presentation, she told me about a collective she's part of in Brooklyn called Cyber Collective. They hold meetups at the Brooklyn library, free and open to anybody of any skill level, and they talk about things like fake news and cybersecurity, but also about more basic questions like, is somebody reading my email? It's really important to have these basic conversations about computer literacy before we can even tackle the more complicated conversations about fact-checking online. So it's not just about educating ourselves; it's about educating others and those around us. That can be as easy as starting a meetup like this one in your city, or just talking to the people around you, whether it's your family or your neighbors. I don't know, I don't really talk to my neighbors. Do people talk to their neighbors anymore? Nextdoor? But it's about educating the people around us as well.

I also want to talk about design. A friend of mine, Marissa Morby, and her friend AJ Davis proposed a panel for South by Southwest on design for post-truth. I talked to her because I'm not a designer, and I was really curious to hear how a designer might think about solving this problem from a design perspective. She's still putting the workshop together, but one thing she told me that I found really interesting was this idea of native ads and native experiences.

Here is an example of a native ad. It's harmless, just an ad for GrubHub, shown in Boost for Reddit on a Pixel, an Android device. And it's a pretty seamless experience: you see an ad, and it looks just like any other post in that app. Where things get dicey is when you start dealing with fake news websites or fake news pages. This next screenshot shows ads bought by Russian bot accounts on Facebook. You can see that these are native ads, a native experience, and if you're scrolling through that feed, the only thing differentiating this content, and I've tried to highlight it with a red box, is the word "Sponsored" in tiny, light-gray text. That's really the only differentiator.

I think this is really interesting, because this UX pattern was created to make ads a seamless experience, but it can be exploited to spread fake news. So in a post-truth world, we need to think more about honest design, and in some cases that might actually mean implementing anti-patterns: adding a bit more friction to the ad experience, making it clearer and more transparent to the user that what they're seeing is an ad. Native ads are a good example of where that friction belongs, and here's a rough sketch of what enforcing it in code might look like.
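This is a minimal, hypothetical sketch in Python: the FeedItem shape, the disclosure rule, and the plain-text renderer are all invented for illustration, not any real platform's API. The point is only to show what it means for honest disclosure to be enforced by the rendering code rather than left to a gray-on-white label.

```python
from dataclasses import dataclass

@dataclass
class FeedItem:
    author: str
    body: str
    sponsored: bool = False
    sponsor_name: str = ""  # who actually paid for the placement

def render(item: FeedItem) -> str:
    """Render a feed item as plain text, refusing to blend ads into content.

    For sponsored items we deliberately add friction: a loud banner and the
    paying sponsor's name, instead of a tiny light-gray 'Sponsored' tag.
    """
    if not item.sponsored:
        return f"{item.author}\n{item.body}\n"

    if not item.sponsor_name:
        # Honest-design rule: an ad with no disclosed sponsor never renders.
        raise ValueError("Sponsored item is missing a sponsor disclosure")

    banner = "=" * 40
    return (
        f"{banner}\n"
        f"PAID ADVERTISEMENT, paid for by {item.sponsor_name}\n"
        f"{item.author}\n{item.body}\n"
        f"{banner}\n"
    )

if __name__ == "__main__":
    print(render(FeedItem("a_friend", "Look at my lunch!")))
    print(render(FeedItem("GrubHub", "Order now!", sponsored=True,
                          sponsor_name="GrubHub Inc.")))
```

The exact presentation doesn't matter; the design choice is that an ad physically cannot render as just another post. So that's one design idea. Let's talk about engineering.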
So, about this fake news technology: there was a really interesting article that came out after that Future of Fake News video. CNN was talking about these videos, called deepfakes, and asking: are we ready? And it turns out, no, we're not, because just as quickly as we can create AI to detect deepfakes, the deepfakes evolve and detection gets harder.

To steal a slide from a Strange Loop talk I didn't attend, someone tweeted this photo and I thought it was really relevant to deepfakes: it can be surprisingly easy to deceive an image-recognition model. On one side you have the original image, classified with 89% confidence as a lifeboat. Add a little carefully chosen noise to that same image, and the model is now 99.77% confident it's a Scotch terrier. So there can be a lot of room for error; I'll come back to this with a short code sketch at the end of this section.

The solution is going to be both tech and human. We need both, because when you lean too far in one direction, you get this: companies like Apple are using people to moderate what actually makes it into Apple News. I think that's really interesting, but: editors. You invented editors. That already exists. One of the responses I really liked said, this is yet another example of the tech world inventing something that already existed. Sometimes when we try to address these issues, we try to reinvent the wheel and don't realize there are tried-and-true solutions already out there. So it's not just about designing experiences that are honest with users; as an industry, we also need to be a little more honest with ourselves and recognize when we're just using editors, or something else that already existed.

Another great tool that came out recently is FactCheck.me, which is both a data effort and a people effort. On request, people can email deploy@robhat.com with a topic or event they want to track, and the folks at FactCheck.me, which I believe was started by a group of college students, will fire off some sort of automated analysis of tweets around that topic: which accounts might be bots, which might actually be real people, and who is sharing what. That can help journalists navigate the data out there on social media, and it combines a little bit of the human with the automated.

And to stay on bots for a second: I was saying that deepfakes keep getting more sophisticated, and it's the same with bots. A recent Wired article reported that around 60% of the recent controversial conversations on social media were actually driven by bots. But it's not just that they share fake tweets or content themselves; they've become more sophisticated about amplifying things that are divisive, or specifically fake, especially content actually written by humans, to try to game the system. The bots are still there; they're just more sophisticated. And from the same people who made FactCheck.me, there's also BotCheck.me, a Chrome extension you can install that will check whether a specific Twitter account is a bot.
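As promised, here's a minimal sketch of the kind of technique behind that lifeboat-to-Scotch-terrier flip: the fast gradient sign method (FGSM). To keep it self-contained and runnable, I'm using a tiny, randomly initialized PyTorch classifier as a stand-in for a real trained image-recognition model, and made-up class indices; it demonstrates the mechanics of the attack, not the actual model from the slide.

```python
import torch
import torch.nn.functional as F

# Stand-in classifier: in the real demo this would be a trained
# image-recognition network; random weights are enough to show the mechanics.
torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(3 * 32 * 32, 10),  # 10 made-up classes
)

image = torch.rand(1, 3, 32, 32)  # a fake 32x32 RGB "photo"
true_label = torch.tensor([3])    # pretend class 3 is "lifeboat"

# FGSM: nudge every pixel a tiny step in exactly the direction that
# increases the classifier's loss on the true label.
image.requires_grad_(True)
loss = F.cross_entropy(model(image), true_label)
loss.backward()

epsilon = 0.03  # per-pixel perturbation budget, invisible to a human
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

# With a real trained model, a budget this small is often enough to flip
# the predicted class while leaving the image visually unchanged.
print("before:", model(image).argmax().item())
print("after: ", model(adversarial).argmax().item())
```

Because the perturbation is bounded per pixel, the adversarial image looks identical to the original to a human, while every pixel has been pushed in the direction most likely to move the classifier across a decision boundary. I want to close this section on engineering by talking about one more thing, which I think is really interesting.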
This was from a post by Jessica Powell, where she talked about why she left her big job in Silicon Valley. She says: you can't tell your advertisers that you can target users down to the tiniest pixel, but then throw your hands up before the politicians and say your machines can't figure out if bad actors are using your platform. Those two things just don't go together. So I think a lot of what we need to do as an industry is take more responsibility for what we can and can't do, and be honest that if we can do the one thing, then we can surely do the other, and track the people acting badly on our platforms.

In the end, I feel like the solution has a lot of moving parts. It involves design and engineering; it involves understanding things like psychology and ethics; and it involves being more engaged in education. But I'll also end this talk by saying that this is still very much a developing story. We are currently living in this post-truth world; we are living this story out right now. So if, when I presented things we can do as an industry to address these issues, you felt like it wasn't enough, that's because it isn't enough. We're still living this out, and we're still trying to figure out what to do and how to address the spread of fake information online. If you're interested, know that you have the power to create the tools and the designs that are going to help give people the ability to think critically about the things they see online. So it really is up to us.

If any of you have questions, or if you want to see my slides or my research, you can find me on Twitter at @cecicorrea, and with the magic of Twitter's scheduled posting, you should see my slides there shortly, along with my notes. I'm also happy to answer questions out in the hall or over Twitter. Thank you.