We're back. I'm Jay Fidell. This is Think Tech, beginning another broadcast week. Wow! Gets better and better. Brett Oppegaard. He's on the faculty of the journalism program at the School of Communications at UH Manoa. He comes down to talk to us from time to time about very important issues relating to journalism, to try to help us understand not only journalism, but, you know, the sources and effects of journalism. Because we engage with journalism. They are us and we are them, right? Right, Brett? Yes. We're all intertwined. And, you know, advertising, public relations, journalism, it's all in a mix, and we have to tease out what we want of our communication. So we have a couple of topics today, the first one being the MIT study reported in the New York Times a couple days ago, on when people are more likely to accept and pass on fake news. This is really interesting. So you've thought about this question: for that discussion, for that analysis, what is fake news? It's not an easy question. What is fake news? Well, I would argue any part of any story that's not true makes the entire thing fake. Okay. So, what we've discovered in research about fake news is that these Russian bots will actually scoop up a bunch of real news and circulate that as well, to mix with their fake news. They give this appearance that, yes, we're a legitimate channel, and then they will take these real stories and change a couple words or add a couple sentences right in the middle. So it's very, very difficult to figure out what's fake and what's not. But as a baseline definition, I'd say if there's one fake word in a piece, then it's fake news. Yeah, and looking at it from the reader's point of view, and I have had this experience, I'm sure you have too: I'm reading something that appears to be legit. They got my email address from somewhere, they send me some stuff, maybe more than once, maybe daily.
But I'm looking at it and it seems to be legit, it seems to comport with other news, and then there's a clunker. Right in the middle of it, there's a clunker, and I say, well, let me look at that again. Let me see if I can identify that as fake news. And if I can, actually, Brett, I make the same analysis you were talking about. If the one sandwiched among the good ones is fake news, I have to question all of them. And if I have to question all of them, I'm not taking that email anymore. Right, and this is at its worst in social media, because your friends provide you with these stories, they think they're real, so they add a layer of credibility that you don't get just from direct contact with the media source. Right. And plus you lose the origin of where the story came from. It might come from the Denver Guardian, which is a fake site that's meant to resemble, you know, the Denver Post or something. And it looks completely real. You look at the website, and if you're not paying attention, if you're not being vigilant, if you're not doing your, you know, citizenry-type work, then you can easily be misled. Well, what about a site? And maybe there is a site like this, I just wouldn't know about it. Maybe there's a site that identifies the Denver Guardian and others like that as fake. What about that site identifying the stories that are sandwiched in that are fake? I mean, my problem is I'm not sure I believe the site that claims to be able to identify fake news, but let's assume there's a certain amount of credibility to that site. Wouldn't it be helpful to have somebody come up and tell me where the bad guys are? Oh, it would be. But imagine how daunting a job that would be, when any person in any place at any time could change one sentence in any story and recirculate it. How are you going to do that? I don't believe a technological solution exists.
I think it's a human awareness and literacy problem. And until people really take charge of their information again and become responsible for it, we're going to have this misinformation and disinformation everywhere. Let me offer a thought on that last point. You said there's no technological solution right now. But there might be. You know, artificial intelligence is, by definition, pretty smart. And maybe you could program artificial intelligence to spot the clunkers and notify you. And it would be dynamic. It would run every day, on every story that comes around. Or better yet, every story that comes to you. So it would say, wait, Jay, there's a clunker here. What about that? I would recommend we don't put faith in machines and algorithms, because somebody has to write the program. Now, artificial intelligence, by nature, does learn from its own mistakes. But what is it learning, and how does that benefit humanity? That's a big question. I don't think we really have an answer for it. And we also don't have any guarantees that the machines are going to decide it's in our best interest to help us humans recognize fake news. That's a pretty bleak picture you're painting, Brett. I mean, it's bleak for all of us, really. But it gets bleaker. It gets worse with this MIT study. Can you talk about what it said? Well, the summary of it would be basically that fake news travels much faster, much deeper than real news. And it's not the computers that are doing it. It's the other human beings in the system who are sharing with their friends or retweeting or whatever. And they're circulating this information as true, vouching for its credibility. Like, if I share something with you, then you think, oh, Brett shared this with me, so it must be true. It lets you put your guard down. And the fake information does its job. But sometimes, fake news, arguably, is pretty believable. A lot of fake news is close to the truth, but not quite.
And so if somebody, as a credible source, or reasonably credible, a peer-group credible source, my high school classmate, for example, tells me he got this, or implies he got this, from a good source, I'm likely to believe it and not know that it's fake. This is very troublesome. That's me. I mean, it reminds me of a story that one of my graduate school professors told me, about an experiment he did. He studied rhetoric. And he would walk around the halls of the university saying, I read a story in the New York Times yesterday about how raisins cure cancer, and if everybody ate more raisins, there would be no more cancer. And he would just see what people would say. And almost every time, people would say, oh my gosh, really? I'm going to get some more raisins. They would never question it. Here's an authority figure dropping in a second authority, the New York Times, and some kind of healthy-sounding snack. You know, it all works. Sure. It comes together. Yeah, it comes together. We've heard a lot of pieces of it before. Yeah. And people want to believe those types of things. They don't want to believe that we're out of control, that the world's chaotic, that we can't save people from cancer. They want to believe we can do it. They want to believe that we can stop war, that we can stop all sorts of injustice. And the reality is, we're in a very chaotic world. And without this clear, truthful type of public discourse, we're really in trouble. Public discourse, I want to get to that. But one thing first: we both mentioned the New York Times. So if my classmate in high school says to me, I saw this in the New York Times, high credibility, absolutely high credibility. More and more, the New York Times is a leader in this sort of truth-giving. But how do I know that what he said about the New York Times is true? He may have that all wrong, intentionally or otherwise.
I have to go and look at the New York Times. And a lot of people see the New York Times as the oracle, but they don't read it. Right. They read the headline or subhead or something. Yeah. So this creates another problem in terms of the credibility aspect. I believe what he said. But we have to ask: why do we trust the New York Times? And the reason we trust it is because they have a process in place that is trustworthy. Very few people in the media have that level of a system in place. They have maybe one of the most refined systems, where they have probably 10 editors who look at a story before it appears in their publication. They have fact-checkers for every fact. And they have a lot of discussions about what to include, what not to include. And if there's a request for clarification or correction, they have a vetting process for that which is quite extensive, and timeless in the sense that they have corrected errors made 100 years ago. If they become aware of it, and they look back at their coverage and they say it's wrong, they'll write a correction. And it's that process that makes it believable to us, not necessarily any individual at the New York Times, or any publisher or anything like that. It's because of the system they have in place that they've earned that trust. Yeah, and they have a, quote, public editor. They have a public editor who represents the readers and writes about the paper, about that process you described. So it's interesting. Not everybody has those kinds of controls on the truth. Hardly anyone does anymore. I think there are only five public editors in the country now. And one thing I've noticed about the media in Hawaii is that it's a very closed system. They're not transparent about virtually anything they do. And I think there are reasons to be skeptical of people like that. You can't take people's word that they're going to do what's right if they don't show you that they're doing what's right.
And that transparency is the key to the whole issue, I think. So going back to the question of the speed at which fake news travels, and the, what do we call it, the coefficient of distribution, if you will, this is fascinating. And I was telling you before the show, I see journalism these days, and the response to journalism, in fact, everything around journalism, as a study of sociology and psychology, of mass psychology. We have never had this kind of distribution of information the way we have it now. And it moves people. It moves political forces. It moves the economy. So it's all the more important that we get it right, because it could move us easily, and it does. Look what happened in the election. We've got to talk about that. But why does fake news travel faster? Why are people more likely to repeat fake news than real news? This is just very chilling. And why do they move it in the first place, and why do they move it faster? Two questions. Well, I can't say definitively why, but what I can say is that when you have information that's less tied to facts, it becomes more emotional. And I think the affective response to the information is what's pushing it. So if you hear a piece of information that's salacious, you're more likely to say, oh my gosh, you know, guess what I know, because there's some emotional impact to you, that something bad is happening, or something untoward or something dishonest or whatever it is. And you want to spread that information so other people know about it. The problem is, if it's not true, then you're really, as the Russians call people, the useful idiot. You're running around telling people misinformation that you believe is true, but it's really not true. So you're being useful to the propagandists of misinformation. Yeah. It's worth a moment to dwell on that word, salacious. I certainly agree. Raw-meat news travels faster because it appeals to some baser intellectual process.
It's the kind of thing you want to repeat as gossip. When you send it to your friend, you're saying, whoa, you really want to hear about this one. Yeah, well, that's all pathos. And when you study argumentation or rhetoric, there are basically three legs that rhetoric stands on. You have the ethos of a person, which is their character and credibility. You have the logos of the arguments, the facts and how they stack together. And then you have the pathos, the emotions of it all. How it emotionally feels, yeah. And basically fake news is all geared toward the pathos. Well, could it not be? I mean, for example, an economic report says the market went up 200 when in fact the market went down 100. That's not emotional. Well, for some people it is. Oh, but that's easy to check. If you say, President Trump is having multiple affairs with porn stars, that's salacious. Yeah, and how would you check that? It's gonna be he-said-she-said or whatever, and then it's very emotional, because it really cuts to the core of the morality of our country's top office. Why, what is it about human psychology that makes us want to glom onto salacious news? I mean, let me throw a theory at you. It's not for me so much as for my friends that I'm gonna distribute this. I wanna give them a gift. I wanna give them something they don't know, that they never expected. I want them to have a gift that they can give to their friends too. I wanna, you know, raise the buzz somehow. So I'm doing my friends a favor. And I feel this in my own self when I do forward an article or something. I'm not gonna forward every article. I'm not gonna forward stuff that's boring. I'm gonna forward stuff that's hot. All right. And I usually... Stuff people are gonna talk about. Yeah, right. Yeah, because that builds your esteem in the community. You know, you're the person breaking this news.
I mean, you're not gonna forward a city council agenda with a budget, right? There you have the ethos and the logos of the work, but you don't have any pathos. So you end up with a real orientation toward emotional, affective news. Yeah, it's very dangerous for society. You can't operate that way. You need a more measured, intellectual analysis of it. And this is also a way that we have been put at each other's throats. You know, you pick the most emotional issue and you fuel each side of it, and it makes something that was, you know, potentially solvable unsolvable. It turns it into a death match. Yeah, I wanna talk about that right after this break. I wanna talk about a couple of things that have sprung out of what you said so far. That's Brett Oppegaard. He's on the faculty of the journalism program at the School of Communications at UH Manoa. He comes down and talks to us, and I really enjoy these conversations. We'll be right back. Aloha. My name is Mark Shklov. I am the host of Think Tech Hawaii's Law Across the Sea. Law Across the Sea comes on every other Monday at 11 a.m. Please join us. I like to bring in guests who talk about all types of things that come across the sea to Hawaii. Not just law: love, people, ideas, history. Please join us for Law Across the Sea. Aloha. Aloha, I'm Dave Stevens, host of the Cyber Underground. This is where we discuss everything that relates to computers that just kinda scares you out of your mind. So come join us every week here on thinktechhawaii.com, 1 p.m. on Friday afternoons, and then you can go see all our episodes on YouTube. Just look up the Cyber Underground on YouTube. All our shows will show up, and please follow us. We're always giving you current, relevant information to protect you. Keepin' you safe. Aloha. Okay, we're back. We couldn't wait to get back. I'm Jay Fidell. This is Community Matters.
This is Brett Oppegaard from the journalism program at the School of Communications, UH Manoa. So a couple of things sprung out of what you were talking about before. One is, one prescription, Rx, so to speak, for this emotional, pathos kind of social reaction, and I mean on a large scale, is that people should talk to each other like they did in the days of Abe Lincoln, where they sat around the, you know, the general store, had conversations, and one guy would say, did you see that thing? And the other guy would say, I don't believe it. Or he would say, did you check it out? Do you know something about that? And as a result, you get a human kind of engagement test on whether this is right or wrong. I mean, how would that work in the 21st century? Well, say you get a story on Facebook that you find salacious, and you want to send it. But, you know, like when you're mad and you say, I'm going to take 10 seconds and I'm not going to talk: take 10 minutes and don't re-forward it. And then, you know, I recommend finding someone outside of your bubble and saying, did you hear about this story? What do you think of it? And see if it matches their perception of it, their take on it. And then you can kind of find, in the middle, what really happened. Whereas to immediately forward the story among your bubble and incite more rage and hate instantaneously, what do you get out of that? That's what I wonder. What do you really get? You get gratification. Well, okay. Maybe a little bit. You get a little bit of a savor. You got everybody excited. Yeah. You get energy, but it's not good energy. It's negative, destructive energy. When you could instead be a voice of reason. That could be the choice you make: I'm going to take just 10 minutes, or 20 minutes, or whatever, or even a day, to sit on this one and think about, is it true? And then, what do I really make of it?
And have a reasoned, rational response, instead of, look at what these stupid people are doing again. Because you enhance it when you make an affirmative comment that suggests it's true. Yeah, you amplify it. In the social media tech world, they call it amplification. And they want to amplify all their content until it drowns out everything else. This is part of that same social psychology, the amplification part. And the scary part, to me the scariest part, is that these social media companies have created a monster that they have lost control of. There's nothing Facebook or Twitter can do right now to stop this Frankenstein's monster that's walking around, crushing everything in its path. Unless they just turn the whole thing off, and they're not gonna do that. There's too much money. Yeah, and if they start to squelch conversation, or squelch whatever circles are on their platform, then, as has already been shown, people go to alternative platforms that are even more hate-filled, even more partisan, even less credible, if that's possible. They go into these even worse circles, these worse types of social media companies that are basically picking up the drift-off from Facebook and Twitter and courting it to try to get an audience. But then the audience is even worse; it's like a condensed version of the worst of Facebook or Twitter. So it's just a disaster. Suppose, back to my artificial intelligence possibility, suppose I'm Facebook, and with various, you know, artificial intelligence analyses, I create a rating system based on the source of the information, the likelihood or possibility that it's in conflict with other news, and all that. You know, with social network analysis, just looking at keywords like they do in Washington, you know, in the basement, you can find out a lot about text, about what the text is saying.
And so suppose they came up with a rating, sort of like the ratings for a seller on eBay, or the ratings for a product on B&H Photo, or Amazon itself. I'm much more likely to buy a product, or stay at a hotel, or take an airline that has a five-star rating than a one-star or a two-star, or even a three-star rating. I want the highest possible rating. So if they did that, and granted, it wouldn't necessarily be completely accurate, but if they gave me ratings and stars like that, based on whatever analysis they could provide, that would affect my likelihood of transmitting this with a reinforcing, what did you call it, amplification comment, where you suggest that people should listen to it. What about that? Well, it reminds me of an experience I had a couple years ago, when I was in a new place and I wanted to go to a restaurant. I typed in the type of food I wanted, and there were a bunch of reviews, five-star, blah, blah, blah. And I went to the restaurant, and it was by far one of the worst places I'd ever been. I mean, it claimed to be a five-star Asian fusion mix of cuisine, right? I get in, and it's a buffet table with the ice cream machine broken, squirting ice cream all over the place. So I have zero faith in these reviews, because they've been rigged, just like some media. Because when somebody's paying money, it's a problem. They have trolls for everything. They have trolls for restaurants, where they pay people, wherever they are, and say, you know, write a hundred positive reviews about this place, right? For a hundred bucks. So I don't have any faith in that kind of system. And then even the most fundamental part of that argument is, okay, you have the New York Times, Fox News. What's gonna get the five rating, and why? And then you're gonna have to debate that with all the people who watch Fox News or read the New York Times, and they're not gonna agree on it. They're gonna be in their bubbles about it.
And it's just not gonna go anywhere. I'll tell you a short story. It was Saturday, January 13th, when we had the false alarm. My wife and I are sitting there, and the phone rings: the false alarm. So we go into a protected part of our house, because why not? And we bring our puppy with us, okay? And now we've done really all we can do to respond to the false alarm. So I call my brother. My brother's on the mainland. He's a fair witness. He's not here; there's no emotional overlay for him. And I say, you know, this is what we got on the phone. What do you think? What is the probability this could be true? And he's a smart guy anyway, a college professor, a law school professor. And it didn't take him one second to say, that doesn't compute. This alarm does not compute with everything that he knows about the world. It's not consistent. And the point is, you know, he's 6,000 miles away. The point is, he's a smart guy and he's gonna give me a straight answer. So I think it's part of what you were saying before about this need to have a trusted group, a trusted friend, who you can bounce it off of, who is at some distance, really, and who is more likely to be able to make a good analysis. Well, and it also directly hits on the emotional part of it. On January 13th, when we got the missile alert at my household, you know, my wife received it and was very concerned, almost in a panic. My children are crying, we're trying to escape. You know, there's no time for rational thought. It's a survival moment. And, I mean, if I was by myself, maybe I would've sat down and rationally searched and tried to figure out, why are the sirens going, or whatever. But when you have this group of people hearing the same news as you, it seems believable, because we've been told that it's believable by our own trusted government. And, I mean, what are we expected to do? Not panic? I don't think so.
This is like, you know, being just about to be incinerated. Yeah, one guy put his children down a manhole. Oh, there are all sorts of stories about things that happened during that time. I heard a lot of really disturbing stories about what happened during that time period. And it reminded me of the War of the Worlds broadcast, where there was a new medium, the radio, at the time of that broadcast. And people heard the story. They thought it was true. It caused incredible chaos. And it's the same thing with mobile technology. It's a new technology that we haven't quite figured out. We're not accustomed to it. And we assume that if our government sends us a message saying a ballistic missile is going to hit us at any moment, then we should believe it. And it's a dangerous thing to not believe. Right, dangerous not to believe. We have no way of verifying. We have no way of verifying a text message we get. We have a minute left. I want to ask you one other thing. And that goes to the Russians; we spoke about them. So the Russians in St. Petersburg, those young kids who write copy in American, you know, English. And this somehow gets distributed in the US to targeted areas and groups and social media channels. And there's false news in there. And the news is intended to make bubbles, make controversy, create dissension, and affect an election. And I really believe it did. So the question I put to you is, what is the difference between those guys in St. Petersburg and some guy in St. Petersburg, Florida, who does exactly the same thing? He wants to affect an election. He wants to affect public opinion. He wants to create bubbles and dissension and controversy. I mean, isn't that possible? And maybe isn't it happening anyway? What's the difference? Well, I don't think there's functionally a difference.
But what I will say, and I saw this great quote from Jim Carrey, the comedian, who quit his Facebook account. He said that basically Facebook has allowed other countries to have a bridge into our country and to wage war on us. In the past, geographically, we had the ocean surrounding us. We had tremendous natural resources. We had great advantages in protecting ourselves. In this case, Facebook, Twitter, whatever, they built these bridges all around the world, and people can just walk right in without a passport or any kind of vetting. And they can do horrible things to us, including the people in St. Petersburg or wherever. People can be tricked into believing something that's not true. And I encourage anybody out there to look into these Russian troll operations. They're very, very dirty, very sneaky, and very disturbing when you consider what kind of damage they can do. And it's not a stretch to say these are acts of war. Would you build a wall? Would you build an internet wall, the way China has? I wouldn't recommend doing that. That's a good question. I think there are lots of avenues these social media companies can take, though, in terms of what kind of content circulates on their platforms and what doesn't. And really, it gets back to my earlier argument that these are publishers. Facebook is a publisher. They need to be responsible for what they publish, just like every other publisher. The New York Times has to be responsible for what it publishes. Twitter has to be responsible for what it publishes. And if they are irresponsible, then they need to be punished like other publications would be. So I think it's a loophole in the law, and that would be my recommendation: move them into the camp of publishers, and I think you'd see a lot of different discourse on there than you see now, as soon as it has to be vetted. I agree, absolutely. Brett Oppegaard, thank you so much for coming.
Thank you. Always great to talk with you. Me too. Me too. Next time soon. Okay.