Hi, I'm Ulrich Ecker from the University of Western Australia, and I research the psychology of misinformation. My research focuses on memory updating and on why people continue to rely on information that has been shown to be false. A correction should lead people to update their beliefs and their memory, but my research shows that they often continue to rely on retracted or corrected misinformation in their reasoning and decision making. That's what I'm really interested in: why do people use information that they actually know to be false?

Simple retractions of misinformation are notoriously ineffective; they just don't work. Once a person has processed a piece of information and believed it, you cannot simply take it back. It will stay in memory, and it will continue to influence reasoning and decision making. It's not as if we have a magical eraser that can remove information from memory. The information is still there, and people will use it if it's available, depending on how they're prompted and on what other information they have. If they have no other information that they trust and believe, they will continue to rely on the misinformation even after you've retracted it.

Many people assume it should be as easy as telling people, "Look, this is not true": you tell them the truth, and they go on to behave more rationally. Unfortunately, it doesn't work that way. Some people might continue to rely on misinformation because they simply missed the retraction: they encoded the misinformation but never received the correction. Newspapers sometimes run misinformation on the front page one day and print the retraction on page seven the next, and people might miss it. That's a genuine possibility. But our research shows that people continue to rely on misinformation even when they demonstrably remember the retraction. We ask them, "Was any information retracted in what you received?", and even when they answer, "Yes, this and this was retracted; that was found not to be true," the misinformation still influences the way they reason about the event they encoded.

One way to think about why people continue to rely on misinformation despite retractions is that people try to understand the world by building mental models of it. They want to understand what's going on, particularly when unusual events happen, so they build a mental model that explains what's happening. If you retract a critical piece of information in that model, people are left with a gap, and they don't like gaps. People prefer complete models, because they want to know what's going on; sometimes they even prefer a complete but incorrect model over an incomplete one. So when you retract a piece of misinformation without offering an alternative explanation, you leave people with a gap in their model, and they will continue to rely on the only piece of information they have to fill that gap, which is the misinformation. But if you can supply alternative information that fills the gap, they no longer need to rely on the retracted misinformation; they can use the new information instead.
Unfortunately, an alternative explanation is not always available, or it may be available but very complicated. To give an example: if there's a crime and a suspect, but the suspect is then found not to be guilty, say because they have an alibi, then as long as there is no alternative suspect, people might remain suspicious and still rely on the misinformation that this person committed the crime. But if you can present an alternative suspect, particularly a plausible one, where the case makes sense and there's a motive for that person's involvement, then people will no longer refer to the initial suspect.

Sometimes, though, these alternatives are not available. For example, with the missing Malaysia Airlines plane, we still don't know what happened, so there is no plausible alternative explanation that people are willing to accept. And in this case people have a strong desire, a strong need, to understand what happened, because it's such an unusual event. I wouldn't have thought it possible that a plane could just disappear into thin air. And people fly a lot; you don't want to board a plane and worry about it disappearing. So people have a strong need to understand this particular case. That need to understand what happened, a need for closure if you will, on the one hand, and the fact that there is no good explanation even after months of searching and investigation on the other: these two factors together fuel the conspiracy theories that are out there, from jihadists to a politically fanatical pilot; I've even read that the plane was taken to northern Pakistan. All these really wacky conspiracy theories flourish when there is a strong need for an explanation on the one hand and the absence of an explanation on the other.

In other cases, like the causes of autism, a lot is known, but it's not fully understood, and it's a very complicated story of many interacting factors. That makes the alternative explanation of what causes autism less attractive, because it's too complicated, and people might continue to rely on myths such as the myth that autism is caused by vaccinations, which of course has been thoroughly debunked.

People simply have a desire to understand unusual, negative events. That makes a lot of sense: if you find out what happened, you can reduce the likelihood of it happening again. So it makes sense that people work this way. But when there is no good explanation available, the downside is that they'll fall for misinformation.

What we call the continued influence effect is that people continue to rely on misinformation even after it has been retracted, even when they understand and remember the retraction. The retraction is available to them, but they are still influenced by the retracted misinformation. A real-world example is the myth about Obama's birthplace: people have claimed that he wasn't born in the US and hence wasn't eligible to become president in the first place, and that's been thoroughly retracted.
He presented an alternative, namely his birth certificate, proving that he was born in the US. But lots of people continue to believe that he wasn't; the misinformation affects their reasoning despite the retraction.

So retractions are quite ineffective unless you can provide a plausible, easy-to-understand alternative explanation. Even worse, retractions can also backfire: they can reinforce or strengthen the very misbelief you're trying to correct. These are ironic effects: you're trying to reduce people's belief in a certain claim, but by retracting the myth you can actually increase their belief in it. There are various kinds of backfire effect, and the evidence for them varies a bit. One backfire effect discussed in the literature is the familiarity backfire effect; another is the worldview backfire effect.

The rationale behind the familiarity backfire effect is that people generally believe and remember familiar information. Whatever is familiar is something you'll be drawn to, something you'll pick off the supermarket shelf, something you will remember. Familiarity is a big driver of how people process information. Unfortunately, when you try to retract a myth, you often need to repeat it; otherwise people won't know what you're talking about. If you want to say that the sun is not causing global warming, you more or less have to repeat the link between the sun and global warming, and by doing so you increase the familiarity of that link. Five minutes after you tell people why the sun is not causing global warming, they will remember your explanation. But after a week or a month, people forget details, particularly older people, since memory deteriorates with age. After a month they won't remember all the details, and if you've talked to them about various facts and myths, they'll be confused about which were the facts and which were the myths. All they might remember is the familiar link that you made even more familiar by repeating it while retracting it: "They told me something about the sun and global warming." That might be everything they remember. You can then end up in a situation where your retraction has backfired and actually increased people's belief in the myth that the sun is causing global warming, because you've repeated it. (And I've now repeated it three times myself, which is probably a bad thing.) The more you repeat the myth, even though you're doing it to retract it, the more familiar it becomes, and people may rely on it more strongly some time after the retraction, precisely because of your retraction. That's the ironic part.

There are practical steps you can take to prevent the familiarity backfire effect from occurring. First, you shouldn't just tell people, "If you've heard this myth, it's wrong," and stop there; merely labelling something a myth is not enough. Always give people an explanation of why it is wrong. That helps to prevent this kind of effect. The second thing you can do concerns how you structure the retraction. You often find flyers listing facts and myths about, say, the flu vaccine.
What these flyers typically do is present a claim, for example, "The side effects of the flu vaccine are worse than the flu," then state that it's wrong, and then provide the factual explanation: in actual fact, the worst side effect is likely a sore arm. So they present the myth first, then tell you it's a myth, then give you the facts. This myth-first approach has been shown to be less effective at debunking than the opposite order, namely fact first: with the flu vaccine, the worst thing likely to happen is a sore arm; there's also a myth that the side effects are worse than the flu, but it's just a myth. That's the better way. If you present the myth first, people initially process it as if it were true, and when you then tell them it's not, they have to undo that in their minds, which is error-prone. It's much better to give people the facts first and then warn them that a myth is coming. As soon as you say that, people are cognitively on guard: they know that what follows is misinformation and won't simply believe it. So give them the facts first, then the myth, flagged immediately as a myth, so that they don't process it as if it were true.

The other backfire effect that's well studied and really important is the worldview backfire effect, and there is a lot of evidence for this type of effect. It results from the strong influence people's attitudes have on how they process information. If you hold a strong attitude or belief, you will be much less skeptical about misinformation, and misinformation sources, that are in line with your worldview, with what you already believe. You will be more willing to accept information that fits your existing beliefs, and if what you already believe is wrong, you will be more willing to accept misinformation. That's true of pretty much everyone: we are built to process information in a biased way. We look for information that confirms what we already know, and we are more willing to accept things that fit what we already believe. That's a natural thing all people do, but it becomes a big problem when your beliefs are wrong and the information you receive is wrong.

And if you hold a belief that is central to your identity, you will defend it; you're defending who you are. If someone comes along and challenges it, you're not going to be convinced by what they say, because they're challenging your worldview; you're actually likely to become even more extreme in your belief. So if someone tries to correct a piece of misinformation that is very close to what you believe, very close to your identity, you may end up believing that misinformation even more strongly. That's the ironic part: someone tells you that something is wrong, but after the correction you believe it even more.
This happens when you hold a very strong belief and feel that you're under attack, if you will.

To give an example of the worldview backfire effect: you present people with information about a study that supposedly found that politicians of one particular party are much more likely to embezzle funds than politicians of another party. Then you tell them, "Hang on, we got that wrong; it's not actually true, we misrepresented that study." What you find is that people who support the party in question are happy to accept the retraction, because the misinformation claimed that politicians of the party they support are more likely to embezzle funds, and they're glad that's not true. But supporters of the other party increase their belief in the study. You tell them it's not actually true that politicians of party X embezzle more funds, and they become more likely to believe that it is true. They obviously don't want to believe the retraction, because it's not in line with their worldview.

Concretely, the results looked like this. We had two conditions. In one condition, people just received the information that a study had found that politicians of party X were more likely to embezzle funds. In the other condition, people received the same misinformation, but it was then retracted: we told them about the study, then told them it wasn't true, that it had been misrepresented. If you just tell people about the study, everyone refers to the critical information (which, without a retraction, isn't strictly misinformation) to a similar extent. But once people receive the retraction, it depends on their worldview: if the retraction is congruent with their worldview, their reliance on the retracted information goes down; if the retraction is dissonant with their worldview, their reliance on the retracted information actually goes up significantly. After receiving the retraction that the study was misrepresented, these people increase their belief in what we initially said the study had found. We're still collecting more data; I saw the first results only last week.

So the worldview backfire effect is a significant problem, and there isn't a whole lot you can do to avoid it completely, because the effects of people's attitudes on how they process information are very strong and robust. But there are several things you can do to reduce how often it occurs, or how strong it is. One thing you can do, if you're trying to convince people of a set of facts, is to present the information graphically rather than verbally. If you're trying to convince people that global temperatures have increased, you can give a verbal description of how many degrees they have risen over how many years, and explain that there are different sources of data, and so on.
Or you can just present a graph that unambiguously shows how much temperature has risen over what time scale. People can visualise it and grasp, at a glance, what's happening, and it becomes much harder to counter-argue the facts. If you just talk about "an increase," it could be an insignificant increase, or an increase followed by a decrease, or an increase that isn't linear but is actually slowing down. With the graphical presentation you can't counter-argue the facts: you see the increase and you see its shape, and that makes the information easier to accept even when it runs counter to your attitudes. This has recently been shown by my colleagues Brendan Nyhan and Jason Reifler in the United States. So that's one thing you can do: present information graphically instead of verbally.

Another thing you can do is avoid inflammatory language and avoid being confrontational. Take into account who you're talking to, what they believe, and what is important to them, and frame your message accordingly. Conservative people who believe in a free market are much more willing to talk about climate science and climate mitigation policy if you also talk about the business opportunities this could create, for nuclear power, for renewable energy, and about the jobs it can create. And these things are true; you don't have to lie. In some European countries there has been a massive increase in the number of jobs created in renewable energy, for instance. That can reduce the fears these people have about job losses and collapsing industries. So take into account who you're talking to and what their concerns are, and be open-minded about those concerns; that increases the likelihood that they will engage in the conversation and accept, or at least consider, the facts you bring to the table. Avoiding confrontation and framing your message so that it's easily digestible, acceptable, or at least worth considering for people who don't share your views is a good thing to do.

You also need to accept that there are people out there who will not change their minds no matter what evidence you give them, but consider that they're a very small minority. Most people might change their minds if you present your case well; some won't, and no matter how much evidence you give them, they'll only become more extreme. It's best to ignore that minority: it's often very vocal, but it's usually very small and not very significant. Focus your efforts on the majority of people who are willing to engage in conversation.

In summary, the best way to deal with the worldview backfire effect is to present information in a way that makes it easy to understand, which often means presenting it graphically rather than verbally; to frame it in a way that makes it acceptable to your audience; and to know your audience.
Talk to the right people: don't spend too much time trying to convince hardcore non-believers, who won't change no matter what evidence you give them; focus instead on the majority of people who are willing to engage in conversation.

As I said before, it's important to provide an alternative explanation when you retract a myth, and there are a few things to consider. First, the alternative needs to be plausible. If people have trouble believing it, they won't accept it, and they'll stick with the false information. So if the correct information sounds implausible, that is a barrier to acceptance, and it can happen in the real world: there may be true explanations that simply sound a bit implausible, or less plausible than the myth. So the alternative has to sound plausible. That's one aspect.

Second, it has to be understandable. If it's too complex, too complicated, people will stick with the simple myth, because people prefer simple explanations. If they believe that vaccinations cause autism, and you tell them that autism is actually caused by genetic factors interacting with testosterone levels in utero, interacting with this, that, and the other, at some point people go, "Oh, that's too hard; this is much simpler." Simple explanations are very attractive. So as a general guide, if there is one particular factor that is really important, focus on that factor. Don't say, "There's a high likelihood of this happening because of A, and also considering X, Y, and Z." If X, Y, and Z are not important, just focus on A: "We know this is happening because of A," full stop. Then, if the other person is interested, you can add bits and pieces. But as a first step, keep it simple, use simple language, and make sure the explanation is understandable and plausible; then you have the best chance of convincing people that the true information is what they should believe.

When you're designing a debunking and trying to give people alternative explanations, there's a certain structure that helps, based on how people process this kind of information. To use an example, suppose you want to debunk the myth that global warming is caused by solar activity. On the one hand you have the myth that the sun is causing global warming: x causes y, and we want to understand what causes y, what causes global warming. The alternative explanation is that it's not the sun that causes global warming but human emissions of carbon dioxide. That's the alternative to the myth, and that's what you should provide. So if the myth is that the sun is causing global warming and the fact is that carbon dioxide is causing global warming, you can use two sets of supporting arguments. One set consists of arguments that the true explanation is in fact true: all the reasons why carbon dioxide is actually causing global warming, the greenhouse effect, and so on.
But you can also add supporting evidence for why the myth is incorrect, for example that the sun has been cooling while the earth has been warming. So you can use two sets of supporting arguments: one set telling people why the fact is true, and one set telling people why the myth is incorrect. The more arguments you can bring to the table for why your model is the correct one, the better, and it should work on both sides: supporting evidence for your claim and counter-arguments against the other claim. Preliminary evidence from our lab shows that more arguments are better as long as they are relevant. Adding irrelevant arguments is not a good idea, but as long as your arguments are reasonably strong, more seems to be better.

There's a fine line between keeping things simple on the one hand and providing all the evidence on the other, because it's a trade-off: the more evidence you present, the more complex your explanation becomes. Where the optimal point lies is something we're still investigating. But keeping it simple means focusing on relatively few factors. Don't talk about lots of different things when you're just trying to make a single point. If, however, you have lots of pieces of evidence that all support that one factor, then bring all those arguments: you can keep the explanation simple and still present many arguments. Obviously, making it too complicated is not a good idea.

What I'm trying to say is this: suppose there is one thing you want to explain and one central factor that explains it, say, carbon dioxide causes global warming. There might be lots of other factors that influence global warming (I'm not an expert, but cloud feedback and so on) that matter for explaining the entire phenomenon, if you will. But the one important factor is human emissions of carbon dioxide; that's the big one. So you should focus on just that and not talk about the others too much. If you then have many arguments that support the direct link between carbon dioxide and global warming, bring all of them. The explanatory model stays very simple, one big factor explaining one big phenomenon, but with lots of arguments to support that link you make a very compelling case. You increase the number of arguments while keeping the explanatory model simple: you're not adding complexity, but you're still using lots of evidence. So avoid building a complex explanatory model, but it's fine to use lots of arguments as long as they're relevant. If they're really weak, they hurt your cause; if you have strong evidence, use all of it.