Mark Twain is quoted as saying, "It ain't what you don't know that gets you in trouble. It's what you know for sure that just ain't so." That's a good quote because it pretty succinctly defines the difference between ignorance and misinformation. Ignorance is what you don't know, and we all don't know a lot. That might not be as good as knowing something, but at least you know that you don't know something. Whereas when you think you know something that you actually don't know, you have misinformation. You're actually in an even more difficult position because you don't know that you don't know something. A good example of ignorance is if you've never heard that quotation before. If you've never heard it before, you were ignorant of it. That's nothing to be ashamed of, but now you know it. The problem is that now that you know it, you know something that falls into the category of misinformation. In this case, this is not actually a Mark Twain quote. Even though he's credited as its author at the beginning of the movie The Big Short, there are a lot of misattributed quotations like this one. Sometimes this quotation is attributed to people like Will Rogers and other popular American personalities from the late 19th or early 20th century. It actually seems to come from a 19th-century American humorist named Josh Billings, except it wasn't worded exactly like this. Billings wrote the entire thing in a kind of made-up vernacular that sounds a little bit like lolcats today. He says, "I honestly believe it is better to know nothing than to know what ain't so." But here's an interesting note about Josh Billings: his name wasn't actually Josh Billings. It was Henry Shaw. That's not to say Mark Twain never said it, and that's not to say Will Rogers never said it or something close to it. We can actually go back further and see a similar quotation from one of the founders of our country, Thomas Jefferson, when he said that he who knows nothing is nearer to the truth than he whose mind is filled with falsehoods and errors. Jefferson was in particular complaining about the press putting so many fallacies and so much inaccurate information into its pages that people who read the newspapers, he was afraid, would become even worse off than people who never read the newspapers at all, because at least people who never read the newspapers wouldn't have the inaccurate information. This is the same person who once said that as long as the press is free and men are literate, freedom is assured. And if his age had an ambivalent relationship with information, well, you can imagine those problems have gotten worse in the information age today. But in this class, we've actually come across this quotation before, in the article by David Dunning called "We Are All Confident Idiots." Dunning is one of the psychologists who lends his name to the Dunning-Kruger effect, which is the principle that the less you know about a particular area of expertise, the more likely you are to overestimate your knowledge of that area. So if you don't know much about some specialization that you've never studied, it may seem like, well, how much is there to know? I probably know a lot. The people who are experts there probably don't know much more than I do. But the more you learn about a particular discipline, the more you realize, wow, there's a lot more there than I ever expected. But to this point, Dunning says, an ignorant mind is precisely not a spotless empty vessel.
It's one that's filled with the clutter of irrelevant or misleading life experiences, theories, facts, intuitions, strategies, algorithms, heuristics, metaphors, and hunches that regrettably have the look and feel of accurate knowledge. In other words, we all walk around with this file full of foregone conclusions, conclusions that aren't actually conclusions because they're not the conclusion of a process of research and examination. They're frequently the prejudices that we receive from the people around us. Hearsay, groupthink. As he says, a lot of the things we think we know could be composed of facts, but they're irrelevant or misleading experiences. In other words, the particular things might be true, but we've probably overgeneralized from them. We don't have enough data to justify the general warrants we use to apply that data elsewhere. And so Dunning, like a lot of the other readings we've done in this class, calls our attention to the process of metacognition. That is the process by which human beings evaluate and regulate their knowledge, reasoning, and learning. This is what we've been doing in this class, working on different types of analysis: looking to see where data comes from, looking to see what warrants are used to extrapolate from that data and to make inferences from it, and then laying out an essay that shows, every step of the way, every inference that needs to be made to get from specific data to the conclusions that we want to argue. From the very beginning of this class, we've looked at the difference between cognition, which is what your brain does in the background, most of which never actually becomes conscious, and metacognition, that is, the way you consciously think about and reflect on the cognition that's been happening this whole time. And metacognition has the ability, as Dunning says, to regulate and evaluate our knowledge, our cognition, but it doesn't usually do that. It usually does what Jonathan Haidt, the psychologist who gave us the metaphor of the lawyer riding on the back of the elephant, says it does. The lawyer on the back of the elephant doesn't steer the elephant; the lawyer, our metacognition, usually just explains, rationalizes after the fact, what the elephant has done, no matter what it does. So that metacognition has a choice. Does it just use ad hoc reasoning, pulling together fallacies, selected evidence, and that sort of thing to justify instinctual thinking or prejudice, pre-judgment? That would be motivated reasoning: beginning with a conclusion and trying to persuade others to accept it without adequately investigating the accuracy of that conclusion. But if we focus, we can have our metacognition do something much more difficult, and that is critical reasoning, which is intentionally and honestly looking for all the ways we might be wrong. Not just being critical of other people, that's easy enough, but being critical of ourselves and our own thinking practices. So critical reasoning means examining our assumptions, asking: is this drawn from data that is falsifiable? In other words, is this something that could be proven wrong, or is it so abstract or vague that it could never be tested at all? If so, has that testing, has that data, gone through peer review? Has the methodology used to gather that data been examined by other experts in that particular field? I've described peer reviewers as sharks with red pens; at least ideally, that's what they would be.
Picking apart anything in an empirical, scientific, or scholarly argument that does not pass the most rigorous standards of a particular discipline. And then we look to see what inferences we make from those facts. What definitions do we add to those facts? Do we agree on our definitions? Have we shown cause and effect as opposed to mere correlation? Do we agree on our values? Which outcomes are better than others, and why? And then, if we agree on all of those things, can we go from the facts to a policy claim? The Toulmin argument forces you not to just jump from a particular data set to a claim, but to ask yourself: what warrant gets me from the data to the claim? How would I justify that warrant to someone who doubts its general truth? And then, even for those who accept the general truth of a warrant but might not accept its relevance to the data, how would I qualify my particular claim to say that yes, this warrant is, in this case, applicable to this data? All of this is to say that you have engaged in the process of an essay, not just writing a paper, not just writing a report, but an essay in the modern sense of writing something, and also in the ancient sense from the Latin word exagium, which is a weighing, like testing, say, a piece of gold against something you know is gold, to see whether someone has diluted it with copper and tin. Or is this the real thing? Well, I can't know unless I have two things to measure. An essay is an argument of inquiry, not just an argument of persuasion. I am there to test my conclusion, not just to convince someone, not just to persuade someone. All of this goes into an essay, and once you've gone through that process, you have not just a pile of foregone conclusions that you presume are true; you have an earned conclusion. You have conclusions that have been tested. They're based on facts and they're based on valid warrants rather than invalid warrants or fallacies. And now that you've completed that process, now that you've created a Toulmin argument, you're ready to take that argument out to the rest of the world, to go and find an audience. And now comes the next step in the process. Who is your audience? What do you want from them? What can they gain by accepting the conclusion that you have earned? And most importantly, are they going to listen? Or is there something about the way they think, or the things they already know or think they know, that is going to cause your conclusion to be dismissed or potentially rejected out of hand? Because your audience, as you're going to find, has their own foregone conclusions. They have their own assumptions, things they think they know that just ain't so. That's why we have to remember that we are entering into a rhetorical situation. Rhetoric, the term by itself, might have negative connotations because we can think of people using words to persuade us of something that isn't true or that they don't know is true. But we have to come back to rhetoric eventually. Once we have used rigorous investigation to find the truth as best we can figure it out, then we have to engage in rhetoric in order to deliver that truth, to communicate that truth to our audience. And who is our audience? Our audience isn't just the entire world; it isn't just whoever will listen to us. The audience, according to Lloyd Bitzer when he defines the rhetorical situation, consists only of those persons who are capable of being influenced by discourse and of being mediators of change.
In other words, it is possible that they will change their minds, that they will listen to what you're saying, but also that they can do something about it. And a lot of people might seem at first not to be part of that audience. They might seem to be people who are not capable of being influenced. But we're going to look in the next section of this class at the things that get in the way of communication, things like motivated reasoning and the backfire effect. And we're gonna recognize that there are gonna be some impediments to our communication. But in many cases, these are gonna be impediments that we can get through without resorting to deceptive persuasion like logical fallacies. A case in point is the campaign over the last 20 years by the Centers for Disease Control, the CDC, to get parents to have their children vaccinated against diseases like measles, mumps, and rubella. The CDC has put out lots of different types of public awareness campaigns. They've had TV commercials, radio spots, and newspaper announcements, and a lot of literature and websites like these that try a range of persuasive strategies to get parents to understand the dangers of diseases like the measles and the safety of the vaccines. But there has been continued resistance, or what the World Health Organization calls vaccine hesitancy: parents who are afraid because they've heard rumors that there's something dangerous about vaccines. Because of that resistance and the difficulty the CDC has had in just getting straight information out there, in getting people to accept basic medical facts, political scientists Brendan Nyhan and Jason Reifler and their colleagues did a study back in 2014 called "Effective Messages in Vaccine Promotion," in which they tested some actual CDC information, leaflets, handouts, narratives, and images that the CDC was using to try to convince parents to get their kids vaccinated. They tested this out on actual parents, and they tried four different strategies to see which worked the best. One version, which they called the Autism Correction Message, was a factual, science-heavy correction of false claims that the MMR vaccine caused autism, assuring parents that the vaccine is safe and effective and citing multiple studies that disprove the claims of an autism link. Another version, which they call the Disease Risk Message, just talks about the risks of getting diseases like the measles and describes the nasty complications that can come with these diseases. Another version of the message was the Disease Narrative, and this was a story about a 10-month-old whose temperature shot to 106 degrees after he contracted the measles from another child in a pediatrician's waiting room. And lastly were the Disease Images, in this case showing parents images of children who had the measles, the mumps, and rubella. This was a way that the doctors could emphasize the importance of the vaccines. And among those four different messages, there was a difference in effectiveness. The Autism Correction Message, they say, quote, worked among survey respondents as a whole to somewhat reduce the belief in the falsehood that vaccines cause autism, but at the same time the message had the unexpected negative effect of decreasing the percentage of parents saying that they would likely vaccinate their children.
In other words, parents looked at all this data about the false link between the MMR vaccine and autism, and they said, okay, we accept that, we accept that there's not a proven link, that there's no evidence that the MMR vaccine causes autism, but those same parents were even less likely to get their children vaccinated than they were before hearing this information. The Disease Risk Messages, which just contained information about how bad these diseases are, didn't cause parents to be less likely to have their kids vaccinated, but they didn't really produce any benefits either, according to the authors. And most surprising were the last two, what you might think are the most emotionally salient ways of communicating just how bad the measles, mumps, and rubella are: a story about an individual child who has a 106-degree fever, and having to look at these images of children being afflicted by these diseases. In these cases, the authors say that the results show that by far the least successful messages were the Disease Narrative and the Disease Images. Hearing the frightening narrative actually increased respondents' likelihood of thinking that getting the MMR vaccine would lead to serious side effects, from 7.7% to 13.8%, nearly doubling the number of people who suspected complications or problems from vaccines. Similarly, looking at the disturbing images increased the test subjects' belief that vaccines cause autism. In other words, both of these messages backfired. And this term, the backfire effect, comes from two of the authors of this piece, Brendan Nyhan and Jason Reifler. These two researchers had been looking into political misconceptions all the way back during the Bush administration, during the invasion of Iraq, when there were beliefs that Iraq had weapons of mass destruction. People would be asked if any such weapons had ever been found. And people who supported President George W. Bush, who had been saying this, were very likely to say that weapons had been found when in actuality none had been. And even when they were confronted with the fact that there was no evidence of such weapons of mass destruction, they still held on to those beliefs, because of the claim that, as Donald Rumsfeld said, the absence of evidence is not evidence of absence. The fact that there was no proof of something doesn't actually mean that something doesn't exist. So it wasn't something as conclusive as proving that something you didn't believe was true actually was true. But the lack of evidence in these cases let people sort of deny, gave them plausible deniability in, the claims that they were making, independent of what the evidence showed. And the authors define the backfire effect in this earlier study, called "When Corrections Fail," by saying that in the backfire effect, corrections actually increase misperceptions. So when someone has a false belief and you correct that false belief with accurate data, that person might actually double down on the false belief. I want you to hold this definition in parentheses. Take it with a grain of salt right now, because in the next lecture I'm gonna come back and talk about some of the qualifications that need to be made to it. But what this study did find, the thing that Nyhan and Reifler thought led to the backfire effect, was something that is well documented outside these studies.
And that is this process, which they say people use to counter-argue preference-incongruent information and bolster their preexisting views. By counter-argue, obviously, they mean argue against something that doesn't fit their prejudices, their assumptions, the things that they prefer to be true or that they assumed were true. If people counter-argue unwelcome information vigorously enough, they may end up with, quote, more attitudinally congruent information in mind than before the debate. In other words, if someone challenges something that you believe, you can go to Google and find people who believe what you believe. And then, whether those are good facts or not, if they adhere to what you think ought to be true, you can take that information and use it against the person who was challenging it. That might not be accurate information, but you're going to be more confident at that point. As they say, this in turn leads to reported opinions that are more extreme than they otherwise would have been. Now, this term, the backfire effect, leads us into a couple of studies by Stephan Lewandowsky and his colleagues, one being the study "Misinformation and Its Correction" and the other being The Debunking Handbook, which Lewandowsky and one of his colleagues put together to help people, especially people who are trying to communicate scientific information that may be unwelcome to a certain audience, figure out how to deliver that information in a way that will be accepted or at least listened to. And they identify four types of backfire effect. The first of these is the continued influence effect. This effect was first studied by one of Lewandowsky's colleagues, Colleen Seifert, and her co-author, Hollyn Johnson, back in the mid-90s, when they were researching people's ability to recall a story that had been told to them, not just what they may or may not have forgotten, but elements that they were supposed to take out of the story after it had been corrected and that they might, in some cases, forget to correct. First, the test subjects were told a story about a fire that had started when a short circuit occurred in a warehouse near a closet that contained volatile materials such as cans of oil-based paint and paint in pressurized cylinders. After being told that version of the story, they were then told that a later investigation by an insurance adjuster found that the closet near the short circuit was actually empty. There were no paint cans. And after that, they were asked to make inferences about the events that were described, such as why they thought the fire had burned out of control, or why the smoke rising from the building was black, or why the insurance company had refused to cover the damages. And these test subjects answered those questions by making references to the paint cans, such as the oil-based paint that would make black smoke or continue to burn longer despite the sprinkler system spraying water on it. But these were people who had already been told that the paint cans were not in the closet. That first version of the story had been corrected. They had been told that the closet was actually empty. But they held on to that information because they needed it to complete their mental model of how the fire started. And Lewandowsky, Seifert, and their co-authors write that if a retraction invalidates a central piece of information, people will be left with a gap in the model of the event and an event representation that just doesn't make sense unless they maintain the false assertion.
Therefore, when questioned about the event, a person may still rely on the retracted information to respond, despite demonstrating awareness of the correction when asked about it directly. In other words, if you ask these people, do you remember the retraction, the correction that said that there were no paint cans actually in the closet? They will say, yes, I remember that. But when they're asked to explain why the fire spread the way it did, they then describe the situation as if the paint cans were actually there. So Lewandowsky and Seifert say, people tend to fill in gaps in episodic memory with inaccurate but congruent information if such information is readily available from the event schemata. In other words, having an explanation, even though we're consciously aware that it's the wrong explanation, some explanation, seems to be more satisfying than no explanation at all. We don't like to say, I don't know. Now in contrast, when these people were given an alternative explanation, in this case, they were told that a later investigation found signs of gas cans and matches, in other words, the potential for arson, and when they were then asked to explain the fire, they didn't refer to the paint cans, they referred to the gas cans. And this is why Lewandowsky and colleagues say that it's not enough to just retract information. You can't just say that you may have thought this information was true, but it turns out it's not true, and then leave it there. It helps to give an alternative account, if one is available, to help people fill in this explanatory gap. Now, we've come across something like this in the past: the need for closure. People don't like not knowing something. I call this the IMDb reflex. If you see somebody in a movie that you recognize, you know the actor, you've seen them somewhere recently, but you can't think of who it is, you just can't resist taking your phone out, looking at the Internet Movie Database, and seeing who that person is and what else you've seen him or her in. We don't like knowing these gaps are there in our knowledge. And so we have this tendency to seize and freeze: seize on information, go to Google, go to some other source, ask somebody, and then, once we have an answer, feel pretty good that that is the answer and not continue to investigate it, not reflectively use our metacognition to say, is that actually the best answer, though? Do I need to keep looking? We grab on to whatever the most accessible cues to information are and, as these authors say, under-adjust our judgments; we're unwilling to change those judgments, those assumptions, once we have something to fill in that knowledge gap. And for this reason, we have a bias toward permanent knowledge. But permanent knowledge is a generalization that might be so vague that it's not really that helpful; it just feels like an answer. This becomes even more significant when we feel under pressure, or we feel a loss of control, or we feel insecure. I've referred in the past to this study from the University of Texas at Austin, in which Jennifer Whitson and Adam Galinsky asked business professionals to recall certain experiences. One group was asked to remember a time when they accomplished a task or solved a problem, and then they were shown this image on the top right and were asked if they saw any patterns in it, and they said, no, I don't see any patterns.
But a different group was asked to remember a time when they were unable to solve a problem, and they were shown that same image, but they would look at it and see patterns that just weren't there. We're more likely to grasp after assumptions or patterns or explanations when we feel this loss of control. And there are a lot of examples of this kind of thing in the last few years, but one of the most famous is the retraction of Andrew Wakefield's study, the one peer-reviewed scientific study that claimed to show evidence for some sort of connection between the MMR vaccine and autism. Of course, Brian Deer's investigation and the investigation by Britain's General Medical Council showed that not only was that evidence base tiny, coming from just 12 children, but even that data had been altered. Every single one of the children described in the study had had data deliberately changed by Wakefield to make it look like there was a causal connection rather than a correlation, and sometimes there wasn't even enough data to show a correlation. But despite that retraction, that assumption, that explanation had been put out there, and parents had accepted it. They assumed they had an answer to where autism comes from, and so they'd stopped getting their children vaccinated. And because of that lack of vaccination, because of that vaccine hesitancy, the measles has begun to break out again in places like Washington State and even here in Texas. A New York Times journalist, Susan Dominus, interviewed some parents here in Texas who were vocal supporters of Andrew Wakefield, and she also interviewed doctors who had tried to communicate with parents to tell them that there is no causal connection between the MMR vaccine and autism. Doctors like Thomas Insel, then the director of the National Institute of Mental Health, said that obviously something is to blame, and he and other researchers haven't found anything that looks like a smoking gun, but people grasp onto various explanations. We still don't have an answer, and Dominus notes that to parents who have run up against the unsatisfying answers from the scientific community, Wakefield offers a combination of celebrity and empathy that leaves strong impressions. In other words, scientists always want to give lots and lots and lots of qualifications. They don't want to overstate the generalizations, the general conclusions and inferences, that might come from their specific research. But people don't like that. People want answers that feel good, or at least feel like answers. So faced with this lack of an answer for where autism comes from, parents go to the person who seems to give them an answer, even though he's been proven to be a fraud. Some answer still feels better than no answer at all. Now, the problem with autism is that we don't have an alternative account. Just over the past few months, researchers have been discovering that there are genetic elements that may be predictive, where you could look at a child's DNA before six months of age and predict that, even though this child has developed normally for the first six months, that child will start to show signs of autism. But there's still not a really crisp, clean, succinct answer being delivered from the medical community to parents yet. So there are always going to be situations like this where we don't have a clear, definite alternative account. So what do we do in this situation?
Well, if there's no alternative account, we can do the next best thing, according to Lewandowsky, which is to repeat the retraction. Every time you describe the misinformation, be sure to have the retraction there alongside it. A good example of this comes from that same article by Susan Dominus. Even though there's no clear causal explanation for autism at this point, we still know that the connection between the MMR vaccine and autism is fake, because it has been repeatedly studied. People have been focusing on that explanation and found no evidence for it. So it's not like it hasn't been tested. So, according to Insel, the vaccine link claimed in the Wakefield paper is one of the few factors that can be ruled out. The author throws out these obviously false causal explanations: could it be aspartame? Could it be ultraviolet radiation? Could it be Elmo? No one knows. Even though that might seem sarcastic and might seem a bit cruel, it at least calls attention to the fact that you can't just throw any two things together and assume that if there's a correlation between two things, then there must be a causal link. So aspartame, UV light, and Elmo aren't actual alternative accounts, but they're a form of retraction that says, just because two things happen together doesn't mean they're causally connected. So it combines the retraction with at least the form of an alternative account. Now, in the absence of an alternative account, if you have to repeatedly give retractions as you describe this misconception, this misinformation, you have to be careful you don't run into the next type of backfire effect, which is the familiarity backfire effect. That comes from the fact that the more you talk about something that doesn't exist, the more you remind people of the thing you're trying to disprove. For example, don't think of an elephant. Right now, while you're watching this video, think about the familiarity backfire effect. Don't think of an elephant. Don't think of an elephant's trunk. Don't think of its tusks. Don't think of the way we've used the rider-on-the-back-of-the-elephant metaphor from Jonathan Haidt in the past. Just don't think of an elephant. Now, obviously, if I tell you not to think of an elephant, and I put a picture of an elephant right in front of you, the fact that I'm telling you not to think of the elephant doesn't stop you from actually thinking of the elephant. This is an example of a conceptual frame or a rhetorical frame, an idea that comes from the cognitive linguistics professor George Lakoff. Lakoff, in his book Don't Think of an Elephant, says every word like "elephant" evokes a frame, a conceptual, cognitive frame, which can be an image or other kind of knowledge: elephants are large, have floppy ears and a trunk, and are associated with circuses, and so on. The word is defined relative to that frame. When we negate the frame, we evoke the frame. By saying the word, even though you're saying don't think of it, or you're disavowing it, you're actually reminding people of it. A very familiar example that Lakoff also uses is President Richard Nixon. Now, you may not know much about Richard Nixon, but he was the 37th president of the United States, and he had to step down from office nearly 50 years ago. But you probably know at least one quotation from him, maybe only one. And it's a quotation where he says not what he is, but what he is not. During the Watergate scandal that would eventually force him to resign before he could be impeached, he famously said, "I am not a crook."
And even though he was saying I'm not a crook, that word, crook, even though it's a word we don't use very often, is one that has stuck with him for nearly 50 years. And this can happen even when the information we're trying to communicate is good, or when, as in the MMR case, we're trying to communicate that something people thought was bad is actually not bad. The investigation that Brian Deer did that first brought attention to Wakefield's fraud, to that fraudulent paper that sort of started the whole MMR scare in the first place, was published in Britain's Sunday Times. And this is an image of several of the actual Sunday Times pages as they appear on Brian Deer's website. But notice the way the story that Brian Deer wrote, arguing that there's no evidence for the connection between MMR and autism, was accompanied by these pictures. Notice the pictures interspersed with this very dry collection of information about Wakefield's alteration of the data. Basically, the core message of the article is that there was no proven connection between MMR and autism. But those images are reminding people that children and shots don't go well together. Just the thought of having to inject a syringe into a baby makes us feel uncomfortable, makes us feel that child's pain. The baby on top is crying, obviously very upset. The child on the bottom looks more depressed, looks like this is not just a pain reaction but some longer-term distress. And these images of pain connected with the injection are juxtaposed with images of Wakefield and his celebrity supporters marching, showing solidarity, smiles on their faces, like they're championing this truth while the medical community, the people who are trying to save these children from measles, mumps, and rubella, are out harming them. The effect of those images was probably unintentional, but they evoke the frame that there's something bad about vaccinations, something harmful about vaccinations, even while Brian Deer is trying to argue that there's no evidence that there is, that the vaccinations are overwhelmingly beneficial. And this kind of thing has also happened when the CDC tries to communicate information to parents. They will say things like, you may have heard about this myth, but here's the truth. And in debunking the myth, they actually use a lot of words that parents react to, while the explanations of the facts might easily be overlooked. So what Lewandowsky and colleagues argue is that you want to accompany the myth with a retraction, but you don't wanna overdo the description of the myth. So you might wanna have a pre-exposure warning. When debunking the assumed connection between the MMR vaccine and autism, instead of saying something like, for 20 years many people have believed that the MMR vaccine causes autism, in which case you're re-engaging the frame, you're evoking the frame that there's something bad about the MMR vaccine, start instead by saying, for 20 years a fraudulent study has misled parents into believing a medical myth. In this version, you are still introducing the subject, but you're giving a lot of those pre-exposure warnings, an upfront warning that misleading information is about to come. So by saying a fraudulent study, instead of saying a study, blah, blah, blah, later shown to be fraudulent, you say right away: fraudulent study. And it hasn't convinced people, it has misled people.
And it's not the belief that vaccines cause autism, because people frequently use the word belief as a good thing. Well, this is what I believe, as if its truth value lies in the fact that I believe it. Instead, right from the beginning, say that it is a medical myth. And after that pre-exposure warning, emphasize the facts. Be sure to spend more time describing the facts than evoking the myth. For example, say at the beginning that the measles is an extremely contagious virus. It can cause serious respiratory symptoms, fever, and rash. For babies and young children, the consequences can be severe. Measles killed an estimated 110,000 people globally in 2017, mostly children under five. Measles was declared eliminated in the United States in 2000 thanks to widespread vaccination. Then give the myth: since then, a fraudulent study has been used to mislead parents into believing a myth that the MMR vaccine is somehow connected to autism. And then finish with a fact: in the first three months of 2019, 387 individual cases were confirmed in 15 states, already giving this year the second-greatest number of reported cases since 2000. This emphasis on facts will help to drown out the re-evoking of the myth. But then you have to be careful about the next backfire effect, which they call the overkill backfire effect. You may have already suspected that I tend to err toward overkill when it comes to providing information. I don't know how far into this video, time-wise, I am right now, but I promise you I have cut out much more information than I originally recorded. Another frequent example of too much information might be when you're trying to do a specific task on your computer, and if you don't have a lot of software experience, especially with the particular piece of software you're using, you might ask someone who can help you: how do I do this one thing? For example, say you wrote your Toulmin argument in a Google Doc and you have to save it as a Microsoft Word document. Sometimes it's just a matter of saving it in that format, but sometimes you upload it and you see that there are formatting errors. So you ask someone for help. Why can't my particular Google Doc be easily transformed into a Microsoft Word document? And you ask someone with computer experience, and instead of a simple how-to description, you get a lot of description of the coding differences between Google Docs, which is built more like an HTML document, and Microsoft Word. And all of this information might be true, and the person giving you this information might be doing it because they think it's all relevant and it's all something that you're going to need to know. And this might be information that, if you were an expert, would be extremely helpful, not just for this one particular instance, but for any time you ran into a similar problem in the future. However, if you're not already familiar with this language and you don't know what these references to coding terms mean, then it's going to have the opposite of the intended effect. Instead of solving the problem, you're going to feel like you have an answer, which is that it is impossible for my software to save the file in the proper format.
Whatever your own form of expertise, if you've spent a lot of time researching a particular issue and you've constructed a very strong argument about that issue, you may really resist having to simplify all of that description and all of the backing for the warrant and that sort of thing, because you know that if you were talking with experts, they might call you out on something you left out. For example, if you're trying to describe the MMR vaccine and say it's totally safe, well, okay, it's not totally safe. Some people with certain allergies are going to respond negatively to it, the same as they would to any other vaccination or really anything else you take into your body. With any food, much less anything that's injected directly into your bloodstream, there is the potential for risk. But if you lay out all of those risks, it might be a good qualification, but it might actually cause the overkill backfire effect. And I have this problem a lot. The world is not a simple place. When I hear people use that cliche, keep it simple, stupid, I get really annoyed, because the world is not simple, and if you need a simplistic explanation, you're saying you don't really want to know the truth. But that's probably a little too aggressive. The world is very, very, very complicated, and the brain can take in a lot of complicated nuance, a lot of qualification, a lot of extra data, but only up to a point. In order to get through our day, in order to get from one place to another, we have to simplify things. We have to make generalizations, and those generalizations are ultimately gonna be oversimplified. We have to make analogies, we have to use metaphors, and those are ultimately going to be false analogies. There are gonna be some things that do not compare between the two things we're comparing. But the world itself is too complicated to deal with directly. Too much information causes anxiety. So instead of the keep-it-simple-stupid cliche, another cliche that's maybe more helpful here is the one that says the map is not the territory. There's a short story by Jorge Luis Borges in which someone makes a perfect map, a map with every single molehill and every single mud puddle, that perfectly resembles the land it describes. The thing is, the map is just as large as the territory it describes, and so it's useless. You can't fold it up, you can't hold it up, you can't look at it and get from one place to another. You may as well just find your way on your own. So the map has to be simplified in order for it to function as a map. And a map is a heuristic. Remember this term that I introduced in the very first lecture. A heuristic is a strategy for solving a problem that kinda works sometimes, if not all the time. It works more or less, it's better than nothing, but it's not perfect. But it's simple and usually gets the job done. That's a heuristic. And so a map is a heuristic. It's a highly simplified description of a very complicated landscape. And having a really detailed map is better than having no map at all and having to deal with the actual landscape. But if I'm trying to get a specific job done, if I have a specific problem to solve, then I can simplify my map to focus on just that end result, whether it's using Google Docs to save a Microsoft Word document or, in this case, traveling across Texas to get to Big Bend National Park. The simpler the map, the easier it is for me to come to the conclusion it's leading me to. And of course, when something's simplified, it's gonna leave a lot out.
For example, the simplified map here doesn't include Corpus Christi. But it includes San Antonio, and I can get from Corpus Christi to San Antonio. And once I get there, then I can figure out that I can take either Highway 90 or Interstate 10 to get to Big Bend National Park. Whereas if I gave you directions from Corpus Christi to Big Bend National Park that described every single turn and every single lane change, there's no way you're gonna be able to take in all that information. And even if you were able to write all that information down, you're not gonna be able to drive and read it at the same time. So recognize that your audience needs heuristics more than they need ironclad, overqualified, over-backed data. Now, this doesn't mean that you can't include a lot of data. But especially if you're writing an essay, when you write your introduction and your conclusion, focus on what you want your readers to take away from your argument even if they forget all the data. The data comes in the body of the argument, where you're laying out your data, your evidence, describing your warrant and backing it up, maybe with more data, and describing the qualifications and rebuttals. It's easy to worry about all the different potential rebuttals, all the different potential counterarguments out there, and try to answer all of them in one paper. And depending on who your audience is, if you're writing for experts, you might need to do that. But if you're writing for a general audience, then you're going to need to simplify it at some point. You can still include a lot of data and a lot of description of your warrants and examination of different types of stasis in the middle of the paper, but in your conclusion especially, and potentially also in your introduction, decide what heuristic you want your reader to walk away with, even if they forget all the other stuff, all the warrants, all the data. What heuristic do you want them to walk away with in the end? And this is going to be even more complicated when you're trying to correct someone else's false heuristic, some misinformation that people have been using that has worked sometimes for them. Like in this case: if someone has a map of France that they think is Texas, you can see why there might be some confusion. There is a Paris, France, and there's a Paris, Texas. We both have red, white, and blue flags, although theirs doesn't actually have a star. And if you leave from Paris, France, and you go southwest until you run into some mountains where people speak Spanish, you will come to a mountainous area where there's lots of hiking. Same thing if you leave from Paris, Texas: you go southwest, and when you come to some mountains where people speak Spanish, you will be in Big Bend National Park. So following that map, that general heuristic, whether you're in Paris, Texas, or Paris, France, might actually work for you. But this is still a map that I would want to correct. This is an inaccurate heuristic, but I'm not going to get that person to throw away their map if I try to give a very, very, very accurate piece of information that has an overkill backfire effect, that causes too much anxiety because it's more than they can actually take in. So what I want to do is replace their inaccurate heuristic with a more accurate heuristic. Of course, all of the proof of that heuristic's accuracy is work that I should have done already.
I should have done all that research in my initial analysis. But once I've really earned that conclusion, then I want to put it into a heuristic that other people can understand. And that's why Lewandowsky and his co-authors suggest that, to avoid the overkill backfire effect, you use a simple, brief rebuttal, with fewer arguments, when refuting the myth. But you can at least arrange your argument so that the refutation invites further questions, which brings us to their second piece of advice: foster healthy skepticism. Now, fostering skepticism means encouraging someone to be skeptical of everything, including the beliefs they already hold, but also, just by fostering skepticism, you're encouraging them to be skeptical of you and your argument. Now, if you've got a really good argument, and you've got lots of data to back it up, and you know what the warrants are that connect that data to the conclusion, then you welcome skepticism. Skepticism is your friend. The truth does not fear questioning. And it is that skepticism that will allow the reader or the audience to decide what questions to ask next. So obviously this depends on some sort of back-and-forth communication. By fostering skepticism, you then tell your audience: I've given you a simple heuristic; now what other questions do you have? Do you want to challenge me on any of these specific points? I don't go ahead and predict the thousand different rebuttals that I might need to refute and then refute them all. I wait and say, you be skeptical of me and ask me the questions that you have, and then I answer those one at a time. That way you don't overwhelm your audience with the overkill backfire effect. Now, there's one more backfire effect that Lewandowsky and his colleagues describe, and that is the worldview backfire effect. This is by far the most complicated, but also probably the most important, and because of that I'm going to devote an entire lecture video to it. So that's coming up next.