Any time you have to communicate new information to someone, you're telling them at least two things. You're telling them the information you intended to tell them, but you're also telling them that their knowledge is incomplete. Even worse, if they're holding on to a piece of information that they already have a feeling of closure about, that feeling of closure makes them feel good, and you're going to be the one who takes it away with your new information.

Every facet of human cognition we've examined over the course of this semester, from motivated reasoning to critical thinking, influences, whether we know it or not, our own conclusions and also our ability to communicate those conclusions to others. So let me very briefly review. The human mind is very good at recognizing patterns in the midst of an ocean of data. Most of that data is irrelevant, and we have to simplify it. Most of the time, that ability to simplify data into patterns is very helpful, but sometimes we see things that aren't really there: faces in the clouds, or in random objects, or on Mars. And unfortunately, when we think we see these patterns, we look for evidence that confirms that first thought. What we don't do is look for counter-evidence. In other words, we don't usually stop to try to falsify that first thought. We're not very good at cognitive reflection or critical thinking, critical thinking being criticizing our own ideas.

That kind of thinking can get us through the day. It helps us think quickly, if not deeply. We quickly come up with heuristics, strategies that work more often than not, but not all the time. And everybody does this, including very intelligent people. Doctors do it. But this kind of thinking leads us to believe that we know a lot more than we really do. We rarely stop to reflect on how well we know something. It's only when something goes wrong, or someone asks us to explain what we think we know, that the superficiality of our understanding becomes apparent. That's not just a problem with knowledge about complicated or unfamiliar things. We frequently never even notice how the simple things around us that we use every day work, just so long as they do work. And on those rare occasions when we're confronted with a question we can't immediately answer, we have sources at our disposal that allow us to quickly find an answer. Not the answer, usually not even the best answer, but an answer. And then we start this seize-and-freeze process all over again.

Google is basically a confirmation bias machine. Whatever foregone conclusion we want to select evidence for, that evidence will be provided for us somewhere by some website. So we scan through and look for things that confirm what we want to conclude and ignore the counter-evidence. Ignoring that counter-evidence is called disconfirmation bias. Just as confirmation bias is looking for the evidence that confirms our hypotheses or assumptions, disconfirmation bias is what happens when we're confronted with counter-evidence: we're motivated to dispute or dismiss any evidence that threatens our beliefs or our feeling of closure. This is why psychologists who study metacognition try to bring to our attention not just our ignorance, but our ignorance of our ignorance. But the research that we've been looking at this whole semester has been describing people in general.
When David Dunning tells us that we're all confident idiots, he's talking about every human being on the planet. He's not singling you out. He's not telling you that you got something wrong and that other people didn't. But when you make a specific argument correcting a specific misconception, or refuting a particular argument made by a particular person, you're telling that person not that we are all confident idiots, but that you are a confident idiot. I don't recommend you put it that bluntly in your writing. But even if you don't, your refutation of someone else's argument is likely to carry that connotation. The fact that you're correcting someone else's belief means that they have to recognize that they didn't do as good a job as you did in searching for an answer. Even if you back it up with solid evidence and valid logic, you're invalidating more than just that one belief. You're refuting their self-image. And they are going to respond by protecting that self-image. They're going to respond with disconfirmation bias, because they don't want to accept that second part: not necessarily the part about the specific knowledge they're holding onto, but the part about them not knowing. And that might be the real emotional motivation behind disconfirmation bias.

That's why the need for closure doesn't just lead us to seize on the first available answer and freeze, holding on to it in the face of counter-evidence. It also makes us want to have that knowledge affirmed by other people. As Richter and Kruglanski put it, the need for closure, quote, "induces a preference for consensual knowledge exhibiting stability or consistency across persons." When everyone we know believes what we believe, we can rest easy in that sense of closure. We don't need to keep looking for an answer or thinking critically. But when someone disputes our beliefs, they threaten that feeling of closure, and that means we have to make a choice. Do I reject my assumption, and with it that feeling of closure, or do I reject the person who is threatening my feeling of closure?

And this is true whether you're arguing with a single individual or with a larger audience. When you write an essay, you're not writing an email. You're writing for a much larger audience, even if each member of that audience reads the essay individually. And as Dunning says, the audience is not composed of spotless, empty vessels, people who are simply waiting for new information that they can passively absorb just because you brought it to them. They are, quote, "filled with the clutter of irrelevant or misleading life experiences, theories, facts, intuitions, strategies, algorithms, heuristics, metaphors, and hunches that regrettably have the look and feel of useful and accurate knowledge." As we saw last week, many of those heuristics and theories come from people's social or ideological groups. That's going to be important, especially if you're correcting a misconception that is shared by one of their groups.

Just as with individuals, you're communicating the conclusion of a reasoning process, and that conclusion is drawn from evidence. But as we've seen so far in this class, evidence by itself doesn't lead to a conclusion. The evidence, or the data in the terminology of the Toulmin argument, is connected to the conclusion, or the claim, by a warrant: some general principle that says when the data is X, you may conclude Y. Most arguments depend on many warrants, and those warrants take the form of different types of stasis.
Questions in the stasis of fact or causation are independently verifiable. You can empirically test their truth, independent of your own thinking process. A claim about fact or causation is either true or false, regardless of what anyone thinks. However, questions in the stasis of definition and value are not objectively true or false. We can't point to a definition that exists out there in the world, independent of human usage. Even the dictionary just follows usage; it doesn't prescribe it. And while you can probably think of some values that seem like they ought to be universal, there's going to be someone somewhere in the world who doesn't share them. For many of the values we think ought to be universal, there are entire populations who don't share them: people from different cultures, different identity groups, different religions, different ideologies, different worldviews. If you live in a community where everyone shares your values and definitions, you don't really have to worry about that. You can make arguments based entirely on new discoveries of fact or causation. But if you need to make an argument to an audience with different values and definitions, you have some choices to make.

Let's say we have a group of people who are united by their belief in a particular conclusion, represented by the red book in the bottom right. They all believe that conclusion, and they have yet to come up against any reason to doubt it. But another group of people have. They do not come to the same conclusion as the red group, though they may not be able to see exactly why. Let's say, for the sake of argument, that you researched and critically evaluated the question that divides these two groups, and you agree with the blue group on its conclusion. And now you're confident enough in the conclusion, and you believe it is important enough, that you have taken up the challenge and decided to make the argument to the red group that their belief is wrong and the blue group's belief is right.

I'm going to call this a second-person audience. When you speak directly to someone, you speak in the second person. You can think of this in relation to our pronouns. If I say I or me or we or us, that's the first person. The second person is you: if I say you, I'm talking to you. The third person is he, she, it, they, or them, some group other than the two of us who are talking. So you're the second person, I'm the first person, and another group of people we might be talking about is the third person. A second-person audience that speaks back is an interlocutor. That's kind of a clunky term, but it literally means someone who speaks between. So it's not just me talking to you, but you talking back to me and then me responding. There's a back-and-forth participation.

We might usually think that when we argue with someone, it is that person whose mind we're trying to change. And sometimes it is, but there are reasons and motivations to argue with someone other than to change that person's mind. Consider the typical debate format we see in staged political debates or in the pundit commentary on cable news. Neither side is really interested in convincing the other side to change its conclusions. Both are focused on other people who haven't made up their minds. That is a third-person audience. I'm talking to you, but I'm trying to convince someone else who's observing or listening to our discussion.
And since I'm focused on that third-party group, in order to convince them all I have to do is be more persuasive than you are. There are many problems with this kind of argument. It is focused on winning: beating an opponent by rushing the audience to your conclusion rather than making people slow down and think critically about the entire issue. And there's a danger here in being too nuanced, since your opponent might exploit intuitive fallacies that are more convincing than deliberate critical thinking. This combative atmosphere rushes everyone involved to get to the goal as quickly as possible, and the goal in a debate is persuasion, not genuine understanding. And because the debater is trying to win over that third-person audience, he isn't even trying to persuade the interlocutor, his opponent. For that reason, each side can attack the other speaker, resorting to ad hominem arguments or even demonizing the opposition, as if their misunderstanding were the result of conscious deception or malice.

And then there's what I'm going to call the first-person argument, or maybe the first-person plural. I'm talking to you, you are the second person, but I'm saying what I want my group to hear. I'm speaking for us. My aim is to affirm my group's core beliefs and demonstrate my own commitment to the group's ideology. And that means that the primary audience, even though I'm talking to you, is actually us: the group that already shares the speaker's beliefs and conclusions. Even a well-supported argument might be primarily constructed to affirm the group's beliefs. At least that's the emotional goal. The groups we belong to praise our efforts when we put forward an argument for a claim that they hold dear. And that gives us some emotional confirmation that we're right, that we're not alone, that we're a part of that group, and that we need to go out and spread the word.

But when we're more concerned with what our in-group thinks than we are with what our opposition, our interlocutor, thinks, we make a different kind of argument. We tend to use the slogans and the terminology of our group, even though people outside the group may not understand the premises behind those terms. And we often make claims about the values our group holds with the assumption that everyone should already espouse those same values. In other words, we end up announcing the conclusions of our arguments rather than explaining the premises behind them. This doesn't help change the minds of those who hold opposing views. But it's emotionally satisfying, because we get positive feedback from those who already agree with us.

This is the kind of problem people are describing when they use the phrase virtue signaling. Now, this term gets overused. It's frequently used to dismiss someone's argument without a rebuttal, to cast aspersions on their motivation rather than on their argument, which would itself be an ad hominem or red herring argument. But there is a common motivation for members of a group to demonstrate their commitment to the group's ideology, and that means that the primary audience is the group that already shares the speaker's conclusion. On the political right, we see people attacking others for lack of patriotism or adherence to a religious doctrine. All the person making that argument is really saying is: I'm a great patriot, or I'm really committed to my faith. They're not actually changing the way someone else thinks. On the left, people often accuse each other of racism or sexism.
And it should be obvious that values like equality and patriotism are valid and powerful virtues. That's exactly why they're invoked by people who are virtue signaling. However, because this kind of argument isn't really aimed at the individuals being condemned, the arguer doesn't stop to define the terms or explain the warrants that are familiar to those in his group but unfamiliar to those outside it. We might have different definitions of what patriotism is or what constitutes prejudice. So instead of just saying, you ought to follow this norm, we should stop and explain the definition. It makes no sense to condemn a person for violating doctrines of your religion if she doesn't believe in your religion. And you're unlikely to make someone feel ashamed of committing a racial microaggression if they don't even know what a microaggression is, or why the specific thing they did might be considered offensive.

It's the same basic criticism that Jesus gives in the Sermon on the Mount, in Matthew 6. The common thread is that even though you're pretending to talk to one audience, you're actually more concerned with impressing those around you, those who see you and hear you and hear the moral piety you're expressing. Now, when people use terms like virtue signaling or social justice warrior or even hypocrite, they're usually being dismissive, implying that the person who is virtue signaling doesn't actually believe what he or she is saying, that he is being consciously dishonest. I don't think that's a necessary condition. There may be people who do this kind of thing only to gain prestige within their own group, but there are going to be a lot more who make a first-person type of argument out of a heartfelt feeling. They honestly believe what they're saying, and at a very deep level, approval from our group makes us feel good. So we naturally do what wins us approval. But if our motivation for arguing is to actually change the mind of someone who doesn't already agree with us, then speaking to them in the slogans and terminology that motivate our own group just isn't going to get the job done.

Where virtue signaling really becomes problematic is when we signal our virtue to our own group by attacking the other group, by attacking the opposition. This tactic evokes the argument-as-war metaphor. It's a way of thinking that makes a lot of intuitive sense and awakens our deeply held, evolved instincts for tribal warfare. Group solidarity means we unite most strongly when we're attacking someone else or defending against a rival group or tribe. But this shuts down the potential for critical thinking and any chance at open dialogue between the sides. Protests and counter-protests work this way. You can see people taking this metaphor to an extreme when white nationalists and anti-fascist protesters face off at a protest. Many of them are literally dressed for war, in helmets and face masks, carrying clubs and knives. But even in these extremes, it is slogans and terminology that get thrown at the opposition more than bottles and rocks. And it's those slogans and clichés that seem like the right thing to do. It seems like we're communicating something. But all we're really doing is spurring recognition and enthusiasm from those who are already on our side. And we're doing the exact opposite to the other side. We're telling them that we're right and we're there to fight. We're not there to discuss, or even to think.
We're not there to listen. These are usually the people being described as social justice warriors on the left or culture warriors on the right. Bill O'Reilly titled one of his books Culture Warrior. And the metaphor presupposes that because some people don't have the same values, they are by definition our enemies in a war. Cultures, in this way of thinking, are like homogeneous, regimented armies facing off on a battlefield. But in reality, each of us is part of many cultures, even the ones that are supposedly at war, the ones we're supposedly obliged to choose sides between. And if cultures really are at war, then being friendly to the other side is treason. The goal would then be to silence our opposition, signal our loyalty to the group, or capture the third-party audience. That's how the argument-as-war metaphor functions, and it has some pretty nasty side effects. If our goal is to change someone's mind, it's a complete failure.

During the second war with Iraq, President George W. Bush said that we were fighting a war for the hearts and minds of the Iraqi people. And very quickly, people on the left responded with the slogan that you don't win the battle for hearts and minds with bullets and bombs. That's true, but it's true even when the bombs we're dropping are rhetorical. So just like the debater, the virtue signaler is not really trying to communicate with his second-person audience, with his interlocutor, the person he's talking to directly. He's talking at that audience, trying to bounce his message off of them so that someone else will hear him. He's not really trying to communicate with them, and he's certainly not likely to change anyone's mind. He's performing for his team.

So now that you've put together a solid, well-researched, fact-based, logically valid argument, if you really want to communicate that argument, not just to those who already agree with it, to show them how good you are at coming up with arguments for your team's foregone conclusions, and not just to win over people who are undecided, but also to communicate with people who disagree with you, then you have to take into consideration how differently each of those three groups will receive the same message.

When you lay out your evidence and explain how it logically leads to your conclusion, that message will automatically contain a message to your own group. It's going to say to them: hey, you were right all along, yay us. Obviously that's a message people like to hear. You're protecting their need for closure. When you deliver that exact same argument to people who had no opinion on the matter to start with, there is also a hidden message contained in your argument, and it is that you know something that they don't. That might be good or bad. It might make you look intelligent, or it might just make you look self-important. But either way, it probably won't trigger their emotions to agree or disagree beyond what is merited by the argument itself. But when your audience consists of people who hold a conclusion different from yours, the exact same message that told one group they were right all along, and another group that there was something they didn't know, now tells this third group, your second-person audience, something they are not going to like: that they're wrong.
When you tell individuals that they're wrong, you threaten their self-esteem, and that's going to cause some immediate emotional resistance: disconfirmation bias. But if your argument refutes a belief that unites them to each other, you're not just threatening their individual self-esteem. You're threatening their connection to each other, their group identity. And disconfirmation bias, when it's backed up by group identity, can be a lot stronger than the disconfirmation bias that protects an individual's self-image or need for closure. As we saw last week, when there's a connection between belief and social identity, we respond with a lot more vigor. When someone challenges beliefs that unite our entire group, we don't see it as a question of evidence or reason. We see it as an attack on our group by hostile outsiders. No matter how well developed your argument, you're still an outsider. You're one of Them, with a capital T. Your audience will immediately make all sorts of assumptions about you that have nothing to do with your argument, and they'll use those assumptions as excuses to disregard your argument without even considering it. You might recognize this response as the ad hominem fallacy. It is a logical fallacy, but telling your audience that it's a logical fallacy won't exactly open their minds either.

As we saw last week, in most matters we don't choose our social identities based on our values. We choose our values based on our social identities. We use moral and political issues like footballs in a competition between tribes. When psychologists trick people into thinking that their political tribe holds different values than it actually does, people change their own values to match. It's not about principles. It's about us versus them. It's about winning. So we believe what we believe not because those beliefs are the conclusions of research and critical thinking, but because those beliefs are like badges that identify us as members of our tribe. And anyone who doesn't wear that badge, well, they ain't like us. Therefore, they're the bad guys.

And we don't even notice our group's biases, prejudices, or motivated reasoning. If everyone around us shares our biases and our foregone conclusions, then we assume those conclusions are just the truth, just common sense. But remember, Albert Einstein said that common sense is just a collection of prejudices acquired by age eighteen. So when an unbiased source of information presents us with facts that don't fit our group's assumptions and prejudices and biases, we assume that it's the source of information that's biased against us. This is known as the hostile media phenomenon. That otherwise objective source is now one of them. We look at them as if they're part of the other tribe, in league with the other tribe. And it goes back to that us-versus-them battle for victory. When we're confronted with evidence that threatens our tribe's victory, anything that might suggest we're not always the good guys or that the other guys aren't always the bad guys, we're very good at finding excuses to dismiss that evidence. More intelligent people aren't necessarily better at critical thinking, but they're definitely better at motivated reasoning, in this case, identity-protective cognition.
That means that they're better at cobbling together ad hoc arguments, full of fallacies and misinformation, that seem complicated enough to be critical thinking but are really just skillful instances of motivated reasoning. They know a lot of data, so they have a large data set to select from when creating selected evidence. But they're no less likely to disregard counter-evidence. They're no less susceptible to disconfirmation bias.

And this kind of motivated reasoning is even visible at the neural level, when you put people in an fMRI machine and look at how their brains actually distribute activity while they're thinking about these political and ideological conflicts. When we're confronted with counter-evidence that threatens our political identities, we respond to it with different parts of the brain than when we're confronted with counter-evidence to ordinary information. When test subjects learn that Thomas Edison didn't actually invent the light bulb, they can think about that rationally and update their beliefs. But when they learn that their beliefs about gun violence don't match the data, different parts of the brain are activated: the parts that deal with the fight-or-flight response. We respond to threats to our political identity the same way we respond to threats to our physical safety. One of the co-authors of this study, Sarah Gimbel, compares this neural response to the brain's response when you walk through the woods and come upon a bear. And this reaction isn't just a response to something like the tone of the person delivering the information. Even if you put together a solid argument, you do the research, you deal with all the evidence, not just the selected evidence, you identify and back up your warrants, you consider the rebuttals and either refute them or use them to qualify your claim; even after all of that, even after you put together this great Toulmin argument, depending on what you're saying, people might react to you as if you were a direct threat to their personal safety.

So it can seem hopeless sometimes, especially if you listen to cable news pundits and read anonymous political comments online. Maybe it's you they're responding to, but maybe it's the bear: the bear that's not there, the bear they think is there, because a core belief is under threat. And it might need to be under threat. It might be wrong. But even if it is wrong, you've got to find a way to turn off that alarm that is sounding in their heads.

If we've done the research and the critical reasoning and come to an earned conclusion, and if our goal really is to change people's minds, not just win arguments or increase our own prestige, then we have to think of our audience not as opponents, not as members of another social group or ideological tribe. We have to think of them as belonging to our own group: not an ideological group or an identity group, but a discourse community, a group with the common goal of understanding the truth. And we have to make all of this explicit. We have to convince our audience that we're on the same side. And that means we'll have to focus on communicating something that has nothing to do with the evidence or with the logic of our argument. We have to be more than factually accurate and logically valid. That's because we're going to be doing more than showing someone a well-constructed argument.
You have to show them that even if your argument threatens their belief, neither you nor the argument is a threat to them: not to their identity, not to their worldview. When they discuss the worldview backfire effect, Stephan Lewandowsky and his colleagues suggest that the way to disarm it is to affirm the group's identity and its worldview. That might seem like the opposite of what you want to do, but note that this does not mean affirming their false belief. You're just telling them that you're not threatening who they are. You're not dismissing their more abstract values. You're just focusing on one bit of misinformation or one invalid premise.

In this respect, Cook and Lewandowsky's strategy has a few relevant predecessors. The psychologist Claude Steele has been doing studies for decades showing that, at the individual level, people are much more willing to change false beliefs if they or someone else can pair the correction with some sort of self-affirmation, something that makes them feel good about who they are. Motivated reasoning, whether disconfirmation bias or cognitive dissonance, has a goal, and it's not just denying the truth. The goal is to make us feel like good people: morally good, competent, autonomous, liked by others. If new information threatens that self-concept, such as when you're told that someone was offended by something you said, you might respond by saying that that person is just too sensitive, or that there was nothing objectively offensive about what you said. But the reason you're dismissing that argument is primarily that you don't want to think of yourself as a bad person who would hurt someone's feelings. That's the ultimate goal. And that ultimate goal is probably not in direct conflict with the goal of the other person.

So, to switch roles: if I want to convince someone else that he said something offensive, I want to first make it clear that I'm not calling him an offensive person. I'm just focusing on the specific statement. It's the statement that was offensive, not you. I might say: I know you didn't intend to offend anyone, but here's why that statement might be interpreted as offensive. So I'm affirming the person's self-image even while I'm trying to correct a misperception. And I might even affirm something about the person that has nothing to do with the thing I'm trying to correct. Steele's experiments show that as long as people have something to feel good about, no matter how unrelated it is to the thing they're talking about at the time, they're more likely to think critically about the matter you want them to reconsider.

Steele's research demonstrated empirically what had been suspected by a few people, based on anecdotal evidence, long before. Most important for our purposes in a rhetoric class is the psychologist Carl Rogers. Like Steele, Rogers was dealing with individual people face to face. He was a psychotherapist, so he had people on the proverbial couch telling him about their lives, and then he would respond with advice based on psychotherapeutic rules and assumptions, some of which have, to some extent, been disproven since then. But the nature of his role was the same. He had to convince someone that he had something to offer them about themselves, especially some things about themselves that needed to be changed. He couldn't just come out and say, well, here's what your problem is. That's going to cause the person to lock up.
And so, because he had to deal with this, he had to find strategies that would allow him to say "here's what's wrong with you" without saying "there's something wrong with you," and to see when people would put their defenses up to protect their need to see themselves as good and complete and intelligent people, versus when they would take those defenses down and listen to the advice he had to give. In his book Client-Centered Therapy, he points out that a person learns significantly only those things that are perceived as being involved in the maintenance or enhancement of the structure of the self. This is about relevance to the self. There's lots of new information out there that you could learn. If you're in a university, there are lots of classes you can take, and lots of classes that you have to take, which might seem like a burden because the information is not something you see as directly relevant to you. You may be wondering, why do I have to take a writing class if I'm going to be an engineer? You don't see the direct relevance to you, even if you acknowledge that it might be relevant knowledge to someone else. So Rogers' first point is that we have to show someone why this information is relevant to them. This is very similar to what Lloyd Bitzer said about the rhetorical situation: we have to communicate exigence. Here's why this is important to you, the audience.

Second, Rogers points out that the structure and organization of the self appears to become more rigid under threat and to relax its boundaries when completely free from threat. So, just as Claude Steele would prove in his later research, Rogers surmised that when we feel our self-esteem threatened, we become more hostile toward the new information and toward critically evaluating what we already believe.

Third, Rogers says that the educational situation which most effectively promotes significant learning is one in which (a) the threat to the self of the learner is reduced to a minimum, and (b) differentiated perception of the field is facilitated. If we want to avoid triggering that defensive reaction in other people, we have to find a way to make our interlocutors feel safe, and let them see that their identities are not permanently bound to this one false belief. They can let that belief go and still be themselves.

Another way, according to Rogers, that you can reduce the feeling of threat is to let your interlocutor know that you're listening. Rogers writes that when the parties to a dispute realize that they are being understood, that someone sees how the situation seems to them, the statements grow less exaggerated and less defensive, and it is no longer necessary to maintain the attitude that "I am 100% right and you are 100% wrong." The influence of such an understanding catalyst in a group setting permits its members to come closer and closer to the objective truth involved in the relationship. In this way, mutual communication is established, and some type of agreement becomes much more possible. In other words, if you just throw information at the other person, information that person doesn't want to hear, you're only going to trigger a reaction. They're going to throw counter-information back at you, and whether it's coherent or not, their goal isn't to be right; their goal is to feel right, to defend themselves and their identities as people who know things, who aren't wrong, and who aren't bad.
But when we listen, we create a reciprocal process where the other person can contribute something to the conversation, and we listen and respond to that rather than just continuing to throw out information based on our own motivations and goals. By listening to the other person, we can shape not only how we say what we say, but also what to say next. We can figure out what questions need to be answered based on what they say. And this makes the other person feel like an interlocutor, not just a passive receiver of information. It makes both people into listeners as well as speakers.

Now, the reason this is relevant to us in a rhetoric class, and the reason that in a rhetoric class this is known as a Rogerian argument, is that Carl Rogers' writings and lectures were taken up by the rhetoric scholars Richard Young, Alton Becker, and Kenneth Pike in their book Rhetoric: Discovery and Change. In that book, they devoted an entire chapter to Carl Rogers' strategy. They introduced the term Rogerian argument, and they say that the Rogerian strategy seeks to reduce the reader's sense of threat so that he will be able to consider alternatives that may contribute to the creation of a more accurate image of the world, and to eliminate the conflict between the writer and the reader.

There are three things that the Rogerian argument will try to do. The first is to convey to the reader that he is understood. In other words, you're letting the reader know that you're not straw-manning them. You're responding to what they actually believe, the argument they actually hold, rather than claiming they believe something they don't. The second is to delineate the area within which he (the writer) believes the reader's position is valid. Now, let's be clear first about what this is not. It doesn't mean that you assume both sides are equally correct. That's frequently called both-sidesism, and it's similar to the fallacy of false equivalency. But I'm going to use the term false balance, because false equivalency can refer to other things, like comparing two things that aren't actually similar. False balance is specifically representing the two sides in a dispute as equivalent even though the evidence clearly supports one side more than the other. This is something we frequently see in the media. Because people in the media are so accustomed to being accused of bias, so accustomed to the hostile media effect they can provoke even when they're not being biased, they try very hard to say, well, here's one side and here's the other, as if both sides were equally supported by evidence, even though the evidence is where it is, independent of who wants to believe it. So if we take someone who believes the earth is flat, put them next to someone who believes the earth is round, and treat both arguments as if they're equally valid or equally supported, we're committing the fallacy of false balance.

That's not what the Rogerian argument requires, because beliefs are conclusions, and conclusions are based on a combination of premises: not just data, but also warrants. That means that something in the argument, something behind the conclusion, could be right even if some part is wrong. Maybe they got their facts wrong, maybe the data is wrong, but the warrant might be right, or at least valid. In general, the warrant might be true; it just might not hold in this case, because the data is wrong.
That means that before you point out that their facts are wrong, you should take the opportunity to point out that their warrant is right. You're affirming that they got part of the argument right; it's just that they didn't know some of the facts, and everybody is ignorant of something. Don't use the word ignorant, but nobody knows everything about everything, and when it comes to our general assumptions about the world, we like to have them validated. On the other hand, they might have the facts correct but be making an invalid inference based on a logical fallacy. In that case, before you point out the fallacy, give them the affirmation that they at least got the facts right.

Beyond that, it's almost never a single warrant that leads from evidence to a conclusion, so there are other ways you can separate out the different premises that might lead to a false conclusion. Think about the different types of stasis. If someone points to facts and then uses those facts to make a claim about policy, about what we should do about them, that claim will only be warranted if we agree on cause, value, and definition: how we categorize those facts, the definitions of the terms we use to describe them, the cause-and-effect relationships (causation versus mere correlation), and which outcomes we value or hold to be more valuable than others. Facts and causation are matters that can be proven empirically, regardless of worldview or identity. But things like definitions and values vary from one group to another, one culture to another. Definitions are tricky because we usually don't realize we're using different ones. We tend to forget that the associations we have with particular words aren't necessarily the ones other people have for those same words. But once we're conscious of the different definitions, we can clarify what we mean.

When it comes to questions of value, though, things are trickier. We can't resolve disagreements about value simply by explaining them. Remember from the lecture on the stasis of value that the psychologist Jonathan Haidt identifies five different moral foundations that are weighted differently by liberals and conservatives. And that means liberals and conservatives all across the world, not just Democrats and Republicans in the United States. Around the world, there are people who are psychologically conservative, and they tend to apply five criteria for morality: that harming people is bad and helping people is good; that you're obliged to be fair to people, to treat people fairly; that you're supposed to be loyal to your group, that you owe something to the group and shouldn't serve people outside it while neglecting people inside it; and that you're supposed to submit to authority figures. Now, who that authority is might differ. You might consider the government an authority, but you might consider the family a higher authority, and the church a higher authority still. But whoever the authority is, you're supposed to obey the authority because it is the authority, not because of the specific arguments it makes. And then there's the foundation of purity. We're supposed to keep ourselves morally pure, a metaphor that only makes sense if we think of people and things and actions as either clean or dirty.
But liberals, people who are psychologically liberal around the world, tend to prioritize not harming and being fair above loyalty, authority, and purity. It's not that they think loyalty to the group or obedience to authority or purity are unimportant, but as you'll see in the chart, they're not quite as important; they're about half as important as not harming people and being fair to them. And we only really differentiate these foundations when they come into conflict. So if someone with a lot of authority gets special treatment, a conservative might say that that's okay because he's an authority figure, whereas a liberal might say that his authority doesn't justify others being treated unfairly.

And whatever values people hold, we're not going to have much success telling them that those values are wrong. If a liberal says that authority is not a significant value, that you shouldn't prioritize it above fairness, that's probably not going to change the mind of someone who feels that value. Likewise, a conservative isn't going to change a liberal's mind by saying that you owe loyalty to your group, that as an American and a patriot you're not allowed to criticize the foreign policy of the United States because that would be un-American and disloyal to the group, especially if the liberal sees an American policy harming people in another country. So while you might be able to resolve a disagreement by explaining definitions, you're not going to resolve it simply by explaining your values.

What you can do, however, is find the values where you agree. So instead of focusing on whether authority or fairness is more important, shift the focus to the premise about harm. We might both agree that not harming someone is more important than obeying authority or being fair. The Rogerian strategy would be to focus on that, to at least mention the premise about value that we hold in common.

And the third Rogerian task is to induce our audience, our interlocutor, to believe that he and the writer share similar moral qualities (honesty, integrity, goodwill) and aspirations, namely the desire to discover a mutually acceptable solution to the problem. Here too, emphasizing common values helps, but we also want to be sure that we see each other as on the same side, not on opposing sides: we're members of the same group, trying to reach a conclusion or a solution to a problem that is mutually beneficial. That means it's not good versus evil. It's not "I'm good, you're bad, you need to change and be like me." Demonizing people is not going to help; it's just going to trigger a backfire effect. And virtue signaling is not going to help. So we really want to get rid of those distinctions. We're both good, and because we're both good, we both want to get to the truth.

Now, the authors who created this Rogerian model emphasize that it isn't a checklist of things to do in your argument. It's not a structure. They say these are only tasks, not stages of the argument. The Rogerian argument has no conventional structure. The closest they come to a structure is to say that the argument has phases. They seem to be saying that it doesn't have a specific structure because they want it to be responsive to the specific rhetorical situation. When you write, it's not like having a conversation, because you don't have the other person there in front of you to respond to what you're saying.
So you have to try to imagine this person who is your audience, the reader of your writing. If that person were there with you, an interlocutor who could ask questions and respond directly to you, the conversation itself would decide what order to go in. So how can we most closely approximate in writing the kind of exchange we would have in an actual face-to-face conversation? As they say, written argument lacks the flexibility of oral argument, and while the writer doesn't use a conventional, sharply defined structure, there are at least phases to his argument. These phases can be ordered, and they describe four of them.

First, an introduction to the problem and a demonstration that the opponent's position is understood. Before we get into the data and the warrants of our Toulmin argument, we start with an introduction that lays out the problem but also demonstrates that we understand the opposition's position on it.

Second, a statement of the context in which the opponent's position may be valid. This is what you did in your Toulmin argument with the rebuttal. You understand that somebody might agree with your general warrant but think it doesn't apply to this particular situation. So you want to go ahead and start with that in this phase of a Rogerian argument: say that this warrant is usually true before you show that it doesn't actually apply to this one position.

Third, at some point you want to give a statement of your own position. You make your argument just as you did in the Toulmin argument, maybe adding some qualifications there again, because that is the meat of your argument. That is the core. That's the reason you're making this argument in the first place.

Fourth, you follow that up with a statement of how the opponent's position would benefit if he were to adopt elements of your position, showing that the positions complement each other, that each supplies what the other lacks. In other words, your conclusion isn't just good for you; it is good for you and your reader.

So when we're revising a Toulmin argument to make it into a Rogerian argument, we can include the entire Toulmin argument. You don't have to give up any bit of it. What you're doing is sandwiching it between these elements that exist for the sake of communication, for the sake of breaking down barriers of mistrust between people from different conclusion groups. You're protecting the self-image and the group identity of the reader even as you're arguing that they should give up a particular belief. Notice that this list leaves out one element they described previously: the sense that we're both in the same group, not in opposing groups. But they say that this should be present throughout. You don't use condescending language. You don't use language that condemns the other side. You don't even really call them the other side if you can avoid it. And again, these phases don't have to be a step-by-step numbered list.

Now, there are different arrangements, different versions of this Rogerian argument. For example, Maxine Hairston was a rhetoric scholar who adapted it into a sort of five-step plan, and she emphasizes the need to preserve the feeling of similarity throughout. First, give a brief, objective statement of the issue under discussion, not a description that portrays one side as good and the other as bad.
Second, summarize in impartial language what you perceive the opposition's case to be, including the premises they hold that you believe are valid. She keeps emphasizing impartial language as a way to remind the reader that we're in this together, that we're the same discourse community. The summary should demonstrate that you understand their interests and concerns, and it should avoid any hint of hostility. Third, make an objective statement of your own side of the issue, listing your concerns and interests but avoiding loaded language or any hint of moral superiority. Fourth, outline the common ground or mutual concerns you and the other person or group seem to share; if you see irreconcilable interests, specify what they are rather than just ignoring them. Fifth, outline the solution you propose, and point out how both sides can gain from it.

Yet another version is derived from the social psychologist and pioneer of game theory Anatol Rapoport. Incidentally, in Rhetoric: Discovery and Change, the book that introduced the Rogerian argument under that name, Young, Becker, and Pike also quote Anatol Rapoport quite a bit. But this version actually comes from the American philosopher Daniel Dennett in his book Intuition Pumps and Other Tools for Thinking. He recalls having a conversation with Anatol Rapoport, and from that conversation he adopted this strategy for disputing someone. It starts with attempting to re-express your target's position so clearly, vividly, and fairly that your target says, "Thanks, I wish I'd thought of putting it that way." This brings us back to treating the writing as if it were a conversation. In a conversation, you wouldn't be able to straw-man somebody sitting right in front of you, because they're there; they can say, no, you're misrepresenting me, that's not what I believe. It's harder when you're writing. But in this strategy, you actually want to ask: if that person were sitting across the table from me, and I described their position this way, would they agree and say, yes, I wish I'd thought of putting it that way? Then you list any points of agreement, especially if they're not matters of general or widespread agreement; in other words, premises that your opposition holds that you also hold. They may not be relevant in a Toulmin argument, but they're very relevant here. Then mention anything you've learned from your target or from the opposing position. In reading their arguments, you've probably come across new information that you didn't know before, so mention that. And only after you've done all of those things do you get to say what your rebuttal or your criticism of their argument is.

Another, more recent example is the video interview with Maajid Nawaz. Nawaz is a Muslim writer, and he co-authored a book with the atheist author Sam Harris called Islam and the Future of Tolerance. And by the way, this is the same Sam Harris who was a co-author of the study about the way the brain responds to political threats, the study that showed we respond to political threats the way we would respond to a threat to our physical safety, like coming across a bear in the woods.
This sort of collaboration between people with very different worldviews, a Muslim on one side and an atheist on the other, is what Nawaz calls adversarial collaboration: an agreement between opposing parties about how they'll work together to resolve, or gain a better understanding of, their differences. Nawaz recommends we start with the emotional process of collaboration. First, rehumanize your adversary: even though you may disagree with his or her perspective, see them as people, not just as enemies or soldiers for the other side. Second, try to see the other person holistically, as someone with a valid human experience. Even if they're wrong about something, they came to that wrong belief in ways very similar to the ways we all come to wrong beliefs. It doesn't make them bad people, and we don't want to judge everything they are based on one misunderstanding. Third, establish trust. Begin by organizing informal face-to-face interactions; get to know your adversary before the disagreement. Now, this is something that would really only be possible in an oral, face-to-face conversation. When it comes to writing, it means familiarizing yourself with the writing of people who oppose your argument. Get to know the best versions of the opposing argument in writing, so that you're responding to their best arguments rather than their worst, or the easiest, or the simplest.

Then there's still the intellectual process. First, identify common ground, just as in the Rogerian argument: select specific points of agreement, specific premises you agree about. Second, practice intellectual empathy. Acknowledge when the internal logic of an argument makes sense, even though you may disagree with a premise, especially if there's a general warrant that's usually true but doesn't apply to this particular data, or if they have some of the data but not all of it. Acknowledge what they got right. Third, recognize your own moral compass and maintain your courage. In other words, you're still there to disagree. You're not there to say, hey, you believe what you believe and I believe what I believe, so let's just agree to disagree. Agreeing to disagree is actually just giving up. It's saying that I'm not going to try to communicate this to you anymore. That might be easy, but it's actually a bit disrespectful, because you're almost saying that they can't handle the truth.

So if we go back to our diagram: you're coming from the blue side, you've got the blue conclusion, and you're trying to convince people on the red side, who have the red conclusion, to change one belief, one piece of misinformation that needs to be corrected. Both conclusions have arguments behind them. They've got premises behind them, and some of those premises are going to be wrong, but some of them are going to be right. And there are also going to be a lot of other premises that might not have been mentioned by either side, premises that both sides simply agree on and don't even have to bring up. Now, in a Rogerian argument, I can incorporate some of their premises, the premises that are true, while still rejecting certain others. And because I reject even one premise, a link in the chain to the conclusion, I have to reject the conclusion. But I'm going to make a special effort to include premises that are accurate, even if they're not directly, immediately relevant to my conclusion.
So you're still going to reject some premises but keep others, even if some of the premises you mention aren't directly relevant to, say, a Toulmin argument. And notice that all of the premises from your Toulmin argument are still going to be there in your Rogerian argument. If they were necessary to the Toulmin argument, they're necessary in the Rogerian argument. What you're doing is including a little bit more, not because it's logically necessary, but because that way you are maintaining the conversation. You are allowing the other side to contribute something to your argument, and because they get to contribute, they get to save face. They get to hold on to their identity as people who know things, as good people, as people who are participating in a group effort. And so your Toulmin argument really becomes a group effort, not just you showing off how smart you are, or how good you are at creating arguments, or even at thinking critically.

Because thinking critically means being able to look at every aspect of our own arguments: the data, the premises, the warrants that might go unstated. We did all that in the Toulmin argument in order to be self-critical. And what better way to be self-critical than to also incorporate oppositional arguments? Even if those arguments ultimately fail, they at least provide you with new ways of testing your own conclusion. So let your opposition know that. Let them know that they have contributed to the conclusion you've come to, so that it becomes a group conclusion rather than an individual conclusion. And of course that's going to encourage them to let down their defenses, because this isn't a war between two tribes over an ideological football; it's a cooperation. They're not as likely to reject or dislike your Rogerian argument as they would a Toulmin argument that didn't include them. So the same basic argument is going to be either denied or approved, not based on its logic, not based on the data, but based on how open you are to conversation, how threatening the argument is, and how well you're able to demonstrate that you're not a threat and that the truth is not a threat. That's the core of the Rogerian argument.