In our previous example of an unshared warrant, where one person argues that another person should choose the salad to go with a hamburger rather than the fries, the point of disagreement, as we noticed, would not be with the stated reason, which is a statement of fact: that the salad contains more vitamins, minerals, and fiber than the fries. A different conclusion would be arrived at based on a different reason, one that needed a different warrant. So the point of disagreement is not with the facticity, the truth, of the stated factual reason; the point of disagreement is with a value judgment. One person values health more than flavor in that particular decision, and the other person values flavor more than health. These sorts of differences in value claims are very common as warrants, as unstated reasons, and they're very frequently the hidden point of disagreement underneath an argument in which the stated reasons are mostly composed of facts and claims about causation.

We frequently take it for granted that other people value the same things that we value. We don't always try to specify what it is we value in order to see whether other people value the same thing. And frequently, we don't even specify what we value to ourselves. It's such an automatic process that we don't normally have to make it explicit.

About 40 years ago, two psychologists, Richard Nisbett and Timothy Wilson, set out to do some research on how people make decisions when they're buying things. In one of their tests, they pretended to be marketing researchers doing a customer choice test with women's nylon stockings. They would set up a table in a high-traffic marketplace and ask the people who came by to take a look at four different pairs of nylon stockings and say which one they found to be of superior quality. People would examine them, starting with the one on the left, then the next one, then the next, and finally the one on the right. What they didn't tell these potential customers was that all four pairs of nylon stockings were exactly the same. But despite the fact that they were all exactly the same, customers would still choose a favorite, and those favorites tended to be either the second to last or the very last. This happened even when the actual stockings were rearranged. So it had nothing to do with any individual pair of stockings; about 70% of the time, people would choose one of the last two in the lineup.

But they weren't aware of this. If you asked these people whether the order of placement had anything to do with their decision, they would look at you like you were crazy. They assumed they had a reason for choosing whichever one they chose, and they would begin to list reasons: this one's darker, this one's more elastic, this one's firmer or has a better texture, something like that. But again, every one of these pairs of nylon stockings was exactly the same. Same brand, same style, same package. The fact that so many people didn't realize that they didn't know why they chose the one they chose was the accidental but most interesting finding in Nisbett and Wilson's study.
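To get a feel for how far that 70% figure departs from chance, here is a minimal Monte Carlo sketch. The sample size of 50 shoppers per run is an invented number for illustration, not a figure from the study; the point is only that if position truly didn't matter, a last-two preference of 70% or more would almost never appear.

```python
import random

# Null hypothesis: the four stockings are identical and position is
# irrelevant, so the last two positions should draw ~50% of choices.
# Nisbett and Wilson reported roughly 70% for the last two positions.

def one_sample(n_shoppers=50):
    """Simulate one run of shoppers choosing under the null (50%)."""
    last_two = sum(random.random() < 0.5 for _ in range(n_shoppers))
    return last_two / n_shoppers >= 0.70  # did chance alone reach 70%?

runs = 100_000
hits = sum(one_sample() for _ in range(runs))
print(f"Chance of a 70%+ last-two preference: {hits / runs:.4f}")
# Typically ~0.003: well under 1%, so the position effect is real.
```

Under chance alone, the last two positions should attract about half the choices; a 70% rate across a whole sample is vanishingly unlikely, which is why the position effect is treated as a genuine finding.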
The paper they produced after this study was titled Telling More Than We Can Know. Saying that we tell more than we can know is very similar to the philosopher Harry Frankfurt's definition of BS. But in this case, it's very clear that the people didn't know that they didn't know. The subtitle of the paper was Verbal Reports on Mental Processes. Now, a verbal report on a mental process is metacognition: thinking about thinking. And if we've learned one thing in this class, it's that we're not very good at metacognition. Our metacognition is usually very confident but very frequently wrong.

This is why the psychologist Jonathan Haidt compared the conscious brain, our reflective metacognitive awareness, to a lawyer riding on the back of an elephant. The elephant goes where it wants to, and the lawyer has no control over it. All the lawyer can do is make excuses for the elephant after it has done whatever damage it's going to do. He can pretend he's in control, but mostly all he does is serve up ad hoc rationalizations after the fact. He engages in motivated reasoning: he wants to persuade other people to accept a decision that has already been made, but he doesn't actually have control over that decision. Psychologists had suspected this kind of thing for a long time, but it was really with Nisbett and Wilson's research, which came along at the same time that Daniel Kahneman and other behavioral economists were starting to find the same thing, that this became something that could be scientifically tested.

Now, if we don't know where our value judgments come from, and if these intuitive and mysterious value judgments can lead us to radically different conclusions than other people reach, even when we look at the same facts, then it's worth exploring exactly what value judgments are necessary to come to the decisions we come to about things like policy. And remember, I've designed this pyramid not in the order that you should introduce things in your argument or your essay. You don't have to start with a fact and then add definitions. But you do have to have agreement on the things beneath each of these points of stasis before you can build the thing on top. You don't have to have the same value conclusions as your reader when you're making an argument about cause and effect; you can say one thing causes another without saying that thing is good or bad. And you don't need to agree on cause and effect to have the same beliefs about values. Even though we sometimes judge one thing as better than another because it will have a positive outcome, what exactly makes that outcome positive is not dependent on the cause-and-effect relationship. But you do have to agree on both cause-and-effect relationships and value judgments before you can bring someone to agree with you about a claim of policy, a claim about what people should do.
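One way to picture those dependencies is to treat the levels of stasis as prerequisites for one another, with cause and value each resting on fact and definition but not on each other, and policy resting on both. Here is a minimal sketch; the level names follow the lecture, and the agreement data is invented for the salad-versus-fries example.

```python
# Prerequisite structure of the stasis pyramid as just described: cause
# and value each rest on fact and definition, but not on each other;
# policy rests on both.
PREREQS = {
    "fact": [],
    "definition": ["fact"],
    "cause": ["fact", "definition"],
    "value": ["fact", "definition"],
    "policy": ["cause", "value"],
}

def ready_to_argue(claim, agreed):
    """A claim is arguable only once everything it rests on, directly
    or indirectly, is already shared with the audience."""
    return all(p in agreed and ready_to_argue(p, agreed)
               for p in PREREQS[claim])

# Salad-vs-fries: the facts, definitions, and causation are shared,
# but the health-vs-flavor value judgment is not.
agreed = {"fact", "definition", "cause"}
print(ready_to_argue("cause", agreed))   # True
print(ready_to_argue("policy", agreed))  # False: the value warrant is unshared
```

In the hamburger example, the point of stasis is the value level: everything beneath it is shared, so that's where the real argument sits.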
Now, as a point of stasis, these are sometimes called value claims. Sometimes they're called evaluation claims, with the same root word. Sometimes they're referred to as quality claims: claims about one thing having a greater quality than another. And just as Richard Rorty said when he talked about final vocabulary, a lot of these value judgments are built into the terms we use, our final vocabulary terms. There are the simple ones, like saying something is good or something is bad. But then there are other, more subtle value judgments built into some of the words we use, which might say that this thing is not bad, but it's not as good as this other thing.

We have to enunciate our own value judgments, which means we have to discover what they are and put them into words, before we're going to be able to find where we disagree with someone else's value judgments. And when we start to do this, the biggest mistake people tend to make is that they just repeat their value judgments. They'll say it out loud, and if someone else disagrees, they'll just restate the same value judgment in different terms. Exactly what makes a thing good or bad is one of those things about which we have the illusion of explanatory depth. We think we can explain what it is we feel and why what we feel is right, why our feelings about one thing are justified. But just as in other examples of the illusion of explanatory depth, we typically end up just repeating our claims about value. We justify a value judgment with circular reasoning, and sometimes by just begging the question.

So take a look at this example. This is from a writer named Leon Kass, from his book The Hungry Soul. In this book, Kass runs through a list of things he finds to be so improper that they could be defined as immoral. So he's enunciating a very particular list of value judgments, such as that yawning with an uncovered mouth is a bad thing, is immoral. He says it is an embarrassment to human self-command to be caught in the grip of an involuntary bodily movement like sneezing, belching, or hiccuping. And just like yawning, eating in the street is an example of a lack of self-control. He says it beckons enslavement to the belly. Though the walking street eater still moves in the direction of his vision, he shows himself as a being led by his appetites. Lacking utensils for cutting and lifting to the mouth, he will often be seen using his teeth for tearing off chewable portions, just like any animal. Eating on the run does not even allow the human way of enjoying one's food, for it is more like simple fueling. It is hard to savor, or even to know what one is eating, when the main point is to hurriedly fill the belly, now running on empty. This dog-like feeding, if one must engage in it, ought to be kept from public view, where, even if we feel no shame, others are compelled to witness our shameful behavior.

Notice he's using some very heavy final vocabulary terms that he hasn't bothered to identify. And those final vocabulary terms carry value judgments with them that he hasn't stopped to explain or justify. The closest thing to a reason why yawning with an uncovered mouth or eating while walking on the street would be bad things is that they're the kinds of things animals would do; they show enslavement to the belly. And even "enslavement" is a final vocabulary term, because it presumes that this is a bad thing. Well, if you're never enslaved to your belly, you're going to starve to death; you have to give in to your hunger at some point. But he's not qualifying his terms. He's not bothering to justify the terms he chooses to describe the things he's describing. He just tells the story. He creates this image of a person walking on the street and eating at the same time, compares it to a dog, and expects that his reader will share that same value judgment, that feeling of disgust. He goes on to say that eating is out of place in public, and that more uncivilized forms of eating, like licking an ice cream cone, are a catlike activity that has been made acceptable in informal America but that still offends those who know eating in public is offensive.
This is clearly begging the question. He's not proving that eating in public is offensive. He's just saying there are those of us who know, and if you're not one of those people, well then, the hell with you. Eating in the street is for dogs. Modern America's rising tide of informality, he says, has already washed out many longstanding traditions, the reasons for them long since forgotten, that served well to regulate the boundary between public and private. And in many quarters, complete shamelessness is treated as proof of genuine liberation from the allegedly arbitrary constraints of manners. In that last sentence, he seems to be aware that some people don't share his value judgments; he just doesn't seem to care. He seems quite confident that they're wrong and he's right, and that the only explanation they might give is to say that they're genuinely liberated.

As much as we'd like to dismiss these kinds of rants as coming from an uptight old man, this isn't just any author. Leon Kass was the chairman of the President's Council on Bioethics. This was an advisory body put together by George W. Bush during the early part of his administration, with the primary focus of coming up with a justification for banning human cloning and other types of medical technology that that administration didn't approve of. Kass was highly influential in that group, and highly influential in fabricating an argument, or something like an argument, for putting actual governmental restrictions, legal restrictions, on scientific research. But he wasn't very good at actually putting together an argument. His most famous argument against research into cloning comes from a 1997 article called The Wisdom of Repugnance. He says we are repelled by the prospect of cloning human beings not because of the strangeness or novelty of the undertaking, but because we intuit and feel, immediately and without argument... Okay, already we have a problem there. He's using the word "because," but instead of giving a reason, he's just giving voice to a particular emotion. We intuit and feel, immediately and without argument, the violation of things that we rightfully hold dear. Again, this is begging the question: saying that we rightfully hold something dear does not say why we should hold it dear. It's not a reason; it's just restating and begging that claim. Repugnance, here as elsewhere, revolts against the excesses of human willfulness, warning us not to transgress what is unspeakably profound. Well, if it's unspeakably profound, I guess you're off the hook as far as defending or explaining it. Indeed, in this age in which everything is held to be permissible so long as it is freely done, in which our given human nature no longer commands respect... He hasn't proven that human nature no longer commands respect; he just takes that for granted, begging the question again. ...in which our bodies are regarded as mere instruments of our autonomous rational will, repugnance may be the only voice left that speaks up to defend the central core of our humanity. Shallow are the souls that have forgotten how to shudder.

So what starts as circular reasoning is then followed up by more begging of the question. And the closest thing to a reason is that this is disgusting. Cloning is morally wrong because I don't like it. It gives me a bad feeling. And I have a bad feeling because I have a bad feeling, and that's right. And anyone who doesn't have that bad feeling is not only wrong, but shallow.
So Leon Kass could be the poster child for the lawyer on the elephant. He's in no way bothering to find out why his emotions are the way they are. He's simply taking what system one, intuitive cognition, serves up to him. And he's not only justifying it; he's saying that anyone whose system one does not serve up the same feeling, anyone who thinks differently, is wrong, is immoral, is shallow. As if his unexamined assumptions are always right, and if you don't agree, then you're an immoral person. This is a pretty cut-and-dried example of motivated reasoning: starting with a conclusion and merely dismissing the very need to justify it, the very need to argue. And we have a word, given to us by Harry Frankfurt as well as by Gordon Pennycook's research, for the window dressing, the pseudo-intellectual trappings, that he puts around his gut feelings. As for Kass's argument: obviously, if this were an essay submitted in this class, I think you know what the grade would be. This articulation of personal disgust has been described as the "yuck factor" in the cloning debates. Clones are gross, therefore they're evil. But despite the lack of a coherent argument, Kass has still been very influential, at least on people who already agree with him. He's a champion of motivated reasoning. If you already think cloning is evil, Leon Kass has given you the sermon the choir was looking for, so to speak. It sounds impressive to people who already agree with the conclusion. But anyone who looks for an argument there is just not going to find one. The closest thing to an underlying warrant would be: if I find something disgusting, it is therefore immoral, and I need not justify it, need not look for reasons, need not identify premises beyond that. If I feel it, it's true. The end. Anyone who doesn't feel it is evil. And incoherent as it is, unfortunately, this argument has been used to put government restrictions on what could be lifesaving and life-extending medical research. And unfortunately, this wouldn't be the first time that somebody's immediate and uncritical gut reaction has put a stop to lifesaving technology.

In response to this, especially in response to the popularity of Leon Kass's argument in certain circles within the government, Steven Pinker wrote the article that I assigned as a class reading, called The Moral Instinct. In it, he doesn't just address the issue of cloning. He looks into the connection between our gut feelings and what we assume to be rational, intellectual principles about ethics, the moral values that we tend to think separate us from the animals. That was a big theme in what we read from Leon Kass: if you do anything like an animal, then that's bad. Well, in responding to his gut reaction, that's exactly what he's doing. But that's something we all do, and that's part of Pinker's point. Pinker is a cognitive linguist, a cognitive scientist who focuses on how language works, but in this article he summarizes a lot of research by a lot of other psychologists and philosophers. In particular, he calls out Leon Kass's warrant, which is: if I feel that it's bad, it's bad, and anyone who disagrees with me, anyone who doesn't have that feeling, is shallow. Pinker points out that people have shuddered at all kinds of morally irrelevant violations of purity in their cultures.
Touching an untouchable in India, where there's a caste system and somebody of an upper caste is forbidden from touching somebody of a lower caste. An African-American drinking from the same water fountain during the Jim Crow era, before civil rights. Allowing Jewish blood to mix with Aryan blood; in other words, if a Jew and a Christian got married, that was considered immoral for a long time in European and American history. Tolerating sodomy between consenting men. Autopsies, vaccinations, blood transfusions, artificial insemination, organ transplants, in vitro fertilization: all of these were denounced as immoral when they were new. Of course, autopsies are the cutting open of dead bodies; that was a violation of the sacred nature of the body. Vaccination was injecting pathogens into people. Blood transfusions and organ transplants violated the sanctity of one body to stitch together another, and many people felt a sense of moral outrage at the very idea of mixing body tissue between bodies, even if the effect was saving someone's life. And just as Kass wants to use the office of the president to legally ban medical research that he finds yucky, so too all of these other things were once illegal, some for centuries, some for millennia. Policy decisions have been made exclusively from value judgments that were not very well articulated.

So if we're going to be any better than that, we have to be able to figure out where our moral sense comes from, where that feeling of right and wrong comes from. And that's why I've also included the video from the BBC about the trolley problem. The trolley problem goes back several decades. It was originally a thought experiment by the philosopher Philippa Foot and was later studied by the philosopher Judith Jarvis Thomson. And over the last few decades, it's been one of the most repeated psychological thought experiments. It's been taken by over 200,000 people from over 100 countries, and they're all given basically the same two scenarios.

In the first scenario, there's a trolley that's about to kill five people, but you can hit a switch and make it change tracks, where it's still going to kill one person; you're sacrificing one life to save five. And people are asked what they think about that. Would you hit that switch? Not everybody, but a majority of people say yes, I would hit the switch. It's justifiable because I would be saving five people by sacrificing only one. Not to hit the switch might even be considered deliberately letting five people die. But then that same heuristic, that it's better to save five than one, is moved to another version of the story. In this case, you have to push a large man off a bridge onto the tracks, where he will be run over, but his being run over will stop the trolley. When people are given this version of the thought experiment, almost no one would choose this option. Even the people who said it would be justifiable to hit the switch see this differently: if I have to actually make physical contact with the person, if I have to look him in the eye as I push him off the bridge, that would be murder. That wouldn't just be sacrificing one life to save five; that would be somehow different. But when they're asked to explain why it would be different, the answers are always different. If you ask five different people why one scenario is more moral than the other, you'll get five different answers. In fact, if you ask the same person at two different times, you'll usually get two different answers. People can't come up with a reason why the second version is morally wrong while the first version is morally right, but they're very certain that the first version is morally defensible and the second version is not.

This thought experiment is frequently used to distinguish two types of reasoning. On one side is consequentialism, also known as utilitarianism, as that term was defined by the philosopher John Stuart Mill in the 19th century, when he said that actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness. And happiness in that sense is more like our word "wellbeing": not just a nice feeling, but health, growth, prosperity, survival. If you can save five people by sacrificing one, then according to the utilitarian idea, according to consequentialism, that is the morally right thing to do. The opposite of that is frequently called deontology, or deontological reasoning, the approach associated with the philosopher Immanuel Kant, writing a century before Mill. Deontological theories hold that the rightness or wrongness of an action is not entirely dependent on its consequences, and instead typically focus on notions of duties, rights, and obligations.
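The contrast can be put in miniature. Here is a hedged sketch, not a model of real moral cognition: one evaluator counts outcomes only, while the other adds an act-based constraint. The uses_person_as_means flag is my illustrative stand-in for whatever it is that makes the bridge case feel different.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    deaths: int
    uses_person_as_means: bool  # e.g., pushing the man off the bridge

def consequentialist_choice(options):
    """Judge purely by outcome: the fewest deaths wins."""
    return min(options, key=lambda o: o.deaths)

def deontological_choice(options):
    """Count outcomes too, but rule out any option that treats a
    person as a mere instrument, whatever the body count."""
    permitted = [o for o in options if not o.uses_person_as_means]
    return min(permitted, key=lambda o: o.deaths)

switch_case = [Option("do nothing", 5, False), Option("pull the switch", 1, False)]
bridge_case = [Option("do nothing", 5, False), Option("push the man", 1, True)]

print(consequentialist_choice(switch_case).name)  # pull the switch
print(deontological_choice(switch_case).name)     # pull the switch (agreement)
print(consequentialist_choice(bridge_case).name)  # push the man
print(deontological_choice(bridge_case).name)     # do nothing (the split)
```

The two evaluators agree on the switch case and split on the bridge case, which is exactly the pattern the surveys report; the single hard-coded constraint is doing all the work.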
And again, when it comes to what is right and wrong, what our duty is, those notions are left very undefined. I can't tell you why it's right; I just know that it is. Most of our intuitive and automatic judgments tend to be characteristically deontological: I don't know why, I just know that it's right. Characteristically consequentialist or utilitarian judgments, by contrast, are often the result of slow, deliberative cognitive processes. In other words, your system one, your elephant, is a deontologist. Your system two has the choice either to rationalize and say yes, that deontological feeling of rightness is right, or, in some cases, if it has the luxury to slow down and deliberate, it may decide that, well, actually, it's better to save five lives and sacrifice only one. The thing is, when you do that, people start to distrust you. That was the point of this study by Everett, Pizarro, and Crockett. I know that's a lot of text, but this is the study described in the article by Scotty Hendricks about the trolley problem that I assigned as part of the reading list, and it also contains these definitions of consequentialism and deontology.

But this inability to really understand system one, to find a reason that would justify the deontological perspective, this inability to go beyond saying it's right because it's right or it's wrong because it's wrong, which is a circular argument, this inability to explain our gut feelings, is what Jonathan Haidt called moral dumbfounding. And yes, this is the same Jonathan Haidt who gave us the lawyer riding on the back of the elephant. Haidt has studied moral dumbfounding for a long time. He would go out on campus at the University of Virginia and ask people questions like: would it be okay if a woman cleaned her bathroom with an American flag? Would it be okay if a family ate their dog after it got run over by a car? Is that okay?
It's not like they killed the dog; the dog is already dead, and they decide to eat it. Is that okay? In those kinds of cases, people will say no, that's immoral, there's a reason it's wrong. And he would then say, okay, what is that reason? And they can't come up with a reason. They just know it's wrong. This is the moral equivalent of the illusion of explanatory depth: the assumption, the feeling of certainty, the feeling of closure, but the inability to explain it, to justify it. And a lot of the time, people would be aware that there was a problem here, that it was awkward that they couldn't come up with a reason. He says people were often morally dumbfounded; that is, they would stutter, laugh, and express surprise at their inability to find supporting reasons, yet they would not change their initial judgments of condemnation. Dumbfounding seems to occur when a strong intuition is left unsupported by articulable reasons. We can't put it into words. As Leon Kass says, we rightfully intuit and feel, so stop asking me questions. The clearest evidence of dumbfounding is that participants will often directly state that they know or believe something but cannot find reasons to support their belief. People who are dumbfounded will tell you so. They will say things like, I know it's wrong, but I just can't come up with a reason why. Participants who did so also tended to report being more confused and relying on gut feeling more than reason. They also hit more dead ends and made more unsupported declarations. And importantly, Haidt points out, the existence of moral dumbfounding, the fact that this kind of thing happens so often, calls into question models in which moral judgment is produced by moral reasoning. We tend to think of ourselves as starting with a question, like: is this thing right or wrong? Well, let me see. Let me look at all the reasons, evaluate them, and decide. But that's not what we do. We have a gut reaction. It's already there by the time we're consciously aware of the scenario we're thinking about. We just don't know how it got there.

So far we've been comparing consequentialism to deontology in terms of who you would kill. If we flip that around and ask who you would save, we see that there are real problems with deontological thinking. If we're not capable of deliberating about something, or we don't like to deliberate about it, our deontological moral system frequently depends on being face-to-face. Just as with the trolley problem, if you're right next to the person you would be harming, something tells you: no, this is something I can't do. This is a moral wrong. But the flip side of that is a problem, because if you need to help somebody who is not close by, it's easy to rationalize not doing anything. So in one experiment, people are given $5 and then told that they have the opportunity to give some of that money back, to donate it to a charity. One group is given statistical information: every dollar you give helps this many people, or if we can raise this much money, it will help this many people. The people who were given this statistical information tended to donate, on average, about $1.14.
But another group, instead of being given statistical information, is shown a picture of one child. They're given the child's name, they're told about the child's village, and they're told: your money will go directly to this child and will buy this child these resources, say school supplies or food. The people who are shown an identifiable victim tend to donate about twice as much as the people who were given statistical information: on average $2.38, close to half of the $5 they'd been given. So as long as they're confronted with that face, the deontological moral system kicks in and they're willing to donate more. But then there's a third group that's given both the statistical information and the image and anecdote about the child. In that group, the donation drops by almost a dollar. These are the people who are shown the image but then also shown the statistics. It seems that something about being given more information switches off the deontological morality system and puts us back into number-crunching mode, where we don't feel like we're really dealing with people. And when we're in that mode, most people just don't feel the moral imperative that they felt when they were confronted with helping one person.

This inability to engage in utilitarian or consequentialist reasoning has a huge effect. People tend to give a lot of money to high-profile tragedies that affect comparatively few people. And I know that the nearly 3,000 people who died on 9/11, and the more than 1,000 people who died during Hurricane Katrina, are still a great tragedy. But the amounts of money were enormous; the figures on the left of this chart are in millions, and something like $3 billion was given to the victims. Then look at other events and causes, like the Asian tsunami back in 2004, or tuberculosis, or AIDS, or malaria. Malaria affects about 300 million people, and yet it gets much less than a billion dollars; it looks like close to $500 million. In these situations, we should be just as motivated, if not more motivated, to help, since many, many more people are suffering. But we don't. We tend to help the people who are more identifiable. If that deontological reasoning kicks in, people are much more likely to donate; a utilitarian or consequentialist explanation alone is usually not enough to spur people to do the right thing. Even people who realize that they should be just as motivated to help people dying of AIDS or malaria or tuberculosis as people dying from a hurricane or a terrorist attack just don't end up giving as much.
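A little division makes the asymmetry in those figures concrete. The numbers below are the approximate ones just cited from the slides, so treat the output as order-of-magnitude only:

```python
# Approximate totals from the lecture slides: dollars donated vs.
# people affected (illustrative, order-of-magnitude figures only).
causes = {
    "9/11 attacks": {"donated": 3_000_000_000, "affected": 3_000},
    "malaria":      {"donated":   500_000_000, "affected": 300_000_000},
}

for name, c in causes.items():
    per_person = c["donated"] / c["affected"]
    print(f"{name}: ${per_person:,.2f} donated per person affected")

# 9/11 attacks: $1,000,000.00 donated per person affected
# malaria: $1.67 donated per person affected
```

Roughly a million dollars per identifiable victim against less than two dollars per person affected by malaria: a several-hundred-thousand-fold gap that no consequentialist tally would endorse.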
And this distinction between deontological and utilitarian thinking is also why Steven Pinker introduces his article by asking who we think of as more moral: Mother Teresa, Bill Gates, or Norman Borlaug? The nearly universal response is: Norman who? But we think, okay, I at least know who Mother Teresa is. I know she helped people. And I know who Bill Gates is; I know he's wealthy. So I think Mother Teresa is probably much more moral, a much better person, than Bill Gates. And that might seem justified because, when she was alive, we frequently saw her surrounded by people in the most poverty-stricken areas of Calcutta, in India. She would be there with the people she was helping; there would be dozens, sometimes hundreds of them.

The thing is, Bill Gates and the Gates Foundation have helped save millions of lives in Africa and elsewhere, people who would have died from waterborne pathogens, malaria, things like that. He is one of those people donating a lot of money to causes like malaria that don't trigger a deontological moral impulse. And Norman Borlaug saved an estimated billion people. He did this not by working directly with people as they were suffering, but by selectively breeding crops like wheat that could grow in drier, harsher environments, so that more wheat could be grown and more people could be fed. He fed, and by the usual estimate saved, more than a billion people. And yet no one knows who he is. We don't hold him in as high esteem as someone like Mother Teresa.

So this distinction between utilitarian or consequentialist thinking on one hand and deontological thinking on the other helps us get a grip on the fact that the things we intuit and feel are not always shared, and that just because we feel something doesn't mean it's necessarily the best thing to do, the thing that will have the best outcome. We might feel a real moral imperative to help one person and ignore a thousand. Helping thousands of people just doesn't trigger the same sense of right and wrong that helping one person does. But beyond that, there are many more factors that influence what we think is right and wrong, what we feel is valuable or less valuable, moral or immoral.

As he was studying moral dumbfounding, Jonathan Haidt realized that if the people he was questioning couldn't identify what causes them to think one thing is right and another wrong, then he would have to try to identify those characteristics himself. He and other researchers, over a couple of decades of study, have looked to see whether people from different political parties have different beliefs about what's right and what's wrong. And he goes beyond individual hot-button issues; he doesn't go around asking people whether they think abortion is good or bad, or gun control is good or bad. Instead he asks questions that try to get at the foundations of people's intuitive, deontological gut reactions about morality. In his TED Talk from 2008, he identifies five moral foundations. The first is do no harm, or help people if you can: we want to avoid harm, protect people from harm, and we want to help people, care for people, to the extent we can. The second moral foundation is fairness, or reciprocity: we try to be fair to other people, and when we do something for other people, we expect them to reciprocate, to return the favor. The third moral foundation is loyalty to the in-group. This might be nationalism, patriotism, something like that, or it might be something local, like being a fan of a particular sports team or being true to your school. The fourth moral foundation is respect for authority, or obedience to authority. And the last is a belief in purity or sanctity: some things should not be mixed with other things. As he points out, this can apply to a lot of things. It can apply to sexual morality; it's the foundation of racism; but it can also apply to the kinds of food that you will and will not eat.
And when he and his colleagues designed questionnaires to try to find out what people thought was important and what wasn't, they found that people who identify as liberal and people who identify as conservative both thought that avoiding harming other people, and trying to care for other people, was a moral virtue, something that morality could be founded on. And being fair, and reciprocating, paying back people who do things for you, also counted as a moral foundation. On this, both left and right agree. But on the other three, group loyalty, obedience to authority, and purity, liberals tended not to care that much, whereas conservatives saw them as equally important, just as important as not harming other people or being fair to them. And he points out that while liberals don't really care about purity when it comes to things like sex or race or ethnicity, they are more likely than conservatives to be very focused on the purity of their food, being very careful about what they put in their bodies. This isn't to say that people on the left don't care at all about authority or loyalty or purity; it's a matter of how strongly each foundation is endorsed. On his graph, with liberals on the left, the importance of the harm and fairness pillars is very high, around four out of five, whereas in-group loyalty, authority, and purity come in at only around two. So they're still important; they're just half as important as avoiding harm and being fair. With conservatives, on the right of the graph, all five are around three out of five. So all five of these criteria are equally important on the right.

Now, that's distinguishing between Democrats and Republicans in the United States. What if we went to another country? Haidt and his colleagues took this around the world, and they found the same pattern even in places with a diametrically opposed political philosophy, like the formerly communist countries: Russia and its former Eastern European holdings, the old Soviet bloc. Even in that part of Eastern Europe, there were liberals who rated non-harming and fairness as very important, three and a half or four out of five. And there were also conservatives. In the case of Russia, these were the old hard-line communists. We think of communism as a left-wing political ideology, but the people who adhered to it, who conserved that ideology, were much more likely to rate authority, loyalty, and purity as equally relevant, equally important, alongside fairness and harm. And across the rest of the world, the same thing: Western Europe, Latin America, South Asia, Australia, the Middle East, East Asia. All of them have people who are morally liberal, who rate doing no harm and being fair as the most important foundations, and people who are conservative, who think doing no harm and being fair are no more or less important than obedience to authority, loyalty to your group, and some ideal of purity.
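As a recap of that chart, the pattern can be held in a small data structure. The scores below are my rough approximations of the one-to-five endorsement levels just described, eyeballed for illustration rather than taken from Haidt's published data:

```python
# Approximate endorsement of each moral foundation on a 1-5 scale,
# as described above (illustrative values, not Haidt's actual data).
endorsement = {
    "liberal": {
        "harm/care": 4.0, "fairness": 4.0,
        "in-group loyalty": 2.0, "authority": 2.0, "purity": 2.0,
    },
    "conservative": {
        "harm/care": 3.0, "fairness": 3.0,
        "in-group loyalty": 3.0, "authority": 3.0, "purity": 3.0,
    },
}

def spread(scores):
    """Gap between a group's most and least endorsed foundations."""
    return max(scores.values()) - min(scores.values())

for group, scores in endorsement.items():
    print(f"{group}: spread {spread(scores):.1f}")
# liberal: spread 2.0       (a two-foundation profile)
# conservative: spread 0.0  (all five weighted about equally)
```

The flat conservative profile and the two-tier liberal profile are the whole finding in miniature.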
Now, typically, when we analyze things this way, when we put things in these terms, people tend to get defensive. Those on the left say, well, of course the other three don't matter as much, whereas somebody on the right might say, well, think about this scenario in which obedience to authority is very important, or group loyalty is very important. But instead of thinking about how you would respond, let's get outside of our own culture.

Steven Pinker gives us another informative example with the case of Gillian Gibbons. This was a British woman who was teaching a class in Khartoum, in Sudan. She was a volunteer teacher, there to help the local community in a very impoverished area, and she was teaching a class of kindergartners, five-year-olds. The class got a new teddy bear that was theirs collectively, and she said, let's pick a name for the teddy bear. The class wanted to name it after the most popular kid in the class, whose first name was Muhammad. It's a very common name in Muslim societies, but it is also the name of the prophet Muhammad. So they named the teddy bear Muhammad, but as soon as the kids went home and told their parents about this, people showed up at the school demanding that Gillian Gibbons be executed: capital punishment for blasphemy, in other words, for insulting the prophet by naming the teddy bear Muhammad. They saw it as an attack on the authority of the prophet Muhammad, and on the Quran and Islam in general, and this violation of authority was a capital offense. Gillian Gibbons was arrested, though the authorities protected her from the lynch mobs. The people calling for her death were displaying the conservative array of values, placing obedience to authority, in this case, even above the do-no-harm moral foundation. But if Jonathan Haidt's psychological division between liberals and conservatives is an international thing, his model predicts that there are Muslims who are psychologically liberal and who would be opposed to putting obedience to authority above the moral foundation of doing no harm. And around the world there were indeed Muslim protests saying: not in our name, free Gillian Gibbons. Whatever these individual Muslims thought about what she did, it didn't outweigh the moral imperative not to harm someone.

This example, because it's an issue outside our particular ideological system, forces us over to the left side of Haidt's moral foundations graph, the two-foundation pattern of moral judgment, whether you're a liberal American or a conservative American. If you're in this class, odds are you're not a conservative Muslim. So even if you revere certain authorities and believe that respect for those authorities is as important as non-harming, the Quran and the prophet Muhammad are probably not among the authorities you revere. This story therefore probably excites the same deontological moral outrage in both liberal and conservative Americans, because it's a violation of the basic belief in freedom from harm: a woman's life has been threatened, and for us there's no counterbalancing moral imperative about obedience to Muslim law or about representations of Muhammad. So if we're going to have an argument with the people who were saying that Gillian Gibbons should be put to death, we're going to be arguing with people whose center of value, whose moral foundation, we do not share. How would you do that? How would you argue with someone who believes she should be put to death for blasphemy, a judgment that, for these protesters, falls under the respect-for-authority moral foundation? In this case, you can't just say: well, I don't recognize the authority of Muslim law or the prophet Muhammad, therefore your argument is invalid. If you do that, the point of stasis will continue to rest on an unshared value premise.
And the person with whom you're arguing will continue to argue from that premise, using it as a warrant to come to the conclusion that Gibbons should be put to death. That means you have to construct the argument so that it can function without rejecting the moral foundation of respect for a given authority that your opposition, your reader or audience, might hold. You can do that by offsetting the weight of that moral foundation and emphasizing the weight of other moral foundations. And doing this requires examining the facts and the definitions that this moral foundation rests on. So we can argue that Gibbons's action was not disrespectful if we can agree that respect is defined by intentions. We reevaluate the definition of respect, or of obedience to authority, that we're using: we're saying it's based not just on actions but also on the intentions behind the actions. That might not be agreed to; we might have to argue for it first. But if we can shift the stasis of definition to the premise that respect is defined by intentions, then we can say that Gibbons didn't intentionally disrespect the prophet, so she did not violate the moral foundation of authority. You could even focus on definition by specifying the meaning of the name Muhammad in this case, because remember, the teddy bear was named after a child in the class rather than after the prophet directly. And then there are the facts that the protesters don't seem to be clear about, such as that it was the children in the class who chose the name Muhammad, not Gibbons herself.

These aren't just any facts and any definitions that come to mind. These are the facts and definitions that serve as warrants for the value judgment that Gibbons is guilty of disrespect for authority. Without those warrants, we might persuade someone to come to a different value judgment without telling them to give up their moral foundation. You don't have to reject a moral foundation in order to alter its application to a particular case. And that's a good thing, because very few people are ever going to be open to altering their value judgments, their moral foundations, especially within an argument. But they might reassess which value judgments are appropriate in a particular case. We get them to do that by how we select our facts, how we interpret those facts with particular concepts, and how we define those concepts; sometimes that means subdividing those concepts. When we make an argument like this, we don't have to shape the argument to depend on our own moral foundations and exclude all others. We don't have to do what Leon Kass, or the people who wanted to execute Gillian Gibbons, did: say that our values are right because they're right because they're right, and that anyone who doesn't share them is an infidel or a shallow soul. In order to reach any kind of consensus with our audience, we have to identify moral foundations, ours as well as theirs, any that could form warrants leading to alternative conclusions. These ways of categorizing the foundations of moral or value judgments help us identify the types of value judgments that might be underlying conflicting conclusions. This doesn't mean that we need to share all the value judgments our audience might hold. We just need to know what other types of judgments are out there, so that we can figure out how to reach a conclusion that's better for everyone.
While everyone may acknowledge that it's wrong to hurt someone else, we recognize that some situations force us to choose between harming one person and harming another. We may have to choose to harm one person in order to stop them from harming someone else. Some people will value moral foundations like authority, purity, and loyalty as much as or more than foundations like non-harming and fairness in certain situations, and it probably won't do any good to try to convince them to reject their own values. Some value judgments will depend on possible consequences, but some won't. If you make a consequentialist argument for fairness or protection from harm, then you need to prove that a particular action will actually have that consequence; in that case, the stasis of value leans on the stasis of causation. But if you're making a deontological argument, or you're dealing with a deontological premise that your reader or audience might hold, you'll need to focus on definition and fact. If you're trying to persuade someone to withhold a deontological judgment, you can examine the fact and definition premises that underlie the application of that moral foundation in order to argue that it doesn't apply in this particular case. Value judgments in the abstract are very difficult, usually impossible, to change. But when we focus our attention on particular situations, using particular definitions, we see that there is usually the possibility of leveraging one value foundation against another. Use the values that you have in common to counterbalance the values that you don't share. Only then can you work toward a decision about what should be done in a given situation. And that's when we finally move on to the stasis of policy.