I originally thought that getting a new island for the kitchen was going to be too expensive, but I have to say, you do have a good counter-argument.

We've all been involved in disagreements before, and we've all seen them go well or poorly. Sometimes people start off with different ideas, discuss them respectfully, and come away from the experience better than they started, either with a better understanding of the other person, or a better understanding of the situation, or hopefully both. Sometimes, not so much. Which raises an interesting question: how should it look when rational people disagree with each other? If our brains weren't absurd engines of confirmation bias, dedicating all their processing power to the task of not changing our minds instead of figuring out what's most likely to be true, what would an argument look like?

We have something of an answer for this. In 1976, mathematician Robert Aumann published a paper titled "Agreeing to Disagree," where he modeled an argument between two perfect Bayesian thinkers, that is, thinkers who update the probability estimates of their beliefs perfectly according to evidence. He proved that regardless of what they initially believe, they must eventually agree with each other after a finite amount of time. That doesn't sound too surprising in and of itself. If you plunk two androids with different beliefs down in front of each other and let them talk it out, truthfully updating each other's information about the world, you'd probably expect them both to walk away with identical beliefs afterwards. But it's how they get to consensus that's really weird.

Android A knows that Android B is a proper Bayesian and is updating her beliefs rationally according to new information. The moment that B says that she believes something different than A does, if she's a fairly knowledgeable and reliable person, that conflict alone should cause A to update his probabilities for various beliefs. The mere fact that there are two rational people who disagree about a particular point should make both of them less certain of their positions from the get-go. Even more weirdly, the model predicts that as the two share information, they should end up switching places multiple times, overshooting each other's position repeatedly. For example, if A leans left and B leans right on some political issue, the theorem says that after she's incorporated A's disagreement into her calculations and learned some of his reasons for thinking that way, she should find him to her right. How often have you seen two people disagree like that? Never? Yeah, me neither.

But hey, the requirements for Aumann's agreement theorem are pretty strict. Both parties have to start from the same prior assumptions and know that the other party is perfectly rational, perfectly truthful, and updating their beliefs perfectly with new evidence. That's a tall order, right? Well, game theorists have tried relaxing the requirements of the theorem in many ways, and it seems to hold even in highly suboptimal scenarios. The math shows that even people who merely aspire to be Bayesians, people who aren't super smart and who don't trust each other too much, should eventually reach consensus with their debate partners. No perfect android brain is required.
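To make the basic intuition concrete, here's a minimal sketch in Python. It's my own illustration, not Aumann's actual common-knowledge construction, and all the names and numbers in it are made up: two Bayesian agents start from the same prior about a coin's bias, each privately observes some flips, and once they honestly pool their evidence, their posteriors come out identical.

```python
# Minimal sketch (illustrative assumptions only, not Aumann's proof):
# two Bayesian agents with a shared prior over a coin's bias each see
# private flips, then honestly share their evidence. Conditioning on the
# pooled evidence leaves them with the same posterior -- they can't
# walk away still disagreeing about the coin.

import numpy as np

grid = np.linspace(0.01, 0.99, 99)      # candidate values for the coin's bias
prior = np.ones_like(grid) / len(grid)  # shared uniform prior

def posterior(prior, heads, tails):
    """Update a discrete prior over the bias given observed flips."""
    likelihood = grid**heads * (1 - grid)**tails
    post = prior * likelihood
    return post / post.sum()

# Each agent sees different private data (made-up counts).
post_a = posterior(prior, heads=7, tails=3)
post_b = posterior(prior, heads=2, tails=8)
print("A alone:", (grid * post_a).sum())   # A's expected bias
print("B alone:", (grid * post_b).sum())   # B's expected bias

# After honestly sharing evidence, both condition on all 20 flips.
pooled = posterior(prior, heads=9, tails=11)
print("Both, after sharing:", (grid * pooled).sum())  # identical for A and B
```

In Aumann's actual result the heavy lifting is done by common knowledge of each other's posteriors rather than by swapping raw evidence, but the punchline is the same: honest Bayesians with a common prior can't end the conversation still disagreeing.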
So, what's the deal? This 2004 paper, "Are Disagreements Honest?", advances a theory as to why humans tend to fall short of the Aumann standard. Stop me if this starts to sound familiar. Look, I'm a reasonable person. I think that everyone should have beliefs that are self-consistent and agree with the evidence, but a lot of people just aren't smart enough to see that their beliefs are mutually inconsistent, or they let wishful thinking override their better judgment. That's why these other people disagree with me.

It turns out that if one of your starting assumptions is that you're just smarter or better informed than anyone with a different opinion, there's no real impetus to update the probabilities of your beliefs upon learning that people disagree with you. We generally acknowledge that that's a terrible attitude for a rational person to hold, and we get understandably upset when we recognize other people acting that way, treating their own opinions as gospel truth and dismissing any contradictory opinion as nonsense. Only an egotistical jerk would believe that they had the right answer to absolutely everything, right? But those Aumann-style arguments just don't seem to happen the way that they should if people were being good Bayesians. The paper's authors take this as compelling evidence that practically everyone is being dishonest in some sense, claiming not to privilege their own opinions just because they happen to hold them, but doing exactly that. You, me, whoever: whenever we argue and don't get anywhere, it's because everyone in that discussion is tacitly assuming that only an idiot would believe anything different than they do.

A cynic would just leave it there: people are egotistical jerks who refuse to change their minds about anything. But I've always been a bit of an optimist about the human capacity to overcome bias and approach rational cognition, even if we never really get there. And in this paper, some cognitive science researchers put forward a seemingly effective method to foster disagreements that are a little less broken. They paired up people who held opposing views on controversial topics in modern politics (things like abortion, gun control, euthanasia, that sort of stuff), then examined the effects of framing their arguments in two different ways: arguing to win and arguing to learn. The general character of arguments to win should be familiar to anyone who's seen a flame war on Facebook. When participants were told that they were trying to outperform their conversational partner, tempers flared, facts gave way to rhetoric, and nobody was convinced of anything. When polled afterward, the participants indicated a stubborn certainty that there was only one right answer to the question: their own. However, when participants were told that they were trying to learn as much as possible from their partner, there was a stark difference in tone. Conversations tended to be more respectful and thoughtful, and after the experiment ended, both parties indicated an increased feeling of subjectivity, a sense that the right answer depended a lot on where you were coming from. They might not have changed their minds, but they converged on some sort of consensus that it was less of a clear-cut issue than they originally thought.

Does that pattern sound familiar? The Aumann agreement theorem may not brook disputes about matters of fact, but it's perfectly fine with differences of opinion. Androids might well disagree about their favorite flavor of ice cream or taste in music without being irrational.
By arguing to learn from their partner, both parties in the experiment might not have updated their beliefs about the topic at hand specifically, but by relegating it to a difference of opinion, they actually reached a rational conclusion together. It's probably not a mistake that approaching arguments looking for new information brings people closer to the ideal of the agreement theorem; for rational people looking to construct the most accurate beliefs they can, that's what every argument should be in the first place. Maybe the next time you're debating someone, you could ask yourself which thing you're doing, and whether it would lead you to approach consensus in the Aumann fashion, as weird as it might be for us humans.

Do you think that it would be possible for humans to reach agreement about all matters of fact? Please leave a comment below and let me know what you think. Thank you very much for watching. Don't forget to follow us on social media, and don't stop thunking.