I have this idea that all of Isaac Asimov's books contain the same lesson: that knowledge is a virtue for its own sake. I call it my moral foundation theory. Morality is a touchy subject. If you ask a group of people about what sorts of things are right or wrong, you won't just get a number of interesting and varied answers. You might start an argument, possibly an angry one. We care deeply about the morality of our actions, and those values fuel a lot of what we do: who we vote for, what sorts of things we buy, what media we consume, even what time we get up on weekends. People don't usually reason through these decisions using rigorously defined systems of ethics to evaluate their moral weight; most of the time, we're shooting from the hip. We see something and have an immediate emotional reaction to its seeming goodness or badness. But where do those impulses ultimately come from? In more cerebral scenarios, people frequently draw on different ideologies to determine right and wrong. An Orthodox Jew and a Wiccan would probably have different conceptual frameworks in place when deciding the morality of eating bacon. We expect this sort of discrepancy in plenty of cases, but there are also a number of instinctive moral intuitions that seem to span all cultures and ideologies. Regardless of what time period or continent you find a human society in, you'll also find some pressure in that society towards fairness, and moral outrage at favoritism or discrimination. The categories of what sorts of people should be treated as equals obviously vary quite a bit, but the sentiment is always the same: fairness feels right, unfairness feels wrong. You'll find similar attitudes towards unjustified harm, respect for authority, and numerous other things. The seeming universality of these moral judgments suggests that there's something more going on than some sort of cultural convergence. 
Like all cultures across the entirety of human history just happen to agree that needless suffering is bad? It implies some sort of basis, a principle or set of principles that drives individuals and societies with wildly different histories and traditions to the same instinctive moral reactions. If you're crafting a theory about the origin of these ubiquitous and homogeneous feelings about morality, there's probably a necessary element of just-so story to any such explanation, as with all evolutionary psychology. We don't really have the luxury of jumping in a time machine and grabbing some early hominids as a control group. All we can do is come up with a story that sounds plausible, make sure it agrees with existing evidence, and hope that it agrees with what we find moving forward. Moral foundations theory, or MFT, is just such a story, formulated in 2004 by social psychology researchers Jonathan Haidt and Craig Joseph. Building on previous research, they suggested that all of our instinctive moral responses could be classified under a handful of categories, or moral foundations, each of which could be argued to have arisen due to some practical utility for maintaining a functional society. For example, we probably share revulsion at the idea of some privileged little scumbag getting away with everything just because their dad happens to be friends with the local police chief. According to Haidt and Joseph, this reaction of disgust can be chalked up to the intuitive moral foundation of fairness/cheating, a knee-jerk mechanism of revulsion that triggers when we see people who think the rules don't apply to them. As you might imagine, valuing equal treatment and justice could very well give social animals an edge in situations where cooperation is helpful but risky. The idea is that humans have this visceral moralistic reaction because it was evolutionarily advantageous. 
Our genetic ancestors who instinctively pummeled those who didn't play fair did better than the ones who were like, "Huh, that guy took two bananas. Oh well." Haidt and others have revised and expanded moral foundations theory since the first paper on the subject, proposing new foundations, like liberty, and suggesting a number of potential applications and implications of the theory in many different fields, from abnormal psychology to environmental ethics. Haidt's bestselling book, The Righteous Mind, famously makes the case that the traditional divide we see in politics between the right and the left can be chalked up to a set of unconscious moral frameworks which apply different moral weight to different foundations: leftists care more about harm and justice, right-wing folks care more about purity and authority. Although the theory has been massively influential in academia and popular culture, a fair number of criticisms have been leveled at it, and they're probably cause for a generous dollop of skepticism. For example, some have questioned MFT's implicit assumption of modularity, the idea that we can divide mental phenomena as complex as intuitive moral responses into neat little mechanisms, each activated by just one type of scenario and explained by a single evolutionary need. Neuroscience data gathered after the publication of the original paper seemed to contradict this assumption, revealing a tangled web of interdependencies that seemingly can't be boiled down to five or six distinct foundations, at least not without a lot of hand-waving and squinting. Speaking of which, another of the main critiques of moral foundations theory is its lack of rigorous theoretical justification. The supposedly essential foundations that dictate all human feelings of right and wrong, and, as some have argued, all social and political discourse, are kind of pulled out of thin air. 
Haidt and Joseph have even published responses to arguments that MFT is distinctly lacking in the T department with a sort of "yeah, duh," freely acknowledging that the categories they invented are almost totally arbitrary, based on nothing but a handful of previously published papers and the authors' whims. To their credit, they have invited others to revise the original foundations with new ideas and to aid the MFT research effort by proposing and testing new categories, but there's an air of unfalsifiability about the whole endeavor. Come up with some categories off the top of your head; if the categories don't fully characterize everyone's intuitions about morality, then we just have the wrong categories and need to come up with new ones. Maybe that's par for the course for evolutionary psychology, but it does make grand pronouncements about how much the theory explains seem a little overblown. And perhaps most damningly, although the five-factor model proposed by MFT appears to offer a useful taxonomy of moral intuitions in many populations, it doesn't seem to work everywhere. A 2018 study found that the standard questionnaire used in MFT research failed to meet the cutoff for significance when people from very different cultural backgrounds were tested. If these categories really are evolutionarily coded the way that Haidt and Joseph suggest, there shouldn't be any issues replicating their findings in any human population. But if you get far enough away from McDonald's-eating societies, MFT doesn't predict anything consistently anymore. Still, the idea that the moral similarities we see across cultures are due to evolutionary instincts designed to encourage the survival of the species is a compelling and intuitive one, and many people seem keen to use it as justification for their own ideas about society and ethics. 
In the wake of MFT, several other evolutionary models have been proposed to explain these seemingly universal moral intuitions. Morality as cooperation, or MAC theory, suggests that all moral impulses can be explained as facilitating cooperation in non-zero-sum game-theoretic situations, that all of those instincts about right and wrong only exist to make us very good at playing the iterated prisoner's dilemma. Relationship regulation theory posits that moral judgments are always made in the context of some social affiliation, designed to grant advantage to one's in-group and facilitate beneficial social interactions. The theory of dyadic morality, or TDM, boils the entirety of moral judgment down to the perception of harm, the dyad in question being some perpetrator and some victim. According to TDM, you don't need five or six foundations to explain moral intuitions; you just have to figure out how people perceive that dyad: who is harming whom? Each of these models makes interesting predictions about human psychology and cites some sort of experimental evidence in support of its claims, but they all have something else in common too. In each of their founding papers, they all cite moral foundations theory and its numerous problems to argue that their model is better. Considering all of the problems, ethically that might be okay, but the dog-piling? It just doesn't feel right. What do you think of the idea of evolutionarily determined moral intuitions? Please leave a comment below and let me know what you think. Thank you very much for watching. Don't forget to subscribe, like, share, and don't stop thunking.
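A quick footnote on the iterated prisoner's dilemma mentioned above: here's a minimal sketch of why reciprocal cooperation can pay off. The payoff numbers (3 for mutual cooperation, 1 for mutual defection, 5 and 0 for exploiting and being exploited) and the three strategies are my own illustrative assumptions, not anything drawn from the MAC theory papers; as in Axelrod's famous computer tournaments, each strategy also plays a copy of itself.

```python
# Toy iterated prisoner's dilemma round-robin.
# Payoffs and strategies are illustrative assumptions, not from any MFT/MAC paper.

PAYOFF = {  # (my move, their move) -> my points; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def always_defect(opponent_history):
    return 'D'

def always_cooperate(opponent_history):
    return 'C'

def tit_for_tat(opponent_history):
    # Cooperate first, then copy whatever the opponent did last round.
    return opponent_history[-1] if opponent_history else 'C'

def play(strat_a, strat_b, rounds=100):
    """Run one iterated game and return (score_a, score_b)."""
    a_moves, b_moves = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(b_moves)  # each side only sees the opponent's past moves
        move_b = strat_b(a_moves)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        a_moves.append(move_a)
        b_moves.append(move_b)
    return score_a, score_b

strategies = {
    'tit_for_tat': tit_for_tat,
    'always_defect': always_defect,
    'always_cooperate': always_cooperate,
}

totals = {name: 0 for name in strategies}
for name_a, a in strategies.items():
    for name_b, b in strategies.items():
        if name_a <= name_b:  # each pairing once, self-matches included
            sa, sb = play(a, b)
            totals[name_a] += sa
            totals[name_b] += sb

print(totals)  # tit_for_tat accumulates the highest total
```

The retaliatory-but-nice strategy only wins because other cooperators are around to reciprocate with, which is roughly the dynamic MAC theory leans on when it claims moral instincts evolved to make us good at games like this.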