Have you ever noticed how Godzilla always seems to target things like power lines and dams? He's a real utility monster. We're going to be talking a little about ethics, and I'm sure that you know a thing or two about the subject, but let's take it from the very top. We see people acting in certain ways, and we feel that those actions are good, or bad, or neither. Sometimes we disagree with each other about those evaluations, or we feel conflicted about a scenario that pits our moral instincts against each other. Still, it feels important that we be able to tell good from evil, and those inconsistencies lead us to wonder: what is it, exactly, that makes something right or wrong? How can we explain the rightness or wrongness of an action, especially to someone who disagrees? That's the role of ethics. Philosophers attempt to develop some rigorous foundation for our feelings about morality, to try to figure out the rules that govern right and wrong. There have been a number of promising theories advanced over the past few millennia about what those rules are, or maybe even what they ought to be.

One example, utilitarianism, suggests that what our moral intuitions are pointing at is some sort of maximum of overall well-being for everyone, that all of morality can be boiled down to increasing welfare and decreasing suffering. Many people find utilitarianism compelling as a model of morality, because it agrees with our intuitions in many situations and provides unambiguous criteria to resolve sticky disputes and conflicts. Torturing people needlessly results in lower collective well-being, so it's evil. Check. Feeding the hungry results in higher collective well-being, so it's good. Check. Pulling a switch so a trolley kills only one person instead of five results in some negative consequences, but on balance it results in higher collective well-being, so it's ultimately good. Sure, that sounds totally plausible.
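That trolley verdict is really just bookkeeping over totals. Here's a minimal sketch of the utilitarian calculus in Python, with entirely hypothetical well-being numbers (utilitarianism says to maximize total well-being, but it doesn't tell us how to measure it):

```python
# A toy utilitarian calculus. All the numbers here are made up for
# illustration; only the comparison between totals matters.

def total_utility(outcome):
    """Sum the well-being of everyone affected by an outcome."""
    return sum(outcome.values())

# Trolley problem: model each death as a large loss of well-being.
do_nothing  = {"five on the track": 5 * -100, "one on the side track": 0}
pull_switch = {"five on the track": 0,        "one on the side track": -100}

# The calculus endorses pulling the switch: -100 beats -500.
best = max([do_nothing, pull_switch], key=total_utility)
assert best is pull_switch
```

On this picture, every dilemma reduces to a single comparison of totals, which is exactly what makes the theory feel so decisive, and exactly what the thought experiments that follow put under pressure.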
Some have taken this correlation with our moral intuitions as proof that utilitarian ethics is objectively correct, that if someone has moral inclinations that point in other directions, they're simply wrong, or mistaken. Although this sort of all-in commitment to the utilitarian framework has its advantages, there are several scenarios where utilitarian ethics, taken at face value, seem to mandate behavior that doesn't quite align with our moral intuitions, or worse yet, behavior that seems outright evil. As a theory focused entirely on maximizing a single variable, utilitarianism makes no allowance for other things, like rights, justice, equality, or virtue, unless they serve some instrumental purpose for increasing utility. That can make it very uncomfortable in certain thought experiments. Let's take a look at a few.

The utility monster. This is Felix. Felix is just like you, but he has this particular psychological quirk. He enjoys things a lot more than most people. If you would enjoy some ice cream, Felix would groan in ecstatic delight at each lick. If you would love having a puppy to play with, Felix would experience rapturous transcendence, weeping at how happy the puppy made him. Anything you could possibly imagine appreciating, Felix likes it better. Now, that's all well and good, but there's a finite amount of resources on this planet, only so much ice cream and so many puppies to go around. A strict utilitarian would have to weigh the ultimate enjoyment produced by each of these resources as they're consumed, and come to a somewhat irritating conclusion. Whatever Felix wants, he gets, even if that means that you don't get any. If that ice cream you were going to enjoy can be safely stored for Felix to enjoy later, you shouldn't get a single scoop. If you had some time to play with your puppy, but Felix is free right now, you'll just have to wait until he's done. In fact, what are you doing right now?
Because if it's bringing you less pleasure than Felix eating ice cream, morally speaking, you should probably be slaving away in the ice cream factory, along with everyone else. This is an example of Robert Nozick's famous utility monster argument, highlighting how utilitarianism violates one of our key intuitions about morality: a sense of fairness or equality. It seems to favor individuals who are capable of greater enjoyment, even at the expense of the welfare of others. If a serial killer is having a good enough time, utilitarianism would seem to suggest that we line up and let them do their thing.

The experience machine. Another of Nozick's famous thought experiments should be familiar to anyone who's seen The Matrix. If we developed sufficiently advanced virtual reality, something that could simulate any subjective experience, winning a marathon, discovering a cure for cancer, whatever, would hooking everyone up to the machine be morally justified? Would it be mandatory? If the only metric for morality is maximizing some subjective experience, like pleasure or happiness, it seems that we should be doing everything in our power to get people into the machine. After all, if the value of climbing Everest can be fully reduced to the subjective experience of having climbed Everest, there's no reason to have anyone out there risking their lives to climb it. They can feel the same way, safely and more reliably, in the machine. So even if nobody ever actually climbed Everest, even if humanity had done nothing of substance besides inventing the experience machine, it wouldn't be any worse of a world. In fact, it would be a utilitarian utopia, optimized to generate as much positive subjective experience as we could handle, unless, of course, you think that a world of real achievements might be more desirable. That would seem to indicate that there's something besides subjective experience that matters.

Organ harvesting.
Organ donation is a great way to do a lot of good in the world after you're dead. Your liver, your heart, everything that keeps working after your untimely demise can give another person a new lease on life. But the organ donation procedure isn't really proactive. What if, instead of waiting for you to kick the bucket, the hospital were to just, you know, reallocate organs according to utilitarian principles? I mean, how much pleasure do you think you can manage in the remainder of your lifetime? Is it more pleasure than two or three other people would have if their lives were saved by your, uh, gift? A strictly utilitarian approach, one which made no allowances for petty things like rights or autonomy, would mandate that we dice up healthy people whenever there were at least a couple of others who could be saved. At the very least, if you were to wander into a hospital for a checkup, so long as you had two working kidneys, you'd probably wake up without one. Some have argued that the widespread fear of getting diced up would outweigh the benefits of such a policy, but if hospitals could keep a secret murder slash organ reallocation conspiracy under wraps, would that really make it a good thing?

The repugnant conclusion. When we talk about utilitarian ethics, we tend to focus on one part of the equation: increasing well-being. That's where a lot of the support for it as a standard of morality comes from, because it usually endorses courses of action that end up improving people's lives. But there's another possible approach, notably posited by Derek Parfit. Rather than trying to improve existing human lives, we could just make more humans, a lot more humans. In fact, in Parfit's framing of the utilitarian calculus, it seems that the maximum utility we should be pursuing is the most populous society we can manage, even if life in that society is just barely worth living. Take these two groups of people, A and A-prime.
The height of each bar is the well-being of the people in the group, and its width is the number of people in it. Group A has a small number of people at a high standard of living, while A-prime has that same population, plus another population enjoying themselves less. Not bad, mind you, but not quite as stellar. It would be weird to say that people whose lives are worth living should never have existed just because they weren't going to enjoy themselves as much as the happiest people. So it makes sense to say that A-prime is better than A, or at least not worse. Now let's compare A-prime to Group B. Same number of people, slightly lower maximum happiness, but higher average happiness overall. That looks even better, but where has that brought us? We've found that a larger population at a slightly lower standard of well-being is superior to a small population of very happy people. We can repeat the same procedure over and over until we reach what Parfit called the repugnant conclusion: a massive population of people whose lives are just barely worth living. By the structure of the argument above, utilitarianism would seem to suggest that that's the more desirable outcome. I'm sure that all the parents in the audience are reassured, but that seems wrong somehow. Repugnant, even.

In scenarios like these, where utilitarian models of morality seem to conflict with our intuitions about right and wrong, some philosophers choose to double down on utilitarianism, saying that the model is so good that even when it points at seemingly immoral conclusions, we should bite the bullet and simply accept those conclusions as correct. Utility monsters should be fed. Organs should be harvested. Our squeamishness about following the recommendations of utilitarian ethics is simply due to our imperfect moral apparatus. Still, there's a key difference that's worth acknowledging.
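Parfit's mere-addition steps are, at bottom, arithmetic on totals: population size times average well-being. A quick sketch with hypothetical numbers shows each step checking out, and the iteration marching toward the repugnant conclusion:

```python
# Parfit's mere-addition argument as total-utility arithmetic.
# The specific numbers are hypothetical; only the comparisons matter.

def total(size, avg_well_being):
    """Total utility of a population: headcount times average well-being."""
    return size * avg_well_being

A       = (1_000, 100)  # a small group of very happy people
A_prime = (2_000, 55)   # that group plus extra, less happy people
B       = (2_000, 60)   # same size as A-prime, happiness evened out

assert total(*A_prime) >= total(*A)  # A-prime is at least as good as A
assert total(*B) > total(*A_prime)   # B beats A-prime

# Repeat the move: each step doubles the population while average
# well-being falls to 60% of what it was, so the total grows by 20%.
size, avg = A
while avg > 1:  # stop once lives are just barely worth living
    size, avg = size * 2, avg * 0.6

# A vast population, each member barely above the baseline, and yet
# a higher total than the happy little group we started with.
assert size > 1_000_000 and total(size, avg) > total(*A)
```

The totals keep climbing at every step; it's the conclusion, not the arithmetic, that feels wrong.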
Unlike other phenomena, there's no objective measure we can hold up to models of morality to evaluate their truth. If we disagree about which scientific model better represents the world, we can turn to data and experimentation to resolve our differences, or at least to figure out what sort of information might lend credence to one idea or another. But the best we can hope for out of a system of ethics is internal consistency and some sort of correlation with our moral intuitions, which may themselves be biased or incorrect. There's no science that can be done to empirically verify a system of ethics without begging the question, without deciding beforehand what framework for morality is correct. In that light, we should probably be extraordinarily careful about prioritizing the dictates of ethical systems over our moral instincts. It's good to try to develop rigorous systems, both to explain those instincts and to give us useful tools for navigating dilemmas. But we should be mindful that those intuitions are what compelled us to develop ethics in the first place, and that methodological tidiness is a poor substitute for truth, especially when you're talking about right and wrong.

Do you think that we should give precedence to systems of morality over our moral instincts? Please leave a comment below and let me know what you think. Thank you very much for watching. Don't forget to blah, blah, subscribe, blah, share, and don't stop thunking. Mwah.