Trevor Burrus And I'm Trevor Burrus. Aaron Powell Joining us today is Christopher Freiman. He's an associate professor of philosophy at the College of William and Mary. Welcome to Free Thoughts, Chris. Chris Freiman Thank you. Thanks for having me. Aaron Powell So you wrote the chapter in Arguments for Liberty, a book published by Libertarianism.org, on utilitarianism, which is what we're going to talk about today. What is utilitarianism? Chris Freiman That's a good question. It's a moral theory which says that the right thing to do is basically the thing that produces the best results. And when you put it like that, it sounds pretty commonsensical, pretty uncontroversial. Why wouldn't you want to do the thing that produces the best results? But once you start getting into the details, it's a little more controversial. So the way a utilitarian would think about what counts as the best results would be the thing that produces the most happiness for everyone affected by your action. So in essence, if you're trying to decide what the right thing to do is, you look at all your alternatives. What can I do in this situation? How much happiness is this going to produce for everyone that's affected by my action? How much suffering is going to be produced for everyone affected by my action? And then subtract suffering from happiness, get net happiness, and then you just pick the thing that produces the most happiness on net. Aaron Powell So what's happiness? Chris Freiman Well that's, yeah. Aaron Powell Just pure raw, unmeasured joy. Chris Freiman It depends on who you ask. I mean a lot of utilitarians think it's pleasure. And I can see the attraction of that view. That view has some pretty famous objections lodged against it. I think the view that I prefer is happiness as desire satisfaction. So it doesn't have to be pleasure in the way that we traditionally think of it. It's just, you know, happiness is getting what you want. It's satisfying your preferences.
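The decision procedure described here, enumerate your alternatives, net out happiness against suffering for everyone affected, and pick the option with the highest net happiness, can be sketched as a toy calculation (the action names and utility numbers below are purely hypothetical illustrations, not anything from the discussion):

```python
# Toy act-utilitarian choice: for each alternative, sum happiness minus
# suffering across everyone affected, then pick the highest net total.
# All names and numbers here are hypothetical illustrations.

def net_utility(effects):
    """effects: list of (happiness, suffering) pairs, one per person affected."""
    return sum(h - s for h, s in effects)

def best_action(alternatives):
    """alternatives: dict mapping action name -> list of (happiness, suffering)."""
    return max(alternatives, key=lambda a: net_utility(alternatives[a]))

choices = {
    "keep promise":  [(5, 0), (3, 1)],   # two people affected: net 7
    "break promise": [(8, 0), (0, 6)],   # net 2
}
print(best_action(choices))  # prints "keep promise"
```

The point of the sketch is only the structure of the calculation; the hard philosophical questions in the rest of the conversation are about where those numbers could possibly come from.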
So you could imagine people having a preference for a life that involves very little pleasure. Maybe they're a religious ascetic or something like that. But we could still say that they lived a happy life if their preferences were satisfied. Trevor Burrus We're not going to challenge preferences as a general rule. I mean, say heroin addicts and the contemplative life of Aaron and his fountain pens are equally valid. Aaron likes fountain pens, by the way, so we can continually get on him for this subjectively bad preference, I think. Aaron Powell Exactly. So he's bad on the utilitarian front. But I mean, are the fountain pens of Aaron and the heroin addict the same thing? Chris Freiman Yeah. It depends on which utilitarians you ask. So Jeremy Bentham famously said it's all the same. It's fountain pens, I guess, I don't know, that's still probably a lower pleasure, I think, fountain pens. But you know... Aaron Powell He probably used them. Chris Freiman Yeah. That's true. He probably did. Trevor Burrus That's true because he didn't have a choice. Chris Freiman So for Bentham, all pleasures, all preferences are equal. Mill famously disagreed. He said there are these higher pleasures and lower pleasures. Higher pleasures tend to be ones that require cultivation, the use of our higher faculties. So reading Shakespeare is a higher pleasure, even if it's the same in intensity and duration as the enjoyment you get from, what was your example, heroin? Aaron Powell Yes. Chris Freiman We would have a reason to prefer reading Shakespeare to using heroin even if quantitatively they were the same. I don't know, I have a soft spot for Bentham, for the Bentham view that all pleasures, all preferences are worth satisfying equally, intrinsically. We might have instrumental reasons; so if you have a preference for hurting other people, we would have a reason not to satisfy that, because that's bad for the person that you're hurting and it's going to frustrate their preferences.
But in and of themselves, yeah, I say pushpin and poetry, to use Bentham's example, totally fine. Aaron Powell Are these sorts of pleasures, these sorts of desires, fungible, or is it simply a matter of adding them up? So you said of the person who desires to hurt people, well, we don't want to have them do that because it's going to cause suffering. But does it really matter? Like if they get a lot of it, or there's a whole bunch of people who have gotten together and all of them would get a lot of pleasure out of torturing Trevor, can we say... Trevor Burrus That's a non-zero number. Aaron Powell An alternative, an alternative version of the same question would be: do we have to treat everyone's preferences the same? So if I'm making a decision and I'm weighing it up, and I could take an action that's going to bring 10 units of desire satisfaction to a stranger or eight units to my child, can I prefer my child over the stranger? Chris Freiman Right. So I would say morally speaking, no. You can't prefer your child to the stranger. You might have legitimate self-interested reasons for preferring the child to the stranger. So I think you could say a morally righteous person or a morally perfect person on a utilitarian view wouldn't give preference to their child simply because it was their child. But you might have reason to depart from the morally perfect thing to do, just all things considered. So there might be more to life than being morally perfect. That's a semi-evasive answer, so let me soften you up a little bit. If it turned out that you could benefit your child in some way, say, I don't know, how old is your oldest child? Eight. So this is, we're talking elementary school, I guess, yeah. So you say, like, suppose you could get your eight-year-old into an elite elementary school, but it required, I don't know, falsifying the admissions test of your child's rival for the final spot or something like that.
Even though you might think you have some sort of special parental duties to benefit your child, it seems like there have to be limits. And it seems like you can't cheat other kids, strangers' kids, out of benefits to secure benefits for your own kid. So at the very minimum, it does seem like we have duties to strangers that can trump our duties to our own children. Does this deal with rights at all? I mean, does it come into play at all? Can we say this is not allowed because of a rights claim, or does utilitarianism just avoid that language? I like rights. I like rights talk. But I think the reason why we care about rights, the reason why rights are valuable, is because they have good consequences. So a helpful analogy for understanding a utilitarian perspective on rights, I think, is traffic lights. So you have the green light that gives you a right of way. You have the red light that imposes a duty to stop so that other people can go through. The rights generated, or tracked, I guess, by traffic lights aren't natural rights in the way that a lot of libertarians think of natural rights. They're conventions, to be sure. But they're good conventions. And you can make a solid case that it's not just a matter of opinion that having traffic lights is a good thing from a utilitarian perspective. They actually generate good consequences by coordinating our behavior and so forth. That's not written in the fabric of the universe, that a green light gives you a right of way. But just given the kind of beings we are, the way intersections work, this is just a useful convention. And it's useful to talk about rights. And I think that point generalizes to all rights talk. So rights to property, to bodily integrity, to free speech, et cetera, et cetera, that's meaningful within a utilitarian framework. But we would just say the rights aren't intrinsically important. They're instrumentally important. So is this to some extent the difference between ...
Because one problem with that is, yes, you could say in general having rights increases happiness or desire satisfaction, which looks like a rule utilitarian notion. But there may be lots and lots of individual instances where the violation of a particular person's rights will be utility maximizing, and then those exceptions, in the act utilitarian sense, kind of eat the rule. Right. So this is something that, as a ... I don't know if I would say I'm a full-fledged utilitarian, but I definitely lean very heavily that way. This keeps me up at night. I think I'm committed to ... So this is a long-standing debate within utilitarian theory. So you say you have this rule, you have these rights. In typical cases, they lead to good consequences. They maximize utility. But what happens in a particular case where it doesn't, and you're led to this dilemma? You either follow the rule or you break the rule for the sake of utility. And this creates not quite a paradox, but at least a problem, because if everybody is constantly calculating whether or not they should follow the rules, if I'm constantly calculating whether I should keep my promise to you, you're not going to trust my promises. The institution of promising starts to break down. Happiness is frustrated, et cetera, et cetera. My view is close to Sidgwick's view, Henry Sidgwick, that there might be sort of an esoteric utilitarian morality where the truth is act utilitarianism, which is basically the idea that with each act you take, you ought to maximize utility. So if breaking a rule maximizes utility, so if stealing a loaf of bread maximizes utility, you should do it. That's the right thing to do. But we might not want to teach everybody to do that. We might want to teach them the rules and tell them to stick to the rules and not to constantly calculate the utility of following them. But this is maybe a noble lie or something like that that we would tell people.
Now one thing I would say, again maybe to make it a bit more palatable, is I think any plausible moral view has to allow us to break rules. So, I mean, even Nozick, who's kind of the arch rights guy, says you can violate rights to avoid what he calls catastrophic moral horror. He doesn't really specify what that is or what the structure of that theory would look like. But I think on any plausible view, if you can steal a loaf of bread from the back of a truck to feed your starving family, you've got to be allowed to do that, morally speaking. And I think one virtue of utilitarianism is that it provides a very clear, compelling explanation for why you're allowed to break that rule, why you're allowed to break the rule against stealing. Well, because in this case, stealing actually produces significantly more good than harm, and so you ought to do it. And I think pure rights-based theories actually have trouble explaining why I might be able to steal bread from the back of a truck, you know, why we might be able to do certain sorts of things that under normal circumstances we would be very opposed to, but in emergency conditions would be okay to avert catastrophe. It seems like an empirical question how many broken rules, or rights, let's call them for the purpose of this, actually undermine the rule when you break them. Because as we know, with sociopaths, there's a sort of stable number of sociopaths in a human society. About 10% can be there who are constantly breaking rules and stealing from people, because they can take advantage of everyone else's trust. If there were too many, then they would kill the very system of trust that they're preying on. So this would make the utilitarian calculus very difficult in the sense of saying, okay, well, this time when I break the rule, is it undercutting the existence of the rule or not?
And no one would know the answer to that question, which gets to a really difficult problem with utilitarianism: you don't know enough about what helps people out or what's hurting them, or whether or not you're contributing too much to rule breaking, at the moment you make the decision. You'd be paralyzed by the amount of data that you would need. That's right. So this is why I would advise people not to constantly calculate the utility of rule breaking and just say something like, here are the basic rules. Don't lie, cheat, steal, et cetera, et cetera. In emergency circumstances, you can break the rules, and don't bother doing any more calculation than that. And I don't think that's too problematic. In fact, I think that's probably pretty close to common sense morality. So you stop at the stop sign. But if there's no traffic and your child is severely injured in the back seat and has to get to the emergency room at 2 a.m., then I think most people would say, well, okay, then maybe you can run through the stop sign. You know, look both ways, but you can break the rule in that case. That doesn't mean that every single time you stop at a stop sign on your drive to work, you're constantly thinking, well, can I maximize utility by running through it this time? You just say, no, just follow the rules in normal circumstances; in extraordinary circumstances, in emergency cases, then you can break the rule. So I don't think that's going to threaten the stability of the rules in the way that constant calculation might. How do we articulate these rules? We teach people these rules. We teach them not to break them. Where do these rules come from? And can we assess the rules from within this system itself? It's the noble lie. Like, who's in on the noble lie? I guess we are. Yeah, I mean, you don't air this. Too many people shouldn't listen to this podcast. Don't send it to all your friends.
Yeah, well, if my presence on here was broadcast, it would dissuade people from listening in the first place. Yeah, so that's a really good question. So I mean, I'm sympathetic to a kind of Hayekian view where it's hard to totally step outside and isolate particular rules and assess them for their value. So I think it is tough to say that we can have this Archimedean standpoint and say this social institution is good, this one's bad, and just pick and choose the way someone like Bentham might think. With that being said, I'm not quite as skeptical as someone like Hayek is about our ability to rationally assess rules. I mean, one really kind of interesting bit of utilitarian literature is this paper that Bentham wrote, published posthumously, where he talks about the irrationality of anti-sodomy laws. He just said, look, from a utilitarian perspective, this doesn't make any sense at all. You know, mutually beneficial, what's the problem? It's happiness maximizing. That obviously was very counter-conventional when he was writing it. But I think he made a really compelling argument at the time, even though in some sense he was standing outside his traditions, outside of his conventions. And I think that we still have a bit of critical leverage today with which to assess. We can just look at certain institutions. We have empirical evidence that some work well and some don't. And so we're not totally unable to assess them critically. Utilitarianism has this problem, and I mean, answers have been articulated for a lot of these things, but there's a lot of instances where it clashes with our intuitions about moral principles, rightness and wrongness. And one of the ones that has always been most troubling to me is this: we're so focused on action and on assessing the consequences of action, so all the assessment happens post-act, not pre-act. Does it care about motivation?
Because we can think of lots of instances where the same act with the same consequences we think is morally laudable if done for one motivation, but if done for another motivation is morally repugnant. So I think we could separate our evaluation of actions and motivations. So we could say this was in some sense a bad action because it failed to maximize utility or had really disastrous consequences. But I'm not going to blame you for taking that action. I might even praise you for taking that action, because based on the information that you had at the time, you had all the reason to think that this would produce good results, and it just didn't work out that way. And I think there's actually utility in praising good motivation. The idea being we want to encourage people to act with empathy, to look towards the consequences of their action, so on and so forth. But I don't think there's any contradiction in saying something like: this is not the action you should have performed, given what happened. So you can make an analogy to an investment choice. I could say right now it looks like investing in Google, I don't know, is a good idea or something like that. But suppose some scandal comes out tomorrow and Google stock plummets and you lose a ton of money. There's a really meaningful sense in which that was the wrong investment to make. But I'm not going to blame you for making that investment, because it was the right call at the time. And we want to praise people for making sensible investments as opposed to, say, playing the lottery. Same thing with moral theory. If you give money to a charity that all the evidence indicates is really helpful, and it turns out that there was some secret scandal and it ended up hurting people, I would say that was the wrong thing to do in an objective sense. But you couldn't have known it at the time. And because you couldn't have known it, you did the best you could, and I'll praise you.
What about the flip side of that, though? Because you could have an action where the outcome was good, but the motivation behind doing it was so awful that I think we have a strong intuition that it was the wrong thing to do in the first place. So, you know, lying to someone, flattering someone in order to get something out of them, or... Let's just say, to be concrete: let's say you believe that giving to charity will, through some weird mechanism, kill a million kids. Right. And so you give to charity, but you honestly believe that a million kids will die if you give to charity. I don't think you even need to get to absurd things like that. We love talking about the absurd ones. But just instances where someone is... you're being manipulative, intentionally misleading, self-serving. We would say you shouldn't have, even with the consequences that flowed from it; it was wrong for you to have taken that action. Yeah, maybe this is splitting philosophical hairs, although, I mean, that's what I do. That's what you do for a living. That's right. And then we say, well, I mean, depending on the particulars of the case, I would be inclined to say, you did the right thing, but for the wrong reason. So, just in this cartoon case where you give to the charity thinking it's going to kill a million people, but it saves a million people, I would say, yeah, you did something great if we focus on the giving to the really good charity. In that sense, you did something great. But you did it for the wrong reason, and I can condemn you for your motivation. And there's good utilitarian reason to condemn you for your motivation, because just as a general rule, people who are motivated to spend money in ways that they think will maximize human death are not going to be utility maximizing. And so there's utility in discouraging people from having these kinds of malevolent motivations. So, how do we get to libertarianism?
Having an understanding of utilitarianism and what it means, what is the utilitarian argument for libertarianism? Yeah, so the argument that I've given in the chapter is just: if we look at our menu of institutional alternatives, you know, free market capitalism, welfare state capitalism, socialism, et cetera, et cetera, looking at all these alternatives, realizing that none of them are perfect, they all have their flaws in terms of maximizing social happiness, free market capitalism does better than the alternatives. And I think that the great virtue of the market from a utilitarian perspective is, one, the Hayekian point that it gives us information via the price system about how to efficiently allocate our resources, so it solves this knowledge problem. So if you ask me right now, how should I arrange my consumption choices to use my resources most efficiently? I don't know. But I do know that if the price of, to use Hayek's example, tin rises, that sends a signal that tin is scarce and should be conserved, and that's going to raise the price of stuff made from tin. So I'm going to buy less of it, which is going to be good for the economy as a whole, et cetera, et cetera. The price is giving me information about how to efficiently use goods. So that's one virtue of the market. The other virtue is the one that people since Smith have been harping on, which is that it gives us an incentive to serve other people, even if that's not our ultimate motivation. So we engage in trade to benefit ourselves, but in doing so we give other people what they want. Most people aren't hyper-altruists willing to devote their lives to the maximization of social welfare. They're just not motivated to do that. They care primarily about their own happiness, the happiness of their friends and family. Maybe they give to charity. I think the average donation is something like 4% of income to charity, which is not nothing.
But capitalism gives us an incentive to supply other people with what they want, even if we don't ultimately care all that much about their well-being. So that's an argument for allowing there to be free markets, because they maximize wealth. But that's not necessarily the same thing as libertarianism, because libertarianism has a much more radical view of the role of the state than that, because what you just described could be the Nordic model. So we have markets, but we also redistribute an enormous amount of money. It could also be the Elizabeth Warren, Bernie Sanders style: yeah, let people start companies, but we're going to regulate the hell out of them to protect the workers, and we're going to impose all sorts of restrictions on what you can buy and sell. We're going to, you know, the soda ban in New York, we are going to stop these things. Like, all of that would seem to be still okay even while accepting the wealth maximization of allowing people to exchange goods and services in markets. Except for maybe the big soda thing, because that makes people happy, right? I mean, it's a pure prohibition on people buying things that make them happy. Well, but it only makes them happy in the moment. But we, the enlightened, know that in the long run, you know, because presumably they have desires later on. And so at some point in their life, they're going to be like, whoa, I have a strong desire that, if I could go back, I wouldn't have drunk those sodas, because now I have diabetes or all sorts of other health issues. Yeah. So just on that, I think it might have actually been James Mill who said something like, only the wearer knows where the shoe pinches. And so it's by no means impossible, but I think it's improbable, to think that as a general rule third parties have a clearer picture of my preferences than I do, and of how to satisfy them.
And Mill, John Stuart Mill, I think actually makes a pretty nice utilitarian argument against paternalism in On Liberty, where he says, A, you have more knowledge about your interests than third parties, and B, you generally have a stronger incentive to pursue them than third parties. Again, you can imagine exceptions to these rules, but in general, that strikes me as pretty plausible. The other big consideration to think about when we're thinking about the Nordic model, redistribution, regulation, and so forth is all of the kinds of public choice worries that a lot of libertarians are familiar with. So I mean, I discussed this a little bit in the chapter. I don't have a problem in principle with some sort of redistribution from rich to poor. And for a lot of utilitarian philosophers who aren't libertarians, which as far as I can tell is most of them, that's one of their stumbling blocks. They say, well, look, $1 brings so little satisfaction to Bill Gates. And it would bring so much more satisfaction to somebody who's living on $5,000 a year. So the happiness maximizing thing to do is take that dollar from Bill Gates and give it to the person who's significantly poorer. I think that's right as far as it goes. And I think you could say, if we had this kind of frictionless system of redistribution where you just take the dollar from the rich person, you give it to the poor person, and that was that, that's fine. I'd totally be on board with it. I think there are compelling reasons even in practice to support something like a universal basic income. And we could argue about how exactly that would be implemented. But I think we've learned enough from public choice economists to know that there's the ideal model of how you'd want the state to work, and then there's how it actually works in practice. And when you look at regulations and when you look at redistribution, there are just massive inefficiencies.
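The Bill Gates point rests on diminishing marginal utility of money. One common way to model it, though only one assumption among many, is logarithmic utility of income; under that assumption, the transfer described here looks like this (the incomes and the log-utility function are illustrative assumptions, not anything from the chapter):

```python
import math

# Under a log-utility assumption, u(income) = ln(income), so the utility
# gained or lost from one extra dollar shrinks as income grows.
def marginal_utility(income, amount=1):
    """Utility change from receiving `amount` extra dollars at `income`."""
    return math.log(income + amount) - math.log(income)

rich_income = 100_000_000_000   # a Gates-scale fortune (hypothetical)
poor_income = 5_000             # "living on $5,000 a year"

gain_to_poor = marginal_utility(poor_income)
loss_to_rich = marginal_utility(rich_income)
print(gain_to_poor > loss_to_rich)   # True: the transfer raises net utility
```

The sketch captures only the "frictionless" in-principle case; the public choice worries that follow are precisely about what happens when the transfer is not frictionless.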
And then it becomes an open question whether or not it's going to do more harm than good. What does this say about, I guess, secretive programs? So take the surveillance state. A couple of years back I read Glenn Greenwald's book on the Snowden revelations. And one of the really interesting things about it was, you know, he's published this book laying out what the NSA was up to. And he's saying this was really harmful. But all of the harms that he lists are harms to us from knowing about the program. We're now going to change our behavior because we know we're being watched. We're going to feel like our privacy is invaded. Which, this was not his intent, but it would basically mean that all of the harms come from him publishing it and not from the program. So does utilitarianism say that the government can do all sorts of things, like privacy invasions or restrictions, and as long as we don't know about it, and therefore don't know that our preferences aren't being satisfied or are being curtailed, it's okay? So one thing that I find I end up repeating ad nauseam is this distinction between what a utilitarian would want to endorse in principle versus what they would want to endorse in practice. So yeah, I mean, in principle, and I think this is the problem that many, many people have with utilitarianism, in principle you could justify anything. So I mean, surveillance, a surveillance state, that's the least of your problems in principle. Like, right, you're harvesting the organs of innocent people; all this stuff is on the table, no pun intended, if you're a utilitarian. And so I would say, in terms of in principle, if we could get a secret surveillance state that genuinely promoted the public good, then as a utilitarian, you'd have to be okay with that. That being said, I would say in practice, given what we know about human psychology and the incentives that we face, it's probably a bad idea.
So it gives people a lot of power where the cost of abuse is pretty low, given that it's not done with a lot of public oversight. And so I would say, just given those sorts of considerations, putting a lot of power in the hands of people who have fewer incentives not to abuse it than they otherwise would, that's a reason to be extremely cautious about it. Is it clear, though, on the surveillance state side, the civil liberties side of this equation? Because you discussed in your chapter how the market helps people be better utilitarians in the marketplace, but as Aaron asked before, there are other concerns in libertarianism, including civil liberties concerns, that are not market forces, and it's hard to figure out what maximizes utility there. So we're all sitting around here talking about how we don't like the surveillance state very much, but a lot of people don't mind it. I mean, their preference function is... Or find it comforting. Or, as I'm saying, they not only don't mind it, but they feel better at night because the Patriot Act is out there and some guy is watching them with a drone or through their toilet, whatever. They feel better about it because we've always been at war with Eurasia, you know, kind of situation. Does that mean, on a pure utilitarian calculus, if that is what people actually prefer, we're different than they are and there's no market transaction, they prefer security over liberty, then that's the way it should be? I mean, I'll take this a step further. The recent violence in Charlottesville was a group of people who have a very different set of preferences than those of us in this room. And you saw it to an extent with Trump's election. Nationalism is utility maximizing for a lot of people. His voters... You can say, you know, trade restrictions are going to make you less wealthy, and they will say, I don't care.
What matters is preserving our tradition, is keeping the factories open, is men being employed in the kind of work that men are supposed to do, or is keeping my culture a certain way, so keeping out brown people from across various borders. These aren't hypotheticals. These are a significant part of the population. How do we address those kinds of preferences? Yeah, that's a great question. So, I mean, I think one thing you could say is you have utilitarian reason to try to reform certain sorts of preferences. So, I think the immigration case is super clear cut in that respect. People, I think that's right, do have these kinds of nativist preferences, and they say, you know, I want to preserve American culture, American identity, whatever that means. And so, let's restrict immigration. But we know that the economic benefits of immigration are just super high. And also counterbalancing the nativist preferences are the preferences of cosmopolitans like me, who have a super strong preference for really open borders. So, that's a counterbalance. But I think we want to say, in terms of the long-run happiness and prosperity of not just the country but the world, it's better for people to not have these preferences. And so, we can criticize them from that perspective. So, preferences that facilitate positive sum cooperation are good preferences from a utilitarian point of view. So are we then forced, to some extent, especially when we're making policy, but even in our own individual actions, to trade off current against future preferences? Do we discount one versus the other? How do we measure it? Do we say to people, yes, we're going to not satisfy your nativist preferences now in order to maximize the preference satisfaction of currently non-existing people who will exist at some time in the future? Yeah, well, yeah, so that opens up a huge philosophical can of worms.
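The "do we discount one versus the other" question is often formalized with an exponential discount factor on future utility. Here's a toy sketch of how the choice of factor changes the answer (the discount factors and utility numbers are hypothetical, and nothing in the conversation endorses any particular factor):

```python
# Exponential discounting: generation t's utility is weighted by delta**t.
# delta = 1.0 counts future people's preferences equally with current ones;
# delta < 1 progressively discounts them. Numbers are hypothetical.
def discounted_utility(utilities_by_generation, delta=1.0):
    return sum(u * delta**t for t, u in enumerate(utilities_by_generation))

stream = [10, 10, 10]                   # equal utility to three generations
print(discounted_utility(stream, 1.0))  # 30.0: future preferences count fully
print(discounted_utility(stream, 0.5))  # 17.5: future preferences count less
```

Whether any delta below 1 is morally defensible is exactly the philosophical can of worms being gestured at.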
How do we, what do we say about the preferences of future people? I haven't thought too much about that, surprisingly, although utilitarians generally have thought a lot about it. You know, I'm inclined to think that the preferences of future people have to matter, just because it would be very difficult to make sense of a lot of our policy decisions if we didn't think that they mattered. So just take something like climate change. That's really going to hurt people who don't yet exist. And so if we said, we don't care about the preferences of non-existing people, then it's not clear why we should be so concerned about mitigating global warming. I mean, it is going to negatively impact people who are currently alive, but it seems like some of the greatest harms are going to be to non-existent people. So even though I don't have a great technical argument for it, I think that's a first pass as to why we should care about future preferences that haven't yet materialized. So on that point, it reminds me of one thing that always upsets me when thinking about market transactions and highest and best uses. The global warming question and future generations is a good one, but let's try something different: the preservation of historical things. Confederate statues? More like what was happening when ISIS and the Taliban were destroying things like Palmyra in Syria. For me, I place a very high value on those, and I can't actualize my preferences. I can't go buy it. I would. I would go buy it to preserve it, but I can't. Or, not even using ISIS as an example: if someone wants to bulldoze, I don't know, the armory where John Brown holed up in Harpers Ferry, Virginia, even though actually that's not the original building, but nevertheless, let's just say someone bulldozed that and turned it into a Subway sandwich shop. I would be very upset about that. And my preferences would be super high.
I just wouldn't actually have the money to fix that problem. And it seems like the only thing I could appeal to would be the future preferences of future generations: they'll want to see this too, they'll want to have this preserved. Yeah, I think that's a fair point. I would say that's a totally legitimate way of thinking about those sorts of cases, although I would want to make sure that we had, I don't know, empirical evidence that future generations will value those things. Because one thing that I am committed to saying is that monuments, art, and so on and so forth only have value insofar as people value them, insofar as they satisfy people's preferences. That actually strikes me as, so any moral theory has its pros and cons, but that to me has actually always seemed like a pro. Yeah, it's a pretty mundane observation, yes. Like if there was a monument, or a piece of art or something, that nobody liked, it seems like it just doesn't have value. Or if it's a Mad Max zombie-apocalypse world and all you have is a Picasso canvas to start a fire, the Picasso painting is only worth that much as kindling. Right, exactly. And so I would say, yeah, the lost utility that we might be depriving future generations of, that's certainly a relevant consideration. But I would resist the claim, not that you're making this claim, but I would resist the claim that great art, historical artifacts, et cetera, somehow have intrinsic value that exists apart from the way humans, rational creatures in general, interact with them. What about market failures, especially public goods? If we're going to use the market as a utility-maximizing entity and therefore promote it as a general rule, what about things that the market either can't provide or doesn't provide at an efficient level? Yeah, yeah. So I like to use this analogy in various forms. Let me give you my theory of Steph Curry failure.
So Steph Curry, I don't know what he shot last season from three. Probably like 47%, something like that. I think it might have been that. So he misses, I think, about 53% of his three-point shots. Oh, just three-pointers? Yeah, just three-pointers. That's what we'll go with. True, we could spend the rest of the podcast on basketball statistics. I probably know more about that than philosophy. So just in terms of three-point percentage, he probably misses about 53%. And so if you're judging Steph Curry from the standpoint of perfection, from 100%, you say, that's pretty bad. Or like Babe Ruth. Babe Ruth batted, I don't know, like .345, .340, something like that. Maybe .342. It bugs me that I can't remember it exactly. So in absolute terms, he didn't get a hit like two-thirds of the time. That's really bad. And so if your standard is perfect success, perfection, these players look pretty bad. And so you say, well, Steph Curry has failed. Okay, I'll grant you the word. I'll say, okay, he failed. But then the question is, practically, what are the implications? Should we tell Steve Kerr, the coach of the Warriors, that he should cut Steph Curry? Well, no, that's absurd, because the standard by which you should judge Steph Curry is not perfection. It's the next best alternative. And so if you cut Steph Curry, someone else is going to take his place, and that person is going to miss even more than 53% of his three-point shots. Because it turns out that Steph Curry is about the best that there is; 45% or whatever it is is about as good as it gets. And so that's kind of my attitude about market failure, which is that I think markets fail many times in the sense that they leave welfare gains on the table. We could imagine systems that do a better job of, say, providing public goods. That's the standard case.
But just because we can imagine an alternative that's better doesn't mean that that alternative is actually feasible. And so once we start considering all the public choice worries about the actual incentives and information that politicians have, that bureaucrats have, that lobbyists have, et cetera, then I think we become a lot less optimistic about the alternatives to free-market capitalism. So it's not perfect, just as Steph Curry is not perfect. But I think we have good reason to believe that it's better than the available alternatives. Does utilitarianism ultimately commit us to voluntaryism? So the question, if we're not anarchists and so we're going to have a state, is the question of obligations to the state and obligations to obey the law. On the one hand, you could say the law is that set of rules that we teach people to follow. Yes, there may be instances where it leads to injustice, or in this case the loss of preference satisfaction, but by and large a society is better off respecting the laws. But that "by and large" doesn't seem to apply in a lot of instances. Almost every law that's on the books right now, if you were to go through all of them, is utility minimizing. And so does utilitarianism effectively commit us to disobeying most of the time, if not outright rebelling all of the time? Well, you'd have to calculate the expected social value of an act of rebellion. But certainly I think a true-blue utilitarian will deny that there's any kind of independent duty to obey the law. So I think all of these duties are going to look like your duty to stop at a stop sign. Generally speaking, there's instrumental value in stopping at a stop sign: you don't want to undermine other drivers' expectations about who has the right of way, social utility, all that same song and dance.
But say you're in the middle of the desert, you're rushing somewhere important, and you can see in all directions for a mile that there's no car. There is no moral wrong in just burning through that stop sign, in my view. And I think all legal obligations you might have are the same. So there might be value in acting in predictable sorts of ways, so on and so forth, and maybe there are some sorts of government programs that do good. But take, I don't know, smoking marijuana or something like that. Say that's a utility-maximizing act, and I'm considering all the negative externalities on third parties, blah, blah, blah; just for the sake of argument, it's a utility-maximizing act. It happens to be against the law. So what should I do? I would say, well, if you're genuinely correct in thinking that it's utility maximizing, yeah, do it. Disregard the law. And I think that generalizes to all legal obligation. You write in the book that obviously the utilitarian view would be willing to endorse redistribution. It's not off the table; nothing is off the table if it raises utility. But you also write about how we may not want to go that far if we're thinking about the effects of economic growth. And in particular, you cited a really interesting argument by David Schmidtz about factoring in production. Could you discuss that? Yeah, I really like that argument. Dave was actually my dissertation director at the University of Arizona. And he has this really nice piece, which originated as a standalone article and then became a chapter in his book, Elements of Justice. So he addresses the argument we discussed a little bit earlier, the diminishing marginal utility argument for redistribution. So that's the argument: a dollar means very little to Bill Gates because he already has everything under the sun. What more could he possibly want? What could he spend that dollar on?
If you give that dollar to somebody who's earning $5,000 a year, there's a lot of really valuable stuff they could spend it on. So it looks very straightforward: a utilitarian should take from Bill Gates and give to the person in poverty. And one of the arguments that Dave makes against that is, well, that's true if you're only considering consumption. But if you're also considering production, it's not so obvious. And he gives an analogy with units of corn. He says, suppose you have somebody who's very corn rich. They have a lot of corn. They have eaten their fill of corn; they don't want to eat any more. And then you have somebody who's hungry, not starving to death, but who just doesn't have very good food to eat, something worse than corn. He says, if you're just considering consumption, it would make sense to take one unit of corn from the corn-rich person and give it to the corn-poor person. That's utility maximizing. But things change when you consider the future and you consider production. He says that precisely because the corn-rich person has no reason to consume that unit, production becomes a higher-valued use of that corn. So they might plant it; the real-world analog might be savings and investment and so forth. And when you consider that very small changes in the growth rate can have huge impacts over time, impacts that tend to be very good for the poor, then it's not so obvious, when we're looking not at a snapshot but at a 10-, 15-, 20-year picture, that the utilitarian thing to do is redistribute that corn. And I think the broader point is that economic growth is a very underrated instrument of poverty alleviation. You hear people talk about trickle-down economics, but it's not about trickle-down economics. It's just: I have a lot of money and I can invest in a manufacturing process that makes bread two cents cheaper. Okay, I have an incentive to do that. And now bread is two cents cheaper for everybody.
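[Editor's note: the claim that small differences in the growth rate compound into large differences over time can be checked with quick arithmetic. Below is a minimal sketch in Python; the starting income, the 2% and 3% rates, and the time horizons are illustrative round numbers assumed for the example, not figures from the conversation.]

```python
# Compare two economies that start at the same income per person but
# grow at slightly different compound annual rates.

def grow(start: float, rate: float, years: int) -> float:
    """Income after compounding `start` at `rate` for `years` years."""
    return start * (1 + rate) ** years

base = 10_000  # hypothetical starting income, arbitrary units

for years in (10, 20, 50):
    slow = grow(base, 0.02, years)  # 2% annual growth
    fast = grow(base, 0.03, years)  # 3% annual growth
    print(f"after {years:2d} years: "
          f"2% -> {slow:9.0f}, 3% -> {fast:9.0f}, "
          f"ratio {fast / slow:.2f}x")
```

A one-percentage-point difference in the growth rate leaves the faster economy roughly 1.6 times richer after 50 years, which is the sense in which a snapshot comparison of who holds the corn today can understate what production does for future welfare.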
So people have that much more money to spend on other stuff. And I think, really, if you look throughout history, that's how we alleviate poverty: by making better stuff at lower prices, not by redistribution. So when people argue for libertarian ideas, broadly speaking, a lot of different arguments are used. Nine in Arguments for Liberty. Nine in particular. Only one correct one. Only one out of nine is correct. But on that point, a lot of people associate libertarians with a very strong rights theory, a taxation-is-theft, don't-tread-on-me kind of thing. What do you think? Not just why is utilitarianism correct, but is there a virtue that it has rhetorically over these other arguments that are more often assigned to libertarians? Yeah, yeah. In addition to being true, I do think there's rhetorical value in it. The kind of self-ownership libertarianism that we've come to associate with libertarianism is a kind of esoteric doctrine, and not a whole lot of people accept it, for better or for worse. So when people say taxation to provide health care for other people is a rights violation, that gets traction with some people, but for others it doesn't. And I think saying, look, this program is going to have very good consequences, or this program won't have very good consequences, that's a consideration that everybody cares about. I hesitate to use the word theory-neutral, but it's something like a theory-neutral consideration. Everybody wants other people to be happy. We want the country to be richer. We want people to be more satisfied. And so I think it has a wider appeal than maybe some of the more doctrinaire rights views. So that's just an additional reason why. I don't know, libertarian philosophers are not big fans of utilitarianism, but I think we should have more. More should come over to my side. Thanks for listening. This episode of Free Thoughts was produced by Tess Terrible and Evan Banks.
To learn more, visit us at www.libertarianism.org.