Hello, everyone, and welcome back to Conversations with Tyler. Today I'm here with Peter Singer of Princeton University. Peter is one of our most important and influential intellectuals. For our purposes, what is new is that there is a new and very much revised edition of Peter's classic book Animal Liberation. It is now titled Animal Liberation Now: The Definitive Classic Renewed, and by my estimate it's about two-thirds to three-quarters new material. Peter, welcome.

Thanks very much, Tyler. It's good to be with you.

When large language models make it possible for us to speak with dolphins, what will be the first question you ask?

Ah, what do you think of humans, maybe?

And what do you think they will say?

Nothing very positive, that's for sure, because a lot of dolphins have been caught in nets for catching tuna, and impaled to stop them eating fish that people want in some parts of the world. I don't think they'll think too well of us.

And what do you expect the dolphin to ask you? So we explain to the dolphin that you're Peter Singer, right?

Right, okay. What's the question?

Why have you failed? Why have you failed to stop humans treating animals so badly? So I'll ask you the dolphin question. Why have you failed?

It's harder than I thought it might be. And that begins, I think, with the fact that humans eat animals, and it seems like we're very conservative about what we eat. Relatively few people are prepared to really think ethically and openly about what they eat. They're worried about having to change that, eat something different, maybe stand out from their friends and family by not eating meat, and thereby implicitly criticizing those people when they do eat meat. I think there are a whole lot of factors that just make it hard for people to make that fundamental change about what they eat, and without that, they're not really going to fundamentally change what they think about animals.
I have a general question about how totalizing ethics can be, or how totalizing utilitarianism can be. So if someone says to me, as you likely would, well, at the margin we should treat animals better, much better, I would agree. But if someone asks the question, say, do we wish that human beings had never settled the New World? There would then be many more intelligent mammals alive in the New World today. I simply think that question is intractable, and that the limits of utilitarian reasoning are fairly tight around marginal changes, small events. So I'll ask you, do you wish that human beings had never settled the New World, or New Zealand?

No, I don't wish that. I agree that the question is intractable. I'm not sure whether it's intractable in a philosophical sense, because we can't sum up the values, or whether there's just so much factual information that we don't have, including how do we make comparisons between the well-being of the bison, let's say, that were here more numerously, as compared with the well-being of the humans.

If you think, say, about the Earth's current population, which is what, about eight billion human beings, can we conclude that it's too many because it's too much pressure on animals, or is that also an intractable comparison in the same way?

Yeah, I don't think you can say it's too many or it's too few, unless you make assumptions about the kind of technology that we have available to live in a sustainable way on the planet, and to allow non-human animals to live as well. But also in that question, of course, there's the issue of how do we compare the value of human lives (and let's assume that they're human lives that are lived positively, that they are rich and fulfilling human lives) with the lives of non-human animals.
I mean, I totally agree that they're different, and I think it's reasonable to argue that a rich and fulfilling human life contains more happiness than the life of, I don't know, a cow or a sheep. I'm not going to deny that. I don't think animals are equal in the sense that their lives contain equal value with that of humans.

But you still think, in principle, we should be trying to do these total comparisons of one state of affairs against another? Or are those just big-picture questions that we need to set aside? Should we be content with marginal comparisons? Because you're sounding like more of a marginalist than what I was expecting.

I see. Well, I'm certainly prepared to say that clearly the suffering we inflict on animals in factory farms is indefensible. We cause far more suffering than any good we do for humans. Arguably, actually, it's a negative for humans as well. So that's what my focus is in Animal Liberation Now. And in fact, a lot of my work has been aimed at that. It's been aimed at what we should do about people in extreme poverty. It's been aimed at things like, should we have the right to say we've had enough of life and we want to die if we're terminally ill and in pain? So in a way, the big-picture questions that you're asking, I find them interesting theoretical questions, and I can give you the answer that a utilitarian would give, if you want that answer. But in practice, I do think that I can't really make a lot of headway with that.

How do you think about what is sometimes called the meat eater problem? So most countries, probably all countries, as they become richer, they start eating more meat. At some point, they start treating the animals less well. If not factory farms, there's just more mass production of animal meat. Do we need to calculate a trade-off to wish for those countries to become wealthier? Or do we just root for the wealth and figure we'll sort it out later?
My view over the long term has generally been that countries will have to pass through this stage where they've got wealthier, they can purchase more meat, they raise animals in factory farms to do that. But then eventually they'll go on to become more humane and more civilized, and they'll see that that's the wrong thing to do, and they'll treat animals much better. And the outcome will be better on the whole than it was before they became prosperous, when there were a lot of people in extreme poverty. And they also weren't treating animals well, although they didn't have the power to raise so many of them. But still, because they were using them for food, and because if your own survival is at stake, or that of your family, you need to eat, then you'll do anything to animals to do so. It was not a good situation. So in other words, what I'm saying is, I was prepared to swallow the short-term negatives for the long-term outcome. I have to admit that I've become less confident about the long-term outcome. I still hope that it's there. But the fact that there's been relatively little progress in terms of treating animals better over the 48 years since the first version of Animal Liberation appeared does make me concerned that we're not going to be going in that right direction, or not for a long time.

Let's say you could peer ahead, either 70 years or 80 years, and things basically didn't get better compared to now. Would you then think the meat eater's problem is a real dilemma? A moral dilemma?

If it seems it's not a stage, that people will just keep on eating meat through factory farms more or less forever, then I would think, yes, there's a real question as to whether it would have been better if people had not had the prosperity and the ability to raise animals in that way.

You once wrote a book entitled A Darwinian Left. Do you think you're becoming more or less Darwinian over time?
My Darwinian intuition is people won't ever get much better, because they evolved to eat animals. Not to have factory farms, but to kill them in painful ways and simply not worry about it. What's your current view on how Darwinian you are?

I'm still 100 percent Darwinian. I don't think there's an alternative explanation of how we exist and of the biological elements of our nature. But I also think it's compatible with being a Darwinian to say we are a being who's evolved the capacity to reason, and reason can lead us to conclusions which can influence our behavior. I don't see that as being anti-Darwinian or non-Darwinian. I just see it as a realistic appreciation of the fact that we have evolved as rational beings, and that that leads us to certain conclusions we wouldn't have reached otherwise.

But one might start seeing, in Darwinian fashion, that the evolution of rationality is just far more selective than what we might wish for. So there are people who are very reasonable, and then in other contexts they'll do terrible things. It could be a prison guard, it could be animals; you've written plenty about many, many other examples. Is there really this general faculty of reason that overrides those evolved intuitions?

I think there certainly can be, and I think there is for some people some of the time. The question would be, is everybody capable of that? Or even if not everybody, are we capable of getting a dominant group who do follow reason in general universal directions, who use it to develop a more universal ethic that applies to a wider group of beings than their own kin and family and those that they're in cooperative relationships with? I think there's evidence that that is possible, and we don't yet know to what extent that can spread and start to dominate humans in future generations.

I wonder sometimes how much we can have an ethics that truly separates morality from partiality. So here's a question I gave to Sam Bankman-Fried.
I said, as a utilitarian, let's say a super being offered you a gamble where, with 51 percent probability, we would in essence double the population on earth, create a dual earth somewhere, equally happy, and with 49 percent probability we wipe out the earth we have now. That would represent an increase in expected value, and furthermore, I asked Sam, would you keep on playing this game, double or nothing? Now, Sam being Sam, he just said yes, and indeed in his own life he did continue playing the game double or nothing. And he ended up with nothing, or worse than nothing. But doesn't a thought example like that mean we can't really be utilitarians in the big picture of things? We need to be more loyal to the earth we have and not consider gambling in a way for these extra earths?

I think it says more about gambling than it does about being a utilitarian. I agree it's a paradox, and you could say it's a paradox particularly for maximizing utilitarians, who normally will talk about maximizing expected utility, and this seems to be a case where you don't want to maximize expected utility. Although of course there are arguments that would say, well, your expected utility is actually low, given that you can't keep doubling utility infinitely long, I assume. But you had a hypothesis where you have a twin earth, right? And if you could just create a twin earth, I guess you could. But for the real world, we can't keep doubling utility, and so we shouldn't do double or nothing. I'm not claiming to solve this paradox, by the way. I think it is an interesting and somewhat baffling problem with maximizing expected utility.

But does the question show not only that it's some unusual paradox in a corner of the moral universe, but that in all our choices, assessments of utility are within some framework that is pre-assuming a certain amount of partiality, and that there's no escape from that partiality, no fully objective outside viewpoint?
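The arithmetic behind the double-or-nothing gamble discussed above can be made concrete. (This sketch is an editorial illustration, not part of the conversation; the 51/49 odds are the ones Tyler states.) Each round multiplies expected value by 2 × 0.51 = 1.02, so expected utility grows without bound, while the probability that anything at all survives shrinks as 0.51 to the power of the number of rounds:

```python
# Double-or-nothing gamble: with probability 0.51 the world's value doubles;
# with probability 0.49 it drops to zero and stays there.

def expected_value(rounds: int, p_win: float = 0.51, start: float = 1.0) -> float:
    """Expected value after `rounds` plays.

    Losing once leaves zero forever, so E[V_n] = start * (2 * p_win) ** n.
    With p_win = 0.51 this grows by 2% per round, without bound.
    """
    return start * (2 * p_win) ** rounds


def survival_probability(rounds: int, p_win: float = 0.51) -> float:
    """Probability the world still exists after `rounds` plays:
    you must win every single round."""
    return p_win ** rounds


if __name__ == "__main__":
    for n in (1, 10, 100):
        print(n, expected_value(n), survival_probability(n))
```

After 100 rounds the expected value is roughly 7.2 times the starting value, yet the chance of survival is around 10 to the minus 30: a repeated maximizer of expected utility almost surely ends with nothing, which is the paradox both speakers are circling.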
I don't understand why you're seeing this paradox as relating to partiality. I mean, it's just as true if we're completely impartial about universal good, right? So I don't understand why you think that it shows that we inevitably are going to be partial.

Well, take the Bernard Williams question, which I think you've written about. Let's say that aliens are coming to the earth, and they may do away with us, and we may have reason to believe they could be happier here on earth than what we can do with earth. I don't think I know any utilitarians who would sign up to fight with the aliens, no matter what their moral theory would be.

Okay, you've just met one.

I've just met one. So you would sign up to fight with the aliens?

If the hypothesis is like that: the aliens are wiser than we are, they know how to make the world a better place for everyone, they're giving full weight to human interests, but they say, "Even though we're giving full weight to human interests, not discounting your interests because you're not a member of our species, as you do with animals, unfortunately it just works out that to produce a better world, you have to go." I'll say, okay, if your calculations are right, if that's all right, I'm on your side.

And you're making them a little nice. You're calling them wise; they may or may not be wise. They're just happier than we are, they have less stress, depression, so if they could rule over the earth, they would make a better go of it than we would. I would still side with the humans.

I would not. I mean, what you've shown now is that their interests happen to coincide with the universal good. That's the way to produce more happiness, full stop, not just more happiness for them. And if that's the case, I'm on their side.

How do we know there is a universal good? I mean, you're selling out your fellow humans based on this belief in a universal good, which is quite abstract, right? The other smart humans you know mostly don't agree with you, I think. I hope.
Yeah, but you're using the kind of language that Bernard Williams used when he says, you know, whose side are you on, right? You said you're selling out your fellow humans, as if I owe loyalty to members of my species above loyalty to good in general, that is, to maximizing happiness and well-being for all of those affected by it. And I don't claim to have any particular loyalty for my species rather than the general good.

But if there's not this common metric between us and the aliens, what do you just measure? You hook people up to a scale, you measure, they have more of it than we do, let them come in? If that doesn't exist, what is the common good or universal good in that setting?

I don't know if that doesn't exist, but you said they're happier than we are, which suggests that there is a common metric of happiness, and that was the basis on which I answered your question. If there's no common metric, I don't really have an answer. I would try to use the metric of overall happiness, and I'm not sure why I wouldn't be able to use that, but if we assume that I couldn't, then I would just not know what to do.

So you wouldn't fight for our side even then? You'd throw up your hands, or just not know?

You know, this is not about a football team, right? You can give your loyalty to a football team and support them even though you don't really think that they're somehow more morally worthy of winning than their opponents. But this is not a game like that; there's everything at stake.

To what extent, for you, is utilitarianism not only a good theory of outcomes but also a theory of obligation? I'm sure you know the Donald Regan literature. Though you prefer the outcome with more utility, "What should I do?" can still be a complex question.
Well, it can be a complex question in the sense that it may be that we don't want to directly aim at utility, because we're likely to get things wrong. So I still think, normally, if we can be confident in our calculations that we are doing the right thing, then I think the obligations that we have are to maximize utility. But it's been argued that we're more likely to make mistakes if we do that, and rather that our obligations should be to conform to certain principles or rules. I think that depends on how confident you are in your ability. I certainly think we should follow rules of thumb sometimes, when we can't be sure of what's the right outcome, and we should do what generally is accepted. And, you know, to go back to Sam Bankman-Fried, obviously I think that was his mistake. He was too confident that he could get things right and fix things, and didn't follow basic rules, or at least it's alleged that he didn't follow basic rules, like don't steal your clients' money.

But isn't there a dilemma above and beyond the epistemic dilemma? So say you, Peter Singer, are programming a driverless car, and you're in charge. Ideally, you would like to program the car to be a utilitarian and Benthamite car, that if it has to swerve, it would sooner kill, you know, one older person than two younger people, and so on. But let's say you also knew that if you programmed the driverless car to be Benthamite, basically the law would shut it down, public opinion would rebel, you'd get in trouble, the automaker would get in trouble. How then would you program the car?

Yeah, I would program it to produce the best consequences that would not be prohibited by the government or the manufacturer. So I'm all in favor of making compromises, if you have to, to produce the most good that you possibly can in the circumstances in which you are.

But doesn't that then mean individuals should hold on to some moral theory that may be quite far from utilitarianism?
Well, it's not just a compromise. You need to be very intuition-driven, non-utilitarian, just to get people to trust you, to work with you, to cooperate, and in that sense, at the obligation level, you're not so utilitarian at all.

Yeah, maybe. That will depend on your own nature, as to whether you think you're going to be led astray if you're not intuition-driven. Or you may think that you can be, you know, self-aware about the risks that you're going to go wrong, and you're not exactly intuition-driven, but you're driven by the thought that "I could be mistaken here, and it's probably, you know, going to have more value if I don't just directly think about how to produce the most utility."

Let me continue with a number of the easy questions. If you take current AIs, large language models, you would agree they're not sentient, they're not beings, right?

I would agree with that.

So they don't count.

I agree with that too.

So there's something about sentience that is essential for a being to count, but we actually know very little about consciousness. You can read philosophy of mind, you can look at neuroscience; it seems to me one of the most mysterious and baffling areas in all of human knowledge. We have introspection; I'm not sure how much to trust introspection. So does it worry you to be erecting a moral theory based on sentience, some notion of utility, happiness, when all of our scientific inroads toward that concept seem to be, at least for the moment, very, very badly flawed?

So they're incomplete, certainly. Are they flawed in the sense that would lead us astray in terms of making those decisions based on sentience? I'm not convinced of that.

But say you're trying to compare human animal welfare to non-human animal welfare. We have a sense many of these other animals are sentient; that's pretty much certain, they feel pain. But we don't know how to compare them to us. We don't really know even where in science to look for such a unit of comparison.
How do we know they don't just overwhelm our well-being, Derek Parfit-like conclusions? Doesn't one just become a radical agnostic, if all of our judgments rely on this utility thing, which to me is somewhat mysterious?

So I don't think you have to be completely agnostic about all these issues. I think there are some cases, and again I would instance factory farming, where we can be very confident that what we're doing is causing more pain and suffering than it's doing good to us. But that's not to say that there aren't a lot of other questions, including that question you asked about, you know, is it good that humans colonized North America, where it's harder to make those decisions, because they're involving comparisons. And some of these decisions are quite practical, too. For example, even within the animal movement, if you say, well, should we focus on trying to get people not to eat chicken, because chickens are so intensively farmed and there are so many of them (it takes, you know, less of a chicken to make a meal than it does of a cow), so should we do that? Then we're trying to compare the suffering of more chickens with fewer cows or fewer pigs. I agree those questions are still ones that we can't really get a grip on, but that doesn't mean that there's nothing we can do that is based on sentience, happiness, suffering, and so on.

If there are many intelligent and sentient space aliens, would that make the extinction of the human race less of a tragedy?
If the idea is that then these intelligent and sentient aliens might populate our planet, or that they're just out there?

They're just out there. The speed of light is too slow; we'll never catch them, vice versa.

So then the extinction of our species is still just the same loss, right? It's a loss of a certain amount of happiness, and the fact that it's less of a proportion of the happiness or well-being of sentience in the universe doesn't mean that it isn't the same loss.

So it's linear and separable?

Yes.

So with the notion of a universe empty of sentient and intelligent life, there's nothing special about the zero point?

No. I mean, as you say, we're assuming that the existence of some sentient life isn't going to regenerate and repopulate the universe. That's a reason why extinction is worse than the loss of most of our species, but otherwise, no.

What is the margin at which you wish to police nature? So I've argued, for instance, we should not subsidize carnivores per se (there may be some other reason to do so), but the idea "oh, we're going to introduce wolves back into this national park" should not be an especially desirable prospect. What's your view on that, and where should we stop?

Yes, that was an interesting, pioneering article that you wrote, I think "Policing Nature," and I tend to agree with it. So I think it's reasonable to raise a question about why we should reintroduce predators. And as you said, there may be effects on other animals and plants in the area where we're introducing them, but to do so just for its own sake, just because they were once here and were once part of the ecology, is not, to my view, a sufficient reason for introducing them, if we know that is going to increase the suffering of some prey animals.

But how much should we spend trying to thwart predators?
I think that's difficult, because again you would have to take into account the consequences of not having predators. What are you going to do with the prey population? Are they going to overpopulate and maybe starve, or destroy the environment for other sentient beings? So it's hard to say how much we should spend trying to thwart them. I think there are questions about reducing the suffering of wild animals that are easier than that, and that's a question that maybe at some stage we'll grapple with, when we've reduced the amount of suffering we inflict on animals generally, but it's nowhere near the top of the list for how to reduce animal suffering.

What do you think of the fairly common fear that if we mix the moralities of human beings and the moralities of nature, the moralities of nature will win out? Nature is so large and numerous and populous, and fierce; human beings are relatively small in number and fragile. And if the prevailing ethic becomes the ethic of nature, the blending is itself dangerous: human beings end up thinking, well, predation is just fine, it's the way of nature, and therefore they do terrible things to each other.

Is that what you meant by the moralities of nature? I wasn't sure what the phrase meant. Do you mean the morality that we attribute to nature?

Red in tooth and claw. If we think that's a matter that is our business, do we not end up with that morality trumping ours? We become subordinate to that morality, and a lot of very nasty people in history have actually cited nature: well, nature works this way, I'm just doing that, it's a part of nature, it's more or less okay. How do we avoid that series of moves?
Right, it's a bad argument, and we try and explain why it's a bad argument: that we don't want to follow nature, that the fact that nature does something is not something that we ought to imitate, but maybe in fact ought to combat. And of course we do combat nature in many ways. Maybe war between humans is part of nature, but nevertheless we regret it when wars break out; we try to have institutions to prevent wars breaking out. I think a lot of our activities are combating nature's way of doing things rather than regarding it as a model to follow.

But if humans are a part of nature, flat out, and if our optimal policing of nature leaves, you know, 99.999 percent of all predation in place (we just can't stop most of it), is it then so irrational to conclude, well, this predation must be okay, it's the natural state of the world, our optimal best outcome leaves 99.999 percent of it in place? How do we avoid that mindset?

I think we can avoid that mindset because, you know, if we don't have any option about leaving it in place, we just regret it. And I do regret the way nature works. I mean, I think it's a very powerful argument against the idea that this world was created by an omnipotent, omniscient, and omnibenevolent creator; that just seems to be impossible because of the way nature works. But, you know, that's the world we live in.

Given low fertility rates in virtually every wealthy nation, is there something self-defeating about secularism as a philosophy?

Is secularism responsible for the low fertility rates? I mean, I think, you know, Roman Catholic couples in some of these countries have birth rates that are as low as secular people's.

Well, these countries have all secularized, right? Israel is somewhat an exception; they're above replacement. I don't know of any other wealthy countries. It's not only secularism, but it's secularism plus birth control plus a number of other features of modernity. Does that mean this whole enterprise is just self-defeating?
I hope that it's not self-defeating, and I hope that birth rates will drop in less secular countries as well, or that those countries will become more secular. I think the project of secularism is sound and right, and I certainly hope that we won't see a world in which secular people have few children and religious people have more children, and therefore secularism disappears. I accept that there's a possibility that that could happen; I think that would be a very bad outcome.

But why be so loyal to a project that is so poor at producing utility? Babies being utility, right? Happiness, well-being, something.

Well, I think one reason for supporting secularism is that it seems to me to be true. I mean, I don't think that there is a divine being, so I can't simply say, let's adopt the idea of a divine being and fool ourselves that there is one. I would rather say, given that there isn't, let's find ways of having enough children to produce good long-term outcomes.

But you do have this piece, I think it's titled "Secrecy in Consequentialism," where you say a true consequentialist should be willing to entertain or even advocate ideas just because they will help the world. So why not move away from secularism, become a kind of religious Straussian? The Amish have more kids, we all need to have more kids, and that would be one of the, you know, false ideas that you would at least publicly embrace.

Yeah, the article "Secrecy in Consequentialism" (and I should acknowledge my co-author, Katarzyna de Lazari-Radek) is more about acting in certain ways that you keep secret, like when it might be right to lie but bad if your lie became known publicly, than it is about fostering general ideas like becoming religious. And I would think that there would be sufficient negative consequences to that particular idea that we wouldn't want to do it anyway.

Should abortion be legal or illegal, say, in Western countries?

I think abortion should be legal.
From a utilitarian point of view, which I would not myself apply to abortion, but why isn't it just better to have the babies? The benefit to the baby, the baby-to-be, seems to outweigh the costs to the family.

Well, for a start, not all abortions reduce the number of children who will be born. Often abortions terminate a pregnancy that is poorly timed, but the couple have a plan to have X children, and they will have X children.

But truly, on average, the number of babies will go down, right?

Possibly it will, on average, but those babies would be born to mothers who did not want to have them at that particular time. Maybe their circumstances were ones that would have made it difficult for them to bring up the child well, and so the child might be a less happy child, and the mother or the parents might also be much more stressed, because they had to rear a child at a time that wasn't suitable for them. So the fact that, yes, there'll be an extra being in the world, and we can hope that being is going to have a positive life, still doesn't mean that the abortion isn't maximizing utility.

But that's a lot of stress on the family to outweigh the value from a whole new life. And then that baby will become an adult, in turn have other children. If the optimal discount rate is low or zero, it's really a lot of future gain you're foregoing, right?

So, but I mean, you're assuming that having more children in the world today is a net positive, which may be the case, but there are also negatives to it, in terms of that child, and then adult, will continue to use products that consume energy and produce more greenhouse gases and deprive wildlife of habitat. So there's a whole lot more involved than just simply saying having another child is a plus.

Sure, but you don't say, well, once population falls enough due to low total fertility rates, well, then the time will come to ban abortion.
I've never met a person who made that calculation. They're either, you know, for or against abortion being legal. They don't say, "Oh, at current population levels... but in 80 years, get back to me, I'll change my mind," which suggests to me it's just not a utilitarian calculation.

I do think it would make a difference if the world were underpopulated. One question is whether you would make it illegal; another question is whether you would discourage it, or think that individual women who had abortions were doing something wrong. So those are, I think, relevant factors. But if population falls and you want to have more people, then I think there would be better ways of doing that than prohibiting abortion. You would give baby bonuses.

Oh, but do both, right? You don't have to do one or the other.

Well, but you're probably going to get enough of a population increase by doing the things that involve more conception rather than fewer pregnancy terminations.

Now, you're one of three co-editors of the Journal of Controversial Ideas, which I believe started in 2021. Is that correct?

I think we produced our first issue then, yes.

What have you learned doing that?
So I've learned that there is a need for our journal. Actually, I suppose I thought that when we began, but I wasn't really sure. But one of the interesting things is that we have published a number of interesting papers (not necessarily saying I agree with them, but papers that have worthwhile ideas that should be out there) which would not have got published otherwise, which were being rejected. And that happened in the most recent issue, which we just published at the end of April this year, with an article called "Merit in Science," which actually had 29 co-authors, I think two of them Nobel laureates, and was objecting to the fact that, as the authors claimed, positions, research jobs, and also research grants were not simply going to those with the best qualifications or the greatest merit in scientific terms, but were going also on the criteria of maximizing diversity and inclusion. And the authors of that paper, as I say, a distinguished group of scientists, submitted it to the Proceedings of the National Academy of Sciences and were told that they wouldn't publish it because it might be harmful to some people. And I think they shopped it around to another couple of journals before somebody suggested they send it to us, and, you know, it was published in the Journal of Controversial Ideas. It was written about in the Wall Street Journal and the New York Times. We've had a hundred thousand views in the month since we published that issue, which for a peer-reviewed academic journal is pretty unusual.

And it's open access, right?

It is free, open access, that's right. We are supported by donors (anybody who would like to donate to us, please do so), but yeah, we're managing.

If you look only at submissions and not acceptances, what's the most common topic you see crossing the desk of the journal at the moment?

I would say it's transgender issues, because there are so few outlets where you can speak your mind, or there's a lot of hostility about it. So there are two things about the journal. One is, as I say,
that we are prepared to publish controversial ideas that other places are not prepared to publish; the other is that we're prepared to publish under a pseudonym if the authors don't wish to be identified with the article. Roughly, I'd say, in about a third of the articles in each issue the authors prefer to publish under a pseudonym, and in the area of transgender studies that's because many academics have been severely abused and harassed and their lives made quite difficult if they have published things that were seen as transphobic, although I think that term is used far too broadly. We could simply say they were not accepting the idea that a person's identification as a gender is necessarily the last word as to what their gender is.

Do you worry that with AI everyone will just be identified? Oh, whose writing style is this, and it will tell you. That's a possibility. Actually, I must admit that's one I hadn't thought of. It could be a worry. I don't know, maybe we'll get an AI that will be able to mix people's writing styles so that they're harder to identify.

Do you ever reject good or potentially good pieces just because they're not controversial enough? Yes, and we say that in our call for papers, that articles must be controversial in some sense, and we tell our reviewers that. Occasionally they write back and say, yes, this is a reasonable article, but I don't see why it couldn't get published in any other journal, so we don't publish it then.

Organizationally, institutionally, putting aside philosophy, what do you think of the current effective altruist movement? So I think it's making progress. I think it's a good forum for ideas, and I think it's had a significant influence, so on the whole I view it positively, which is not to say that I view every aspect of it positively.

Let's say you were called in to give advice, and probably you have been. What do you tell them they should do differently? I've had concerns about the extent to which effective altruism has moved in the
direction of very long-term thinking about the future, so thinking not just about the present or even the next century or two, but thinking about the next million years, or, as Will MacAskill has talked about, the next billion years. I understand why he's doing that: he's talking about the possible loss that could exist if we become extinct, and if in fact there is no other intelligent life in this corner of the universe, so we're not replaced by others. So yes, I can understand why he wants to emphasize the importance of preventing extinction, but there are a number of concerns. One is uncertainty about whether we can actually make a positive difference in this direction, and also whether, if you encourage people to ignore present suffering, you're actually going to have a long-term drawback in that people may become more callous, and that may actually contribute to making the world a worse place. So my advice is not to forget about the present and to continue to have a really major focus on things like reducing extreme poverty, reducing animal suffering, and protecting the environment from climate change right now. I'd like to see effective altruism with more of a focus, not an exclusive focus, but more of a focus on those issues.

Is there too much emphasis on existential risk from AGI, in your opinion? I'm not an expert on that risk, but yes, I think there is too much of an emphasis, and I think perhaps that has something to do with a lot of the people in EA being people who like these kinds of problems. You know, how are you going to align a superintelligent AGI with human values? That's a really interesting problem, and in some ways it's a more interesting problem than how you are going to reduce the suffering of animals in factory farms, or even how you are going to help people in extreme poverty. So I think that's perhaps why there's been more of a tendency to talk about that and focus on it than is really justified.

Not everyone knows you've
written a book on Hegel, who is not a utilitarian, right? Not necessarily, no, not a utilitarian. Let me just say, firstly, that book was written a long time ago, I think in the early 1980s. Secondly, it's a very slim book, a hundred-page book for what was then called the Past Masters series by Oxford University Press. But it's a good book, now a Very Short Introduction; let me make a point of adding that. Thank you, I'm glad you think it's a good book. I think what I learned from Hegel was that the nature of society at a time, including its economic interests, does influence people's ideas. Of course, that's an idea that a lot of people will associate with Marx, but Marx really, I think, took that from Hegel, so I'm going to say that that's an insight that I learned from Hegel rather than from Marx.

If you today were to write a book about some other philosopher, you know, you're granted the free time to do it miraculously, who would that philosopher be? Well, I did, again with Katarzyna de Lazari-Radek, write a book about Sidgwick. And Marx too? Yes, I have written a very short book about Marx as well. But I'm still interested in the English utilitarians, and I'm interested in the roots of utilitarianism as it goes back further, so I'm not sure. David Hume, certainly a favorite philosopher and in some sense an early utilitarian, I suppose, would certainly be a candidate if I had the free time to write.

Where did Parfit go wrong? So the final two-volume set, to me it seems like a mess intellectually. What happened there? It's a three-volume set. Now three, okay, yes. You think the whole three-volume set is a mess, or you think the third volume is a mess? It's often interesting, but the whole thing to me seems a mess. The project is too hard, he's not thinking marginally enough, and to make all of consequentialism and maybe Kantianism compatible, I just don't think it can be done. I like
the fact that you think the project is too hard, and you started off by asking me about whether it was a good thing that humans settled North America. I see the project as making a case for the idea that there are objective values, that things matter objectively, and I think it makes a significant contribution to that project, so I don't think of it as a failure. There are some parts of it that may be a failure. Maybe the attempt to reconcile the three theories, a form of consequentialism, a form of Kantianism, and a form of contractualism, maybe that's a failure. Certainly the proponents of those other theories, the non-consequentialist theories, don't really approve of it, and in fact I don't like the form of consequentialism that he ends up with as part of the triple theory. I talked to him about that, and the third volume to some extent grapples with it. I edited this collection of papers called Does Anything Really Matter?, which Parfit was originally going to reply to in the volume, but then his reply, characteristically for Parfit, grew so long that it had to be a third volume, and that's really why there's a third volume. So I think that there are problems with what he says about consequentialism. He doesn't really address act consequentialism; it's rule consequentialism that he's reconciling with the others. So I don't think he succeeds in producing a reconciliation between the major ethical theories; I think that part of it is a failure.

Why did he fail? I would say he became so committed to the idea that he had to show that morality is objective, that otherwise nihilism is true, nothing matters, and his whole life would be a waste, that this influenced his acceptance of arguments that otherwise he would not have accepted.

Who is an underrated philosopher that we should be reading more, talking about more, thinking about more? Well, I used to say that Sidgwick is an
underrated philosopher, because he's the best of the 19th-century utilitarians, and he certainly was neglected. That's why Katarzyna and I wrote The Point of View of the Universe, to try to show that he is a great philosopher and that his ethics is still relevant. But other people like Parfit have also praised him, so I'm not sure to what extent he's still neglected.

Say someone active today whose work you admire, maybe not even in your areas. Okay, as a younger philosopher, and I must admit he was one of my students, I greatly admire Richard Chappell. He's somebody who has a fairly popular Substack blog now, has written a little book on Parfit, in fact, whom we were just talking about, and does a lot of interesting work.

I'm a big fan of your book Pushing Time Away: My Grandfather and the Tragedy of Jewish Vienna. A few questions in that direction. Should we respect the wishes of the dead? Yes, that's a good question, and I had that feeling in writing this book. For those who don't know, this book is about my maternal grandfather, who was a victim of the Holocaust. I was born after the Second World War, so of course I never knew him. He left a lot of papers, a lot of writings, and I wanted to read them, and in a sense I suppose I thought, in reading them and then subsequently when I decided to write about him, that I was bringing him back and undoing the terrible crime that the Nazis committed against him and so many others. But did I really think that I had an obligation to do that because he might have wished it? I toyed with that idea, but I never fully convinced myself of it, and never fully rejected it either. Perhaps that's true. You know, I've held different forms of utilitarianism throughout my career, and for quite a long time I was a preference utilitarian, and a preference utilitarian thinks that the good you should maximize is the satisfaction of preferences. There's a question then about
whether the preferences of the dead count, and my supervisor at Oxford, R. M. Hare, thought that they did. He thought, in fact, that if there was some ancient Roman and you stumbled across his tombstone, and engraved on it was "I want an oil lamp to be burning on my gravestone forever," that gave you a reason, not necessarily an overriding reason, to put an oil lamp on his gravestone and to watch that it kept burning. I wasn't totally convinced by that, I must admit. But as population shrinks and the past accumulates, we're going to be in big trouble, right? Yes, that's right, because we'll all have to be looking after the preferences of all of those in the past. But anyway, I'm now a hedonistic utilitarian, so on that view, no, the only reason to pay attention to the wishes of the dead is that the living will have more confidence that their wishes will be respected.

Referring especially to early 20th-century Vienna, during times of great cultural achievement, are most thinkers utilitarians? I would say no, but I want to hear your view. I haven't actually thought about Vienna's heyday. No, certainly they were not utilitarians, but then utilitarianism is a relatively new theory, and in terms of spreading to non-English-speaking cultures, that's happened much later too, so it's not surprising that people were not utilitarians in late 19th and early 20th-century Vienna. But they knew enough, they knew Bentham and Hume, and they knew consequentialism in some form. They could have been; it wasn't some mysterious idea they'd never heard of. Isn't this like the driverless car problem, where you actually want a society of non-utilitarians? I don't know, do you? I mean, again, yes, there were some wonderful works of culture and art produced, but the working class had a very rough life, and maybe they would have had a better life if there'd been more utilitarian thinking.
If I try to think today which government in the world is most utilitarian, not globally, not for animals, but just for its own citizens, I would tend to think that's Singapore. Does that make sense to you? It's quite possible. I think the Australian governments are actually reasonably utilitarian, and I'm more familiar with them than I am with the government of Singapore, but Singapore is a strong candidate. If I think of Singapore, it's a big success, people are quite well off, they have more freedom than a lot of outsiders like to admit, but it seems not that happy to me relative to per capita income, whereas Australia relative to income seems really quite happy to me. Are those consistent with your impressions? Maybe there are things that utilitarianism has no control over, like a better climate; it's not hot and sticky and humid like it is in Singapore all the time, and Australia has great beaches. Natural assets will play a role in how happy the population is.

On the book about your grandfather: at the very last page you say that justice was done to the Nazi leaders, and I fully agree, but for a pure utilitarian, what does that mean? I think we want to punish people who do great evil as a way of indicating that this was a terrible thing that they did; it's got an expressivist function, a kind of ultimate condemnation that people should learn from, be educated by, and also a deterrent function, if other people would otherwise think that they're going to get away with that. But in terms of just downright justice, flat-out right and wrong. I consider myself a two-thirds utilitarian, but not completely; you're more than two-thirds. Can I pull you down to two-thirds and bring in justice for these extreme cases, where you can just point and say that was wrong? I think we all have that intuition; it's not that I don't have any retributive intuitions.
I don't want retribution; I just want to be able to say it's wrong without having to count up the utils. Sorry, to say that what they did is wrong, and that they should be punished, for sure, but it's not that I want retribution per se, as you noted; there are other reasons to punish them. Yes, so I'm happy to say that what they did was wrong and they should be punished for it, and that at a level that is compatible with utilitarianism but also compatible with a lot of other views. Is that enough for you? But is there some non-utilitarian reason that you're willing to call your own and let in the door, and if so, how do you stop that reason from growing? Yes, so at a theoretical, philosophical level I'm going to say no, it's all explicable in terms of utility, but as I say, that doesn't mean that I don't have non-utilitarian intuitions about it.

What do you think of Freud as a philosopher? I don't think Freud was a great philosopher. I think that if he were a great philosopher, he would have been more open to different ideas, and clearly he was highly authoritarian in his thinking. He had a set of ideas and wasn't really prepared to brook opposition to them, and that's why my grandfather broke with him, along with Alfred Adler. When the split between Freud and Adler occurred, my grandfather, this is what my mother and aunt told me, acknowledged that Freud was in some sense the greater genius, but just really didn't like the way he behaved and how authoritarian he was, and therefore sided with Adler. I think if Freud had been more open to ideas, he wouldn't have treated dissent, whether from Adlerians or in other forms, in the way he did.

Why does so much of professional philosophy today seem so boring? It's my subjective opinion, but I meet many philosophers who to me don't seem philosophical at all. Not at all the case with you or the other guests we've had on the show, but what has gone wrong, or do you challenge my premise?
No, I don't really challenge your premise, although I do think there's a lot of good work being done in philosophy. But there's obviously immense competition for jobs, particularly for tenured jobs, and your case for appointment or for tenure is going to be reviewed by your peers, other philosophers, and they are going to look for the kind of work that gets published in highly rated journals, which is going to tend to be work that reviews, criticizes, or builds on work that is already going on in those journals. So I think it's difficult for young people to break away and say, here's a different area that I want to work in, or here's something broader, less narrow, that I want to work in. We tend to get articles which say, well, here's a theory that some philosopher holds, I've got an objection to one part of this theory, and maybe I've got an alternative to that. So in a sense, instead of aiming for the lifeblood of the subject, it's aiming for the capillaries of the subject; it's the smaller things that you can get right and get articles published in good journals with.

How would you reform that? I think you would really have to change the system of making appointments and tenuring people. You would, but what would you replace the status quo with? The dean does it all, the students vote, there's no tenure whatsoever; there are a lot of options. Yes, there are a lot of options. I'm reluctant to get rid of tenure; it has been a protection for people who have different and controversial ideas. But it's not so much anymore, right? Given that you started this journal, which I'm glad you did, surely you must think tenure is not enough now. Yes, it's not enough, but I worry that people with controversial ideas would simply get fired by deans when, you know, people on Twitter start criticizing them. I've really been disappointed, in some of these recent controversies where people have been criticized for
saying things, by the extent to which university administrations have not stood up to the Twitter storm and have instead suspended people, or in some cases, for those who were not on tenure, dismissed or not reappointed them, for controversial ideas. I think tenure has provided protection for some people, and that's why I'm still reluctant to get rid of it. But I'm not sure; maybe giving students more of a role in appointments would be a reasonable factor, because they would be interested in things that the professional philosophers might not see as part of philosophy.

For how many years have you taught at Princeton now? 23, 24. How have your students changed over that time? What do you notice? It's actually got harder to get them to do the reading, I think. When I first came to Princeton, I was very pleasantly surprised: you could set students reading and you would find that when you talked to them about it, they nearly all had read it. I think they're reading less now, and that's disappointing, so that's one difference. Otherwise, I've always found that there's a reasonable level of idealism. I think that fluctuates, but I find among American students, and again this is particularly a Princeton experience, there are some who just want to get through the course and get a decent grade and get on with the degree, but there are quite a few who come to philosophy courses, and particularly ethics courses like mine, really wanting to think about what they're going to do with their lives, what their ultimate values are, how they are going to live. I find that refreshing, and I don't think that's changed dramatically in any direction, either up or down, over the 24 years I've been at Princeton.

Last three questions. First: as a mental state utilitarian, surely you're concerned with happiness. What advice might you give us today to enjoy our own lives better? Oh, I am very concerned with happiness, and the advice I would give
is to think about making your life fulfilling and meaningful, not to think that it's just about earning more money, buying more consumer goods, having a richer lifestyle, but about contributing to making the world a better place. This is obviously the goal of effective altruism, but there's a lot of psychology research showing that people who are generous, and whose lives are in harmony with their values, do actually find their lives more rewarding and more satisfying.

In the spirit of your new book, Animal Liberation Now, what are three things we can do to help other sentient beings? Well, firstly, we can stop eating non-human animals, and particularly, I would say, stop eating animal products from factory farms; that seems really important. Secondly, we can support the organizations that are trying to combat factory farming, and if you want to find the best ones, you can go to Animal Charity Evaluators, which is like an effective altruism site for animal charities. Thirdly, you can get political about this: you can tell your political representatives at all levels of government that animals matter and that you're going to be more likely to vote for people who have strong policies on animal welfare.

Final question: after your book tour is over, what will you do next? Oh, I'm going to relax a little bit from what I've been doing. I'm going to be teaching my final semester at Princeton, because I'm retiring from Princeton at the end of the fall semester, and I'll want to put some time into doing that. Then I'm going to stop and think: are there still other things that I'm really keen to write and do? I'm sure I'm going to be speaking and doing short-form writing about the issues that are important to me. Do I still have a major book in me? At this stage, I must admit I'm not sure what that would be, but I've been working hard on Animal Liberation Now and more recently been trying to promote and publicize it, so I think I need to
take a break and take stock and think. Peter Singer, thank you very much. Thank you, Tyler, it's been terrific talking to you again.