So welcome to this breakout session on defining sentient joy. My name is Ben Romney, and I'm presenting this based on a paper I wrote called "A Gradient Rubric for Human and Non-Human Utility." You can find it on my website at benromney.com/ethicspaper.pdf. As a little background about myself, I'm a senior software engineer at Qualtrics; that's my day job. By night, I'm a dabbling moral philosopher. So thanks for giving audience to some of my ideas today.

All right. When we imagine our planet's ideal future, we must not restrict our circle of concern to the human species alone. I read a recent estimate that there are 20 quintillion (20 billion billion) animals sharing the planet with us. Many of those are insects, but several billion are mammals just like us. Then there are also plants, which were brought up in the chat during David Pearce's discussion, and which may have moral relevance if they have some capacity to experience happiness. And robots: who knows whether our current robots can experience happiness, but if they can, there's definitely moral relevance there. So given how many life forms there are, there's tremendous opportunity for ethical progress in this sphere. With humans, people usually align with Jeremy Bentham's dictum: everyone to count for one, nobody to count for more than one. But there's been less conversation about how best to quantify the value of a non-human life. For example, does the life and happiness of a gorilla count as much as that of a human? How would it compare to that of a chicken, or an ant? Identifying the relevant parameters, and then building a rubric that counts utility accordingly, would enable us to prioritize our efforts as we try to make the world a better place for all its sentient beings and to live more ethically than we do now.
So my thesis is this: the maximization of global happiness requires a utility function that counts both human and non-human beings. To this end, I propose a gradient rubric by which we may approximate various life forms' capacity for happiness. Throughout this presentation we'll hit three points: first, the importance of estimating the capacity for happiness, as opposed to intelligence or other attributes, because I think the capacity for happiness is the most important thing to estimate; second, a proposal of a rubric that takes in several parameters and outputs a score of an organism's capacity for happiness; and third, the rubric's implications.

I take the utilitarian position that the maximization of global happiness is the ultimate good, and often the most efficient way to improve global happiness is to alleviate suffering where it exists. Some people might ask: why not intelligence? Many people justify keeping animals in factory farms on the grounds that we're more intelligent than they are. If you take that attribute as your guiding factor, remember that Google DeepMind's AlphaGo recently defeated the best human Go player in the world, and could be considered more intelligent than humans, at least in that narrow sense. In the coming decades we'll certainly see AI that can run the gamut of human abilities and be more intelligent generally. But it's unlikely that such systems will have the capacity to experience happiness and suffering. We certainly hope to be able to find out whether they can, because that's an important thing to know. But I think it's unlikely that they can feel happiness as much as humans or other animals can, at least at this point.
So if intelligence is our guiding attribute, we should have no qualms about allowing superintelligent machines to treat humans the way we have treated non-human animals. For moral consistency, it might be wise to follow David Pearce's recommendations from the previous discussion.

From a historical perspective, happiness and suffering are inextricably tied to life on planet Earth. At the dawn of life four billion years ago, when the first single-celled organisms emerged, they were already driven to move toward particles that were helpful and away from particles that were harmful. Those primitive precursors of pleasure and pain were foundational to what it means to live, at least on this planet, and that gives some insight into how important these two emotions, happiness and suffering, really are.

The main problem with holding the capacity for happiness as our guiding star when assigning moral weight to different entities is that it's difficult to measure. Consider the philosophical zombie problem: you can't know with any certainty that I am conscious. I might just be a zombie that does and says all the things a human would do and say, while inside, the lights aren't on. You can reasonably assume that since I appear to operate the same way you do, I'm sentient just like you are. But in reality, the only thing you know is that you yourself are conscious. Similarly, we can make assumptions that other people can experience happiness and suffering, that other animals can, and perhaps plants and robots too, and we can use our observations, along with a few other parameters, to inform our calculation of utility. So with that, here's the utility function I propose: utility = O + N + S + M + P + E.
We'll go over each of these six parameters. They stand for observed emotional behavior, neuron count, self-awareness, memory capacity, potential, and external utility. These six attributes are more measurable than consciousness itself; they're less hard problems to solve. When we add them up, we get a maximum utility score of 100. So the Buddha would score 100, and, I don't know, a grain of rice would score zero. That's the utility function, and I'll go through each of the parameters.

First, observed emotional behavior. If we want to understand another human, the first thing we do is observe their behavior: vocal sounds, facial expressions, bodily movements, and so on. We can do similar things with animals, plants, and robots. We can ask two questions: to what degree does the entity respond to pain, and to what degree does the entity seek out pleasure? It's sometimes remarkable how happy a puppy can be. And if you've ever been to a factory farm or a slaughterhouse (I actually did some research at one during university), you know it can evoke a powerful emotion at the other end of the spectrum. You can assign a maximum of 15 points to each of those two questions, for 30 points total in this category.

Next, neuron count. Humans have 86 billion neurons. We know that computational power increases as the number of transistors on a microchip increases; similarly, we could assume that as the number of neurons in a brain increases, the amount of consciousness, or the capacity to experience happiness, also increases. As a thought experiment, imagine removing one neuron at a time from a brain: by the time you get to zero, consciousness would almost certainly have ceased to exist, so clearly there is some tie between neuron count and consciousness. An African elephant has 257 billion neurons, about three times that of a human.
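As a minimal sketch of how this category's scoring could work in code, here is the neuron-count scaling in Python. I'm assuming a simple linear scale capped at 20 points, with the divisor of 12.85 billion chosen so that the elephant's 257 billion neurons land exactly on the cap; the function name is my own.

```python
def neuron_score(neuron_count, cap=20.0, divisor=12.85e9):
    """Linear neuron-count score, capped at 20 points.

    The divisor is chosen so that the African elephant's
    257 billion neurons map exactly to the 20-point cap.
    """
    return min(cap, neuron_count / divisor)

print(neuron_score(257e9))           # African elephant -> 20.0
print(round(neuron_score(86e9), 1))  # human (86 billion neurons) -> 6.7
```

Anything at or above the elephant's count simply saturates at 20; a later revision that weights cortical neurons more heavily would replace the single divisor with per-region weights.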
To normalize this category to its maximum value of 20, we divide the neuron count by 12.85 billion, so the elephant's 257 billion neurons score exactly 20. It's important to note that I assume here that all neurons are equal; in further iterations of this rubric, we might want to weight cerebral cortex neurons, or other neuron types, more heavily.

All right, self-awareness. It would be difficult to identify with pain or pleasure if you didn't have a sense of self, at least to some extent. One way to measure that is the mirror test: you place a dot on the forehead of an animal and put it in front of a mirror, and if it tries to scratch off the dot, we know it recognizes itself. Humans, dolphins, killer whales, bonobos, chimpanzees, Asian elephants, magpies, pigeons, ants, and at least one species of fish have passed this test. Cats and dogs, interestingly enough, cannot recognize themselves in a mirror. It's also important to note that a blind human wouldn't recognize themselves in a mirror either, so this test isn't sufficient to quantify self-awareness completely. Hopefully we'll develop other tests, and in the end they'll add up to a possible 20 points in this category.

Memory capacity. John Locke argued that the essential thing for personal identity is a capacity for memories that connect a person to their past self. For humans, most of the happiness and suffering we experience is actually in relation to things that have happened to us in the past. If I don't remember something painful that happened to me years ago, it's less relevant than something I remember every day and that sticks with me. So extra care should be taken with entities that have high memory capacity. The maximum score for this category is 15.

Next, potential: the ability to become an entity with a high capacity for happiness in the future. This one is a little less direct; it's relevant over time.
It's important because without this criterion, there would be little ground for valuing the life of a newborn baby, who doesn't have memory capacity or self-awareness to the extent of an adult, but who still earns points in this category because of potential. Similarly, an adult human in a permanent coma may have a lower score for potential. The maximum here is 10 points.

External utility is the final parameter. This one is important for endangered species: it brings the rest of the world a little more sadness when the last member of an endangered species passes away. It's also important for people in comas, like we talked about in the last section. They might not be experiencing happiness in their coma, but their continued existence brings happiness to their family, so there's an external benefit to other individuals' happiness. This is the only parameter with a possible negative value: mosquitoes carrying malaria spread suffering, and criminals may also have negative utility in this category. That's one thing to note.

This is the final rubric with the maximum point values. It's important to note that this is just a model; other moral thinkers may assign points differently or add other categories. I just intend for this to lay the groundwork for the discussion, and the details can be refined as they come to light. We'll have time for some questions at the end.

Here are a few selected examples: a human scores 86, a gorilla 65, a cow 54. One thing to note is that this is meant to be used at the individual level, not the species level, so it's entirely possible that certain gorillas might score higher than certain humans. That might not sit easily with some people, but it's a possibility this rubric leaves open.
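The whole rubric can be sketched as one small scoring function. This is just my reading of the talk's point allocations, not a definitive implementation: observed behavior caps at 30, neuron count at 20, self-awareness at 20, memory at 15, and potential at 10; since those sum to 95 and the total is stated to be 100, I'm inferring a cap of 5 for external utility (the talk doesn't state its maximum). External utility is the only component allowed to go negative. All names and the example component values are illustrative.

```python
# Per-category caps. E's cap of 5 is inferred from the stated 100-point
# total (30 + 20 + 20 + 15 + 10 + 5 = 100); the talk doesn't give it.
CAPS = {"O": 30, "N": 20, "S": 20, "M": 15, "P": 10, "E": 5}

def utility(O, N, S, M, P, E):
    """U = O + N + S + M + P + E, each component clamped to its cap.

    O: observed emotional behavior   N: neuron-count score
    S: self-awareness                M: memory capacity
    P: potential                     E: external utility (may be negative)
    """
    scores = {"O": O, "N": N, "S": S, "M": M, "P": P, "E": E}
    total = 0.0
    for key, value in scores.items():
        clamped = min(value, CAPS[key])
        if key != "E":                 # only external utility may dip below zero
            clamped = max(0.0, clamped)
        total += clamped
    return total

# Hypothetical component values that land near the talk's human example (86):
print(round(utility(O=29.3, N=6.7, S=20, M=15, P=10, E=5), 1))  # -> 86.0
```

The clamping means a Buddha-like entity maxing every category scores exactly 100, while a disease vector or dangerous criminal can be dragged below its behavioral score by a negative E.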
And it upends Bentham's dictum, "everyone to count for one." Bentham was never clear on his scope within the animal kingdom, so this rubric does justice to the elephant in the room, and to all the other species as well. An ant, I think, would land somewhere around a score of 5 to 10, and robots today, maybe a Roomba, might be somewhere around 1 or 2. I should say I didn't run any studies to gather these scores; a lot of them are just best guesses at this point, nothing peer-reviewed. These are estimates. Neuron count is the only parameter where I can say I'm pretty confident I have the right numbers. But those are a few examples of what happens when you crunch numbers through the equation.

And so, applications. I leave it up to the reader to draw their own lines with the scores. Some readers may decide not to eat anything with a utility score above 40, or maybe not to squish with tissue paper anything that has a positive score. There are lots of different applications. And it's important to have an objective measure now that we're encoding these algorithms into our everyday lives. For example, with self-driving cars and the trolley problem, we would hope the engineers at Tesla have thought this through and don't just use a random number generator when they decide how a car will react in an emergency situation. Also, looking at human and animal rights over the centuries, it's clear that we are improving and extending our circle to other species; for example, in 1971 the US outlawed horse consumption. Cows and pigs have scores similar to horses, so for moral consistency, it would make sense that in the near future we would ban the slaughter of those similarly scoring animals for food consumption as well.
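The "draw your own lines" idea amounts to simple threshold policies over the scores. Here is a sketch using the example scores from the talk (human 86, gorilla 65, cow 54) plus my own guessed values for the ant and the Roomba from the rough 5-to-10 and 1-to-2 ranges; the policy functions and the threshold of 40 are just the talk's hypothetical examples, not recommendations.

```python
# Example scores from the talk; ant and roomba values are rough guesses.
EXAMPLE_SCORES = {"human": 86, "gorilla": 65, "cow": 54, "ant": 7, "roomba": 1.5}

def edible(entity, scores, threshold=40):
    """One reader's line: refuse to eat anything scoring above the threshold."""
    return scores[entity] <= threshold

def squishable(entity, scores):
    """Another reader's line: never squish anything with a positive score."""
    return scores[entity] <= 0

print([e for e in EXAMPLE_SCORES if edible(e, EXAMPLE_SCORES)])  # -> ['ant', 'roomba']
```

Different readers would simply plug in different thresholds; the point is that once scores exist, a line becomes an explicit, auditable parameter rather than an unstated intuition, which matters when the policy is baked into something like a car's emergency-response code.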
And as we continue to develop more accurate models, we will continue to make the world a better place for all sentient beings. So that's it for the slides. I already have a few questions; feel free to add more to the chat.

Okay, this one's from Caleb Jones: what are your thoughts about applying this rubric to superorganisms, like ant colonies, biomes, and ecosystems? Yeah, so, some of the biggest organisms: I think the largest is a fungus in Oregon, and there's also an aspen grove here in Utah that is quite extensive. And then ant colonies, yes. I do think there can be some shared consciousness between entities. Again, that's really hard to measure, and we'd have to adjust this rubric. Like I said, it isn't a perfect rubric, but one thing we could maybe do is count each ant's neurons, or perhaps treat each aspen root as a neuron, and add those up for the colony or the grove. So yes, I do think those would be important to include.

Thanks, everyone. I'm sorry I didn't leave a little more time, but I hope that was interesting and helpful for everyone. Okay, thanks.