Ryan Reynolds' newest movie, Free Guy, was fine. On the surface, it's a lighthearted story about a decent guy in a crazy world who gets through the daily grind with the help of his best buddy. If you like what Ryan Reynolds does, and I generally do, you'll probably enjoy the movie. Plus, there are a few scene-stealing cameos by people like Channing Tatum that I think are worth the price of admission. And yet, the whole time I was sitting in the theater, I couldn't help but feel like there was something wrong. And the more I thought about it, the more that feeling grew. It took me a bit to figure it out, but I eventually realized that what bothered me was not the movie itself, but the incredibly dark irony of the fact that the Disney Corporation made a film where the central villain is, well, Disney. And they did this without a single shred of self-awareness. Seriously, we're supposed to hate the antagonist because he bought up another developer's IP and then broke his contract with them while inserting their work into his game because it was easier than coming up with something original. Maybe if some other studio had made the film, I'd have liked it more. But in all honesty, it just didn't do it for me. However, Free Guy does raise at least one really interesting question that I do want to talk about. And it's something we've never actually explored on this series before. So get ready because we're going to take a look at the meaning of human rights in a world with genuine artificial intelligence on this short edition of Out of Frame. For those of you who haven't seen Free Guy, don't worry. I'm not going to spoil anything that would ruin your experience. You just need to know the basic premise. Reynolds' character, Guy, is a non-playable character in a game that's basically a knockoff of GTA Vice City. He's literally operating on a programmed loop where every day is pretty much the same. Breakfast, work, armed robbery, you know how it goes. 
The cycle repeats until Guy randomly runs into a player named Millie. Their chance encounter somehow jump-starts his underlying consciousness and he gains the ability to make his own choices. In other words, he gains agency and becomes the world's first true artificial intelligence. After a while, Guy becomes famous for going around doing nice things for people and everyone in the real world assumes he's a player. Everyone except Antoine, the head of the game development studio that made Free City, who wants to shut down the game so he can compel his customers to buy the sequel. But what Antoine doesn't know is that Guy is self-aware: he learns and changes and grows. The question is, does that actually make him sentient? Does it make him a person? And most importantly, does his individuality imbue him with the right to life? Free Guy would like us to think so. I'm not so sure. Let's think this through a little more. But first, there are a few ideas we need to clarify. One, rights are a philosophical concept meant to help us define the ethical limits of human behavior. What types of interactions with other people are and are not acceptable? Two, rights are predicated on the idea of moral agency. If we're going to make any kind of ethical claims about which behaviors are right and wrong, people must be able to choose how to act and to be held accountable for their actions. Three, when I say person, I'm referring to an individual moral agent that is understood to have rights. Another thing everybody should understand is that rights, properly defined, are distinct from privileges or entitlements. It's not about demanding free stuff at other people's expense. It's about how we act. For example, we should all have legitimate moral expectations not to be enslaved, assaulted, murdered, or otherwise harmed or coerced by other people. We should expect not to be imprisoned without cause. We should all be secure in our property, free from theft and intentional destruction. 
Of course, the necessary corollary to all those expectations is the responsibility to uphold those same rights for everyone else. Functioning societies all over the world depend on people and institutions respecting individual rights. And the more universally these principles are upheld in a given society, the better off everyone in that part of the world tends to be. Likewise, the less a nation's institutions respect individual rights, the poorer and more volatile its society tends to be. If you want a deeper exploration of individual rights, check out the episode I did on Zack Snyder's Justice League. But for right now, the important thing is just to understand that rights are very important and also that the concept is essentially unique to human beings. However, the discovery of genuine artificial intelligence could totally change our conception of moral agency by effectively introducing us to another species that could understand, expect, and respect universal concepts surrounding morally permissible interaction. But does an artificially intelligent entity deserve the same rights as human beings? I think that's a really interesting question, or at least it would be if Free Guy ever bothered to explore it in depth. I mean sure, we root for Guy because he's played by Ryan Reynolds and everything about the game world seems real. We also want to see him succeed and survive simply to stick it to Antoine, both because he's an arrogant buffoon and because of the convoluted and largely absurd contract dispute he's involved in with Millie and her former partner Keys. But the movie never really grapples with the implications of its own ideas. I'm not saying that Free Guy had to get into a deep philosophical discussion about artificial intelligence, but I will say that I'm far more likely to frequently rewatch and think about a number of much better science fiction films or TV shows that do. 
Like Ex Machina, Archive, Westworld, Raised by Wolves, Ghost in the Shell, Her, or Blade Runner (the original or 2049). And what makes these movies and TV shows better at exploring what makes a person a person is not that they are serious, gritty dramas and Free Guy is an action comedy. If Pixar can deal with big questions about the concept of death and the nature of the soul, a feel-good romp can certainly explore the idea of when, if ever, a non-human intelligence might cross that ephemeral line. Am I a real boy? I think what makes stuff like Westworld, Blade Runner, and Ex Machina so much more effective is that they know which questions to ask. Starting with this: what even is artificial intelligence? The dictionary definition, intelligence exhibited by an artificial entity, isn't exactly illuminating. Fortunately, the history and theory of AI is long and fascinating. IBM, which developed the famed Watson computer, describes it as machines that can mimic the problem-solving and decision-making abilities of the human mind. They also go on to describe the difference between weak and strong AI. Weak AI is basically what we have now, digital assistant-type programs like the S-word or the A-word, or self-driving cars. Strong AI doesn't exist yet, but it refers to the kinds of self-aware, robotic, or digital intelligence showcased in the movies I've mentioned. Strong AI, then, would have the ability to pass what's called the Turing Test. Created in 1950 by the famous computer scientist and mathematician Alan Turing, the test's rules are straightforward. If a machine can carry on a conversation with a human, without the human realizing that the other party is a machine, it has passed the test and is considered intelligent. We see this test frequently applied and passed in these kinds of stories. For example, when Caleb interviews Ava in Ex Machina, or in how Bernard is able to pass for a human being in Westworld. 
But even if passing a Turing test were enough to determine some form of personhood, it is woefully insufficient to establish moral agency, let alone the capacity for moral responsibility, which is a trait severely lacking in Ava, Dolores, Maeve, and a ton of other synthetic characters in fiction. On the other hand, there are also beautiful and alluring characters, like Samantha from Her, or Joi from Blade Runner 2049. Joi asserts agency and empathy, and it actually does seem like she's on the verge of what I would consider to be true consciousness. Her destruction is one of the more moving deaths I've seen in any movie, but even so, that doesn't necessarily mean we should consider it murder to shut down her program. Still, there is clearly some level of artificial self-actualization that should be enough for an AI to be considered a full person, understood to have the same rights as anyone else. So, where is that line? Well, that's a difficult question, and perhaps one that will never have a perfect answer. But for me, it comes back to what I said about moral agency and responsibility. The way I look at it is that a rights-holding moral agent is a person who is capable of both demanding that their rights are respected, and who demonstrates the ability to respect those same rights in others. This is something that virtually every human being can do, even if they don't always do it. That fact, along with our existence in physical reality, means that people can also be held accountable when they do end up violating someone else's rights. And for me, that's a critical distinction. While mistreating animals is cruel and abhorrent, I would argue that animals can't have genuine rights, because they're neither capable of respecting the rights of others, nor could they be held personally accountable for their actions. No lion will ever be arrested or put on trial for murdering a zebra. 
No raven or magpie will ever be jailed for larceny, and they wouldn't understand it if they were. That imagery probably seems silly, but it's a real problem for anyone who tries to make claims about moral agency and universal ethical principles with respect to animals. An entity that is incapable of respecting equal rights for other people doesn't actually meet what I believe are the minimum conditions to be considered a fully-fledged moral agent. And that brings us back to artificial intelligence and Free Guy. Guy is just code. He doesn't have a physical body. He is a literal product, the result of other people's paid labor and creativity. And setting all the questions about intellectual property aside, Guy can only exist if that code is stored and operated on a computer. Fundamentally, Guy is just a program owned by Antoine's studio and exists as data on his servers. Those servers had to be built and purchased. And under most conceptions of property rights, Antoine should be able to do whatever he wants with those servers, including destroy them. In order for Guy to be a person, and not just a mimic of a person, he can't simply be entitled to the labor and property of others. He has to be able to respect other people's rights to their property and sustain his life through voluntary interaction. Even by the end of Free Guy, I'm not sure that he can. But at some point, I can imagine a world where artificially intelligent beings are self-sustaining and capable of existing for their own sake. When that day comes, we're all going to need to agree on the set of moral principles that define how we treat each other. So, we'd better start seriously thinking about it now. Hey everyone, thanks for watching this episode of Out of Frame. This was one of the more fun and interesting episodes to do, but I feel like I'm barely scratching the surface. I'd love to start a serious conversation with everybody about artificial intelligence and human rights in the comments. 
For those looking to participate in even more discussion, I'd encourage you all to join our Discord server. And if you're a fan of the show, please consider supporting us on Patreon or SubscribeStar. It'll give you access to special bonus content, swag, a private channel on Discord, and more, and it will help us keep making Out of Frame every month. Either way, please don't forget to like this video, subscribe to the channel, and ring that bell icon. Join our email list and follow us on all the social media, so you never miss an episode. Thanks for watching.