Computers allow you to make more mistakes faster than any other invention in human history, with the possible exceptions of handguns and tequila. I'm Josh, and this is Thunk. Is it a fedora week? It feels like a fedora week. Last week we talked about how the mind is an emergent property of neurons. That is to say, if you get enough simple neurons together, communicating in a particular pattern, you'll get a complex-looking system out of it that we called the mind. So now that we've covered all of that, let's talk about robots in science fiction.

Science fiction, it could be argued, was born as a genre with Mary Shelley's Frankenstein. In Shelley's book, published in 1818, young Victor Frankenstein's obsession with the puzzle of life and death leads him to create an undead monster, who proceeds to terrorize the living in an attempt to get Victor to craft him a bride. The monster may be a stitched-together sack of organs, but Victor's repeated denials of its humanity ring hollow. The monster longs for human companionship. It teaches itself to read. It speaks articulately, and eventually it requests the same rights as its fellow man.

Now, scientists haven't figured out how to resurrect cadavers, but they are working toward Victor Frankenstein's dream in a different way: artificial intelligence. Intelligence is difficult to define and quantify, but we're all aware that there are machines that do intelligent tasks much better than people can. We all know that computers are faster at certain kinds of math than people are, but there's also Deep Blue, the IBM computer that beat Garry Kasparov, then the world chess champion, as well as Cleverbot, a chat program that can convince people it's an actual person talking to them about half the time.
My question is: if consciousness is just a pattern in a network of neurons in the brain, and neurons are basically switches, and a computer is just a collection of transistors, which are also basically switches, would it be possible to have consciousness in a computer? A lot of people think that AI is always going to be some sort of pretend consciousness, that there's something about computers that renders them fundamentally unable to do what brains do. Weirdly enough, a lot of this conviction comes from science fiction, a genre that started with the story of a genuine artificial person.

When computers were first introduced to popular culture, they were very crude and limited in the ways they could obtain and process information. A lot of science fiction writers recognized that computers were going to be part of the future, but it was very easy to highlight the differences between the crude computers of their time and the people using them. Not Asimov, he knew what he was talking about, this was just sitting on my desk. The robot from the '60s TV series Lost in Space would shut down if someone used an idiom around it. Kirk was able to explode artificial intelligences all over the universe just by talking to them. Basically, when most people think of an artificial intelligence, they think, "Beep boop beep, I do not understand your human emotion of love."

However, the entire fields of psychology, neuroscience, and neurology are based on the idea that the brain and the mind are mechanistic in nature. That's emotions, hopes, dreams, personality, all of it. When someone with clinical depression takes medication, it's not magic; they're changing the firing and interaction of the switches in their skull. If that's true, the difference between you and a computer is probably just a question of the number of switches and the programming. How many switches? Well, the human brain has almost 100 billion neurons.
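To make the "switches" argument concrete, here's a toy sketch in Python. Each unit is a bare threshold switch: it fires (outputs 1) if its weighted inputs cross a threshold, and stays off otherwise. A single switch like this can't compute "one, but not both" (XOR), but wire three of them together and the network can. The weights and wiring are hand-picked for illustration; nothing here is meant as a model of real neurons, just of the idea that simple switches in the right pattern do things no single switch can.

```python
def switch(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs meets the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def xor(a, b):
    """XOR built from three simple switches, arranged in two layers."""
    # Layer 1: an OR switch (fires if either input is on)
    # and an AND switch (fires only if both are on).
    or_out = switch([a, b], [1, 1], threshold=1)
    and_out = switch([a, b], [1, 1], threshold=2)
    # Layer 2: fire when OR is on but AND is off -- "one, but not both."
    return switch([or_out, and_out], [1, -1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} XOR {b} = {xor(a, b)}")
```

Scale that idea up from three switches to many billions, with the wiring pattern doing the real work, and you have the shape of the emergence argument from last week.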
That's almost 40 times the number of transistors in a current leading-edge commercial processor. Those neurons also have several different neurotransmitters they can communicate with, as opposed to transistors, which can only talk in ones and zeros. That's a pretty serious hurdle, but if technological development keeps the pace it has been keeping, it's not infeasible to suggest that in a few years we'll have the capability to simulate a thinking, conscious person so closely that we won't be able to tell the difference. The question is: what happens if, or maybe when, we make an artificial intelligence that's indistinguishable from a person? What if it makes a compelling argument that it deserves all of the rights a person has? Are we going to pull a Frankenstein on it? Or is the idea of a machine that is a person inherently flawed? Leave comments, let me know what you think.

Last week we talked about René Descartes being right about a lot of stuff, but wrong about human thought. Let's see what you had to say about Cartesian dualism. Sammy I Am posted a very thoughtful argument about how, despite the fact that everything in the brain is reducible to chemical processes, it's still useful to have labels for large-scale emergent phenomena like consciousness. Personally, I vacillate between the need to have labels for things and the need to demonstrate to everybody else that they're using those labels incorrectly. So that's just my thing. Another Sam, a friend of mine, requested more hats (mission accomplished) and also asked why it's "Cartesian" dualism and not "Descartesian" dualism. So I looked it up on Wikipedia, and it turns out it comes from the Latinized version of his name, Cartesius, which I didn't know.
The same page also listed about ten other things named for Descartes, including the Cartesian diver, the Cartesian product, and something called Cartesian anxiety, a sort of longing to know absolutely everything with absolute certainty. Descartes was kind of a show-off. Next week, I'm going to talk about surveillance and privacy. If you'd like to read up on it, I've left some links in the description. Blah, blah, subscribe, blah, share, and I'll see you next week.