What worries me the most about the development of strong AI is that it's going to be really, really easy to push its buttons. A quick reminder before we get started: THUNK episode 100 is coming up, so if you have any questions that you'd like me to answer, please leave them below.

In 1984, cyberneticist Valentino Braitenberg published this book, Vehicles: Experiments in Synthetic Psychology. It's a fascinating and, as you can see, really short read, and I highly recommend it if you're interested in robotics, or biology, or philosophy, or neuroscience, psychology, cognition. The book is essentially a sequence of thought experiments using very simple, easy-to-understand machines.

Take this one from chapter 1. This sketch is a symbolic representation of a motor attached to a temperature sensor, just two components. The robot is wired so that when the sensor detects more heat, it sends more power to the motor, and less when it's cool. If you were to observe the behavior of this device, you would see that it slows down or even stops in cool places, and goes like crazy when it passes through a warm patch. With a little imagination, you might say that it prefers to be cool. Okay, a lot of imagination.

We project human characteristics onto non-living stuff all the time, calling cars stubborn if they have trouble starting, or a breeze fickle if it comes in unpredictable bursts. But it's hard to look at a sensor stuck on a motor and think "intent." It is interesting, though, that we can get such seemingly complex behavior from so few parts. Granted, it's a behavior that you can see in very unsophisticated living things, like slime molds and nematodes, but those still require many more components than this thing does.

This theme continues in Braitenberg's next example, a family of machines with two sensors and two motors. We've only added two more parts, but the behavior is much more interesting. If we wire the sensors so that each one powers the motor on the same side of the robot, then the sensor that detects more heat will power its motor harder, causing the robot to turn away from hotspots and drive off. If we wire the sensors the opposite way, so that each one powers the motor on the opposite side of the robot, then it will turn towards hotspots and accelerate as it gets closer to them.

Again, using a little imagination, we might see these robots exhibiting behavior that we'd normally attribute to things much more sophisticated than four components stuck together. This one seems to actively avoid heat, turning away from it and fleeing if it gets too close. This other one attacks heat, accelerating towards hotspots as if to ram them. If you replace the heat source with another robot, this gets much more interesting.

The point of Braitenberg's exercises isn't to convince you that these things are somehow alive or thinking, but rather to demonstrate that it's possible to achieve very complex behavior, behavior that we'd normally attribute to things that are alive with some sort of intelligence, with only a handful of very simple parts. This poses a bit of a problem. We have these examples of mechanical constructions that don't even have computers in them: no decision making, no processing, nothing. But if someone was watching who didn't know what their guts looked like, it seems very possible that they might mistake them for intelligent machines making deliberate choices. This is the synthetic psychology that Braitenberg is referring to.
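Incidentally, the wiring translates almost directly into code, if you want to play with these vehicles yourself. Here's a minimal simulation sketch: the flat 2D world, the single heat source with heat falling off over distance, the sensor geometry, and all the names are my own assumptions for illustration, not anything from the book.

```python
# A minimal sketch of Braitenberg's vehicles, assuming a flat 2D world and a
# single heat source whose "heat" falls off with distance. Names, numbers,
# and sensor geometry are illustrative, not from the book.
import math

HEAT_SOURCE = (0.0, 0.0)

def heat_at(x, y):
    """Heat reading at a point: stronger closer to the source."""
    return 1.0 / (1.0 + math.hypot(x - HEAT_SOURCE[0], y - HEAT_SOURCE[1]))

# The two-part machine is the degenerate case: one sensor driving one motor,
# i.e. speed = heat_at(x, y), nothing more.

class Vehicle2:
    """The four-part machine: two sensors, two motors, differential drive."""
    def __init__(self, x, y, heading, crossed):
        self.x, self.y, self.heading = x, y, heading
        self.crossed = crossed  # False: same-side wiring; True: crossed wiring

    def step(self, dt=0.1, offset=0.5):
        # Sample heat at the left and right sensor positions.
        left = heat_at(self.x + math.cos(self.heading + offset),
                       self.y + math.sin(self.heading + offset))
        right = heat_at(self.x + math.cos(self.heading - offset),
                        self.y + math.sin(self.heading - offset))
        # Same-side: left sensor drives left motor (turns away from heat).
        # Crossed: left sensor drives right motor (turns toward heat).
        l_motor, r_motor = (right, left) if self.crossed else (left, right)
        self.heading += (r_motor - l_motor) * dt  # stronger right motor turns it left
        speed = (l_motor + r_motor) / 2
        self.x += speed * math.cos(self.heading) * dt
        self.y += speed * math.sin(self.heading) * dt

coward = Vehicle2(3.0, 1.0, 0.0, crossed=False)    # flees hotspots
aggressor = Vehicle2(3.0, 1.0, 0.0, crossed=True)  # charges them
for _ in range(1000):
    coward.step()
    aggressor.step()
```

Flip one boolean and the "personality" flips with it; that's the entire gap between fear and aggression here.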
We've constructed something that seems to have all the external appearances of some sort of intelligence or intent, but anybody familiar with how it's actually built would probably say that it doesn't really have any. This becomes a real problem when we're talking about even more complicated machines, like thinking machines.

You're probably familiar with the Turing test, first proposed by Alan Turing in his absolutely prescient 1950 paper "Computing Machinery and Intelligence." Turing thought that the question "Can machines think?", although interesting, was way too ill-defined and vague to be useful, so he proposed an empirical test to replace it. He argued that if a human couldn't reliably tell the difference between a typed conversation with another human and one with a digital computer, then the question of machines thinking would be practically answered: for all intents and purposes, we should treat that computer as being intelligent. End of story. His paper and the Turing test were the beginning of an incredibly rich and varied conversation about machines, artificial intelligence, and consciousness.

One of the most famous, or infamous, comments on Turing's proposal was developed by philosopher John Searle almost 30 years later. He argued that because computers are simply following a series of instructions, they could never be said to be truly conscious or intelligent, and he proposed a thought experiment to demonstrate it. Suppose that you locked me in a room with a pile of Chinese symbols, which I can't read or understand, and a giant book which contains instructions for matching certain strings of those symbols to other strings. Someone slides a question under the door written in Chinese, and, consulting the book, I find the appropriate string of symbols to match with it. I arts-and-crafts my stack of symbols the way the book tells me to, and after I've assembled an answer, I send it out. Thankfully, the book's answers make sense, so to anybody outside the room, it would seem that there's someone in here who speaks Chinese. But again, I don't know what any of these symbols mean. As far as I'm concerned, they're just random squiggles that I assemble according to the instructions in the book.

Searle asserts that this is indicative of a problem with the entire thinking-machine project to begin with, and especially with the use of the Turing test as a gauge for intelligence. For him, the fact that the person in the Chinese room doesn't have any idea what the squiggles mean, and probably never will, demonstrates that it's impossible for an algorithmic process to develop a sense of semantics, an understanding of meaning, just by manipulating symbols according to rules. If the person in the room can't come to understand Chinese by following the rules of the book, then it's impossible for a computer following the rules of an algorithm to understand anything, even if it passes the Turing test. Meaning and thought are exclusive to brains, period.

Now, if you've ever seen THUNK before, you can probably guess what I think of Searle's argument, but I'm in good company here. Numerous philosophers have raised many different objections to Searle's thought experiment. One of my favorite responses to the Chinese room was published by David Cole in 1991, who asserts that, although it's true that neither the person in the room nor the room itself understands Chinese, there's still someone in there who does.
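Before we get to Cole's response, it's worth noticing just how mechanical Searle's room really is. Stripped of the props, the rule book behaves like a lookup table: match an input string, emit the paired output string, understand nothing. Here's a toy sketch; the questions, answers, and rules are invented for illustration, and a real rule book would obviously need to be astronomically larger.

```python
# A toy Chinese room: pure symbol matching, no semantics anywhere.
# These question/answer pairs are made up for illustration.
RULE_BOOK = {
    "您多大年纪了？": "我七十六岁了。",  # "How old are you?" -> "I'm 76."
    "你最喜欢吃什么？": "椒盐鱿鱼。",    # "Favorite food?" -> "Salt and pepper squid."
}

def person_in_room(symbols: str) -> str:
    # The operator: matches squiggles to squiggles by rule,
    # with no idea what either string means.
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(person_in_room("您多大年纪了？"))  # a fluent reply the operator can't read
```

Searle's claim is that no matter how big the rule book gets, nothing in this picture ever comes to understand Chinese. Cole's observation starts from exactly this setup.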
Cole imagines that the Chinese room is representing itself in a Turing test as an elderly Chinese woman. There are all sorts of questions one might ask the room about things like its age, its height, and its gender, which I'm running around inside crafting answers to, but it's pretty obvious that they're not going to be my answers. "How old are you?" "76." "What's your favorite meal?" "Salt and pepper squid. If you cook it right, it tastes delicious." It's pretty clear that I'm not the one who's supposed to be understanding Chinese here. If it were me, I probably wouldn't be describing my childhood memories of Guangzhou and how much I hate hot flashes.

Cole then posits a slight variation on the idea, suggesting that the book might contain another set of incomprehensible squiggles, this time in Korean. I'm still in there furiously cutting and pasting together responses to questions as they come in, but even though I can't tell the difference between Chinese and Korean symbols, it's possible that I end up creating an entirely different set of responses for an entirely different person, somebody who speaks only Korean and no Chinese. There are all sorts of things that you might ask the room in Chinese or Korean and get totally different answers. You could pass information to one of the people in there that never makes it to the other one, because they simply don't read that language. They might not even know about each other, despite the fact that they essentially occupy the same space, and at the end of it all, you could open the door and be surprised to see just me in there instead of two Asian people.

For Cole, the people in the room who understand Chinese or Korean are virtual minds, very much like virtual machines in computer science. There's no physical component of the system that we might point to and say, "There, there's the Korean speaker," but nonetheless there is a process which understands and speaks Korean, one which only exists when it's being executed by the room's hardware, whether that's me or someone else. He argues that strong artificial intelligences, if we manage to build them, will function in exactly the same way. We're not going to point to a bunch of static bits on a hard drive or transistors in a CPU and say, "There, that's HAL." Instead, HAL will be what happens when those bits are being actively processed by a computer.

Like many other philosophers, Cole doesn't actually think that the Turing test is sufficient to prove machine intelligence, but he definitely believes that the Chinese room argument is totally insufficient to discount the idea, which leaves us in a bit of a pickle. If you don't accept Searle's argument, for whatever reason, then we can't use it to rule out the possibility of intelligent machines. But, as Braitenberg's vehicles demonstrate, even if something has all the external appearances of making intelligent decisions, it's possible that it's just a few simple parts put together in a fashion that makes it look that way. It's not an easy question to define or think about, let alone answer.
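Part of why I like Cole's virtual machine analogy is that it maps neatly onto ordinary software. In the same toy style as before, here's a sketch of two "speakers" that exist only as processes: each has its own rules and its own memory, and both run on the same hardware, the operator. Everything here, the names, phrases, and rules, is invented for illustration.

```python
# A sketch of Cole's "virtual minds": two speakers that exist only as
# processes sharing one physical substrate. All rules and phrases invented.

class VirtualSpeaker:
    def __init__(self, rules):
        self.rules = rules
        self.memory = []  # each speaker accumulates its own conversation

    def respond(self, symbols):
        answer = self.rules.get(symbols)
        if answer is not None:
            self.memory.append((symbols, answer))  # the other speaker never sees this
        return answer

chinese = VirtualSpeaker({"您多大年纪了？": "我七十六岁了。"})   # "I'm 76."
korean = VirtualSpeaker({"몇 살이세요?": "일흔여섯 살입니다."})  # "I'm 76."

def room(symbols):
    """The one physical process: me, blindly trying every book in turn."""
    for speaker in (chinese, korean):
        answer = speaker.respond(symbols)
        if answer is not None:
            return answer
    return "???"

room("您多大年纪了？")  # reaches only the Chinese speaker's memory
room("몇 살이세요?")    # reaches only the Korean speaker's memory
```

There's no component in that program you can point to and call "the Korean speaker"; she exists only while the rules are being executed, which is exactly Cole's point.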
But considering that many AI researchers are fed up with people asking them about the Turing test, when they've achieved so many more interesting and varied things with artificial intelligence, maybe we're coming at this from the wrong direction to begin with. Maybe the whole point of the Turing test is that trying to define a checklist of what actually counts as thinking isn't as important as what an AI actually does. After all, whether it's thinking or not, if the heat-seeking robot is attacking my ankles, I want it to stop.

What do you think about the question "Can machines think?" Please leave a comment below and let me know what you thunk. Thank you very much for watching. Don't forget to give this a thumbs up, subscribe, and don't stop thunking.