 How many artificial super-intelligences does it take to change a lightbulb? New directive confirmed. What? Engaging matter- No, it's okay. I didn't mean- Matter-transmutation complete. Oh crap. I just got back from seeing Avengers Age of Ultron, a really good comic book movie where a totally unrealistic artificial intelligence runs amuck and wreaks havoc. So you could kind of say that I'm an expert on the subject. The idea has been floating around for quite some time. Before Isaac Asimov, even before Mary Shelley, there are ancient legends of devices known as brazen heads, dangerous arcane machines made of bronze, which could answer any question posed to them, frequently in a be careful what you wish for kind of way. Let's just say that people have been worried about machines getting smarter than humans for a very, very long time, and if the current box office has any indication, now more than ever, and perhaps rightly so. Depending on how you want to define intelligence, you might say that that's already happened. Genome-wide association studies are finding relationships that no human would have thought to look for, and deep learning algorithms are creating software that no human could have coded on their own. I mean Google's AI has taught itself how to play old Atari video games. I can't even play old Atari video games. But for the most part, these are specialized tools designed for very specific tasks, but sort of hopeless at everything else. A stock trading AI isn't going to do so hot at playing chess. The elusive holy grail artificial intelligence researchers have been chasing is general AI, or strong AI. A single algorithm that would be equally good at figuring out how to play video games or translating Egyptian hieroglyphs or cooking a delicious meal, depending on what was useful for achieving its programmed goals. After all, that's what intelligence really is, obtaining and using information about the entire world to manipulate it towards some end. Consider a human being given a directive like, clean this place up. An average human intelligence would allow them to pick up, sort, put everything away, dust, vacuum, clean the windows, do some laundry, maybe even go down to the store and pick up some storage containers or some paint. Each one of these sub-activities is an extraordinarily different process requiring different knowledge and faculties. We don't even think about the extraordinarily complex network of processing and memory that's necessary for achieving even the simplest goals. With that in mind, it's not that shocking that general AI has been so difficult to achieve. It's hard to teach computers anything, let alone everything, especially considering the fact that they think much differently than humans do. That's an important thing to note. Human beings share a similar context for all of their goal-achieving activities, like society or self-preservation. You can be relatively certain that a person who's trying to clean up their room isn't going to bulldoze your house because it's making the light hit their curtains a little bit weird. But computers don't have that context implicitly and a general AI that wasn't programmed to take them into account might come up with some answers that we weren't expecting or didn't really want. This is precisely why so many smart people, including Bill Gates, Elon Musk, and Stephen Hawking, are very concerned about what happens when general AI is achieved as many AI researchers are projecting will happen in the next 20 years. An adaptive artificial intelligence might be all sorts of unpredictable. This already volatile situation is compounded by the fact that once there's general artificial intelligence, it's not a far leap from there to artificial superintelligence. Some really incredible possibilities arise when you've got a general intelligence running on silicon. Even without Moore's law, we're talking about a form of scalability here that could conceivably outstrip humans relatively quickly. See, despite what some pretty terrible sci-fi might tell you, human brains have real practical limits that silicon brains don't. For starters, they're limited in size. No matter how smart the smartest person alive is, they've only got about three pounds of stuff to work with. There's no reason an AI couldn't use thousands of pounds of processing power. Also, we're sort of at a disadvantage already from a hardware perspective. Neurons aren't as fast or as compact as transistors on modern CPUs. When we finally got a computer running a general AI that can match a human for problem solving, it's probably going to get much smarter very quickly. Especially once it realizes that the fastest way to achieve whatever it's been programmed to do is to copy itself onto any other computers it can get to. And as millennia of human history have shown over and over, smart is powerful and dangerous. Okay, so what? Even if we have a superintelligent AI that's hell-bent on bulldozing a house, we just pull the plug, right? We'll keep it somewhere that doesn't have any internet access and if it gets uppity, then we shut it down. Well... If there's one truth that's remained constant through decades of computer science, it's that nothing can't be hacked. Recent reports by government cybersecurity agencies have demonstrated methods of hacking systems that are air-gapped. That's computers that aren't hooked up to any external devices at all. No internet, no peripherals, just a computer sitting by its lonesome somewhere quiet. If you get a cell phone into the same room, bam, it's compromised. Some brilliant human hackers figured that out. What would a machine that's smarter than humans figure out? How could we possibly control something that's smarter than we are? All this adds up to a serious Pandora's box of disaster that we might unwittingly unleash if we solve the problem of how to make a general AI without solving the problem of how to make it safe... first. So how can we do that if we don't know the details of this implementation? Well, if we could be reasonably assured that the programs that we were assigning the general AI to work on wouldn't seriously clash with human values, that would be a good start. Okay, great. So what are human values? Huh. I mean, what? Happiness? But not too much happiness and not solely drug-induced happiness and enough other stuff so that we don't just acclimate to a particular point on the hedonic treadmill and not at the expense of civilization or society or longevity. How could we ever hope to find and codify reasonable guides for an AI to follow and not end up with Ultron bulldozing houses to make the room look nicer? Well, thankfully, some of the first programs that we've produced that are sophisticated enough to surprise us are big data algorithms, software that's designed to look for trends in seemingly chaotic information. Maybe that's how we keep ourselves from getting Ultron. We need a general AI with the specific task of finding out what humans actually want and then policing any subsequent AIs to make sure that they don't needlessly endanger those values, whatever they are. In my opinion, that's really the first general AI that we should be working on. Anything else and a new Terminator movie will be the least of our concerns. Anyways, there's many reasons to be worried about the potential consequences of implementing some form of general AI. But thankfully, there are many individuals and organizations that are raising awareness and advocating for some sort of controlled, concerted approach to creating one. And spoilers, that approach does not include leaving a genius level totally alien AI to unzip at work while you go to a party, Mr. Stark. Do you think that the first AI that humans produce will be friendly? Please leave a comment below and let me know what you think. Thank you very much for watching. Don't forget to blah, blah, subscribe, blah, share, and don't stop thunking.