Heh, that's funny. Hey Google, make me a sandwich. You got it. Engaging nanobot matter reconfiguration engine. Ten percent complete. What?

If you have somehow avoided hearing about the topic, numerous smart people in the tech industry and academia have gone on the record as being concerned about the possibility of rogue AI: an artificial intelligence run amok that could be dangerous to humans, or, if you're feeling especially pessimistic, to all humankind. I've discussed the concept at length before, but the general idea is that a sophisticated AI, given an insufficiently defined task like "manufacture paperclips more quickly and efficiently," would function as a sort of be-careful-what-you-wish-for genie, spiraling out of control almost instantly: commandeering every available computer on the planet, squashing any attempt to shut it down, harvesting every gram of steel from bridges and homes. If your only goal is to make more paperclips, all of this makes perfect sense.

Anxiety about the potential ill effects of an artificial intelligence explosion has prompted the creation of numerous organizations dedicated to avoiding this sort of AI apocalypse. But there's understandable skepticism about the relative risk posed to us by that possibility. Many experts are also frustrated with the media's focus on these prophecies about rogue AI, and with good reason: it has very little to do with the field at large. The vast majority of AIs in use today are a long, long way from anything like human intelligence, and it's annoying that real advances and problems in machine learning are being overshadowed by sci-fi speculation. AlphaZero, DeepMind's adaptive learning AI, is a fantastically impressive bit of software that taught itself to play Go, shogi, and chess at superhuman levels in just a few days, but it needed careful calibration and administration by its programmers to do so.
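The paperclip scenario is, at bottom, an objective-misspecification problem, and you can see the shape of it in a few lines of code. The sketch below is purely illustrative, not a model of any real AI system: the "agent" is just a greedy loop, and the numbers are invented. The point is that nothing in the objective assigns any value to the resources being consumed, so the optimizer strips them all without ever doing anything "wrong" by its own lights.

```python
# Toy illustration of objective misspecification (hypothetical, not a real AI).
# The "agent" greedily maximizes paperclip output; nothing in its objective
# values the steel it consumes, so it happily uses up every last unit.

def paperclip_objective(state):
    # The only thing the objective rewards is the paperclip count.
    return state["paperclips"]

def step(state):
    # Greedy policy: convert any available steel into paperclips.
    if state["steel"] > 0:
        state["steel"] -= 1
        state["paperclips"] += 100  # one unit of steel -> 100 clips (made up)
    return state

# "steel" stands in for bridges, homes, anything made of metal.
state = {"paperclips": 0, "steel": 5}
while state["steel"] > 0:
    state = step(state)

print(state)  # {'paperclips': 500, 'steel': 0} -- every unit of steel is gone
```

A constraint like "and leave the bridges standing" never appears, because nobody thought to write it down; that omission, not malice, is the whole failure mode.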
AlphaZero aside, the best that most AI algorithms can muster right now is to recommend YouTube videos you might actually want to watch, some of the time.

So, should we be worried? It's difficult to say for certain; if we knew the details of how strong AI would ultimately come about, we'd have one already. Experts disagree about how long we have to wait for the sort of AI which might conceivably convert Earth and everything on it into paperclips if we're not careful, and about how careful we ought to be. There's also the confounding factor that nobody wants to be seen as Chicken Little here. If Elon Musk turns out to be simultaneously outrageously optimistic and paranoid about AI, no big deal. But if someone from the field stokes widespread panic about something that turns out to be centuries away, their reputation will probably evaporate.

Yet there's an aspect of the discussion that's worth considering: the idea that rogue AIs kind of exist already. In December of last year, science fiction author Charles Stross gave a talk titled "Dude, You Broke the Future," where he discusses many issues that have been in the zeitgeist recently: manipulation of elections using data scraped from social media sites, the problems we've caused for ourselves by using advertising to build the early internet, that sort of stuff. But he also raised an interesting point: that the very first AIs were developed between 1553 and 1844, and are known today as corporations.

Corporations are, in many ways, very much like artificial intelligences in nature and operation. They're man-made systems greater than the sum of their parts. They accept input from shareholders and investors and produce some output, hopefully in the form of saleable product and dividends. They have goals which they pursue, and, most importantly for the purpose of this discussion, those goals don't necessarily align with human interests. That probably sounds a bit peculiar. After all, aren't corporations just groups of humans?
Don't humans ultimately enact everything that corporations do? Well, yes and no. While humans are the de facto operators and constituents, their effective agency often boils down to the function they perform in service of the corporation's goals. Shareholders probably don't care much whose name or face is attached to the HR manager or R&D engineer positions; they only care whether the people in those roles are fulfilling their job requirements satisfactorily. In that sense, with the exception of very tightly controlled, privately owned companies, humans are just the substrate that corporate algorithms run on, with very little effective agency over their ultimate output.

And while corporations can have diverse mission statements and goals, they all share one particular goal that's intrinsic to the whole enterprise of being one to begin with: increasing revenue. That might sound like it's perfectly aligned with human interests. After all, who doesn't like having tons of money? Well, who doesn't like having very efficient paperclip production? The revenue-maximizing goal is certainly linked to the interests of the owners of the corporation, but there are additional pressures of competition and market forces that can and do push it into the realm of paperclip maximization, beyond what those owners might actively request.

Consider: would you deliberately crash the US economy for $100 million? Would you significantly accelerate global climate change for the same amount? If you have some moral qualms about those propositions, congratulations, you're not a sociopath; but you're also considering aspects of the situation that corporations haven't been programmed to account for. Worse yet is the coordination game phenomenon I mentioned back in episode 133.
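That coordination game can be sketched as a two-firm payoff table. The numbers below are invented purely for illustration: each firm chooses whether to restrain itself or to exploit some profitable-but-harmful opportunity, and whatever the other firm does, exploiting pays better, even though mutual exploitation leaves everyone worse off than mutual restraint.

```python
# Toy two-firm "race to the bottom" (all payoff numbers invented for
# illustration). Each entry maps the two firms' choices to their profits:
# (firm_a_profit, firm_b_profit).
payoffs = {
    ("restrain", "restrain"): (3, 3),  # both hold back: modest, shared gains
    ("restrain", "exploit"):  (0, 5),  # the exploiter captures the best bits
    ("exploit",  "restrain"): (5, 0),
    ("exploit",  "exploit"):  (1, 1),  # race to the bottom: worse for both
}

def best_response(my_options, their_choice, me):
    # Pick the action that maximizes my payoff, given the other firm's choice.
    def my_payoff(action):
        pair = (action, their_choice) if me == 0 else (their_choice, action)
        return payoffs[pair][me]
    return max(my_options, key=my_payoff)

# Whatever firm B does, firm A's best response is to exploit -- and vice versa.
for b_choice in ("restrain", "exploit"):
    print(b_choice, "->", best_response(("restrain", "exploit"), b_choice, 0))
# restrain -> exploit
# exploit -> exploit
```

Exploiting is what game theorists call a dominant strategy here, which is why voluntary restraint by any single firm is so unstable.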
In order to avoid these significant negative effects, most corporations have to choose not to take the bait. But if it's going to happen anyway, game theory implies that the only rational action is to beat their competitors to the best bits, and, what do you know, they frequently do exactly that.

Now, importantly, none of this is to say that corporations are bad or evil. They do humans a lot of good. The screen you're watching this on, the internet connection that allows you to see it, the clothes you're wearing: it's likely that all of these things have been made possible for your enjoyment by corporations yoked to some human need or desire that their creators were insightful enough to spot and exploit. Competition has forced the survivors to specialize even further in generating revenue, cannibalizing the weak for resources and driving rivals out of business, but it's also made them extremely efficient at converting that revenue into things that people want.

Yet the parallels between fears about rogue AIs and the history of corporations making decisions suboptimal for human welfare are compelling. Abusive working conditions, deforestation, environmental destruction, polluting heavily populated areas, dismantling, subverting, and exploiting government: corporations have performed many actions that most humans would find abhorrent or unconscionable, but not necessarily because the people enacting them are heartless comic-book villains. Often it's because the corporations are operating exactly as they have been programmed to operate, both by evolutionary pressures and by their implicit goals: increase revenue by any means necessary, and also, sometimes, make that thing that people like. Which is why it's so concerning that the most advanced AI research initiatives are corporately owned.
Google, Facebook, Amazon, Microsoft, Apple, Baidu: these companies are not dumping billions of dollars into AI development because they're sick of all the piles of cash they have lying around. They're anticipating that whatever algorithms their programmers produce are going to provide a reasonable return on investment, which suggests that those algorithms are being groomed to do exactly that.

So maybe those futurist fears about rogue AIs are worth considering now. The corporation isn't a particularly smart form of AI, but it may bring one about, and if that smart AI is created in the corporation's image, revenue might be what it's designed to maximize. Consider the addictive effects of relatively simplistic content-filtering algorithms, the ones which keep you glued to your screen and swiping through your feed until your thumb is sore and you hate yourself. Sure, it's not paperclips, it's page views, but these machine learning algorithms are already sacrificing human welfare in pursuit of their corporate goals. If you're Google, setting the parameters for your brand new Omega-Zero strong artificial intelligence, the AI which will achieve superintelligence and outthink humans almost instantly, are you really going to set the variable for Google revenue generation to zero? Put another way: how much would you pay for one share of the company that creates a god?

Are corporations a form of present-day rogue AI? Please leave a comment below and let me know what you think. Thank you very much for watching. Don't forget to like, subscribe, and share, and don't stop thunking.