A lot of engineers like to think that expertise in a particular domain gives someone insight into every domain, like if you can build the biggest bridge, then you ought to be crowned king of the world or something. Personally, I don't think you should be able to order people around just because you can build a higher arch.

I'm not proud of many early THUNK videos. In fact, many of them make me want to fling my phone across the room in frustration. For example, in episode 14, I show off a newfangled gadget I paid a gut-wrenching amount of money to beta test for Google: the Google Glass headset. I talk excitedly about the technological singularity and the blurring of the line between memory and instantly accessible information, all while wearing a device on my head that eventually gave me migraines so bad I'd have to take the day off work to lie on the bathroom floor and occasionally vomit. Like I said, gut-wrenching.

Of course, mine isn't the only tale of optimistic techno-futurism running headlong into painful real-world outcomes. It's not an unusual thing, and it can get much worse than crippling headaches. The atomic age of unlimited energy came with the persistent threat of nuclear war. The promise of a truly egalitarian digital utopia was accompanied by ubiquitous surveillance and misinformation botnets.

A 1969 essay by Emmanuel Mesthene sums up a common attitude toward these unpleasant, unforeseen outcomes: the effects of technology are not inherent to the technology itself, but depend on what humans choose to do with the opportunities it presents. For Mesthene, technology itself is causally inert, leaving us with a moral responsibility to use it wisely, which means both rejecting bad outcomes and embracing good ones. We have to avoid what he calls negative externalities, those unfortunate, unforeseen disasters like global warming, loss of privacy, and opioid epidemics. But we also have an imperative to embrace the opportunities presented by tech, seizing its full potential for good.

He suggests that social structures like governments, economies, community institutions, and businesses are often incapable or insufficiently motivated to either use technology to make people's lives better or to predict and prevent its adverse effects. For Mesthene, what's really necessary to ensure that our tech works in humanity's best interests is institutional innovation, what Silicon Valley technologists now call disruption. If those established institutions were innovating and harnessing the bleeding edge of tech, if they honed their internal processes to keep a finger on the pulse of the most recent developments and stay one step ahead of the next big thing, they'd be able to anticipate its potential gifts and drawbacks and direct people to use it wisely. That is to say, what we really need to address the problems posed by technology is more technology.

Mesthene's view is fairly common, as is its corollary: that more technology is, on average, a good thing. After all, tech is developed to solve problems, so more tech means more problems solved. And if the tech causes problems, we should develop new tech to solve those problems. But fellow philosopher John McDermott takes a different view. His response to Mesthene's essay, titled "Technology: The Opiate of the Intellectuals," is unsettlingly prescient for a dude writing before the end of the Vietnam War. Like, for context, this was written in a time when computers existed, but Pong didn't.
McDermott finds Mesthene's techno-optimism ubiquitous among authors of his day, with starry-eyed speculation that the machinery of innovation will end poverty, free mankind from all work, radically increase individual freedom, and end ideology. Heady stuff. But in his estimation, there's an intrinsic political character to technology that can't be innocently ignored.

McDermott poses a simple question: why should we believe that technological innovation will necessarily benefit all mankind, rather than shifting the balance of power to those closest to that innovation while disempowering the rest? It's a fair question. There are numerous examples throughout history of technology being used to seize power and control others. What assurance do we have that these events, on average, lead to better lives for most humans, rather than better lives for the people who can make internal combustion engines, or the people who can manufacture cheap steel, or the people who have the atomic bomb?

The standard defense of technology as a net benefit for all humans everywhere hinges on the uniquely meritocratic way it selects for those who should be in power. It chooses nerds, problem-solvers, a group defined only by their ability to solve puzzles that require technical expertise, with no other collective goals that would be necessary to form a political class. In fact, it's often said that nerds have an implicit aversion to ideology, that as a group they're uniquely uninterested in political power, because playing politics would occupy time better spent doing nerd stuff. If technology tends to empower those dedicated problem-solvers, we should expect them to simply work on solving our problems, without taking any undue advantage of their position or power, right?

McDermott doesn't buy this argument. He points out that the difference between what Mesthene characterizes as positive opportunities of technology that must be seized and negative externalities that must be prevented is inherently ideological, citing a real-life nerd puzzle reserved for only the biggest nerds in the US at the time he was writing: harnessing those newfangled computers to calculate how to distribute bombs in Vietnam for maximum Vietnamese casualties. The simulations certainly determined the most rational way to distribute bombs, given the goals of the puzzle's designers. But those goals didn't include the values of, let's say, the Americans who were against the war, or of the Vietnamese people being bombed. McDermott points out that those dissenting voices weren't included in the calculus of the bombing program, not just because they didn't align with the politics of the people managing the program, but also because they're inherently antagonistic to the program itself. A nerd can't solve the bomb allocation puzzle if it includes a variable like "please don't bomb me." So they omit it.
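To make that point concrete, here's a toy sketch of this kind of allocation puzzle in Python. To be clear, this is my illustration, not anything from McDermott or from any real program: the option names, payoffs, and budget are all invented.

```python
# A toy resource-allocation "puzzle": split a fixed budget of effort across
# options to maximize a payoff function. All names and numbers here are
# hypothetical, invented purely to illustrate McDermott's point.

from itertools import product

# Payoff per unit of effort, as scored by the puzzle's designers.
# Notice what's NOT here: any variable representing the values of the
# people affected by the allocation. The solver can't weigh what it
# was never given.
PAYOFF_PER_UNIT = {"option_a": 5.0, "option_b": 3.0, "option_c": 4.0}

BUDGET = 10  # total units of effort to distribute


def total_payoff(allocation):
    """Sum the designer-defined payoff of an allocation."""
    return sum(PAYOFF_PER_UNIT[opt] * units for opt, units in allocation.items())


# Brute-force search over every way to split the budget across the options.
best = None
for a, b in product(range(BUDGET + 1), repeat=2):
    c = BUDGET - a - b
    if c < 0:
        continue  # over budget; not a valid split
    alloc = {"option_a": a, "option_b": b, "option_c": c}
    if best is None or total_payoff(alloc) > total_payoff(best):
        best = alloc

print(best, total_payoff(best))
# The answer is perfectly "rational" -- given the designers' objective.
# An objection like "please don't pick option_a" has no slot in the
# puzzle as posed, so it gets omitted by design.
```

The solver itself is flawlessly rational; the ideology lives entirely in what did and didn't get written into the payoff table.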
McDermott also notes that technology has an inertia and gravity of its own, dragging the rest of society into its operation and its underlying ideology. As the Vietnam bombing program ground on, it came with spiraling demands for other technical programs to support it. With such precise calculations of payloads and targets, you need better-trained pilots and better targeting systems to deliver those bombs exactly where they're needed. You need improvements in bomb manufacturing so that production rates are more predictable. You need better management of supply chains so that the factories always have what they need. A single technological puzzle has created a cascade of new puzzles in its wake, each with the potential to create its own cascade, increasing the scale and complexity of all the processes that feed on and into the original program.

This spiraling complexity demands the same systematic disregard for values that conflict with the goals of the original program. A steel mill worker who delays a shipment over humanitarian concerns needs to be replaced as soon as possible, you know, so the bomb factory has a steady supply of steel. A pilot who decides a chosen target isn't worth bombing screws up the calculus for the next run and needs to be reprimanded. The individuality and variety of opinion of the folks drawn into the web of this technology have to be suppressed so that its goals can be satisfied most effectively.

McDermott suggests that this undertow of blind obedience to the whims of technological programs like the Vietnam bombing algorithm is like watching the historical progression of the Enlightenment in reverse. Rather than demystifying systems of power, imbuing the common populace with the skills and understanding necessary to control their world by democratic means, and closing the gap between the upper and lower classes, technological systems obscure the mechanisms of control, minimize individuals' ability to affect the system's operation, and reinforce the class divide. When I think about the inscrutable algorithms driving YouTube and Facebook, I have to remind myself that McDermott is just talking about a bombing simulation running on punch cards. He doesn't know what's coming.

Speaking of social media, he also notes that when people are denied insight into the actual mechanisms of power by technological obscurantism, they don't just shrug their shoulders, accept their ignorance, and move on. They invent narratives to fill that space. Wild conspiracy theories about fluoridation and vaccine-delivered 5G microchips don't make a lick of sense. But with no reasonable explanation of why anyone is doing anything, with every human cog in the machine given only the bare minimum of context necessary to fulfill the ever-growing demands of a technology that isn't being used according to their values, these are the stories people cling to in order to understand their world and why it isn't working for them.

Anyways, considering all these factors, McDermott sees unregulated technological innovation as a force that ultimately works to centralize power in the hands of the already powerful, simply by the nature of the problems they ask nerds to solve for them, nerds who are systematically selected for and arranged in compliance with that ideology. He suggests that despite protestations that the technocratic hierarchies created by unmoderated innovation are apolitical, they will inevitably recapitulate class divisions, between a technological underclass whose values get left out of the development process and a ruling class that gets to pick which problems need solving.

Now, I grew up on stories of radical technological innovation coming to humanity's rescue, lifting up the downtrodden and freeing the oppressed. You don't buy Google Glass if you didn't spend your childhood binging Star Trek and wishing you had a real VISOR like Geordi La Forge.
So I have understandable reservations about McDermott's bleak take that technology, without strict oversight, is inherently corrosive to the ideals of egalitarian democracy. But his predictions about the role tech would play in the politics of the next 50 years, how its small, stochastic contributions to equality would be eclipsed by a general trend of obfuscation and centralization of the levers of power in the hands of the powerful, how the demands of tech would require the systematization of bigger and bigger swaths of the economy, even the conspiracy theory thing... it's hard for me to say he wasn't onto something. The historical precedent of democratizing technologies like the printing press is encouraging, but it has to be weighed against technologies used in the service of oppression, like the cotton gin. And I'm not enough of a historian to make that call.

But what about you? Do you find McDermott's arguments compelling, or do you think Mesthene is right in his techno-optimism? Is technology ideologically neutral, or does it tend toward supporting the ruling class? Regardless, please read McDermott's essay. I've linked it in the description. Leave a comment and let me know what you think.

Thank you very much for watching. Don't forget to follow, subscribe, and share. And don't stop THUNKing.