Man, I used to turn out these videos weekly, but it's taken me like three weeks to put together a decent critique of piecewise analysis. I can't believe I've been reduced to this.

In Episode 237, we talked a little about the gap between large language models and artificial intelligence, as well as some of the problems that can arise from mistaking a digital Ouija board for a source of well-reasoned information. But some folks have a different interpretation. Geoffrey Hinton, for example, puts it roughly like this: people say these big models are just autocomplete, and on some level, that's true. They really are just predicting the next word. But ask yourself what you'd need to understand about what's been said so far in order to predict the next word accurately. Basically, you have to understand all of it to get that word right. So you're just autocomplete, too. (There's a toy sketch of what that prediction task looks like a little further down.)

Hinton isn't the only one in this boat. In his 1969 book Contingencies of Reinforcement, B.F. Skinner quipped, "The real question is not whether machines think, but whether men do." Four days after ChatGPT was released, OpenAI CEO Sam Altman tweeted, "I am a stochastic parrot, and so are you." The thrust of the idea is that for all the objections raised about how shallow and algorithmic these predictive-text programs can be, we aren't categorically different. As much as we'd like to believe that there's something unique and mysterious about our intelligence, human minds are, at their most fundamental level, nothing more than machine learning algorithms, spitting out the next token in a sequence according to a strict, deterministic set of statistical operations. The only meaningful difference between us and autocomplete is a matter of scale.

The assertion that human minds are nothing but sufficiently powerful predictive-text engines is an example of reductionism, an analytic strategy that claims some phenomenon can be explained in its totality by adding up a set of lower-level phenomena, each of which can be understood independent of the whole. A tricycle, say, is composed of a few dozen parts, each of which could be considered in isolation. A pedal is shaped like this, a handlebar grip is shaped like that, and each has this or that set of properties: shape, strength, elasticity, and so on. If you added up every one of those individual parts and how they interact with each other at their boundaries, you'd end up with an array of properties that fully describes a pedal-powered machine with three wheels, and you'd have a reasonable answer if anyone asked you how it works.

Reductionism is so integral to how we approach certain problems that we usually don't even notice or acknowledge it; nobody would be confused if you started planning a party by breaking it up into food, decorations, games, and so on. But as natural as reductionism can be, there are contexts in which it's obviously inappropriate or applied in the wrong way. If someone tried to compose the most beautiful symphony ever written by stapling together the prettiest moments from a dozen other symphonies, they'd rightly be laughed out of the concert hall. And it's not always easy to tell exactly when a reductionist argument is or isn't warranted.
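Quick aside: to make "just predicting the next word" concrete, here's a minimal sketch in Python. This is my own toy illustration, not anything Hinton or OpenAI describes; real language models produce a probability distribution over tokens with a trained neural network rather than a lookup table of word counts, but the input-to-output shape of the task is the same.

```python
# A deliberately tiny "autocomplete": a bigram model that predicts the
# next word by counting which word most often follows the current one.
# Real LLMs replace the counting with a trained neural network, but the
# job -- map context to a likely next token -- has the same shape.
from collections import Counter, defaultdict

def train_bigrams(text):
    """For each word, count how often every other word follows it."""
    followers = defaultdict(Counter)
    words = text.lower().split()
    for current, following in zip(words, words[1:]):
        followers[current][following] += 1
    return followers

def predict_next(followers, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = followers.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = "the cat sat on the mat and the cat ate the snack"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # -> "cat" (follows "the" twice)
print(predict_next(model, "on"))   # -> "the"
```

A counter like this nails "the cat" and fails at everything interesting. Hinton's point is that making the same kind of prediction reliably in open-ended text forces the predictor to absorb something that looks an awful lot like understanding.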
Philosophers have been arguing about it since at least the time of Plato, and many have tried their hand at drawing a bright line that would differentiate tricycles from symphonies. Scientists and philosophers of science who lean heavily on reductionist arguments seem especially keen to defend the strategy by denouncing improper uses of it. Daniel Dennett, Richard Dawkins, and others have taken stabs at what makes a reductionist argument invalid, implying that everything else is above board. Unfortunately, despite warning their readers away from giant, unjustified leaps in reasoning (which, yeah, I don't think anyone would say is a good way to arrive at correct conclusions), they don't really give us a robust method for deciding whether a particular reduction is justified. Dennett, for instance, claims that bad reductions are "greedy," asserting reducibility without a robust explanation of how the reduction actually works. Okay, but how much explanation is enough? Is the claim that human minds are stochastic parrots greedy? How would we know if it was? More importantly, these accounts seem to skip over why someone would be over-eager to deploy a reductionist explanation in the first place: why reductionist moves are so popular, why they go screwy so often, or why accusing someone of being reductionist is usually meant as a dig in a way that calling them a Platonist or a nominalist isn't.

Taking a slightly different approach, the philosopher Richard Rorty argued that rather than trying to enumerate every way reductionism can possibly go wrong, it might be more helpful to look closely at what it's for, what it does for us when we use it in an explanatory context. A claim like "a tricycle is nothing more than this collection of parts" suggests that a certain set of words can be replaced in its totality by a different set, words describing separate components, which can help us answer questions that might have been too vague to answer without the new vocabulary. Considered as an irreducible whole, the only thing you'd be able to say about why a tricycle doesn't work is something like, uh, it's shaped weird. But if a tricycle is considered as nothing more than a collection of parts, each with its own intended shape and function, we can stop talking about the tricycle and talk about the parts instead, about which parts are doing the right things and which aren't: that wheel isn't round. We've effectively replaced tricycle talk with parts talk, which allows more specificity and clarity in certain contexts. The tricycle vocabulary is still available as shorthand if we're in a hurry, and we can switch back and forth between the two levels of description fluently as the situation demands.

The thing is, there's an implicit assertion that nothing of value is lost when we eject tricycle language from our vocabulary, and there are two ways to go about satisfying that condition. We could diligently map each feature of the higher-level description onto the new framework, finding ways to save the appearances we're accustomed to and parse them using the new set of descriptors. Or we could simply declare that none of the harder stuff is valuable enough to keep. Take the reductionist claim that a human being is nothing more than $160 worth of chemical elements. It would take an insane amount of legwork to translate thought, morality, sensation, poetry, kindness, and all the rest into purely chemical terms.
There isn't even a reasonable scientific starting point for how we would go about doing that in principle. But a person making that sort of claim doesn't really want to reduce everything about human beings to atomic terms, to facilitate a discussion of poetry by letting us talk about this carbon atom moving this way instead of that way and how it really throws off the rhythm of the third stanza. What they're really claiming is that only the stuff that can be described in chemical language matters, and everything else is simply not worth talking about.

This is where reductionism gets its unsavory reputation. Trying to explain something by replacing dialogue about it with dialogue about its simpler parts can be a useful method for clarifying otherwise opaque ideas, but it can also be a rhetorical move to minimize or dismiss inconvenient features of the big picture, to sweep anything that might be troublesome for a particular agenda under the rug. You might hear echoes of engineer's syndrome here: folks whose jobs require reducing complex problems to simple models sometimes get into the habit of approaching every other problem the same way, ignoring everything they can't capture in simple terms, no matter how essential, in the interest of building a theory they can work with.

We can see evidence of good-faith reductionist impulses that do their best to bridge important vocabulary to lower-level descriptions, doing the legwork to preserve and accommodate our common-sense understanding in the new framework. Explaining tricycle riding in terms of components readily gives us the tools we need to port language like "turn left" into a parts-level description without losing anything in the process. Reducing heat to the average kinetic energy of molecules also gives us some handy descriptive tools without compelling us to ignore the warmth of a fireplace (I'll write out the standard formula at the end of this passage). But we can also see clear examples of people weaponizing a claim of reducibility to reshape an otherwise unfriendly landscape of discourse. If I don't care about the role morality might play in some context, I can assert that it reduces to neuroscience, and because there's no way to talk about morality in neuroscience terms without shaving off so much of it that it's barely recognizable, I've effectively excised it from the conversation.

That's a risk we have to confront on a regular basis. It's simply not possible to build a perfect one-to-one representation of the whole world in our heads. We all have to make decisions about which details we're comfortable shaving off to achieve our desired outcomes, and which ones we'd rather keep around even if they come with a fair amount of explanatory baggage. But those are choices, and they're inexorably tied up with our values and goals. Accusing someone of being reductive, then, is claiming that they're trying to answer important questions they don't really care about by saying, why don't we just not talk about that anymore?

This framework puts us in a decent position to answer our initial question: there are clear ways in which human minds aren't just large language models. It's an interesting notion that the fundamental operation of the mind is somehow analogous to a very powerful autocomplete engine, but as anyone thinking about this subject should be aware, there's obviously more to it than that.
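As promised, here's the heat example made explicit, since it's the textbook case of a reduction done with the legwork included. For a monatomic ideal gas, kinetic theory supplies a definite bridge between temperature talk and molecule talk:

$$\langle E_k \rangle = \tfrac{3}{2} k_B T,$$

where $\langle E_k \rangle$ is the average translational kinetic energy of a molecule, $T$ is the absolute temperature, and $k_B$ is the Boltzmann constant. Every claim in the old vocabulary gets a definite counterpart in the new one, which is exactly what the stochastic-parrot crowd never provides.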
Geoffrey Hinton, Sam Altman, and similar LLM evangelists don't really offer any account of how to map mental phenomena onto these algorithms in a way that captures prominent features of our own cognitive experience in that new vocabulary, stuff like subjectivity, attention, and agency. They're more interested in declaring those phenomena reducible in principle to the operation of artificial neural networks, so they can move past vague philosophical inquiries about whether LLMs are truly intelligent or conscious or whatever. It's an understandable move. If any fifth grader could ask a hard question that undermines the very cool thing I'd spent my life building, I'd probably also look for any possible way to shift the burden of proof. But it's not an honest move. You can tell by the casual way they turn the things off.

Does this linguistic account give us a decent lens for distinguishing good-faith from bad-faith attempts at reductionism? Does your frontal lobe itch every time you hear someone use the phrase "in principle"? Please leave a comment below and let me know what you think. Thank you very much for watching. Don't forget to blow up, subscribe, and share, and don't stop thunking.