I'm trying to write a song to welcome people to the thunk community, but I can't really decide how it should go. I mean, should I use that chord? Discord? If you'll indulge a little of my creative process here, I've rewritten this introduction several times, using numerous different examples of a phenomenon you might be familiar with, something like analysis paralysis: failures that arise from a desire to find an optimal solution, or a reluctance to implement any solution out of fear that it might not be the best way forward. Slow government responses to the threat of COVID-19, stalling on placing a grocery order because I wasn't sure what kind of bread to get, vacillating on which to-do app to use, that sort of stuff. After fleshing out each example, I'd wonder if it was really the best way to approach the topic, whether there might not be a simpler or clearer or more relatable way to get the point across. And here we are. Only took me four hours. The compulsion to optimize is easy to understand, especially for folks who have a particular interest in improving cognition and rationality. Whenever game theorists or psychologists analyze how we make decisions, and how we ought to, the gold standard for rational decision-making is maximizing expected utility: making choices in such a way that the decider has the best possible odds of getting the most desirable payoff, whatever that is for them. When we talk about things like cognitive biases or errors in judgment, we're talking about deviations from that theoretical framework. If you're not maximizing utility, you're acting irrationally, and the greater the discrepancy, the greater the error. People who care about making good decisions probably think a lot about how close their choices get to that ideal, especially with the stock market where it is right now. But the real world is very rarely as clear-cut as in those experiments.
In order to even measure deviation from demonstrably optimal decisions, we had to construct tests that satisfy some truly meticulous conditions, conditions that are hard, if not impossible, to come by in everyday life. Like, say you wanted to start your first day as a perfectly rational actor, wholly committed to behaving in a way that will maximize your expected utility. What should you have for breakfast? Well, you'd need a well-defined set of metrics for valuing the overall utility of the various things you might eat, probably including nutritional content, pleasure of consumption, cost, preparation time, novelty, maybe the moral aspects of how it's sourced and produced, all sorts of stuff. You'd need an exhaustive list of items you could potentially eat for breakfast, for comparison. You'd need some serious number crunching to evaluate the overall utility of each of those items in the current state of the world. Incidentally, did you know that there's a thunk Folding@home team? Needless to say, by the time your breakfast utility calculator spits out an answer, you're probably going to be very, very hungry. That enormous gap between the maximum-expected-utility definition of rationality and the computational complexity of everyday life might strike you as suspicious. If everyday decisions like "what should I have for breakfast" are so far outside the realm of what can be reasonably evaluated using our yardstick of maximizing expected utility, maybe all these cognitive biases and predictable errors in cognition aren't really mistakes, but well-adapted tools for a kind of problem-solving that isn't being measured by the very specific scenarios engineered in labs. In his essay, The Fiction of Optimization, psychology researcher Gary Klein notes how thoroughly the limitations of utility maximizing remove it from the realm of usefulness, going so far as to suggest that it's actually detrimental to take it too seriously.
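To see why the breakfast calculator is such a chore, here's a toy sketch of what "maximize expected utility" actually demands. Every item, metric, and weight below is invented for illustration; a real calculator would need exhaustive lists of each, which is exactly the problem.

```python
# Toy expected-utility calculator for the breakfast example.
# Each candidate is scored on several metrics (higher is better).
options = {
    "oatmeal":   {"nutrition": 8, "pleasure": 5, "cost": 7, "prep_time": 6},
    "pancakes":  {"nutrition": 4, "pleasure": 9, "cost": 5, "prep_time": 3},
    "leftovers": {"nutrition": 6, "pleasure": 6, "cost": 9, "prep_time": 9},
}

# The decider must also commit, in advance, to a weighting of the metrics.
weights = {"nutrition": 0.4, "pleasure": 0.3, "cost": 0.2, "prep_time": 0.1}

def utility(scores):
    """Weighted sum of metric scores: the overall 'value' of one option."""
    return sum(weights[m] * s for m, s in scores.items())

# Maximizing means scoring *every* option before choosing the best one.
best = max(options, key=lambda name: utility(options[name]))
```

With three options and four metrics this is instant; with every possible breakfast and every metric that actually matters to you, you go hungry.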
He says that all the fuss that's made about how we should correct for and eliminate biases in the interest of making people better utility maximizers is kind of missing the forest for the trees. If you watch people operating in the world, they actually do remarkably well at making good decisions in rapidly changing circumstances. Klein's original research focused on veteran firefighters, observing that they made many split-second choices in life-or-death scenarios that were incredibly good and almost automatic, without any sort of comparison or weighting of different options the way a utility maximizer might. He takes the position that we should understand both our cognitive apparatus and the best ways to use it in terms of bounded rationality, a framework that examines rationality with an eye toward its real-world constraints: time, processing power, that sort of thing. The bounded rationality paradigm might just seem like a fluffier version of utility maximization at first glance. Yes, yes, if you don't have the time or the resources to find the right decision, making the most optimized decision you can with what you have is still rational, I guess. But the shift of emphasis to the limitations of decision making has some interesting implications. Rather than viewing cognitive biases as absurd deviations from rationality, Klein suggests that they're better understood as useful heuristics, mental shortcuts and rules of thumb optimized for rapid, good-enough choices where we'd otherwise be totally helpless to find anything close to a decent answer. In that view, trying to eliminate the effects of bias would be like flinging your oars overboard because one is slightly bigger than the other, opting to paddle yourself around with your hands.
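Klein's essay doesn't spell out an algorithm, but the classic bounded-rationality strategy is Herbert Simon's "satisficing": stop at the first option that's good enough rather than scoring them all. A minimal sketch, with the function name and threshold scheme my own illustration:

```python
def satisfice(options, score, threshold):
    """Return the first option whose score clears the good-enough bar,
    scanning options in the order they're encountered. If nothing
    clears it, fall back to the best option seen along the way."""
    best_seen, best_score = None, float("-inf")
    for opt in options:
        s = score(opt)
        if s >= threshold:
            return opt  # good enough: stop searching immediately
        if s > best_score:
            best_seen, best_score = opt, s
    return best_seen
```

Unlike the exhaustive maximizer, this loop usually terminates early, and how early depends on how you set the bar, which is itself a time-versus-quality trade-off.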
If we constantly judge our actions and choices by comparing them to some theoretical world where we have infinite time and processing power to evaluate our options, it seems totally reasonable to defer decisions as long as possible, gathering data and hemming and hawing until the very last possible moment, to be assured that we're not missing anything. Unfortunately, that approach tends to err on the side of inaction. "Let's not be hasty" is the refrain of those who maintain the status quo up until it's too late for them to change the course of events, at which point the rational option is trivially easy to calculate, because it's the only one left. This failure of delaying action in deference to optimal decision making is exacerbated by an observation called Fredkin's Paradox. Let's say that I give you a choice between a handful of spiders flung into your face and a slice of cake. You can probably answer that with no hesitation, less than a second of thought. I hope. But let's say that I give you a choice between this slice of cake and that slice of cake. Well, now this is a real conundrum, isn't it? You might ponder that for minutes, maybe ask for more information about the composition and flavor of the cakes, ask the waiter which one he prefers. This leads to a somewhat counterintuitive conclusion: a decision maker who's trying to maximize their expected utility will spend the least amount of time considering the decisions that give them the greatest benefit, and the most time considering decisions that really don't make much difference at all. Show of hands: who's been trapped in a two-hour meeting about something that doesn't really matter? Fredkin's Paradox puts a fine point on the behavioral problems of treating maximum expected utility as the fundamental measure of reason.
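One way to see why Fredkin's Paradox happens: if your estimates of each option's value are noisy, the closer the true values are, the more evidence you need to accumulate before one clearly wins. A toy sequential-sampling sketch; all the numbers here are arbitrary stand-ins:

```python
import random

def samples_to_decide(value_a, value_b, noise=1.0, margin=20.0, seed=0):
    """Draw noisy estimates of two options' values and accumulate the
    difference until the evidence clearly favors one side. The number
    of samples taken is a stand-in for deliberation time."""
    rng = random.Random(seed)
    evidence, n = 0.0, 0
    while abs(evidence) < margin:
        evidence += (value_a + rng.gauss(0, noise)) - (value_b + rng.gauss(0, noise))
        n += 1
    return n

# Spiders vs. cake: huge value gap, decided almost instantly.
easy = samples_to_decide(value_a=10.0, value_b=0.0)
# This cake vs. that cake: tiny gap, deliberation drags on.
hard = samples_to_decide(value_a=10.0, value_b=9.9)
```

The near-tie takes vastly more samples than the spiders-versus-cake case, even though picking the "wrong" cake would cost almost nothing: exactly the inverted relationship between stakes and deliberation that the paradox describes.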
Not only are we in danger of getting stuck trying to optimize past the point when we really should have been acting, it's likely that the vast majority of that time will be spent fine-tuning inconsequential details rather than substantially improving outcomes. In the context of bounded rationality, where we're trying our damnedest to make the best possible decisions inside strict budgets of time and processing power, it starts to look a little silly to bemoan things like confirmation bias, which seems specifically geared to light a fire and get people to act rather than endlessly debating which slice of cake will really hit the spot. Klein wraps up his essay by offering an archetype of what an optimized bounded rationalist might look like, examining some champions of highly constrained, high-accuracy decision making: speed chess players. That's a fair ways from firefighting, but there are some commonalities in the way that experienced chess masters and veteran firefighters go about making moves without resorting to time-consuming optimization. Pro chess players don't really stack potential choices up against each other and evaluate the relative pros and cons to fine-tune the best possible move. They look for clear openings and threats. They'll play out some possible responses to check whether there's anything disastrous or awesome that might result from a certain line of play. And if they see a strategy that's clearly dominant after a reasonable amount of thought, they pounce on it. There are certainly subconscious processes at work, trained by experience, filtering which choices get elevated to conscious consideration, but there's no trace of the stepwise analysis you might expect out of an optimizer. Nobody at that level of play is thinking, well, I get better control of the center, but I'll lose a pawn and that bishop might be annoying. It's just: no, no, no, maybe. Yes. Your move.
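Klein's name for this style is recognition-primed decision making: consider candidates one at a time, in the order experience serves them up, mentally simulate each, and commit to the first one whose outcome looks acceptable. The loop below is my own gloss on that description, not Klein's formalism:

```python
def recognition_primed_move(candidates, simulate, acceptable):
    """Take candidate moves one at a time (in the order experience
    suggests them), mentally simulate each, and commit to the first
    whose simulated outcome is acceptable. No side-by-side scoring,
    no ranking of the rejects."""
    for move in candidates:
        outcome = simulate(move)
        if acceptable(outcome):
            return move  # pounce: later candidates are never examined
    return None          # nothing workable found
```

The expertise lives in the ordering of `candidates` and the calibration of `acceptable`, which is why veterans and novices can run the same loop with wildly different results.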
Klein notes that this approach really shines when the clock is running out and the pressure mounts. While Deep Blue was clearly a superior optimizer, it didn't really have any capacity for desperation moves: wild, risky, half-second maneuvers designed to maybe open up some possibilities when there isn't enough time for a proper plan. The computer would sit there and patiently grind down its remaining time in almost exactly the same fashion as for its other moves, while human players might sometimes be able to snatch a stalemate by just trying anything that looks promising and seeing if it sticks. I know, it's a weird message for a channel that's been mostly about thinking more thoroughly about things, but rather than cogitating endlessly to identify increasingly optimal solutions, maybe the best decision-making habits are actually those which enable us to identify blazingly obvious advantages and act on them without hesitation. I certainly could have used that advice when I started writing this episode. How about you? Do you think that you're reaping the greatest possible utility within the limits of your apparatus? Please leave a comment below and let me know what you think. Thank you very much for watching. Don't forget to blah, blah, subscribe, blah, share, and don't stop thunking.