Each time I'm used, somebody's found dead. Yet I bring delight and never cause dread. What am I? I don't know, it's a mystery.

Do you know how some people just don't get puzzles? Like, you tell them that you've got a goat and a wolf and a cabbage, and you're trying to ferry everything across a river in a boat that can only hold one at a time, and they start asking whether the goat can swim, whether there's any rope to tie up the wolf, how the boat could possibly be cramped enough that a cabbage won't fit. They're clearly doing their best to figure out how to answer the question, but they're not really honoring the spirit of the thing. Although we use terms like puzzle, enigma, mystery, riddle, and brain-teaser interchangeably to describe both these sorts of games and various intellectual challenges we face in real life, we're aware on some level that proper puzzles aren't supposed to work like real-life ones, that they only admit a certain kind of problem-solving. Yes, you're right, you shouldn't get into the boat with a feral wolf. Not the point.

In a 1997 paper, education researcher and theorist David H. Jonassen suggested that although the sorts of things we characterize as problems can be split up in many different ways (simple or complex, short-term or long-term, and so on), these differences tend to cluster around three categories: puzzles, well-structured problems, and ill-structured problems.

Puzzles, like the goat-wolf-cabbage thing, are rigidly defined and totally decontextualized. You don't need to know why we're ferrying animals across a river. You don't even really need to know that goats eat cabbage. The problem is presented in its entirety by the puzzler, with all the information the puzzlee needs to solve it baked into the problem statement. There's only one correct answer, and it's pretty obvious in retrospect.
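As an aside, part of what makes a puzzle like this so self-contained is that it's a tiny state-space search. Here's a minimal sketch, assuming the classic rules (the farmer rows the boat, it carries at most one item besides him, and neither the wolf-goat pair nor the goat-cabbage pair can be left alone together); all names here are just illustrative choices:

```python
from collections import deque

ITEMS = {"farmer", "wolf", "goat", "cabbage"}

def safe(bank):
    # Whichever bank lacks the farmer must not pair the goat with the
    # wolf, or the cabbage with the goat.
    for side in (bank, ITEMS - bank):
        if "farmer" not in side:
            if {"wolf", "goat"} <= side or {"goat", "cabbage"} <= side:
                return False
    return True

def solve():
    start = frozenset(ITEMS)   # state = who is on the starting bank
    goal = frozenset()         # everyone has crossed
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        here = state if "farmer" in state else ITEMS - state
        # The farmer crosses alone, or with exactly one item from his bank.
        for cargo in [None] + [i for i in here if i != "farmer"]:
            movers = {"farmer"} | ({cargo} if cargo else set())
            nxt = frozenset(state ^ movers)  # toggle the movers across
            if safe(nxt) and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [cargo or "nothing"]))
    return None

print(solve())
```

A breadth-first search like this finds a shortest seven-crossing solution, goat first and goat last, with no questions asked about rope, swimming ability, or boat legroom.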
That self-encapsulation and ease of verification makes them a favorite tool for researchers, who don't have to control for things like individual experience or perception when they're testing cognitive processes.

Well-structured problems are more like exam questions. All the relevant features of the problem and its goal are similarly well-defined and laid out for the solver, but they require interpretation and application of a specific rule or fact to solve. A cantilevered beam of length L, modulus of elasticity E, and moment of inertia I is mounted to a perfectly rigid wall; calculate the deflection at its furthest end. The methods and concepts used to solve well-structured problems are generally clear-cut, orderly, and only really useful for solving other isomorphic problems. If you know the cantilevered beam formula, you can use it to answer any cantilevered beam question, but you're probably not going to use it to fix your computer or bake a cake. They're sometimes called transformation problems, because the puzzler is just translating information from one well-defined state to another according to definite rules.

Ill-structured problems encompass pretty much everything else. Their domains, methods of solution, goals, and evaluative criteria for being solved are messy and hard to define. Picking an apartment, tuning up a car, negotiating political disagreements, losing weight, designing a city, getting a Mother's Day gift for your mom: you're not going to find these sorts of problems on an exam. They're emergent and deeply dependent on the context in which they occur. They often rely on discovering and evaluating multiple ambiguous, contradictory, and uncertain sources of information, as well as considering the epistemic relationships between the solver and that information. They admit numerous possible solutions, and different people might disagree on which one is really the best answer.

The puzzle, well-structured, ill-structured problem taxonomy isn't strict.
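For what it's worth, that beam question really is a one-formula transformation. Assuming the usual textbook setup of a single point load P at the free end, the tip deflection is P·L³/(3·E·I). A quick sketch, with made-up numbers purely for illustration:

```python
def tip_deflection(P, L, E, I):
    """Deflection at the free end of a cantilever carrying a point load P
    at its tip: delta = P * L**3 / (3 * E * I)."""
    return P * L**3 / (3 * E * I)

# Illustrative values (not from the video): a 1 kN tip load on a 2 m
# steel beam with E = 200 GPa and I = 8.0e-6 m^4.
print(tip_deflection(1e3, 2.0, 200e9, 8.0e-6))  # about 0.00167 m
```

Knowing this one formula answers every isomorphic beam question, and nothing else.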
These are just well-populated regions on a continuum from low-context, convergent-solution problems to high-context, divergent-solution problems. There are, of course, outliers and edge cases that blur the lines a bit. But interestingly, although older models of cognition propose that the entire range of problem solving is fundamentally reducible to the same activity, more recent evidence suggests that as we move from one end of the spectrum to the other, there's a categorical shift in the skills, abilities, and processes used to solve problems: we're doing one sort of thing when we're playing a puzzle game and something totally different when we're planning a vacation or something.

That might sound a little strange, because the way we usually talk about problems and problem solving has no clear distinctions of type. But if you think about it, there are definitely cognitive processes necessary for ill-structured problem solving that simply don't come up when you're solving puzzles. Like, there's never any need to consider alternate perspectives or weigh the pros and cons of various answers to a puzzle. Once you've found an acceptable answer, you're done. You never have to evaluate its effectiveness or revisit it. Even having a clear prompt that explicitly calls for a solution removes a key obstacle for many real-life problems, which is trying to define and evaluate what the problem is, or if there even is one. I don't know, is it really that bad if the goat eats the cabbage?

Schraw et al. and many follow-up studies have found that proficiency in the domain of puzzles and well-defined problems does not predict similar proficiency in the domain of ill-defined problems, concluding that they require separate cognitive processes. Of course, just because they're separate doesn't imply they're totally independent. If someone's bad at solving structured problems, they're probably not going to be great at figuring out complex, ambiguous, poorly-defined ones.
But it seems entirely possible to be a real puzzle savant who's totally helpless when it comes to planning a wedding.

If you'll recall, back in episode 172, we discussed a phenomenon called engineer syndrome: the observation that engineers, who make a living by reducing large complex problems to simple tractable models, have an annoying tendency to exercise that method in fields where they have no real expertise, where they don't actually know what can and can't be simplified without affecting the outcome. Look, your lab mice are clearly malfunctioning. Obviously, you should try unplugging them.

In the context of Jonassen's taxonomy of problems, engineer syndrome can be seen as one instance of a more general human error: mistaking one category of problem for another and applying the wrong sort of cognitive processes to try and solve it. That can certainly be annoying when someone keeps trying to use creative problem-solving skills to bypass the implied constraints of a puzzle. But if an architect tries to reassure you that you're in good hands by showing you their Cities: Skylines builds, you might want to look for someone else.

Can you think of any examples of someone mistaking an ill-structured problem for a well-structured one? Have you ever met someone who thinks their skill at chess would make them a brilliant military strategist? Please leave a comment below and let me know what you think. Thank you very much for watching. Don't forget to subscribe to our channel, and don't stop dunking.