I have a really bulletproof method for making chop suey that involves a feather. I call it my system of a down.

The human body is a complex set of delicate mechanisms with an incredibly limited range of acceptable operating conditions. It's no wonder that bodies break down sometimes, and that fixing them can be a fussy process. I mean, sure, you can bypass arteries and remove whole sections of the digestive system, but leave just one scalpel in there... That might sound like a joke, but a retained surgical foreign body used to be one of the most common surgical complications, occurring in almost one in every 1,000 procedures, right up there with things like ripped sutures or infection. It seems crazy that an entire surgical team could take half an hour to scrub in, go to such great lengths to create a sterile environment free from potentially infectious pathogens, and then drop a suture needle in someone's chest and just leave it there. But it happened, a fair amount.

In his book The Checklist Manifesto, doctor and author Atul Gawande uses a number of examples like this one to make a compelling case that the practice of medicine can be significantly improved by rigorous systematization: by creating and following algorithmic procedures to the letter. One such system, the counting of surgical implements before and after each operation (every needle, every sponge, by multiple parties), seems like a stupid bureaucratic thing that every surgical team has to do now because a few absent-minded idiots left their forceps in someone. But in one study of 150,000 surgeries, there were 17 instances where counting caught something that would have been left behind.

Systems like the counting method can be thought of as specialized technologies that humans substitute for their own judgment and reliability in certain tasks, in everything from NASA rocket launches to doing the dishes. As helpful as they can be, we tend to use them with grudging acceptance rather than enthusiasm, and I think it's worth discussing why that is.

First, and most obviously, there's the ego problem. Intrinsic to that definition is the implication that someone's judgment and reliability need augmentation, which is to say that they're just not good enough on their own. Every time the scrub nurse spends five minutes counting after a routine hip replacement, it's a reminder that surgery teams aren't perfect, that they sometimes screw up in embarrassing ways, and that if they could just do this one simple thing right every time, they wouldn't need a system. It's got to be humbling to trade in human judgment for mindless box-ticking, even if the box-ticking gets better results.

Second, there's the expectation problem. Our ideas about what sorts of things humans should be able to do right every single time are an important part of our attitude towards systems, and those ideas are usually just based on gut feelings, distorted through several funhouse mirrors of cognitive bias, including the just-world fallacy and the fundamental attribution error. In short, we'd like to imagine that only screw-ups can screw up in certain ways, even if the data say that most humans struggle with something, and that if we ever find ourselves in that situation, it was just an unpredictable fluke of circumstances.

There's also the frequency problem. Because they're designed to prevent relatively rare events, systems are useless the majority of the time, by design.
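To put a number on just how rare, here's the arithmetic on the study figures quoted above. This is a quick back-of-the-envelope sketch; the variable names are mine, and nothing beyond the two figures already mentioned is assumed.

```python
# Back-of-the-envelope arithmetic on the figures quoted above:
# 17 catches across 150,000 surgeries in the counting study.
surgeries = 150_000
catches = 17

print(f"Share of surgeries where counting caught something: {catches / surgeries:.4%}")
# prints: Share of surgeries where counting caught something: 0.0113%

print(f"That's roughly 1 in {round(surgeries / catches):,} operations.")
# prints: That's roughly 1 in 8,824 operations.
```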
Far less than 1% of surgeries benefited from the counting method, and every single count was a new small obstacle added on top of the extraordinary difficulty of operating. When you're the one who has to count every time, if you're not in that tiny fraction who catch something, it's hard to see the utility of the practice; and even if you are in that fraction, it's easy to convince yourself that you would have caught it anyway.

Despite these annoyances, humans have successfully implemented some really amazing systems and benefited from their inhuman consistency. Pilots and construction crews use checklists to achieve incredibly low rates of mechanical failure. Science is a systematized process for discovering truths about the universe, far beyond what we could reliably determine otherwise.

But just as good systems can empower us with superhuman reliability, bad systems can cripple us in horrifying ways. The U.S. federal government's glacial response to the havoc caused by Hurricane Katrina in 2005 was criticized by numerous parties worldwide. The failure to provide timely disaster relief was attributed to several factors, but one of the primary findings of the House of Representatives investigative committee was that the National Response Plan, the system that had been put in place to deal with emergencies like Katrina, did not adequately provide a way for federal assets to quickly supplement or, if necessary, supplant first responders. That is to say, the resources to help existed, but the operation of the system as planned didn't route them to where they were needed, when they were needed.

In his 2008 paper Making Hurricane Response More Effective, economist Steven Horwitz contrasts this inefficiency with the heroic success of the Coast Guard, as well as the corporate juggernaut that is Walmart. Just before the hurricane hit, Walmart CEO Lee Scott effectively deputized everyone in the management hierarchy to respond as they saw fit to the oncoming catastrophe, saying, "A lot of you are going to have to make decisions above your level. Make the best decision that you can with the information that's available to you at the time, and, above all, do the right thing." That temporary suspension of the normal systems for corporate decision-making was responsible for invaluable support efforts in the days following the disaster. Store managers gave away life-saving supplies to emergency responders and local residents. Police officers were invited to use Walmart offices for housing and as a base of operations. Some employees, on their own initiative, broke open locked areas of the building to provide victims access to water and medication.

Horwitz similarly attributes the exceptional response of the Coast Guard in the period immediately following the storm to its flexible, decentralized, and highly adaptive structure, citing its improvised partnership with local fishermen who knew the area to mount a more effective rescue effort. That's not anything that could have been realistically written into a checklist ahead of time. That was just quick thinking and adaptability, and it helped them rescue thousands.

Importantly, the inadequacy of the National Response Plan wasn't just a case of too many bureaucratic steps or too much red tape. As the House committee put it, the primary failing was one of initiative: the inability of the people following the plan to take independent action.
When it was working as intended, the system compelled thousands of diligent, motivated relief workers to wait around helplessly for days, rather than responding to the situation according to their own judgment. A better plan might have actually facilitated a more responsive relief effort, but in this case (and, as Horwitz argues, in every case) the centrally planned, systematic approach to disaster relief hurt rather than helped.

Which leads us to law, and to jurisprudence, the philosophy of law. Law is also a system, like checklists or counting surgical implements, with the added complication that it's supposed to... well, it depends on who you ask, but let's say that it's meant to ensure that people aren't total jerks to each other, to promote a helpful framework for the operation of a society, and, more recently, to make the exercise of political power accountable. We generally agree that good laws exhibit some common characteristics: fairness, justice, effectiveness, that sort of stuff. Unfortunately, these values sometimes conflict with each other in border cases, and it's mostly a subjective call as to which should take precedence, which accounts for some of the variation we see in the implementation of legal standards between different societies and cultures.

One of these variations is a tension of more or less the sort that we've been discussing: between rules and standards. For legal systems that lean towards the rules side of the spectrum, like the US legal system, the most important quality of a good law is its predictability: the idea that laws should function more or less like clockwork. Every law should be clearly worded and unambiguous, and whenever someone breaks one, the result should be algorithmic, like flipping a switch. The sign clearly says "No parking, $200 fine," and if you park there, you get fined $200. Period, end of story. If the text of the law results in some undesired consequence, well, then the law must be wrong, and you should fix it. Using the law.

For standards-oriented systems, however, the priority is more about thoughtful and sensible application of legal power where it's really needed. It's well and good to try to encapsulate everything that law needs to do with careful thought and exacting terminology, but single-minded application of those rules often misses the forest for the trees, failing to capture the underlying principles we're actually trying to uphold. The problem isn't that you're parked in a no-parking zone. The problem is that by parking in a no-parking zone, you're blocking access to that driveway, and that's a jerk move. I don't really care what the sign says; just don't drive like a jerk.

It's easy to imagine the potential drawbacks of this sort of discretion in the application of the law, as we hear complaints about them fairly often, even in the mostly rules-leaning U.S. The wealthy and powerful frequently get reduced punishment. Groups who are traditionally discriminated against get demonstrably harsher sentencing. The enforcement of some especially restrictive laws, like speed limits, seems to depend more on the mood of the police officers involved, or on whether their monthly ticket quota is coming up, than on the safety of other drivers.

But there are some issues that rules-based jurisprudence runs into that echo the problems with systems we've already discussed. Rules are brittle, and tend to be unhelpful or even disastrous when applied in situations not envisioned by their authors, as the little sketch below tries to make concrete.
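Here's a toy contrast between the two approaches in code. Nothing below comes from any actual statute; the parking scenario, the ambulance wrinkle, and all the function names are invented for illustration.

```python
# A toy contrast between a rule and a standard, using the parking example.
# Everything here is invented for illustration; no real statute works this way.

def rule_based_fine(parked_in_no_parking_zone: bool) -> int:
    """A rule: mechanical and perfectly predictable. Facts in, outcome out."""
    return 200 if parked_in_no_parking_zone else 0

def standard_based_fine(parked_in_no_parking_zone: bool,
                        blocking_a_driveway: bool,
                        is_emergency_vehicle: bool) -> int:
    """A standard: the adjudicator weighs context against the law's
    purpose (here, roughly 'don't drive like a jerk')."""
    if not parked_in_no_parking_zone or is_emergency_vehicle:
        return 0  # an ambulance is a case the rule's author never envisioned
    return 200 if blocking_a_driveway else 0

# The rule never hesitates, and never thinks:
print(rule_based_fine(True))  # 200, even if it's an ambulance at a heart attack
# The standard bends where the rule would break:
print(standard_based_fine(True, False, True))   # 0: judgment kicks in
print(standard_based_fine(True, True, False))   # 200: blocking the driveway is the jerk move
```

The predictability the rule buys comes precisely from its refusal to look at context, and that refusal is also exactly what makes it brittle.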
Rules also tend to age poorly, especially in areas where rapid change is the norm, like the technology sector. And in order to be specific, they often have to cover so many different explicit situations that the resulting body of law becomes difficult or impossible to navigate, leaving you just as uncertain as you might be under a system that leans more towards standards. Rules can also provide a false sense of rigor and objectivity, just as other systems can. It's entirely possible to implement seemingly innocuous rules which, even when applied to the citizenry equally, result in some sort of monstrous injustice. How would you feel about a universally enforced, fairly applied law forbidding any sort of travel except by private helicopter?

If you're like me, you may find the urge to systematize deliciously seductive. My first impulse every time anything gets seriously screwed up is to imagine and implement rules or procedures that might have prevented the error. It's no accident that I have about 15 different reminder apps on my phone. Sometimes that's absolutely the right call, even when our egos tell us that we ought to be able to function on skill alone. But while it can be comforting to have some sort of formal structure in place instead of relying on human judgment, it's important to bear in mind that the wrong systems aren't just irritating; they can actually make our goals harder or impossible to achieve. Whether that goal is a heart transplant, a hurricane relief effort, or national legislation, we shouldn't mistake having a rule for having a solution.

What systems have you found most helpful in your life? What do you think of rules-versus-standards jurisprudence? Please leave a comment below and let me know what you think. Thank you very much for watching. Don't forget to subscribe, like, share, and don't stop thunking.