I just started a D&D campaign with a guy who's playing a wizard who specializes in prophecy. You know, for the foreseeable future.

In the mid-80s, futurist and entrepreneur Paul Saffo developed a mantra that spread through numerous Silicon Valley company cultures: strong opinions, weakly (or loosely) held. The saying was intended to combat a fairly common problem for startups: the indecision and paralysis that come from uncertainty about the future. Rather than sitting around waiting for enough information to be confident about which course of action would be best, Saffo suggested that the quickest way to get to a decent prediction was to hammer out a tentative forecast as quickly as possible, however flawed or incomplete, and then test, amend, and refine it with analysis and criticism. The aim was to rapidly develop good predictive models by laying out an assertion, any assertion, with the implicit understanding that it was to be questioned, debated, and amended to get to something better.

"Redesigning the user interface from scratch is going to make the project late."

"Well, we did it on that one project a couple years ago."

"True, but we went with a very simple design and we were working weekends."

"Right, so we may be able to do it before the ship date if we're willing to crunch and if the client approves a fairly simple design."

Of course, as with many pithy phrases that contain a kernel of wisdom, "strong opinions, weakly held" has been used as a justification for all sorts of behavior, not all of it in keeping with Saffo's original intent. Several authors have noted that while teams may aspire to dispassionate analysis of ideas, regardless of whose ideas they are, there are often other factors at play that dictate which ones get scrutinized more closely.
When the boss makes some prediction, even if it's just a jumping-off point for discussion, it might take an awful lot of conviction and gumption for an intern to voice any doubts about it, even if those doubts are well founded, while a junior member of staff might float an idea that gets torn to shreds in short order. Anchoring bias also has a substantial effect on these discussions: the first idea voiced will inevitably color the group's evaluation of the ones following.

In its worst form, "strong opinions, weakly held" can be hijacked as a convenient defense by blowhards, people who like to have opinions, loudly, over and over, without any acknowledgement that their statements are unsubstantiated or possibly even damaging. If confronted about these assertions, they can simply claim to be making strong speculations as per the mantra, ready and willing to change their minds if anyone cares to do the work of refuting those statements.

"Look, I'm just throwing stuff out there to see what sticks. Maybe your mother really isn't a baboon. I'm not certain about it. If you show me some photos, I'll happily back down. It's just a strong opinion, weakly held. No reason to get worked up."

Saffo's goal is an admirable one, and many business owners have attributed some amount of their success to his lesson about getting a good-enough predictive model by seizing the first crappy one that comes to mind and iterating away its obvious shortcomings. If that's all that you need, and it works for you, great. But I have an amendment that might make that process even faster and more accurate. Maybe.

In 1964, analyst Sherman Kent submitted a paper to Studies in Intelligence, a peer-reviewed journal covering subjects of interest to national intelligence agencies like the CIA or MI5.
The paper, "Words of Estimative Probability," was ignored by those agencies, despite detailing some significant failings that might easily have been avoided with a tiny shift in how they communicated about uncertainty.

One example Kent cited was a 1951 report to bigwigs in the US State Department about the possibility of a Soviet invasion of Yugoslavia. The report concluded with the statement: "Although it is impossible to determine which course the Kremlin is likely to adopt, we believe that the extent of satellite military and propaganda preparations indicates that an attack on Yugoslavia in 1951 should be considered a serious possibility." That's a sobering warning, to be sure. But days later, in an informal conversation with an official who had read the report, Kent was asked to put a number on that estimate: his best guess as to the percentage chance that the USSR would invade. He put the odds around 65%. The official was astonished, saying that just about everyone in the department had interpreted "serious possibility" as a substantially lower number. Now curious, Kent went back to the colleagues who had co-authored the report, asked each of them the same question, and was dumbfounded to find little or no consensus. The estimates for what "serious possibility" meant ranged from 20% to 80%.

This sort of ambiguity undermines the entire enterprise of quantitative risk assessment and analysis. If you spend months working incessantly with classified documents, dossiers, and satellite images trying to tease out the likely course of world events, and your final judgment on the matter can be interpreted as anything from one in five to four out of five, why even bother? Just say it's a "serious possibility" and knock off early for lunch.

Instead, Kent suggested a set of standard terms to communicate the probability of an event happening, or not happening, unambiguously. For example, if you put the odds at around 75%, he suggested using the word "probable."
If it's a coin toss either way: "even chances." If you say something is "almost certainly not" going to happen, its likelihood should be on the order of 2% to 12%.

Of course, you're probably not an intelligence analyst. Or should I say, you're almost certainly not an intelligence analyst. But Kent's recommendation for disambiguating uncertainty seems to have useful corollaries for Saffo's stated goal of rapidly developing fairly accurate predictive models. It seems right that having a concrete starting point makes it easier to develop a decent model quickly, especially compared to an endless cycle of requesting more and more data to try to eliminate uncertainty. However, Saffo's original paradigm of strong opinions, weakly held, at least as it's usually practiced, doesn't encourage any fine-grained evaluation of odds. The opinions generally take the form of bold pronouncements without any indication of confidence: "X will happen." This might be a useful starting point for a discussion, but an even more useful starting point might be something like, "I'm 75% sure that X will happen." Not only is this framing less ambiguous, inviting critique of both predicted outcomes and their relative likelihoods, it very naturally invites speculation and fine adjustment of those ideas in a way that strongly stated opinions don't.

It's certainly possible for a team to work hard and diligently train themselves to interpret pronouncements of strong opinion in a particular way, to internalize a culture of treating statements as invitations for criticism and dialogue in a way that might cause exasperation in any other context. But acknowledging and quantifying the uncertainty inherent in a prediction seems like a much shorter path to that sort of open discourse. When I hear someone say something like, "I put the odds at, like, 30% that this ships on time if we redesign the UI," my mind immediately engages with the problem as it really is: a prediction of likelihood.
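As an aside, Kent's vocabulary really just amounts to a small lookup from probability to phrase. Here's a minimal sketch in Python using the figures mentioned above (75% for "probable," 50% for "even chances," 2–12% for "almost certainly not"); the exact cutoffs between terms are my own rough interpolation, not Kent's published table:

```python
def kent_term(p: float) -> str:
    """Map a probability in [0, 1] to a Kent-style estimative phrase.

    The boundary values are approximate interpolations around the
    anchor figures discussed above, not Kent's exact table.
    """
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    if p >= 0.87:
        return "almost certain"
    if p >= 0.63:
        return "probable"
    if p >= 0.40:
        return "chances about even"
    if p >= 0.12:
        return "probably not"
    return "almost certainly not"

print(kent_term(0.75))  # probable
print(kent_term(0.05))  # almost certainly not
```

The point of the exercise is the inverse direction, of course: when you catch yourself reaching for one of these phrases, you should be able to say which band of numbers you mean by it.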
I start thinking about the variables involved that might lead someone to that estimate and comparing them to my own intuitions. If I think that number's low, that line of inquiry flows very easily into exploring why our mental models differ so much and how to synchronize them.

In a world of pandemics and wildfires and all sorts of poorly understood, rapidly changing situations, uncertainty has become a much more prominent feature of how we interact with our environment. Many people struggle to view the future in terms of estimates and probabilities. If you tell them that wearing a mask makes it 90% less likely that they'll accidentally infect someone, they'll still press for some sort of hard-and-fast guarantee that it either will or won't work, unwilling to engage with the problem or change their beliefs or behavior unless the uncertainty is eliminated. It's certainly easier to parse a strong opinion as being either right or wrong when it either does or doesn't come to pass. But I think that stating predictions in terms of estimative probability might be a quicker and more effective way to draw people into the process of predicting and revising odds. I'm, like, 75% sure of it.

What about you? Have you participated in either sort of guessing about the future? Have you had any strong opinions, weakly held? Please leave a comment below and let me know what you think. Thank you very much for watching. Don't forget to...

"Don't forget to blah, blah, subscribe, blah, share."

"Who the hell are you?"

"I'm here from the future, stupid."

"That's not very nice."

"I'm sorry. I didn't mean anything by that. It's just that I know literally everything that you know, plus a little bit more."

"Okay, fair enough. I was thinking of doing a retrospective video for episode 200 where I watch some of the old episodes and do some sort of commentary on them."
"Everybody who's watching this right now should go to the link in the description and select five episodes that they'd really like me to watch, reflect on, comment on, maybe cringe at."

"Yeah, what he said."

"Yeah, it'll be fun. Maybe."