I am wonderfully pleased to be able to introduce to you Evelyn Fox Keller, our speaker for today. Evelyn received her PhD in theoretical physics, a little ways from health law and policy; she did her doctorate at Harvard University. She has taught at a number of different universities, including the University of California at Berkeley, where she was a professor in the departments of rhetoric, history, and women's studies. Just pay attention to all of these different fields she has been studying and teaching in. She has also taught at Northeastern University, SUNY Purchase, and New York University. Her most recent home is the Massachusetts Institute of Technology, where she is professor of history and philosophy of science in the Program in Science, Technology, and Society. Among the numerous academic and professional honors she has held is the Blaise Pascal Research Chair for 2005-07, which I think we would all like to obtain, because it means you can live in Paris for two years. Which she did. She also holds elected membership in the American Philosophical Society and the American Academy of Arts and Sciences. She is the author of over ten books focusing on the history and philosophy of modern biology and on gender and science. Her most recent book is called The Mirage of a Space Between Nature and Nurture, and her topic today is Legislating for Catastrophic Risk. So please join me in welcoming Evelyn Fox Keller.

Thank you, Elaine, for the introduction and the invitation. It's a pleasure to be here. This talk is a little afield from the other talks I'm giving while at Dalhousie, but I hope it's appropriate to your concerns. My particular focus is on the question: what is the rational response to catastrophic risk? And within that question is the question of what the rational legislative response would be, and of how we deal with that.
And the paper comes out of a very big preoccupation of mine with climate change. The question of how people perceive and respond to risk has received a great deal of attention in recent years. Research over the last three decades has shown that because of the so-called cognitive limitations or defects of the brains we have evolved, we are generally prone to misestimating and misperceiving risk, tending especially to overestimate the magnitude of those risks that are highly evocative and to ignore less evocative ones. As the author of a recent Time magazine article explains, we pride ourselves on being the only species that understands the concept of risk, yet we have a confounding habit of worrying about mere possibilities while ignoring probabilities, building barricades against perceived dangers while leaving ourselves exposed to real ones. But how do we know which risks are the real ones? For that, we turn to experts who assess the risk of some hazard objectively. That is, they compute first the magnitude of the hazard and second the probability that it will occur, and multiply these two numbers together. Ordinary people, however, do not, not even tacitly, perform such calculations. Instead, they make use of various kinds of shortcuts, heuristics, rules of thumb that permit them to make rapid and intuitive, if biased, assessments. One such shortcut is the availability heuristic, the rule of thumb by which people evaluate the probability of an event according to the ease with which relevant instances come to mind. Cass Sunstein is a legal scholar, currently serving as the administrator of the White House Office of Information and Regulatory Affairs, and he has been strongly influenced by the work of Danny Kahneman and Amos Tversky. Sunstein is especially concerned with the difficulties of fashioning rational public policy in the face of the threat of disastrous events.
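The expert calculation just described, magnitude times probability, can be put in a few lines of code. This is a minimal sketch; the hazard names and numbers are invented for illustration, but they show how an evocative, rare hazard can carry a smaller expected loss than a mundane, common one:

```python
# Sketch of the "objective" expert risk calculation described above:
# expected loss = probability of the event x magnitude of the harm.
# The hazard names and numbers are invented for illustration only.

def expected_loss(probability: float, magnitude: float) -> float:
    """Expected cost of a hazard: probability times magnitude."""
    return probability * magnitude

hazards = {
    "vivid_but_rare":     (0.0001, 1_000_000),  # highly evocative, low probability
    "mundane_but_common": (0.05,       5_000),  # less evocative, high probability
}

for name, (p, loss) in hazards.items():
    print(f"{name}: expected loss = {expected_loss(p, loss):.0f}")
```

On these made-up numbers the mundane hazard has the larger expected loss (250 versus 100), which is precisely the disparity between intuition and calculation that the Time article points to.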
Such events engage intense emotions, and these emotions, together with the availability heuristic, quote, lead us to focus on the worst case even if it is highly improbable, close quote; I'm quoting Sunstein here. Indeed, he writes, quote, when intense emotions are engaged, people tend to focus on the adverse outcome, not on its likelihood. This disposition to misestimate or to altogether overlook probabilities has predictable costs; especially, Sunstein argues, it inclines the public to overspend on regulations. His conclusion? Because ordinary people cannot be relied upon to make rational assessments of the risks they face, the protection of public welfare requires that regulatory policy be based on expert judgment and not on popular sentiment. A common explanation for the disparity between lay and expert judgment is that biological evolution has endowed humans with two different systems for apprehending reality. One, most commonly used in everyday life, is nonverbal, experiential, and very fast. The other, relied upon by experts, is analytic, deliberate, rational, and slow. But it has the virtue of being reliable, or at least that's the presumption. Paul Slovic studies the role of affect in decision-making and argues that affect is a central characteristic of the experiential system. To most readers, this might seem simply to underscore traditional views of emotion as antithetical to reason, and to account for the relative unreliability of the affect-based system. Sunstein himself often seems to accept such an account. But Slovic regards that view as overly simplistic and credits at least the possibility of a constructive role for emotion in decision-making, especially in an uncertain and hazardous world.
He writes, quote, although analysis is certainly important in some decision-making circumstances, reliance on affect and emotion is a quicker, easier, and more efficient way to navigate in a complex, uncertain, and sometimes dangerous world. Close quote. Slovic is not alone. The relation between emotion and rationality has come in for extensive re-examination in recent years, and a number of authors have joined Slovic in this effort, often looking to the findings of contemporary neuroscience for support, where it has been argued that emotional responses, precisely because they have been honed by evolution, might at times be useful supplements or even alternatives to conventional understandings of rational decision-making. Among all the emotions, the role of fear is surely the most controversial. Fear is also the emotion most obviously relevant to the threat of catastrophic events, and its relation to the misperception of risk has been a particularly prominent theme in these discussions. Slovic has emphasized the importance of what he calls the dread factor, arguing that differences between lay and expert estimates of the risk of serious hazards are due largely to lay tendencies to focus on catastrophic potential. Quote, the higher a hazard scores on this factor, the dread factor, the higher its perceived risk, the more people want to see its current risk reduced, and the more they want to see strict regulation employed to achieve the desired reduction in risk. Other studies have also cited fear as a factor that magnifies perceived risk. But fear has also been a topic of concern as a consequence of heightened risk perception. As Franklin D. Roosevelt long ago reminded us, fear is something to be feared in and of itself. And Sunstein agrees. He writes, fear is a real social cost, and it is likely to lead to other social costs. And elsewhere he writes, if information greatly increases people's fear, it will to that extent reduce welfare.
Reviewers of Sunstein's 2005 book on the laws of fear note that Sunstein clearly shares Roosevelt's concern, and they write, in Sunstein's view, the major thing proponents of democratically grounded risk regulation have to fear is, in essence, fear itself. Moreover, Sunstein seems to represent what has become a common view among economists, cognitive psychologists, and lay readers of this literature, especially since 9/11. That may be more true in the U.S. than in Canada, but it's certainly very conspicuous in the U.S. That shock gave rise to an enormous literature on the political abuses of fear, virtually all of which takes as a starting assumption the prima facie counterproductivity of arousing public fear. The one percent doctrine of the Bush administration has come in for particular criticism, especially by those critical of the use of a discourse of fear to promote the war on terror and to justify the war in Iraq. In November 2001, concerned about the possibility of a second attack, Vice President Cheney argued, quote, if there's a one percent chance that Pakistani scientists are helping al-Qaeda build or develop a nuclear weapon, we have to treat it as a certainty in terms of our response. It's not about our analysis, it's about our response. Close quote. From the point of view of decision theory, such a policy would be impossible to endorse, but as a political strategy it was unarguably successful, promoting political decisions that would otherwise have been extremely difficult to defend. Later, as these decisions came to be regretted, the doctrine came under mounting criticism for inducing a culture of fear in the American public. As one commentator put it, the war on terror has been about scaring people, not protecting them. Sunstein notes that Cheney appeared to be endorsing a version of the precautionary principle, according to which it is appropriate to respond aggressively to low-probability, high-impact events.
He also noted that the more familiar context in which the precautionary principle is generally discussed lies elsewhere, namely in climate change. Indeed, he writes, quote, another vice president, Al Gore, can be understood to be arguing for a precautionary principle for climate change, though he believes that the chance of disaster is well over one percent, close quote. In point of fact, climate change poses a multitude of risks of quite high probability, but for the present I want to focus on risks that are so devastating that they threaten the survival of civilization as we know it; in short, on catastrophic risks that most experts take to be of low, even if non-negligible, probability. Here, terrorism and climate change clearly share the same dilemma. How to respond rationally to the threat of such events? How much to spend, and how aggressively to act, in the effort to avoid catastrophic events? But the forms of action envisioned are quite different. Cheney envisioned a war on terror and on Iraq. Gore thought about regulation. Also, to the extent that the precautionary principle is understood as a mandate to do no harm, invoking that principle in the two contexts brings to the fore how very different the kinds of harm anticipated, or for that matter ignored, can be. Clearly, for climate change, the political shoe is on the other foot than it is for terrorism. But Sunstein's criticism of this principle is more general. He seeks to base his critique on logic, not on politics. Sunstein worries that reliance on the precautionary principle reinforces heuristics, like the availability heuristic, that lead both to excessive fear and to neglect of more important risks. He argues that codifying that principle embodies distortions of risk perception that themselves result in serious harm. He suggests even that it might be better to rename the precautionary principle the paralyzing principle.
The real problem, in his view, is not that the principle is wrong but that it offers no real guidance: it forbids all courses of action, including regulation, for every possible action, including regulation, risks doing harm to someone. The bottom line: human judgment is flawed, and because of the fear that such flawed judgments arouse, worst-case scenarios have an especially distorting effect on that judgment. For Sunstein, the solution lies in the advice of professional analysts and not in popular will. But this presupposes that expert analysis delivers rational assessments of what the risk really is, unlike public perception, which is, by contrast, cast as irrational. And the obvious question is, what notion of rationality is being invoked in these discussions? What is meant by real risk? Broadly construed, rationality refers simply to the exercise of reason, to the deliberative process by which humans draw conclusions. But for some time now, for decades, especially in economics and political science, the term has come to be used more narrowly, referring not to reasoning in general but to the particular kinds of reasoning required for analytic assessments of risk, followed by maximization of the net benefits associated with any suggested policy. A rational choice, in this view, is an optimal choice based on an objective and quantitative assessment of costs and benefits. Other choices may be based on reason, but they are suboptimal and, by definition, less rational. Thus, where studies of human judgment focus on how people do behave, their irrational choices, rational choice theory shows us how they should behave. For decades now, it is the latter, the narrower definition of rationality, that has prevailed as the gold standard for American public policy.
For example, OIRA, the office of which Sunstein is head, is responsible for vetting and approving all regulation proposed by any federal agency or department before submitting it for White House approval. This office is mandated to base its recommendations on the regulatory principles laid down by executive order, and the most recent formulation of that order reaffirms the basic principles first laid down in 1993. Quote, as stated in that executive order, and to the extent permitted by law, each agency must, among other things, first, propose or adopt a regulation only upon a reasoned determination that its benefits justify its costs; second, tailor its regulations to impose the least burden on society, consistent with obtaining regulatory objectives, taking into account, among other things, and to the extent practicable, the costs of cumulative regulations; and third, select, in choosing among alternative regulatory approaches, those approaches that maximize net benefits. The premises underlying both rational choice theory and cost-benefit analysis have been extensively critiqued by economists, philosophers, and other social scientists, and on a variety of grounds. One line of argument began more than fifty years ago with Herbert Simon. Simon argued that because of the finitude of both our computational capacity and our informational access, all rationality, including expert reasoning, is bounded. His goal was to replace the global rationality of economic man with a kind of rational behavior that is compatible with the access to information and the computational capacities that are actually possessed by organisms, including man, in the kinds of environments in which such organisms exist. Bounded rationality is still the best we can achieve, and much of Simon's subsequent effort was devoted to developing models of how such bounded rationalities would operate.
Central to this effort was his focus on how, in practice, behavioral choices depend not only on an actor's computational ability but also on prior experience with the structure of the environment in which the action is required. As he put it, human rational behavior is shaped by a scissors whose two blades are the structure of the task environment and the computational capabilities of the actor. More recently, the psychologist Gerd Gigerenzer and his colleagues at the Max Planck Institute have taken up Simon's challenge and extended his efforts through years of careful observation of how humans actually do go about making decisions. Bounded rationality, he writes, is not simply a discrepancy between human reasoning and the laws of probability or some form of optimization. It dispenses with the notion of optimization and usually with probabilities and utilities as well. It provides an alternative to current norms, not an account that accepts current norms and studies when humans deviate from those norms. Bounded rationality means rethinking the norms as well as studying the actual behavior of minds and institutions. Gigerenzer is especially critical of discussions that ignore the environment in which behavior occurs, and he suggests ecological rationality as a better term. In Gigerenzer's view, ecological rationality does not make use of computations at all. Rather, it employs a set of conscious or unconscious heuristics, what he calls an adaptive toolbox, that have been honed by biological evolution as well as by individual and cultural experience. It is these heuristics that give rise to what we call our gut feelings, intuitions, and hunches, where the terms gut feeling, intuition, and hunch interchangeably refer to a judgment that, first, appears quickly in consciousness; second, whose underlying reasons we are not fully aware of; and third, is strong enough to act upon.
Gigerenzer's claim is that gut feelings provide a basis for action that not only need not be less rational than computation, but that in the appropriate environment can sometimes even be superior. His best example is catching a ball. Reflecting a view that is widespread in cognitive psychology, Richard Dawkins, famous for The Selfish Gene, offered a description of the process a ball player uses to catch a fly ball. Here's his description. When a man throws a ball high in the air and catches it again, he behaves as if he has solved a set of differential equations in predicting the trajectory of the ball. He may neither know nor care what a differential equation is, but this does not affect his skill with the ball. At some subconscious level, something functionally equivalent to the mathematical calculations is going on. Because Dawkins' description is in such striking contrast with how ball players actually proceed, it provides Gigerenzer with a very good point of departure. Real ball players do nothing like calculating the ball's trajectory. Instead, they employ a variety of heuristics that are both easier and more effective. One especially effective procedure, apparently also used by dogs, is what Gigerenzer calls the gaze heuristic. The gaze heuristic is a stunningly simple rule of thumb that enables the player to be at the precise spot just when the ball lands, and hence to catch the ball, but it does not enable him or her to predict where it will land. It requires nothing more than fixing one's eye on the ball when it is high and running in a direction that maintains a constant angle between the line of sight and the ground as it comes down. It does not require knowing the ball's initial position or velocity.
It does not require knowing Newton's laws or even the fact of gravity. As humans have evolved in a world governed by gravity, the fact of gravity is, as it were, already built into their adaptive capacities. The gaze heuristic is only one of many of Gigerenzer's examples of adaptive thinking, but it well illustrates a central moral that he wishes to draw. Rationality is said to be a means toward an end, but the end depends on the interpretation of bounded rationality that is being taken. In optimization under constraints, the end is to estimate the point at which the ball will land. Knowing the cognitive process, however, can inform us that the end might be a different one. In the case of the gaze heuristic, the player's goal is not to predict the landing point but to be there when the ball lands. The rationality of heuristics is not simply a means to a given end; the heuristic itself can define what the end is. And for that goal, the gaze heuristic is not only more practical, but it is also at least equally, and often a good deal more, reliable. Now, assessing the costs and benefits of human health or lives is an obvious challenge for rational choice theory, and it is one with which economists have long struggled. Low-probability, high-impact events occurring in the distant future pose two especially difficult challenges for cost-benefit analysis. First is the temporal disparity between when and by whom the costs of regulation must be paid and when and for whom those of non-regulation come due, and the notorious problem of discount rates inhering in the effort to estimate the costs of future climate change in present terms. But in the interest of time, I want to jump directly to the second, even more fundamental, challenge: the difficulty, often impossibility, of computing either the magnitude of the possible catastrophic events or the probability of their occurrence. This problem is specific to catastrophic events.
It is especially acute for assessing the risks of climate change. It derives first from the inherent structural uncertainty of climate science, second from the uncertainty of behavioral responses to such challenges, and finally from the implications of all this uncertainty for cost-benefit analysis. Climate science can tell us what catastrophic climate changes are possible, and it can even tell us that the probability of such changes is not negligible, but it cannot provide reliable estimates of just what that probability is. Indeed, the more extreme the event, the more uncertain the probability of its occurrence. The obvious question is, in the absence of quantitative estimates of the probabilities of extreme events on the one hand, and of the magnitude of their outcomes on the other, how can one estimate the expected costs they will incur? One answer is simply to limit the outcomes to be considered to those whose likelihood is judged to exceed some minimal value, that is, to cut off the tail. A number of authors have recently argued that conventional decision theory systematically undervalues the effects of hard-to-predict but high-impact events, popularly referred to as black swans. Indeed, the response of many cost-benefit analysts is to bypass the problem altogether by assuming that as the size of the event increases, its probability decreases so rapidly that the right-hand tail of the distribution can simply be ignored. In other words, extreme events need not even be taken into consideration in estimating costs because of their very low probability. Unfortunately, all available indications argue that high-impact events are not distributed in this way.
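The stakes of this assumption can be made concrete with a toy numerical sketch. The distributions, the parameter values, and the cutoff below are my own illustrative assumptions, not climate estimates; the point is only to compare how much of the expected loss is thrown away by cutting off the tail when losses are thin-tailed versus fat-tailed:

```python
import math

# Toy comparison (not climate data): a thin-tailed (exponential) and a
# fat-tailed (Pareto) loss distribution with the same overall mean, and
# the share of expected loss that "cutting off the tail" at T discards.

def exp_tail_mean(mean: float, T: float) -> float:
    """Contribution to E[X] from losses above T for an exponential(mean):
    integral_T^inf x * (1/mean) * exp(-x/mean) dx = (T + mean) * exp(-T/mean)."""
    return (T + mean) * math.exp(-T / mean)

def pareto_tail_mean(alpha: float, T: float) -> float:
    """Contribution to E[X] from losses above T for a Pareto(alpha, x_min=1):
    integral_T^inf x * alpha * x**(-alpha - 1) dx = alpha/(alpha-1) * T**(1-alpha)."""
    return alpha / (alpha - 1) * T ** (1 - alpha)

T = 100.0                          # illustrative truncation point
alpha = 1.1                        # fat tail, but mean still finite
pareto_mean = alpha / (alpha - 1)  # = 11.0
exp_mean = pareto_mean             # match the means for a fair comparison

print("share of expected loss discarded by truncating at T:")
print("  thin tail (exponential):", exp_tail_mean(exp_mean, T) / exp_mean)
print("  fat tail  (Pareto):     ", pareto_tail_mean(alpha, T) / pareto_mean)
```

With these made-up numbers the exponential loses a fraction of a percent of its expected loss to truncation, while the Pareto loses over sixty percent: for a fat-tailed distribution, most of the expected cost lives in precisely the tail the analyst has cut off.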
Furthermore, the very uncertainties inherent in the dynamics of climate change, as well, of course, as those due to the incompleteness of our knowledge of those dynamics, add to the fatness, the thickness, of the tail, thereby raising estimates of the probability of a large catastrophe above whatever estimates we have made on the basis of the available data. That probability may still be small, but it can nonetheless make a huge contribution to the costs involved, and cutting off the tail leads to a major distortion by any logical criteria. Ditto for estimating the economic impact of extreme events. The absence of prior experience and, hence, of prior knowledge requires extensive speculation about how to extrapolate beyond what is known, and small modifications in these speculations can have an enormous impact on the final computation, larger even than changes in the discount rate. For this reason, the Harvard economist Martin Weitzman concludes that, quote, the answer to the big policy question of what to do about climate change stands or falls to a large extent on the issue of how the high-temperature damages and tail probabilities are conceptualized and modeled. Not how high; how they are modeled. By implication, the policy advice coming out of conventional, thin-tailed cost-benefit analyses must be treated with skepticism. Close quote. Weitzman goes on to suggest various ways in which conventional analysis might be modified to take proper account of the risk of extreme events. But the point I want to emphasize is that what is normally taken as providing the basis for rational decision-making, namely cost-benefit analysis, the standard against which human behavior is judged to be lacking in rationality, is itself deeply problematic. For the sorts of problems we face in this area, the tools needed to connect rational decision theory with our predicament are simply not available.
And an obvious question arises: while environmental economists search for better, that is, more rational, ways to account for the impact of extreme events, might it not be possible to identify heuristics that, however imperfect, provide a more reliable guide for future action than do the dominant modes of analysis now in use? Indeed, might even ordinary people have evolved or developed heuristics that can outperform standard cost-benefit analysis? Gigerenzer and his colleague Klaus Fiedler seem to think they have. For example, they suggest that Slovic's data indicating a central role of dread in skewing perceptions of risk in the face of extreme hazards might be reinterpreted as evidence of ecological rationality. Catastrophe avoidance, they write, may be seen not as a socially expensive subjective whim but instead as attention to the third moment of the frequency distribution. The dread-risk dimension corresponds to the skewness of the distribution: attention to skewness corresponds to attention to dread risk, and the degree of skewness measures the degree of dread. Moreover, they suggest that in assessing the risk of low-probability, high-impact events, people's attention to skewness may be perfectly reasonable. Close quote. Although the authors do not explicitly say so, a reader might conclude that in environments in which the frequency distribution of hazards is substantially skewed, as it is in climate change, dread risk might be seen as an appropriate, even effective, heuristic; indeed, as an example of what Slovic himself calls affective rationality. So far, my discussion has focused on the reliability of risk assessment and the closely related question of what people believe. But there is another problem as well. Belief is only a precursor to action, and certainly not in itself sufficient to guide behavior. Indeed, the gap between belief and action is huge and has been subject to much commentary.
For example, when public confidence in the reports of climate scientists was at its peak in the U.S., and when belief in the imminent dangers of global warming seemed to be shared by a majority of American citizens, which it no longer is, people nonetheless expressed a widespread reluctance to make any sacrifice that could help in lessening the dangers. Indeed, there seemed to be a growing gap between intellectual awareness of the problems and a willingness to enact effective precautions. Now why should this be? Psychologists have generally attributed this gap to a lack of emotional engagement with either the urgency or the magnitude of the threat. People, while they may believe they are at risk, do not feel at risk. A task force meeting in 2008 and 2009 found at least a partial explanation in the different ways in which affect-driven and analytic processes function, and this goes back to what I talked about earlier. The two types of processes typically operate in parallel and interact with each other. I can actually skip this; the bottom line, as I suggested, is that the affective system is not engaged by the warnings of climate scientists. And here, too, the relevance of fear has particular salience. Emotions, by this argument, drawing on the neuroscience literature, are rational because they enable us to act, especially under conditions where rational analysis either fails or is inconclusive. This claim bears especially on fear, an emotion sometimes responsible for the difference between life and death. According to Joseph LeDoux, if you were a small animal threatened by a predator and had to make a deliberate decision about what to do, you would have to consider the likelihood of each possible choice succeeding or failing, and you could get so bogged down in decision-making that you would be eaten before you made the choice. Under such circumstances, fear would clearly seem to be a useful heuristic. But most of us are very wary of fear, and for good reason.
Our experiences with the political uses of this emotion after 9/11 clearly underscore just how powerful an emotion fear is, and how unwise the choices it can lead us to make. Furthermore, as a motivating force, its effect is notoriously double-edged. While fear can spur people to action, it can also impede action. Even if it is a necessary ingredient for translating belief into action, it can also lead to avoidance, denial, and inaction. The issue, apparently, is one of context. Climate scientists, even those who are themselves alarmed, may be especially wary of evoking fear in their readers. Indeed, there are powerful constraints inhibiting all scientists from directly seeking such an engagement. Theirs, after all, is the domain of the rational, not of the emotional. Their aim is to inform, to evoke in the reader a rational response to what they as scientists have learned. And of all the emotions, fear is generally regarded as the most counterproductive to the forming of rational responses. Furthermore, because of our current sensitivity to the ease with which it can be politically manipulated, the fear of fear has now itself become a political weapon in the debates about climate change. As climate scientists know better than most, this is a weapon that climate skeptics do not hesitate to deploy. Those like Jim Hansen, who elaborate on scenarios that cannot but be frightening, are called alarmists and fear-mongers, and they are accused of creating a climate of fear, of spreading climate porn, a narrative of fear. No one wants to appear guilty of such charges, especially not climate scientists.
But if conventional decision theory has routinely undervalued the risk of catastrophic events, if conventional uses of cost-benefit analysis cannot be taken as a standard against which to judge non-expert estimates of such risks, and if fear is identified as a central factor leading us to, quote, overestimate such risks, might not fear be viewed, at least in this particular case, as compensating for the underestimates common to expert reasoning? Might it be that, in the face of risks before which conventional theorizing about risk manifestly fails, fear, rather than something to be avoided, might sometimes serve as a useful heuristic for a more rational response? One reaction to the difficulty of assigning probabilities to inherently unpredictable events, such as the tipping point of runaway climate change, is to give up on computations that depend on them and instead try to avoid such events in whatever ways are possible. The precautionary principle is one obvious form this response takes. And Cass Sunstein has done an excellent job of enumerating many of the problems with this principle, especially in its crude form of do no harm. He focuses primarily on the great variety of ways in which harm can be done, including by the very exercise of precaution, and he's right. These costs too need to be included in the calculation. But a key flaw in Sunstein's efforts to amend the precautionary principle is that he fails to address the fundamental problem that invites its formulation, that is, the difficulty or impossibility of performing such calculations. And for this, the writings of the philosopher Hans Jonas may be more to the point.
Writing over thirty years ago, Jonas was already, at that early point, worried about the future of the environment, and he sought to articulate an ethics for the future, especially an ethics of responsibility for distant contingencies, where, quote, that which is to be feared has never yet happened and has perhaps no analogies in past or present experience, close quote. Indeed, he argued that it is precisely when scientific knowledge is insufficient for predicting the future that an ethically required extrapolation must take over. For Jonas, the mere knowledge of possibilities suffices for such an extrapolation, for the identification of appropriate ethical principles. As he writes, it is the content, not the certainty, of that which offers itself to the imagination as possible that can bring to life principles of morality heretofore unknown. Central to Jonas' method is what he calls the heuristic of fear. His argument is often likened to and even conflated with the precautionary principle, but I think this is a misreading. For Jonas, the heuristic of fear is more in the nature of a prerequisite for the moral considerations that need to underlie a precautionary principle, or any other such principle. By his reasoning, we learn what it is that we value, what we are committed to preserving, only when that something is under threat. Accordingly, moral philosophy must consult our fears prior to our wishes, in order to learn what it is we truly cherish. The particular challenge raised by distant and future threats is that the appropriate fear may not be in evidence. Despite the pervasiveness of fear as a natural, autonomic response to present and imminent danger, future threats require an effort of reason and imagination to evoke the appropriate fear. Our responses to dangers that are imagined and distant are in that sense less natural. They require not only reason and imagination, but also education.
And Jonas writes, we must educate our soul to a willingness to let itself be affected by the mere thought of possible calamities to future generations. Bringing ourselves to this emotional readiness, developing an attitude open to the stirrings of fear in the face of merely conjectural and distant forecasts concerning man's destiny, requires a new kind of sentimental education. Thirty years ago, the forecasts to which Jonas referred may have been merely conjectural, but they are no longer so today. Yet an appropriate response to the measurements and predictions of contemporary climate science has not been forthcoming and, indeed, seems to be ever receding. Two summers ago, the hottest on record, the U.S. Senate declined even to consider legislation to regulate greenhouse gas emissions. Those whose hopes had been raised by Obama's early promises were devastated. As the Canadian political scientist Thomas Homer-Dixon wrote, climate policy is gridlocked and there's virtually no chance of a breakthrough. Three months later, the outcome of the 2010 elections lent his prediction stark affirmation. Many factors contributed to this denouement, but surely one critical contributor was the advice our legislators had received from the country's most respected economic analysts: regulation might be desirable, but the calculations most commonly employed purport to show that it cannot be justified on purely economic grounds. Quite simply, it does not pass the acid test of positive net benefit. In other words, economic analyses have an enormous influence on public policy, and if the assumptions on which such judgments are based are faulty, as I've argued they are, we all bear the consequences. 
Indeed, the confidence with which policy analysts have accepted the application of mainstream standards of economic rationality to the particular problems posed by climate change seems something of a puzzle to me, especially given the mounting criticism of those standards that we've begun to see among economists themselves, and especially in comparison with the scrutiny under which the claims of climate scientists have recently been put. Perhaps it is time to put economists under the same kind of scrutiny. Without question, the task of estimating the costs of climate change taxes economists with the difficulty of dealing with problems extending well beyond their traditional domain, and it is little wonder that questions about the applicability of traditional analytic or rationality criteria in this new domain should arise. As Jonas so presciently observed, the specter of such distant threats requires new ways of thinking, ways of thinking that are unfamiliar to the contemporary human sciences, but that may prove to be more reliable and hence more rational than those with which we are most familiar. I mean more rational for the simple reason that they better prepare us for actions appropriate to the threats we face. Thank you. Now we'll open the floor. Do we have questions? Sure. I guess what I struggle with is that you're talking about the heuristics of fear, and that we should value that feeling because it points us to what we value and what we want to hold dear. But what confuses me is that we have nothing to compare it to. In the face of unknown catastrophic risk, our values are our values here and now, and so I just wonder how sustainable they are, or whether you think that values are so central, so core to ourselves, that we can assume our values would be the same in this unknown future. I understand. 
And I think the assumption is that, faced with catastrophic threat, we are reduced to a kind of core moral level, a level we will never give up. I hope. One of the stories that is always in the back of my mind when I think about this is from the days of the Manhattan Project, when they were performing the first test of the A-bomb, and Enrico Fermi was sitting behind a tree, or somewhere protected, calculating the probability that the bomb would ignite the atmosphere. I mean, igniting the atmosphere is pretty scary. And as the calculations came out, the probability was really very small, I don't know, 10 to the minus 40 or something really small. In fact, the calculation was subsequently published in the Physical Review. But suppose this calculation had come out at 10% or 20%. Would he and the other scientists have proceeded? I claim, and this is sheer conjecture, I can't prove it, that they would have stopped. Faced with such a disastrous possibility, and we can't calculate the exact probability, but we might be able to calculate that it is over 10%, faced with that order of magnitude of probability, we wouldn't risk it. Nobody would risk it. Or maybe there are some loons out there, but most people would not risk it. And that's the core of discovering what you truly value. What we truly value is the survival of the earth, the world in which we live. Because when I was conceptualizing values, I was thinking about the particular, detailed values of each individual, but the way you're articulating it, value comes to mean something like survival, an interest in life. These calculations are very hard to do, and the outcomes are so uncertain. But it's interesting that there's very little literature on the actual human costs of global warming. 
Everybody has agreed that we should limit global warming to two degrees centigrade, although that's already not feasible. But nobody tells us what the cost of two degrees of warming is in human lives, or three degrees. Suppose we actually were able to pull together all the health data on that, and we showed that the cost of two degrees of warming is, not unreasonably, somewhere between two billion and ten billion lives. What's the population of the earth, seven billion? Let's say between one and seven billion. A big range, but it's in the billions. Would that make us stand up? Would billions of deaths make you stand up? I don't know. I think it would. Even though it's not your own immediate family or your own immediate neighbors, I think there's a point beyond which most people would not morally go, and I think that intuition is widely shared. My worry is that the moral belief systems of a lot of people, including people with a lot of political power in North America right now, seem to rest on one of two assumptions. One is that the scientists are going to find a way around it, that there is an engineering solution, and so we trust that we're not going to have to deal with climate change because there's some magical scientific answer out there; we just haven't had the smartest ones find it yet. The other one, which is really scary to me, even more so, is the evangelical right, including, reportedly, the person serving in Canada as prime minister, who believe that God won't let it happen, that God is not going to let humankind suffer another equivalent of Noah's flood. We don't know the answer, but there is an answer there. Those sorts of belief systems, which I hear are part of the resistance, are rooted in a kind of moral view. It's quite different from mine, but it's just as passionately held, and I don't think you're going to reach it. What do you do with those? 
I don't know that you can do anything with those, but in the States those are not such prevalent responses. In fact, there's a significant part of the evangelical community that has taken on the burden of protecting the earth, and that is opposed to the climate change deniers. So the forces are a little bit different. It's really about associating climate science with the left wing, pointing to the money climate scientists get, claiming they've been bought out. It's really about fighting against regulation and fighting against the intellectual elite in the Northeast, with money pouring in to keep clouding the issue. So affecting those beliefs, well, there was a program on Frontline just a few days ago, called The Climate of Doubt, about the extraordinary change that has taken place in the US in the last four years, and how the issue of climate could not even be raised in this election, in this campaign. Four years. And it was a very deliberate strategy, and they changed beliefs. And I don't hesitate to use fear, by the way. Fear is the real issue, the focus of my talk. It's obviously a very tricky, difficult emotion, very subject to manipulation. But without it, I don't think one can proceed. Is it different when it's fear of a distant possibility? It's definitely different. That would be unknown at this point. You might not want to call it the same emotion in the same way. Maybe not, maybe not. But that is exactly what Jonas says. Fear is automatically evoked when the threat is immediate and close, but not when it's distant in the future. And we have to learn how to become afraid. And that's not going to happen if climate scientists are muzzled when it comes to predicting what the costs are. It's about the future; I think it's as much about uncertainty. I think it starts with fear. It's both. 
Because that's where a political agenda of instilling uncertainty into the public debate gets its traction. The problems about the future are even more glaring when you turn to the question of discount rates, which I didn't talk about today, and how we estimate the value of a future life. By most economic calculations, lives 500 years in the future are worth a tiny fraction of a present life. There were a bunch of hands, yeah. What do you think about climate terror? Climate terror? I mean Greenpeace. You know, the guys who flew a paraglider onto the nuclear plant in Spain. The guys who flew a paraglider? The paraglider was just to demonstrate that there's this potential, this threat. And then they traced the climate, or Greenpeace, "terrorists" back to connections with other forms of terror. That was against nuclear power, not against climate change. Greenpeace is in an awkward position here, because their principal concern historically has been with nuclear power, and nuclear power is a central resource in fighting climate change. So they're kind of caught here. But what's their connection to terror? None. Well, I mean... Come on. Do you think it is morally grounded in some sense, if the purpose is to try to... Greenpeace is not killing, how many people were killed on 9/11? 3,000? Yeah, 3,000. Greenpeace is not killing people. I don't know why you call their acts terrorist. I guess I mean acts that are highly illegal and destructive. What's the difference? I mean, there are all kinds of political actions which are... Sit-ins can be very effective. They may block traffic and, you know, disrupt the city. But you're talking about civil disobedience, not terror. Yeah. 
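As an aside on the discount-rate remark above: the claim that a life 500 years in the future counts for almost nothing under standard exponential discounting can be checked with a few lines of arithmetic. This is a minimal sketch; the rates shown are conventional illustrative values, not figures from the talk.

```python
def present_value(r: float, t: int) -> float:
    """Present value of one unit of benefit (e.g., one statistical life)
    delivered t years in the future, discounted at annual rate r."""
    return 1.0 / (1.0 + r) ** t

# Illustrative annual discount rates (assumed for this sketch)
for r in (0.01, 0.03, 0.07):
    pv = present_value(r, 500)  # a life 500 years from now
    print(f"r = {r:.0%}: a future life counts as {pv:.2e} of a present one")
```

Even at a modest 3% rate, a life five centuries out is discounted to well under a millionth of a present life, which is the sense in which the speaker says such calculations value the distant future at "a tiny fraction."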
Yeah, I have a question. So I think it's a big problem to conflate them with each other. Yeah. And civil disobedience, I think, depends totally on the context. But you have to ask the question: what political strategies are available? What is possible? What can people do to make their concerns visible? I'll have to share my chair with you. So my understanding of fear is that it's based somewhere in the base of our brain, a gut, fight-or-flight response. And I'm sitting here pondering a line from Popeye, which is, I'll gladly pay you Tuesday for a hamburger today, and thinking about the very dangerous calculation that our governments, both Canadian and US, are making: to satisfy present populations and their present programs while risking the long-term viability of the planet. But based on what you've been saying about the distantness of the fear, I wonder, for one thing, what does cause some of us to be fearful about it, rather than just accepting that we're better off today and we'll worry about the rest later? And also, have you thought about what could encourage more fear, if that's what is needed? Well, I think what is necessary is a much more vivid spelling out of the implications than I see just about anywhere: of the cost in human lives, the cost in health, the human costs of serious climate change. People think, oh, we can buy real estate in Siberia, or we can move to Halifax. We'll have eternal summer in Halifax. Wouldn't that be heaven? So the governments you have in mind are not going to pay for that research? Are you hopeful that academia will? Well, government pays for the research that academia does. So, yes, I think academia should, and it's still possible to get funding for a lot of different kinds of research. I think the World Health Organization has to step up its efforts. 
But what makes... I mean, Jonas talks about a sentimental education. What evokes concern about the future? Now, I think it's clear that people whose daily survival is under threat are not susceptible to concerns about three generations down the road. Their primary concern is their survival to tomorrow. And that's unavoidable. But what about the rest of us, those of us whose survival is pretty secure, at least for a while? What does it take to evoke our concern about future generations? And I think, again, this is what Jonas is saying: it's fear. You have to present the consequences. In my little story about Fermi's calculation, something has to happen to people. Ordinarily they're not concerned; they're not even thinking about the risk of igniting the atmosphere. But if you present them with the calculation, then they have to think about it. There's no choice. And they do. And it changes people. So what makes me concerned about future generations? Well, because I believe the data. I believe what the climate scientists are saying, and I extrapolate from it, because I know that they're not filling in the gaps in the argument. A denier, or people who are persuaded by the deniers, have an easy place to hide: it's not true, these climate scientists have been bought out by the NSF, they have a good life. But along with Popeye, I'm thinking about the Aboriginal conception of taking care of the earth for seven generations behind and seven generations to come. And I'm not sure that that tradition was motivated by fear, or whether it was a broader mindset. I'm not saying it's always motivated by fear, but fear can evoke such a concern, because it evokes what your most basic values are, what you will not tolerate, only when faced with the threat. You care about the seven generations in the future. That's the argument. Fear is a tool. Yeah? 
I'm going to agree with using fear to address some issues. And when it comes to climate change, I personally believe it is associated with human activities, and we do need to take responsibility for our actions and reap the consequences of those actions. And even if there is a God, I don't think God will rescue us from our bad behaviors. I don't think so either. But that's not a bad note on which to end. So I want to thank the Situating Science program and the University of King's College for cosponsoring this seminar with the Health Law Institute. They were the primary actors in bringing Evelyn in from Boston, and we were very fortunate to be able to take advantage of her coming here to give a number of talks. If you just look online, you'll find information about the various talks we've arranged for Dr. Keller to give over a period of, two or three weeks? Three and a half weeks. Three and a half weeks, yes. So we're fortunate for her presence, and we're grateful to the Situating Science program. When I was doing research on Dr. Keller for purposes of introduction, I came across a Guardian book review titled Fox Among the Lab Rats. I thought it was pretty clever, and I think that makes us the lab rats, but very willing, conscious, capable lab rats participating with you. It was a bit of an experiment, and I thank you very much. Thank you.