What interests me, and what sometimes keeps me sleepless, is how people make decisions under risk and uncertainty. According to much of the research today in behavioral economics, psychology, and other fields, humans systematically violate certain norms called norms of coherence: consistency, transitivity, and other content-blind norms that have no content, no context, no causality, no time, nothing. My question is: are these good norms? The assumption is that people who violate these norms incur costs, for instance less wealth, less health, or less happiness.

In this paper, we looked at the question: is there evidence that violations of coherence actually lead to these kinds of costs? Why is this important? Because the assumption today in many fields is that you and I, people in general, violate coherence, that this is an error which leads to costs, and it is used as an explanation for all kinds of human disasters. As a consequence, the government has to step in and lead us to where it thinks we should be. And this is the political side of the question: is there any evidence that this type of experimental demonstration justifies governmental paternalism in the 21st century?

So the way we approached this question is that we defined a number of so-called cognitive errors, or cognitive illusions, whatever term is used, which are all violations of coherence. Here's one example. You have a severe heart condition and you are thinking about whether you should have heart surgery. It's a dangerous operation, so you ask your doctor what the prospect is. The doctor has two logically equivalent ways to answer. One is: there's a 90% chance that you survive. The other is: there's a 10% chance that you die. Human beings react differently to the two. They are more willing to accept the operation if it's positively framed, a 90% chance to survive, and less willing to go for it otherwise.
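The logical equivalence of the two frames can be stated in one line. The sketch below uses only the numbers from the example and simply checks that the two descriptions encode the same probability distribution over the survive/die outcomes:

```python
# "90% chance you survive" and "10% chance you die" describe the same
# distribution: each frame is the complement of the other.
p_survive = 0.90
p_die = 1 - p_survive  # the second frame carries no extra information

# Allow for floating-point rounding when comparing the two frames.
assert abs(p_die - 0.10) < 1e-12
print(f"survive: {p_survive:.0%}, die: {p_die:.0%}")  # survive: 90%, die: 10%
```

By coherence norms the two statements are interchangeable; the empirical point is that listeners nonetheless respond to them differently.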
According to the coherence literature, this is an error, because the two answers say logically the same thing. This is what's called framing, and this is what we are talking about. One can easily defend people here: they are thinking. They're reading between the lines. They know that the doctor is giving them a message, but that lies beyond what coherence can capture.

So we were looking at framing, at intransitivity, and at many other violations of coherence, searching the entire literature. Since one can always miss something, we also went through the review articles on these so-called cognitive errors, looking for studies showing that they actually have an impact on health, wealth, or happiness outside the laboratory. And in addition, we asked our colleagues for studies we might have missed that show that violations of coherence actually have costs.

So what were the results? As I mentioned, almost everyone in these fields assumes that violations of coherence are costly. We first looked at the so-called money pump. A money pump arises if you prefer A over B, B over C, and then C over A. That's intransitive. The argument is that if you're willing to pay a little bit to follow each of your preferences, you become a money pump: I can take all your money from you. We looked for evidence. We found none in the literature, and in the rare cases where someone did commit an intransitive cycle, people learned very quickly.

Then we looked at framing, at preference reversals, and at all the other major violations, such as Bayesian inconsistency and the additivity of probabilities. The bottom line is that we could not find any consistent evidence that violations of coherence incur a loss of health, wealth, happiness, or anything else. What is the relevance of these findings? First, it shows that large parts of the social sciences feature a notion of rationality for which we have no evidence that violating it has costs.
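The money-pump argument can be put in a few lines of code. This is a sketch with a hypothetical agent, items, and fee, none of which come from an actual study: an agent with cyclic preferences pays a small fee on every "upgrade" trade and therefore loses money without bound.

```python
def money_pump(preferences, start, fee, rounds):
    """Simulate a trader exploiting cyclic preferences.

    `preferences` maps each item to the item the agent prefers to it;
    the agent always trades up, paying `fee` per trade, so with a
    preference cycle it never stops. Returns the total amount paid."""
    holding = start
    paid = 0.0
    for _ in range(rounds):
        holding = preferences[holding]  # swap for the preferred item
        paid += fee
    return paid

# Intransitive preferences: A over B, B over C, and C over A.
cycle = {"B": "A", "A": "C", "C": "B"}
print(round(money_pump(cycle, start="B", fee=0.10, rounds=30), 2))  # 3.0
```

The paper's finding is that this exploitation stays hypothetical: in practice, the rare people who commit an intransitive cycle notice it and stop trading after a round or two.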
That suggests we may have the wrong notion of rationality. The alternative is to put aside this purely logical notion and replace it with what we call an ecological notion of rationality: a notion that is sensitive to the structure of environments, to goals, to content, to context, and that avoids mistaking intelligent behavior for irrationality.

For instance, just to illustrate the point with a very simple example: one of the most featured demonstrations of incoherence is called the Linda problem. How does it work? You read a story about a person named Linda. She's 31 years old, studied philosophy, and the description is written to suggest that she might be a feminist. There is nothing in it to suggest she's a bank teller. Then comes the question: which is more probable? Linda is a bank teller, and you say, what? Or: Linda is a bank teller and active in the feminist movement, and you say, yeah, that at least makes sense. But then, by coherence measures, you are wrong, because the probability of A and B can never be larger than the probability of A alone. It's like a set and a subset. That's the reasoning.

But the reason people reach a different conclusion is not irrationality. They think. They're intelligent. The norms in this case are content-free norms: according to the coherence norm, the only things you should attend to are the words "probable" and "and". Nothing else counts; it's just the set-subset relation. And "probable" must mean mathematically probable, and "and" must mean the logical AND. But if you just have a look in the OED, the Oxford English Dictionary, you see that's an illusion: "probable" means many things.
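The set-subset reasoning invoked by the coherence norm amounts to a one-line check. The probabilities below are made-up illustrations, not data from the Linda studies:

```python
def conjunction_coherent(p_a, p_a_and_b):
    """Coherence norm for conjunctions: P(A and B) must not exceed P(A),
    since the cases where both hold form a subset of the cases where A holds."""
    return p_a_and_b <= p_a

# Illustrative numbers: A = "bank teller", B = "active in the feminist movement".
assert conjunction_coherent(p_a=0.05, p_a_and_b=0.03)      # coherent
assert not conjunction_coherent(p_a=0.05, p_a_and_b=0.07)  # the "fallacy"
```

Judging the conjunction more probable violates this check, which is the entire content of the norm; the argument in the text is that the norm ignores what "probable" and "and" mean in ordinary language.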
So when we did an experiment and made it clear that the question is about mathematical probability, by giving the description of Linda and asking, there are 100 people like Linda, how many are bank tellers, how many are bank tellers and active in the feminist movement, the entire so-called illusion disappears. People are smart. They use their intelligence.

So one of the key results is that we need to rethink our standards of rationality, and also our standards for calling people irrational. As this example shows, human intelligence is much smarter than simple logic; with simple logic alone, we would understand very little. Back to the example of "and": when I say I invited friends and colleagues this evening, the "and" doesn't mean the intersection. It means the logical OR, the entire union. And we understand this immediately. Inferring what "and" means in context is smart intelligence, not a logical error. So here the theory of rationality is quite simplistic and leads us to blame people, to call human intelligence irrational where it isn't.

What are the lessons to be learned? What's the future? I work on developing an alternative conception of rationality that is no longer defined by logic or coherence. That's what I call ecological rationality. It's about the strategies people use, the environments in which they make decisions, and how the two fit together; that has little to do with coherence. So this is a vision of rethinking rationality, one that pays more respect to what actual people do than to logic alone.

Second, the political side is to stop the message that we are all irrational and, according to some books, predictably irrational, by rethinking rationality and analyzing better what people do wrong and what people do right, without relying on coherence alone. And here the perhaps more important political message is that we should start teaching young people how to make good decisions.
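The frequency reformulation can be sketched with assumed counts (the 100 comes from the text; the other numbers are hypothetical). Stated as counts of people rather than probabilities, the subset relation becomes visible instead of hidden behind the word "probable":

```python
# Out of 100 people like Linda (counts below are assumed, for illustration):
people = 100
bank_tellers = 10            # hypothetical count of bank tellers
tellers_and_feminists = 4    # necessarily drawn from those 10 tellers

# With counts, the conjunction can never outnumber the single category.
assert tellers_and_feminists <= bank_tellers
print(f"bank tellers: {bank_tellers}/{people}, "
      f"also feminists: {tellers_and_feminists}/{people}")
```

Once the question is posed this way, hardly anyone ranks the conjunction higher, which is why the "illusion" disappears.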
That includes statistical thinking, which is the coherence part, but also heuristic thinking, that is, how to deal with uncertainties where coherence and statistics don't help you very well. What smart rules can a doctor use to make a better diagnosis? We also work with experts on how to regulate the financial sector, rather than relying on complicated estimations and computations like value-at-risk calculations, where a large bank has to estimate thousands of risk factors and correlation matrices with entries in the order of millions; that borders on astrology. The result is not more security, not more safety, but more uncertainty. Very simple rules, if we study them systematically, can offer an alternative that brings in more rationality, more reason, and a safer world.
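The scale claim about value-at-risk models follows from simple counting: an n-by-n correlation matrix has n(n-1)/2 distinct pairwise correlations, so a few thousand risk factors already require millions of estimates. A sketch (the factor counts are illustrative):

```python
def n_correlations(n_factors):
    """Distinct off-diagonal entries in an n x n correlation matrix."""
    return n_factors * (n_factors - 1) // 2

for n in (100, 1000, 3000):
    print(f"{n} risk factors -> {n_correlations(n):,} pairwise correlations")
# 100 risk factors -> 4,950 pairwise correlations
# 1000 risk factors -> 499,500 pairwise correlations
# 3000 risk factors -> 4,498,500 pairwise correlations
```

Each of those millions of entries must be estimated from limited data, which is the basis of the "borders on astrology" remark.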