Well, welcome, welcome everyone, in person and online. Let me also welcome Roland. Funded by the city of Toulouse, the Jean-Jacques Laffont Prize is awarded every year to an international economist who has made outstanding contributions to theoretical and/or empirical research. This year's prize goes to Roland Bénabou. It's both a great honor and a great pleasure for me to introduce Roland. It's an honor because I've always been very admiring of his work and I've learned a lot from him. He's a deep thinker whose path-breaking contributions have had a long-lasting impact on the profession. Roland received his PhD from MIT in 1986. He taught in particular at NYU, MIT and Princeton, where he has been since 1999. His work has been recognized through many honors, such as membership of the American Academy of Arts and Sciences and several other learned societies and academic networks. He is a corresponding member of the Institute in Paris, and he has been invited to deliver many keynotes at international conferences. Roland is also known to be a high provider of public goods to colleagues and students, in bilateral interactions, seminars and conferences. He is a member of scientific councils, including ours, and of program and organizing committees of conferences. He has also fulfilled more than his share of editorial jobs, as associate editor or, more recently, as co-editor of the American Economic Review from 2014 to 2020. Let me briefly mention his scientific contributions, although you know them. He is one of the top theorists in the world, and his work has been repeatedly ahead of the game, sometimes too far ahead of the game. In almost every area that he chose to work on, his ideas and contributions were off the beaten path at the time, but they turned out to be premonitory. For example, some of his first work was on inflation, in the late 80s and 90s. Unfortunately, that was a time when inflation was kept under control for the first time.
Though now it may become much more popular given recent events. Roland noted that inflation alters agents' behavior for several reasons. The first is that they may stockpile: that's something we have forgotten in the last 40 years, but of course you want to stockpile and store goods just before the nominal price adjustment, and that matters mainly when inflation is high. Inflation also affects behavior when agents search, both through the level and through the variance of inflation. Both in theory and in the data, the level of inflation increases the dispersion of real prices in an industry, and therefore it induces you to search more. And the variance of inflation is important too. Just consider the decision we face almost daily when we observe a high price at a shop: should we visit another shop, or infer that the common cost across shops has gone up, so that there is no point searching further? So the variance matters a lot. One of the strengths of these models was that they were fully microfounded. While less well known than his subsequent work, they already illustrated some of Roland's qualities: economic relevance, originality and rigor of analysis. A few years later, Roland became a pioneer of both the new economic geography and the economics of income distribution. One strand of his work is on incentives to segregate and the welfare impact of such behavior in a world with local externalities (neighborhood effects, school funding) as well as global linkages (complementary skills and knowledge). For example, his 1993 paper on the workings of a city describes the economic underpinnings of the ghettoization of cities. This research initiated a substantial literature connecting socio-economic stratification, residential segregation, human capital inequality, neighborhood effects, local school funding, intergenerational social mobility, and aggregate outcomes like metropolitan and national activity.
This was in the decades before the recent explosion of big-data empirical work on these topics, and his models predicted several of its key findings, such as a positive relationship between local inequality and residential segregation, a negative correlation between segregation and social mobility, and a negative correlation as well between inequality and social mobility, something which is now called in the literature the Great Gatsby curve. His 1996 NBER Macroeconomics Annual paper on inequality and growth, and his 2001 paper with Efe Ok on the link between prospects of social mobility and the lack of support for high levels of redistribution, installed him as one of the key players in income distribution economics, and he has since been very involved as an organizer of the NBER group on Income Distribution and Macroeconomics. I will not comment on all of Roland's work on behavioral economics, and will just say that Roland has, more than anyone else, contributed to the cross-disciplinary effort to understand individual and group decision making, with many of his predictions having since received empirical validation. A unifying theme of his many contributions to decision making is that deviations from the homo economicus paradigm are not random: they reflect the demand for important features such as empathy, desire and belief. Systematic deviations from the paradigm lead to sharp predictions, which are amenable to empirical testing. A case in point is his truly path-breaking work on collective denials of reality, or groupthink. To better understand the contagious aspect of collective blindness, Roland incorporates emotions, such as the anxiety that arises from uncertain prospects. Because of such anticipatory preferences, an agent may prefer to ignore the real hazards he faces, through selective encoding or forgetting of received signals, posterior rationalization, etc., even at the cost of bad decisions.
The paper examines how the nature of economic or social interactions between agents determines the modes of thinking that will emerge in general equilibrium. This analysis of groupthink highlights the possibility of denial of reality by a community as a whole. It helps us explain the recurrent cases of companies, institutions or political regimes that self-destruct through collective blindness. Today we'll be talking about beliefs, the economics of wishful thinking, and we may learn more about this line of research. I said it was a big honor for me to introduce Roland; it's also a great pleasure. I felt a bit ambivalent when I was asked to introduce Roland, as he's both a very valued co-author and a very dear friend. By the way, we are delighted to welcome today Laurie, his wife, and their children Rafaela, Yann and Andrea; thanks for being here. Nonetheless, I accepted, for three reasons. The first is that, although I'm thrilled by the choice of Roland for the Jean-Jacques Laffont Prize, I had nothing to do with it. As you all know, the prize winner is actually selected through a vote by all TSE members, and that was a very enthusiastic vote. We have a lot of very prestigious and recognized scholars, and the votes are usually pretty close. Roland's was not a Stalinist vote, but it was actually pretty enthusiastic as those things go. The second reason is that this introduction gives me the opportunity to thank Roland for being an old friend of TSE, where he spent three years, has visited regularly, and sits on the scientific council. Many of you know him and appreciate his advice and integrity. And finally, today gives me the opportunity to thank Roland for everything I've learned from him and for his longstanding friendship. That really means a lot to me. Let us give a big round of applause to Roland Bénabou, winner of the 2021 Jean-Jacques Laffont Prize. Thank you, Jean, for this incredibly nice introduction.
It's a great and humbling honor to be here on this occasion, and also a special pleasure for many reasons. As Jean mentioned, my family and I spent many wonderful years in Toulouse, and several important things started here. Two of our children were born here during our first stay, so we have two Toulousains in the family. The second is a line of research, very interconnected and in fact very similar to what I will be talking about today, that Jean and I started when I first visited in 97-98. Much of what I will talk about today on the theoretical side owes much to this collaboration. I'm also particularly happy about the association with Jean-Jacques Laffont, for many reasons: first because he made everything possible, including this amazing building and the Manufacture before that. Just like our children have grown, I've seen TSE grow from one building on the campus, to the Manufacture, to now this amazing place full of amazing people. Also, Jean-Jacques and Colette, as well as Jean and Nathalie, were super helpful friends when we were in Toulouse, on all these occasions and especially when we were struggling with our newborn twins. And finally, it's a great pleasure to see you all here again, in particular Jean, with whom I have an ongoing, or several ongoing, research agendas. So thank you very much. How much time do I have, an hour? Okay, all right. So, let me start with a little bit of historical perspective, or overview, on how economists have been thinking about beliefs. For a long time, I would say until the mid 70s, the way we modeled agents, people, forming expectations and other beliefs was as fairly unsophisticated.
The way you tried to forecast the future was to look at the past and do some kind of naive extrapolation or adaptive learning. That was the way things were done. Then from the mid 70s to the mid 80s, and still going on today, there was a revolution, or two, linked to rational expectations and game theory. Now the way we modeled economic actors was as hyper-sophisticated statisticians and strategists who did not know everything, because information is costly to acquire, but whatever information they had, they used very efficiently and very strategically. That gave rise to so-called rational expectations in macroeconomics, and then to sophisticated refinements of Bayesian equilibrium and dynamic equilibrium in game theory, and that is still, to be fair, largely dominant in the discipline today. Then the pendulum swung, with a heavy influence coming from psychology, the so-called cognitive revolution, introduced into economics largely due to Kahneman and Tversky and others. In this behavioral literature, at least in its first wave, agents became, to be a little oversimplifying, dumb again, or at least very imperfect statisticians and very naive strategists. They're prone to all sorts of mistakes that they can't help making when dealing with probabilities and with contingent thinking. Here I've mentioned a few: confirmation bias, base rate neglect, the hot hand fallacy. If you look up heuristics and biases on Wikipedia you'll find over 100 of them listed. This is also what Danny Kahneman refers to as System 1 thinking. What I want to talk about today is a literature that has been growing since, I guess, maybe the 2000s, both theoretically and experimentally or empirically, which is generally known as motivated beliefs, motivated cognition or motivated reasoning; the layman's translation would be wishful thinking.
And here the idea is that yes, people depart from rational expectations and Bayesian updating, etc., but not, or not just, because they are imperfect though still cold-blooded statisticians; rather, because of desires, because of emotions. That is, many beliefs that we hold, we hold because we want to hold them, even if they don't fully accord with reality: they are more pleasant than the alternative, or they help us function in everyday life, get out of bed, finish a paper, and so on. I have to say that initially this was, and to some extent still is, strongly resisted or even met with hostility in some quarters of traditional economics, still very wedded to rational expectations, the Bayesian way of thinking, and so on. The idea that people try to deceive others, that's very standard; but that they deceive themselves, that was kind of hard to accept. And on the other side, in the first wave of behavioral economics, the idea that it's not just limitations of the cognitive machinery, but that there is a kind of motive behind it, was also resisted, maybe because academic psychology was traumatized by the possible link to Freud, repressed memories, unconscious desires, etc., which had a bad name. Today, this literature has progressed, and what I want to do today is give you a little overview of it, switching back and forth between glimpses at the theory, which I've worked on with Jean in particular or in some cases by myself, but which is all very closely linked together. And especially I want to emphasize the experimental and empirical work that has been done, which has both nourished the development of the theory and provided tests of the theories and ways of guiding it one way or the other; none of this work is mine, nor ours.
It's the work of a lot of exciting people, including very young people. Okay. So when I talk about motivated beliefs or wishful thinking, what are they going to be about? They could be about most things that we care about, and "care about" is important. First, going in concentric circles, the self: how talented, intelligent, attractive, moral am I? Am I going to be successful or a dismal failure? Sick or not? What kind of a person am I, what's my identity, and so on. Then, beliefs about how the world works: either smaller groups of people, which is where you get into stereotypes, or larger groups, like society. What are the causes of inequality, what are the causes of social mobility, what is moral or immoral, are other people trustworthy or not? These are very important beliefs on which we base our decisions every day, and therefore whether these beliefs accord with reality, and what drives them, are important questions. There's a lot of evidence, and I guess today it may be obvious when you look at politics, conspiracy theories, anti-vax movements, etc., that people to a large extent believe what they want to believe, or try to believe what they want to believe. But as I said, at least in the academic literature 20 years ago, this was somewhat heretical. So our view is that beliefs are not just signals for better decisions that you try to get as right as possible given the information you have. People treat them as assets, as valuable possessions that they treasure, that they invest in, that they defend, sometimes at the cost of their lives. Now, in order to operationalize this, there are two hard questions that have to be answered. The first one is: how do you know what people believe?
Well, you can ask them, but if the question is about how smart they are, how honest, or whether crime went up under Trump relative to Obama or vice versa, you're likely to get either a careless answer or a strategic answer. So the way experimentalists and experimental economists typically go about this is to make people put their money where their mouth is. You say you're more intelligent than average? Are you willing to bet on it? You're going to take, let's say, an IQ test, and then you're going to bet, and either win or lose money on your coming out on top, or in the top 50%, or something like that. You say that crime went up under Obama? Well, we have the statistics; are you willing to bet on it? When I describe this literature I'll be talking about incentivized elicitation of beliefs, and that's exactly what it means: we ask you what you believe, and in fact we ask you to bet on it. First, that makes you care about it; second, that makes it costly to lie. Next, we also like to see whether, if you have these beliefs, you act on them outside the lab in your daily life: in how you invest your portfolio, in your educational decisions, in your fertility decisions, and so on. This takes us outside the lab and into the field. Third, and maybe trickiest, and this is where theory is perhaps most useful: once we know what you believe, how can we tell if these beliefs are objectively justified, or if they are motivated by some desires, emotions, etc., and what those motives are if they are there? This is where, in some sense, both the theory and the empirics have been very complementary: the theory pointing to symptoms, predictions of what you would see if the beliefs are motivated, and the empirics going on to test them.
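As an aside, the logic of incentivized elicitation, why betting makes it costly to misreport, can be sketched with a standard proper scoring rule. This is a generic illustration, not a description of any specific experiment in the talk; the 70% belief is a made-up example.

```python
# Sketch: under a quadratic (Brier-style) scoring rule, a subject who is
# paid 1 - (outcome - report)^2 maximizes expected earnings by reporting
# their true subjective probability. The belief p = 0.7 is hypothetical.

def expected_payoff(p, q):
    """Expected quadratic score when the true belief is p and the report is q."""
    return p * (1 - (1 - q) ** 2) + (1 - p) * (1 - (0 - q) ** 2)

p = 0.7  # subject privately believes a 70% chance of scoring above median
reports = [i / 100 for i in range(101)]
best_report = max(reports, key=lambda q: expected_payoff(p, q))
print(best_report)  # → 0.7 (truthful reporting is optimal)
```

The expected score is strictly concave in the report and peaks exactly at the true belief, which is what makes the bet an honest window into what the subject actually believes.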
Just to give you a little flavor of this, not from the lab, let me mention an empirical study in the health domain that will bring these questions to the fore. There's a terrible neurological disease called Huntington's disease; it's genetic. If you have it, you will get worse and worse through life, and you will die relatively young. If one parent has the gene, then you start with a 50% chance of developing the disease during your life. If you see a doctor once you start having symptoms, then based on your symptoms they can say, well, I think the probability is now 60%, 70%, 90%. Another way to find out: there's a test you can take that is 100% accurate, which will tell you from day one whether or not you carry the gene and therefore will develop Huntington's disease. So in this paper the authors looked at a set of patients who had a parent with the gene, and they looked at their beliefs. In this case the beliefs were not incentivized, but we'll see how they compare to reality: their beliefs about the likelihood that they have the gene, or that they have the disease. I hope you can see this on the screen. On the left-hand side is the motor score; this is a score assigned by the doctor, based on which they update the probability that you have Huntington's disease. The actual probability starts at 50% when you have no symptoms, and then it rises toward 100% as the motor score gets worse and worse. Now, among the people who carry the gene but haven't been tested, what are their stated beliefs about the likelihood that they have it? They start below 40%, and they don't evolve much even when the objective probability is close to 100%. There's even a substantial fraction, like 10%, who say that for sure they don't have it, that the probability is zero.
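The objective probabilities in this figure are just Bayes' rule at work: starting from the 50% prior, each increment of diagnostic evidence multiplies the odds of carrying the gene. Here is a minimal sketch of that updating; the likelihood ratios are made-up values for illustration, not numbers from the study.

```python
# Sketch of the objective updating in the Huntington's example:
# an at-risk patient starts at a 50% prior, and increasingly diagnostic
# motor symptoms push the probability toward 100%. The likelihood
# ratios below are hypothetical.

def bayes_update(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * LR."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

p = 0.50  # prior: one parent carries the gene
for lr in [2, 3, 5]:  # symptoms of increasing severity (assumed LRs)
    p = bayes_update(p, lr)
    print(round(p, 3))  # → 0.667, 0.857, 0.968
```

The contrast in the study is that while this objective probability climbs toward 1, the stated beliefs of untested carriers stay stuck below 40%.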
Now, as I said, this is what they say; do they really believe it? We'll have to see whether they act on it. One thing that they could do is take the test. Presumably you'd want to know if you have this disease: you cannot do anything about the disease itself, but there are many things about your life that depend on whether you have it or not. And you can see that the fraction who get tested is very, very low; it's never above 6%, even when people have possible, likely, or even certain signs. So clearly people don't want to know, or choose not to know, and that's one way in which they can maintain these kinds of implausible beliefs. Now, is it just something that they say to make themselves feel better, while deep down they really know that they have it? Well, we can look at their major life decisions: pregnancy, retirement, divorce, recreational changes, financial changes. We can compare the decisions of the people who have the gene but haven't been tested, so they can still tell themselves they don't have it, to the people who don't have this HD expansion, and to the people who are sure that they have it because they took the test. When you know you have it, you make some major changes. If you're a woman, you're much more likely to have children, because later it'll be too late. There's an increase in divorce, an increase in retirement, since your horizon has shortened; there are major financial changes, and also changes in recreational activity, which may also have to do with your physical condition. So there are very important decisions that depend on whether you have this gene and will develop this disease or not.
The people who are uncertain, who have not taken the test and who, rationally, should see some probability of having it between 50% and 90%, but who say they think it's 40% or below, make very few changes. So this non-updating, this stickiness of their beliefs, translates into a stickiness of behavior that is clearly, or presumably, suboptimal. This is to introduce the importance of beliefs, and of the decisions based on them as a way of seeing whether they matter or not. So here's the outline of the talk. I'm going to present a kind of unifying framework, that's a big word, or a way of thinking about self-deception in the broad sense, or motivated cognition, and how it differs from these heuristics and biases. Then let's focus on individual decisions, like for these patients: what does the theory predict or suggest, and what's the evidence from the lab, the social science lab, concerning beliefs about IQ, beauty and politics, maybe the three main things you want in life; I don't know, maybe money is another one. Then, not from the lab but from the field: we're going to look at people in their work or real-life environment, at their beliefs and decisions concerning managers, career choices, financial investments. Then, as Jean has already previewed, I will move on to collective beliefs, and the question here is: when do these distortions become contagious or not? Again, a little bit of theoretical thinking, and then some related tests from the lab, again IQ or team performance, where you can already see the collective aspect, and then from the field, something about the housing and financial crisis. And then if there's time, but probably there isn't, I'll talk about another big belief, namely religion.
Okay, here I think I'm going to go relatively quickly because I've already suggested it. The way to set up this kind of framework is to ask: why might people want to hold beliefs that are not objective, not the most precise beliefs that they can arrive at given their information and given their cognitive limitations? In standard theory, that's what they would do, because it makes for better decisions. There are two main reasons. One is that beliefs are something that you consume; Schelling talked about the mind as a consuming organ. Some beliefs, some thoughts, are more pleasant or more scary than others, having to do with self-esteem, your ego, your identity, or with how the future will turn out, good or bad. Am I holding scary beliefs or reassuring beliefs, hopeful beliefs, and so on? The second reason you might have distorted beliefs is that they help you function: slightly overestimating your chances of success might be a good way of getting you to work, to persevere, to invest when things get tough. Another motive, which is also instrumental: if I want to convince others that I'm honest or smart, it might be useful to first convince myself, and then I will be more convincing. This is what I refer to as the hedonic and the functional value of beliefs. Now, once you have the why, then comes the how: how can you distort your beliefs? Beliefs are typically formed and updated by acquiring and processing information. So there are several ways you can do it. One is to just not want information, like not getting tested for the disease. The second, when you do have information, because we get it all the time in life, is to pay attention selectively, or recall selectively, or interpret that information in a biased way. That's what I call distorting your beliefs. And the question, of course, is ultimately: is it good or bad?
As I've explained, these motives lead to belief distortions which are quite different from the cold, mechanical biases and heuristics, which are of course also very important. Here, instead of something mechanical, there are emotions and desires that interact with cognition; these are not pure cognitive failures. When you change the motives, whether economic or psychological, that drive these belief distortions, you will see the beliefs respond, because they're not just mechanical and built in. Another implication, which has been verified in the literature: unlike for mechanical biases and heuristics, smarter and more educated people are not less likely to give in to motivated beliefs, sometimes even more so. And as I said, that's consistent with another line in psychology that, coming after the cognitive revolution, has reemphasized the role of emotions. So, a little bit of formalism, but that's all there will be, drawing on some papers by Jean and myself, and then on a paper by myself. Let me start with the instrumental motivation to distort your beliefs. Here's a nice quotation from William James, the father of modern psychology, that illustrates the idea: "Believe what is in the line of your needs, for only by such belief is the need fulfilled. Have faith that you can successfully make it, and your feet are nerved to its accomplishment." Nobody writes like that anymore, unfortunately. You can just read this, or for those who like these things, you can look at the little model outline here. Somebody has a task to perform: write a paper, persevere, do their homework, whatever. It is costly, and the payoffs are in the future; the payoffs are also uncertain, depending for example on your talent, this variable theta, and also obviously on your effort. Normally you trade off the cost versus the expected future payoff. But when it's time to do the homework, or finish the paper, or do the hard work...
It's very tempting to do something else, very tempting to slack off. You may be subject to temptation, and so this cost might feel magnified, and you might work too little or persevere too little. So unless you can tie your hands in advance, like Ulysses, to persevere or not succumb to temptation, one thing that you can do is convince yourself that the rewards are really larger than they are. Because the cost appears larger than it should, you can offset that by convincing yourself that the reward, or the probability of success, is larger. That takes us to the information-processing stage. A person gets some information about the likelihood of success, or the project's value, and it could be a high signal or a low signal, a green flag or a red flag. The relevant case is when it's a red flag, a low signal, which is clearly not going to improve your motivation. Now, you can acknowledge that your chances of success are low, and then that's going to compound your laziness; or you can interpret the signal in a different way, or forget that you saw it, or read it as a high signal, and that may improve your motivation. So there's a trade-off here between the cost of the errors you might make from overestimating your ability or probability of success, and the extra motivational boost that it gives you. That's what the model formalizes. The other motive, the hedonic value of beliefs, is not that they help you achieve concrete economic goals, but that they are just more pleasant, or less scary, to hold. This is a very similar framework. Again, there's an action that is costly; there's a long-run payoff, which depends on your action and on some unknown state of the world. Will the market go up or down? Is our leader smart or dumb? Is this a good industry to be in or not?
And then, from now until you find out exactly what your fate is, you're going to be thinking about it. You're going to be forming expectations of your final well-being; let's say every night you're going to be wondering whether things will turn out well or not, and experiencing either hope and happiness or dread and anxiety at these thoughts. Clearly, hope and happiness are better than dread and anxiety, so you have, again, a motive to be optimistic, to make yourself feel better or less scared until the final outcome, but of course at the possible cost of making the wrong decisions. And the way to do it is, again, either not to acquire information at all, just in case you might get a bad signal, or, if you did get a bad signal, to misinterpret it, forget it, and so on. So here you're really consuming beliefs, and it's this consumption, rather than instrumental value, that causes you to possibly want to distort your beliefs. And here's another quotation that illustrates that. So what comes out of this kind of model? Again, there's a trade-off between having accurate beliefs and having either pleasant or useful beliefs. Depending on the terms of this trade-off, you're going to be more or less of a realist. What I'm going to call realism is relevant when you've got some bad news: you're maybe not that smart, or that project is maybe not that good. How are you going to deal with that bad news? Well, you can be a realist and, with some probability, acknowledge that this is the world you live in, and therefore maybe you shouldn't invest any more and waste your time on this. Or you can be in denial and say: no, this was irrelevant, or I didn't see it, or I forgot it, or the referee was biased, and I will succeed.
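The realist-versus-denial trade-off just described can be put in numbers. What follows is my own bare-bones simplification, not the actual Bénabou-Tirole model: all parameter values are assumed, and anticipatory utility is collapsed into a single constant enjoyed only by the believer.

```python
# Toy sketch of the trade-off after bad news (all numbers hypothetical):
# a realist accepts the signal, abandons the project, and feels no hope;
# denial buys anticipatory utility now at the cost of continuing to
# invest in a project that rarely pays off.

s = 2.0      # anticipatory utility (hope) from believing in success
c = 1.0      # cost of continuing to invest after the bad signal
v = 5.0      # payoff if the project actually succeeds
p_bad = 0.1  # true success probability given the bad signal

u_realist = 0.0                  # abandon: no hope, no cost, no payoff
u_denial = s - c + p_bad * v     # keep hoping and keep investing

print(u_denial > u_realist)  # → True: with these stakes, denial "pays"
```

Rearranging, denial is chosen whenever the hedonic gain exceeds the net material loss, s > c - p_bad * v, which is exactly why the model predicts more denial when stakes are small, resolution is far off, or anticipatory feelings loom large.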
Depending on how much you care about, or how sensitive you are to, these anticipatory feelings, anxiety, hope, etc., or equivalently how much you need to motivate yourself to get out of bed and persevere, that will affect your degree of realism. If you don't have these psychological motives, you'll be a full realist; then at some point you'll start becoming less and less of a realist, until you reach full denial. And of course, where this curve lies will depend on the cost of mistakes and possibly on other things. Let me go directly to the main predictions of this kind of model. What it typically leads to is asymmetric responses to good news and bad news: good news I accept; bad news I may deny or forget with some probability. So we'll be looking for evidence of biased recall, biased awareness and biased updating of beliefs. Or, as I mentioned before, maybe I just don't want to know at all, because I'm afraid of getting the bad news; so we'll also look for evidence of willful blindness, people who actually pay not to know. Since we can vary what I call the stakes, or the motives, we should be more likely to see this for decisions for which the cost of mistakes is small. That's kind of obvious, but one very important such decision is voting: if I hold wrong political beliefs, because of ideology or whatever, I will in some sense vote in a way that's not in my best interest, but that cost is very small, because I don't make much difference to the election. More interesting: issues on which the final resolution is further in the future, because that's when you pay the price for having these more pleasant beliefs; tasks for which perseverance is more of an issue; and, related to this, fixed or long-lasting forms of capital: intelligence, health, attractiveness, etc.
cultural capital: things that have to do with me, that are going to last for a long time; if they're good they're good for a long time, and if they're bad they're bad for a long time. And relatedly, especially if we think about finance, illiquid assets: assets that I cannot unload or sell if I get bad news about them, things that I'm stuck with. So this is what I call stakes-dependent beliefs, and that's going to be one of the recurring tests of motivation: when I change your stakes, whether economic or psychological, do I see your beliefs change when they shouldn't change? They should just reflect what information you have. Out of this you can also get so-called endowment effects, and you can get escalating commitments: if I have a lot of, let's say, wealth, I can convince myself that this is going to make me happy. And once I've convinced myself of that, or of the same thing about human capital, I'm going to get more of it; and then, now that I have more of it, I want to convince myself even more that it's going to make me happy, and so on. In the end it can actually be a trap, it can be self-defeating, related to what psychologists call the hedonic treadmill. Okay, now let me turn to evidence. As I said, I'm going to focus on recent or even brand new papers. This one is not brand new but it's recent, and the title says it all: the good news-bad news effect. It's going to show you how we can elicit these beliefs, make people put their money where their mouth is, and then see how they respond to information, rationally or not. So we're going to bring subjects into the lab, and they're going to take an IQ test, so everybody gets a score. And we can rank them within groups of ten: this one is first, that one second, that one third, and so on. Or, instead, we can rank them on beauty, as rated by members of the opposite gender through some kind of speed-dating exercise.
Okay, so that's what these guys did. Or, instead of an intelligence rank or a beauty rank, we can give them a random rank. We don't tell them what their rank is, but initially we ask them: what do you think your rank in this pool of subjects is, your IQ rank if you're in the IQ condition, or your beauty rank, etc.? And then we make them bet on it, as I mentioned, to put their money where their mouth is. And then we're going to give them a bit of information and see how they respond to that information. So, not once but twice, we compare two subjects. Let's say I'm the subject, and I learn that, compared to another randomly chosen subject, my IQ test score was higher. So now you ask me again: what do you think your rank is? I should report a better rank; if you tell me that my score was lower, I should report a worse rank. And Bayes' rule makes specific predictions about by how much you should adjust. Then you get another signal of this kind, you won or you lost this contest with another subject, and we ask you again where you think you are in the distribution. In the third stage we say: okay, we have these ranks, we can tell you your IQ rank or your beauty rank. Would you like to know? If you pay a little money, we can tell you; do you want to pay? Or, in the other condition: we're going to tell you unless you pay us a little money; do you want to not know? So what does this experiment lead to? It's kind of illustrated by this graph. On one axis is the predicted change, how much they should change their beliefs, and on the other is how much they actually change their beliefs; this panel is for good news, and this one is for bad news. And the thing that you either can read off the graph or not, I don't know, is the following. This is done for the beauty condition, the IQ condition, and the random rank condition, where I don't care if you give me a random rank of nine out of 10.
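As an aside, the Bayesian benchmark that these adjustments are compared against can be sketched as follows. This is my own stylized version, assuming groups of 10 and a uniform prior, not the authors' code:

```python
# Hedged sketch of the Bayesian benchmark in this design: a subject holds
# beliefs over their rank r = 1..10 (rank 1 = best) in a group of 10,
# then learns they scored higher than one randomly drawn other subject.
# If my rank is r, I beat (10 - r) of the other 9 subjects, so
# P(win | r) = (10 - r) / 9.

def update_on_win(prior):
    """Bayes-update beliefs over ranks 1..10 after winning one comparison."""
    n = len(prior)
    likelihood = [(n - r) / (n - 1) for r in range(1, n + 1)]
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

uniform = [0.1] * 10
posterior = update_on_win(uniform)
# Winning shifts mass toward better (lower-numbered) ranks:
print(posterior[0])  # weight on rank 1 rises above 0.1
print(posterior[9])  # weight on rank 10 falls to 0
```

The experiment then compares subjects' actual belief revisions to this kind of benchmark, separately for wins (good news) and losses (bad news).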
What we see is that people update close to rationally for positive signals, and they under-update for negative signals when it is something they have a stake in. This is something you can see here: these are negative signals. If it's the control condition, you respond fairly strongly, more or less how you should respond. If it's the beauty condition or the IQ condition, it is much flatter: you under-update in response to negative signals. The other thing they show, at the end of the experiment, is that if you have arrived at relatively optimistic beliefs about your rank in this beauty contest, you're willing to pay to learn exactly where you are. If you have arrived at relatively pessimistic beliefs, you're willing to pay not to learn where you are. So we see the two predicted signs from the theory: asymmetric updating and information aversion. Okay. And there's another experiment that finds similar things. Now, here's another one in a different domain, which has to do with political beliefs, and something very relevant these days: fake news. So let X be some objective number. It could again be a loaded number, like before it was your IQ or beauty rank, or it could be something politically loaded. Was unemployment higher under Obama than under Bush? Did crime go up or down after Texas passed its gun law? Etc. These are all objective questions on which people form beliefs, and they can be loaded or not loaded; a neutral one would be, you know, what is the length of this river? And again, we elicit each subject's guess about X. Here, in fact, we're going to elicit a specific guess, the median guess: that is, the guess such that they put 50-50 probability on the truth being above it or below it. So you make a guess; you're never sure that it's correct; the truth could be above or it could be below.
Okay, if you say that the unemployment rate under Obama was 15%, the truth is very likely to be below that; if you said it was 0%, the truth is very likely to be above that. So we ask you for the number such that you're equally willing to bet that the truth was higher or lower than the number you state. Okay, so we get that initial median belief. Then the subject gets a message, which comes from one of two computers. One computer always tells the truth: if X was actually above your guess, it tells you X is above your guess. The other computer always lies, always gives you fake news: if X was below your guess, it says it was above, and vice versa. And you don't know which: with probability 50-50 you get the fake news, and with probability 50-50 you get the real news. Then we ask you to put your money on whether you think you got real news or fake news. And after that, we ask you again: what is your best guess, your median guess, of unemployment under Obama, or whatever. Okay. Now, this is a very nice design, because we can ask: how do subjects assess the reliability of real and fake news, depending on whether or not the news agrees with what we think are their likely political motivations, which here is going to be whether they are Republicans or Democrats? And then, how do their beliefs respond to fake news or real news, again depending on whether it aligns or not with their political desires? And it's a good test, because for anyone who's rational, for a Bayesian, and even for a non-Bayesian who makes most of the usual mechanical mistakes in updating, there is zero information in a message that with probability 50% tells you that reality is above your median guess and with probability 50% tells you it is below. That's the definition of your median guess. So there's nothing to infer from such a message, either about the reliability of the source, or for updating what you think the unemployment rate was.
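This zero-information property is easy to verify with Bayes' rule; the following check is my own sketch of the argument, not the paper's code:

```python
# A minimal Bayesian check that a message "X is above/below your median
# guess", coming from a 50/50 mix of a truthful and a lying computer,
# carries zero information (my own sketch, not the authors' code).

P_TRUTHFUL = 0.5  # prior that the message came from the truthful computer
P_ABOVE = 0.5     # by definition of the median guess, P(X > guess) = 0.5

# Joint probabilities of receiving the message "X is above your guess":
# the truthful computer says "above" exactly when X really is above;
# the lying computer says "above" exactly when X is below.
p_msg_truthful = P_TRUTHFUL * P_ABOVE              # truthful and says "above"
p_msg_lying = (1 - P_TRUTHFUL) * (1 - P_ABOVE)     # lying and says "above"
p_msg = p_msg_truthful + p_msg_lying

# Posterior that the source is truthful, given the "above" message:
post_truthful = p_msg_truthful / p_msg
# Posterior that X really is above the guess, given the "above" message:
post_above = (P_TRUTHFUL * P_ABOVE) / p_msg

print(post_truthful)  # 0.5 -- no update about the source
print(post_above)     # 0.5 -- no update about X
```

Both posteriors equal the priors, so any systematic response to such a message is evidence of motivated, not Bayesian, reasoning.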
And indeed, when you ask about something neutral in this online experiment, most people don't update much about the source's veracity, or about the question at hand; they understand that there's no information there. So now ask them a sensitive question. What are sensitive questions, or sensitive topics? News that Republicans are likely to want to believe, and Democrats the opposite. Well: US crime got better under Obama; a message that suggests this is what we'll call pro-Democrat. A message that the reform didn't decrease homicides, or maybe even increased them, would be a pro-Republican one, etc. So the design is that we ask you a question such as: was the crime rate higher or lower under Obama than under Bush? Now give us your estimate of what it was under Obama. Then you get the news, fake or real, and then you update; then we give you another question, and so on, and we go down this whole list of politically sensitive topics. And then there's a personally sensitive topic that we have seen before, which is based on an IQ test: are you above or below the average? Let's see. Well, we see this prediction of stakes-dependent beliefs verified. I don't know if you can read the legends, but they're explained here. Subjects who affiliate with, or just say that they lean, Republican believe, with their money on it, that the computer source is the truthful one when the message is pro-party, in terms of these motivations that we've lined up, and that it is the fake one when the message is anti-party. So this is the probability that they assign to the source being truthful, relative to the mean, so the mean is zero.
And you see that if the message is anti-party, this probability is lower than average, and if it's pro-party, it's above average; both Republicans and Democrats do this. Moreover, the more politically identified you are, the higher your stakes in this view of the world, the stronger the effect: these are the partisan Republicans, these are the moderate Republicans, these are the partisan Democrats, these are the moderate Democrats. So the more you have at stake, the more distorted your beliefs about the reliability of the source are, and this is also something you can see on the right-hand side. And this, again, is something that cannot be an unmotivated bias. And they find this for everyone, on all but one of these questions. Before, I was showing you the average, but you can look here at climate, mobility issues, crime, gender, gun laws: as long as these little segments don't intersect the vertical line, it means they find a statistically significant effect. And except for the question of whether the media are controlled by Democrats, you find that there is motivated reasoning on all these topics, and you also find it, again, for your own IQ. Interestingly, these biases are common to both men and women, except this one, which is specific to men. And it ties in very well with, and can possibly even explain, the standard finding that men are more overconfident than women about their abilities: they interpret information that says the contrary as fake news, and information that validates it as real news, whereas women, in this context where both signals are meaningless, treat both types of information the same. And then, following the message, so this was about whether you think it was real news or fake news, you can say: okay, now that you've had this news, and you think it's real or it's fake.
Tell us again your best estimate of climate, mobility, whatever these numbers are. And you find that people follow the message when it is pro-party: they revise their beliefs toward the message when it is pro-party, even if it's fake, and in fact it's more likely to be fake when it's pro-party; and they don't when it is anti-party. Okay, so that's a very nice experiment. If you think back to the framework, part of the way in which you mess with the information you're faced with is by not recalling the bad news as well as the good news. So the literature has actually tested this channel, looking not just at asymmetric updating but at asymmetric recall. We're going to see it first, again, in the lab. It starts like the other one: we bring in subjects, they take four questions from an IQ test, and they take it for money, so there's an incentive to do well. Then, two months later, to let memory fade a little bit, we call them back, and we show them again the four questions that they faced, plus two that they had never seen, and we give them the answers. Did you answer correctly, incorrectly, or did you not see this question, or maybe you can't remember at all? You get rewarded for a correct response, you get penalized for an incorrect response, and you get zero if you can't remember. Given the design, there are various types of mistakes you can make, eight possible types of recall errors. The first they call amnesia: the true state S is either I got it right, I got it wrong, or I didn't see it, and amnesia is, I got it right or wrong, but I say that I didn't see it.
Confabulation is when the reported state S′ differs: I got it right but I say I got it wrong, which is going to be very rare, or I got it wrong but I say, and bet, that I got it right; again, you're betting on your answer to the question being correct. And delusion, of which there's a little bit, is: I show you a question that you've never seen, and you say, oh yeah, I remember that question, I got it right. Okay, so here's amnesia, and you can make these errors in the positive or the negative direction. The gray bars are positive amnesia: I got the question wrong, and I forget, and I say that I don't remember seeing this question. And this one is negative amnesia: I got the question right, and I say that I don't remember seeing it. You can see there's much more positive than negative amnesia. Confabulation, same thing: if you got it right, you're very unlikely to claim that you got it wrong, but if you got it wrong, you're fairly likely to claim that you got it right. And as you can see here for delusion: among the people who are shown questions that they never saw but say that they recall seeing them, most say I recall it and I got it right, and very few say I recall it and I got it wrong. And again, you're betting on this, with incentives. The lab is interesting because you can control everything, but it's a little far from the real world, so I'm going to show you two experiments from the real world which I think are very nice. They're both about memory.
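Before moving to the field evidence, the recall-error taxonomy above can be sketched as a small classifier. The labels follow the talk; the exact coding scheme is my own stylized assumption:

```python
# Stylized taxonomy of recall errors in this design (my own sketch).
# truth:  what actually happened ("right", "wrong", "unseen")
# recall: what the subject reports ("right", "wrong", "unseen")

def classify(truth: str, recall: str) -> str:
    if truth == recall:
        return "accurate"
    if recall == "unseen":
        return "amnesia"        # saw the question but claims not to remember it
    if truth == "unseen":
        return "delusion"       # never saw the question but claims to recall it
    return "confabulation"      # misremembers right as wrong, or wrong as right

print(classify("wrong", "unseen"))  # positive amnesia: forgetting a failure
print(classify("unseen", "right"))  # delusion: "I saw it and got it right"
print(classify("wrong", "right"))   # self-serving confabulation
```

The finding is that the self-serving cells (forgetting failures, "remembering" successes) are far more populated than their mirror images.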
This one takes managers who run stores, I think it's in Germany, relatively large stores. In this company, which is a national chain, part of the way they're paid is through important bonuses: every quarter there's a tournament, and the best-ranked managers get the bigger bonuses, and so on down the line. So it's not the experimenters handing out payments, it is the company, and that's real pay for these people. And these bonuses are important: they can range up to 50% of the money income of these managers. Every quarter they have a meeting with their own manager where they discuss their performance and their ranking, and therefore your bonus is going to be this and that. So there's a lot of feedback, they're facing well-defined performance criteria, not some strange IQ test or beauty contest, and it's a familiar activity. And there's a lot of data: the researchers observe 31 quarters of performance and feedback. Okay, so now we're going to have these managers predict their own performance next quarter, and, without telling the firm, because that's confidential, we can also ask them to recall their performance last quarter. And then we're going to measure various traits of the managers, and again, it's all for money. So here you can see in this table what I summarized at the beginning, namely that there's significant over-optimism with respect to the actual later performance: this is the actual performance later on, this is the predicted performance; this is the fraction of people who under-predict their performance, and these are the people who over-predict it, and there are many more who are overconfident. Or you can compare their prediction to what would be a rational prediction, given the 31 quarters of data that we have on them and all the information that we have on them.
And there you see an even more extreme degree of overconfidence: 47% are above the objective forecast and only 31% are below it. So how do they arrive at this? Well, through memory. We ask them to recall their performance in a previous quarter, and there's no reason for them to lie: we have that number, so they know they can't fool us, and there's money at stake. And you can see here the actual rank in this previous tournament and the reported, remembered rank. You can see that most of the dots are above the diagonal. Forgetting is more prevalent for worse ranks; you can see it over here: these are the bad ranks, and 68% recall a better-than-actual rank, and the size of the deviation is pretty large. So we see distorted beliefs, and we see asymmetric updating, or asymmetric memory I should say, and in fact there's a link between the two: the more distorted the memory, the more that explains the optimistic forecasts. Likewise, if performance was bad, errors in recall skew towards better-than-accurate performance, and so on. And biased predictions about the future can be largely explained by biased recall. Okay, let's go to another field setting; let's go now to Kenya. In this experiment they look at 4,000 people, it was part of a bigger experiment, mostly in farming regions, and they look at their fertility decisions, another major life decision. They're interviewed for many purposes, once when they're about 22 and then 10 years later when they're about 33. For our purposes here, we ask them, when they're 22: how many children do you have, and, if you could choose exactly, how many children in total would you like yourself or your partner to give birth to, including those who have already been born? So: how many children do you want over your whole life?
And there's also a neutral question: who is the vice president? Then, 10 years later, we say: okay, now how many children do you have? And recall when you told us how many children you wanted: how many did you say you wanted? And maybe we give you some money if you get it right. Finally, information is offered: at the end, do you want us to tell you how many children you initially planned to have 10 years ago, or do you want us to not tell you? And also: do you recall the vice president? Okay, and we see a very similar pattern here, and the tables are kind of similar. The average desired number of children at 22 is 3.3. When you interview them 10 years later, they have about 4.0 children, and there are many years of fertility left, so it's likely to be much higher in the end. And when you ask them how many they wanted, they say, well, on average, I wanted 3.7. So they were over-optimistic relative to what turned out, and they recall things in a biased way. In fact, as you can see from these little red lines, the more children you have, the more you recall wanting children. So you're really rationalizing having more children than you initially planned to. This is true for both men and women, I guess a little bit more for men. And when you run a regression trying to explain how people recall things, well, you recall things 40% according to the truth and 60% according to how things turned out: you rationalize at 60%. You see it especially here: this is mostly driven by those for whom the actual number of children is much larger than what they stated initially, for example here, looking at past desires. So, among people who have five children, on average they wanted fewer than four, but they tell you that they wanted 4.5. Finally, you can pay them to see if they will recall better. If you ask them about recalling the vice president,
that works. For recalling your own past fertility, that's over here: this is without monetary incentives, this is with monetary incentives, and you see recall gets more accurate, but only for those who don't have excess fertility, those for whom the number of children is more or less what they said they wanted. For those who have excess fertility, you can see that whether you pay them or not, they're not going to remember accurately; they're going to remember that they wanted what they now have. And finally, you can say: if you ask, at the end I will tell you what your initial plan was, or, in another condition, I will tell you and give you some money. Okay, will you ask? You can see, again, that those who have no excess fertility, who were more or less on target, are much more likely, 60% likely, to ask, especially when there's money involved; those who have excess fertility, even if you pay them, don't really want to know. What are the motives that could be driving this biased recall and these biased beliefs? Well, we can infer them relatively simply: either you rationalize what happened, or you have to face the fact that you're not really in control of your own fertility, that you wanted three children but you have four, so at least one child is not wanted, and probably there are more down the line. How am I doing on time? Five minutes. Okay, all right. So let me talk a little bit about social cognition. Now we're going to go from individuals to organizations. If you look at the reports that follow major corporate disasters or other disasters, like the Columbia space shuttle crash, or financial disasters and so on, we find a recurrence of a certain number of words: denial, optimistic thinking,
wishful thinking, deluding themselves, etc., including, in the words of economists, this famous sentence about the ability of investors and governments to delude themselves. But we don't have models of investors and governments who delude themselves; we don't have models of optimistic organizational thinking, or of denial, and so on. So this was the point of this extension of the framework: to ask, when agents interact, when are their beliefs, or their recollections, likely to spread or not spread? So you extend the framework by having agents who interact: the payoff of each one depends on their own effort with a weight alpha, and on the average effort of others with a weight one minus alpha. And again, the uncertain state is: is this a good product, a good market, is the technology of the rocket sound, or whatever. And now what happens is that the stakes you have in things turning out well, namely the difference between theta being good versus bad, depend not just on theta and what you do, but also on what others do, and therefore on what others think. So the optimal way of thinking now depends on how other people think, and I'm going to ask: how will my beliefs depend on other people's realism or denial? Since I'm running a little out of time, I'm going to go directly to the main lesson, because I want to show you at least one or two experiments. And the idea here is: if the fact that other people put their heads in the sand when they see a bad signal is good for me, then I'm less likely to put my head in the sand, because reality isn't great, but at least they keep working and they're making things a little bit better. But if, when they put their heads in the sand, it makes a bad situation even worse, the firm is going to go bankrupt, or the rocket is going to explode, or we're going to go to jail, etc.,
and I can't escape, I'm tied into this group, then I'm going to also put my head in the sand, and then you'll put your head in the sand, and so on. This is what I call the Mutually Assured Delusion principle: when you would want optimism to spread, when it would be socially beneficial, it doesn't spread; and when it's harmful, that's when it spreads. And that's how you get harmful groupthink. You can extend that to hierarchies, and in hierarchies you'll have contagion not horizontally but from the top: if the people at the top are deluded, those delusions will trickle down to the people who depend on them; if the people at the top are realists, the realism will trickle down. And there's a bunch of other applications. So, let me show you one or two quick experiments that have started to test these motivated beliefs in a social context. This one is called the social exchange of motivated beliefs, and again it's going to use IQ, which is a favorite because (a) it's easy to measure and (b) it's something people really care about, having a high IQ. So subjects take these tests, and then we assign them to a green or a red group. In the treatment called 'motivation', where you want to believe something, the green group is those with an IQ score above the median, so you want to be in that group, and the red group is those below the median. In the 'no motivation' treatment, there's a 50-50 chance I assign you to red or green. In the second phase, you're going to provide your own estimate of the probability that you're in the green group. We don't tell you, but you're going to tell me: what do you think is the chance that you're in the green group or the red group? You're going to put in your estimate over here, and then you can adjust it continuously.
And then, at some point, in what's called the exchange or communication treatment, you're going to be paired with another person who is in the same group as you. You don't know which group, neither of you knows, but both of you have beliefs. And now you can not only express your beliefs but also see their beliefs, you can see their slider. So if I thought there was a probability of 40%, and I see that they think we have an 80% chance of being in the green group, maybe I'll adjust up. If it's the opposite, I see that the other one says 40%, well, I should adjust down. But again, adjustments up and adjustments down are going to be very different. And then there is communication. So here you see: this is when communication, or exchange, starts. They separate optimists and pessimists; optimists are people who put a relatively strong likelihood on being in the top group, the green group. In phase one, when you're by yourself, everybody starts a bit over-optimistic, but the beliefs kind of stabilize: some people become optimistic, probably they did well on the IQ test; these people probably didn't do so well. And then you allow them to communicate their beliefs, that's all they're doing, they're communicating about their beliefs, and there's this massive gap opening here for the pessimists: the people who typically did not do well and are pessimists, when they are paired with somebody else, kind of collectively talk themselves up. So, this is when you cannot communicate, when you see only your own belief; this is when they exchange beliefs, observing each other's beliefs. You see that the pessimists go up, especially in the low-IQ group, and to some extent also in the high-IQ group. So the potential exchange causes beliefs to partially converge, but very asymmetrically: adjustment is mostly upward, driven by pessimistic subjects who had bad performance. Optimists don't adjust downward;
it's only pessimists who adjust upward. And this, again, happens only when it's a group you care about, when it's IQ-based, not when it's random-based. You can also see it here: this is when communication starts; all the green arrows are people who adjust up, the red arrows are people who adjust down, once communication or exchange is feasible. You can see here how social learning worsens the bias on average. It's the exact opposite of the wisdom of crowds. The wisdom of crowds is, typically, the coins in a jar: I'm not very good at guessing the number, but if we all pool our guesses, we're going to come pretty close. This is the exact opposite, because you care differently about your IQ than you care about coins in a jar. Okay. Let me skip this one, finish with this last piece, and conclude. Let me now go outside the lab again; this one is not experimental but real-world. This is a paper, coauthored by my colleague Wei Xiong, that revisited the housing bubble and the collapse and asked the question: was it all due to bad incentives, or was it bad beliefs, over-optimism? The standard story is that it was mostly bad incentives that led people to take excessive risks and deceive others, while the government stepped in to socialize the losses, etc. And the other story is: well, was this really evil-doers taking advantage of lax regulation and of the government? Well, to some extent, surely, but we'd like to know what they believed. So what they did was track 400 of the people who were most active in the securitization industry, who were packaging these collateralized mortgages. And they track them, and they link that with: are these guys buying a house, buying a bigger house, or selling a house, and at what point during the crisis?
And they compare them to control groups of equally sophisticated people who were not insiders to the housing securitization market: either equity analysts, or lawyers who have nothing to do with housing. What you can see here on this graph is the price of housing, the bubble and the collapse, in various cities. And these are the transactions of the securitization insiders in black, the equity analysts in this one, and the lawyers in gray. Basically, what you can see is that these so-called insiders bought at the peak and sold at the trough, their own houses. So if we take this as the revealed preferences of beliefs, they really thought that the market was going to keep going up; that's why they kept buying at the peak. And they didn't think it was going to crash; that's why they only started selling well after the lawyers and the equity analysts, who were not insiders. Okay, let me conclude. Motivated beliefs and reasoning are ubiquitous and important; as Mark Twain said, denial ain't just a river in Egypt. Theory provides a way to identify them: information avoidance, asymmetric updating, asymmetric memory. They're different from 'cold' biases and heuristics, but they very often hide behind them or complement them: you can say it was just a mistake, but really it was a motivated mistake. People trade off the costs and benefits of these distortions, and both can be large. This is a good thing for experiments too, because you can manipulate the costs and benefits and see if beliefs respond in the predicted way; that's how you identify motivation. It also suggests a more promising channel for de-biasing people, including from conspiracy theories, than just saying, okay, let me give you more and more information. No, they already have plenty of information; they're just using it in a motivated way. So you want to work on the motivation. And then there's the social cognition aspect, which is that these things can become contagious.
And I've shown you an experiment, this exchange of beliefs, where optimism about IQ becomes contagious. Finally, I hope this illustrates the strong complementarities between theory, experiments and empirics, and also across the social sciences: economics, psychology, sociology, and to some extent political science. Thank you.