Good morning everybody. My name is Mike Hanley, I'm the head of digital communications here at the World Economic Forum, and I'm really excited to introduce Molly Crockett, an associate professor of psychology at the University of Oxford, and Philip Tetlock from the University of Pennsylvania, author of the book Superforecasting. We're here to talk about forecasting failure, because we're at the beginning of 2017, and last year, I think, was a watershed year for polling, forecasting and seeing into the future. I'm going to first ask Philip to give us an overview of some of the insights that came out of what's seen as a great forecasting failure: the victory in the US election of President Donald Trump. What did the forecasters get wrong?

A useful place to start is by dissecting the concept of forecasting failure. Most people, when they make forecasts, even when they say seemingly decisive-sounding things like "Trump doesn't have a chance" or "there's no way Brexit's going to pass", don't typically mean a probability of zero or a probability of one. They typically mean that it's extremely probable or extremely improbable. In forecasting you're only decisively, logically wrong when you say a probability of 1.0 and it fails to happen, or a probability of zero and it does happen. Now, I think there were some people at Davos in January 2016, according to Bloomberg, who came pretty close to saying things like that. So they should presumably take somewhat of a credibility hit, at least on that dimension of forecasting. One of the very best poll aggregators in the US is of course Nate Silver and his FiveThirtyEight site. A couple of days before the election he was putting a probability of about 70% on Hillary Clinton being the next president of the United States. Some other poll aggregators, like Sam Wang at Princeton, were up around 99%. They had some pretty good statistics.
The aggregators had some pretty good reasons for believing that Hillary would indeed win, and Hillary did indeed win the popular vote by a significant margin. But she lost the electoral college by a significant margin as well. Now, Nate Silver was putting a 70% probability on a Clinton victory, and he was one of the most accurate of the poll aggregators. There were a few forecasters who put probabilities of a Trump victory close to 50%, but there were very few. And the question is: how wrong was Nate? Should we count that as a forecasting failure? When Nate Silver said 70%, a few days before the election, was he wrong? That turns out to be a very difficult question to answer. Nate Silver has been making hundreds, if not thousands, of political forecasts over the last several years, and we know that his methodology is fairly well calibrated. We also know that prediction markets are fairly well calibrated. What does that mean? It means that when you look at all the times they say something is 70% likely, those things happen about 70% of the time, which means that 30% of the time those things don't happen. In my work with the US intelligence community, in which we study lots of forecasters, the very best forecasters, we call them superforecasters, are wrong a lot. We live in a world where there's a lot of irreducible uncertainty. The very best forecasting systems are the ones that are virtually perfectly calibrated: when they say 70%, things happen 70% of the time; when they say 90%, they happen 90% of the time. And there are going to be conspicuous cases in which even perfectly calibrated systems look wrong.
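Calibration, as Tetlock describes it here, is straightforward to check once a forecaster has a track record of probability-plus-outcome pairs. A minimal sketch in Python; the bin size and the toy ten-forecast history are illustrative assumptions, not data from the panel:

```python
from collections import defaultdict

def calibration_table(forecasts, bins=10):
    """Bucket probabilistic forecasts and compare each bucket's average
    stated probability with the observed frequency of the event.

    forecasts: list of (probability, outcome) pairs, outcome 0 or 1.
    """
    binned = defaultdict(list)
    for p, outcome in forecasts:
        # 0.70 and 0.74 both land in the same "about 70%" bucket.
        idx = min(int(p * bins), bins - 1)
        binned[idx].append((p, outcome))
    table = []
    for idx in sorted(binned):
        pairs = binned[idx]
        stated = sum(p for p, _ in pairs) / len(pairs)
        observed = sum(o for _, o in pairs) / len(pairs)
        table.append((stated, observed, len(pairs)))
    return table

# A well-calibrated forecaster: of ten 70% calls, seven resolved "yes".
history = [(0.7, 1)] * 7 + [(0.7, 0)] * 3
for stated, observed, n in calibration_table(history):
    print(f"stated {stated:.0%}, observed {observed:.0%} (n={n})")
```

The transcript's point survives in code form: even in this perfectly calibrated history, three of the ten "70%" forecasts look wrong when judged one case at a time.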
And if you throw out a forecaster or forecasting system every time it's on the wrong side of maybe, you're never going to have a well-calibrated forecasting system, because it is in the nature of the political world that there is irreducible uncertainty. Now, I watched Nate Silver trying to explain to one of these comedy show hosts after the election why the 70% probability might not have been wrong, and he was mercilessly ridiculed. People don't get it. People don't think that way. But that is the current state of the art.

If Nate Silver wasn't wrong, were there other forecasters who you can say were wrong?

Well, I think the interesting question is how much of a credibility hit a forecaster or forecasting system should take when it says there's a 99% likelihood of something happening and it doesn't happen. Obviously, there's somewhat of a hit they should take. How much of a hit is going to hinge on how extensive a forecasting track record they have. If it's an extremely well-calibrated system, the hit's going to be much smaller. If that's the only forecast you've ever seen them make, you probably aren't likely to ever believe them again.

So if I'm looking at a forecast and it says 70%, and I go to sleep feeling quite comfortable, is that my emotional state interpreting that number?

You really shouldn't go to sleep feeling quite comfortable. If you were playing Russian roulette with a gun that had 10 possible places for bullets, and you knew that in three of those places there was a bullet, would you feel quite comfortable putting it to your head and pulling the trigger? Of course not. A 30% chance of a Trump victory is a non-negligible probability.
A lot of commentators said that the shock and awe that resulted from both the Brexit referendum and Trump's victory, the driving force of the victories in those cases and the shock of the losers, was down to emotional responses. Would you agree with that? The commentary layered over the top of the polls?

I think the election was so close, the polls were so close, and the possibility of correlated measurement error causing a cascade in the battleground states like Pennsylvania, Wisconsin and Michigan was real enough, that both sides thought there was a non-negligible probability of losing. Trump apparently told his family on the night of the election to prepare for a rough night. Paul Ryan apparently told Chuck Schumer, you know, I look forward to working with you as majority leader of the Senate. I think both sides had some expectations. The Republicans were more likely to expect to lose, because the data were pointing in that direction, and the Democrats were more bitterly disappointed, because they quite rationally had a higher probability of winning. So I think things unfolded pretty much the way psychologists might expect.

That's right. Molly, of course, you're an expert in moral decision making, and a lot of the commentary that followed both of those votes was around the emotional drivers of the decision makers, of the electorate. Which bits of that commentary struck you as accurate, around people shaping their identities and using their vote as a signal or a message to their communities?

I think what was very clear to me in the aftermath of both Brexit and Trump is just how powerful a motive for behavior it is to express one's moral views, to assert one's identity. In many cases it outweighs what we might call economic self-interest.
I think a lot of predictions focus on economic self-interest because it's easier to measure, and it's much more challenging to try to quantify and model moral emotions, moral outrage. But as is becoming very clear, these emotions are incredibly powerful motivators, particularly in cases where people may feel that their voices are not being heard and may use their vote as an expression of those emotions.

And what are some of the ways that psychologists are trying to help economists, perhaps, or forecasters integrate some of these ideas into their work?

We're doing a lot in economics and psychology to try to get a better grip on how to quantify and build models around these emotions. One way we're doing this is by taking the view that decisions don't take place in a vacuum, and they don't take place in isolation. They take place within the context of social relationships. When people are making decisions, they're thinking not just about their own preferences, but about the preferences of those around them, their friends, their family, and about how their own decisions are going to reflect on their values and their embeddedness in these social relationships. There's a lot of research going on in this area, and I'm hopeful that we will become better at predicting these things in the future.

And what are some of the ways that those ideas are getting integrated into the science of forecasting or polling?

Well, I'm not quite sure. But one thing that seems to me like perhaps a good direction is to recognize that, particularly around issues that are contentious or controversial, we should consider the nature of the social relationship between a pollster or a journalist and the person who's being asked to express their view or make a prediction.
Because people care very much about their social image, their responses are going to reflect not just their true preferences, but also their concern for how they look in front of the journalist, or whoever's asking them the question.

There was a lot of talk about the hidden Trump voter, the hidden Leaver, who wouldn't declare their allegiance because it would signal to the journalist or to the pollster a particular type of political orientation. Was there some evidence of the hidden Trump voter, Philip?

A little bit. In the political science polling literature there was something called the Bradley effect, which pertained to Tom Bradley, an African American candidate for Governor of California, and the overestimation of his polling support when he lost. There have been a lot of studies of the Bradley effect attempting to quantify how big a bias it is, and it proves to be a pretty elusive and small effect, I think it's fair to say. In the Trump case, I think there probably was a small effect, and that would have been quite sufficient to produce the outcome we're talking about today, because a small systematic bias in the polls in the critical battleground states could easily have produced the quite decisive electoral vote victory that Trump had, despite the fact that he had a popular vote defeat.

So Molly, can you think of some ways that pollsters might ask different questions that would get to the source of the truth better?

Yeah, this is an issue that we deal with a lot in the study of moral decision making, because when we do research in this area, clearly just asking people "how good a person are you? how moral are you?" is not necessarily going to yield an accurate response. As I mentioned earlier, people care about their image, and they're motivated to project a better view of themselves than may be the case.
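Philip's point just above, that a small systematic polling bias across the battleground states would have been quite sufficient, can be made concrete with a toy Monte Carlo. Every number here (the leads, the electoral votes, the error sizes) is a hypothetical illustration, not actual 2016 polling data:

```python
import random

# Hypothetical battleground states: (poll lead in points, electoral votes).
STATES = {"PA": (2.0, 20), "WI": (3.0, 10), "MI": (2.5, 16)}
NEEDED = 46  # suppose the trailing candidate must flip all three states

def simulate(n_trials=100_000, shared_sd=2.0, local_sd=2.0):
    """Estimate how often the trailing candidate wins when polling error
    has a shared (correlated) component that hits every state at once."""
    random.seed(0)  # deterministic for the sketch
    flips = 0
    for _ in range(n_trials):
        shared = random.gauss(0, shared_sd)   # same error in all states
        votes_won = 0
        for lead, votes in STATES.values():
            local = random.gauss(0, local_sd)  # state-specific error
            if shared + local > lead:          # polls overstated the leader
                votes_won += votes
        if votes_won >= NEEDED:
            flips += 1
    return flips / n_trials

# 2.83 ~ sqrt(2^2 + 2^2), so per-state error is the same size in both
# runs; only the correlation between states differs.
print(f"correlated:  {simulate():.1%}")
print(f"independent: {simulate(shared_sd=0.0, local_sd=2.83):.1%}")
```

With a shared error component, one polling miss cascades through every battleground at once, which is why a modest bias can flip a seemingly safe electoral map far more often than independent state-by-state errors would suggest.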
So one potentially fruitful avenue would be, instead of asking people what their own preferences are, to ask them their perception of the views of those around them in their community.

That lets people off the hook: "Billy would vote for Brian; I would never vote for Brian."

Exactly, it lets people off the hook. They don't have to commit to expressing a view that may be controversial, but they can share their knowledge of what others around them believe. This will get at preferences in a couple of ways. First, we know that people are more likely to believe in outcomes that they want to see happen. They're optimistic, so looking at what people believe can give clues to their own preferences. Second, we know that people project their own beliefs onto others: if I believe X to be true, I'm more likely to believe that others will also believe X to be true. So maybe by asking people not just what their own views are, but what their perceptions of those around them are, we could build better models of collective decision making.

Philip, how is forecasting changing, and what will the impact of 2016 be on the science of forecasting?

I think there's a slow movement toward greater accountability and greater transparency in forecasting. I think people, even elites, are growing somewhat weary of vague-verbiage forecasting, in which nobody can really figure out what anybody is saying. If I say there's a distinct possibility Putin's next move is going to be in Estonia, that's a very safe prediction for me to make: if Putin moves into Estonia, I can say I told you, distinct possibility; and if he doesn't move into Estonia, I can just shrug and say I only said it was possible. I think there's a growing awareness that the vague verbiage in which we express most expectations about political events today makes it virtually impossible to assess who is closer to or further away from being accurate.
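Replacing vague verbiage with explicit probabilities is precisely what makes accuracy scoreable. The standard measure in Tetlock's forecasting tournaments is the Brier score; a minimal sketch, with two toy track records made up for illustration:

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and outcomes:
    0 is perfect, 0.25 is always saying 50%, 1 is confidently wrong."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# A calibrated forecaster who says 70% and is right seven times in ten
# looks wrong on three individual cases, yet scores well over the record.
calibrated = [(0.7, 1)] * 7 + [(0.7, 0)] * 3
hedger = [(0.5, 1)] * 7 + [(0.5, 0)] * 3  # always says "maybe"
print(brier_score(calibrated))  # ~ 0.21
print(brier_score(hedger))      # 0.25: pure hedging scores worse
```

A phrase like "distinct possibility" maps to no number at all and so can never be scored this way, which is exactly the safety Tetlock is describing.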
I think that's one of the reasons why it persists: it's a very comfortable position to be in, it keeps you politically safe. But I think there is growing pressure inside many organizations, in finance, in intelligence analysis and elsewhere, to create systems that allow you to keep track records, and it's only if you have a track record that you can make meaningful claims about forecasting failure. A well-calibrated system that says 70% is going to be wrong around 30% of the time, and if you focus on those individual cases, you're going to get a really skewed, misleading picture of forecasting. So you need to create systems like forecasting tournaments for monitoring accuracy over the long term. The World Economic Forum has a collaborative project with an organization I'm affiliated with, the Good Judgment Project, in which they're launching a forecasting tournament. The Arab Strategy Forum, which I think has some connections to WEF, is also doing that, and I know the U.S. intelligence community is doing versions of this. So a number of organizations are moving in this direction. It's a more evidence-based approach to forecasting, and I think the world will be better off having greater transparency in the process. Right now we're sort of groping in the dark, and the events of 2016 underscore that.

Indeed. Coming back to this emotional question: there's a lot of talk, you mentioned the elites, and the opposite of that these days seems to be the anger of those who are not the elites. There was a lot of talk about the anger of the electorates last year. As a psychologist, Molly, what is it that makes people angry, makes populations angry, and how can leaders and decision makers factor those emotions into their decision making and actions?

One of the biggest drivers of anger is inequality, and the unfortunate reality is that there's tremendous inequality in the world today.
I read yesterday, in a report recently out of Oxfam, that something like the top eight richest people in the world own more than the bottom 50%. And not only is there a tremendous amount of inequality in the world today, but that inequality is so visible, because everyone has access to the Internet, Instagram, social media. The massive scale of inequality is more evident than ever before. Decades of research in psychology have shown that when people are confronted with inequality, when they're on the losing end of a bad deal, they will often behave destructively, and they'll do that even if the destruction hurts them as well. A lot of people would rather burn it all down. They would rather have nothing themselves if it means they can eliminate the inequality and level the playing field, so that everyone is on a lower level.

Right. So there's inequality. What are some of the other things that make people angry?

Feeling disrespected, feeling that their views are not taken into account, feeling that those in power are contemptuous towards them, feeling ridiculed. And I think there has been a lot of that in politics.

Are there ways that forecasters can take this emotional aspect into account? Is there research going into measuring populations' moods and helping decision makers move things forward because of that?

I was just thinking about Molly's answer, and I'm wondering: is inequality the driver of the events of Brexit and Trump, or is it the culture wars, immigration, things of that sort? If I had to bet on what the biggest driver was, it would be more culture and immigration and symbolic identity issues than economic inequality per se. Keep in mind that the macroeconomic models of presidential elections were viewing it as close to a toss-up. It wasn't clear. And the economy has improved relative to what it was in 2009.
I mean, the Democrats had a pretty good track record of bringing the economy back. Unemployment was pretty low in the United States, in the 5% range. Now, the percentage of the potential workforce that's actually working is far from an all-time high. But there's a definite feeling that those in power are reaping the greater proportion of the rewards, and that is an underlying theme.

And as you mentioned, the culture wars, immigration: certainly some of President Trump's electoral themes were around that, and certainly the Brexit vote seemed to turn on that kind of emotion.

I see these issues as really intertwined. I think the culture wars are using inequality as a tool to stoke outrage, and the issue of immigration also relates, I think, quite strongly to inequality. One of the more revealing accounts that I read after the election was of a young woman who had voted for Trump. She was very angry because she's working three jobs, barely getting by, barely able to feed her kids, and she sees immigrants coming in and receiving government assistance that she's not qualified for, because she's just over the threshold. Again, people are looking around them, comparing themselves to others, not just to elites, but to those in their immediate vicinity, in status. These are really primal emotions. They tap into very ancient circuits in our brain; there's evidence that concerns about inequality are present to some extent, perhaps, in our primate ancestors. So I see the anger around immigration and the issues in the culture wars as being intimately related to inequality. Not the same, but all wrapped up together.

Certainly.
Here in Davos, of course, we gather together leaders from all walks of life, and they have been branded the elites and accused of living in a bubble, partly because of the emotional surprise, or response, to the events of last year. One commentator has been widely quoted as saying that the Davos consensus is always wrong. So I guess my question to both of you is: how can groups of decision makers make themselves more aware of the emotions and the forces that are driving events? If everybody sees what they want to see in a forecast, how can they avoid the frames in which they view things? If a forecaster gives me a forecast, how can I apply an objective screen to it rather than a subjective screen?

Well, I would say something to reassure people at Davos. Ken Rogoff was joking, obviously, but the notion that Davos man, that's the Samuel Huntington phrase, is always wrong, that's actually not true. It's very, very difficult to do worse than chance. Whether Davos man is more accurate than the dart-throwing chimpanzee is another question, but always wrong? That's not true.

They say that the best predictor of tomorrow's weather is today's weather.

That's true. Experts are extremely hard pressed to beat a simple extrapolation algorithm that predicts a continuation of more of the same. The dart-throwing chimpanzee experts can beat a little bit, but beating a simple extrapolation algorithm is extremely difficult.

Which is where you get the surprise around Brexit and Trump, because it's not more of the same.

Predicting change is the great challenge, whether it's central bankers or intelligence providers, and it's at predicting change that most systems fail. Of course, change is what we're most interested in predicting. Hence the disappointment in forecasting.
I might just ask you for a last comment, Molly, on how decision makers can make themselves more aware, whether it's of the employees in their company or of their family, of the emotional state of those constituencies.

That's a great question. I would just finish by pointing out that we know from research in neuroscience that being in a heightened emotional state, being highly aroused, being stressed, directly impacts the brain systems involved in predicting outcomes and making decisions, and in many cases throws a wrench into things: it increases uncertainty, increases volatility and noise in the system. So to the extent that people can cultivate a sense of calm, a caring and socially aware calm, the better the decisions they'll make.

Philip, do you see the reputation of pollsters and forecasters reviving over the course of the next year or two?

I will make a bold forecast about forecasting. I think the trend will be toward increasing transparency and rigor in monitoring forecasting, and that will be done through a combination of forecasting tournaments and prediction markets. And once we have that in place, people will be less likely to misinterpret particular outcomes like Brexit or Trump.

Very good. Are there any pressing questions from the floor?

Isn't the biggest mistake we're making that we're unwilling to admit that some things are just not predictable? In other words, we're taking Trump and we're taking Brexit because each is one event and you just have to guess yes or no. But for other things, like will a war occur, will a peace process succeed and so on, aren't there too many factors? It's really impossible to predict, and it's kind of pathetic that we're even thinking that it is possible.

Well, you seem to have taken a pretty strong position on the futility of forecasting.
I would argue that you can't live with it and you can't live without it. All forms of policy planning assume forecasting. Anyone who has a policy preference on anything is making an implicit forecast, an implicit conditional forecast. So you're not going to get away from forecasting. The question is how explicitly you're going to do it, and whether you're going to try to get as much juice out of the system as you can. Ten years out, virtually nobody does appreciably better than chance. But predicting well-specified outcomes within a narrower time frame, you can achieve increments in forecasting accuracy. They're not huge, but they're palpable, and a 10 or 20% increment in accuracy matters a lot within a three-month to 18-month range, which I think is the sweet spot for improving forecasting.

A quick one: I read last week that a French newspaper was going to do away with opinion polls in the run-up to the French election. Do you think this is a good idea?

Very French. I think we're out of time. Thank you very much.