The beauty of the pupil: there is a fact about it that is not widely known, and it's very unusual. The pupil normally contracts and dilates rhythmically; it's known as hippus, I think. But when people are engaged in a task, when you assign them, say, a multiplication task, the pupil dilates and stays steady as a rock; the hippus is gone. So the measurement noise is eliminated. I don't know what the mechanism is, but it's absolutely obvious when you watch it. Measurement noise is eliminated when people are engaged in a task, so the pupil is more sensitive than the other autonomic indices. That's right.

So the title of your book is Thinking, Fast and Slow, and you talk about two systems, System 1 and System 2. Can you give some examples, or tell us a bit about the characters in the book?

Well, the characters are indeed System 1 and System 2. System 1 corresponds to a distinction that everybody recognizes in their own thinking: there are some thoughts that just happen to you, and there are some thoughts that you must generate. A lot of mental life is completely effortless, and some of mental life feels like work. That distinction is obvious, and people recognize it. Now, how you label it turns out to be quite important. The proper labels would be Type 1 and Type 2, and there would even be a third type, because I'm not sure that effortful reasoning, self-control and the inhibition of responses are really exactly the same thing; they're probably distinct. So there are two or three types of responses. It turns out that learning about types is very difficult and thinking about types is very difficult, but the brain seems to be wired to think about agents. So when you describe System 1 and System 2 as agents that do things, people find it easy, compelling and interesting, and System 1 and System 2 develop personalities.
And so the personality of System 1 is that it does everything, it does everything quickly, and most of the time it's right. But it doesn't recognize its own limitations, so when it encounters an ambiguous situation, it makes a choice. And when it doesn't know the answer to a question, it answers a related question. It's never stumped, or very rarely stumped, by simple questions. System 2 is a different operation; it gets mobilized when System 1 encounters difficulties.

So you mentioned the difference between System 1 as things that happen to you and System 2 as things that you do. Can you give us some examples of the two systems?

Sure. When I say the word "mother," you probably have images of your mother, and you certainly have an emotional reaction; that's something that happens to you. When I say "two plus two," a number comes to your mind. You didn't bring it there; it just came. It happened to you. And in fact most of mental life is like that. The words that I utter when I say a sentence just come to me. Occasionally I will stop and choose a word; that's System 2. But most of the time, when I speak, the words just come. That's System 1. As for System 2, there are really two types of operation that it performs. One is complex computation, and that is where the pupil dilates; this is mental work. Mental work is involved in, say, a short-term memory task. If I ask you what your previous telephone number was, you'll work, and your pupil will expand by about 30 or 40 percent of its area as you retrieve it. Then there is self-control, the inhibition of impulses, when you are choosing your words carefully because you don't want to offend. Those are situations in which System 2 is hard at work, and you feel it.
So System 1 and System 2 really correspond to experiences that are readily available and that everybody recognizes. The distinction between something happening to you and something that you do is, I think, pretty compelling to most people.

How does the dichotomy you've drawn between System 1 and System 2 relate to your previous work on heuristics and biases?

Well, it turns out that when Amos Tversky and I started our work, we had something in mind that was fairly similar to that. We were interested in intuitive statistics, in the estimates that come to people's minds about probabilities and so on. Now, in many of these cases, we were both teachers of statistics, so we were testing our own intuitions, but we knew that we could compute. In our very first paper, we distinguished intuition from computation, and our point was that intuition is, in some cases, surprisingly error-prone, and that people should rely on computation. So that was the beginning, but we never studied what I now call System 2. Then our work became controversial, and people attacked and criticized it. There was something that essentially all the experimental criticisms of our work had in common: they created a situation in which people could figure out the answer by working on it. That was really the background. So in the very last paper that Amos Tversky and I wrote together, we answered one of our most persistent and well-known critics, Gerd Gigerenzer, and we pointed out what typically happened in his experiments. Well, how would I describe it? One of our best-known examples in heuristics, and one of the best examples in the heuristics literature, is the Linda example. So Linda is this young, well, not so young, woman.
She's about 30 years old now, but I tell you that when she was a student, she was an activist and marched in all the marches. I didn't say feminist, actually. Then we ask people how likely it is that Linda is now a bank teller, or how likely it is that she is now a bank teller and active in the feminist movement. Now, there's no question that when you ask different people those two questions, they will invariably say that it's more likely that she's a feminist bank teller than a bank teller. But when you ask the same people the two questions together, to compare the two options, you're allowing System 2 to check the logic. By priming logical reasoning, you can sensitize people so that they will detect that, obviously, she is more likely to be a bank teller than a feminist bank teller. That seems to be a different process. When people see only one example, they evaluate the fit of that example. When you show them two things together, they can also compare them, and you provide another cue. That was really the background to the distinction between the two systems and the controversy around our work. It was an attempt to resolve that controversy by pointing out that if you do it between subjects, the way the world is, so that people make judgments intuitively about things as they happen, you get those effects. And you can make them disappear by allowing logic to play.

Now, there's been a lot of airtime around the idea of the 10,000 hours of expertise. Is there anything to that figure of 10,000 hours?

I have no idea, really, about the 10,000 hours; I'm just a consumer of that research. Ericsson, who has promoted this figure, is a highly reputable researcher. But it's a crude approximation, I'm sure. I mean, there's nothing magical about 10,000, and I'm sure that it doesn't take the same amount of time for different people, and expertise is not well defined, and so on.
But it gives you the idea that it takes a lot of hours to become an expert, to see that qualitative change in the way things are done, where performance basically switches from what I call System 2 to System 1. That takes a long time. How many hours? I'm not committing myself, and I don't know.

One of the goals of the course is to cue people to the difference between people who are actual experts and people who simply claim to be experts. Is there anything people should watch out for, any red flags, to tell the difference between people who actually know or can do what they claim?

I mean, Gary Klein and I wrote a paper in which we actually suggested an answer. It's embarrassingly simple. When somebody acts like a self-confident expert on a range of problems, there's one question to be asked: did that person have a decent opportunity to learn how to perform the task? And that requires getting feedback on the quality of performance, rapid and unequivocal feedback. In the absence of rapid and unequivocal feedback, expertise is just the self-confidence that comes with a lot of experience, and that is uncorrelated with accuracy. This is something we've known for 50 years or more.

So if somebody wanted to become an expert at a new task, what's the fastest and most efficient way to turn that effortful System 2 processing, as you said, into System 1?

Well, there are really two ways of doing this, and you have to use both. You have to use System 2: for somebody to become an expert driver, you have to tell them how to drive, and for somebody to become an expert diagnostician on the basis of X-rays, you have to teach them what those things look like so that they'll be able to recognize them. But then you also need a lot of practice with high-quality feedback.
So merely telling people how to do something is not going to turn them into experts, and repeatedly telling them the same thing is not going to help. It's a lot of practice with feedback that creates real expertise. But you can shorten the time it takes to reach expertise through high-quality instruction about which cues you should be paying attention to, so that you actually know what discriminates the two categories, an abnormal scan versus a normal scan. Gary Klein has a beautiful example. He talks of a nurse in a cardiac ward who comes home, talks to her father-in-law, as I recall, and says they have to go to the hospital, because he doesn't look good to her. And it turns out that yes, he had to go to the hospital; he was in deep trouble, and twelve hours later or something he was on the operating table. So Gary Klein, who I think is the main guru of this type of enterprise, did what he and others do: he found out what the cues were, although she was not aware of the cues she was using. He found that when arteries are getting obstructed, which will lead to a heart attack, the pattern of the distribution of blood in the face changes. She had learned that pattern, but she didn't know what it was. Now, when you are training nurses, you can show them the pattern.

The title of the course is The Science of Everyday Thinking, and what we are trying to do is provide people with the ability to think more clearly, argue better, reason better; I suppose, to learn to use System 2, to be more analytic, to unpack, to read more carefully and so on. Do you have any advice for somebody in the course who is trying to improve their everyday thinking?

Well, you know, my advice would be quite conservative. Pick a few areas, a few things where you want to change what you are doing, and focus on those.
I mean, do not expect that you can generally increase the quality of your thinking, because I think you really cannot. But if there are repetitive mistakes that you are prone to make, and if you learn the cues, the situations in which you make that mistake, then maybe you can learn to eliminate them. The history of enterprises like yours is that they are not always successful. People feel great when they hear of all these ways of doing things and of controlling themselves, but then when they are making a mistake, they are so busy making it that they have no time to correct it. One of the reasons for my skepticism is that I do not think my thinking is very much better than it was 40 or 45 years ago when I started doing this work, so that suggests some humility. So pick your shots, pick a few areas, and in those situations that you recognize as situations where you are prone to make a mistake, slow yourself down. One piece of advice, by the way, is to recognize situations where you cannot do it alone, where you need a friend, where you need advice, because if you do it alone you are going to make a mistake.

The nature of System 2 is that it is effortful, that it is something that you have to do. That is hard, and obviously, as you mentioned, there is the problem of getting people motivated enough to engage System 2. A lot of people have the tools and everything they need in order to make better decisions, or to learn a new task, but it is just a matter of putting in that cognitive effort, a bit of elbow grease, and actually making it happen. Do you have any advice for how to make that cognitive effort seem a little less effortful?

No, I am not sure I know how to make it less effortful. It is going to be effortful. What you can do is illustrate the costs and benefits of investing some effort. By the way, there are large individual differences.
So Keith Stanovich, I don't know if he is on your list.

He is not. He was on the list, but we couldn't catch him.

You couldn't catch him. Keith Stanovich has a whole program of research distinguishing between what he calls intelligence and rationality. Rationality is, in effect, the ability to deploy System 2 where it is needed and to interfere with the mistakes that System 1 is apt to produce. And you find that some people are not particularly rational although they are intelligent, and vice versa.

That is one of the hardest tasks, just getting people to use what they have available to them.

Well, it is actually one of the things you can recognize. I have worked a lot with anchoring. That is a phenomenon where somebody puts a number in your head and it looks plausible after a while. In fact, this is the way our mind works: we hear something strange, we try to make sense of it, and trying to make sense of it makes us more prone to believe it. Anchoring is a suggestion effect that is very powerful. But you can recognize when you are being anchored. If you are in a negotiation situation and the other side offers an outrageous number, you could become anchored, and that is worth resisting. That is one example. Another example is that when you make explicit predictions, like whether a young professor will eventually get tenure or not, remind yourself that the base rate of tenure is very important in that story. That is a System 2 kind of judgment.

In the beginning of your book, you talked about your relationship with Amos, a very productive and, it sounds like, outstanding working relationship. How could you make that happen in a workplace, in order to facilitate a productive environment where ideas come freely? Can you describe the nature of that?

Creating a productive environment is very different from creating exceptional collaborations. For the productive environment, I think there are some recipes, and they are really well known.
You have got to create many opportunities for people to bump into each other so that they can exchange ideas. You have got to encourage the exchange of ideas between people who are not in the same field; Steve Jobs was famous for the suggestion of having very few restrooms in a building, to force people from different units to meet each other on their way to the restroom. That is a recipe that works for encouraging the exchange of ideas. Many university departments and research centers in the UK used to have, though it is diminishing, coffee in the morning and tea in the afternoon: 30 minutes when everybody would be in the same room at the same time. I think that is enormously productive. Now, how do you get an exceptional collaboration going? I do not think there is any recipe for that. If you are lucky, it happens to you. I was very lucky.

What is next? This was Matt's question. You have written this book, and we know a lot more than we did about the differences between System 1 and System 2, about training, and so on. If you are looking at the landscape of the judgment and decision-making field at the moment, what do you think is worth paying attention to?

I am very skeptical about forecasting; that is very evident in my book. I think people have no idea what the future will be, and I am no exception. I have really no interesting forecast; I have never tried to forecast the future. But there is something very obvious that is happening: the tremendous spread of neuroscience and the merger of psychology and neuroscience. There you can make a confident prediction, because so many very bright young students are going into that field and betting their careers on it. You know that for the next 15 years there is going to be a lot of work on neuroscience and decision making, and on neuroscience and various aspects of psychological functioning.
So that prediction is a no-brainer. More complicated predictions I cannot make.

Do you think that is promising?

I have always been a believer. There are some people who are by nature skeptics and other people who are by nature believers, gullible even. Amos was very strongly on the skeptical side, and I am on the gullible side. I tend to have enthusiasm and to believe that new things are going to be productive. So among my close friends I am the most enthusiastic about neuroeconomics and that sort of thing, but my close friends who are more Amos-like need more proof.

We are presenting students with the Cognitive Reflection Test, which should be interesting with 200,000 people taking the course, to see what the difference is between fast and slow thinkers. Maybe we should mention the Cognitive Reflection Test.

Yes, you can mention it. By the way, you know that it was done by Shane Frederick. We actually had the bat and ball: he put the bat-and-ball example in an article that we wrote together. My Nobel talk was based on a paper that Shane and I had written together; it extended that paper, and the bat and ball was one of Shane's many contributions to that work.

There has been a lot of work since the bat-and-ball problem trying to pin down exactly the nature of the differences. Do you think that is reasonable?

Keith Stanovich in particular has recently come up with demonstrations that yes, it is related to self-control and to what he calls rationality, so he treats it as a test of rationality. Shane is more ambivalent about whether this is very different from intelligence. And then there is a massively embarrassing result: there are gender differences that nobody wants to see. Nobody really believes that men are more rational than women, and yet men do better on that test than women, by a lot. It's not a small effect.
Now, my wife, Anne Treisman, is a well-known psychologist and a National Medal of Science winner and all that, and she was completely uninterested in those puzzles. She suspects that women are much less interested in puzzles and much less competitive in that particular way; it looked trivial to her, and she wasn't going to put a lot of work into it. Whereas I've always been one who, show me a puzzle, and I'll go to work on it.

So what does success look like? In your book you mentioned that you'd like to equip readers with the vocabulary and jargon of judgment and decision making, to at least help them recognize when they might be in this minefield. What does success look like at the end of this process?

For what I was trying to do, success is always measured, I think, by whether you have changed the language, and I was very explicit that changing the language was the objective. To a significant extent this has been successful: System 1 and System 2 are now part of the language, to the dismay of many psychologists who don't like the idea of systems as agents and would have preferred Type 1 and Type 2. Had I tried Type 1 and Type 2, they would not have become part of the language. So success is new words: people understand anchoring, they understand availability. Another phrase, "what you see is all there is," has limited currency, but it has some. That's what success looks like: introducing terms that make it easier for people to see certain phenomena.

So we're trying to figure out whether this course is successful, and you've mentioned, for example, Keith Stanovich's conception of the Cognitive Reflection Test. We could potentially look for a change, obviously not with exactly the same questions, between the beginning and the end: maybe a drop in belief in the paranormal, maybe an increase on cognitive personality measures such as need for cognition.
Maybe people will want to think more at the end of the course. Can you think of another sort of benchmark that might help gauge whether people are thinking more?

This is very ambitious, what you're trying to do, and the way one would want to structure a course to achieve that objective would require a lot of practice. So ideally you'd want people, as an exercise, to say: here's a mistake I made today, or here's a mistake I almost made today. You'd want to make people introspective. But the far easier task is to make people critical of other people. I've said that the aim of the book is really to educate gossip, and that is because I believe that if you train people to be good critics of other people's thinking and decision making, eventually they will turn that on themselves. This is the easiest way of doing it, rather than making people do something that is inherently quite aversive, which is to monitor themselves and criticize themselves as they go along.

My name is Danny. I think about thinking.