When I've watched CSI, you get the impression that the tests they run, like DNA tests or fingerprint comparisons, are completely objective measurements made by a machine. You stick the sample into the machine, the machine flashes "match," and maybe even puts up the picture of the person who matches this fingerprint or this DNA profile. I think what the shows miss is the extent to which the determination that something matches or is similar depends on a subjective judgment by an expert. In other words, human beings are involved. And that's part of what makes forensic science interesting to me as a psychologist: how much human judgment and decision making is involved in the production of forensic science evidence. So the evidence that's presented to the jury is not the product of some machine only; it's also the product of a human being's analysis of what the results of the instrument mean or what the comparison means. And the surprising thing is how often different experts can reach different conclusions when evaluating crime scene evidence.

How is it that a person can look at a piece of forensic evidence and come to the wrong conclusion? What are the cognitive mechanisms that might be going on?

Well, it can only happen when there's some ambiguity in the evidence itself. As I said earlier, in some cases the evidence is just clear cut and no one would disagree. But in other instances there is ambiguity: there could be multiple interpretations, and expert judgment is required. When experts approach a task like that, just like any other human beings, they can be influenced by what they expect to see or, to some extent, by what they desire to see. So people who expect to see something and are highly motivated to see that thing are more likely to see it. They're more likely to interpret an ambiguous stimulus in a manner that's consistent with what they think or want to see. We all do this.
Most of the time, our use of expectations to help us interpret stimuli is very helpful, because most of the time our expectations are correct. But sometimes they aren't. The problem for a forensic expert is how to prevent this process, what's sometimes called observer effects, the tendency to see what one expects or desires to see, from coloring one's interpretation of the evidence in ways that undermine the quality of the evidence that's going to be presented to the jury. And I think the best way to do that is to minimize the amount of contextual information that the expert receives. So if the expert approaches the comparison not knowing whether it's supposed to match or not, or what the answer is supposed to be, then it's more likely that the expert's judgment will be determined just by the scientific data and won't be colored by the surrounding contextual information that may create what we would think of as a bias. My sense is that the justice system works best if the scientific experts base their conclusions purely on the science and don't allow those conclusions to be influenced by other factors, such as other evidence that might suggest the person did or did not do it, or the police's theories of the case, or their suspicions about the case, and so on.

The same argument is made for the use of blinding procedures in other areas. When instructors grade exams, they often do it without knowing the student's name, which I think is a good practice because it prevents the professor from being influenced by other information about the student that may lead the professor to think that this student is likely to perform well or poorly. We would like the instructor's grading of the examination or the paper to be based on what's in the examination or the paper, and not on any of the surrounding information. The same thing should go for forensic scientists.
This course is about the science of everyday thinking. What advice do you have for people out there who want to think better and do better in their everyday lives?

Oh, that's a good question. Boy, I'm not sure I've mastered how to think well myself. I think we all struggle with thinking clearly, with marshaling our thoughts. There are times when I have benefited from trying to be very systematic, decomposing problems into elements and thinking about them carefully piece by piece. But the fact is, I rarely make actual decisions that way. I think a lot of our decision making happens intuitively, through processes that we don't fully understand and really can't analyze. So the kind of advice I give people about making better decisions is to be careful about what information you allow yourself to consider. If you're a forensic scientist and you want to avoid being influenced inappropriately by extraneous information, make sure you don't know that information. If you're an instructor and you want to avoid being influenced by how attractive or charming the students are when you grade their exams, grade their exams blindly. So there are circumstances in which less information leads to better decisions. Knowing what those circumstances are, and then blinding yourself to inappropriate information, is maybe one of the best ways to improve your decision making.

My name is Bill. I think about proof. I think about how proof is generated. I think about how people respond to proof. I think about proof that's put forward that isn't really proof.