I think CSI gets forensic science wrong in a couple of ways. One is, in the television shows, everything happens a lot faster than it happens in reality. The other thing is that when I've watched CSI, you get the impression that the tests that they run, like DNA tests or fingerprint comparisons, are completely objective measurements made by a machine. So you'll stick the sample into the machine, the machine will flash "match," and maybe even put up the picture of the person who matches this fingerprint or this DNA profile. I think what the shows miss is the extent to which a determination that something matches or is similar depends upon a subjective judgment by an expert. In other words, human beings are involved in it. That's part of what makes forensic science interesting to me as a psychologist: how much human judgment and decision making is involved in the production of forensic science evidence. The evidence that's presented to the jury is not the product of some machine only; it's also the product of a human being's analysis of what the results of the instrument mean or what the comparison means. The surprising thing is how often different experts can reach different conclusions when evaluating crime scene evidence. Can you give me an example from fingerprints or DNA? Well, with DNA, the evidence is often very clear cut: it produces a clear profile where everyone would agree it's either a match or not a match with a particular sample. But when the samples are old or degraded or DNA from different people is mixed together, it sometimes becomes very complicated to reach a conclusion about whether a particular person should be included or excluded as a possible source. And so in those areas, we sometimes see experts disagreeing over interpretations.
And that's where as a psychologist I become very interested, because I wonder what are the processes that are determining people's judgment about these things, and whether the judgments of some of the experts might be biased, such as by other things they know about the case. Can you tell me more about that? Yeah, there was one instance that really struck me where experts were disagreeing in our local courtrooms here about the interpretation of a DNA test, with the government's experts saying that this was a DNA match that incriminated a guy who was accused of rape and robbery. And an expert called in by the defense lawyer, a university professor, was saying, well, I'm not convinced that this really is a match. There are discrepancies here that I think are important. I later asked the government's expert, how can you dismiss the discrepancies that the defense expert was pointing to? And she said, oh, I know those aren't important discrepancies. Oh, for God's sake, they found the victim's purse in this defendant's apartment. Which is very striking, because what it tells me is that information about the purse, which supposedly this defendant might have stolen from the victim, was influencing the way in which the DNA expert interpreted the DNA evidence, and was doing so in ways that might not have been clear to the jury. The jury may not have known and may not have understood that the conclusion that there is a DNA match between two samples depends partly not on the DNA evidence but on something about a purse. So I think it's striking. It raises real questions about what experts should be relying on when making these judgments. Throughout this course we've been dealing with erroneous beliefs and claims where, if people believe them, it doesn't really do much harm. But I know you've had a lot of experience throughout your career in forensic science where faulty beliefs and claims can actually cause a great deal of harm.
Can you tell me about that and tell me about some of the claims that are made by forensic scientists? Well, we've had cases where the wrong people have been sent to prison for crimes they didn't commit because forensic science evidence was misinterpreted. And so I think that's a very clear harm. When we look closely at those cases, often they're revealed by additional forensic testing; for example, additional DNA testing reveals that the initial DNA test wasn't correct. It's important to go back and look at those cases and ask what went wrong. How could the system have gotten this case wrong in the first place? And often it looks like there are elements of bias in the initial judgments. In one case I looked at in Texas, somebody was falsely convicted based upon a misinterpreted DNA test, and it looked like the DNA analyst had been in communication with an eyewitness. There were two pieces of evidence against this defendant: a DNA match and an eyewitness identification. And the jury just thought it was an open and shut case. They convicted this guy in less than two hours of deliberation. They just thought that there couldn't be any possible error. But it turned out, when you looked closely at the eyewitness identification, it had been done through a show-up that was extremely suggestive. The defendant was the only person who was shown to the victim, and he was shown in a suggestive way: he was basically driven past her while he was inside a police car. And he was forced to wear a hat that was like the hat the perpetrator had worn, and the victim said, I think he's the guy. Even though he was much bigger than the guy she had described who had committed the crime. So there was a faulty eyewitness identification, which may have influenced the DNA analyst to think she had the right guy and to look at the evidence in a very confirmatory manner. And there were certain things about the DNA evidence that were consistent with this person being a contributor to the DNA.
But if you looked at it in the full context you could see other things that were inconsistent. The analyst credited the findings consistent with her expectations and ignored the ones that were inconsistent, and therefore reached an erroneous conclusion of a DNA match. And so when the case went to court, where the eyewitness was going to have to confirm the identification with the person actually there, by then the eyewitness had already been told that there's a DNA match to the guy. And so any uncertainty the eyewitness might have felt about her identification was probably allayed by knowing about the DNA evidence. So we have two faulty pieces of evidence, each supporting the other. What appears to be an unassailable case for the jury is actually built on a house of cards, and it turns out they got the wrong guy. So I think that's a real harm. He served four years in prison for a crime that he did not commit. Later DNA testing found the actual perpetrator. That's good. So you said it often comes down to human judgment and interpretation. How is it that two people, or a person, can look at some piece of forensic evidence and come to the wrong conclusion? What are the cognitive mechanisms that might be going on? Well, it can only happen when there's some ambiguity in the evidence itself. As I said earlier, in some cases the evidence is just clear cut and no one would disagree. But in other instances there is ambiguity. There could be multiple interpretations, and expert judgment is required. When experts approach a task like that, just like any other human beings, they can be influenced by what they expect to see or, to some extent, by what they desire to see. So people who expect to see something and are highly motivated to see that thing are more likely to see it. They're more likely to interpret an ambiguous stimulus in a manner that's consistent with what they think or want to see. We all do this.
And most of the time our use of expectations to help us interpret stimuli is very helpful, because most of the time our expectations are correct. But sometimes they aren't correct. And so the problem for a forensic expert is how to prevent this process, what's sometimes called observer effects, the tendency to see what one expects or desires to see, from coloring one's interpretation of the evidence in ways that undermine the quality of the evidence that's going to be presented to the jury. And I think the best way to do that is to try to minimize the amount of contextual information that the expert receives. So if the expert approaches the comparison not knowing whether it's supposed to match or not supposed to match, or what the answer is supposed to be, then it's more likely that the expert's judgment will be determined just by the scientific data and won't be colored by the surrounding contextual information that may create what we would think of as a bias. So my sense is that the justice system works best if the scientific experts are basing their conclusions purely on the science and don't allow those conclusions to be influenced by other factors, such as other evidence that might suggest that the person did or did not do it, or the police theories of the case, or their suspicions about the case, and so on. And the same argument is made for the use of blinding procedures in other areas: when instructors grade exams, often they do it without knowing the student's name. I think it's a good practice because it prevents the professor from being influenced by other information about the student that may lead the professor to think that this student is likely to perform well or not to perform well. We would like the instructor's grading of the examination or the paper to be based on what's in the examination or the paper and not any of the surrounding information. The same thing should go for forensic scientists.
Are there any claims that forensic examiners make that are difficult to test or are not supported by evidence? Well, the National Research Council of the U.S. National Academy of Sciences did a report in 2009 on forensic science where they reviewed various areas of forensic science, and they concluded that a lot of the claims that are commonly made by forensic scientists in court have not been adequately validated by scientific research. In particular, they were concerned with claims by some kinds of forensic examiners that they can identify things with certainty: fingerprint examiners saying they can determine that a print found at a crime scene came from a particular finger to the exclusion of all other fingers in the universe, and so on. That would be a very strong claim to prove, and the sense of the scientific community is that there has not been adequate research to justify claims that strong. Some of the other kinds of claims that are problematic are claims of forensic scientists that they can assess the probability that certain propositions are true: I think it's highly probable that the bite mark found on this victim came from this person's teeth. To go from the examination of the similarity of two bite marks to the conclusion that the bite marks were probably made by the same person requires a certain leap of logic, and a lot of academics are questioning whether forensic scientists can make that leap, or whether they're making that leap too readily without adequate scientific documentation that they can do it. That's created a lot of controversy about what forensic scientists are saying in court and what they should be able to say. It's become a hot issue right now, something that's being discussed widely. What about DNA evidence? Is DNA evidence infallible? Well, it's certainly not infallible. DNA evidence is often presented with very impressive statistics that measure the probability of a coincidental match between two DNA profiles.
An expert might say the two DNA profiles we've compared have the same genetic characteristics, and those characteristics would be found in only one person in a billion in the human population. That's very impressive, of course. Those estimates have some scientific basis, but they don't necessarily tell a jury what the jury needs to know. The one-in-a-billion estimate is how rare the profile is. It tells you nothing about the probability that the profiles could match by mistake or by accident. Laboratories do make errors when testing DNA profiles. Sometimes DNA evidence is misinterpreted, as I mentioned to you earlier, and we've seen a number of instances where labs make mistakes, such as cross-contaminating samples. DNA can accidentally be transferred from one sample to another in the crime lab. These are not common events, but they happen sometimes, and they happen much more commonly than one in a billion cases. There's a margin of error in DNA testing, as there is in any other scientific process. Part of the difficulty is we really don't have good estimates of what that margin of error is. We'd like to think it's low. There's some evidence that it's probably not as low as one would hope. Can you tell me about the prosecutor's fallacy? Well, when statistics are presented about the frequency of matching characteristics, lawyers often disagree about what those statistics mean. I first noticed this about 30 years ago, when I was a young professor and I was starting to talk to lawyers about forensic science and forensic statistics. Those were the days before DNA testing was introduced, and there was a lot of serology testing. It would be common for labs to compare blood group evidence and say that two blood samples came from somebody with the same protein and enzyme blood markers, and that those markers would be found in one person in 100. So basically: we found a match, the match is to the defendant, and it's a one-in-100 match.
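The earlier point about error rates can be made concrete with a rough sketch. All the numbers below are invented for illustration; the real lab error rate is, as noted above, not well estimated.

```python
# Invented numbers: how even a small lab error rate swamps a tiny
# random match probability (RMP).
rmp = 1e-9          # reported chance of a coincidental profile match
error_rate = 1e-4   # hypothetical chance of a lab error (sample mix-up,
                    # cross-contamination, misinterpretation)

# A non-source can be falsely "matched" either by coincidence
# or by a lab error:
false_match = rmp + error_rate - rmp * error_rate

print(f"coincidence alone:   {rmp:.1e}")
print(f"including lab error: {false_match:.1e}")
print(f"error dominates by a factor of about {false_match / rmp:,.0f}")
```

On these invented figures, the overall chance of a false match is governed almost entirely by the error rate, which is why the one-in-a-billion statistic alone does not tell a jury what it needs to know.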
And then when I would talk about this, I noticed that different lawyers would have very different interpretations of what that would mean. The prosecutors tended to say, ah, a one-in-100 match, that means there's only one chance in 100 that this defendant is innocent, because, right, he's either the source of the blood or it's a coincidental match. And the coincidental match occurs with a frequency of one in 100, so there's one chance in 100 it's a coincidence, and 99 chances in 100 he's guilty. And that was their interpretation. Now, it turns out that interpretation is fallacious, and we call it the prosecutor's fallacy just because I noticed prosecutors doing it a lot, not because only prosecutors do it. It seems to be very common among news reporters also. But the fallacy is this. Although only one person in 100 would match, in a particular community that might have millions of people, one in 100 could be thousands of people. The defendant is only one of thousands of people who match. The evidence narrows down the possible sources of the evidence, but it doesn't narrow it down to the point where we can say that there's a 99% chance that the defendant is the source. In fact, we really can't tell the probability that the defendant is or is not the source based upon the blood evidence alone. We have to consider the strength of the other evidence, and whether this defendant is more or less likely than anybody else who would match to be the source. And so that's one kind of fallacy people come up with. It involves wrongly thinking that you can determine the probability that the defendant is the source of some particular sample based on the rarity of the characteristics that match the defendant to that sample. You really can't do it. Now, defense lawyers tend to make another error, so it's not just prosecutors who do these illogical things.
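The arithmetic behind the fallacy can be sketched in a few lines. The population size and uniform prior below are invented for illustration, not taken from any real case.

```python
# Invented numbers: why a "1-in-100 match" does not mean
# "99% chance of guilt" (the prosecutor's fallacy).
population = 1_000_000   # hypothetical pool of possible sources
match_freq = 1 / 100     # frequency of the matching blood markers

# Expected number of other people in the pool who would also match:
expected_matches = population * match_freq   # 10,000 people

# A Bayesian view: start from a uniform prior over the pool and
# update on the match evidence.
prior = 1 / population
posterior = prior / (prior + (1 - prior) * match_freq)

print(f"people expected to match: {expected_matches:,.0f}")
print(f"P(source) before the evidence: {prior:.6f}")
print(f"P(source) after a 1-in-100 match: {posterior:.6f}")
# The match multiplies the odds roughly 100-fold, which is real
# evidence, but it is nowhere near a 99% probability of guilt.
```

Under these invented assumptions the defendant is one of about 10,000 matching people, so the match alone leaves the probability that he is the source far below 99%; the rest depends on the other evidence in the case.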
Among defense lawyers, the argument would be: well, my client matches on a characteristic found in one person in a hundred. But one in a hundred means there are thousands and thousands of people in this community who would match, and he's only one of thousands, so the chance he's guilty is one in thousands. And therefore the evidence is practically worthless for determining whether he's guilty. What that fails to recognize is that this forensic evidence drastically narrows the population of people who could be the source of the sample. Ninety-nine percent of all potential people who could be the source are eliminated, without eliminating the defendant, who may already be a suspect. So knowing that the defendant matches a characteristic that's as rare as one percent should greatly increase your confidence that he's the source, in ways that go beyond what the defense lawyers recognize. So the net result is that evidence from forensic science can be very powerful, but it has to be interpreted in light of all the other evidence in the case. People get into fallacious or erroneous thinking when they think that they can draw a conclusion about the ultimate issue of guilt or innocence, or whether the defendant is the source or not the source, from the forensic science alone, in isolation, without considering the entire factual context. Gotcha. It sounds similar. Can you tell me about the Texas Sharpshooter fallacy and how it applies to DNA? Well, the Texas Sharpshooter fallacy is based upon the story of this famous Texan. Back in the old days, when Texans would carry firearms, I mean, I guess they still do, but back in the days when Texans would carry long guns, a particular Texan visited his neighbors a mile away from his own farm and claimed to them that with his long gun he was the most accurate shot of all time.
And to prove this, he picked up his long gun and he fired shots toward his farm a mile away, and he invited his neighbors to come over and see where these shots had landed. He claimed that he had aimed at targets on the side of his barn. And the neighbors came over a little later, and when they got there, they saw that there were targets painted on the side of his barn and there were bullet holes in the center of each of the targets, and they were incredibly impressed that he could hit the targets from over a mile away, and they proclaimed him the best sharpshooter of all time. But the fact is he'd actually cheated a little bit to do this: he had fired at the barn, but he'd painted the targets after the bullets had already hit. We sometimes call this painting the target around the arrow; there's actually a Swiss version of this story, I'm told, involving a William Tell type marksman who was firing arrows. You paint the target around the arrow, or you paint the bull's eye around the bullet hole. And by doing that, it allows you to look like your system is much more accurate than it really is. Now, the question is what does this have to do with forensic science? I've argued in academic writing that forensic scientists sometimes engage in a process that's very much like what the famous Texan did. When we look at DNA analysts, for example, they're comparing a DNA sample from a suspect to a particular evidentiary profile. Sometimes the evidentiary profile is a little ambiguous, hard to interpret. And what tends to happen is that they'll look at the suspect's profile and they'll use that to help them interpret the evidence profile, in a way that causes the interpretation of the evidence profile to be closer to what the defendant's profile is. So in effect they then find a match, and they say the bullet hit the bull's eye: we found the match. And they compute the probability of that occurring by chance.
But the computation is mistaken, because what they don't take into account is that they were able to move the target around. The bull's eye got painted not before but after the bullet had already hit. It's after they knew what the defendant's profile was that they interpreted this other profile, and they did it in a way that caused a match to be more likely. That distorts the statistics in ways that can cause a dramatic underestimation of the probability of a match by coincidence. So we've done some informal experiments in which we have engaged forensic scientists in interpreting DNA profiles, and we've been able to show that in fact they do this. Sometimes the world creates what we call naturalistic experiments, where analysts are required to interpret evidence without knowing what the defendant's profile is. If this process really occurs, we should expect that sometimes they would interpret the evidence in a way that would exclude a suspect, but then, when they see the suspect's profile, they change their interpretation of the evidence. And in fact that happens. And sometimes they actually admit to doing this. When you ask them, why did you interpret the profile in this particular way, it's like, well, I considered the defendant's profile and I think that this part of the signal must be a true signal and this other part must be noise, because the true signal is what matches the defendant. They're admitting what's in effect a circular logic. So this is a long-winded way of explaining a fallacious form of reasoning that can lead to misestimation of the strength of DNA evidence. I think it happens for other forms of forensic science as well. It may well be part of what happened in the Mayfield fingerprint error case. I don't know if you've talked about this, but there was a famous error in fingerprinting that arose out of the Madrid train bombing in 2004.
A terrorist bombed a train in Madrid and a number of people were killed, and investigators at the crime scene found a plastic bag that contained some detonators, and on that plastic bag they found a fingerprint. The fingerprint was searched through large databases, and it was matched, quote unquote, to a man named Brandon Mayfield, who was a lawyer in Portland, Oregon. And how did that match occur? Later they found out it wasn't Mayfield's fingerprint. There was an Algerian suspected terrorist who matched it even better, they later found out. But for a time they were claiming that Mayfield was a match. Even though there were some discrepancies between Mayfield's fingerprint and this print on the bag, the parts that matched, they said, these are the good data, we will count these. The parts that didn't match, they said, oh well, maybe the bag was distorted or maybe there was an overlay. So they credited the data consistent with the hypothesis of a match, and they discarded the data inconsistent with it. As a result, their interpretation of the target changed over time in a way that made it more likely that this wrong person would match. And that's an example of fallacious reasoning by a group of forensic scientists that can lead to a wrong result. The solution to this kind of thing is to use more rigorous procedures. We know from psychological studies and from experience in many areas how to make forensic science more rigorous than it is. The question is whether people in the field are willing to go to the extra effort needed to improve their scientific rigor. I hope they are. Do you think it's important to test claims and opinions when lives and livelihoods are on the line? No, it doesn't matter. I think we should just allow people to say anything they want regardless of whether there's a scientific basis for it. I'm joking, of course. The answer would be yes. Human beings often come to believe things that are not true or not fully warranted.
Tom Gilovich has this famous book, How We Know What Isn't So. We know from research in psychology and from a long history of human error that people often can believe things that are unwarranted or unjustified or not fully supported. Because we all as human beings have this tendency to jump to conclusions and, frankly, to make mistakes, I think it's really important, when lives and fortunes are at stake in these opinions, that we test them out and check them. That's what science is all about. And that's, I think, one of the great advantages of science over other methods of developing knowledge: the commitment to rigorous testing of beliefs. So how do we go about testing these kinds of claims? Well, I suppose it depends on what kind of claim we're talking about. When we're talking about scientific claims, scientists often use the term validation when they're talking about testing of claims, validation meaning to make sure that the claim is valid. And so I think part of what you have to do is look very closely at exactly what the expert is claiming and then think about how one can test that that claim is true. So if an expert is claiming that he or she can distinguish among individuals using a DNA test or a fingerprint test, one way to put their claim to the test would be to submit to them samples that you know come from the same person or from different people and see how accurately they can distinguish those. See how often, if you give them samples that you know came from the same person, they say that they are from different people. Or how often, if you submit samples known to be from different people, they claim they're from the same person. The rates of those kinds of errors will tell you a great deal about how accurate the claim really is. And so I think it's very important that we do those studies. So some claims are about the general ability to discriminate among things. Other claims have to do with the rarity or frequency of certain characteristics.
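The first kind of test described above, submitting known same-source and different-source samples blind and tallying the mistakes, reduces to simple error-rate arithmetic. The counts below are invented for illustration.

```python
# Toy validation study with invented counts: blinded trials on
# samples whose true origin is known.
same_source_trials = 200    # pairs known to come from one person
missed_matches = 4          # called "different" though truly same

diff_source_trials = 300    # pairs known to come from two people
false_matches = 3           # called "same" though truly different

false_negative_rate = missed_matches / same_source_trials
false_positive_rate = false_matches / diff_source_trials

print(f"false negative rate: {false_negative_rate:.1%}")
print(f"false positive rate: {false_positive_rate:.1%}")
# For forensic claims the false positive rate matters most: it is
# the rate at which an innocent person's sample would be "matched."
```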
So if an expert says, I've looked at these two DNA profiles, they're the same, and DNA profiles of this type would be found in one person in a billion, well, how does the expert know that? Obviously the expert hasn't tested a billion people. A certain amount of mathematical extrapolation must have been involved, right? So one needs to examine very carefully how that conclusion was generated. In the case of DNA evidence, these claims are supported by doing research on the frequency in various populations of certain genetic markers or characteristics. The samples they use are not billions of people; they often are hundreds or thousands of people. And to get from a database of a hundred or a thousand people to a claim that one person in a billion matches, assumptions have to be made: that the markers sort independently, that they are statistically independent of one another. Now, are those assumptions true? Well, there's been a lot of debate and discussion of that. There's some support for these assumptions; whether the support is adequate is still debated in some circles. I think the conclusion has been that they're true enough for government work. They're true enough that we allow experts to make them when testifying in very important criminal cases, which doesn't mean that they're proved beyond all possible doubt. You were involved in a case with O.J. Simpson. Can you tell me a little bit about that? Well, that's now a very old case. In 1994 O.J. Simpson, who was a well-known former football star and then a TV pitchman, was accused of murdering his former wife in Los Angeles. There was a great deal of DNA evidence involved in the case. Blood samples everywhere. At the time I was a professor at UC Irvine, where I had done a lot of writing about DNA evidence. I had also, in the role of lawyer, litigated cases on the admissibility of DNA evidence.
I suppose it's not surprising that O.J.'s defense team asked me to join them as one of 12 lawyers in the so-called dream team that was put together to defend him. I served as defense counsel during the criminal trial that began in 1994 and terminated in 1995 with Simpson's acquittal. Most of what I did was analyze and work on the DNA evidence. What the defense team did was look at evidence that appeared to incriminate Mr. Simpson and see if we could generate alternative explanations for that evidence consistent with his innocence, and see if we could support those explanations. I think it's a little known fact that psychology played a big role in Simpson's acquittal, at least in my view. The defense team included a psychologist, me, and we consulted with a number of psychologists. At one point we even approached Daniel Kahneman about being involved in the case, although he didn't ultimately work on it. Jonathan Koehler worked on the case. Gary Wells was consulted on the case. So a number of famous psychologists were consulted. The defense team, in constructing theories of the case, was strongly influenced by a psychological theory called the story model, developed by a psychologist named Reid Hastie and his colleagues at the University of Colorado and Northwestern University. The story model is a theory about how people come to believe and accept theories of a criminal case. What it tells us is the circumstances that make a particular story or theory believable; it involves things like the logical coherence of the elements, how completely a theory explains the evidence at hand, and so on. And I would say that in designing Simpson's defense, the defense team was guided heavily by theoretical notions from the story model. We were trying to tell a compelling story of the case that would explain the evidence and be consistent with Simpson's innocence. And the psychological theory helped us determine what elements that story had to have.
And so that's the little-known story about the role of psychology in the acquittal of O.J. Simpson. That sounds great. But yeah, so the reason Simpson was acquitted, despite what appeared to some to be an overwhelming amount of DNA evidence that incriminated him, was that the defense was able to construct some theories that had to do with accidental contamination and some intentional planting of evidence, and was able to present at least some support for those theories, sufficient support to make those theories plausible in the minds of the jury, so that the jury believed that the prosecution's theory of Simpson's guilt was not the only possible explanation. There was another story, if you will, that was plausible enough that the jury had to take it seriously, and that created the reasonable doubt that led to Simpson's acquittal. This course is about the science of everyday thinking. What advice do you have for people out there who want to think better and do better in their everyday lives? Oh, that's a good question. Boy, I'm not sure I've mastered how to think well myself. I think we all struggle with thinking clearly, with marshalling our thoughts. There are times when I have benefited from trying to be very systematic and decompose problems into elements and think about them carefully piece by piece. But the fact is I rarely make actual decisions that way. I think a lot of our decision-making happens kind of intuitively, through processes that we don't fully understand and really can't analyze. And so the kind of advice that I give people about making better decisions is to be careful about what information you allow yourself to consider. If you're a forensic scientist and you want to avoid being influenced inappropriately by extraneous information, make sure you don't know that information. If you're an instructor and you want to avoid being influenced by how attractive or charming the students are when you grade their exams, grade their exams blindly.
So there are circumstances in which less information sometimes leads to better decisions. Knowing what those circumstances are and then blinding yourself to inappropriate information is maybe one of the best ways to improve your decision-making. My name is Bill. I think about proof. I think about how proof is generated. I think about how people respond to proof. I think about proof that's put forward that isn't really proof.