I'm now ready to bring up our introducer for tonight, who is Coralie Colmez. Some of you may have seen her here before. She actually gave a talk about a book that she co-wrote with tonight's presenter. We do have copies of the book. If you haven't seen it, it's a very interesting and engaging book. And Coralie also has another relationship with tonight's speaker, which I will let her share with you. So without further ado, I invite up Coralie Colmez. Hi, everyone. Thank you all for coming. So I'm going to introduce Leila before her talk. Leila is a researcher in pure mathematics, and she lives in Paris. She's also written a few mystery novels, and she's also my mother. And we wrote this book, Math on Trial, together a few years ago. It was a really interesting process, and actually we've had a lot of interesting experiences since writing the book. And especially, my mother has kept doing research in the same area. So this is looking at cases of court trials where mathematics has been used as evidence, often to convict someone. And especially cases where the mathematics was wrong, so that we think there may have been a wrongful conviction, or at least a conviction for wrong reasons. And my mother has learned about a lot of new cases. In fact, most of the cases you'll hear about in her talk we don't mention in the book, so it's all new stuff. And in particular, there have been a few instances of people actually coming to talk to her about personal stories where someone they're close to is being accused, and they think wrongfully. And one of the cases in particular that you'll hear about is one such case, where my mom was asked for her opinion and her help in what people believe to be a wrongful conviction. So there you go, I'll let her tell you all about it. Hi, so it's very exciting to be here in New York.
And this is the absolute first time that I've talked about this subject in this country, which is my country, contrary to appearances. But since I live and work in Europe, I have dealt with a lot of European cases, and I know a lot about the European justice system. And some of the things that I tend to explain to European audiences are a little bit different here. For instance, everybody in the US knows perfectly well how a jury functions and what juries do, whereas there are countries that don't actually have a jury system. So what I'm going to do today, and this is perhaps to me the most interesting part of what I'm going to do with you today, is I'm going to treat you like a jury. We are going to look at four specific cases. There are many cases in which mathematics is used, and there are many reasons to use mathematics in criminal trials. There can be scientific evidence, there can be DNA, there can be statistical analysis, there can be calculations of the probability of certain events, all kinds of ways. And because there's more and more forensic science used in trials, there's more and more mathematics being used in trials, and therefore the question of its correctness is more and more important. I can't tell you how often this happens or how many wrongful convictions there are due to this. I really don't know. But I can tell you that there are things going on that are truly not right and that absolutely need more investigation. And the goal of our work, Coralie's and mine (by the way, I recommend writing a book with your kid, because it's really fun), was to point out some typical errors as illustrated by certain real cases. These are errors that can come up again and again, and people need to learn to recognize them and be familiar with them, so as to be able to avoid them. But there are always more. I keep discovering other kinds of errors, or else other contexts in which they come up.
So what I would like to do, in order to illustrate how a jury can fall into a mathematical error, is for you to act today as a sort of jury. We're going to cover four cases. So that's you today, okay? There you are, over there on the left. Before we start, I'm just gonna tell you a couple of things that are very important to know about any jury. The first thing is, every single person has prejudices and limitations and opinions and their own point of view and their own beliefs. And that's as it should be, and we should use those as members of a jury. We should base our judgment on what we're seeing, on what we know. But it's very important to know that that also has its limitations, right? To be aware that what we're thinking is not necessarily right. And to be able to strike a balance between our intuition, which is really something based on our experiences, and a certain ability to analyze facts. There's gotta be a balance of those two things. There's nothing wrong with using your prejudices, your experience and your intuitions to a certain extent. We need to do it. What else would we base our opinions on? But you have to always keep in mind how wrong you can be. We can't blame our previous experience, but it can lead us wrong. So I often show these pictures in talks, of all these very, very nice looking young women and men who maybe are familiar to you or maybe not. All of them have been accused and tried and convicted of murders. Terrible murders, some of them, except for the one in the bottom left-hand picture, in which two of them murdered the third. I cannot tell you which one. You may recognize some of the more famous ones; the young man on the bottom right killed his math teacher. I'm gonna warn you right now: I'm gonna talk about cases that are difficult. I'm gonna talk about people who die, people who are unfairly convicted.
Some of these situations are truly tragic and very difficult. I do sometimes joke; it's also a way of dealing with it. I do not ever joke about anything at all gory; my jokes are about statistics. Nevertheless, this is a difficult field to work in. The things that happen are tragic, and sometimes you have to laugh, and it doesn't mean we don't understand that these things are really difficult and tragic, many of them. So the four cases I'm gonna give you are real, but I have adapted them a little bit, just simplifying some of the numbers and some of the data, in order to really emphasize and underline the mathematical issues that I'm gonna get at. For each of the first three cases, I'm going to be asking you to participate and make a decision at a certain point. And in order to do that, if you wanna participate, and I hope everybody does, first start by connecting your cell phone to the MoMath guest Wi-Fi. Once you've done that, at the right moments, I will tell you which website to connect to and how to vote and what to vote on, okay? And I hope that you will all vote and express yourselves. So the first case is something that I think is probably very frequent, very frequent. And in fact, there have been studies showing that it's extremely frequent. Mrs. Miller goes to the hospital for a yearly mammogram, which is a very standard test that most women will probably do on a yearly basis. But Mrs. Miller is in a bit of a special situation, because her mother died of breast cancer, and so she is actually quite worried about having this disease, and she thinks there may be a genetic predisposition. It's something that she's worried about, but up to now everything's been fine. But this year, she gets a positive result on her mammogram. So how prevalent is breast cancer? It's about one woman in a thousand, which you can say is either rare or frequent according to what you're measuring exactly. There's about one woman in a thousand who has this disease.
And so she asks her doctor, she says, is the test reliable? And he says, yeah, this test is really very reliable. It detects 95% of cancers. Yes, yes, but what interests her is, is it possible that the test could give a positive result when she doesn't have cancer? That's what she wants to know. And he says, well, yeah, but the test is even more reliable about not doing that. That does happen, but only in 1% of cases. So she says, well, so I almost certainly have cancer. He's like, well, it detects it at 95%. Now look, go home for the weekend, come back Monday, get an appointment Monday. We're gonna have another test, we're gonna talk about treatment. Okay, she goes home, and she is very upset and very worried because of her family history, and she starts to write her will, to think about dying. And she ends up having a panic attack, a debilitating panic attack, and calls her daughter, who comes and drives her to the emergency room at the local hospital, where she spends a day and a night with some sedation. And morning comes, and the doctor on call comes to see her. He says, I heard your panic attack was due to a positive result on your mammogram. But you know, you shouldn't worry. You probably don't have cancer. She's like, what do you mean, I probably don't have cancer? The doctor said 95% chance. But he says, I've seen hundreds of patients who have had a positive result. In my experience, nine times out of ten, they don't have cancer. Okay, she goes to have another test. She doesn't have cancer. She is very relieved. It's great, she feels much better. So it's a story with a happy ending. However, and I can tell you that this is the bit where European audiences burst out laughing: she receives the bill from the hospital, $22,783. She does not have $22,000. Her insurance does not cover panic attacks.
But she is advised to sue the doctor who gave her the wrong information and caused the panic attack, because after all, if he made a professional error, giving her wrong probabilities, then it's his fault that she had this panic attack. What would make her think he gave her wrong probabilities? Well, that's what the other doctor said. She complains: he said that there were 95 chances in 100, and the other doctor said nine out of ten don't have it, so that would mean only one chance out of ten. If I had had Dr. Portman to start with, I wouldn't have had a panic attack. Dr. Davis says, well, the studies that have determined the reliability of these tests are themselves very reliable. There have been studies done; this is published information. I only quoted well-known, well-documented, published, correct statistical information. That's all I did. I quoted what's known. I didn't make anything up. Dr. Portman was just speaking from whatever experience, but I mean, he's implying that these tests are very inaccurate, Dr. Davis says. I don't know if Dr. Portman should be saying things like that and sort of telling people that the tests are very inaccurate. So here's the data, and this data is correct. And now it's gonna be your turn to be the mediator, because this didn't go to trial, this was dealt with by a mediator: to be that mediator and figure out what has to happen. In order to do that, you wanna go to vote.momath.org with your phone. And you will find a little dial, which you can turn to many numbers. And I want you to choose: you move the dial to number one if you think Dr. Davis is correct, Dr. Davis who told her the data of the 95% and the 1% chance of false positives. Two, if you think Dr. Portman is correct when he said nine times out of 10, people with a positive mammogram don't have cancer. Three, if you think something else is going on, that it's somewhere in between, that they're both wrong, okay?
And I think you click on submit: you choose your value on your cell phone and press submit. And when you've done that, when everybody's pretty much done it, I will be able to show you everybody's results by going here. So when I click get results, all of your results will appear as a bar graph on the screen. Are you getting a dial with numbers? Vote.momath.org. Okay, choose your one, two, or three. There's always somebody who chooses zero or seven, so don't do that. And submit. If you're a mediator or on a jury, you have to do it. In fact, I'm being lenient even by giving you the option of three. A jury or a mediator would be able to choose three, but they would have to fully justify why, which I'm not even asking you to do. Okay, so I'm giving you a task that's a little easier than what you would literally have to do in a real situation. So shall I, are we ready? Should I click on get results? Has everybody voted who wants to vote? Ten, nine. Back to the choices, back to the choices. Shall I go back one and show you what Dr. Davis said and what Dr. Portman said? Let me just remind you what they said. So this is Dr. Portman, he's number two; this is what he says down there at the bottom: I've seen hundreds of patients, and it seemed to me that nine times out of ten, they don't have cancer. That's what Dr. Portman told her. What Dr. Davis told her was these pieces of information about the precision of the test. So number one is this guy, Davis, here, and number two is Dr. Portman, and number three is neither. So number one is the guy who scared her, number two is the guy who reassured her, and number three is: they both have to be wrong, okay? I'm gonna take the plunge now, I'm gonna do this, I'm gonna click on get results. Doesn't look right. There aren't even that many people in this room, right? Not sure what to do with all the tens.
So I'll just pretend they're not there. I am not sure what they're doing there. But I do think that the votes divided between one, two, and three are pretty typical, so this is probably pretty much legitimate. We're pretty evenly divided between Davis and Portman, and a large number of people think they can't both be right, right? Okay, let's see now what is going on. Let us decide who's right. We're gonna decide who's right, because math will do this for us, because math is great. So think about this problem, which is in fact very simple. You just think of 100,000 women, a nice round number. Now, we saw that one in 1,000 women actually has the disease, so we are talking about 100 women who actually have the disease. We saw that the test will detect 95% of them, so we've got 95 true positive results that will come out of these 100,000 mammograms. Now, we've got 99,900 women who simply don't have cancer. But there's gonna be 1% of false positives on these, as Dr. Davis correctly said, except that what that gives us is 999 false positives, which largely overwhelms the mere 95 true positives. So what's really happening here is we're getting 1,094 positive results, and only 95 of them actually have cancer. And that's about 8.7%. So the actual probability that she does have cancer, as opposed to a false positive, is actually less than 1 in 10, which is indeed surprising. Somehow, intuitively, and I'm talking about people who know math, people who like math, people who love math, people who do math all day, as well as normal people who don't know anything, it's very easy for everyone to fall into the same trap and to simply forget how many more women don't have cancer than do. Just to forget that, and to think of the 95% and the 1% as somehow being comparable. This is something that happens a lot. We'll see more like this later.
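The arithmetic just walked through can be written out as a short sketch. The numbers are the ones from the talk (1-in-1,000 prevalence, 95% detection, 1% false positives); the variable names are my own.

```python
# Base-rate computation from the talk: out of 100,000 women screened,
# how many positive mammograms actually correspond to cancer?
population = 100_000

have_cancer = population // 1_000          # 1 in 1,000 -> 100 women
true_positives = have_cancer * 95 // 100   # test detects 95% -> 95
healthy = population - have_cancer         # 99,900 women without cancer
false_positives = healthy // 100           # 1% false positives -> 999

total_positives = true_positives + false_positives   # 1,094 positives in all
p_cancer_given_positive = true_positives / total_positives  # about 0.087
```

So fewer than 1 positive result in 10 is a true positive, which is exactly what Dr. Portman's experience suggested.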
So when people select a jury, well, there are supposed to be 12 people, and they're very different from each other, supposed to be very varied and diverse, the hope being that the experiences and prejudices will cancel out in a way and lead to something that's balanced and fair. But it doesn't work for math, it doesn't work. You can choose all kinds of different people for your jury, from those who love math to those who don't know anything about math, and they're all liable to make the same mistake. Somehow, the way we learn math and the way we deal with math and our psychology about it leads us all to make the same errors. So the whole idea of balancing out a jury is not an efficient way of avoiding certain typical mathematical errors that come up again and again. In this particular case, the correct answer was Dr. Portman; the other answers were just wrong. Frankly, Dr. Portman was simply a very experienced guy, but he had no idea why; he didn't do this reasoning either. And in fact, there have been massive studies among doctors in the US and in Germany on this exact question, and at most 10% of doctors are actually able to reason this out and give a correct answer. Which is quite scary. I don't think I even need to tell you that, okay? So that was my first case. I'm going to move on now to another case. Now, each case is very surprising, and each case is absolutely typical at the same time. The cases that I concentrate on, and there are many such cases, I pick them to study this kind of phenomenon, tend to be cases where there is very little evidence, because the more evidence you have coming in, the more complicated everything is, and the harder it is to tell why a jury, why a person, will decide what they decide, because there are so many elements. So I'm choosing cases where the evidence is really just very little. And believe me, there are many such cases. So: the body of a woman is found by hikers near a forest trail. She has been stabbed.
The knife is found at the scene. This woman is known to go jogging regularly at this hour in this place. The police simply do not find much evidence. She lived with her boyfriend. The forest is near a town, a town with a fairly big population. The police will first tend to investigate the boyfriend. So, does the knife come from the home? Nobody could say. The boyfriend said no, the knife, I've never seen it before. No one could say whether the knife came from their home or did not come from their home. They simply didn't know. And neighbors testified that the couple quarreled, and that they quarreled the night before the murder. So that's evidence against the boyfriend. They investigate the boyfriend, and they find out what kind of guy this is. This is in England, what I'm telling you here. Not that it makes a difference, but I could write pub instead of bar. He loves to spend time at the local pub; he drinks. There have been fights, there have been fisticuffs. He has never been arrested for, never been charged with, any violence. He has never been accused of domestic violence. There have been some fights, and the police have been called. So there's a little history of violence and drinking, but nothing that ever led to anything criminal. He claimed that he was at the bar, which was the pub, during the time of the murder. So that whole evening, like from six o'clock till after midnight, he was there, and all his friends and everybody was like, oh yeah, he was there, I saw him there. But no one would really say he was there nonstop; it was full of people standing around, drinking. Police determined that he could have gone out for an hour and come back. And that would have been enough time, knowing that his girlfriend went jogging at that time. It would have been sufficient time for him, if he took a knife from home, to commit the murder. So he has an alibi, but it's not a fully convincing alibi.
And this isn't much evidence, right? This is really not very much evidence. So what I'm going to ask you to do now, and this is what you would be doing if you were a jury, is to go back to the site and simply enter your personal subjective estimate of the probability of guilt of the boyfriend. And it's totally subjective, so you can enter whatever you feel. There will be people who will say, oh, it's always the boyfriend, I don't know, 70% or something. There will be people who will say, come on, this isn't even evidence. I mean, what? We're going to arrest everybody who drinks in a bar and quarrels? 10%, whatever. This is totally subjective. Just estimate it how you feel it. You should just be aware of one little thing, which is that unless you vote 10, which means practically certain, you're not at beyond a reasonable doubt, and no number up to nine would mean conviction. So we're not at conviction yet; we're just making a preliminary estimate of what you think at this point in the trial. My point is not really to study the values that you enter now, which have a tendency to be distributed between sort of 1 and 7. Rather, I'm then going to add a further piece of evidence, and what interests me is to see how that will modify your estimation. So I want to have this first estimation for purposes of comparison with what comes next. So go ahead and do exactly what you did before. You pick your value. Nine means 90% chance he's guilty; at this stage, probably nobody will think nine. One means zero: like, what? Come on, this is like no proof at all. It's like nothing. You are absolutely free, and people do, to either think that the fact that the boyfriend is often guilty in such cases counts, or that it doesn't count in an individual case. This is truly just subjectively up to you, as it would be to anyone on a jury. So if you are doing that, then I can go back.
And when you're ready, and I hope as many of you are voting as possible, because the more votes, the more of a general picture we get of the kind of values people choose. So please only vote once. I guess you're getting used to this now, and you're all set, and can I click on Get Results? Should I click on Get Results? Remember that we are not in a situation of wrong and right answers at this point. Even if you hit something that you didn't mean to, it doesn't matter. This is just supposed to be representative of what the general population feels about the situation, and there's a wide spread, and that's normal. OK, I'm going to do it. 10, 9, 8, 7, 6, 5, 4, 3, 2, 1. Generally, the assessment of guilt stops at around 7 or 8. OK, some people are a little ferocious-minded here; somebody's ready for him to go to jail. In reality, there is absolutely no way you could convict on the evidence we've seen. That would never happen. However, if a couple of people think, well, we should convict, that's your right. OK, apart from the ferocity that I see over at this end, this is a very standard and reasonable spread. Now, what I'm going to do is introduce you to the last piece of evidence at the trial. There's just one further piece of evidence. And the very important thing about this evidence is that it's very different in nature from everything you've seen before, because it has nothing to do with the accused. It's purely a scientific piece of evidence. In fact, it's numeric. And I will show it to you now. I've got to get back to my slides. OK, we went through that pretty much. This is more or less how people reason; we sort of said this already. So the final witness is an expert who was given the knife to examine in a lab, in the hopes that she would find DNA, fingerprints, something that would identify the murderer. Unfortunately, she did not find these things.
She found something that is much less useful for the purpose of identification. What she found is a palm print on the handle of the knife. Not absolutely, completely clear either. Nevertheless, a palm print is not at all like a fingerprint for identifying people. But there are many different types of palm prints, and this was a type of palm print which doesn't belong to everybody; only a certain number of people have this palm print. So whereas if she had managed to find DNA, we could say, oh, this belongs to 1 in 10 million people, we can't say something like that here. The figure that she gives is that a palm print like this is going to belong to about 1 person in 100. So it's common compared to what, say, a DNA match or a fingerprint would be. So this is the information given by the witness: the palm print on the knife is going to correspond to about 1 person in 100. It corresponds to the accused, without which he wouldn't even be accused. The prosecution lawyer is extremely happy: see, he has it. There, it's him. The defense lawyer is also happy: come on, 1 person in 100 in this whole town? You've got hundreds and hundreds and hundreds of people. Yeah, my client is one of hundreds and hundreds of possible suspects. That's it. That is all you're going to get. Now, you're going to deliberate again, so you're going to go back to the site. And what you're going to do is update and put in what you now think of the probability of guilt of the accused. There are going to be some people who think, oh yeah, there are so many people who have that palm print, there are so many possible culprits, maybe he's less guilty. Most people will tend to think, oh, he has the palm print, so that's not very good for him, so they'll make him more guilty. What interests me is the general change from your initial assessment. That's what I'd like to see. So go and vote now on your second assessment, after the palm print evidence. What does that palm print evidence mean to you?
In terms of the guilt of the accused. You can keep the same if you want, you can go down, you can go up. Yeah? Isn't it relevant how many murders happen in this urban area? I mean, are we talking about Detroit? Or are we talking about, you know? OK, we're definitely not talking about Detroit, because we're talking about a town with a population of 1 million. So we're talking about a fairly big town. You know, that would be a very relevant question if, say, there were serial murders going on. So let's say no, no, nothing like that. Otherwise there might be reason to think that this was committed by another person who committed previous murders, but we're not in that situation. Yes? Are we supposed to vote according to how we think about it, or as if we were actually on the jury? Why would that be any different? That should be the same. On a jury, you have to go with the evidence, even if you think he's guilty; if you don't see the evidence, you can't convict. Ah, OK. Go with the evidence. If those two things don't lead to the same result for you, I would rather that you go where you think the evidence shows, because if you're on a jury, that's kind of an obligation. So, yeah, I think that for most people those two things might coincide, but I would rather you just ask: what do I think the evidence is telling me? Knowing again that 10 doesn't mean absolute certainty, right? 10 could mean 99%, 98%. 10 could mean beyond a reasonable doubt; it doesn't have to mean I am 100% sure, OK? 9, which is 90%, would mean I'm pretty darn sure that he is guilty, but I cannot convict. That's what 9 would mean, and less than that means, well, I still am not that convinced that it's him, OK? Have you voted? In principle, if I click on get results now, we should get a green graph next to the blue graph that will compare the old and new. Yes, so we did. So what happened?
What happened here is that the most innocent-leaning assessments tend to be slightly less, whereas the more guilty-leaning assessments tend to be more, OK? So the general assessment of guilt hasn't changed gigantically, but it has changed. It has shifted somewhat to the right, but not that much, OK? What this graph is not showing you is, for a given person, if you first voted 0.2, where are you now? But what typically happens is that people will increase their initial assessment by 10% to 20%. How many people increased their initial assessment by 10% to 20%? How many people increased their initial assessment by more? OK, so some did, not many, but some. OK, now I'm going to tell you something interesting. Firstly, I want to tell you that this kind of result is extremely typical of studies that have been done, including by me. And I think it's normal that it's typical, because I think there's something in our way of thinking about math that does actually work this way. But what I want to tell you is that that second assessment shouldn't have been an assessment, because it's not an assessment: there's a formula. Given your initial assessment and the information given by the expert witness, you can actually just calculate what your second assessment should now be. What does that mean? That means that we are able to quantify something that seems hard to quantify, which is the weight of the evidence. The expert witness brought evidence; we need to calculate the impact of that on what we thought before, the weight of the evidence. And there is an actual formula that calculates the weight of the evidence. To show you this formula, I'm just going to introduce the notion of conditional probability, which means the probability of something A if some other given thing B is true. And I'm going to write that this way: the probability of A if B.
So here, the probabilistic statements are: the accused is innocent, which is going to be true with a certain probability; the accused is guilty; and this specific type of palm print was found (P.P. stands for palm print). What do we want? We want to calculate the probability that the accused is innocent, given that that palm print was found. That's what we'd like to calculate, or the probability that he's guilty, given that that palm print was found. And prob(guilty) before updating, that's what you said your first time around. So I'm going to look at three different possibilities, 0.2, 0.5, 0.7, but really it's whatever number you first put in. The probability of innocent is one minus that, because guilty plus innocent equals one; he's either one or the other. OK, so I use those three possibilities for innocent. Now, what's the probability of finding that palm print if he is innocent? Well, if he's innocent, it's not his palm print, it's somebody's from out there, and there's one person in 100 who has that type. So the probability of finding that type of palm print if he's innocent is one in 100, and it was given by the expert. What's the probability of finding that palm print if he's guilty? Well, that's one, because if he's guilty, it's his, and he has that type, so we're sure to find it. And those are all the numbers that you need to plug into the formula, which calculates what we want: the probability that he is innocent, given that we did find that type of palm print. So there are the values. You just plug them into that formula there, which is Bayes' theorem, and it simplifies to this, where the number you see in red over here, 0.01, is the one in 100. It is the exact number given to you by the expert playing that role. And prob(guilty) is the number that you first put in: it's your initial probability of guilt, and this is the formula that lets you update it.
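As a sketch, here is that update rule written out, assuming as in the talk that the print type occurs in 1 person in 100 (so P(print | innocent) = 0.01) and that a guilty accused is certain to leave his own print type (P(print | guilty) = 1). The function name is mine, not anything from the talk.

```python
def updated_guilt(prior_guilty, p_print_if_innocent=0.01, p_print_if_guilty=1.0):
    """Bayes' theorem: P(guilty | palm print found), starting from P(guilty)."""
    prior_innocent = 1.0 - prior_guilty
    # P(innocent | print) = P(print|innocent) P(innocent) /
    #   [ P(print|innocent) P(innocent) + P(print|guilty) P(guilty) ]
    p_innocent_given_print = (p_print_if_innocent * prior_innocent) / (
        p_print_if_innocent * prior_innocent + p_print_if_guilty * prior_guilty
    )
    return 1.0 - p_innocent_given_print

# The three starting points used in the talk:
for prior in (0.2, 0.5, 0.7):
    print(f"prior {prior} -> posterior guilt {updated_guilt(prior):.3f}")
```

These reproduce the roughly 96%, 99% and 99.5% figures discussed in the talk.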
So we actually have a formula that, from whatever first value you chose, gives you the second value, and we're gonna look at what those results are when you do it. If you thought that he was only 0.2 guilty at the start, you find that the formula gives you 0.039 for innocent given the palm print, which means that the probability that he's guilty given the palm print is about 96%. If you started with an estimation of guilt of 0.5, this palm print information brings guilt to 99%. And if you started with 0.7, you're now up to 99.5%. So certainly the second two, and possibly even all three, you could say are within the domain of beyond reasonable doubt and are legitimate actual reasons to convict. And these numbers measure the real meaning of what it means that this guy has the palm print. Sure, there are hundreds of people who have the palm print, but do they know the victim? Are they close to the victim? Do they know that she goes jogging? This is the weight of evidence. And since nobody does this calculation, because nobody knows how to do it, and intuitively nobody, nobody, measures the weight of evidence correctly, and that includes me, I mean mathematicians, no one does it: this guy walked. And as far as I'm concerned, this may extremely, extremely well have been a case of wrongful acquittal. And I think there are many, many such cases. I'm so glad that I'm in the role of analyzing things that have happened, and not in the actual role of deciding whether people go to jail or not. All right, I'm on to my third case now. My third case is very, very interesting, and it's something that is just happening a lot nowadays. It is about what happens when you simply don't have evidence against an accused. You do not arrest your accused because he's the boyfriend or this or that. No, you find him in a database. That's your starting point. So what do you do then?
So this was one of the most important trials that took place using what's called a cold hit, which is when you just search for a DNA match in a database. The murder itself was from long ago, from before there was any DNA analysis. There was a landlady with two young nurses living above her; one of the nurses was at work but the other was home, and the landlady heard something that sounded very wrong, banging and thumping. She goes upstairs, and the door is flung open by a man she described as white, fat and bearded, who growls at her, "Go away, we're making love," and then runs off. She sees him out the window running away, and she goes upstairs and knocks on the door, thinking something's wrong, this is not right. Nobody opens the door. She calls the police, the police come, and this nurse, Diana Sylvester, has been murdered brutally under her Christmas tree. The police simply could not find a suspect. This could have been a random person who followed her home; she didn't seem to know anybody who could be suspected and who corresponded to the landlady's description. They simply didn't find any convincing suspect. So they never solved the case, and it became a cold case, and it went into the files of cold cases and sat there for decades, which is what happens. However, little by little, as DNA began to be used more and more, a database was built: everybody who gets arrested for a crime in California, their DNA goes into a database, and so they constituted a big database. I've rounded the numbers a little, but basically there are half a million people in it. And somebody had the idea of going back to all those cold cases, opening the packages, taking whatever's on the clothing, blood stains, testing it for DNA, which is a ton of work, and putting the DNA into the database to see if you can find anything.
And so somebody was charged with doing that work, and there was a small sperm sample found on her clothing, and the police biologist found some DNA. Now I have to tell you how DNA analysis works. What the people who do DNA analysis for criminal purposes have done is choose between 13 and 17 genes, depending on which country you're in and how modern your kit is, which have two very special properties. One is that they have picked genes, and this is quite interesting, that have no known physical effect. If they wanted to examine the DNA of a criminal to see if he has blue eyes, they could, but it's not done. Perhaps they think it would cast too much suspicion on blue-eyed people; on the other hand, not doing it could cast suspicion wrongly on brown-eyed people, so I don't know, but anyway, it's not done. These genes do not correspond to any known physical trait. And the other property is that vast, vast studies and testing done by the FBI on many millions of people have shown that these genes are extremely independent from each other. They can occur in any combination. They're not like blue-eye and blond-hair genes, which occur together more frequently than chance; they seem totally independent of each other. A gene contains two alleles: one comes from your mother, one from your father. So here's what it looks like when a biologist does a DNA analysis: you can see four pairs in the first row, then five, then four, so this is a 13-gene analysis. Under each allele there's a number, which is a specific feature of that allele. If you test another person and compare, that person can be this person only if every single allele is identical, has the same number underneath it. The height of the peaks means nothing; that just tells you how much DNA you're measuring. It's the number underneath which identifies. Here in this pair there's only one peak, but that's because the pair is twice the same allele.
So this is what I call a very good quality sample of DNA: you've got your full 13 pairs fully visible. If you get a sample from somebody accused of being this person, and there's one allele that's different, it's not your person. A really good sample will identify very, very specifically. Now, every single allele occurs in the general population with a slightly different frequency, and these are very well studied; the FBI has all these numbers. However, roughly speaking, I'm going to say each allele occurs in about one person out of four. It's close enough, and it simplifies the calculation. So what does that mean? How many people share one allele? How many share two? If I name two given alleles, like in that graph, you're going to get about one person out of 16 who has those two. And if you take four, that is two genes the same, you're going to get about one person out of 256. I didn't do the multiplication here, but if you actually have all 13 genes in common, you're not talking about one in a million or one in 20 million, you're talking about one in trillions, way more than the population of the world. So the identification is absolutely certain if two people have 13 pairs in common. However, that's not what happened in this case. The DNA sample from the crime scene was old and very degraded, and the analyst could not get 13 pairs out of it; she could only get five. What that means is that about one person out of 16 to the fifth, so about one person in a million, will have the same five genes as the murderer. So she did what she could, and she ran these five genes through the database, and there was a match. But the trouble is that five genes is not a fully matching sample like 13 genes. So they found the guy and went to arrest him: John P., a 72-year-old white man from San Francisco.
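The one-in-a-million figure for a five-gene match follows directly from the simplified one-in-four allele frequency. A sketch of that arithmetic:

```python
# Simplified match-probability arithmetic from the talk: each allele
# occurs in ~1 person in 4, and a gene is a pair of alleles, so each
# gene matches by chance in ~1 person in 16. Genes are treated as
# independent, so the per-gene probabilities multiply.
ALLELE_FREQ = 1 / 4

def match_probability(num_genes):
    # (1/4)^2 = 1/16 per gene, raised to the number of genes compared
    return (ALLELE_FREQ ** 2) ** num_genes

print(1 / match_probability(5))   # ~1.05 million: the degraded five-gene profile
print(1 / match_probability(13))  # ~4.5e15: far beyond the world's population
```

This is why a full 13-gene match is effectively certain identification, while a five-gene match is shared by hundreds of people in a large country.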
His DNA is in the database because he was arrested for rape. He does not have any record of murder. He says he has nothing to do with this murder, has no knowledge of it, denies it absolutely. There's a trial, and the way in which he was accused, the fact that he has no link to the crime and is accused purely on the basis of a DNA search in a database, plays an important role. The prosecution says: if he's not guilty, there's only one chance in a million that he would have these genes. That is a very, very tiny chance that he's innocent, so you the jury will now decide. The defense argument says: you're going to find one person in a million with these five genes, and then you go look in half a million people. If something happens once in a million and you look in a million people, you're pretty likely to find it once, right? So if you look in half a million people, you've got one chance out of two of finding it. It's 50-50 that you will find someone. So how can you then say that he's guilty? You can find anything if you look at enough people. These two numbers are very, very different: one chance in a million, or one chance in two? This is not the same. The one chance in a million comes from the rarity of these five genes in the population. The one chance in two comes from the fact that you may think one in a million is rare, but if you look in a million people, it becomes not that surprising to find it. That's where the two reasonings come from. So now go back to the site and decide, if you will, which argument you think is more convincing, or neither. [Audience:] How much time was there between the crime and the trial? Was the case roughly 30 years cold? Yes, about 34 years; we're talking 30-odd years after the crime, 34 or 35. The trial was in 2006, I think, maybe 2007.
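The defense's "one chance in two" can be checked numerically, using the talk's rounded figures of a one-in-a-million profile and a database of half a million people. This sketch assumes, as the defense does, that everyone in the database is unrelated to the crime:

```python
# Expected number of coincidental five-gene matches in a database search,
# under the defense's framing: profile frequency 1 in a million, and a
# database of 500,000 people none of whom is the murderer.
PROFILE_FREQ = 1e-6
DATABASE_SIZE = 500_000

expected_matches = PROFILE_FREQ * DATABASE_SIZE
print(expected_matches)            # 0.5: the defense's "one chance in two"

# More precisely, the probability of at least one purely random match:
p_at_least_one = 1 - (1 - PROFILE_FREQ) ** DATABASE_SIZE
print(round(p_at_least_one, 3))    # about 0.39
```

Strictly, "one chance in two" is the expected number of random matches; the probability of finding at least one is a bit lower, around 39%, but the defense's order of magnitude is right.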
Remember, you are not judging now whether he's going to prison or how guilty he is; you're judging just: is the prosecution argument convincing, or the defense argument, or something else, okay? Without asking yourself whether he's going to prison or not; I'm not asking you to make that judgment right now. At this point we're literally judging the mathematical arguments that were presented, and not the guilt of the accused, okay? I will clear this, and I will click on "get results" if you are ready. Can I click on "get results"? So what did we say? We said one for prosecution, two for defense, three for neither. Many, many nines. What does 133 even mean? There aren't even 133 people in this room; I'm not sure what that's doing there. Nevertheless, I'm interested in the distribution between one, two and three, and I think that is quite telling, don't you? What does it mean to you? It means we have no clue. Well, now I'll help you get a clue. Thank goodness we don't have to talk about reasonable doubt here, because I didn't ask you that question; if we did, and you'll see why in a minute, we would be troubled. But we aren't going to ask ourselves that question, because we're not on the jury for real, thank goodness. Ta-da, okay. The prosecution's argument is wrong, just wrong. Why? It gives you a number which is correct, one in a million. But what does that number mean? It means the probability that John P. shares five genes with the criminal if he is innocent. Because if he's innocent, if he isn't the criminal, then it's random chance that he has these same genes, and that's one person in a million. But that is not how the prosecution is interpreting the number one in a million. They are interpreting the number as the opposite. So I showed you these conditional, "if" probabilities.
The probability that he shares five genes with the criminal if he's innocent is not the same as the probability that he is innocent if he shares five genes with the criminal. They're just two totally different numbers. They're related by a formula, in fact by the Bayes formula that I showed you before, but you need some numbers that we don't know to complete it. As it happens, they're just not equal, and so the prosecution is telling you a number that's correct while giving it the wrong meaning, so their argument is plain wrong. But the defense argument is wrong too, totally wrong. Why? Insofar as they say you've got one chance in a million of finding this profile, and you look in half a million people, so you've got one chance in two of finding it, they're right. But what if the person you find is the guilty one? What makes them say he's got one chance in two of being innocent? If you look at 500,000 people and you find him, it may be a random match. But what if it's not a random match? What if he's guilty? What's missing in this analysis, from both sides, is: what is the probability that the murderer, that person who really existed and murdered Diana Sylvester, is in the database? Without that, or some kind of estimation of it, neither calculation can possibly be right, because that's a key point. There's another error going on in this argument: if you take a random sample of people in the country, you'll find a frequency of one in a million, but this database is not a random sample, and you can't really pretend it is. It's in California, with California's specific racial mixture and age distribution; it's not representative of the whole country. Plus, it's totally skewed between men and women, as you can imagine for a criminal database. There are plenty of women, I'm not saying there aren't, but there's a great majority of men. It's skewed in terms of age. It's skewed in every way.
It is just not representative of the country. So that's another error. So how can you reason? The right answer, if it were me, would have been number three. So I'm going to make a reasoning which is lax, and which I would never present in a court of law, but which, if I had complete access to all the statistics I need, I could do properly in a court of law. You've got approximately 330 people in the country who will have these five genes. John P. is one of them. The criminal, the real murderer, is one of them. They may be the same person, they may not; that's what we have to consider. Given that we've got one of them in California, what's the probability that John P. is a different one? And here is where we suddenly realize that it's not true that we have no evidence against John P. We do. We have major pieces of evidence, and the first one is: look how old he is. He's old enough that he could have done it. 90% of the people in the database are under 30, in fact maybe even under 25; at the time of the crime most of them weren't even born, so they shouldn't really even be counted in that database. What if the single match had been to a 30-year-old? We'd have known he wasn't the criminal, that's all. This is evidence. And again, as in the other case, we have to know how to assess the weight of evidence. Then there are other factors, because there was a witness who said that the true murderer was white, and he was in California. They were able to ascertain that John P. had always lived in California. He was a criminal; how many people are criminals? John P. was a criminal, and the murderer was a criminal. So among those 330 absolutely random people all over the country, who can be babies, of any race, any age, who can be women, anything, you are asking: what's the probability of having not one but two people with all of these traits in common? And you realize that these are not so few traits, especially the age; that's very limiting.
And so you can calculate the probability of having two people sharing those traits, with these proportions that I've given that are roughly true. Of course, if I were to do it in a court of law, I would use very specific statistics. But when I do this calculation, what I come up with is a 2.3% chance that there could be two people with those five genes and all those traits in common. That's the calculation I've made. And that means there's a 97.7% chance that you don't have two people like that, but only one, in which case he is the murderer. [Audience:] You're assuming independence. I am assuming independence, but let me say again that I would do a much, much better job justifying independence and so forth if I were doing it in a court of law; here I've done something that's rather rough estimating. That said, I think they're close to independent. It's a very good question, though; perhaps they are not totally independent. I don't want to be mean, or overly feminist, but is it independent to be male and to be a criminal? The question is totally legitimate, and it would all have to be much more completely justified if it were going to be done in a court of law. But nothing of the kind was done in the actual court, and unfortunately we are nowhere near a place where this kind of thing would even be done in a court of law. Okay, so I'm going to get on quickly now to my last case. [Audience:] What was your conclusion? Was he guilty? He was convicted. Was he guilty? I cannot tell you. My conclusion is that I believe he was guilty; 97.7% is my estimate, but it's a rough estimate. [Audience:] But for the age, don't you also have to count the people who would have matched except that they died in the meantime?
Yes, yes; this is an assessment about as rough as the ones you gave earlier, but I think it could be done very, very well if it were necessary, and if I were assisting in a court of law I would do it properly and count that. He was convicted, and he appealed, and because he was very ill, he died in prison before his appeal. So we'll never know if he was guilty or not. One never knows. Okay, I really do have to move on, because I very much want to do this last case, which I care about very deeply. In a sense I'm involved personally in this situation, because people have actually come to me for help; I never looked into it until they did. I'll tell you about it as I go on: what I'm up against, what's happening, what's going on, especially in France. I believe the situation concerning shaken baby syndrome is relatively similar in different countries, but in France there's a particularly rigid attitude, which I'm going to tell you about. So here is the basic situation. A child is brought to the hospital with a head injury, and the caretaker, which can be a parent or a nanny or the nursery school, says the child fell. And it's extremely important for the doctors to determine whether this could be an inflicted injury, whether this could be a case of child abuse. One cannot overemphasize the importance of this question, both ways: if the child is in danger, it's an emergency to do something about it, but if the child has simply fallen, it's also an emergency if you're going to start taking it away from its parents and breaking up the family. It's hard to overstate the importance of this question. And until I started to investigate, because I was asked to, I didn't realize how prevalent this problem is, in many courts and many trials that are going on.
So, I'm going to go back almost 30 years, to the study done by Chadwick. He studied a rather large area, San Diego County, so we're really talking over three million people. He studied, for almost five years, 317 children who were brought in for head injuries due to a fall, and he classified them into these categories: they fell from high; they fell from a medium height, which is about like this, a wall, say; they fell from rather low, a piece of furniture, off their bike, some of them out of their pram; or the parent slipped and fell holding the child. There were 34 children excluded from the study simply because nobody had written down where they fell from. And he found that there were very few fatal falls from height: only one fatal fall out of 118 children who fell from high, and when I say more than 10 feet, some of them fell from 30 feet. Children can be really elastic, okay? They can fall out of a third-floor window; I'm not saying to let them, and to me the whole issue of children being hurt and child abuse is something really terrible. But thank goodness children are very resilient, and so only one died, and he had fallen from quite high. However, out of the 100 who fell less than five feet, seven died. This is obviously very, very striking. You cannot just say, okay, let's move on; you've got to stop and ask what's going on, and that is what Dr. Chadwick did. He wrote: if histories of short falls are correct, the conclusion would be reached that the risk of death is eight times greater for children who fall one to four feet (I don't know why five feet became four) than for children who fall 10 to 45 feet; since this conclusion appears absurd, it is necessary to seek another explanation. And the obvious explanation, which he goes for, is that they were not short falls.
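The "eight times" figure Chadwick called absurd comes straight from dividing his own two fatality rates:

```python
# The comparison from Chadwick's data, taking the reported histories at
# face value: 7 deaths among 100 short falls vs 1 death among 118 falls
# from more than 10 feet.
short_fall_rate = 7 / 100
long_fall_rate = 1 / 118

ratio = short_fall_rate / long_fall_rate
print(round(ratio, 1))   # ~8.3: "eight times greater", IF the histories are true
```

The whole question, as the talk goes on to show, is whether the right response to this ratio is to reject the histories (Chadwick's move) or to reject the comparison itself.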
The parents, well, not always parents, the caretakers, are lying. So he describes two situations. The first situation is easy: if you receive a child in this situation, you give them a full-body X-ray and examine them completely for other injuries, and if you find multiple injuries in various stages of healing, you can strongly suspect child abuse. But then there's the situation of a single discrepant injury, which means that there's a single injury, always a head injury, otherwise there's no real danger to life, but it is discrepant: there's a discrepancy between the story told, the child fell off their bike or whatever, or for much younger children, out of the crib, and what the clinician can state, with a high level of certainty, could possibly have been produced by that event. So there's a discrepancy between what the caretaker is telling and what the clinician thinks is possible. So he analyzed the seven fatal short falls. Here are the symptoms, especially three of them. Subdural hematoma is bleeding around the brain; it's not bleeding you can see from outside, you can only find it by internal examination. Cerebral edema is swelling of the brain, and retinal hemorrhage is bleeding in the retina, inside the eye. None of these things can be seen without a hospital examination. And pretty much all of these symptoms, except for skeletal fracture, occurred in all of the children. Then he looked for other things: are there other injuries? Because that is really the key point. So he made a table: if there are two impact sites on the head and the kid is supposed to have fallen once, that's bad; bruises, old fractures, and then no associated injury. He made a list of what was going on in this table. He then went and studied a bunch of other studies; in fact, he wrote two important articles.
In order to see whether a kid, even a baby, can fall from so low and die, he studied other studies. Here he quotes a bunch of them: 100 children falling from less than 10 feet, no fatalities; some studies even looked at falls from 15 feet, with some injuries but no fatalities. He studied a California database covering an area of 2.5 million children, looking at the hospitals in the area over a period of five years, and there were 13 fatalities from falls, but many of them were from high, some he excluded, and some were clearly cases of abuse, and he ended up saying there are six here which could be from short falls. That's an average of 0.48 short-fall fatalities per million children. It is extremely rare for a child to take a fall like this and die; it's not even one in a million. So when you see it, you have to be extremely worried. Just as a side remark: you might think it's very rare for little babies to fall from three or four feet. Sure, toddlers do it all the time, but little babies? Well, a British study showed that apparently half of all little babies take such a fall in the first year of life. Parents can slip and fall with the kid in their arms. I was a little surprised myself, but the study showed it happens to 50% of babies, who are of course the most fragile. So he published these two articles, which were absolutely seminal and have been studied and taught in medical schools ever since. And this is the observation that shocked him so much: when children incur fatal injuries in falls of less than four feet, the history is incorrect; the best explanation of the finding is that for the seven children who died following short falls, the history was falsified.
To the point that the symptoms I showed you before, especially when they occur together, mean abuse: abuse is suspected in the presence of any one or two of these symptoms, but when you see all three, it's called the triad, and it has a name: shaken baby syndrome. The abuse isn't necessarily hitting; it can come from shaking. And it has another name: some people say, well, we don't call it that, we call it abusive head trauma, but it's the same idea. The idea is that you are literally looking at symptoms and naming those symptoms with the name of their cause. So it's awfully hard, even psychologically, to say this baby has shaken baby syndrome but he wasn't a shaken baby. It's a little bit strange, but the only reason this was done was the absolute certainty that abuse is the cause of this triad of symptoms. This is something that doctors absolutely all learn, and there are some doctors, I would say very few, who have protested and said no, this can happen without abuse; I'll mention one of them later. There's a doctor in England who is extremely vocal. She said: I used to be one of the ones who most believed this, and I have changed my mind, because I have seen cases in which there is zero support for any theory of abuse. She was struck off the medical register, actually prevented from being a doctor; then she appealed and was reinstated, but she is very much blackballed in England now. She's still very vocal about this.
There was a case in the US of a man who was exactly in this situation: his little baby son, he slipped and fell in the kitchen on the tiles, the child died, he was accused and put on trial, and he was eventually acquitted. What they did is they reenacted the exact circumstances, and a doctor acting as an expert witness said this could have led to a fatality. Now, I live and work in France, and I interviewed some French doctors who work on exactly this, and I got this answer, which I find very striking: oh, he was acquitted because he paid a lot of money for defense experts. In France there's no such thing; the judge will name an independent expert, and there's no defense paying their own experts on the other side, it doesn't happen. But, they said, he was definitely guilty: the child had the triad, so he was definitely guilty. So France is particularly rigid on this, and the people who came to see me were in France, and there was a real problem with the stories they were telling me. Anyway, what I want to show you is this. This is what Chadwick wrote: the risk of death is eight times greater for children who fall from here than from here; that is ridiculous, that can't be true. And there's another statement: falling from over 10 feet is something like 20,000 times more dangerous than falling from less than five feet. Do they sound contradictory? They are contradictory as stated, eight times more dangerous versus 20,000 times more dangerous, but forget that for a moment and just look at the numbers. So here are some statistics about falls, and they agree with Chadwick: Chadwick gave a fatality rate of 0.48 per million children, which comes to about one fatality in two million falls. Look at these statistics for children who fall from low heights: 99.996% are essentially uninjured.
But every now and then, and it's rare, for one child in 25,000 the parents will see that something could be wrong, sometimes with the child's eyes, and they go to the hospital. And I should specify that I'm not talking about a broken wrist or something; it's head injuries I'm talking about. This is completely consonant with Chadwick's observation of 100 children brought to hospital in the San Diego area over a period of five years. And a very, very tiny percentage of those falls are fatal: one out of two million falls, which is consonant with the rarity he found. Whereas children who fall from greater than 10 feet, and that can be 20 or 30 feet, are all brought to the hospital, because who's not going to bring their kid to the hospital, right? 1% of these falls are fatal, because thank God children are very elastic. But 1% is much, much bigger than 0.00005%. It's 20,000 times bigger. So this justifies saying that long falls are 20,000 times more dangerous than short falls, and this is nothing but statistics, so the statement is correct. Now, let's look at Chadwick's own data. 3.3 million people in San Diego County, so we're talking about 600,000 kids. We saw before that apparently 50% of babies take such a fall each year. As for toddlers, they fall every day, but I'm talking about falls where they hit their head; that's rarer, so I'm going to count it as once a year. And for older kids it's rarer still, but they can fall off their bike; it does happen. So I added up, roughly, the number of falls for kids of each age, and we get about half a million short falls per year among the children of San Diego County. We saw that one child in 25,000 will get to the hospital, so that's 20 children a year, which is 100 kids in five years, and this is very much what Chadwick observed.
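The back-of-envelope arithmetic above can be rerun in a few lines, using the talk's rounded inputs (half a million short falls a year, one hospital case per 25,000 falls, one fatality per two million falls, and a 1% fatality rate for long falls):

```python
# Re-running the San Diego County estimates from the talk.
short_falls_per_year = 500_000   # rough total of head-hitting short falls
hospital_rate = 1 / 25_000       # short falls serious enough for the hospital
fatal_rate = 1 / 2_000_000       # fatalities per short fall
long_fall_fatal_rate = 0.01      # ~1% of falls from over 10 feet are fatal

hospital_per_5yr = short_falls_per_year * hospital_rate * 5
expected_fatal_5yr = short_falls_per_year * fatal_rate * 5
danger_ratio = long_fall_fatal_rate / fatal_rate

print(hospital_per_5yr)     # ~100, matching Chadwick's ~100 hospital cases
print(expected_fatal_5yr)   # ~1.25: one or two short-fall deaths are expectable
print(danger_ratio)         # ~20,000: long falls vs short falls
```

So with Chadwick's own rates, one or two accidental short-fall fatalities in five years are exactly what you would expect to see in a county that size.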
And we saw that one in two million short falls is fatal, which he agrees with, and which means we can expect one or two short-fall fatalities in San Diego County over five years. This is expectable. And remember what he saw? He saw seven fatalities, but if you remember, two of them had no associated injuries. So for five of them, I will not dispute that he had every reason to suspect child abuse, but what about the other two? Do those parents have to be accused of child abuse, does there have to be a trial, do they have to be separated from their children? His own data is not showing us that short falls are eight times more dangerous; that is so wrong it couldn't be wronger. His own data is showing us that it's 20,000 times more dangerous to fall from high. His own data. So what he wrote is just wildly wrong, and it is learned and studied and taught and used all over the place. And while there's a tiny bit of leeway in this country to think that perhaps a short fall can cause a fatality, in France there's basically none: if your child has the triad, you're an abuser. Done. So I'll just finish; I'm pretty much done now. I'd like to tell you that there's one American doctor who also did a vast study. He worked in Cook County, where Chicago is, which has even more people than San Diego County, a population of five million, and he studied 400,000 children, but only children under five, who are actually the most fragile and liable to be injured, over four years. So we're talking roughly 2.5 million falls, one could say more, but that's about the rate of falls where you hit your head. And he encountered 18 fatalities. Now, this is enormous compared to what Chadwick observed, so much that you could be a bit surprised. So what's going on here? Why did he find so many?
Well, two of them were falls observed happening in hospitals, and seven others, or maybe seven including those two, involved something else. What Chadwick says is that parents who have abused their children bring them to the hospital and say it was a short fall, because they think a short fall can be fatal, which we doctors know it can't, so we know they're lying; and they don't say it was a long fall, because that happens outside, where a passer-by could have seen it, whereas if you say it happened inside, nobody can know. That's what Chadwick says: parents think short falls can be fatal. Hall says the opposite: parents and even hospital workers do not think a short fall can be fatal, so they tend not to bring the child for treatment until it's too late, and many of these fatalities came about unnecessarily because nobody thought it was dangerous until it was too late. Without that alone, there would already be fewer, okay, and then there are a couple of other reasons why we could see a higher rate than in San Diego. This might sound funny, but some of the fatalities were due to parents walking and slipping on ice and falling with a baby in their arms, which is obviously something that will happen in Chicago and not in San Diego County, and it seems it can be a serious danger. And there's also a simple question of architecture: differences in architecture can make some cities more dangerous than others. Okay, so if you remove the cases that were due to delays in treatment and so forth, you get a more reasonable number. And Hall does not exclude that some of those cases may have been child abuse, but not all, certainly not the ones that were observed in the hospital, okay.
So Chadwick knew about the Hall study, and he wrote about it in his article. He said, oh well, we're not going to use that study, because the fall histories were not validated. And they had a very pointed exchange, all of it published as letters in medical journals, in which Hall wrote that the cases were investigated more thoroughly than in basically any other study that has ever been done. Every story was investigated by the police and the medical examiner. Each child had a full-body X-ray to make sure there were no other injuries, because in the 18 cases there were no other injuries; those with multiple injuries were counted as abuse. The medical examiner was not only involved in every investigation but was also an author of the study. Basically, he says it is doubtful that a more complete validation could be performed, and that our study remains the largest study of deaths from falls to date. And there's this key sentence in the middle, which I think is really the key sentence: it was the practice of the medical examiner's office to rule out abuse, not rule in abuse, in all suspicious cases. So take a suspicious case: a single injury, no other injuries; otherwise there isn't even a suspicion. You may think the parents' story is rare and unusual and weird, but there is zero reason to suspect abuse: no history of abuse, no sign of abuse, nothing. What do you do? Do you rule it in, like Chadwick, or rule it out, like them? This is really the question facing many, many parents who are in court today on this kind of accusation, and I think it's a problem worth publicizing and exploring more deeply. So I haven't asked you to participate in this one.

So we have time for two questions. If you can raise your hand, I'll bring the microphone to you. There's one right behind.

Where do the statistics stand in the controversy over whether shaken baby syndrome is real or something manufactured by investigators?
Oh, I think there's zero doubt that shaken baby syndrome is real. It absolutely happens. The question is, is every child who presents with those symptoms a case of it? I'm absolutely not saying that shaken baby syndrome doesn't exist. It does, it does. And I think it's known and absolutely clear that it's responsible for the majority of babies presenting with those symptoms. The question is really: can these symptoms happen from a short fall? Is there such a thing as a child with these symptoms and innocent parents? And I'm just astounded at the refusal of the doctors I've spoken to to admit that, that not every parent is lying. I don't know; I'm a layman, I'm not a doctor, thank goodness. But in the Aspelin case, the accused, who was eventually acquitted, slipped on something on his kitchen floor and his little baby fell on the tiles, and a baby is small and fragile. And to me, the idea that in one in some two million falls like that a child might die doesn't seem like, oh, that's impossible. But that's what doctors are saying: that's impossible. At least in France they're saying it's totally impossible; in America there's a little more leeway. He was acquitted, but many people are convicted. So no, it exists, and there's no question it's responsible for the majority of the times those symptoms are seen. I think there's no question about that.

Hello. In the talk, you mentioned a number of cases where the intuition of the jury was inconsistent with what you'd expect from the math. Are people pushing for probability experts to appear in court, like you'd have experts in other fields?

So the answer is, this is so interesting. Yes, there are people who are pushing and people who are resisting, especially in England, where this kind of thing is going on very intensely. Two interesting things happen. One is, you will get a statistical expert to give a proper analysis of what's going on with this DNA frequency or whatever.
But you'll also get the biologist who did the DNA analysis saying something different, and you wouldn't believe some of the things biologists say about statistics. I don't even know whether these people are wrong or cheating. I don't even know. But you'll get a statistician who will say, no, no, no, that's not true. And then the judge has to pick between them, and the judge will go: he's a biologist, he knows about DNA; you're just a mathematician, what do you know? No, seriously. And the other thing that happens, and there were two recent cases in England in the last five years where this happened, is that a statistician will be called by a lawyer who knows this is needed, and he will do a good analysis, and it will be obviously right, and everything will be great. And the person will be convicted and go to jail, and they will appeal, and there will be a new trial, and the judge will quash the previous conviction on the grounds that the expert told the jury how to think, and you're not allowed to do that. You cannot tell the jury how to think; you have to let the jury think. So there's a fine line, right? One of these statisticians was quite upset about this, and he wrote an article in a statistics journal saying, no, it makes no sense to say "let the jury think," because we don't know how to think about tiny numbers like one in 20 million. We don't know how to think about them; we're not trained, and our intuition fails. So yes, a lot is going on, but it's not all that successful so far.

And let's give our speaker another thanks. Thanks.