In 1985, I was part of a big debate that took up an entire issue of the Journal of Parapsychology, on the Ganzfeld effect, which I'll talk about in a bit. Half the issue was me, and the other half was the rebuttal to me by Honorton. In my 1985 paper, fifty-some pages of critique of their experiments, I did the very first meta-analysis in parapsychology. I say that as a mea culpa: I feel bad that I did it because, as far as I know, I'm the first person ever to do a meta-analysis in parapsychology, and the parapsychologists seized upon it: hey, here's a way we can show that our results are actually replicable. Since then, parapsychology has been doing meta-analyses all over the place. More recently, I've had to write about why the meta-analysis, the way they're using it, confuses exploratory with confirmatory analysis; used the way they use it, there's no way it can serve as confirmation, and I'll explain that. So I just want to make that apology: I may have inadvertently unleashed years and years of parapsychologists thinking they have definitively shown that the results of parapsychological research are not only significant but fantastically replicable. In fact, two of the major parapsychologists claim their results are even more replicable than experiments in psychology and in physics. So here is the final point of our talk: everything's cumulative, we want good evidence, and I've given you a framework by which you can hopefully judge the quality of the evidence you're dealing with. But that conditional structure we gave you usually applies to just one experiment: if this hypothesis is true, then we should get certain results, and you reason through that kind of conditional. That applies to a single experiment, but a single experiment by itself is not sufficient to say you have high-quality data. In science, it's recognized that you've got to replicate. It should be not only replicated but independently replicated, by different people, by your rivals, in different laboratories. Can they get what you get? It's not science until it's public and independently replicable. Only under those conditions do you have a basis for saying: this is good data that I can build my thinking on. There's one other point I should come back to, but let me begin by saying that I'm going to apply all of this to parapsychology, and we're going to deal with one kind of parapsychology experiment, called the Ganzfeld, G-A-N-Z-F-E-L-D. The Ganzfeld, from a German word meaning the entire field, is a term psychologists applied to a situation where you put people into a homogeneous visual field, say by putting ping-pong balls over their eyes. You can do it a lot of ways, but the standard way now is to take ping-pong balls, cut them in half, and place the half-balls over the eyes of a person, and then shine a bright light in front of them. Sometimes it's a pink light, but a white light will do.
With the person reclining in the chair and looking at that, the world becomes featureless. The reason psychologists were interested in this is that perceptual psychologists have always thought that perception depends on seeing edges and contours: if you could see the world without edges and borders, you shouldn't really see anything. And that turned out to be true once they figured out how to do it, creating what's called the Ganzfeld. When people are put in that state, everything becomes like walking in a fog; there are no objects anymore, everything is foggy. And after a while, you get into what's called an altered state of consciousness. Pleasant. They usually, at the same time, play ocean sounds or just white noise into your ears. With that light and that sound, within a few minutes people get into what they consider a very satisfying altered state. The parapsychologists picked up on that, because they're always looking for ways of making their experiments more reliable. Parapsychological experiments are very difficult to reproduce; it's very difficult, in the first place, to get consistent data. And their story is that the reason for it is that when you're dealing with psychic phenomena, you're also in a world where you've got regular sensory input, and the sensory input interferes with picking up the delicate psi signals. So if we can somehow suppress the ordinary sensory input, we can enhance the psi signal. That's why they latched onto this as a possible way of solving all their problems, because for them the problem is that they're dealing with a very subtle ability, very delicate and very erratic, and very hard to demonstrate scientifically. So they developed what they call the Ganzfeld psi experiment. The first one was published in 1974 by Charles Honorton; I forget whether he had a co-author, but he is the key person in this, and I will put him down here. So 1974 begins the era of the Ganzfeld psi experiment, and to this very day it's the longest-running consistent experimental paradigm that parapsychologists have had in their more than 150 years of trying to do science; for them, it's the paradigm that lasted longest before being dissolved or given up on. Now, around the time I first got involved with it: Paul Kurtz, whom you may have heard of, was the editor of The Humanist magazine, and 1976, you remember, was when CSICOP was formed, with Paul Kurtz and Marcello Truzzi as the co-chairmen; eventually Kurtz became the head of the whole enterprise. In 1977, he had Martin Gardner and me debate a parapsychologist by the name of D. Scott Rogo about ESP; Martin did a general article, and I was in direct debate with Rogo. And Rogo used the 1974 Ganzfeld psi experiment to justify the claim that we, the parapsychologists, now have an obviously replicable (he said he certainly had no doubts it would be replicable), true, scientifically correct, significant experiment. So for the first time I went and read that experiment very carefully, and I found problems with it. You can find problems with any experiment, but these, I thought, were serious. For one thing, here's how they did these experiments, and they still do them this way.
In a Ganzfeld psi experiment, they have a room where they put the receiver. Let's say this is the receiver: lying in a reclining chair, made very comfortable, with the ping-pong balls over the eyes, a bright light shining into them, and white noise playing into the ears. That person is isolated in the room, shielded from the rest of the world, more or less, along with the experimenter; in this case it could be Honorton. In another room, possibly another part of the building, but supposedly separated from the first, is the sender. The sender is sitting at a table, looking at a target, which is usually a picture of some sort, a photo maybe. The target has been selected from a set of four, randomly, by some blind process, so the sender ends up with one of the four targets that belong to the set, randomly chosen for the sender to focus attention on. While the receiver is in the so-called Ganzfeld state and the sender is staring at the target, the receiver is encouraged to mentate; what they mean by that is to say anything that comes to mind, whatever they see or feel. So they talk for about fifteen minutes, and a transcript is made. The important thing is, after the session is over, about fifteen minutes or so, they take off the receiver's ping-pong balls, shut the light off, and get them to sit up. Then someone, not the sender, takes the target, puts it back into the set, and the whole set of targets is brought into the room and shown to the receiver. Got it so far? So the receiver is now given four targets; one of them was the actual target, the others are the foils for this experiment. The receiver has to decide, on the basis of his or her imagery during the session, which one best matched the images that were coming to mind. They have one chance in four of being correct just by chance. Does that make sense? Someone said no, and someone said yes, okay; but if it's done right, if they randomized correctly and so on, that should be the case. So over a number of trials, if there's nothing but chance going on, one-fourth of them should be correct and three-fourths incorrect. So 25% is the chance level, and they compare that with the actual level they got. In this case, they got something like 33% correct. Okay? Now, any number is possible just by chance in an experiment like this. So we get into what is true in physics, psychology, and many other fields: because most of our data are probabilistic, we use statistical testing. You've heard of null hypotheses, or maybe you haven't. At this point, most people go to sleep, okay. For whatever reason, our minds were not made to handle probability and statistics and stuff like that.
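To make that chance level concrete, here's a minimal sketch in Python — my own illustration, not anything from the original papers — that simulates series of Ganzfeld trials under pure guessing, where every trial has a 1-in-4 chance of a hit. It shows how much an observed hit rate can wander around the 25% baseline in a modest-sized series:

```python
import random

def simulate_null_series(n_trials: int) -> float:
    """Simulate one Ganzfeld series in which the receiver guesses blindly.

    Each trial has one true target among four choices, so P(hit) = 1/4.
    Returns the observed hit rate for the series.
    """
    hits = sum(1 for _ in range(n_trials) if random.randrange(4) == 0)
    return hits / n_trials

random.seed(1)
# Ten simulated 30-trial series, all pure chance: note the spread around 25%.
for i in range(10):
    print(f"series {i + 1}: {simulate_null_series(30):.0%} hits")
```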
And I, as both a psychologist and a statistician, know this. I had to help all my colleagues and my students with it, and it was very difficult, because most of them went into psychology because they wanted to avoid statistics and probability. But it's essential if we're going to do experiments. So they used a proper statistical test, and they decided that this result was significant. That's the basis of almost everything done in parapsychology: it depends on comparing the outcome of trials against what you expect by chance. Sometimes the outcomes are significant, as they call it, and sometimes they're not; they're hoping to get significant results. And here they got a significant result. Although it doesn't look big, 33% turned out to be robust enough that they felt they could conclude that something more than chance was going on here. And if they had done everything right — this is the problem with doing parapsychological research — if they had controlled for every other possible way this person could get it right, and they've eliminated chance, then it's got to be psi. That's the argument. But you can already see the major problem with doing parapsychological work: you're trying to eliminate all the possible reasons why something could be above chance without there being anything psychic there. So the experiment has to be essentially perfect. They have to have a very well controlled experiment where they make sure there's no other alternative explanation. Very difficult to do. And these people have been doing this for years; they're very clever; they often have good training. The thing I discovered when I read their paper very carefully was that everything was done as I told you, except that when a target is being viewed by the sender, there's no control over whether he or she touches it. In fact, the sender is encouraged to hold the target. And that target is then placed back into the pool. Can you see anything wrong with this? It's the exact same object. Yeah — they could have bent a corner. Anything. If the two of them are deliberately in cahoots with one another, the sender could bend a corner or leave a subtle mark on it. Or inadvertently, just handling it could smudge it in some way. Or, if the judging is close in time, it could even be slightly warmer than the others. There are, not necessarily plausible, but possible non-paranormal means by which the receiver may be getting better-than-chance success at picking which of the four was the target. And the parapsychologists themselves, ever since J.B. Rhine made this important because they want to be scientifically accepted, have said: if there's any possible normal means, even an implausible one, we cannot call this a successful psychic experiment. We've got to make sure we've eliminated any normal, non-psychic means of transmitting the information. Well, by Rhine's and other parapsychologists' own standards, this was not an acceptable experiment, just because that possibility existed. We don't have to say that it was what caused the result; but Occam's razor and other principles say, if there's a normal explanation for something, why push for a paranormal one, right? Okay.
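The talk doesn't say exactly which statistical test they used, but for this design the textbook choice is a one-sided exact binomial test against the 25% chance level. Here's a minimal sketch with illustrative numbers of my own (40 hits in 120 trials, about 33%), which comes out around p = .02:

```python
from math import comb

def binomial_tail_p(hits: int, trials: int, p_chance: float = 0.25) -> float:
    """One-sided exact binomial test: P(X >= hits) under pure chance."""
    return sum(
        comb(trials, k) * p_chance**k * (1 - p_chance) ** (trials - k)
        for k in range(hits, trials + 1)
    )

# Illustrative numbers only: 40 hits in 120 trials is a 33% hit rate.
hits, trials = 40, 120
print(f"hit rate: {hits / trials:.1%}")
print(f"one-sided p-value vs. 25% chance: {binomial_tail_p(hits, trials):.4f}")
```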
So there was a controversy about that, and Honorton and I had other, similar fights like that. Honorton said, even in the literature: Ray, you're claiming that this was the cause rather than psi. I didn't claim that at all. I just said that, by your own standards, when there is a possibility of a non-paranormal means for this person to pick up the information, it isn't an acceptable psi experiment — by your own standards. His argument was: Ray, you have to prove that that was, in fact, the reason this person did better than chance. Shifting the burden of proof — that's a favorite move of his, he was always doing that, and many times skeptics fall into the trap. That's one of the things I point out all the time, over and over again: skeptics are always falling into the trap of making a specific claim about how this guy cheated, which puts the onus on them to prove that he actually cheated that way. And that means that in many of the fights skeptics get involved in, they have put themselves in the position of having to defend their own claim rather than the psychic who's making the original claim. Even I have sometimes gotten caught that way, as I'll point out a little later, maybe. It's easy to fall into the trap; in fact, perhaps I should have kept quiet about it and said nothing, because I got into the situation where, once I pointed that out as a possible flaw in the experiment, they were demanding that I prove it actually was the cause of the result. That's not how it should work: the onus should always be on the person making the claim — the very, very strange claim that there is, in fact, a psychic means, which most scientists don't accept, by which this person was getting the information. Well, a lot of people want to be right; I like to be right sometimes too. What's that? I'm not sure I got your point. That's the purpose of the null hypothesis? Yeah — well, there are a lot of questions about null hypothesis testing in many cases, and parapsychology is an interesting case. In most science, the null hypothesis is a straw man: we set it up, but nobody actually believes it's real. Many people say we'd have much stronger evidence if, instead of testing a null hypothesis, we compared plausible hypotheses against one another. So it's a big theoretical fight, whether null hypothesis testing is good or not. In the case of parapsychology, a null hypothesis actually makes sense in some ways, because we really do believe that if there is no such thing as psychic phenomena, and everything else is controlled, then on average the null hypothesis should be true. However, there are other reasons why the null hypothesis could be rejected, and that is because the statistical models we use are never perfect models of what the environment is doing. As a result, when you have what we call very high power, you can get a significant result even when the null hypothesis is true in its pure form, because the statistical model is very likely only an approximation — a good enough one when you have reasonable amounts of data. But that's another thing; I don't want to get into it at the moment, and we don't have time anyway.
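A minimal sketch of that last point, again with invented numbers: suppose the chance model is imperceptibly off — say the true hit probability is 25.5% rather than 25% because of some small imperfection in the randomization. With enough trials, the test will flag that trivial model error as "significant" even though nothing psychic is going on:

```python
from math import erf, sqrt

def z_test_p(hits: int, trials: int, p_chance: float = 0.25) -> float:
    """One-sided normal-approximation test of hits against the chance rate."""
    mean = trials * p_chance
    sd = sqrt(trials * p_chance * (1 - p_chance))
    z = (hits - mean) / sd
    return 0.5 * (1 - erf(z / sqrt(2)))  # upper-tail p-value

true_rate = 0.255  # hypothetical: a tiny bias from imperfect randomization
for trials in (500, 5_000, 50_000):
    hits = round(true_rate * trials)
    # p shrinks from ~.38 to ~.005 as the sample grows, with no psi anywhere.
    print(f"n={trials:>6}: {hits} hits -> p = {z_test_p(hits, trials):.4f}")
```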
But anyway, I want you to understand what the Ganzfeld situation was like, because around 1980 — so this started in 1974; by 1980 — I got a request from the Proceedings of the IEEE, the major international engineering journal, to write a tutorial for them on parapsychology from a skeptical viewpoint, because they had already published a paper by Targ and Puthoff on their research with Geller and on remote viewing. And that got a tremendous backlash from engineers all over the world saying, first of all, this doesn't belong in our journal; it's embarrassing for our whole profession to have such bad research. So they thought they might calm the waters a little if they had someone like me write a lengthy article on the other side; that might appease everyone. In order to do that, I thought I'd better follow my own maxim: if I'm going to tackle and criticize the field of parapsychology as of 1980, I don't want to look at the bad parapsychological experiments, I want to look at its best. If I'm going to tackle someone's field, I want to look at the best they have. So I asked around in 1980; I actually went and contacted parapsychologists, including Charles Honorton, and I said: okay, you guys, I'm going to evaluate the field of parapsychology. I can't look at all of it; by then there were thousands of papers published in parapsychology, and there's no way I could do that in a reasonable amount of time. But tell me what the best papers are, what your best paradigm in parapsychology is. I want to look at your very best, because if I find that your best doesn't hold up, I don't have to worry about the rest. They almost unanimously picked the Ganzfeld experiments. By then a lot of Ganzfeld experiments had been done, many of them supposedly successfully replicating the original work. So I contacted Honorton and asked him: how can I get hold of most of these experiments? Because they're in journals I didn't necessarily have access to. And he was so pleased that a known skeptic like myself was going to take them so seriously, he said: look, I will make sure to get you every published paper, including unpublished ones. And sure enough — I was then spending a year at Stanford — in 1982 there arrived on my desk at Stanford a big box with some 400 pages of documents. He said: as far as I know, this is every experiment, published and unpublished — half of them never published — everything done in parapsychology on the Ganzfeld effect. And there it was. I was overwhelmed, but I had all the pages to go through, and I began quickly scanning them first. And I was impressed, maybe in the wrong way. First of all, I knew many of the names on the papers. They were well-known parapsychologists, for whom I had some respect; I knew they had some scientific training. Many parapsychologists have PhDs in regular sciences, like physics, sometimes biology, sometimes psychology; for one reason or another they became parapsychologists, but they have good scientific training. Several of them were the authors of these papers, and some I didn't know. And the results all seemed to be significant. According to Honorton's count, there were 42 separate experiments, some by the same author, and I think something like 75% of them were said to have produced significant results. That's what we call a vote-counting method.
In other words, you just count the number of significant studies. That was before they had meta-analysis, and everyone realizes it's not necessarily the best way of judging a field. But as I began doing that, I began spending more time reading the papers very carefully, and I kept finding things: these guys should know better; they're doing the wrong statistical tests. Or they're not randomizing properly: they should know that with good tables of random numbers, and with random number generators on computers, you don't have to shuffle by hand anymore, which is not considered an adequate method of randomization. And they were doing other things wrong. I said, well, I'm going to get arguments about this, because such judgments are subjective — it didn't look like they were doing the right controls and so on. The first time I made a small report of it, Honorton argued with me; people argued that I was misreading it, that it wasn't that way. So I decided to reduce what I was doing to yes-or-no questions. Did they use the right test, yes or no? That seems black and white. Did they say they had used standard randomization methods or not? And I came up with about eight, maybe ten, yes-no criteria I could apply to each paper: half had to do with statistical problems, half with experimental, methodological problems. I thought that would pin it down. And when I applied them, every single one of the 42 studies had at least one of these flaws, and most had several, as shown in the sketch below. On my understanding of the parapsychologists' own methodological standards, even one of the flaws I found would be sufficient to invalidate a study and take it out of the pool. That created a huge backlash, and Honorton accused me of all kinds of things, and we got into a fight that was not very nice; we tried to be civil at times, but it got very uncivil. It culminated ultimately, in 1985, in a whole issue of the Journal of Parapsychology, the major journal in the field, devoted to the debate between Honorton and myself. They allowed me to write a paper as long as I wanted — I think it was fifty-something pages — in which I listed all the problems I had found with these studies. And I also did a meta-analysis, as I said, the first meta-analysis done in the field. And I concluded that the number of flaws and so on discounted this whole batch of studies — what is now called the original Ganzfeld database. Honorton replied; he took a year and a half to reply to my paper, and during that time I was not allowed to change or do anything with my paper. He got various experts in statistics and a bunch of people to advise him, and he wrote a lengthy, over-50-page rebuttal of my piece, and I wasn't allowed to respond in that same issue. In most psychological journals, if someone criticizes an article, the author being criticized has the privilege of writing a short response to the criticism. I wasn't allowed to do that; they said a future issue might be open to it, but not that one. So the whole issue was taken up with this debate. And what happened, as you can imagine, was that after the journal was published, the parapsychologists said: well, Honorton demolished Hyman.
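To show the difference between those two ways of summarizing a literature, here's a minimal sketch with invented example studies (the 42 real ones aren't reproduced here): vote counting just tallies significant results, while the yes/no checklist fails any study with even one flaw:

```python
from dataclasses import dataclass

@dataclass
class Study:
    significant: bool       # did the paper report a significant result?
    flaws: dict[str, bool]  # yes/no criteria, True = criterion satisfied

# Invented example studies, for illustration only.
studies = [
    Study(True,  {"proper_test": True,  "proper_randomization": False}),
    Study(True,  {"proper_test": False, "proper_randomization": True}),
    Study(False, {"proper_test": True,  "proper_randomization": True}),
]

# Vote counting: tally the significant studies (a weak way to judge a field).
vote_count = sum(s.significant for s in studies)

# Flaw tally: a study with even one failed criterion drops out of the pool.
flawed = sum(not all(s.flaws.values()) for s in studies)

print(f"{vote_count}/{len(studies)} significant; {flawed}/{len(studies)} flawed")
```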
His critique, they said, showed that Hyman got it all wrong, that he was biased and everything else. The skeptics, all the people in CSICOP, all the skeptics in the world, said: Hyman showed that those guys don't know what they're doing; he really exposed them. And it dawned on me that no one was ever going to do what Honorton and I had done. We were down in the nitty-gritty, looking at each experiment and the raw data, figuring out whether that experiment was done right, and so on. No one, even in parapsychology, in their right mind, is going to take the time that we took to do that. So all the parapsychologists — this is what Kahneman calls attribute substitution; remember we mentioned that? — instead of putting in all the hours and months of work that Honorton and I did, going over these papers and seeing whether they really are flawed or not, they substitute the actual problem with a simpler one: if they're parapsychologists, they simply take Honorton's word for it, and my friends, the people on my side, were taking my word for it. And I realized that literally no one is going to know who is right; those who are on my side are just going to trust me and take my word for it. And so, after I'd written an 80-page rebuttal to Honorton's rebuttal of me — the journal was going to publish it, but then they were going to let Honorton reply to that one too, so there would be another whole journal issue in 1986 or so — at that time I happened to meet Honorton at a convention, and someone talked us into going to lunch together. We did. And Honorton was unpacking the car, and he says: how could you say all those mean things about me in your rebuttal? Basically, he was very upset with what I was saying. I said: I didn't say anything mean. What mean things did I say? He said: well, you said I was wrong about this and this and this. I said: you were. It was just factual. To me it was factual; to him, somehow, I was insulting his integrity. I wasn't being mean — I wrote a paper on proper criticism; I tried to follow my own rules. But somehow he seemed to be very hurt. And I was hurt too, because in his rebuttal he accused me of all kinds of things that he himself had done. So we had this long discussion. But as we talked, I realized more and more that we were agreeing on a lot. To my surprise, he agreed with me: yes, until this whole batch of original experiments gets replicated with cleaner experiments, we should withhold judgment on whether we've got something here or not. I said: I agree with that. That's fine. So I said: okay, I'll withdraw my paper. I made this proposition to him: we'll write a joint paper. Which we did, in 1986. We wrote a joint communiqué, which essentially said: look, at the moment, the data in the Ganzfeld database has enough problems that it really doesn't support the reality of psi. And if, in the future, experiments are going to be produced that do support the reality of psi, these are the criteria they should meet. Okay? So that was hailed by all kinds of people: here are these two combatants suddenly agreeing on something, putting out a joint paper. Isn't this wonderful? Lovey-dovey, and the world is great. But meanwhile, Honorton had begun a new series of experiments.
Actually, he had begun them before we wrote that communiqué. They're called the autoganzfeld experiments — I've written it up there, and unless you're psychic, you won't be able to see it, okay? And the autoganzfeld experiments, once he did them, got published in parapsychology journals before he died. Unfortunately, he died young. But by then, Daryl Bem — you may have heard of him; he's the guy who later brought you "Feeling the Future" and stuff like that; we talked about him a little bit — Daryl Bem, a well-known, recognized, and revered person in the field of social psychology, in his later years also became involved in parapsychology, as I mentioned. He hadn't done any parapsychology work himself, but he had told Honorton: if you do a series of experiments which by my criteria are done right, and get good results, I will sponsor them for you and make sure a major psychology journal publishes them; because with my name on it, a major journal will publish it. And when Honorton came out with these autoganzfeld experiments, Bem coauthored the article with Honorton. Honorton died before it was actually published. That was, I think, in the 1994 Psychological Bulletin, which is a major, prestigious journal in psychology. And it made a big splash, like Bem's later paper, because here was a paper supporting parapsychology with a major psychologist as one of the coauthors. And I was asked not only to referee it, but also whether I would write a commentary to appear in the same issue. So the article appeared, and the article said that the autoganzfeld experiments definitely established the reality of psi, no question about it; that they supported the original experiments that Hyman had criticized; and that they showed Hyman's criticisms, whether they were valid or not, didn't really affect the reality of the results. And we all should bow and pray and so on. So in my commentary, I pointed out: hey, this experiment was not a replication; in fact, it was a failed replication. I was puzzled that they were presenting it as having replicated the original results. Because the autoganzfeld experiment consists of two kinds of sub-experiments. In some, the targets were like those in the first Ganzfeld experiments: pictures, static pictures — we call them static targets. But on about half the trials they also introduced what they call dynamic targets. These were short video clips rather than fixed pictures, with actual sound — an airplane flying with its sounds, things like that. It turned out that on static targets — and there were several experiments, several hundred trials; it was a big experiment — the results were just at chance. On dynamic trials, the results were fairly high; the average was again about 35% hits. Remember, it was 33% in the original set of experiments — about the same as in the whole set of original experiments. So, on the basis of that — I won't go into all the details — Bem said: these are the best experiments done, done to the best standards, and there it is. And in this case, I wrote to Bem, because Honorton had died, and I said: could I have access to the original data?
Not just the published summaries — I wanted the actual numbers. He said, well, I don't have it all. But he finally sent me, over the web, much of the data — some of the data for all of the people participating; I think it was 400 subjects or something like that, across all the experiments combined. And I did various analyses on it. I didn't get everything I wanted; somehow he couldn't, or anyway didn't, give me all of it. And I found rather peculiar relationships. I'll leave aside the significance of the dynamic targets and just tell you one of the several flaws I found in the raw data — you couldn't find this from the published data. In each experiment with the video clips, there are four video clips in a set; one is selected and put into the video player to be shown to the sender; then it's put back, and all four of them are played for the receiver, and he or she picks which one most closely matches what they were experiencing during the Ganzfeld state. Now, what I found was that every time a target from a pool of four was used for the first time in the experiment, those trials came out just at chance. When a target was used for the second time during the experiment, the results were a little better than chance. And it went up almost linearly — uncannily — the more times a particular target had been used before it was presented, the more likely it was to be picked. And that was striking. I found a lot of things like that in it, and I thought it should be enough to bother Bem. Bem's response — because he had the final response in this case — was: that's very interesting, what Hyman claims; if we continue to find that same phenomenon, we're going to call it the Hyman effect — a new finding in parapsychology. What I was trying to say — I didn't want to spell it out, because I didn't want to get into that kind of argument at that point, since I felt it was their problem to fix — what I was pointing at was that we know that when you play a videotape over and over again, there's some degradation. And the data suggest that maybe this was enough to give a clue: the first time a clip was played, it was chosen only at chance, but the more it had been played, the more likely it was to be chosen — and the more likely it was to be a little degraded. Okay. So anyway, that was then. And by the way, that's also the reason I said I didn't think it was a replication: the entire original Ganzfeld database used static targets, right? And the same kind of targets, used in the new, big experiment, no longer showed any significant results. So in that sense, by any literal meaning of replication, they weren't replicating the original results; it was a new kind of target that hadn't been used before. So let me jump to the present — well, to 2010, Psychological Bulletin again. This time some parapsychologists, led by a man named Storm and some of his colleagues in Australia — he's a well-known parapsychologist.
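Here's a minimal sketch of the kind of raw-data check being described, with invented trials (the real autoganzfeld records aren't reproduced here): each trial records how many times its target clip had already been played, and whether the receiver picked it, and we look at the hit rate as a function of prior use:

```python
from collections import defaultdict

# Invented trials: (times this target was used before the trial, hit or not).
trials = [
    (0, False), (0, False), (0, True), (0, False),
    (1, True),  (1, False), (1, False), (1, True),
    (2, True),  (2, True),  (2, False),
    (3, True),
]

by_prior_use = defaultdict(list)
for prior_uses, hit in trials:
    by_prior_use[prior_uses].append(hit)

# The pattern described in the talk: hit rate climbing with each reuse.
for prior_uses in sorted(by_prior_use):
    hits = by_prior_use[prior_uses]
    print(f"used {prior_uses}x before: hit rate "
          f"{sum(hits) / len(hits):.0%} over {len(hits)} trials")
```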
He did a meta-analysis of all the Ganzfeld experiments from the very beginning — and there had been several more since the 1994 Psychological Bulletin paper. So he did a big meta-analysis of all the experiments and found: wow, significant. When they meta-analyzed what they call the effect size, from the very beginning — remember, we said 33% in the original experiment — the effect sizes were going down over time. However, with bigger sample sizes, even though the hit rates were going down, the results were still significant. So it looked like, over time, it was coming down toward chance. Not quite, because at the very end it looks like it's coming back up a little bit, so they call it a rebound. But even so, over time, what's called the effect size has been going down from 1974 to the present. So I pointed out, among my other, earlier criticisms: first of all, they reject the idea, which occurs to everyone right away, that because of the criticisms they've been improving their methodology, and with improving methodology the effect is disappearing, right? The normal inference is that what they were getting before was due to poor methodology. But they reject that, of course, for their own various reasons. They also reject the idea that it's eventually going to settle at zero, because Storm says in this latest paper that the little uptick at the end is significant — in other words, it's rebounding, so it isn't going to stabilize at zero. I pointed out, though: look, in any normal science, as you improve your methodology you get rid of the earlier errors, things become better and better, and you should expect an increasing effect size, because you're reducing error. So even if it's still significant in some sense, a decreasing effect size is a degenerating kind of science, according to many people. But anyway, that's the situation. There are other things to say about this, but the main criticism I had concerns what the parapsychologists do: they take the average of the effect sizes. Now, what they mean by effect size — I don't want to be too technical — is not the 33% itself. Because different experiments may have different numbers of targets, instead of measuring the percentage of hits, they take the difference between the number of correct guesses and the number expected by chance, and divide it by what's called the standard deviation. It's a way of getting rid of the metric, so it becomes a metric-free measure. The effect size is thus just an abstract size of the discrepancy between the null hypothesis and the outcome, and it can mean anything. In this case, for example, if you take the autoganzfeld experiments and convert that 35% to an effect size, it's going to be about the same effect size as in the original Ganzfeld experiments — even though the autoganzfeld is not a replication of those, because it's a different kind of experiment. That difference is wiped out by the fact that you're now just using these metric-free measures, so you can combine different experiments.
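As I read that description, the effect size is essentially the standardized discrepancy from chance, put on a per-trial scale so that studies with different numbers of trials and targets can be averaged. A minimal sketch, with illustrative numbers of my own:

```python
from math import sqrt

def effect_size(hits: int, trials: int, p_chance: float) -> float:
    """Standardized, per-trial discrepancy from chance:
    (observed - expected) / SD, divided by sqrt(trials)."""
    expected = trials * p_chance
    sd = sqrt(trials * p_chance * (1 - p_chance))
    return (hits - expected) / sd / sqrt(trials)

# Illustrative numbers only: two four-choice studies with 35% and 33% hit
# rates yield similar abstract effect sizes, even if the experiments
# differ in kind (static pictures vs. video clips, say).
print(f"{effect_size(hits=70, trials=200, p_chance=0.25):.3f}")  # ~0.23
print(f"{effect_size(hits=66, trials=200, p_chance=0.25):.3f}")  # ~0.18
```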
And later on, following the autoganzfeld experiments, a parapsychologist at Rhine's old Duke laboratory, Broughton, and his colleagues spent two years replicating the Ganzfeld experiments, using the same number of subjects, keeping everything the same, even obtaining some of the original equipment. They spent two years and got zilch. At the same time, a Dutch parapsychologist also spent two years trying to replicate the Ganzfeld, and also got zilch. Yet these get included in the same meta-analysis, because you can include anything once you've got an effect size. Whether the results are significant or not, if there's a little bit of effect there, you combine it all in this abstract way — and you can get anything that way. And this is the basis on which the parapsychologists claim: hey, we now have not only good data but data we've been able to independently replicate over a period of now thirty-plus years. It's pseudo-replicability. It confuses things. First of all, the meta-analysis is after the fact. You're picking and choosing; you have to decide what to include and what not to include, which is an inherently subjective problem — you already know the results of the studies you're putting in. It's not blind; it can't be, because, as I say, you've got to pick them. It's an after-the-fact kind of thing. And it confuses what we would call confirmatory with exploratory analysis. It's a fine idea, for exploratory purposes, to go back and take old experiments, combine them in this abstract way, and get a range for what the real effect size might be. That's okay to do if you realize it's exploratory — it's not a way to draw statistical conclusions; there are a lot of reasons why you can't do that — and then use it to form a hypothesis for a genuinely repeatable experiment. Your hypothesis should be: with this number of subjects, and assuming the effect size we estimated from the meta-analysis, we should get this result. That would be a properly repeatable, positive experiment — I'll sketch what it would look like below. It has never been done. Parapsychologists have been doing meta-analyses ever since I did the first one in 1985. They've got meta-analyses coming out of their ears; they write books with them. And they've never been able to prospectively predict, from any meta-analysis, to new data. They can only retrospectively go back and say: hey, everything's significant if we treat the old exploratory experiments as if they were planned, combining them together. So this is the one thing I want to warn you about. Replicability is important. Parapsychologists realize it, but unfortunately, in their zeal to have it, they have resorted to a completely artificial creation of what I would call pseudo-replicability. They have yet to produce any really lawful, replicable experiment. Now I must say this, because there's a funny thing going on in the parapsychological field. There are some major parapsychologists now who agree with me that the results are not replicable. Now, how can any parapsychologist agree with me that they cannot replicate their experiments? Because they say: science isn't ready for this kind of stuff. The phenomenon of psi is so elusive and so evasive that you can't catch it with normal scientific methodology.
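What that prospective, confirmatory use would look like, in a minimal sketch with assumed numbers: take the effect size estimated from an exploratory meta-analysis and compute, in advance, how many trials a new study needs — then run exactly that study and see whether the prediction holds:

```python
from math import ceil

Z_ALPHA = 1.645  # one-sided 5% significance level
Z_BETA = 0.842   # 80% power

def required_trials(effect_size: float) -> int:
    """Normal-approximation sample size for testing a standardized
    per-trial effect size against zero (one-sided)."""
    return ceil(((Z_ALPHA + Z_BETA) / effect_size) ** 2)

# Hypothetical effect size of 0.23 (roughly a 35% hit rate in a
# four-choice design): the study is sized before any data are
# collected, which is what makes it confirmatory rather than exploratory.
print(f"trials needed: {required_trials(0.23)}")  # 117
```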
And the idea of demanding the kind of repeatability that science demands should be put aside, they say, for parapsychology, because psi is different. There's even a paper by Robert Jahn, one of the major parapsychologists, who was also the head of the school of engineering at Princeton. Robert Jahn wrote a paper with a colleague saying we should change the rules; the whole point of the article is that science has got to change the rules to let in parapsychology. This is what's called begging the question, in logic, for those who know it: you assume, because you already assume, that psi is real, and since it can't be demonstrated by scientific methods, there must be something wrong with the scientific method. Anyway, this is where the field of parapsychology is now. And by the way, about the people who say, as I said, that we can't replicate because that's the nature of the beast: one guy, John Kennedy, has gone quite far with this, and Kennedy is actually a pretty good skeptic about parapsychological work; he's been a thorn in their side. He now says he agrees with Hyman that the results aren't repeatable, and that many of the flaws Hyman found are real. But, he says, Hyman is wrong in saying that this means it's not scientific. He went back to William James — and I think he misinterpreted William James. William James, in case you don't know, was a famous philosopher but also one of the fathers of psychology, who at the turn of the century, at Harvard, wrote the classic book which is still the best-written book on psychology; the information in it has now changed somewhat, but it's still a good book. William James also took up the testing of psychic phenomena. He spent the last 25 years of his life testing spiritualist mediums and the like, and he was intrigued; he thought some of it might be real, especially with Mrs. Piper, one of his favorite mediums. But at the end of his life, just before he died, he wrote: after 25 years of doing my best to test psychic phenomena, it's still unsettled. It still hasn't gotten anywhere. I'm still where I started, because the phenomenon is so elusive that half the time I think it's got to be a trick, or there's just nothing there; and the other half of the time it's so attractive, so impressive, that I say yes, something is there — but then when you look at it, it evades you. And he said — I'm sure he didn't mean it literally — it's as if the good Lord meant it to be this way: always tempting us to think there's something there, then pulling it away. What seriously stunned me — I couldn't believe it — is that Kennedy, a major parapsychologist, and a critical one, said that this in fact suggests to him that there's a conscious, or almost conscious, entity or force out there that is evasive, deliberately tempting us to believe and then pulling the rug out from under us — and it's always going to stay that way. So this is a problem, but I want to use it to emphasize, again, that replicability is the goal; it's the benchmark for anything we want to do as skeptics or scientists. And if you want to think about dubious claims, you want to make sure that you have good-quality mindware.
And you also have to be careful, because if you're desperate enough to believe something, you can imagine that you already have that good-quality data and that good replicability, when to an outsider — and even to people in the same field of parapsychology — you don't have that replicability. So half the field of parapsychology — I don't know if it's exactly half — some major parapsychologists claim they have replicable data, while other major parapsychologists, who believe just as strongly in the reality of psi, believe they do not have replicable data and never will. You want to avoid those kinds of pitfalls as well. So I hope that at least got you started. I hope I got you to think a little bit more about this issue of how we think about dubious claims. And I hope you will not waste your time on dubious claims if you're doing it on the basis of bad data. Thank you.