A large study on masks details their importance in the fight against COVID.

Bullshit.

For an in-depth look, we spoke to one of the lead authors of that study. Researchers at Stanford, Yale, and UC Berkeley analyzed 350,000 adults in Bangladesh. Now they took half of that group and encouraged them to wear masks. 29% of them complied with that for about a 10-week period. They found that masks in general provided a 9% reduction in cases.

Yeah, junk science. And today on Skeptiko, our guest Dr. Andy Paquette will break down what is one of the most deceptive studies I've run across. I mean, this discussion even brought us back to the Sheldrake-Wiseman days. But let's roll on with the clip.

Surgical masks were even more efficient, reducing cases by 11%. Ashley Styczynski, one of the lead authors of the study and an infectious disease fellow at Stanford, says the results offer a glimpse of just how much masks matter. So overall, we felt that this demonstrated that masks are highly effective in reducing COVID-19.

Yeah, Alex, I gotta say, like, a couple of things. I'm sorry, this is just, yeah. First off, the first headline was much more sensational than the second one. Both of them are not based on any kind of foundation of evidence found in this article. But the thing that really got me was that clip of the TV news. So the TV announcer says that they found a 9% reduction in cases. And then the lady says, yes, it's 9%, up to 11% for this other condition. And I'm thinking, I just read that paper and what they just said is wrong. It's a 9% relative reduction. The actual absolute reduction was something like 0.002%. It was tiny.

The headline to me is: big lie. And when I say big lie, I mean, it's kind of well known in propaganda that the best way to hide a lie is to make it a big lie. Because little lies are liable to be exposed. If they would have just tried to bury this study by not putting it out, and someone stumbled across it and said, hey, here's another null result.
Stack it alongside the Danish study that just came out, a randomized control study that shows a null result. Stack it along with all the epidemiological data, which we should talk about. I think what they've done is they've hyped it up in order to bury it. So the debate becomes, well, did they really do it? Did they do this right? Who did they force to do it? When the real story is: another null result.

Welcome to Skeptiko, where we explore controversial science and spirituality with leading researchers, thinkers, and their critics. I'm your host, Alex Tsakiris. And today we welcome back Dr. Andy Paquette to Skeptiko. Andy is probably best known, at least to Skeptiko listeners, for his work in cataloging and analyzing just an amazing collection of dreams. We talked to Andy way back in the day when his book, Dreamer: 20 Years of Psychic Dreams and How They Changed My Life, came out, and I have stayed in contact with Andy. He's really become a friend of mine and a friend of the show. He's also, I should mention, as you might have seen from the website of his that I pulled up there, just an incredible artist, and somewhat well known. He's also a professional photographer, has done work in major media publications, maybe he'll mention them, that anyone would know. He's also a graphic artist. And a couple of years ago, he got his PhD from King's College London on something called Spatial Visualization Among Digital Artists, which I don't know what that means. But now that I've laid out Andy's amazing background, I wanna tell you we are probably not gonna talk about much of any of that today, because Andy is this Renaissance man of a bio that covers all these different things. But one of the things I always think of with Andy, and the reason he's kind of my go-to guy for this particular show, is he is a scientist.
I mean, he's published in peer-reviewed journals, particularly he's published in the Journal of Scientific Exploration a couple of times. But he's also published in other peer-reviewed journals. But the one that always pops to my mind is the Journal of Scientific Exploration because I know the standards there. I know how tough it is because its roots are as a parapsychology journal and parapsychologists have been so picked on over the years that they're extra careful about how they do their work. And Andy has been extra careful about that. And I know over the years as he shared some of his papers with me, I've seen how he sweated over the details of getting the statistics right. And in a minute as we get on with this interview, maybe he'll even tell you about when he kind of leaned on Dr. Daryl Bem, a very famous professor from Cornell who's also well-published in this field and how Andy collaborated with him as scientists do to get their science right, to get their statistics right. So all that is just a background for why I felt Andy was a perfect, perfect fit as a go-to guy to analyze this very, very interesting study that we're gonna look at today. But before we get into it, and we will get into it really quickly, Andy, welcome, welcome back and thanks for joining me. What else did I leave out of that intro bio there that you'd like to add? Well, you did leave out the writing I've been doing lately because on a call I was making, I guess it was about a year ago relating to a photo shoot, I wound up instead talking myself into a job as a staff writer, writing articles about current events. And that was for an online publication called Law Enforcement Today. And then I've been doing that more recently for Red Voice Media, where I've just become a regular columnist. 
So, as far as the number of articles at the Journal of Scientific Exploration, I believe it's five that they've published, one for the International Journal of Dream Research, and another one that I did in a journal on education research. My research from my PhD was about the development of professional levels of competence. And then I used computer graphics artists as a group that I was going to study to find that. But the overall goal was just to look at how competence or proficiency or even expertise is achieved, primarily because I wanted to contest the idea that you actually have to, like, study something hard for 10 years in order to gain expertise. My impression was if you found the key concepts that defined expertise and you did it quickly, you would be an expert even if it took you 30 days. And I was able to show that. There's probably a few other things I left out, but it's good enough. You mentioned that I worked on Spider-Man the Movie or Daredevil or Space Jam. You didn't mention the games I worked on like Unreal and Parasite for Full Spectrum Warrior. Those are all big titles too. Anyway, you can go on. Or my TV show, forgot about that. I did a comic book that became a TV series called Harsh Realm. It was awful, don't worry about it. But still, it's a TV show. Not many people get those.

So interesting background. And again, you have this kind of amazing graphic artist background. And there's all sorts of interesting skeptical-like stories about that that we've connected on over the years. But what I'm really trying to punch up, and tell me if I'm doing it too much, is I think you understand how to analyze this mask study that was done on the impact of community masking on COVID-19. This is a study from Yale and Stanford. It made somewhat of a splash in the media because they found, God darn it, just put on those masks like we told you. Here's the best science.
Here's the science you've been clamoring about, waiting for, here it is, nail-in-the-coffin research. So let me start with this, Andy. When did you first hear about this study? I know I sent it to you. Had you heard about it before then?

No, I hadn't. The first time I knew about it was from you.

I thought it might have been me who kind of turned you on to this. Because as you know, I've been on this mask thing for a while. And when I say the mask thing, I mean this idea of whether or not masks are really effective in controlling the spread of COVID among the general population. So we might get into that in a minute, and we have to differentiate between whether they work in a lab and whether they work out in the general population. But I've kind of really gone out there, and I've hammered a number of guests on the show, and even had a debate on the show, saying that masks don't work, that you always get a null result whenever these studies are done. And then somebody on the Skeptiko forum pointed me to this, what is published here in Live Science. And the title reads, "Huge gold standard study shows unequivocally that surgical masks work to reduce COVID spread." So I started looking into that. I found a similar article in the Washington Post. I'll read that headline, which is also sensational: "Massive randomized study is proof that surgical masks limit coronavirus spread, authors say." And one other thing I want to share with folks is how it played out in the news, news being kind of in quotes here. I wanted to play this, how it was processed by mainstream media news.

A large study on masks details their importance in the fight against COVID. For an in-depth look, we spoke to one of the lead authors of that study. Researchers at Stanford, Yale, and UC Berkeley analyzed 350,000 adults in Bangladesh. Now they took half of that group and encouraged them to wear masks. 29% of them complied with that for about a 10-week period.
They found that masks in general provided a 9% reduction in cases. Surgical masks were even more efficient, reducing cases by 11%. Ashley Styczynski, one of the lead authors of the study and an infectious disease fellow at Stanford, says the results offer a glimpse of just how much masks matter. So overall, we felt that this demonstrated that masks are highly effective in reducing COVID-19. And that if we were able to achieve even more uptake than the 29 percentage point increase we saw, we would have probably been able to measure a greater effect. The study found people 16 and older...

I'm gonna pause it there. Were you able to hear all that?

Yeah, Alex, I gotta say, like, a couple of things. I'm sorry, this is just, yeah. First off, the first headline was much more sensational than the second one. Both of them are not based on any kind of foundation of evidence found in this article. But the thing that really got me was that clip of the TV news. So the TV announcer says that they found a 9% reduction in cases. And then the lady says, yes, it's 9%, up to 11% for this other condition. And I'm thinking, I just read that paper and what they just said is wrong. It's a 9% relative reduction. The actual absolute reduction was something like 0.002%. It was tiny. So for them to call that a 9% or an 11% value, extrapolated from a 9 to an 11% relative value when you're comparing two numbers that are almost identical, is really disingenuous. Now maybe they're just stupid. I suppose that's possible. They are, after all...

No, no, no, don't go there with that second part. I want to roll this back a little bit, because I've just kind of played the first impact. Because as I tell the story, you know, I've been hammering the mask stuff forever, because I looked at the existing data and the existing data always had a null result. Whenever you took it out and tested it in the general population, null result, null result.
Just recently, there was a Danish study published. Same thing, null result. When we actually try and apply masking to people in the general population, there's no difference. It doesn't make a difference if you wear a mask. It's not effective. Null result, null result. That to me seemed to be the overriding data. So when I first heard this report, when it was posted on the Skeptiko forum, which is the first time I saw it, and I found that article on Live Science, I got to tell you, because I think this is where it hits people: my heart dropped. And my heart dropped because, like, oh my God, I'm an idiot. I've been wrong, and I've been spreading this stupid information, and here are these really smart people at Yale and at Stanford, and they're way smarter than me about this stuff, and they're so confident. Nail-in-the-coffin level of research, gold standard study. These are not my words. Highest quality, gold standard type of clinical trial, known as a randomized control trial, "should end any scientific debate." So says Jason Abaluck, an economist at Yale. And I want to get into this, but I have to say, as this first hit me and, like I say, my heart dropped, there were also a couple of things that immediately jumped out at me, and I want you to talk about them, because we talked about this before. When I heard such over-the-top language as "should end any scientific debate," it did cast an immediate doubt in my mind, like, hey, maybe there's something here that we need to look into. What did you think when you read that kind of stuff?

Look, there's a couple of things. Like, for instance, the comment about ending scientific debate. That one acknowledges that there is a scientific debate, which is in opposition to all the rest of the media signaling, which is telling us that there is no scientific debate because everyone agrees about this stuff, because science says masks are good.
So the mere fact that they're actually saying this is the nail in the coffin of that argument is telling me there is an argument. Why, you are now admitting here something that you were denying beforehand. And when you use extravagant language like "nail in the coffin," or examples like that, it also makes me highly suspicious. I mean, my tendency when I'm reading that kind of language or hyperbole is to not trust it. I'm immediately suspicious that what they're saying is the opposite of what I'm gonna find when I look at whatever it is that they're talking about, because that tends to be the case.

For a preprint paper, on top of it, right?

Well, the fact that it's a preprint really bothers me. I don't know what would persuade a real, proper scientist to send out a document before it's been peer reviewed, and to tout it as a pre-publication version of something that they intend to put through peer review. To me, one reason you might do that, I suppose, is if you're worried that it won't pass peer review. And so you wanna get a step ahead of that by getting some popular support for it among people who actually don't know enough to understand the mistakes that you've made. Again, this is the direction my thinking takes when I see this, but frankly, I think it's really bad manners to do it. But on top of that, I also think it's bad science, because the peer review process is helpful to the authors. It's very helpful. I wouldn't wanna present something that hadn't been peer reviewed for a number of reasons, but one of them is I rather appreciate the help I get from the people who peer review my articles. They don't just put a stamp of approval on these things and then send them in for publication. They make comments that are genuinely useful. I get those comments and I'm like, oh gee, I really should clarify this point, or I should correct this number. And once I've done that, I feel a lot more confident in the paper.
So if I do it before I've ever shown it to anybody for peer review, I'm thinking I have less confidence in my own work right now because it hasn't been vetted by anyone else. So why is it that these guys are doing it? Well, that's really a good point. So there's a number of ways we could tackle this study. You have a number of points that you've piled up. I do too, but I don't wanna bury the lead. And I think the lead here is that when you analyze the numbers, this study actually proves the exact opposite of what it claims. Because this study is confirmation of the null hypothesis. That is that there's no evidence that masks work when you move them into the general population. And that's kind of my working hypothesis, which we'll kind of hash out later is that I think the hype on this study is kind of the head fake because if you really work out all the numbers and you say, wow, they did do a huge study. But the fact is, they got a null result. And that's a replication in a sense of all the other null results. And the last thing you want is for that to get out. So the best way to go out there is lead out with a big lie that, oh my God, this is the best study ever. Andy, I think you had a comment. I've pulled the numbers that I wanna talk about up on the screen, but what did you wanna say before we dive into this? What I wanna say is that the way the study is written is deceptive on its face. It's really clear that they're intentionally disguising the actual findings of the study and the meaning of it. They are not making any comparisons to studies that come to different conclusions. Like for instance, the many studies you're talking about that show that mask wearing has no positive benefit. And I know about those studies and I've seen them. 
So why they are left out of this makes no sense to me, because if they have this robust result, you would expect them to say: look at this, all these studies X, Y, and Z show or claim that masks aren't effective, but we have proven them wrong, and this is how we've proven them wrong. Nowhere do they address this. And that should have been right up front, and it's nowhere. I am really disappointed by that. But then when it comes to the numbers, and you keep talking about this huge study, they got around 350,000 people in the study. But when I look at the actual number of people who are relevant to their conclusions, it's a small number relative to these bigger numbers that they're throwing around. And at every single opportunity, they use the bigger numbers whenever they can, even though they're not directly relevant to what they're talking about. So that also bothers me. Anyway, go on.

No, everything you're saying is great. So what I wanted to do next, for people who are listening and aren't watching this: I'm referencing now right out of the study, and you can get this link right from the Washington Post. And by the way, you can also get a whole hashing-out of this that we did on the Skeptiko forum. I kind of put up a post saying, hey, help me out with this upcoming interview. And it was really great. I got a lot of posts, not a lot of them I agreed with, but it definitely helped the whole process, because it's hard to figure this stuff out. Everyone makes mistakes here or there, just like we're gonna point out that the scientists in this case made some mistakes. And the numbers can get a little bit confusing. But what I wanted to point out here is figure one, right out of the study. This is the headline big graphic. So again, they had about 340,000 people. They had 146,000 in the control group and about 160,000 in the intervention group. The intervention group are the people that they went and pestered the crap out of to wear this mask for the 10 weeks.
And here is the result that they got. Check this out: people in the control group at the end of the day, and we'll tell you how they got to this, but they figure out that 0.76% of their control group had COVID. The group that they pestered the heck out of, that group had a COVID rate of 0.69%. And they said, like Andy just pointed out: hey guys, let's get all excited. That's a relative 9.3% reduction in COVID. Multiply that by all the people in the world. Multiply that by all the weeks in the year. Multiply that by what we'd get if people doubled their mask rate. All of which you can't do. Total bullshit science. But nonetheless, the real fatal flaw, the real junk science part of this, is in the numbers themselves. Here's the little story I wanted to share with people, Andy, and then I want you to really take over on this. Here's another way to think of this study. Let's say I had a magic pendant, a little magic pendant with a crystal and a little leather strap on it. And I said, Andy, if you wear this magic pendant, you won't get COVID. And then I did my big study, and I came out and said, I proved it guys, I proved it. And you came back and said, okay, well, tell me, tell me how you proved it. And I said, well, we took 1,000 people, and you know how many of them had COVID at the end of it? Eight. Now I'm saying eight because that 0.76% works out to 7.6 people out of 1,000, rounded up to eight. And I'll say eight people had COVID. And then you go, okay, well, how many people in your intervention group, the people who actually wore the pendant? So then, Andy, if you were to say, well, how many people that actually wore the pendant got COVID? I'd say, oh yeah, seven. Seven out of 1,000 who wore the pendant got COVID. And you'd go, wait a minute, you said the control group, eight out of 1,000 had COVID. And in the intervention, the people who wore their magic pendant, only seven out of 1,000 had COVID. You'd go, that's not a very convincing result.
And especially if you pressed me and said, well, how did you even measure whether they were wearing the pendant or not? How did you measure whether they had COVID at the end? What kind of test did you do? Is it possible that you made any mistake in terms of testing those 1,000 people? All those things would cast doubt on how accurate it was, particularly when my end result is that this is the effect: a reduction from 0.76% to 0.69%. It is minuscule. Anyone with common sense would tell you that is not a significant difference just because you wore the magic pendant. Maybe that's a stupid example, but that's what really brought it home to me: how they're totally playing with these numbers in order to create the illusion that they've done something, when in fact they've really done the opposite. They've confirmed that this is a null result. What do you think, Andrew?

Well, yeah, and I'm sitting here thinking you're stealing my thunder here, because all the stuff you say is right. I look at that and frankly, I think it's extremely dishonest for them to call that a 9.3% relative reduction. In a scientific paper that's going into a scientific journal, you would say what the absolute reduction is, not the relative reduction. If you wanted to make that a 90% relative reduction, you could do that. Just reduce those numbers enough, like 0.001 to 0.0001, and you could have this incredible relative reduction, and it'd be totally meaningless, because the numbers are so small, just as in this case. And because of the number of people involved, you actually can do that. So when I look at the paper, I'm seeing two things that bother me. I mean, you're focusing on the number, and I think you should, because it is an important defect. But the other thing is the way they reported it is very dishonest. I would say it's manifestly dishonest, meaningfully dishonest. They've changed the meaning of what they did, how they did it, and what it means.
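For readers who want the arithmetic behind the relative-versus-absolute distinction being discussed, here is a minimal sketch. The function name is illustrative, and the rounded prevalence figures (0.76% vs. 0.69%, and the 8-vs-7-per-1,000 pendant thought experiment) come from this conversation, not from the paper's adjusted models:

```python
def reductions(control_rate, intervention_rate):
    """Return (absolute, relative) risk reduction for two prevalence rates."""
    absolute = control_rate - intervention_rate
    relative = absolute / control_rate
    return absolute, relative

# Rounded figures quoted in the discussion: 0.76% control, 0.69% intervention.
abs_red, rel_red = reductions(0.0076, 0.0069)
print(f"absolute: {abs_red:.4%}, relative: {rel_red:.1%}")  # absolute: 0.0700%, relative: 9.2%

# The magic-pendant thought experiment: 8 vs. 7 cases per 1,000 people.
abs_p, rel_p = reductions(8 / 1000, 7 / 1000)
print(f"pendant absolute: {abs_p:.4%}, relative: {rel_p:.1%}")  # pendant absolute: 0.1000%, relative: 12.5%

# Shrinking the baseline inflates the relative figure without changing
# its practical meaning: 0.001 -> 0.0001 is a 90% relative drop.
_, rel_tiny = reductions(0.001, 0.0001)
print(f"tiny-baseline relative: {rel_tiny:.0%}")  # tiny-baseline relative: 90%
```

The point the sketch makes concrete: a headline like "9% reduction" can describe an absolute difference of seven hundredths of a percentage point, and the smaller the baseline rates, the more dramatic the relative number looks.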
All of those things are reasons to not approve this for publication. If I was reviewing this paper for a journal, I would not want to approve it just on that basis alone. Even that one line that you just showed there, that image, the graphic, where it says relative reduction. Right there, that word "relative": they take that out and replace it with the absolute reduction, or it doesn't get published. But this article is full of stuff like that throughout, from front to back. The fact that they don't bother mentioning competing theories, that's a big problem for me. I don't like how they, I forget where it is, but there's one of these places where they drastically increase the numbers that are affected by this, provided their conjecture is true. But that's provided their conjecture is true, which is not something I'm willing to grant just because they say it's the case, and they give no justification for it. You know what I'm talking about? They have a number, 2.5, in there, where they essentially multiply their results by 2.5 and say this is what the results would be if this fancy-pants invented theory of ours is correct. And I'm like, well, prove that first and then give me the 2.5, because otherwise it doesn't make...

It's interesting, because where the 2.5 comes from, if we are going to talk a little bit about the method that they use, the protocols: what they did is they took this huge population in Bangladesh, which, I have to say, once I got over the point of saying this is all concocted, it's junk science and it's intentionally junk science, you start questioning the whole thing. One, why do you need 340,000 people? I suspect that one of the reasons you need 340,000 people is what you just alluded to. And I want you to talk more about that from your experience. When you have a really large population, it's kind of easier to fudge the stats at the end of the day.
I mean, if you had one tenth of this, if you had 40,000 people, you'd still have a very significant study, and you'd have a much more manageable study, right?

Okay, the thing that bugs me about this is that I'm not even convinced that that 340,000 number refers to genuine participants, in the sense that they are relevant to the claims that they're making here. Yes, they had 340,000 people fill out a survey, but they did not give 340,000 people blood tests to determine whether or not they had this COVID virus in them, okay? Only something less than 10,000, 9,000-something, of those people had that. So what they're doing is they're testing for seroprevalence, and their whole conclusion is based on changes in the amount of seroprevalence in one group versus another. And they're saying it applies to 340,000 people, but they only gave the test to 9,000-something.

Well, if you read this study, they had to go to great lengths to explain how they created this randomized group versus the control group. And they really wanna hype that up, because that they did, and they probably did right. And you know, how do you get the profile of the village that matches up, and all the rest of this? All smokescreen, smokescreen. Cause as you said, what they do at the end of the day is they go and they pester the crap out of these people. They show them videos of their sports heroes in Bangladesh and politicians in Bangladesh saying, wear the mask, wear the mask. And then they go out, and they have their little observers, who they pay, go and observe people in the market, whether they're wearing the mask. Then they say, well, we should observe them in the mosque too. You know, cause mask wearing, we already know that if masks are effective at all, they're effective where the virus is being spread, not outdoors in the market, right? But leave all that aside. Again, it just is a smokescreen. Here's what I wanted to get to.
At the end of the day, what they do is they say, okay, time to tally up the results. Let's see who has COVID. Now, this is not an unreasonable way to do it. It just has a high possibility of introducing error. And that is that they call everybody up and they say, hey, that's 10 weeks. Remember, you were doing this study. How are you feeling? Do you got COVID? Got a flu, got a cough? They go through the symptoms, and the person goes, yeah, I don't feel good today. They say, come on in for a blood test, would you? And about 40% of them come in, pretty much the same in both the control group and the intervention group, the people that they're bugging to wear the mask. That's where they get the 2.5, right? Cause 40% multiplied by 2.5, you'd get 100%. But that's fake. You can't do that. All you know is that 40% of the people you called came in. You don't know which 40%. This is a telemarketing thing. You don't know if you have somebody calling them up who's really good at talking to people, has a kind of motherly vibe, and says, oh honey, you sound really bad now, you should come in, and they get more people to come in than the other one. There's all sorts of potential for human error. Cause remember, at the end of the day, the difference you've got is one out of a thousand. If you lose a blood sample, if you get the wrong person to come in, if any of that changes, you have a complete null result. You don't even have this kind of fake positive result that barely jumps over some bar. I know you're dying to jump in here, please do.

Oh, I absolutely am. But I wanted to get back to this 2.5 times, because I'm going to read it right off the article. Their justification is, quote, "if non-consenters have similar seroprevalence to consenters." And I'm thinking, that is a completely unjustifiable assumption, or even conjecture.
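The 2.5 figure being discussed is simple arithmetic: if only 40% of symptomatic people came in for a blood test, scaling the observed case count by 1/0.40 = 2.5 "recovers" the full count, but only under the stated assumption that non-consenters look like consenters. A minimal sketch (the function name is illustrative, not from the paper):

```python
def extrapolate_cases(observed_cases, consent_rate):
    """Scale observed cases up to the whole symptomatic group, ASSUMING
    non-consenters have the same seroprevalence as consenters (the very
    assumption disputed in this discussion)."""
    if not 0 < consent_rate <= 1:
        raise ValueError("consent_rate must be in (0, 1]")
    return observed_cases / consent_rate

# A 40% consent rate gives the 2.5x multiplier: 1 / 0.40 == 2.5.
print(extrapolate_cases(100, 0.40))  # 250.0
```

Writing it this way makes the leap explicit: the multiplier is a property of the consent rate alone, and it is only valid if the untested 60% behave exactly like the tested 40%.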
And the reason, apart from the reasons you gave, which I think are also valid, it seems to me that the non-consenting group is going to be meaningfully different from the consenting group. Otherwise they would have consented. Therefore, I wouldn't want to make any assumptions about them being similar to a group that they have just proven they're dissimilar from. That doesn't make any sense to me. Especially when you keep tying it back to the numbers. And that's what I think. It took me a long time to work through this study and think about it and mull it over. But I think it's such a great window into the whole pandemic thing and whether this is a program or not is really the question we've been trying to answer all along with COVID. And I think the masks are an interesting window into that question. And I think this study in particular kind of shows the method. So I totally agree with what you're saying, but I would just bring it back to one out of a thousand was your difference. So when you talk about the potential for a mistake being made and what you just said, remember the magnitude of the mistake you need. One out of a thousand is their complete extent of the difference that they report. They also report that, hey, some of the blood samples we did, they didn't work. They didn't have the right blood. The label, the barcode label on it got messed up. They admit, you know, which is understandable that thing wasn't perfect. Tie it back though, folks. One out of a thousand is the difference. It's terrible. But you know, one other thing too that I want to get at, and I know you want to stick with, actually I'll stick with the numbers just for now, but you were talking about significance levels that 0.042 number, something like that. I just want to illustrate what that means. The 0.05 level of significance, number one, is not considered valid in a lot of situations, depending on what it is you're testing. 
But what it amounts to is a one in 20 chance that it's random, okay? That's what it means. So 0.042, not much different from that. That's like a one in 24 chance, something like that, that this is happening randomly, okay? And quite frankly, that's a high chance that it's happening randomly when you're looking at 340,000 people, okay? I would want a much smaller value with a population size that large if they're going to claim significance. And then when they're claiming relative significance, this is like saying I am relatively taller than my daughter, in comparison to a Tyrannosaurus rex and an elephant, okay? We're already very close, just because we're the same species. This is something you want to know in absolute terms: is this actually changing the effect? And that's not what they give you. And on top of everything else, on top of the fact that I think they're dishonestly reporting the results, I think their research objectives are dishonest also, because they're saying: we want to check out what kind of methods are available to encourage people to wear masks, which they assume is the good thing to do. And they start talking about the kind of methods. You know, we're going to get people in mosques to ask the people to do it, we'll pay their village elders the equivalent of 6,000 US dollars if they get their people up to a certain level of mask wearing, et cetera. Or the alternative, they say: law enforcement. And I'm thinking, okay, so you're basically telling these guys, and you're really pushing hard on this message: wear them, or we'll force you to wear them, okay? So do it nicely, or maybe we'll punch you in the face first and stick it on you while you're unconscious. That's kind of how I'm reading this, because they were really pushing these guys hard. And quite frankly, that destroys any kind of neutral viewpoint that these guys might have pretended to have when they did this. They were really coercing the subjects a lot.
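To put some code behind the significance numbers quoted a moment ago, here is a naive, unclustered two-proportion z-test run on the rounded figures from this conversation (146,000 and 160,000 participants, 0.76% vs. 0.69% prevalence). This is illustration only, not a reproduction of the paper's result: the actual study was cluster-randomized and used adjusted models, which give larger standard errors, and hence a larger p-value, than this naive calculation.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Naive two-sided two-proportion z-test (no clustering adjustment)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

# Rounded figures quoted in the discussion, not the paper's adjusted model.
z, p = two_proportion_z(0.0076, 146_000, 0.0069, 160_000)
print(f"naive z = {z:.2f}, two-sided p = {p:.3f}")

# For intuition: a p-value is the chance of seeing a gap at least this
# large under pure randomness, so p = 0.05 corresponds to 1-in-20 odds.
print(f"p = 0.05 is 1 in {1 / 0.05:.0f}")  # p = 0.05 is 1 in 20
```

The design choice worth noting: because randomization happened at the village level, individual observations are not independent, and a per-person test like this one overstates the evidence; that is exactly why a barely-significant p-value on a one-in-a-thousand absolute difference deserves scrutiny.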
And that as far as I'm concerned is a highly unnatural condition that they cannot extrapolate to the general population. Just my impression. Well, they do even worse than extrapolate it at one point, as we played in the video. They said, well, what if we could double it? We got an increase of 35% mask wearing. What if we could get to 75%? We would get an even higher rate. Which again is a problem, because your data doesn't show any of that. But to your point, there's all these problems that come up both in enforcing the mask wearing, counting the mask wearing and all the rest of that. But I just am gonna keep bringing it back to the numbers, because the headline to me is big lie. And when I say big lie, I mean, it's kind of well known in propaganda that the best way to hide a lie is to make it a big lie. Because little lies are liable to be exposed. If they would have just tried to bury this study and not put it out, and someone stumbled across it and said, hey, here's another no result. Stack it alongside the Danish study that just came out, a randomized controlled study that shows no result. Stack it along with all the epidemiological data, which we should talk about, right? Because we've kind of amassed a lot of data on mask wearing. We go into a state, and one county enforces it and the other county doesn't. And we look at the results at the end and there's no difference. We go from state to state. They do it and there's no difference. And that's difficult to compare. It has all kinds of problems. That's why we really would want a randomized controlled trial where they really do kind of control that. But we can't totally ignore that epidemiological data. But here what we have is confirmation, further confirmation, that mask wearing doesn't work in the general population. I think what they've done is they've hyped it up in order to bury it. So the debate becomes, well, did they really do it? Did they do this right? Who did they force to do it?
When what the real story is, another no result, more data that it doesn't work. What do you think of the big lie theory? Oh, I think it's absolutely right. And actually it reminds me of something the CDC did in a study of pregnant women taking the vaccine. Because what they did there, and I'm just doing this from memory, so I'm not gonna give exact numbers, but they did a study of about 900 pregnant women. I believe it was slightly more than that, but it was less than a thousand. And they said the results in their conclusions showed that the risk to pregnant women of taking the vaccine was in line with the normal risk of just being pregnant and having a miscarriage. And that was based on 900 women with a 12% miscarriage rate, which they considered normal. Personally, I think that's shockingly high. I had no idea that 12% would be considered normal. But let's assume that they're telling the truth there and it really is considered normal. What they leave out is in the very same article, just like this one, they've got some numbers that disagree with what they just said, but they don't highlight it. So if you don't pay attention, you don't see it. So within that group of 900 women, they've got it broken down by trimester and the number of weeks that they're pregnant, right? So if you look at the women who were 20 weeks pregnant or less, 82% of them had miscarriages. If you look at the women who were 16 weeks pregnant or less, it was 92% miscarriages, okay? Those are extremely high rates, but by blending those values in with the remaining women in that study who did not have miscarriages, like basically everybody over 20 weeks, they get to say it's only a 12% rate and everybody's safe. But what their data is actually saying is, if you're in your first 20 weeks, you're in a very high risk group and it is not safe for you. If you're in your last 20 weeks? No, first.
It's the women who were in the first 20 weeks who were most at risk of losing their baby, not last, okay? Okay, got it, got it. So the thing is, it's not just simply disingenuous. This isn't a simple error. This is something that they had to know, because the numbers are sitting right there. By covering it up the way they did, by masking it in their conclusions and discussion, they were able to get out the message to pregnant women, no matter how far along they were, that this is safe and you should go ahead and get the vaccine. That message can lead to them having miscarriages. And I think this is something that is highly unethical on the part of the CDC. I'm shocked that they did it, and I'm shocked that they got away with it, actually. I'm really amazed that the media let them get away with it, because the information's just sitting right there. And what that means to me is either the media is absolutely lazy as all get out and they just don't know how to read or something, so they don't bother looking, or they're complicit in this. And I actually think it's more likely a combination of the two. I think they will accept the top line reading of an article that they're told by their producers to read. But I also think that they seem to want to promote this stuff, because that's what they're actually doing. It's really shocking to see this, and it's widespread. I just wrote an article about this a couple of days ago called Laundering Lies for Red Voice Media. And it's all about how people are lying to more or less honest people, but they're doing it in such a convincing way. These honest people believe the lie and then spread it widely to other people, who readily accept it because the people who are talking to them are known to be honest people, and those people honestly believe the thing.
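The blending effect described above, where pooling subgroups of very different sizes hides a small subgroup's high rate, is just weighted averaging. The numbers below are invented purely to show the arithmetic and are not the CDC figures under discussion.

```python
# Hypothetical illustration of how pooling can mask a subgroup rate.
# A small subgroup with a very high event rate, blended with a large
# subgroup with a low rate, produces an unalarming overall number.

subgroups = [
    # (group size, event rate within group) -- invented values
    (100, 0.80),   # small subgroup, very high rate
    (900, 0.045),  # large subgroup, low rate
]

total_n = sum(n for n, _ in subgroups)
total_events = sum(n * rate for n, rate in subgroups)
pooled_rate = total_events / total_n

print(f"Pooled rate: {pooled_rate:.1%}")  # about 12%: the 80% subgroup vanishes into the average
```

The pooled figure is arithmetically correct yet says nothing about which subgroup carried the risk, which is exactly why subgroup tables matter.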
So they're taking a lie from corrupt individuals who are doing it on purpose and passing it off into the hands of other people, who are basically cleaning it because they themselves are honest and innocent in all this. And they become victims, because then what happens is they act on these lies, they change their habits, they change their social relationships and so on. It's destructive to people around them and it's destructive to themselves. So for instance, if you're lying about whether or not masks are efficacious, and you're telling people to wear these, and they're always useful, and everyone should do it, but in fact they're either useless or they have a harmful effect, then what you're doing is promoting something that has a consequence. It's the exact opposite of what your intentions are, and you're doing it because you've been persuaded to do it by people who have ill intentions. That's how it looks to me anyway. Wow, that's a stunning example. You'll have to send me that and I'll see if I can incorporate it into the video part of this, and I'll provide the link for people who want it. And I stumbled across a lie, not of that proportion, but I think it fits in beautifully with what you're just saying. And it was from one of the scientists in the study, one of the Yale guys, Ahmed Mobarak, I'm not totally sure on that name, but hey, I invited him on Skeptico. I invited, I should point out, remind me, I invited all these people on Skeptico. I invited Jason, I invited Ashley, multiple times, to come on Skeptico. I did get one response from Jason, after I pestered him a bunch of times, saying, oh, I'm too busy, I can't do it now. Which speaks back to your point, no scientific debate on this. So it's not just little old me that they're blowing off. They're not engaging in any dialogue on this. If they can go on and do kind of press release readings on the media, they'll do that, but no engagement with any of this stuff.
But here's the quote from Ahmed. This is so next level creepy. I wanna process it a little bit because it gets to this bigger issue. Here's what he wrote, I think this is in the Washington Post, but you can find it, we'll have the exact quote in there. He says, most importantly, as soon as the data began to suggest that masking had benefits, months before we drafted and released our study, we began to talk to the World Health Organization, the Bill and Melinda Gates Foundation, and the World Bank and dozens of other governmental and non-governmental groups about scaling up so others would benefit. Here's the real beauty of this lie. Remember, one in a thousand was the difference. This is clearly a lie. There's no way the data came in. If you actually run the numbers, they wound up with slightly more COVID cases in their intervention group than in their control group, because their population sizes were a little bit off and they had to adjust it. But at no time, at no time, could Ahmed have observed any kind of significant signal of mask effectiveness, because it was never, ever, ever there. And we know that because that's what the results say. So how can this be anything other than a complete lie? Yeah, well, actually I wanna say something about what you just said. I think that population size difference is important, because they mention that in the article. They say, this is the population size of the one group, this is the population size of the other group, and then they say, but the difference is negligible, so we can basically ignore that. But then when they get to this incredibly tiny result that they have, they magnify it by turning it into a relative measurement and making it seem like it's much bigger than it is. So I'm thinking, if you can just discard thousands of people extra in one group over the other, you should be giving the other number the same treatment, basically, and saying, you know what, this is too small to really worry about.
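The back-calculation being described, recovering implied case counts from reported test totals and positivity rates, and then asking whether unequal group sizes were handled consistently, can be sketched like this. Every number below is a hypothetical placeholder chosen only to illustrate the mechanics, not a figure from the paper.

```python
# Back-of-the-envelope sketch: a paper reports tests performed and
# percent positive but not raw case counts; the counts are implied.
# All values here are invented placeholders.

def implied_cases(n_tested: int, positivity: float) -> float:
    """Case count implied by number of tests and the positivity rate."""
    return n_tested * positivity

control = {"population": 160_000, "tested": 10_000, "positivity": 0.0076}
intervention = {"population": 170_000, "tested": 11_000, "positivity": 0.0070}

raw_control = implied_cases(control["tested"], control["positivity"])            # 76 cases
raw_intervention = implied_cases(intervention["tested"], intervention["positivity"])  # 77 cases

# With unequal group populations, a fair comparison has to be per capita:
rate_control = raw_control / control["population"]
rate_intervention = raw_intervention / intervention["population"]

print(f"Implied cases: control={raw_control:.0f}, intervention={raw_intervention:.0f}")
print(f"Per-capita rates: control={rate_control:.6f}, intervention={rate_intervention:.6f}")
```

Note how, with these invented numbers, the raw count is higher in the intervention group while its per-capita rate is lower, which is exactly why a population-size adjustment can flip the sign of a tiny difference.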
That's a great point, because folks, if you want to go in there and dig into the numbers, you can work the numbers backwards, because they're kind of sketchy on the numbers too. They don't always add up, but for the most part they do, and what they do give you in the study is the total number of COVID tests that they did, control group and intervention group, and the percentage of people who tested positive. They don't give you the actual numbers, they never give you the number of cases, because the number of cases would just startle you. It'd be this one in a thousand, and you'd go, one in a thousand, what are you guys talking about? This could not possibly be significant. But they bury it there. But if you work backwards, to support your point, Andy, work those numbers backwards, and there's actually more COVID cases in the intervention group, that is, people who wore masks. They actually counted more COVID cases in that group than they did in their control group, people who didn't wear masks. Now, as you just said, they can kind of say, well, there's a little bit of a population size difference there, so we'll adjust it, and then you can get this very tiny, tiny little difference that they pump up. But fake, fake, fake junk science all the way, and I would suggest a big lie. Yeah, well, one thing about that that's important too is that the difference in population size between the two groups is greater than the percentage difference between the two numbers that they extract their 9.3% relative improvement number from. So that also is important, because if they're ignoring, let's just say, 1% over here, they need to ignore everything under 1% over there, because they're connected. And I'll tell you, when I was doing my study on death dreams with all this help from Daryl Bem, which I really appreciated, I found things that didn't really match my hypothesis very well, but it didn't make me dishonest about it.
It didn't make me just discard them from the pile and not report them, or report them differently. I just reported them exactly as they were. It's like, this one is an outlier, okay? I've got all these other examples here, and this is the result I get when I run my test, and these guys are different. We can talk about that separately, but I'm not gonna leave them out, and I am gonna draw attention to it. That's the thing, when you have something like that that deviates from everything else, you have to mention it. Otherwise, you're not honestly reporting a result. These guys don't mention that stuff. They ignore it. They just gloss over it all over the place. You know, if I'd gotten this from a student, I taught a master's research writing class when I was teaching at university in the Netherlands. If I'd gotten this from students, initially I'd be thinking, wow, you've got a 96-page paper here, 92, whatever it is, and that's impressive. And I'd go, oh, you've got 340,000 participants. That's impressive, right? But then I'd read one paragraph in and I'd be thinking, this is a bunch of garbage, because they don't treat the subject honestly from the very beginning. They don't deal with any kind of contrary information whatsoever, and they disguise their numbers, and they're inflating things all over the place by using these wild extrapolations without sufficient basis. It's ridiculous. The real problem to me, that I'm trying to chronicle, if you will, because I feel like I've been kind of part of it with Skeptico, is how rapidly they've undermined science. You know what I mean? Cause like, you did the thing with Daryl Bem for your Journal of Scientific Exploration paper. I had Daryl Bem on the show. Let me pull it up. Hey, Alex. Yes. While you're doing that, I wanna mention two things. I wanna say this and I'm hoping that you find it worth including. Number one, I love talking about how science is undermined, okay?
And I also think you and your Skeptico program have done a lot to illustrate that. I really admire that work that you've done. To me, the hallmark of Skeptico is you're absolutely not afraid to deal directly with the people who disagree with you. And you, as far as I can tell, have honestly tried to find out if the other side might be right. You've asked them the questions you need to ask and you've listened to their answers, and you've waited until you've done that before deciding, okay, wait a minute, this makes sense or it doesn't. To me, that's how this kind of inquiry should be conducted. And it is something that I don't see very often anymore, at least not in these kinds of subjects. That's nice of you to say. My concern is that in the 10 plus years that I've been doing this, and it's coming up on 15 pretty quick, I've definitely seen a shift. I definitely have seen a shift. And I wanna talk about that with you, because ultimately that's what this whole thing is really about: how far down on the path are we? Is this business as usual? To what extent can we, should we, try to stop this? Back in the day, when I really had no clue, I had Richard Wiseman and Rupert Sheldrake on there debating about dogs that know when their owners are coming home. And I don't know if anyone remembers this show way back then, but we really dug into those papers. And it was kind of a seminal moment, because we got Richard Wiseman on and he finally had to admit, well, yeah, he wouldn't admit that he was being intentionally deceptive, which he was, and Sheldrake called him out on it. But he admitted, well, the data is the data. I can't really argue against Sheldrake's data. To me, at this point, that looks so refreshingly honest from a very dishonest guy, Richard Wiseman, that it's almost a marker of how far we've slipped. I pulled up episode 170, where Daryl Bem responds to parapsychology debunkers.
And I also pulled up, way back, Skeptico 126, Andy Paquette claims 20 years of history with pre-cognitive dreams. The reason they're linked is because you did lean on Daryl Bem, because you had a complicated statistical problem. Again, you're super rigorous about the way you treat your data, and as such, you had data that you could really do real statistical analysis on. And you had to really come up with some novel ways to do that, and I'm sure it spun your head around. Daryl Bem, Cornell University, published in top journals, had the same problem. And when we did this episode on Daryl Bem, he comes to the same conclusion: intentionally deceptive. And again, it was Richard Wiseman. I don't know, Richard Wiseman, he was kind of the guy that they leaned on to go debunk this stuff back in the day. But again, it was intentionally deceptive, but not to the order of magnitude that we see here. This to me seems like a whole different ballgame, where, like you pointed out at the very beginning, you have a pre-release paper that hasn't even undergone peer review. And you immediately have media access to the Washington Post, New York Times, Live Science, all the other places, to go out and make all these outrageous claims. This is a new level that I haven't seen before, and it just makes you wonder how far they've gone in just kind of completely undermining serious scientific debate, serious scientific analysis on tough subjects, on the stuff that just doesn't conform with what everyone already believes. Actually, I'm just gonna make a couple of comments on that, because I made a couple of observations that I hadn't really thought of until you started talking about this. So when I started getting into studying dreams, right, it was simply because I had evidence in front of me, and although I didn't notice it, my wife did, she got me to look at it. But at a certain point, I started listening to your shows, and it was interesting.
I actually really enjoyed hearing the adverse comments, the people who disagreed with the parapsychology hypothesis, because by listening to them, I actually felt better about some of the conclusions I had made, because their arguments never made any sense. They very rarely justified what they were saying very well, and I could very easily see through their arguments. If I hadn't seen that, I might have always harbored a suspicion that maybe there was a fantastic nail-in-the-coffin argument out there just waiting to shoot down the idea that I'm having dreams about the future, right? But because I actually saw these guys, or heard them on your show, I was able to realize that that probably isn't the case. But one thing that I did feel at the time is that this is parapsychology. This is an inherently controversial topic. There are a lot of people who, on the basis of atheism alone, aren't going to accept anything related to this. And then you're gonna have people who, for religious reasons, aren't gonna accept it. Then there's this tiniest sliver of people who are gonna be open enough to actually pay attention to the data. And an even smaller sliver that are gonna understand it, and an even smaller sliver that are gonna have access to the right data, okay? So I was looking at the problem with skeptics and parapsychology as being linked to that subject matter. But after listening to you talk right now, I'm wondering, if we're seeing dishonesty among scientists in parapsychology, why would we think it's any different among scientists anywhere else, okay? And looking at what we're seeing right now makes me think it is impossible that these guys suddenly became dishonest in the last 18 months during the COVID pandemic. I think it's been going on and we just haven't noticed it, because the subjects were inherently less controversial. In other words, why question it, okay?
And with a subject that is inherently controversial, parapsychology, and I think this is also a very interesting data point, parapsychologists have been essentially forced to use far more rigorous methods than are used anywhere else, because they keep on defeating the arguments the skeptics throw their way, but to do it, they have to keep on coming up with new methods that are even more rigorous. And what has happened is they've essentially become almost, I hate to say it this way, like superheroes among scientists, because the strength of the rigor that they're applying is much greater than what you see elsewhere. So what that implies to me is, if we're seeing this kind of thing in a field with this level of rigor, okay, it's definitely happening everywhere else. That is to say, the lies and obfuscations and so on, and nobody's looking at it very carefully because it's not very controversial. So then I started thinking about climate change science and conversations I've had on that topic with a good friend of mine who's a high energy physicist. And I'm thinking, you know what? This has been going on for a long time. There's a very high level of credulity, a low level of skepticism. And I'll tell you, I associate skepticism with the practice of genuine science, right? Be skeptical, look at the data, follow the data, come to conclusions that are based on the data, right? But what I'm seeing instead are people who are following whatever instinct they have, which may be a desire, and it may be something that is based on genuine investigatory perception. I don't know, but in this particular case, it looks like these guys wanted money from who? And they figured this is a way to do it, because doing anything that's gonna support mask mandates is gonna get the money. That's the push right now. So just like jumping on the railroads back in the 1850s, this is like a gold rush for people who do research: do something that's gonna support mask mandates.
And that's even putting a potentially positive spin on it. We don't know if it's more diabolical than that, more evil than that. But I just wanted to throw in, add a little meat to the bones that you just laid out, because retracing that history of parapsychology is really useful. I remember way back in the day, one of the things that the parapsychologists really pioneered, and Dean Radin can be credited with this, is a very rigorous statistical look at the file drawer problem, both practically and statistically. And the file drawer problem, in case people don't know, is this: researchers want a successful replication, and they can be prone in some cases, either consciously or not totally consciously, to take a study that doesn't get the result they're looking for and file it away and never publish it. And that sounds really bad, but it wouldn't necessarily be, and parapsychology in particular points this out. If you're doing a Zener card test, hey, can you tell what card I'm holding here secretly? And it just flops, you just go, oh, forget it, just put it away. So Radin had a really complicated but useful way, that has been adopted by other people, of accounting for the file drawer problem. Another one is the experimenter effect. When they said, hey, we replicate this experiment as closely as we can and we get a different result, and when we really sort it all out, the only difference we can find is the experimenter. Is it possible that the beliefs and values of the experimenter, on some level that we can't completely measure, are making a difference? These parapsychologists actually pioneered this kind of work, which has made its way into other branches of science, for people who are willing to be truly open-minded and truly want to figure out what's going on. Yeah, I was gonna say something else, but now that you said that, I'm just gonna point one thing out, because I had to deal with the file drawer problem myself.
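One standard quantitative treatment of the file drawer problem, associated with Rosenthal and commonly cited in the parapsychology meta-analysis literature, is the fail-safe N: an estimate of how many unpublished null studies would have to be sitting in file drawers to drag a set of significant results down to chance. A minimal sketch, with invented z-scores:

```python
# Rosenthal's fail-safe N under Stouffer's combined-z logic.
# The z-scores below are invented for illustration only.

def fail_safe_n(z_scores, z_crit=1.645):
    """How many hidden null (z = 0) studies would pull the combined
    Stouffer z-score below z_crit (p = .05, one-tailed)?
    Solves sum(Z) / sqrt(k + X) = z_crit for X."""
    k = len(z_scores)
    total_z = sum(z_scores)
    return (total_z ** 2) / (z_crit ** 2) - k

published = [2.1, 1.8, 2.5, 1.9, 2.3]  # hypothetical published z-scores
print(f"Fail-safe N: {fail_safe_n(published):.0f} hidden null studies")
```

A large fail-safe N relative to the number of published studies is the usual argument that the file drawer cannot plausibly explain an effect away.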
If you look at my dream journal, I have, and it's not open on my screen right now, somewhat in excess of 450 what I would call veridical dreams, meaning I've checked them out, I've investigated them, I've got some kind of validation that these dreams related to something I couldn't have had normal knowledge of, right? But that number has been relatively constant since I stopped actively looking for validation. So as of 1991, that's how many there are, and then flash forward 20 years, and it's maybe a couple dozen more, because I only passively verify them now. That is to say, if something happens and I just can't avoid finding out that it's valid, then I'll write it down, but I don't proactively go out and try to find validation. So you could look at this as either a proportion of 450 out of 800 dreams, which is a very high percentage of veridical dreams, or you could look at it in the context of the entire time I've been keeping the journal, which is 13,300 dreams, a much smaller percentage. But that's a misleading result, because the fact is I haven't been checking all that time. So how do I deal with the file drawer problem? Well, I report how many are in the journal, and when I stopped checking them, and how many were checked within that time period, and I ignore the ones that were checked afterward. So I'm able to deal with it, but I do have to deal with it. I have to think about it, and I think about it because of what you're just mentioning from Dean Radin. And I think it's an important issue. And these guys who did this article on the mask study, they're so far from dealing with the file drawer problem, it's embarrassing. But the other thing you mentioned, about this being like diabolical, I did wanna talk about that, okay? Because it's true, you can actually be kind of nice in the way you talk about these various lies that are being promulgated on the people of this country, and the world actually.
In this case, the people of Bangladesh, I guess. Maybe there's well-meaning people who have an idea and it doesn't work, and they just don't wanna admit it or they're not able to see it. But when I look at studies like that, the spontaneous miscarriage study among pregnant women from the CDC, that looks intentional. That looks like they are promoting something that they know will cause miscarriages, on purpose, because to them, their goal of getting everyone vaccinated is more important than the health of these people. And that is diabolical, because at that point, they are doing something that they know is going to cause death. I'm with you. Well, I think you make a great point. One thing I will say, save it for later, that's fine. But I think it's actually a very important distinction between people choosing on their own to do something that carries risk and people being told they have to do something that carries risk. Because it's like, if you tell everybody in the whole country you have to play Russian roulette, you're guaranteeing a certain number of deaths. If you leave it up to them themselves, not all those people are gonna try. It's kind of like the coins flipped versus coins not flipped issue when you get to the Rhine studies. I find this a very, very interesting and very damning point when it comes to those mandates. Well, that kind of reminds me of a couple of points, as we wrap this up, that I wanted to mention that get buried in all this. One, right off the bat when people think about masks, they've been kind of conditioned to get into this debate about whether masks work in a laboratory in terms of preventing the virus. And we've all seen the graphic on this. There's a mask and there's like this aerosol spray, you're a photographer, you know how they shoot it. And you see all this stuff coming towards the mask and it either gets in or gets out. I can't speak to the efficacy of those studies, and I think they're all over the board.
But what I do think is it kind of misses the point, because the point is public health policy. And in particular, the point is science and scientific confidence, and whether public health policy should be based on science, which we all agree it should be, and to what extent that science has to convince us in order for us to give up the rights that we normally think are ours, at least in this country as Americans. Our default position is, hey, you can't make me do what I don't want to do if it isn't harming anyone else. So if I want to wear a mask or not wear a mask, it doesn't matter, it's my choice. So the question is, what kind of science, what degree of certainty, would you need in order to have something that overrides that, you know? And that's what we're really talking about here. So that science is not laboratory science. We'd quickly get anyone to agree that what you'd have to do is go out and test it in the public and see if what you're trying to implement as a public health policy really is effective. The other thing that I'd point out really quickly, because I'm kind of going on about this point, but I keep making it again and again, is that whenever we've done that, we always get a null result. We always come back and say masks don't seem to make a difference in the general public. And we've never really seriously considered the adverse effects, because we haven't had to, because we're not normally forcing people to wear masks. A couple of people have looked into that, and they say, hey, there's some pretty risky things that we might want to look into in terms of mask wearing. So that's all left out of the equation, because masks shouldn't be mandated in the first place; mask mandates aren't really supported in the science after all. Well, when I see this kind of stuff going on, and anytime I see something that doesn't really make sense to me like this, my first reaction is usually, I need more information.
I'm missing information on this. And I think this is one of those situations, because the COVID pandemic reaction, based on the idea that COVID is super dangerous, does not match the data we have on the actual danger posed by COVID. Therefore it's unsupported. The wearing of masks is based on that, but that's not supported properly. And the masks aren't supported properly, thanks to all the studies showing us that they're not efficacious. And so we're being told to do this anyway, when the people who are asking us to do it have to know that it doesn't work. Okay, and we actually know that. Dr. Anthony Fauci is on the record saying masks don't work. Actually, several other doctors who are promoting the use of masks are saying essentially it's a placebo, just to make people feel better. If that's what they're saying, then why are they attaching legal penalties to not wearing masks in Australia, for instance? Or actually even in New York City. They're not now, but a number of months ago they were actually giving people tickets for not wearing masks in certain places. So that kind of thing bothers me, but one thing you mentioned made me think of something. There's a comic published in the 1940s. It's a Donald Duck comic with a story called The Golden Helmet. And the idea behind this story is that the Golden Helmet, found in Labrador, establishes whoever holds it as the owner of all of North America, okay? And so it goes through a number of different people. Donald Duck gets it, an evil museum curator gets it, all these other people get it, and they all say what they're going to do when they own North America. So the museum curator says, I'm gonna make everybody go to museums every day of the week, and school is going to be all about going to museums, okay? This evil lawyer says he's going to do all these evil things to take everybody's money, okay? And then when Donald gets it, he says, I'm going to charge people for the air they breathe, okay?
A sigh can cost a nickel, a gasp, a dime, okay? But the point is that they've got this arbitrary designation of power that allows them to make everyone in the entire country do the same thing, to their benefit. And no matter what it is, whether it's going to museums or being charged for the air you breathe, it's evil. And it's bad and it's unsupportable. So when I look at this and you ask me, at what level do you think it's okay for them to take this control over you? I don't know that there is a level where I think that would be okay. I mean, you could have meteors hurtling from the sky and the public address system could be saying, duck and cover, okay? And I would still consider it my right, perhaps unwisely, to stand out in front of a meteor, okay? And not be arrested for it, okay? I'll give you another example. I'm vegan, right? You know I'm vegan. Would you like it if I said you had to be vegan too? Because I had the Golden Helmet. I don't want that to happen. Why would I want to force you to do something you're not comfortable with? It doesn't make any sense. And that's what the government now thinks they've got the power to do. And it's not just our country. It's all over the world, it's crazy. It's not about science, it's about compliance. I heard that the other day. I think it's a great one. Andy, what's coming up for you? We are, I should mention, gonna do another show. I don't want to tell people what it's about. But it kind of piggybacks on this one, because it's about following the science, and where we might get if we follow the science, and what that might get us into in the political and parapolitical arena. But that's all I'm gonna say about it. But what else is going on with you? What are you working on? What's happening? You know, I want to answer that question, but I hate to tell you, I just had an idea to say something and I want to say it, okay? Fundamentally, science is about honesty.
Science that is not honest is not science, period, okay? Because if you don't record what you're doing honestly, if you don't state your goals honestly, if you don't report what you did honestly, and if you don't honestly evaluate what you've got, you aren't doing science. Thank you. So you're asking me what I'm doing. Well, I have to tell you, thanks to the pandemic, all the things I planned on doing, I'm not doing, and I'm doing all sorts of other things instead. So I came here and I wanted to set myself up as a commercial photographer, and I was really looking forward to traveling around the country and doing portraits of prominent parapsychologists, maybe even you, if you somehow became reachable way the heck over on the East Coast, and I wanted to do portraits of athletes. This is all the stuff I wanted to do and was set up to do, and I actually started doing it, but then COVID hit and all of a sudden it was inconvenient to be in the presence of other human beings. So studios were closed. I couldn't get to the models or the clients or anything. So one day while I was talking to someone about doing a photo shoot, this guy turned out to be the publisher of a large online publication. He said, boy, you sure sound articulate. I'll bet you'd be a good writer. Why don't you write up some samples for me? The next thing I knew, I'd written almost 100 articles for him and got paid for it. So now I'm officially a writer, I guess. And then I was approached to do a couple of comic books. So I did that. So I've done this. I also did do a few photo shoots, got paid for those, and my accountant is very confused. He's like, Andrew, what do I put down as your profession? Because you're doing all these different things and you're getting money from different sources. And I just started becoming a columnist for Red Voice Media. 
But what I'd really like to be doing, quite frankly, is getting back to my art, and also I'm doing some research on the topic we're going to be dealing with next. But that's more of a hobby that I'm doing just for my own edification. And I've also been getting quite a few contacts related to my dream research. It kind of surprises me. It all started about, I think, two, three months ago, when I think you recommended me to this lady who's an author who apparently has written a lot of books. Trish and Rob MacGregor have collectively written 100 books, and Rob wrote all of the books for Raiders of the Lost Ark. He didn't write the original ones, but he wrote a whole series with George Lucas on that. So I didn't know who they were when I did the interview, but apparently they're well-known enough that I started getting a lot more contacts to talk on other podcasts and so on. And I'm getting a lot of encouragement. Rob and Trish, I just have to interject. Rob and Trish are super-duper well-connected, and they told me after the interview that Andy might just be the most psychic person that they've ever spoken with. And I think what they meant, because Trish is kind of tuned into the scientific part of this, even though that's not really her background, is that they were just blown away at the extent to which you've documented this carefully and meticulously. And I just thought that was interesting. He might be the most psychic person I've ever spoken with. Well, you know, something funny about that, talking about the relative percentage improvement that we were talking about with this study here: I oftentimes get so close to my data that I forget how unusual it is. So what'll happen is I'll go a few days without a dream that's particularly interesting and I'll think, oh, well, I guess that's gone. And then I'll have one, but by the time I do, it's been a couple of weeks. And so it's like, wow, this is unusual now. This is rare. 
But then when I look at it from a greater distance, I'm like, well, wait a minute, no, I actually had several hundred interesting ones that year. And when I compare that to other people, it actually is a lot. But it's hard to remember that sometimes. What I see in you, Dr. Paquette, is someone who is constantly switching hats, like you said your accountant said. And I think you are totally open to challenging what that even means, what consciousness means, what precognition means. We have no clue what that means, right? And that's what I think your research points to. So the rigor with which you've taken on the real questions behind that is what I think really causes us to rethink what that even means. Because I think there's an important recalibrating that needs to go on for the term psychic. And I think that's what you're in the process of doing, because we need data. Otherwise, it's just one person's opinion. You've never been the sage-on-the-stage type; you've always been like you just talked about: oops, there it goes again. What's happening there, you know, kind of thing. Yeah, it's kind of funny. And I'm kind of embarrassed that it took so long for me to notice, because actually I had some pretty significant events happen before I was paying attention. And I let them go. I'm kind of disappointed with myself for having done so. But anyway, as far as that is concerned, I have to admit that after looking at it and making comparisons with other studies, I do have a lot of examples, more than a lot of others. In fact, although at one time I was very impressed with Robert Monroe's Journeys Out of the Body series of books, I look at them now and I think, number one, I actually have more examples than he does in those books. And of course, who knows how many he's got outside the books; I'm sure he's got plenty. 
But the thing is, a lot of what I read in there comes across as conjecture as opposed to data. And that bothers me a lot. But the other thing is, I think that precognition and prophecy, and by the way, I do define the two differently. Precognition is simply a view of the future, and prophecy is when you are shown the future within the dream, okay? So it implies another agent. That's all I'm talking about. But that's your distinction. And I don't know that that would hold up to analysis. Maybe it would, but maybe it wouldn't, you know? I mean, what is the agency? And how would we deconstruct that? And from what perspective are we looking at it? We're looking at it where everything looks like agents; maybe from another perspective, it doesn't look that way. I don't know. I keep coming back to this thing that the little bit of evidence we have, and I'm not gonna speak specifically to your evidence, but I'm interested in what you think about your evidence, suggests that we are definitely disadvantaged in our perspective, right? Because people come back, people like you come back, I don't wanna take that out, scratch that. People come back from a near-death experience, they go, I knew everything. Down here, I only know this tiny little bit. People come back from an out-of-body experience, they go, I knew everything. Now, I don't. So process that not as a story; process it as, what is the pattern there? The pattern says that we are very prone to being deceived down here. It's just the makeup, too many things running through the brain or whatever the fuck it is. But that would to me be one of the guideposts on all that conjecture about what prophecy and the distinction and spirit are. It's like, the first thing we know is, one, if consciousness is fundamental, all that shit looks like it doesn't matter. And then secondly, to the extent that it does matter, we would wanna figure it out. We're in the worst possible place to figure that out. 
Yeah, well, the way I look at it, it's kind of like if you have to repair the intercontinental cable that runs along the ocean floor, right? You have to send divers down there with welding torches, and they have to have suits on such that, unless you've got radios or whatever, they can't hear anything, and all they can see is what's directly in front of them, and they have no knowledge of what's outside the water, basically. And they just focus on that one task. And to me, that's what being born into a physical existence is like. So you can't really, but the thing is, at the same time, you're capable of doing something important, even though you're cut off from all those other normal sources of information. So I think that what we do here is actually important in some way, even though we have stripped ourselves of other abilities. But anyway, as far as what I'm doing, I mean, I'm actually wanting to get back to normal. Let's just put it that way. I wanna get back to normal. I'm writing right now. I'm drawing comics. I wanna do photos, but I want stuff to get back to normal. I wanna go back to not having to wonder what my neighbors are gonna think of wearing masks and not wearing masks. I wanna go back to just being able to say, hi, how are you? It's a beautiful day. And not worry about that stuff, because this is just really distressing. Well, Andy, it's been great having you on, and now I'm even more psyched to do this second show that we're gonna do in a week or two, and we'll bring that to people as well. So thanks again. No problem. That was great talking to you, Alex. Thanks again to Dr. Andy Paquette for joining me today on Skeptico. I usually tee up one question from these interviews, but today I have to tee up three questions in this kind of level thing that I do. The level one question is: do you think, as we claimed, this study shows a null result? That's level one. Level two: is this study big-lie propaganda, as I claimed in this interview? 
And question three, level three: who's behind this, and are they evil? Let me know your thoughts. The Skeptico forum is one place, or email me, however you find me. Until next time, take care and bye for now.