Every modern intellectual has a romantic idea of science. Science is the cornerstone of modern thought. But does science have some flaws? Is science in practice different from science in theory? Do fundamental processes like peer review really guarantee good work? And do they really weed out bad thinking? These are the questions I'm trying to answer on the 64th episode of Patterson in Pursuit. Hello my friends and welcome to the 64th episode of Patterson in Pursuit. I have been playing up this episode for the last two weeks and I guarantee it won't disappoint. This is without a doubt one of my favorite interviews so far. We are talking about the scientific process in practice, as it is executed by tens of thousands of humans all across the world. As you guys know, I tend to be extremely skeptical of everything. But if you show any skepticism about the scientific process in practice, if you think maybe the academic system is not structured in a way that preserves the ideals of the scientific method, you are immediately labeled as an anti-intellectual. Is there room for skepticism about science? I'd say good heavens yes there is. If we care about the truth, we have to be honest about the challenges that the scientific and academic community faces. My guest this week is Mr. Brian Earp, who is the Associate Director of the Yale-Hastings Program in Ethics and Health Policy at Yale University. He's also a research associate with the Oxford Centre for Neuroethics, as well as being the author of my favorite article that I've read this year, called The Unbearable Asymmetry of Bullshit. I'll make sure to have a link to that in the show notes page this week, Steve-Patterson.com slash 64. But before we criticize the uncriticizable, I'm going to give a special shout out to all the supporters and listeners of this show.
If you value the work that I'm producing, you enjoy the podcast Patterson in Pursuit, or you enjoy the articles or videos that I release, you can become a patron of the show, which means you contribute just a dollar or two every time a new article is released. Head over to patreon.com slash Steve Patterson and you can join about 90 other patrons who are all chipping in to make the show possible. Plus, if you sign up, you'll get a free copy of both my books: What's the Big Deal About Bitcoin? and my first book on philosophy, Square One: The Foundations of Knowledge. If you want this voice to be heard a little bit louder, then make sure to go to patreon.com slash Steve Patterson. Alright, I hope you enjoy my conversation with Brian Earp. Mr. Brian Earp, thanks so much for coming on Patterson in Pursuit. It is a pleasure to have you on the show. I'm glad that we can talk today. Thanks for having me on. You wrote a fantastic article, which I read this year though you published it last year, and it is my favorite article of the year. The listeners of the show shared it, and everybody who read it in our little online group just thought it was fantastic. So first of all, thank you for writing it. I imagine you got some flak for it. It's just my suspicion. But it's called The Unbearable Asymmetry of Bullshit. And you get props just for having a good and provocative title. I was pleased about the title. That's probably my favorite part. It's all downhill from there after the title. Well, so you, like so many people in the 21st century, have a great deal of respect for science, the scientific method. But you say, and there's a great quote in your article, in fact, I want to do this interview in a little bit different way than I usually do. I want to kind of go through it and then ask you questions about it. I'm not going to read through the whole thing. I'm just going to pull some remarks.
You say: I still believe that the scientific method is the best available tool for getting at empirical truth. Or, to put it in a slightly different way, if I may paraphrase Winston Churchill's famous remark about democracy, it is perhaps the worst tool, except for all the rest. In other words, science is flawed. So put a little meat on the bones there. What do you mean to say? You think science is the best tool for getting at empirical truth, and yet you think it is flawed? Sure. There's a lot going on here. I'll quickly jump back to your speculation that maybe I received a lot of flak. Surprisingly, I didn't. I received overwhelmingly positive remarks from people across a whole range of disciplines. And I think part of that was that I had identified something within some of the areas that I specialize in that hasn't really been talked about before. And I'm sure we'll get into what that is. There's a way in which some people who have an agenda, who are practicing scientists, can sometimes contribute to the literature in a way that is not the most productive, not the most truth-bearing. And I think people find that frustrating when you work in science and you're trying your best to get things right, trying your best to contribute to uncovering what's really out there. And then you have people who have less concern for those goals nevertheless participating in the conversation, and sometimes corrupting the literature with basically not-good science. So it turns out this isn't just in the specific areas that I work in; a lot of people are frustrated about this. And this leads me to your question about what I mean when I say, you know, science is the best method for getting at empirical truth, but it's got these flaws and these difficulties that we have to face.
Well, when I was a kid, I grew up thinking that scientists were these white-lab-coat-wearing truth discoverers or something like that, that they were like demigods who just figured out what was right and were immune from the kinds of, you know, human foibles that we're all prone to. And then when I started training as a scientist, and particularly when I started studying the history and philosophy of science and the sociology of science, it became obvious that scientists are, whatever else they are, human beings, with psychological biases and career interests and things like that. Now, as I say in the article, most scientists that I know personally, and certainly the ones that I trust and work with, are hell-bent on getting things right. Nobody's trying to, you know, illegitimately get a paper published or something like that. But even if you're doing your best to get things right, there are aspects of the incentive structure of being a scientist: you have to constantly be publishing or you won't have a job, so sometimes you're overworked and you're putting something into the literature that you wish you'd been able to spend more time on, and these sorts of things. So my general perspective, to put this into a more concise framework, is: scientists are humans, and scientists make mistakes. The current incentive structure of professional science, I think, is extremely problematic. And what this means is that although the scientific method, at least in an idealized form, is the best way of making sure that we don't fool ourselves, which I think is Richard Feynman's famous statement. You know, it's easy to fool yourself, and scientists try really hard not to fool themselves. They want to believe only things that they have good reason to believe. Nevertheless, it's easy to get fooled even when you're a practicing scientist.
And I think that being honest about that, and having a serious conversation about where the weaknesses are in science, will allow scientists and funders and policymakers and so on to put in place the sorts of structures that will further limit these kinds of human biases that detract from the best work that scientists are otherwise capable of doing. I think hiding from that, and pretending that you're either pro-science or anti-science, is such a simplified and frankly stupid dichotomy that it's frustrating to see it circulating around the internet whenever a debate comes up. That's an excellent point. The way that I like to talk about this is the distinction between science in theory, which is, as you say, how we kind of think of the scientists in the lab coats who are all getting at truth, versus science in practice, which is a bunch of humans, not categorically superior to every other human on the planet, going about a very difficult craft. I think you put it in an excellent way, but I want to focus on one aspect that you mentioned, which is the incentive structure in the world of science in practice. You've got a line here in your article. You say: if the scientists want to keep their jobs, at least, they must contend with a perverse publish-or-perish incentive structure that tends to reward flashy findings and high-volume productivity over painstaking, reliable research. Can you explain? Yeah. The physicist Peter Higgs, who was responsible for coming up with the idea of the Higgs boson, which is just a fundamental aspect of reality, one of the most important theories and discoveries in physics, commented that if he were up for tenure today, he probably wouldn't get a job. He won a Nobel Prize, unless I'm getting this wrong. I'm almost certain that he did. We can Google it. The point there is that it took him a long time to sit there and think, and he came out with a paper every once in a while, when he really was sure of what he was saying.
He had a secure academic job, and that's sort of the way the incentive structure was set up at the time. Nowadays, his point is, you can't get a job unless you have this consistent flow of papers coming out. Now, there are different reasons for why that's come about. Some people tie it to what they call the corporatization of universities: universities are basically turning themselves into businesses with paying customers, which are their students. In order for people to run their labs, they have to get funding. Funders want to see that you're being productive, that you're not just going to waste their money, so you don't want to be just sitting around in the lab. All these pressures are combining to make it the case that scientists, not just to be accepted by their peers or do a slightly better job, but simply to get a job at all, have to do a certain amount of intensive publishing, and the clock starts after they receive their PhD. And so I don't fault scientists for this. Again, I think some people think, well, you know, it's really up to the scientists to make sure that they're keeping their integrity at the top level it could possibly be, and they shouldn't be publishing so much stuff if they're not able to do a good job with each paper. But that's not really the choice they face. It's not doing a little bit less publishing and slightly better work or something like that. It's that if you don't keep up with the rat race, you won't work in science at all. You're going to have to get a different job altogether. These pressures really need to be dealt with, and we need to look at the systemic, contextual issues that are driving scientists into this frenzy of mass-produced work. Sometimes that leads to shortcuts. Sometimes that leads to collaborating on a paper where you're not sure where the data came from, but you know that your colleague did it. And these kinds of little things wouldn't happen if there was more breathing room, more time, and less pressure.
And this is such an important point, and maybe we'll talk about it a little later, but it also relates to the replication crisis that we've seen at least in the realm of psychology, which I think is the area that's been affected most by it, where even the material that gets published in the peer-reviewed journals, by people responding to these career incentives, isn't necessarily good science, in quotes, right? Yeah. Well, there are a number of factors that play into this issue, and I think it's a very serious concern. There's some debate among scientists about how much of a crisis we're really in. Some scientists say, well, of course a lot of work that's published shouldn't replicate later, because that's the cost of doing innovative stuff. You know, if you put an idea out there, sometimes it's not going to hold up over time, but you don't want science to be all bricklaying exercises, where you're hardly moving anything forward, never being innovative, never being creative, mathematically proving everything before you put an idea out there. And I think that that's right. We have to have a serious debate about what, as it were, the correct replication rate should be. It shouldn't be 100%, because that would mean science was moving far too slowly and there would be no innovation. It probably shouldn't be 0%. Maybe 50% is right; it's hard to say. But there's a prior discussion that has to be had about what we want in terms of trade-offs between innovation and creativity on the one hand, and painstaking care and replication and so on. But certainly there are many people who are concerned, and I'm one of them, that these incentives I've been talking about make it more likely that the work that does get published is essentially statistical errors or false alarms. And I'll just give you a few examples of how this can happen. So imagine that you have 20 laboratories basically running the same experiment.
And this is not an unrealistic expectation. If you have a bunch of people who are closely following a literature, sometimes there's a sort of obvious next step that you would take, and so you might have a bunch of different laboratories running essentially a similar experiment. Now, suppose none of them get it to work except for one of them. In this case, we're pretty sure this is a statistical fluke: if it worked one time out of 20, that's basically a guarantee that it's an error. That one person will say, oh look, I found something, and they'll submit it and they'll get it published in a journal, because they met the sort of threshold criterion for publication, which is these so-called statistically significant findings. There's a whole debate about that that we can get into. But you don't hear about these other labs that didn't get it to work, because for the longest time journals had a very strong prejudice, and they still do, against publishing so-called negative results or null findings. And what that means is that the one time it quote-unquote worked, which is an error, gets published, and the 19 times it didn't work, nobody knows about it, because they're not published anywhere. They just exist in people's so-called file drawers, so this is called the file drawer problem. The scientist runs the study, it doesn't seem to work, they just file it away, and they don't bother to write it up anywhere. What happens, though, is that you're at a conference with your colleagues, and you might be sitting at the bar afterwards, and somebody from another lab says, oh yeah, I tried to work on that study and, you know, I just couldn't get that result to replicate. And you say, yeah, you know, I couldn't either. And so you have this informal knowledge going around among scientists that a lot of stuff can't really be repeated, or probably isn't trustworthy, but all that's in the public record is what's likely to be a false alarm.
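Earp's 20-labs scenario can be sketched numerically. The following toy Monte Carlo simulation is an editorial illustration, not from the episode; the lab count and the conventional 0.05 threshold are the assumptions named in the conversation. Under the null hypothesis, a well-calibrated p-value is uniformly distributed on (0, 1), so each lab "discovers" an effect with probability 0.05, and the chance that at least one of 20 labs crosses the publication threshold is high even though every such discovery is a false alarm.

```python
import random

random.seed(0)

ALPHA = 0.05      # conventional significance threshold
N_LABS = 20       # labs independently testing a true-null effect
N_SIMS = 100_000  # simulated "literatures"

# Each lab draws a p-value uniform on (0, 1), as happens under the null.
literatures_with_false_alarm = 0
for _ in range(N_SIMS):
    p_values = [random.random() for _ in range(N_LABS)]
    if any(p < ALPHA for p in p_values):
        literatures_with_false_alarm += 1  # one lab gets a publishable fluke

simulated = literatures_with_false_alarm / N_SIMS
analytic = 1 - (1 - ALPHA) ** N_LABS  # probability of >= 1 false positive

print(f"simulated P(>=1 false positive): {simulated:.3f}")
print(f"analytic  P(>=1 false positive): {analytic:.3f}")
```

With the 19 null results filed away and only the fluke submitted, the published record shows a clean "finding" roughly 64% of the time: the file drawer problem in miniature.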
So that's one way you can get, you know, these sorts of replicability problems. Now, from the outside, that seems like a really big deal, like a methodological issue, because then you have literature being built on top of the literature that actually gets published, but the fundamental, you know, the seminal works here are perhaps flawed in themselves. So when you say, yeah, go ahead. Well, I'll just add one quick thing to that. So let's say that you publish something in a top journal, you know, Science magazine or Psychological Science or something like that, if we want to focus on psychology. And I just want to be fair to psychology. Psychology's been in the spotlight, but these problems are rippling through biomedicine, biology, neuroscience, genetics, and other areas. So psychology has sort of taken the brunt, and it's actually leading innovations to try to address these problems. But, you know, I myself trained in psychology, so I know this area best, and that's why I'll use these examples. But let's say that you've got a paper published in a top journal. Well, supposedly, if you go back to Francis Bacon, the idea about what makes science different from pseudoscience is that you have these intersubjectively verifiable observations. If you report that you could do something in your lab, I should be able to do the same thing in my lab. And so, essentially, people are supposed to check each other's work. But the problem, with the professionalization of science, where you have to, you know, keep up with a career to keep your job, is that it's not in anyone's individual career interest to do an exact copy of anyone else's study. So if you publish something, I know I'm going to move away from that area, because I figure, okay, well, you've got that cornered, and I have to come up with my own sexy finding to advance my own career interests and get my own grants and my own prestige. And so I'm not going to do an exact copy of your work.
The closest thing I'll do is what's called a conceptual replication. And what this means is, I take your finding for granted, because there it was, published in a top journal. And then I try to sort of test it in a slightly different way, using slightly different methods, or I'll try to extend the idea into new territory, basically build on your work. Well, here's the problem. If I use slightly different methods than you used and I don't get the effect that I expected to get, I don't know where to place the blame. I could just say to myself, well, it's because I changed something. So whatever I changed is probably responsible for my failure to find the effect. But the so-called conceptual replications, if they turn out, if they work, then they get published, and it looks like, oh, the original finding is supported, and now it has further support. And so, just as you say, and this is why I wanted to jump in there, you're building literatures on top of literatures on top of literatures, where it might be false findings all the way down. This sort of thing can happen because people aren't doing what are called direct or exact replications. Generally speaking, they make some sort of change, because they don't want to just do an exact copy. That's seen as very unprestigious; it's sort of looked down upon, typically. And so when nobody's doing an exact replication, when they're changing something and it doesn't work out, they don't know where to place the blame, so they just say, well, it's probably my fault, and they don't tell anyone about it. And if they get it to work, it gets into the literature. So you get this sort of self-reinforcing cycle of apparently building on and advancing certain theories, when the bulk of it might very well just be these flukes that we were talking about earlier. Now I want to ask you a couple questions on that.
When you describe that phenomenon, just in your professional estimation, and I'm asking you an impossible question that you're not going to be able to answer, but I'm still going to ask it anyway: do you have any inclination of what the absolute amount is of this kind of, let's say, foundationally flawed literature, at least in psychology, because that's your area of expertise? Is this something where, okay, we need to bring it up because it's 10% around the edges, or is this something that you think runs deeply throughout the literature? Well, there are a couple ways of getting into that. There's a very famous paper that was published in 2005 by a Stanford epidemiologist, and now meta-scientist, named John Ioannidis. I realize I should figure out how to pronounce his name. He published a paper called Why Most Published Research Findings Are False. And he basically took account of all these different nudge factors that make it likely that false alarms get published, certainly more than they should, and did some modeling and so on, and sampled lots of papers, and he came up with an estimate, in a more or less rigorous way, certainly there are some critics of it, which is that more than 50% of what's published in biomedicine is simply false, just not true. It says there's a finding, but there's nothing there, nothing to see. So that's his estimation. Now in psychology, famously, there was recently a big exercise where the Open Science Collaboration got together and sampled 100 papers from some top journals in psychology, and they had independent labs try to replicate each of those studies one more time. And there are various ways of measuring what counts as a replication, but on all the metrics they used, it was not very promising. It was, you know, 37% of the papers replicated on some metrics, a little bit more or a little bit less depending on how you count replication.
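The arithmetic behind Ioannidis's "more than 50% false" estimate can be illustrated with a small positive-predictive-value calculation. This is an editorial sketch of the general model, not a reproduction of his paper, and the prior and power figures below are illustrative assumptions: a low prior plausibility for the hypotheses being tested, and the modest statistical power typical of many studies.

```python
def ppv(prior, power, alpha=0.05):
    """P(effect is real | result is statistically significant).

    prior: fraction of tested hypotheses that are actually true
    power: P(significant result | effect is real)
    alpha: P(significant result | no effect), the false-positive rate
    """
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# With long-shot hypotheses (1 in 10 true) and modest power (0.35),
# fewer than half of "significant" findings reflect a real effect:
print(f"{ppv(prior=0.10, power=0.35):.2f}")  # prints 0.44
```

In other words, when most tested hypotheses are false and studies are underpowered, even a literature built entirely from honestly obtained significant results is mostly false alarms.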
That, by the way, is a very hot debate: what does it mean to replicate a finding? Does it mean that you get the exact same effect size? Does it mean you get the exact same p-value, if you're using frequentist statistics? Well, not necessarily. That's not necessarily the most obvious way of saying that you've repeated the same finding. So I just want to flag here, to put in people's minds in the background, that this is where philosophy of science comes in. What counts as a successful replication? How close do you have to be to the mark to say, yeah, that seems supportive of the original finding, or, that's different enough that we should sort of subjectively call this a non-replication? So that's part of the debate here. But to tie those strands together, I would not be surprised if 50% or more of published findings, including in top journals, are merely statistical noise. Now, for me, and I'm not a professional scientist here, when you've got 50% of a product that might be statistical noise, as you put it, that is like a paradigm-shattering idea to entertain, I think, for lots of people. You used an interesting term and I want to ask you about it: the professionalization of science. It sounds like there are potentially gigantic flaws, which might be able to be corrected, in the area of academia in general and in the incentive structure of the scientific system. But it's almost like there's no time for anybody to correct the systemic flaws, because they don't have career incentives to do so, and in fact, if they don't just accept the structure as it is, they might not even have a job in the future. Do you get the same sense that that's kind of what's going on? Yes, I think that's right.
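One way to see why an exact p-value match is a poor replication criterion, as discussed above: the p-value depends heavily on sample size, not just on the size of the effect. The following is an editorial illustration using a toy one-sample z-test; the effect size and sample sizes are made-up numbers. The same small effect drifts across the .05 line purely because n grows.

```python
import math
from statistics import NormalDist

def two_sided_p(effect, sd, n):
    """Two-sided p-value for a one-sample z-test of the mean against 0."""
    z = effect / (sd / math.sqrt(n))
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Identical effect (0.05 standard deviations), different sample sizes:
for n in (50, 500, 5000):
    p = two_sided_p(effect=0.05, sd=1.0, n=n)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"n={n:5d}  p={p:.4f}  {verdict}")
```

So a replication with a smaller sample can "fail" to match the original p-value while estimating exactly the same effect, which is why many methodologists prefer comparing effect sizes and confidence intervals instead.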
I was at a meeting at the Royal Dutch Academy of Sciences a couple of months ago, and the Dutch government had taken an interest in this issue, I think precisely because they realized that individual scientists can't stick their necks out and somehow heroically save the system. It's sometimes referred to as a collective action problem. It's not in my interest to slow down and try to get everything right and publish only one paper every three years, once I've built up a body of evidence, because, like I say, I simply won't have a job if I do that. So I in fact can't stick my neck out and still be a scientist. In some cases, maybe you can: if people have tenure and big grants and so on, maybe they can make a little bit more of a change. But a lot of people, particularly early-career scientists, simply cannot afford to work against the incentive structure. So what that means is the incentive structure must change, and I think that's going to come down to funders, governments, and other systemic forces to make these changes, if there's going to be a proper response to the amount of frankly wasted resources. There are also ethical implications to this. If you look at medicine, and you enroll people in a medical trial, for example, and you don't take the utmost care about what statistical analyses you're going to use, which control conditions you're going to use, whether you publish all of the trials... I mean, this is something you see with drug companies. Drug companies run lots of trials on drugs, and they don't publish all of the ones that they ran; they publish the ones that make the drug look good. That's a huge ethical concern, because now you're subjecting human beings to risks without having done due diligence in making sure that you're going to have the very best estimate of what's true about that substance or that intervention. That's ethically unacceptable. So there's wasted resources, there's human costs, there's all sorts of problems, and I do
think it's a big problem. But the solution is not going to be individual scientists trying to be heroes; it's going to have to be systemic changes to the reward structure and to publication practices. Now, are you optimistic about the prospects for that kind of systemic incentive-structure change, the kind that doesn't come from renegade scientists, let's say? I think there's a sort of generational shift going on here, and I don't want to paint with too broad a brush. There are people at every stage of their career who think that this is a serious crisis and needs to be addressed, and others who think that the talk of crisis is being overblown. But if I had to give you my estimation, looking at the literature and talking to people and getting a sense of what's going on, I get the sense that the scientists who are coming up and forming the next generation of researchers are really taking this seriously, are concerned about getting it right, don't want to just turn out publications without knowing whether they're accurate or not, and are trying to learn new statistical tools. So, another thing that I flagged before: there's this common procedure, used in psychology but also biology and neuroscience, called null hypothesis significance testing. This is a common statistical inference procedure, and for decades statisticians have been shouting at social scientists and medical scientists, saying, actually, this procedure is not valid as an inference procedure in the way that you're using it. It only gives you useful information under very strict conditions that almost never hold in the experimental settings that psychologists and medical researchers work in. But that's the thing that journals got used to accepting as the valid form of statistics, and so you get this sort of institutional inertia, where you use these basically statistical rituals to get p-values less than .05, and then, if it's less than .05, you say, well, we've got a
publication. So there's a growing movement now of people who say, listen, we can't keep using these statistics in this mechanical way. We need to actually have serious, proper training in statistics, and look at Bayesian methods and other forms of doing statistics, and have statistical controls on institutional review boards and embedded in every lab and in every department, so you don't have people who aren't truly experts in statistics just cranking out these studies. I see a groundswell of support for these kinds of changes that seems to be coming up from, roughly speaking, younger scientists. So I am optimistic that there's a possibility for change, and I think that funders and governments are getting on board. I do want to raise a risk here, of course. I saw an article recently pointing out that folks on the right-hand side of the political spectrum who are skeptical, let's say doubtful, about, say, climate science or, you know, vaccinations, these kinds of things, will find an easy opportunity to seize on what to me is a very healthy and important discussion happening within science, and, with the media attention, in the public domain as well. Science is not just this record of facts that get published. You know, a paper isn't just a thing that's now true. It's more of a progress report. It says, this is what we think is probably true now, or, this is a finding, but it has limitations and we're going to have to wait to see if it replicates. So I think the public needs to understand this, and science needs to stop suggesting that every time a paper is published it's some earth-shattering new discovery. Science moves very, very slowly. Replications take a long time to do. Science clearly in some instances works: we really did send a rocket to the moon; there really are diseases that have been eradicated. So it's not that science is just alchemy or something like that. There's good science out there, but it takes a long time to get the kind of confidence
that you need. There are better and worse scientists; there are scientists who are more and less careful. And so, you know, what the public shouldn't do is just say, well, given that there are these various flaws in science, that it's not a perfect enterprise, my opinion is just as good as any scientist's. That's not really the way to go. That's why I try to chart a middle course. I say science really is the best way of answering these questions. It's certainly better than the opinion on the street; that's not a reliable way of getting at complicated truths about what's going on with, say, the atmosphere. That doesn't mean that the science about the climate is infallible, or that every study that's been published is, you know, gospel truth, but it is the best way we have of actually answering these very difficult questions. And so we have to both allow for flaws and faults in science, that it's an imperfect enterprise doing its best to, you know, get an accurate picture of the world, and then, on the other hand, not therefore throw out the baby with the bathwater and say, well, I guess we just don't know anything, and whatever my untutored opinion is on any subject is as good as any expert's. That's totally misguided as well. And one factor in this, I wonder what role the media in particular, and journalists, play in the, let's say, over-hyping, or perhaps premature hyping, of scientific findings. So one of the questions I wanted to ask you was: do you think the current, and I am comfortable calling it a crisis, this foundational epistemological crisis in science, do you think this is a new occurrence, based on the relatively modern system of academia, maybe in the past hundred years or even 60 years? Because another variable in this is, like I said, media and journalistic coverage of the latest study that came out, which now gets overblown. And that particular, you know, scientist probably has a lot more prestige, maybe a lot more funding, if they publish something in a prestigious journal and it
catches news headlines, which seem to greatly distort and misunderstand various facts that are presented in the scientific literature. Do you think this is relatively new? So, I raise this point in some lectures I give: if you look at the historical record, crises of one sort or another have been declared pretty regularly since the founding of psychology, and in other disciplines too. If you go back even further, before psychology was recognizable as a discipline in its own right, you have enormous debates about whether this finding or that finding is reliable, or this theory or that theory. I mean, just think of the amount of time Darwin spent working on the theory of evolution, and he wasn't willing to publish results until, you know, Wallace came out and said, I found something similar. So these kinds of crises and debates and politicization of science have been going on forever. In fact, in the 1970s, if you look at the titles of journal articles in the leading journals in psychology, they have titles like The Crisis of Confidence in Social Psychology. Now, why didn't we hear about that? I think it's because there wasn't the internet around, and there weren't blogs, and there wasn't a 24-hour news cycle where people were trying to take every bit of thing that looks like news and turn it into a public discussion. So what's happened now is that the same sorts of problems that certain critics within psychology had been raising for decades have just become much more public and embarrassing. So if you go back and look at the work of the psychologist Jacob Cohen from the 60s, he has papers pointing out, and I think it's his paper, it might be someone else's, but it says God loves the .06 just as much as the .05, making the point that this obsessive concern with this essentially arbitrary cutoff point, which could have been something different, just became ossified as the convention in the field, to the point that basically people stopped thinking in
sophisticated ways statistically, and started thinking in routinized, ritualized ways. And these critiques have been raised for a long time. Anthony Greenwald, the famous psychologist, wrote a paper in the early 70s called "Consequences of Prejudice Against the Null Hypothesis." Basically he was talking about the file drawer problem, or publication bias, where journals weren't accepting negative findings, and he was saying, this is really bad, we have a very skewed literature, and even our meta-analyses aren't going to be reliable, because we're only doing a meta-analysis of what was published, but what was published is not representative of what was conducted. So these critiques have been around forever. Just to put a personal bent on this: my grandfather, whom I never met, who died in the 1940s, wrote a paper in 1927 in the Journal of the American Medical Association on the need to publish negative results. The point was, if you only publish the successes, you're going to have a very skewed sense of what's reliable; if all the failures are never published, then the published record is not an indication of what we really know. I stumbled across this paper as I was doing this research, and I had chills down my spine that somebody I never met, but was related to, had been talking about this as early as 1927. So that's 1927, nearly a century ago. Do you still think, though, that there is hope for change in that particular regard? Because I had no idea these kinds of problems went that far back. I think, in a way, the public attention that's been drawn to these problems is part of what's going to lead to a more serious effort to find solutions. These were just internal professional debates that were happening as recently as the 1970s in social psychology. When these kinds of papers were being published, there were lots of people concerned, scratching their foreheads, thinking maybe we should change our methods, and then nothing really came of it, and
people just kept pursuing the same kinds of practices that reliably led to publications, which led to esteem from their colleagues, which led to good careers, and so on. And there were always people working to be hardcore in their methodology, saying, it's harder to show something than you think it is, or, you really need to replicate this before you put it out in public. But those folks didn't have as much influence over the field as they really should have, partly because it's not sexy to talk about methodology. It's really boring, and it's really hard to figure out, for example, what counts as a measurement. This is an issue in psychology: how do you measure a mental state? Psychology really likes to use numbered scales, one to seven, give me your answer on this thing. And I'll give you one example of how this is problematic. Let's say you have a scale and you want to measure whether people are more liberal or more conservative. A standard way of doing this is just a one-item measure, where one equals very conservative, seven equals very liberal, and people mark where they are on the scale. Now, in order to do the standard statistics on the scale, certain things have to be true of the scale in terms of its ability to measure whatever is out there, this actual tendency to be more liberal or conservative. And one thing that has to be true is that the difference between a one and a two on the scale is the same as the difference between a six and a seven; it has to be measuring the same kind of mental or political tendency out there. The problem is, that's not really the case. Very few people are willing to put themselves on the extreme ends of the scale, because saying I'm a one or a seven is really a strong statement. So most people put themselves somewhere in the middle, and that means that if you do put yourself at a one or a seven, you're actually way far out there in terms of your political tendencies, at
least in terms of your self-identification. And what this means is that the scale is not mapping in a one-to-one relationship onto the phenomenon out there, however you want to conceptualize it, and so most of the statistical tests you would use on the scale are invalid from the start. But working out how to do a valid measurement of a mental state or a political disposition is an extraordinarily complex logical problem, and it's simply kind of boring for a lot of people who work in the field. So they're happy to use numbered scales without really interrogating what would have to be true of this scale for the statistics I'm using on it to be valid statistics. There are some people who work on this, but they publish in obscure journals. Now, this to me sounds like pure philosophy. What you're talking about is really epistemology: what would have to be true in order for this to be true, for this to be a reliable method? Which I think is wonderful. I think these questions are of the utmost importance, though maybe I'm biased here. It almost seems like you should have these discussions before even undertaking the projects. On your example with the numbered scale, there's also the question of the relationship between self-reported political beliefs and, if you could somehow measure them, actual political beliefs. Somebody might consider themselves a three, but in practice support policies that would be more liberal. How do you measure something like that? Right. Psychologists have been aware for a long time of the problems of self-report, and there are many of them. One is that people often don't know their own minds. Another is that people are often motivated to respond in ways that they think the experimenter wants them to respond; this is called socially desirable responding. And there are various
methods that have been proposed to try to account for these kinds of things, but they aren't always employed. I've written a critique of a study in a medical journal where they asked questions in such a way that it was almost like leading the witness, and then they got the answers they expected, and they made no measure, or even attempt, to assess whether these participants were giving the answers the researchers wanted to hear. If you don't account for that, the answer doesn't mean anything at all; it just means that they told you what you wanted to hear. That's not a reliable measure of their actual mental state. And even the measures that have been invented are imperfect. I'll tell you one of them as an example. There are these socially desirable responding measures that basically give people a set of questions where it's very unlikely that a certain answer is true, but it is the socially desirable answer. So if a person consistently gives the socially desirable answer, which has a very low base-rate probability of being true, then either they're an extraordinary person in this regard and you just have a sample full of extraordinary people, which is unlikely, or they're showing a tendency to respond in the way they think you want them to. And when that's true, you can identify the people who have that tendency and compensate for their effect on whatever it is you're investigating. So these are the sorts of tools that have been developed, and when they are employed, I think that's very good. Psychologists are certainly aware of self-report problems, but they're hard to get around no matter which way you try. So, we're running out of time a little bit here, and I want to go back to two more sections of your article, some things I want to talk to you about. You've got a line where you're talking about the humanness of scientists. You say they have reputations to defend, egos to protect, and
grants to pursue; they get tired; they get overwhelmed. Then you say they don't always check their references, or even read what they cite. That seems like an outrageous claim. What do you mean by that? Well, there have been interesting documentations of this. There's an urban legend about whether spinach has a lot of iron in it, or something like that, and there's a wonderful sociology-of-science piece, I can't remember the author's name off the top of my head, where he basically shows that people rely on trust a lot in science. So if somebody that I trust cited something in support of some claim, I'm not necessarily going to go try to dig up the original article. Maybe it's hard for me to find; maybe it's in some obscure journal. But I don't want to just cite a secondary source, so what I'll do is cite the original source cited by the other person, even if I maybe haven't read it. Now, I'm not saying this of myself; I'm saying this is an example where you make these inferences, because tracking down the original source and confirming that it really says that is sometimes incredibly time-consuming, and so in order to get the point across, you might use a heuristic that may typically be reliable. It might very well be the case that such-and-such a researcher tends to do a really good job, and you figure that if they cited something, it's probably a good citation, and so on. But this is just to give an example of the sort of scenario where someone might include a reference to support a claim that isn't really the best way of supporting the claim, or maybe they haven't scrutinized that paper. I'll give you an example that kind of shocked me. The Centers for Disease Control came out with a policy on a very controversial issue, infant male circumcision, which happens to be an area I know a lot about. They came out with this policy with much fanfare. It was a draft, but nevertheless it was picked up by all the media
and, you know, this is the new policy from the CDC, which is supposed to be the most scientifically respected medical body out there. Well, this happens to be an area of expertise of mine, so I went and read the policy, all however many pages of it, and I noticed that there were these stunning errors in citation. I'll give one example to illustrate, and there are others I could raise. Here's one. They say, according to this study, only 6.5% of infants experience clinically significant pain when they undergo this procedure. Now, I thought, well, I haven't heard that figure before, so let me go look up their reference. So I dig up their reference, and there is a 6.5%, but it's a misprint, and it only appears in the abstract of the paper. It's nowhere in the body of the paper, because it was just a typo. So what that means is, you know right away that the researchers at the CDC did not read the paper; they only read the abstract, and they cited what's in the abstract. Now, I knew as an undergraduate student that if you're going to cite something, you should read the whole paper, and you shouldn't just take the abstract at face value. Authors often spin their work in the abstract to make it seem really cool, even when that goes a little bit beyond what's actually shown in the paper, and they certainly don't highlight the limitations in the abstract. This is why, when you get media reports, the media often go with the abstract, or the press release, or something like that, including the New York Times, by the way. This is a whole story of its own: the New York Times science reporter, this one guy, Nicholas Bakalar, on so many occasions I see him basically repeat the content of press releases without getting any critical opinion from somebody who disagrees with the view of those authors, or scrutinizing the study himself to see whether there may have been flaws or limitations in it. And if this is science journalism, I despair, because with the New York Times, if something comes out on those pages, people figure that's as good
as true. But if it's really just a recycling of a press release... I don't know how deep this problem runs, because, again, there are only a couple of areas where I'm familiar enough with the literature to be able to identify these problems, but in the areas that I know, it happens all the time. So, to go back to the paper I was looking at: the figure they probably meant to cite was 6.7%, which isn't very different, but it turns out that was miscalculated in the table, and it should be 7.3%. Still, that's not terribly different. But the point is, you just know they didn't read the paper carefully, because the takeaway lesson of that paper was that something like 75% of the infants experienced clinically significant pain just from being administered the anesthetic. So to say that 6.5% experienced clinically significant pain, when upwards of 70% experienced significant pain just from getting the anesthetic, which itself has a failure rate of over 7%, is to completely misrepresent the content of the paper they were citing. And this is the Centers for Disease Control, and I could spell out many other examples like this. So if even the folks at the Centers for Disease Control can make undergraduate-level research errors... again, I don't know how widespread this problem is. I hope it doesn't happen all the time; I hope it doesn't happen as frequently in areas that I'm not familiar with. But I honestly don't know, and I've seen this sort of thing play out with supposedly respectable policies from mainstream institutions. The World Health Organization, I've scrutinized some of their policies and seen similar basic-level errors, and it was very disheartening and disturbing when I noticed that this was taking place. Now, there's so much to talk about there. I would love to hear some of these other examples. When you say the CDC, because I don't know how these institutions work, is that literally a group of practicing scientists themselves putting out this
work, or is it some group of bureaucrats trying to do some kind of meta-study who maybe don't actually know what they're talking about? Right. So at the CDC, the authors of that particular report were anonymous, which is interesting. Now, I expect that at the CDC it's mostly scientists doing these so-called technical reports. Maybe they have orders from bureaucrats who have a certain outcome they expect, or something like that. But often what happens is this: the particular literature I'm talking about, the literature on circumcision, is very polarized and politicized. There are people who have very strong feelings either way, and some of the scientists contributing to the literature themselves have very strong feelings or biases one way or the other. So unless you're an expert in this specific literature (it's not enough to be an epidemiologist; it's not enough to be a pediatrician or a urologist, generally speaking; you have to actually be an expert in this specific literature), you can't account for the political games being played between different scientists, citing their friends' articles and citing their own articles in a way that's not really representative of what other people would say. These kinds of things happen when you have a politicized literature. This is one area, but there are many examples of this, where scientists polarize over whether they think a treatment works or doesn't work, or whatever it is. So in this case, my charitable assumption is that they had a group of scientists who were generally knowledgeable about the sorts of things that are typically relevant to these kinds of policies, but who weren't experts in this specific literature. And they clearly couldn't have been, because, for example, they prominently cited, for a core claim, the work of an Australian researcher that nobody in this area takes seriously, and you could only do that if you don't know the
literature, because if you did, you would never cite this guy's work, certainly not in a government policy. So that's my guess about what was happening there. In the case of the World Health Organization, I've looked at a policy of theirs, and I think in that case it depends. Sometimes you have scientists writing the policy; other times you have kind of researcher-worker bureaucrats who have consultants, and they hire people to give them information. But at the World Health Organization, a lot of the policies come from the top down. They aren't coming from somebody who's an expert in the research, trying to give the best account of the science and then offering that up to the policymakers. Often the policy is set by people who have an agenda or something going on, and then that constrains, or at least shapes or influences, the work of the lower-level researchers who are actually physically typing up the report. And that's another example where I was just stunned when I started to look at the details.
Now, you said you're unsure, obviously, whether the same thing is going on in areas outside your expertise. One of the things that I've realized... I had a kind of disillusionment process. Before I went to college, I had this naive view of higher education and academia: it was all these brilliant people innocently discovering truth and talking with one another. And I discovered something very different. This was confirmed not only while I was getting my undergrad education, but while I worked in the non-profit sector for a while, engaging with a lot of professors, and in the work I'm doing now, talking with professors all over the world, I find that my romantic vision was misguided. But here's something that probably sounds crazy to you, just like the facts about, say, the replication crisis in psychology might sound crazy to somebody who's unaware of it. Even in fields like mathematics, there are certain foundational claims that mathematicians have been making in the last century which I found are treated the exact same way, which have become, let's say, dogma or orthodoxy. It has to do with the theory of infinities. I've spoken with probably six or seven different people, either mathematicians or philosophers of mathematics, from all over the world, one from Ireland, one from New Zealand, one from Australia, and one from the United States, who have all shared their skepticism of the basic, fundamental, axiomatic theory of infinities, who have all said there's room for skepticism here. But this is in math, of all areas, because it's so logical: if A then B, if B then C, if C then D, and they go on from there. That initial assumption, if A then B, they think has already been established, or at least the formal mathematical orthodoxy thinks it has been established, and you find there's actually room for skepticism. So I discovered this maybe two years ago, and I thought this
is crazy. Of all the areas, you'd think mathematics would be the one immune from these kinds of fundamental errors, but it does not appear so. Yeah, that doesn't surprise me at all, and it's easy to see how this can happen. Any particular researcher can't reinvent the wheel; you have to build on the work of others. A lot of people aren't historically rooted in their discipline, so you often find that they're working on whatever is the latest thing that's going to be popular in their area. They work with their advisor, they know some of the latest stuff, but they might not actually go back and read those foundational papers that were later, whether actively or passively, as you say, calcified into dogmas. There are certainly dogmas that exist throughout the sciences, as with any area of human thought. There are political theories that have calcified into dogmas; on the left and on the right, politically, you find ideas that are very dogmatic. And people take a lot of things for granted. Why? Because we have limited mental resources. Unless you're a genius and you have an infinity of time, you have to take a lot of things for granted, and that simple human fact, that human limitation, makes it possible that things that maybe ought to be scrutinized more, going back to fundamentals and basics, sometimes persist in a literature for decades and only get rooted out under very unusual circumstances. And I think this is one of the areas where, as you mentioned, the existence of the internet, at least as I would put it, perpetuates some of this. I love the internet; I wouldn't have a job if it weren't for the internet, and I totally love the idea of the radical dissemination of information. But it does allow for the perpetuation of some very dogmatic thinking, dogmatic ideas, the surface-level research where one can just skim abstracts and then think you have a clue what you're talking
about, especially if you're working outside the field. But one of the benefits that I think will come from it, from the new generation of intellectuals raised on the internet, is a more realistic and skeptical stance toward some of the things we've been talking about. Twenty years ago, I don't know how one would comfortably say something like, I think some of the foundational theories in mathematics, of all fields, or in science, or psychology, some of these works we hold up as what we're building our knowledge on, are wrong. It's so much easier to encounter skeptical arguments now, and that gives me a bit of optimism. My favorite book on this, which everyone should read, is called Are We All Scientific Experts Now? by the sociologist of science Harry Collins. It's a short book you can read in a weekend, and it's a brilliant discussion of the very sorts of things we're talking about. He talks about different levels and kinds of expertise, and it certainly is possible to have people who weren't formally trained in the methods of a particular discipline who may, through their own extraordinary efforts, gain a certain level of interactional expertise, have some familiarity with the relevant claims and theories, and be able to make a skeptical argument. I think sometimes there's this idea that unless you're a scientist, all your possible logical inferences are somehow inferior. Scientists are better in the sense that they've trained for years and years to acquire certain specific skills; that's the sense in which they have a sort of epistemic authority with respect to certain claims. But scientists aren't necessarily better reasoners in general. For example, scientists aren't necessarily better moral philosophers or policy experts, and so sometimes you'll have scientists making these very bold claims about the policy they think obviously follows out of
their scientific findings, or how you should behave, or how you should accept this finding in your own life. Well, they're not experts in value claims; they're not experts in seeing the entailments of ideas. I mean, they're probably better than the average person. But if you're a well-educated person who isn't an expert in a certain field, the thought that you should somehow bow before the statements of any scientist in an area you're skeptical about, I don't think that's right. Scientists do train for a lot of years to acquire highly difficult and counterintuitive sets of skills that give them genuine authority when it comes to certain types of claims. So unfortunately there's no easy answer here. This is why I loathe the "you're either pro-science or anti-science" discourse that comes out in the popular media, where if you're on the left, you say, I'm pro-science, and then if you ask, really? Can you explain this theory?, very often it's, well, no, I don't actually understand it; I just know that I'm supposed to be pro-science because I'm part of this political group. Or, you're anti-science because you don't support this particular view. There's some research on this with climate change, for example: regardless of your view, whether you're sort of pro or anti or skeptical or whatever it is, members of the public tend to have similar levels of actual understanding of the theory. So it's not that you know what's going on and that's why you support the theory; it's that you know you're supposed to support the theory. Again, I'm not making any comment at all on that science; I'm not an expert in that area, so I completely leave it to others. But this idea that you can just cast aspersions on people as being in one camp or the other is not the way it goes. To give one example: during the AIDS crisis in the 80s, the doctors wanted to do randomized controlled trials so they could test the efficacy of different drugs, and a lot
of gay men who were dying said, listen, I don't want to be in the placebo arm of this trial; I just want the drug, if you have a theoretical reason for thinking it would work, prior to actually getting the right kind of evidence. And at that time there was a lot more medical authoritarianism, where the doctors said, we know best, you're out there, and we're going to do our science and you're going to give us our data. A lot of these men were so concerned about their mortality that they essentially became experts in the literature. A lot of them learned as much as they possibly could, came to understand the theory, read the articles as they came out, pointed out flaws in some of the science, and so on. So it's certainly possible that laypeople who are sufficiently motivated and care about the topic can, in some cases, acquire a certain type of skepticism that's justified, and that indeed should be taken account of by the people in the white lab coats. On the other hand, there are a lot of people who are total crackpots: they read a study once somewhere, and they don't really know how to evaluate it, and the study has been discredited, but they still keep bringing it up. That is a vast and serious problem as well. And again, the only thing we can do there is help people be better reasoners, to know when and under what conditions it's appropriate to be skeptical, and when and under what conditions it's appropriate to rely on the authority of someone else. We can't all be experts in every topic, unfortunately, and that means we have to rely on trust; we have to rely on authority in certain cases. So we have to pick and choose our battles, but we should be cautious about asserting that we know something because we read an article online once, or because we fancy ourselves to be skeptics. That's not the way to go either, and you have to navigate between these extremes. You put it wonderfully, and I can certainly personally
attest to that, given that in my own project I'm walking this line between talking with a bunch of people inside the system and criticizing lots of parts of the system from outside it. As a result, I get a lot of, let's just say, a lot of communication and emails from people who are legitimate crackpots. It's true, it's out there. There's plenty of justified room for skepticism, especially if you dive into some area you're interested in; most likely you'll find there's a lot more room for skepticism than you thought when you weren't aware of the area. But sure enough, the majority of people who have contacted me because they're interested in my skeptical positions on, say, mathematics, are legitimately crackpots. So there is a reason, perhaps, for the stigma, even though I think it turns into factions: you have the people inside the system, who are the brilliant scientists, and you have the crackpots outside the system, as if those are the only two options. Yeah, I think that's right. We haven't emphasized this part of the conversation as much, because part of my role in the scientific community here is that I've been talking a lot about the problems and the flaws, and I've been emphasizing those things because I think they're not talked about enough. But we could equally have spent an hour talking about the problems with pseudoscience, with crackpots, with people raising harmful theories they don't have sufficient support for, with people not sufficiently trusting scientific evidence that is indeed well substantiated. That's the other side of this coin, of course. It's just that, like you said, people use these proxies. Somebody has a lot of letters after their name, people get excited about that, and they say, well, that person must be properly accredited and must know what they're talking about. And for
the most part, that is a reliable proxy. If you have spent years educating yourself at the top universities, it's very likely that you know more about what you're talking about, if it's within your expertise, than other people. That's a generally reliable proxy. But of course, sometimes people have letters after their names and they get things wrong; we have to be open to that possibility. Similarly, generally speaking, if you're just fishing around the internet and trying to come up with some theory: confirmation bias is a huge force in our lives. It's easy to click around on the internet, and if I'm inclined toward conspiracy theories and I say, well, the man is out to get me and I can't trust authorities, I can find illegitimate support for that view with a few clicks very easily, and that shouldn't be given any credence at all. So we all should be trying to counteract our own biases. We all should be trying to adopt a skeptical stance, not just toward the man, or toward those skeptics over there, but toward our own ideas. We should be skeptical of our own skepticism. If we find that we're just throwing stones at everybody else's theory but not constructively offering our own point of view, or we're not charitably construing what the other person is trying to get at and we're just trying to tear them down, that's misguided as well. So, like I say, there's no shortcut or easy heuristic here. People have to try to keep their biases in check; they have to learn about the sorts of biases we all have and figure out strategies, like the various rationalist movements and meetup groups where people try to identify biases and counteract them, and so on. But we should be aware that when we're using a degree as a proxy for truth, we're using a proxy for something. We have to recognize it's a proxy. It's maybe a good heuristic, but
it's not infallible. Similarly, somebody who's not educated in the proper way, that doesn't mean they're wrong, but they're more likely to be wrong than someone who's spent ten years acquiring the proper tools for evaluating the literature. So we have to treat it for what it is, which is a heuristic that often fails. Very well put. So the last question I want to ask you is about peer review. This is one of those buzzwords that gets thrown around in the media all the time, that scientists throw around all the time, and it's supposed to immediately confer this feeling of, oh, legitimacy, oh, professionalism, oh, truth, when you drop the words "peer review." So we have peer review in theory; can you tell me a bit about peer review in practice? Is peer review what people make it out to be, this marker where, when you get the stamp of peer review, that means, oh well, this is practically truth? Right, okay. This is a complicated subject, and I'll try to touch on a few main points. Peer review, this is sort of like the quote about Winston Churchill earlier: it certainly is better to have your manuscript rigorously reviewed by a genuine peer, and particularly if you're talking about a top journal, those editors are typically very good about trying to select people who will give very serious scrutiny to your paper before it's published. That doesn't guarantee that what you've published is reliable, but when peer review is working properly, it is indeed a very important quality-control mechanism. The problem is the difference between theory and practice. In some cases peer review works, and it helps, and it filters out some of the bad stuff, but there are a couple of issues here. One: there's a proliferation of these open-access journals that are so-called predatory journals. There are some quality open-access journals, and there are some that are set up in remote areas of China, with addresses in vague places and made-up editorial boards, that will print anything they get, and they'll just take
authors' fees. So those are not, you know, adequately peer reviewed at all. So that's a problem: you might have a paper that looks like it's in a, you know, a real journal, but it's just nonsense that somebody published in one of these predatory journals. And then there's this gray area in the middle, which is that peer review in practice has lots of problems. I'll just name a few of them. Richard Smith, who used to be the editor of the British Medical Journal, the BMJ, a very influential journal, was very concerned to figure out how reliable peer review was. I mean, the thing about science that's fun is you can apply science to the process of science. You can say, well, let's do a study on the efficacy of peer review. And so one way of doing this, quite simply, is to take a manuscript, embed lots of errors in it, and send it out to the sorts of peer reviewers who would typically handle it, and see how many of them notice the errors. And very often you find that they don't; the rate of error catching is disturbingly low. Here's another issue about peer review. I've been an associate editor for different journals, or a guest editor, and so I've had the position of receiving a manuscript, and then I'm the one who decides who to send it out to, you know, to find the peer reviewers. So I have a kind of inside look at what this is like. Well, when I read the paper, particularly if it's in a contentious area, I'm going to form a judgment about it right away. And it's not that I have some sort of dispassionate algorithm that I go use to help me select the most qualified reviewers who don't have a stake in the game or something like that. I make a judgment. If I think that the paper really shouldn't be published, whether I'm doing this consciously or unconsciously, I'm more likely to pick a peer reviewer who I have a good hunch is going to sink the paper, because I kind of know what they would argue about this case, or I kind of know that they would say that, you know,
these methods aren't sufficient to prove the point. Whereas if I like the paper, and I think it's something that should be published, again, there's not some magical objective formula that takes place here, but I'm going to send it to somebody who I think will give the paper its best shot. So this is just an example of the decision-making that goes on in associate editors' minds. Now, again, these editors at good journals, journals that have good reputations for good reason, really try to do their best to get maybe one review from somebody who's likely to be sympathetic to the article and another from somebody who's likely to be critical. Then they try to synthesize between these different things and really come up with a well-informed judgment about whether they should or shouldn't publish the paper. So that's the ideal scenario, and that does happen. And again, part of the trick going forward is going to be helping people identify which journals are indeed using good practices, which are having a rigorous peer review process that really is a good quality control mechanism. So I don't want to suggest that it doesn't exist, but I do want to suggest that there's a lot of room for politicking in peer review, especially when you're in a politicized area. You know, if the associate editor of the journal has a certain attitude, they know who they can call up for a peer review. It's not done in any sort of purely dispassionate way; I don't even know how it would be done in such a way. So just because something's got a stamp of peer review, again, it's a piece of information. If a paper is peer reviewed at a really good journal, and that journal has a good track record of publishing stuff that sort of bears out over time, then maybe I give a little more credence to the fact that it was peer reviewed at that journal. But I shouldn't just take the mere fact of something being peer reviewed as evidence that it's therefore true, or that it's passed some really strict test of reliability, because
extremely often that is not the case, and something can be peer reviewed and, you know, basic errors will slip through. For example, peer reviewers don't check the statistics of authors; they just don't have the time. And they also don't read every reference included in the reference section, because peer reviewers aren't paid to do this; it's all done as a favor to the sort of academic community. And so if somebody sends a peer reviewer a manuscript, and it's 50 pages long and there's 100 references, they're just trusting the author to have cited references that are the appropriate citations, that sufficiently support the claim. Now, if you're a specific expert in a specific area, you'll be able to identify whether the references are the right references or not, but very often peer reviewers are a little bit more generally knowledgeable about the field. They may not know the specific issue, and they're sort of giving a quality check to the best of their ability, but they might miss these sorts of things, particularly if somebody has an agenda and is using citations improperly, which does occur. Similarly, in almost no cases do they rerun the statistics; they just take the statistics at face value. They hope that the researchers have done due diligence, and what they're looking for is sort of obvious signs of design flaws or something like that. And they may very well catch them, and in many cases properly motivated peer reviewers do a very good job of saying, this paper just shouldn't be published, it's not rigorous. The paper might then go on to be published in another journal. So just because it's rejected at a top journal doesn't mean it's not going to be published elsewhere, very often with the very same flaws. And that would count as a quote-unquote peer reviewed paper, even though the first person, who maybe was the real expert, said it shouldn't be published. So that happens all the time. If I put in all the effort to put a manuscript together, just because it got rejected at the
top journal doesn't mean I'm not going to file it away; I'm going to keep going down the totem pole until I find a journal that accepts it. Now, hopefully I've taken the criticism and tried to improve it and so on, but many authors don't do that. So I'm going to keep going down the line and wait until somebody accepts it, flaws and all. And that happens, again, all the time. Now, do you think there should be something like getting compensated for professional peer review? Do you think that would correct some of the problems? Peer review has to be revolutionized. Peer review is, on balance, an extremely unreliable quality control mechanism right now, again with lots of exceptions and some people doing good work. But on the whole, peer review, first of all, is very slow. Say I'm an expert in the area, and I've actually done good work, and I'm pretty sure I've done good work, and then I submit it to the journal and it gets held up for six months, when really people should be able to see that and use that. So authors are now doing things where they'll put what are called preprints of their papers on certain repositories, so that while it's being reviewed, they have a draft of it that's available. So it's not reviewed yet, but other people in the community can just decide whether it's useful. They don't need to rely on those two reviewers that some associate editor happened to find were available that week to do the review, and then maybe they're on vacation, so they give it to their graduate students to do or something, which, again, happens regularly. So the idea that the two people you managed to get to review the paper are somehow the be-all and end-all arbiters of whether it's a good paper, that's crazy. If you put it in an online venue, you can let the community decide. They can read the paper and see if there are any flaws. If you had 150 eyes on your paper rather than two eyes, that's going to be much more likely to give you valuable feedback, and people are going to be able to say, listen, how did you run that test
instead of that other test? Show me your open data so that I can re-run the statistics. Or, here's a way you could improve your argument in this passage. So something more like crowdsourced peer review, among experts who cross a certain threshold of qualification to be able to comment on a paper, or something like that, is where the future is going to be. Because the idea that two people should be the deciders-in-chief of every paper that comes out just creates a huge bottleneck and completely slows science down, sometimes by years, and I think that's also completely unacceptable. That's excellent. I love that idea. I think that's a beautiful blending of the pros of the internet with the new intellectual system that I hope emerges, taking advantage of these technologies. You brought up excellent points. I mean, the bottom line is, skepticism is justified. The world of ideas is really hard to navigate. Scientists are all humans; the professionals are all humans. And again, we have to look around at the success stories of science too. We've focused completely on the flaws and so on in this conversation, which, again, I think is an important conversation to have. But if you look at the world around us, and you look at the feats of engineering, and you look at the discoveries that have been made and the diseases that have been eradicated, no one could deny that the scientific method, when properly applied, and given enough time, and weeding out the errors and so on, has completely transformed human existence. Sometimes in dangerous ways, in the inventions of bombs and things like that, but other times in very beneficial ways. Science works a lot, but sometimes it stumbles, and we need to be honest about the ways it stumbles so we can fix those problems rather than sweep them under the rug. That's an excellent note to end on. Thank you so much for this conversation. This has really been fantastic. Thanks for your time. This was a lot of fun for me as well. All right, that was my interview with Mr.
Brian Earp. I hope you guys enjoyed it even a tenth as much as I enjoyed it. In the modern world, this kind of criticism is just a little taboo, and it shouldn't be. We need a lot more of it. It is okay; you have intellectual permission to be skeptical about everything, even those sacred cows like the scientific process in practice. Like I said at the beginning of the show, if you valued this episode and you want to hear more interviews like it, head over to patreon.com slash Steve Patterson. And make sure to tune in next week, as I'll be interviewing a very special guest, J.P. Sears. He's the man who's become famous for his ultra spiritual videos, where he kind of satirizes the New Age movement. Well, it turns out he's not just a satirist; he's also a serious life coach who actually believes many of the things that he criticizes. He's actually a really interesting guy, so I'm sure you guys are going to love that interview. All right, that's all for me today. Have a fantastic week.