It's Sunday, May 30th, and this is For Good Reason. Welcome to For Good Reason. I'm D.J. Grothe. For Good Reason is the radio show and podcast produced in association with the James Randi Educational Foundation, an international non-profit whose mission is to advance critical thinking about the paranormal, about pseudoscience, and about the supernatural. My guest this week is Massimo Pigliucci. He's a professor of philosophy at the City University of New York, and he's written many books, including most recently Making Sense of Evolution with Jonathan Kaplan, published by the University of Chicago Press. He's also involved with New York City Skeptics and has been involved with the skeptics movement for well over a decade. He joins me on the show to talk about his new book, Nonsense on Stilts: How to Tell Science from Bunk. Welcome to For Good Reason, Massimo Pigliucci. Thank you. It's a pleasure to be here. Massimo, your book is about telling science from the bunk, and that raises the very first question: how you tell what science is, what's called the demarcation problem in the philosophy of science. Isn't it every bit as scientific to question reigning theories about vaccines and autism, or global warming, or maybe evolution versus creationism, as it is to debate in normal science? In other words, I'm asking how you tell what science is so you can legitimately say what isn't science. That's a good question, and about half of the book really is devoted to exploring that question. As it turns out, the demarcation problem, trying to separate science from non-science and pseudoscience, is a classic problem in philosophy of science, which was formalized by Karl Popper in the middle part of the 20th century. And Popper is one of the few philosophers of science that many in the general public and certainly a lot of scientists know about. He's the guy that came up with the idea of falsification.
The idea that a theory or hypothesis, in order to be scientific, has to be falsifiable. There has to be a way, in principle, that the theory could be disproven by the data if in fact it is not true. Popper's example of the demarcation between science and non-science was the difference between Freudian psychoanalysis on the one hand and Einstein's theory of relativity on the other. Einstein's theory of relativity, for Popper, was a spectacular case of science. It made very daring predictions, most famously the idea that light has to be bent by the presence of a massive body. That prediction was spectacularly confirmed during a total eclipse of the sun in 1919, when astronomers were actually able to confirm the deviation that Einstein predicted. So for Popper, that was definitely the way science was supposed to work. Now compare that with Freudian psychoanalysis. It turns out that pretty much any conceivable human behavior can be interpreted or reinterpreted or explained by Freudian theories. And according to Popper, that's not a virtue of the theory. That means that the theory is compatible with essentially anything that you can observe and therefore cannot be falsified, and as such Freudian psychoanalysis is not a scientific theory. Well, it's like the biggest insult that a scientist can give, to call someone unscientific, or, you know, for a scientist to say, well, that stuff over there is bunk, that stuff is unscientific. But doesn't it call into question the fact that so many theories in science have turned out to be unscientific or at least untrue? That's the nature of science, that many old theories have been overturned in the history of science, that we don't have all the answers, that we're always revising our knowledge. Most overturned theories have been overturned by new thinkers who were deemed unscientific by the old guard, you know, their intellectual forebears or whatever.
So the question is, don't most revolutions in science, Einstein's for instance, as you mentioned, don't they start out seeming like they're just bunk? That's a good question. And in fact, we need to make a fundamental distinction between being unscientific and simply being not true, untrue. For instance, Newtonian mechanics is not true strictly speaking, because Newton's theory about how gravity works, and in fact he did not actually have a theory of how gravity works, but his theory of space and time, turned out to be incorrect according to Einstein's relativity. Just incomplete, correct? I mean, it's true up until a point, but it's not comprehensive in its explanatory value. Well, actually it is worse than that, meaning that it is true that one can derive Newtonian mechanics as a special case, as a limit case, of Einstein's relativity. But if we're talking about the way we think about space and time, Newton's was simply wrong. I mean, Newton thought that space and time were absolute and unchangeable. Einstein, on the other hand, tells us that both space and time, which are not different from each other, by the way, they're, you know, one thing called space-time, are actually malleable; they change all the time. So in that sense, Newtonian mechanics, Newtonian physics, is in fact wrong, but it was never unscientific, because it made daring predictions. Many of those predictions turned out to be correct, which means that the theory, even though at a foundational level it was in fact overturned by relativity, was true enough, so to speak, or close enough to truth, to make very good predictions over and over again. So there's a difference between being not true and being unscientific. According to Popper, Freudian theories are unscientific. They may even be true. For all that we know, it may actually be the case that everything about human behavior comes down to issues of sex and the kinds of things that Freud was talking about.
But the way Freud presented his theory and put it forth, there's just no way to know. So speaking of Freud, it may be true that we all have an id, an ego, and a superego inside of us, that tripartite division of the self. All of that is not only explanatory; it may be true. You're just saying it's unscientific because it can't be falsified. That's right. And there is an even better example that I treat in my book to some extent, which is the current status of fundamental theory in physics. Most people have heard by this point of string theory, which is supposed to be the next theory of everything, the theory that unifies quantum mechanics and general relativity in physics. Now, nobody is arguing that string theory is pseudoscience. It is solid mathematics. It is a solid mathematical theory which may have really serious implications for the way in which we understand the universe. There is a problem, though. At least up to this point, there doesn't seem to be a way to test string theory on empirical grounds. All the predictions that it makes are no different from the predictions that are made by better understood and better established theories, such as the standard model in physics. So there's no novel prediction that string theory has made yet that has been confirmed, or in fact that is even possible at this point to confirm empirically. So no one's calling it pseudoscientific, but are you implying it's unscientific? It may be. It may be. And this is a big discussion that's going on in the physics community these days. And in fact, interestingly, it's a discussion that involves quite a bit of philosophy of science.
And we have some physicists, some prominent physicists, who are beginning to think that string theory is a dead end that has been explored for something like close to 30 years now, and that, as Lee Smolin put it in a book that he published a year or two ago called The Trouble with Physics, it may very well be that this is the first generation of physicists in 200 years that has not made a fundamental new insight, a theoretical insight, into the nature of the world. So that's an interesting point. I mean, I don't know what the answer is. It remains to be seen, but at the moment there are examples, even in science itself, of things that clearly do have all the trappings of science. They clearly work in the way in which science is supposed to work. And yet, because they fail at the fundamental criterion of empirical predictions, they're not really science. So if we're talking about the demarcation problem, what science is, you've got to figure that out so you can tell what isn't science, what you can label pseudoscience or non-scientific or bunk, as in the subtitle of your book. String theory, you're not going to insult it by saying it's pseudoscience. It's not quack, junk science. On the other hand, it's not sufficiently scientific, because it has no predictive value, or can't be falsified or replicated, or, you know, all the real trappings of what puts something on this side of the demarcation, in other words, in the category of science versus the stuff that's outside of science. Correct. And in fact, ironically, there are some pseudosciences that are better at making empirical predictions than string theory. I mean, consider astrology. You know, astrology makes a large number of empirical predictions, which in fact have been tested over and over and over. And it turns out that they're false.
And so, you know, astrology is rightly considered a pseudoscience, not because it doesn't make empirical predictions, but because its empirical predictions have been falsified over and over and over. And what turns something into a pseudoscience at that point is the recalcitrance of its adherents to actually accept the verdict of the empirical evidence. I mean, there are more recent examples, again, from within physics. If you remember, a few years ago there was the debacle of cold fusion. You know, that was legitimate science. I mean, the initial papers on cold fusion, this idea that you can get nuclear fusion through chemical reactions instead of thermonuclear ones, which would have meant, of course, very cheap and abundant energy, were published in Nature magazine. We're talking about science done by academics who work within established universities. It was a possibility. But the reason it then got labeled as pseudoscience is because after it was looked into and it was found that the evidence didn't support the claims, there was still a kind of pigheaded commitment, almost a faith commitment, to cold fusion, despite the lack of evidence. Correct. And in fact, it has sort of become a cult. I mean, now there is a persistent small group of largely chemists and engineers who keep saying that cold fusion is real and the rest of the world just doesn't understand it. And they keep having their own little meetings, and they publish their own little papers in journals, I think in one journal that is published by themselves, and so on. So it turns into a cult.
I want to talk about some of the specifics you get into in the book, but just on that last point: you look at the history of science and scientific revolutions, and sometimes it is only one or two people who go up against the prevailing view. They're seen as very kooky outsiders, and they just have this persistence that eventually pays off. It sounds like, instead, you're counting that quality almost as a strike against them. No, not necessarily. I mean, you're also right that the history of science is full of examples like that. But on the other hand, just because you have a small number of people who are dedicated to an idea that is unpopular, that doesn't mean that the idea is actually brilliant and will be confirmed. In fact, philosophers actually have an unofficial name for that kind of fallacy. They call it the Van Gogh fallacy, as in the painter. The idea being that, well, Van Gogh was a great painter who died penniless; I am penniless; therefore, I'm a great painter. Well, no, it doesn't follow. So, getting into some of the specifics in your book. First, I should say I didn't have you on the show in order to debate global warming or vaccines causing autism. But one of the points you make in the book is that some of the bunk that people believe out there, they believe despite the fact that the consensus view in science says one thing and they're believing the opposite. So here's the question, Massimo. Are you saying that we should just trust consensus science? In other words, we can't all become expert climatologists or experts in vaccines or in evolution versus creationism. And so most boosters of science and people who are on the right side of these issues, I'd say, maybe the skeptic community, they're just going to go with what science says without maybe even really knowing why. Well, to some extent, I'm afraid that's true. And I want to qualify that "to some extent."
No, I'm not suggesting that we, as a society, should just give the keys to our future to scientists and let them do whatever it is that they do. Some unelected elite, cocooned away, not in the real world. You're not saying that we defer to them and just accept what they say. No, but it depends on what you mean by deferring. If we're talking about deferring on political and policy decisions, of course not. But if we're talking about trusting the science, well, I'm afraid the answer is yes, because scientists are simply one category of experts. I mean, you know, most of us go to the mechanic, and we defer to what the mechanic tells us about what's wrong with our car, unless we actually have significant knowledge of car mechanics. When you go to the doctor, you usually trust and defer, especially if it's a specialized doctor like, you know, a surgeon, for instance. You usually defer to what the surgeon is going to do. You don't start arguing with him: well, you know, I have to have open-brain surgery, and here's where I want you to cut. Well, but on the other hand, there is such a thing as a second opinion. And society doesn't really get a second opinion when it comes to these controversial issues that have all these social implications, global warming or vaccines causing autism or that sort of stuff. Instead, you're saying, go with the first answer that we get, which is the consensus view. Well, you make a great point about second opinions, but I would argue that in the case of science, we actually get thousands, and in fact tens of thousands, of second opinions, right? I mean, let's talk about global warming for a minute. You know, it is a complex phenomenon. It very likely is, in fact, only partially human-made; it's likely not entirely so.
And moreover, even if we accept the reality of global warming, which I do, like most scientists, and especially the experts in the field, which are the climatologists, that doesn't mean, however, that we know what to do about it, or that we necessarily have, you know, the best idea possible about what to do about it. Let's talk about the phenomenon itself for a minute. Well, who is supposed to know the best or the most about these kinds of things? Climatologists. Well, there's been a little disagreement among the climatologists themselves about whether human-caused global warming exists or not. You know, the consensus is almost universal, just like there is an almost universal consensus among organismal biologists that evolution does in fact happen, just like there is an almost universal consensus among medical researchers that HIV causes AIDS, and so on and so forth, right? But for each one of those that I just mentioned, while the consensus among the experts is 95% plus, in fact, in most of those cases much more than 95%, the consensus among the public is very different. So there are 40 to 50% of people in the United States who don't believe in evolution, who reject climate change, or who think, for instance, that there is a connection between vaccines and autism. Those are public controversies, and there is a distinction between a public controversy and a scientific controversy. And unfortunately, in the book, I blame the media in part; there's a whole chapter there about the media and how it deals with science and science popularizing, the understanding of science by the public. And, you know, there's this idea that a lot of people in the media have that, well, we need to present both sides of the debate. Sometimes that makes perfect sense, when there are, in fact, two or more reasonable sides to the debate. But in some cases, there isn't a second side. I mean, depending on what the debate is, if we're talking about evolution, there's no second side in terms of the science of evolution.
There is a second side in terms of the public understanding of evolution. So the real issue that the media should be covering is not whether evolution is true or not, but how it is that 40% or 50% of Americans reject an established scientific theory. That's the story. So if that's the story, the story is really the question of why so many people buy the junk instead of accepting the science. If you lay it at the feet of the media, and that's not really all your book is about, but you just mentioned it, where does the responsibility of the average citizen come in? You know, I can't become an expert in climate science. I likely won't become an expert on vaccines or evolution. I just kind of take those things, should I say it, on faith, because they're the consensus view in science. So I want to ask you a question about that and really focus on this little movement we're a part of, the skeptics movement. Tell me the difference between your skepticism of, say, the supernatural or ghosts or God or something like that, and these other folks, the 40% or 50% of people who don't believe in evolution or global warming, and the numbers actually are higher according to some polls. Are they skeptics too? I guess what I'm really asking is, we give the word skeptic as a compliment to folks who are skeptical of the things we're skeptical of, but we don't use that word for folks who are skeptical of the things we actually accept. So there are two very important questions in there that you just asked. One is about what it means to be a skeptic. And for the other one, I want to go back for a minute to the whole idea of expertise, because I think it's crucial. So being a skeptic doesn't mean denying things. Being a skeptic means to have an open mind about the issues, but to hold beliefs in proportion to the evidence.
So it is the idea that a skeptic is supposed to be somebody who engages in critical thinking, engages the issues, engages the evidence, and tries as much as possible to adjust his or her beliefs in proportion to the evidence that's available. Now, that's easy to say. It's not really that easy to do. You know, critical thinking doesn't come naturally. One needs to be trained in critical thinking, and unfortunately, one of the things that I point out in my book is that we do very little about training people in critical thinking. I mean, critical thinking classes are taught pretty much only at the college level. There's very little before college, and even at the college level they're non-mandatory. You would think that we want an open democratic society, one that's based on the fact that citizens can engage in critical thinking. But that's another story. So what you have as a result is that you can have people calling themselves skeptics. For instance, Bjørn Lomborg, who wrote The Skeptical Environmentalist several years ago, harshly criticized the whole idea of climate change. But in fact, Lomborg is not even a climate scientist; he's an economist. And if you do go through his book, as I do in a chapter of my book, where I go almost paragraph by paragraph through his main arguments about global warming, you'll see that they sound academic, they sound really sophisticated. But in fact it's only a veneer that is easy enough to scratch once you know enough about the debate. Which brings me to the idea of expertise. So the last chapter in my book is about experts. And it asks questions like, what does it mean to be an expert? And then, more importantly, how are we, as part of the general public, supposed to tell, in complex issues, whether one expert is better than another, or is more right than another, and so on and so forth? And there is actually a simple two-step process that one can implement for judging expertise in other people. And basically this is based on five criteria.
And one can ask, about a particular issue, let's say evolution or climate change or whatnot, one can ask oneself the following questions. So if you're listening to an expert debating another expert, or you're reading about one alleged expert against another expert, the first thing you want to ask is, well, are the arguments sound? Are the arguments really well established, in sync with the empirical evidence, and so on and so forth? Now, the problem with this particular step is that only a few people can do it, because in order to judge the soundness of a technical argument, you probably have to have enough technical background that for all practical purposes you're an expert yourself. So most of us can't do that. Well, then the next best thing is to start looking at other criteria. One of which is, well, what about other experts' agreement? Do the majority of people who are experts in that area agree with X, or do they agree with Y? But then, couldn't that just be, or at least open the door to its being, a marketing thing, where it's not that the best ideas rise to the top, but the most popular ideas, because they have the best spokespeople or something like that, develop traction, right? But the argument is that the most popular ideas among experts, not the most popular ideas among the general public, have a better chance of being right. There's no guarantee, mind you. Unfortunately, there are few guarantees in life, and being able to tell whether an expert is right or not is not one of them. But first of all, this is only one of the criteria; I'm going to mention three more. And second of all, the idea is, yes, let's take your example earlier on of the second opinion when you go to the doctor or to the mechanic.
Well, imagine if instead of just getting a second or a third opinion, you asked, how many doctors out of thousands agree with this notion versus this other notion? If it turns out that 950 of them agree and 50 don't, I think that's reasonable evidence that you're better off betting on the first option than the second one. So make the gamble based on this numbers game? That's right. On the numbers of experts, however; that they're experts is the crucial part. There is another question one can ask when we, say, watch an expert talk on CNN or read an expert's column in the New York Times or whatever. And this question is actually simple, but it is crucial. And that question is, is that person an expert in the field about which he's talking? So if somebody's talking about climate change and global warming, just having a PhD doesn't make him an expert. He has to have a degree and technical expertise in climate science, not just in general. Anybody can get a PhD, trust me, I've done it. But that doesn't make you an expert on everything, of course. So for instance, I often debate creationists or intelligent design supporters about evolution. And one of my latest encounters was with Michael Behe of Lehigh University. Well, he has a PhD, but it is in biochemistry. It's not in a field that really bears on the questions about evolution and intelligent design. It's not in organismal biology, and therefore, that means that for all practical purposes, he's not an expert. Or Lomborg, whose degree is in economics, not in climate science. Yeah, that's right. Economics instead of climate science. Then there is another question that one can ask if we're talking about assessing expertise and expert opinion. And that is, of course, the question you have to ask yourself of whether that person has a bias, an obvious bias that might color his or her ideas or opinions about that particular topic.
Now, I have to qualify this, because, of course, just because somebody's biased, that doesn't mean that he's wrong. Or has a financial incentive or something to gain; it doesn't follow that therefore he or she is wrong. That's right. It doesn't follow. But as a critical thinker, you want to at least be aware of it. So as it turns out, if you look at most of the people, for instance, who reject the idea of climate change, most of them tend to be not only economists, but economists of a particular persuasion. They tend to be libertarians. They tend to be people who really don't like this idea of global interventions by governments to solve whatever problem, because they think that the markets are the best solution to that problem. Now, that doesn't mean that they're wrong. But again, in judging the argument, the general public ought to be aware of the fact that these people do have an agenda. It's not that they come to the discussion with only everybody else's knowledge and benefit in mind. They have a particular agenda. Lastly, the last thing one can do when evaluating expertise is to look at the track record. A lot of academics and a lot of experts in a particular area have a track record. And one can look at that track record and see how successful they have been in establishing their positions and opinions about other things that are related to the topic. And this is common sense. If you go to the mechanic, again, to use one of our earlier analogies, presumably you ask people around and you keep track, in your own mind, of whether this mechanic is in fact usually successful, not always, but usually successful, in fixing the problems. Because if it turns out that you keep going to a mechanic who fails to solve the problem, or who is known for rarely getting it right, then obviously you're wasting your money. He may be a mechanic. He may be an expert.
But in fact, he's a pretty lousy expert, and you certainly don't want to pay attention to him. The idea that I lay out in the last chapter of the book is that you can use these criteria, I mentioned five, to different degrees. The general public can use these criteria to get some idea of where the likely truth lies. There's no guarantee, but it's an exercise that we can all do. So if you're giving this roadmap to figure out who's an expert, who's a legitimate expert, who merits our attention as an expert, the one that stuck out to me in that list of five you just gave was who has an agenda, right? Asking that question, who's biased? Critics, call them skeptics, of the HIV-AIDS connection, or of human-caused global warming, or of vaccines not causing autism, those skeptics, and also a lot of promoters of alt-med, complementary and alternative medicine, they all look at the financial incentives of those involved in the things they're skeptical of as a reason for their own skepticism. So big pharma, big corporate pharmaceutical companies: that suspicion fuels the skepticism of the complementary and alternative medicine folks. The AIDS denialists look at all the money involved in AIDS research, and that seems to fuel their skepticism, the idea that it's all just for financial gain and not actually a scientific theory that merits our acceptance. So to put a fine point on it, you're not saying to only look at people's biases, because in fact a lot of the denialists really focus on the agendas, the biases, of those they're skeptical of, but that it's one component among those five. Right. So first of all, it is one component among five. But the other thing is that what makes this whole discussion about science and pseudoscience really complex, and to me fascinating, is that in fact some of that criticism that you're talking about is correct.
There was a study a couple of years ago, for instance, that looked at several papers published in peer-reviewed medical journals about the effects of certain drugs. And the review found that papers that were funded by the pharmaceutical industry found positive results, that is, that the drug in question was actually working, significantly more often than independent studies carried out by academics who were funded by government agencies. Now, this is not to say that the scientists in question were actually consciously falsifying the results. There was no charge of fraud. It's just a matter of, you know, if you know who's buttering your bread, your whole attitude about reality and about the data changes. Yeah, the point is that science is a human enterprise, and it's subject to the same kinds of cognitive biases as any other human activity. We all have our own hidden motivations that we might not even know about. Absolutely. But now here's the interesting part. First of all, that study also found that there was much less bias to be found in papers that were based on research that was not funded by the pharmaceutical industry. And of course, a lot of fundamental research is not funded by the pharmaceutical industry; it's funded by organizations like the National Science Foundation or the National Institutes of Health and so on. So that's one thing. But the other thing is, you know, it's actually interesting to hear, for instance, proponents of alternative medicine accusing big pharma and accusing scientists of, you know, having a big financial interest, which is certainly true. But apparently, they don't seem to realize that the alternative medicine market is worth billions and billions of dollars. And is also largely owned by the same big pharma. Exactly. So big pharmaceutical companies own the small mom-and-pop, or the seemingly small mom-and-pop, enterprises that push the alt-med stuff. Yeah.
That's exactly right. Look, the point is that there is no simple, clear-cut way to figure out the difference between science and non-science, between, you know, good research and bunk, which is why, of course, the book is more than 200 pages long rather than just a few sentences. And the book again starts out with a discussion of Karl Popper's approach, because Popper thought that he had found a simple solution to the problem, right? Falsificationism was the idea, and it was easy to apply. And once you applied it, you could easily tell what was science and what was not. It turns out, as I explain in the book, that in fact Popper was wrong. I mean, he was right about the problem, but he was wrong about the solution to it. Or just wrong that the solution was as elegant and as simple as he thought. That's right. The solution is messy. The solution requires a lot of work, and there's no guarantee of getting it right every time. But the idea is that in fact there are things that we can learn, as citizens of an open society, about how science and pseudoscience work and compare to each other. And we can make informed decisions. And frankly, that is the best that we can hope for. We cannot pretend to be infallible, but we have the duty to get it right as much as possible when it comes to issues of science, and in particular public policy. Massimo, I feel like we've only just scratched the surface. There's a lot more I wanted to talk about. So are you able to join me again on next week's show, and we'll continue the conversation? Oh, yes, you don't have to twist my arm. Yes, of course, I will be more than delighted to join you again. Well, thank you very much for joining me on For Good Reason. We'll talk again next week. It was my pleasure, as usual. Thank you for listening to this episode of For Good Reason. For updates throughout the week, find me on Twitter and on Facebook.
To get involved with an online conversation about today's show, join the discussion at ForGoodReason.org. Views expressed on For Good Reason aren't necessarily the views of the James Randi Educational Foundation. Questions and comments about today's show can be sent to info at ForGoodReason.org. I'm your host, D.J. Grothe.