Welcome to Free Thoughts, a podcast about libertarianism and the ideas that influence it. Free Thoughts is a project of the Cato Institute's Libertarianism.org. I'm Aaron Powell, editor of Libertarianism.org and a research fellow at the Cato Institute. And I'm Trevor Burrus, a research fellow at the Cato Institute's Center for Constitutional Studies. Our guest today is Pat Michaels, director of the Center for the Study of Science here at the Cato Institute. Our topic today is how biased science can be: both the direction that research takes and the findings it produces can show a bias. But that cuts against the view that a lot of people have of science, right? That science is this kind of thing apart. It's this special way of approaching truth-finding that doesn't get caught up in all of the problems that face other sorts of truth-seeking. And so is there some truth to that? Before we start talking about what's wrong with science or how the bias shows up, is there still some truth to the idea that science is potentially less biased than other things, or is this just a really out-to-lunch view that science hasn't earned? Well, you've got to remember that scientists search for the truth. Scientists also like to keep their jobs. Scientists don't like flying coach. So they're going to do what they can, within their search for truth, to advantage themselves economically and career-wise, mostly in the university setting, of course. But where is the money coming from that lets them fly in better airplane seats? The money comes from you and me. By and large, science is funded by the federal government. As a society, we view science as a public good, and therefore worthy of public funding. I think there's a healthy debate now beginning to appear in libertarian circles. And that is, should it be viewed as a public good? 
Does science prosper when it's not viewed as a public good, or does science suffer? And the evidence is very equivocal. There's some strong evidence that it really doesn't make any difference. So for example, private funding versus public funding? Correct. Prior to World War II, almost all science was privately funded. And even after World War II, we had major developments coming out of things like Bell Labs, IBM, et cetera. These were private-money operations. Now, the argument for science as a public good is about basic science, and Bell Labs was doing what I call basic science: science that doesn't have any immediate, obvious applications. It may seem arcane to people who aren't practicing it, but then one day, lo and behold, nanotechnology appears or buckyballs appear or something like that. So it's very clear that basic science can be done without government support. And Terence Kealey, who's going to be a new Cato scholar in the Center for the Study of Science, has looked at funding in the United Kingdom and finds that there is no difference, whether science is publicly or privately funded, as to the amount of scientific, quote, progress that's made. So what are the effects if we get together and spend some taxpayer money on trying to advance the human endeavor, go to Mars, make human life more like a Star Trek episode? Isn't that a good thing, or does the funding produce less truth, possibly? Ask where the money comes from. It comes from the federal government, and it doesn't come free. Politicians have interests in certain issues. I don't think at any hearing on climate change, which is my area of expertise, that I've ever heard anyone testify, when it comes to budgetary matters, well, you know, I don't think this issue is all that important, it may have been overblown, so why don't we save the money or spend it on AIDS or something like that. You never hear that. No. 
Funding requests have to be portrayed in the most stark and dire terms. Congressman, if you don't approve this funding, I'm afraid that all our children are going to grow up to be midgets. And believe me, the funding shows up. So I guess there are kind of two types of bias, though, and I want to see if both of these are going on or if it's only one. When we talk about bias, the type that you have brought up so far is more of a conscious bias. The agent, the scientist, is spinning the facts or the research, or coloring some aspect of what he's saying, in order to personally benefit in some way. But there's a second sort of bias, and I want to see if this one is related or if it's also going on, which would be a bias where, even if the agent isn't consciously doing this or isn't setting out to bias the results, the way the system is structured leads to results that skew one way or another. So we can't say there's any kind of nefarious stuff going on. Well, this is a real problem. I call it scientists behaving badly but rationally. This just came up last month, last December, when Randy Schekman was about to be awarded the Nobel Prize in Physiology or Medicine. And he gave a speech and put an article in The Guardian the day before he was going to receive his prize saying he's no longer sending articles to Science, Nature, and Cell. Are those the biggest journals? Those are the big kahunas. Because, he says, they are harming science. They're harming science because they preferentially publish articles that get headlines. These journals compete with each other, and what they're competing for is something called impact factor, which is measured, among other things, by how many newspaper articles your journal is cited in. Schekman made the argument that this makes scientists pursue, quote, flashy, his word, science, or what I call headline science. 
Well, that's not good, because I can assure you that a paper that can be headlined billions to die from global warming is going to get a lot more TV than global warming might be slightly overblown. Or drugs or cancer or anything. Anything like that. That's right. If it's going to kill us all, it leads; if it bleeds, it leads. That will happen. So, as a result, and I'm speaking about the environment here, but we should get onto biomedicine, because there's some fascinating new research being done there that I have uncovered. But speaking about climate and the environment, the media sees all these horror articles in these big-time journals, and they say, my God, the world's going to come to an end from X: from global warming, from acid rain, from global cooling, from ozone depletion. I'm old, but I'm not that old, and I have lived through nine separate ends of the world in the environment. How does this happen? I'd think people would get wise to this. Well, it's obviously very pervasive. Because the scientists are driven both by their own flashy headlines, which get them more funding from Congress, and by more exposure from the media. And much more importantly, you publish in Science or Nature, you're going to get tenure. You're going to get a job for life, and when you get tenure and you're at the associate professor level, you're going to have a plane ticket in your pocket virtually every day. So you're going to travel around a lot and become quite famous. So the incentive structure clearly bends toward headline science, not the workaday stuff that's not going to garner headlines because nobody can quite figure out why you did it. And then, you know, acid rain, ozone hole, you name it, and that creates political pestilence, unfortunately. But wouldn't, I mean, especially in highly politicized areas of science, wouldn't negative results create headlines? 
So if there was a study that came out in Science or Nature that said, we have found no carcinogenic effects from cigarettes, or something like that. Right. Something like that. That would generate headlines, right? Negative results will generate headlines as long as there is a standing positive result that has generated headlines. That's true. However, let's consult what I think is the world's authority on what's going on in science right now. His name is Daniele Fanelli. He is at the University of Edinburgh, and he has published a most remarkable paper. It actually got covered in Newsweek early this year. Talk about headline science. For the first time, and this is an amazing question, he asked: are there systematic changes occurring in science as a result of the incentive structure in science? Not just in climate science, but in roughly 25 different fields. He looked at 4,600 journal articles. Let me just read you the title of his paper as part of the way of answering his question. The title is, quote, Negative Results Are Disappearing from Most Disciplines and Countries. In fact, we're seeing significantly fewer negative results. Wow, what does that mean? If you have more positive results and fewer negative results, just given the nature of hypothesis testing at, say, the .05 level, you're going to have more false positives. Let me translate that into plain English for the listeners, okay? You're going to have a significant increase in the number of scientific papers that are wrong, W-R-O-N-G, wrong. And wrong papers, particularly in the areas of the environmental sciences, lead to wrong-headed policies and restrict personal freedom. This is a very serious problem that Fanelli has uncovered. And it would also be true for other things, too. Climate science seems to be the biggest one, of course, your field. 
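The statistical point here, that a literature which publishes mostly positives at the .05 level will carry more wrong results, can be sketched in a few lines of Python. The prior and power values below are illustrative assumptions, not figures from the conversation:

```python
# Sketch: why a shortage of published negative results implies more wrong papers.
# All numbers here are illustrative assumptions, not figures from the interview.

def false_discovery_rate(prior_true: float, power: float, alpha: float) -> float:
    """Expected fraction of *positive* results that are false, given the
    prior probability that a tested hypothesis is true, the statistical
    power, and the significance level."""
    true_positives = prior_true * power       # true hypotheses correctly detected
    false_positives = (1 - prior_true) * alpha  # false hypotheses passing the test
    return false_positives / (true_positives + false_positives)

# If only 10% of tested hypotheses are true, with 80% power at the .05 level,
# roughly a third of the "significant" findings are wrong:
fdr = false_discovery_rate(prior_true=0.10, power=0.80, alpha=0.05)
print(round(fdr, 2))  # prints 0.36
```

If journals then publish only the positives, that 36% is what the reader sees, which is the "significant increase in the number of papers that are wrong" point above.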
But we also hear about food science all the time, and carcinogens all over the place, and things that make you lose weight or things that make you gain weight and all this stuff, and it seems like it changes constantly. Would that also be tied to this, would you say? For 20 years of my life, I was convinced that fish oils were miracle drugs for cardiovascular stuff and took a lot of them, only to find out 20 years later that, well, there's no demonstrable reduction in all-cause mortality and, oh, by the way, they accelerate aggressive prostate cancer in aging males. Bye-bye, fish oil. So here's the thing. Let's get back to this subject of the self-corrective nature of science. Fanelli has some interesting observations on this, and I'm going to read from his paper. However, even if, in the long run, truth will prevail, in the short term, resources go wasted in pursuing exaggerated or completely false findings. Now, the guy who's referenced in that passage is a guy by the name of Ioannidis, from Stanford, who has established a reputation by showing that most science turns out to be wrong, particularly biomedical science. Next sentence. Moreover, the self-correcting principle will not work efficiently in fields where theoretical predictions are less accurate, methodologies less codified, and true replications rare. Well, what field can we all agree, at this table, has inaccurate predictions, uncodified methodology, and can't be replicated? Can somebody say climate modeling? Macroeconomics. That's out of your field, but yes, a very simple addition: macroeconomics, too. And so what he's saying is that you are going to lose a lot more negative results, get more wrong papers, the more loosey-goosey and unprovable the subject area is. So, on climate change, let's talk a little bit about how this effect works in the climate change science literature. 
The literature, the IPCC, the Intergovernmental Panel on Climate Change, and negative results. So, a paper that says studies show that carbon dioxide has less of an effect on multiplying warming effects than previously thought: would that be as likely to be published as one that said it has more? I'll tell you what, let's just play a game and imagine that we work for NASA and we go to a congressional hearing, and Senator Snort asks the administrator, says, you know, global warming is the most important problem confronting mankind. I mean, the vice president said that. Could your agency use the money to study this? Well, you're going to say yes. And then you're going to go to your department heads and say, I got us all this money. Now, get me proposals. Who's going to write a proposal that says they don't need the money? No one. And then the department heads go to the scientists and say, write me a proposal for a specific research project. Which project is going to have the hypothesis that the sensitivity of temperature to carbon dioxide is overblown? None. Then the research is done, the paper is sent to Nature or Science, and who reviews it? The guy down the hall. Because the reviewers are the people who have published in the field. And the guy down the hall sees the paper that says the world's coming to an end, takes a look at the NASA budget, and says, by God, it is. Let's give this paper a favorable rating. Now, on the other hand, how about the paper that says the world is not coming to an end, which came in not from within the agency but surely from some outsider, you know, maybe some Koch-funded climate skeptic, guaranteed, by the way. So how does that work? Well, that paper is going to get slammed. And I proved this. I published a paper, actually, in a renegade journal a couple years ago where you could see that the ratio of papers saying it's worse than we thought to papers saying it's not as bad as we thought was completely improbable, assuming the science is unbiased. 
This is part of the going-for-flashy-headlines type of thing. Well, yes, it's part of the same set of incentives. Again: I publish, I don't have to fly coach. I publish, I get a job. But going back, so you make a prediction and you say, I believe that the temperature will rise by two degrees in the next 20 years. And it doesn't. And then you test the hypothesis, and it could be either less than two degrees, so less than we thought; two degrees, exactly what we thought; or way worse than what we thought. Now, generally speaking, you should be testing all sides of the hypothesis, correct? And what did you find? Well, what you see, again, is much more it's worse than we thought. Now, there's a sticky point here, because what I'm telling you is that each new piece of information, once a baseline is established, has an equal probability, let's say, in climate change, of raising the forecast and making the impacts worse, or lowering the forecast and making the impacts less. That is true because climate models are supposed to be unbiased. And the climate profession, in a famous brief in the 2007 Supreme Court case, Massachusetts v. EPA, made a blatant statement that it was equally probable that new research would say it's worse than we thought or not as bad as we thought. So I was just testing the hypothesis of my community, and it failed miserably. Does this mean, then, that science as actually practiced out there in the world works differently than the way that we're taught in school that science works? Because the way we're taught in school is the scientific method. So you have a hypothesis. The scientific method is still there. Right. So the notion that people have of science is, hey, man, I'm a scientist, step back. You have a hypothesis, and then you go and check it, and you see if the data that you find supports it or not, and then you have either confirmed or invalidated it. That's the grade-school version of it. 
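The "completely improbable ratio" described above can be checked with an exact binomial test: if each new finding is equally likely to raise or lower the forecast, the split of "worse than we thought" versus "not as bad" papers should look like fair coin flips. The counts below are hypothetical, and this is only a sketch of the kind of calculation, not the actual paper's method:

```python
# Sketch: testing whether a "worse vs. not as bad" split is plausible
# under an unbiased 50/50 null. Counts are made up for illustration.
from math import comb

def two_sided_binomial_p(k: int, n: int, p: float = 0.5) -> float:
    """Two-sided exact binomial p-value for seeing k or more successes
    out of n (doubled upper tail, capped at 1)."""
    tail = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
    return min(1.0, 2 * tail)

# A 35/35 split is exactly what an unbiased literature would produce;
# a 60/10 split is wildly improbable under the 50/50 null.
print(two_sided_binomial_p(35, 70))
print(two_sided_binomial_p(60, 70))
```

The second p-value comes out vanishingly small, which is what "completely improbable, assuming the science is unbiased" means in statistical terms.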
But what you mentioned earlier was that when a government agency like NASA is soliciting research, people say, I think you said something like, this is what we're going to try to prove, and then they go and do the research. So is that the grade-school version of it? Are they trying to prove a hypothesis? Could we consult Fanelli again? The reason I like consulting Fanelli: I've been corresponding with him by email. He's very careful, and I have a feeling he's either a libertarian or a communitarian; I can't quite tell which. So let's ask him about the nature of grade-school science. He tries to explain why there are so many positive and so few negative results. And here's his answer, quoting: the hypotheses tested, this is what you're talking about, might be increasingly likely to be true, which is what we're seeing (that's my wording). Obviously this would not happen because the sciences are closer to the truth today than 20 years ago, but because researchers might be addressing hypotheses that are likely to be confirmed, making sure they will get publishable results. So let's go back to your original question if we could, Aaron. What happens is, if you test the hypothesis and you get a negative result, sometimes it goes into the back of the file drawer. It's known as the file drawer effect. Negative results oftentimes aren't published: we got a negative result, but I don't know if we really want to put that out, I might get yelled at for it. Look what happened to poor old Pat Michaels when he published a paper that said the world wasn't going to warm up all that much. Or could it just be the thinking, well, if we didn't get that result, maybe it was because we didn't set the experiment up correctly or our instruments weren't calibrated, rather than, we have to revise our hypothesis? We'll just run it again. Yeah, look at the research stream coming out of the Lawrence Livermore National Laboratory on verification of climate models. 
Basically, as we go further and further into an era with either no net surface temperature change or a slight one... Which is about 18 years, right? That's what most people... We're about to get into this. It depends on the record you use: 15, 16, 17, or 18 years. Now, does the IPCC agree with that? They acknowledge that it's there. Okay. It's called the pause, or something like that. Very loaded word. The pause implies, well, that it's just paused. But as it gets longer and longer and longer, papers from Lawrence Livermore keep coming out saying, well, okay, a few years ago we said 10 years; no, we really meant 15. Now the magic number is 17. Oh, we really meant 20 to 25. You need more time than that. So you can see the process operating. So do you think... And looking at this on the broader scale, government comes in, media comes in, they fund things that seem to be catastrophic. That's one possible... Well, the squeakiest apocalypse gets the dough in Washington. Or, I mean, they could also be funding things that are actually problems. And so it doesn't necessarily disprove the global warming hypothesis, correct? It would say that it might be getting into its own sort of feedback loop of people talking to each other and making it sound a little more apocalyptic than it would be. See the Climategate emails for people talking to each other. And then, so we could say that in your world you would want to see a diversity of opinions about global warming, of all types, being accepted. Well, and in science in general. Science in general. And of course I have a modest proposal, and because it's coming from the Cato Institute I can guarantee you that it may appeal only to me and our apparently few friends. And it is this: recognize the bias of government towards things that allow it to interpose itself to save us all from the horrors being described by our, quote, objective scientists. Okay. 
Now let's take the playbook from the other side. It's a fiction, but we're going to do it anyway. Okay, let's say there's a consortium of large oil companies. And they say, we're going to fund research on the environment, maybe some climate stuff. I'm going to guess that they're going to be preferentially funding hypotheses that say the world's not coming to an end, what do you think? Okay. Are they awful people for doing that? No. They're no more awful than the government is for doing the opposite. And then there may be another interest group that has another dog in this hunt that we don't quite understand. So why don't we do this? We could maybe get rid of the paucity of negative findings if we broadened the bias base. Let's just call it what it is: a bunch of biased entities. Yes. So you have all these actors. Can you say to the actors, okay, and I don't know exactly where this figure comes from, but say the federal budget for this is $5 billion or something like that: hey, World Wildlife Fund, we know how much money you get a year. If you contributed $400 million or something like that over a period of time, we would cut the federal outlay by $400 million. And you, the petroleum guys: yeah, well, we happen to know that ExxonMobil gave Stanford north of $25 million at a crack for their energy and environment area out there in Palo Alto. Okay, cut our budget by that $25 million. And, yeah, you know, Amoco, BP, okay. So what you get is a broader bias base. And then what? Average all the biases? You get a look, and I guarantee you you're going to see something much different if that occurred. You must remember that that's kind of the hybrid system we had, at least in the United States, until 1940, when we had some federal support, mainly for agriculture. The federal government sort of nosed its way into the science business after the Civil War. 
Would that address the flashy-headlines issue, though? I mean, the journals would still want to get cited, so they would still want to publish only citable articles? It probably wouldn't address the flashy-headline issue. At least I don't see how it would, unless there might be some compelling synthesis of, say, negative or moderate results in a given field that would, in fact, be sufficient to generate flashy headlines. Those journals, Science, Nature, and Cell, are still going to publish flashy stuff. That's what they're going to do. They're not going to stop. And if you don't think they're involved in the political process, all you have to do is walk out the door of the Cato Institute and go a couple of blocks down the road to 12th and New York, the headquarters of the American Association for the Advancement of Science. That's the scientists' lobby in Washington, D.C. When the energy bill was being debated in 2007, you may recall that President Bush's solution to climate change, caused by what he once called carbon monoxide, was ethanol. And so the energy bill is being debated in the Senate, and it's all about ethanol. And a banner drops from the side of the American Association for the Advancement of Science building: a multi-story banner of a corn stalk with the cob morphing into a gasoline nozzle. Now think about this. The background is a blue, pristine ocean, and it's a beautiful sunny day. And the caption says, ethanol from corn: fuel for thought. That's close to Stalinist, okay? That was massive propaganda on the part of the American Association for the Advancement of Science. Well, they were believing in the research, and they would like to do the research, so they wanted money for the research, right? That's not really Stalinist. 
To push a policy that some of the people in the field are beginning to say actually results in more carbon dioxide emissions than if you just burned the gasoline? That turns out, by the way, to be the answer, unless you take the meal that's left behind from ethanol production and feed it to pigs and cattle. It's a high-protein source. Then it gets closer to neutral. But the fact of the matter is, it's not going to do anything about climate change. So I'm sitting here, and I'm not a climate scientist at all, and I imagine most people listening to this are not climate scientists, so they're wondering what to think. They're hearing Pat Michaels. What is the average person to think? The bias is interesting to me here, because we're talking about how politics biases science, and these results you're talking about: too many positive results, not enough negative results. Real science would not produce that level of bias, so something is going on here in a variety of fields, including, if not especially, global warming. So if you're trying to state the minimal thesis of why you should be skeptical about climate change and government science, is it these findings right here that you're talking about? That thesis doesn't take a side by itself; it says that the funding is biasing science toward the positive in all fields, and that's why, at the very minimum, you should be skeptical of climate science in your specific area, but also food science, biomedical science. You mentioned that before. What about biomedical science? Good Lord. I mean, the stuff that gets published, which means the reviewers approve it, in biomedical scare science is sometimes breathtaking. You ever heard of bisphenol A? No. Well, bisphenol A, aka BPA, is the current bad chemical du jour. It interacts with estrogen receptors, okay? And it's in the lining of a huge number of canned goods. It keeps cans from deteriorating. 
I'd like to know how many cases of botulism bisphenol A has prevented, probably a whole bunch, but the idea is that it will raise your concentration of blood-borne estrogen. And to show this was true, a researcher published a famous paper about six months ago in which she found elevated levels of blood estrogen. And then in the fine print, you realize that, well, you see, BPA is hydrolyzed on first pass in the stomach, okay? So instead of feeding the rats something with BPA, they injected it directly into their bloodstream, and so much of it that you would have to consume probably about 20 cases of canned tomatoes in an hour in order to get something like half that level. Now, if I were a reviewer, I would have said, give me a break. You can't get a concentration like this in a human being, and it doesn't last very long anyway. But no, the paper was published, the headlines were made, and it was up to people like us at the Center for the Study of Science here at the Cato Institute to say, hey, wait a minute, guys. And this would be like red dye number five and all these. Oh, God, red dye number five, yes. These additives that come up: again, it's flashy science all over. And does the government fund those kinds of studies disproportionately, too? Well, saccharin was an example. Saccharin was taken off the market. Saccharin's used as a sweetener; it was very cheap and abundant, and it was implicated in some type of weird cancer or something like that. It turned out not to be true. It destroyed the manufacturers of saccharin. Other people got heavier. Unintended consequences are pretty large. So, yeah. And then the regulatory paradigm that results from biased science is really bad. I think we should talk about how the bias directly affects the policy process. You want to do that? Please, yeah. 
OK. We claim, in the United States and other countries around the world, but especially in the United States, that our policies, regulatory policies where necessary, are based upon science. You know, if Al Gore were in the room, oh, God, he is, all of a sudden you hear, science says we must limit carbon dioxide or we will all die. OK, so that's the whole deal. We claim it's science. Now, why do we claim that? Well, because big pieces of legislation, like the cap-and-trade bill from 2009, are based upon citations of mega-science compendia: the reports from the Intergovernmental Panel on Climate Change of the United Nations, for example. Those come out every several years. What they really are are massive literature reviews. And guess what? If you're reviewing a biased literature, what's your literature review going to say? Oh, my God, it's worse than we thought. Now, there's no politician in the land who can stand up and say, I know this is biased science. No politician is a climate scientist. They're not going to say, I know this doesn't make sense, I've read Daniele Fanelli. No, they're not going to say that. So they have to go along with it. And then you get your policy. Not a pretty picture, is it? Yeah, absolutely. And that's why Cato's doing this. It turns out what's going on in science affects individual liberty. How skeptical, then, should the average person be, the person who's not involved in policymaking and not a scientist, but just someone who hears about various studies, whether on the news or on the internet or whatever? What should our general level of skepticism be? Should we just discount all the stuff we hear? Or are there things that should trigger a skeptical response, so we can say, this sounds fishier than the others? What can we, as regular people, do to get more of the truth out of science and counter the bias? 
Well, first of all, I would say beware of individual results or low-sample-size studies. This guy Ioannidis at Stanford, who was kind of the first guy in the modern era to say, hey, there's something systemically wrong with science, I think found that 95% of all positive results, meaning fish oil will save your life, turn out to be wrong. And one of the reasons is that most of those early studies are low-sample-size studies. You want to look at meta-analyses, large analyses, remembering that the meta-analysis itself is meta-analyzing a biased literature. And that's just what you have to do. I mean, I say: take the end of the world with a grain of salt; ask yourself, can this hypothesis be falsified? And see if there have been any experiments published trying to falsify it. The problem is, in many fields, the way that we do so-called postmodern science has become non-falsifiable. I'll give you an example. This week: the polar vortex, which, by the way, has been around ever since the planet got an atmosphere and started rotating. And it deforms, the atmosphere being a fluid. And occasionally a piece of it will break off from the overall flow and drift southward or eastward. That's what happened. That caused the big cold outbreak of January 2014. Well, you can go look in the popular literature, and you'll see that some people say, oh, this was caused by global warming affecting the jet stream in the high Arctic. Or it disproves global warming. Either one. Either one. First of all, weather is not climate. But let's go back now to the 1970s, because those who were around and conscious then in the eastern two-thirds of the United States know that the winters of the late 1970s and the 1980s were brutal. If you want to see a real cold outbreak, look at the Christmas cold outbreak of December 1983. Anyway, having said that, what did Time magazine write in 1974? Oh, the unusual cold that we're seeing is a result of global cooling. Right. 
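The warning about low-sample-size studies can be illustrated with a small simulation of the so-called winner's curse: when studies are under-powered, the ones that happen to clear the significance bar systematically overstate the true effect, so the "publishable" results look far more dramatic than reality. Every parameter here is an illustrative assumption:

```python
# Sketch: under-powered studies that reach significance exaggerate the effect.
# All parameter values are illustrative assumptions.
import random
import statistics

random.seed(0)
TRUE_EFFECT = 0.2            # small real effect, in standard-deviation units
N = 10                       # tiny sample per study
CRIT = 1.96 / (N ** 0.5)     # rough one-sided cutoff for a mean with known sd = 1

significant_estimates = []
for _ in range(20_000):      # simulate many small studies
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    est = statistics.fmean(sample)
    if est > CRIT:           # only the "positive, publishable" results survive
        significant_estimates.append(est)

# The surviving (published) studies report an effect well above the truth of 0.2:
print(round(statistics.fmean(significant_estimates), 2))
```

Because only estimates above the cutoff are kept, the published average lands several times higher than the true effect, which is why a single small flashy study deserves the grain of salt recommended above.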
So climate change can cause anything, so far as I can tell. And that's the problem. It means it's unfalsifiable science. You know, Karl Popper, the great philosopher of science, calls that pseudoscience. And he had three examples of pseudoscience: Marxism, psychoanalysis, I forget what the third one is. But things that explain everything really explain nothing. And when a field starts to explain everything, back off. That's when you've got to say, wait a minute. Something's going on here. I don't get this. So if someone comes along and asks, well, it doesn't seem to be the case that government has always been wrong on this stuff. For example, cigarettes and smoking. That's kind of obvious. Well, they funded that, and did that bias it? If you inhale hot, acrid smoke into your lungs, you're probably going to cause damage. Is that the proper answer? I'm asking because not all government science, or predominantly government-funded science, ends up with wrong results. No, it does not. And it's unfortunate. I mean, in the case of lung cancer, I think that was a no-brainer. Well, for a long time, it wasn't. People were dying of lung cancer, but it could have been something else. You still needed science to figure out what it was. Right. Actually, statistical analysis was all that was necessary there. But yes, it is true that the government oftentimes funds stuff that turns out to be true and can result in regulations that, in fact, are necessary. I hate to say that I don't have a solid answer for you. Nobody does. That's why we established this center here at Cato, to try and figure this out. And I've got three of the best people in the world helping us now. We've got Dick Lindzen. To figure out how government biases science generally. 
But another example that occurs to me is the effects of drugs, which has been written about before in terms of the negative effects, like the addictive effects of cocaine or heroin being overblown by the drug war and by Reagan. The drug war is a wonderful example of science gone bad. You know, OK, here's one for you. Guys who are somewhat old tend to have chronic pain issues, and so they have experience with fairly strong pain medication every once in a while. We have heard of the scourge in the drug war, the scourge of Vicodin, Percocet, Vicoprofen, all the synthetic narcotic painkillers that contain either hydrocodone or oxycodone. Must be bad, killing massive numbers of people. How many prescriptions are written for hydrocodone per year? Anybody got a guess here? In the United States. Is it 100 million? 130 million. If you do some simple math, it works out to about 26 doses per man, woman, and child in the United States. Now, I submit to you that the emergency rooms would be overflowing, there'd be lines in the street of people falling over in respiratory arrest from drug overdoses, if this were that dangerous. But no, you get bad policy because somebody looks at a small sample of data, extrapolates it, and now you have a drug war. And we've also seen contrary studies about how addictive heroin really is. Because I grew up being taught in school... And this seems to be a question of, what are we going to teach children? We're going to teach them real science about, and I'm putting real in quotes here, climate change. And I was taught that you take one dose of heroin and you're hooked for life, basically. Aaron was probably taught the same thing. Right. And that would have been government-funded science, too. That's true. And it's actually probably the other way around. Another new person we have at the Center for the Study of Science is Ed Calabrese from the University of Massachusetts. He's a toxicologist.
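The back-of-the-envelope arithmetic above can be checked with a quick sketch. The 130 million prescription figure comes from the conversation; the pills-per-prescription and US population numbers here are assumptions added for illustration.

```python
# Rough sanity check of the "about 26 doses per man, woman, and child" figure.
# Assumptions (not from the transcript): ~60 pills in a typical prescription,
# US population of roughly 310 million around the time of this conversation.
prescriptions = 130_000_000   # hydrocodone prescriptions per year (from the transcript)
pills_per_rx = 60             # assumed typical prescription size
population = 310_000_000      # assumed US population

doses_per_person = prescriptions * pills_per_rx / population
print(round(doses_per_person, 1))  # → 25.2
```

With those assumed inputs the result lands in the mid-20s, consistent with the figure quoted in the conversation; the exact value obviously shifts with the assumed prescription size.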
And he shows how wrongheaded the one-dose or one-molecule threshold is. We regulate things now based upon the assumption, in many cases, of something called a linear no-threshold model, meaning the first molecule of a carcinogen can kill you, the first photon of ionizing radiation can kill you. That's baloney. In fact, the entire therapeutic model is that you take small doses of things and it has a salutary effect; if you take too large a dose, it will kill you. So when you say we regulate based on this first dose... I can tell you how that happened. But that just means that the regulations ban it entirely instead of allowing some small, nonzero amount. Pretty much. I mean, if we were really, really, really honest on that, we would try to make sure that people never went out in the sun. But here's how this one happened, and this is a lesson in science that's going to really, really bother a lot of people. There's a guy by the name of Hermann Muller, who won a Nobel Prize in something close to health physics, in 1946 or thereabouts. And he had developed the hypothesis that one photon of ionizing radiation was capable of hitting DNA and uncoupling a base pair. He didn't know the mechanism at the time, but possibly it was de-repressing something that was repressing an enzymatic expression that would make the cell go embryonic, i.e., cause cancer. And the one photon could do this. Well, that was science in search of an explanation for evolution, because they were trying to figure out how organisms evolve, and the reigning theory was that ionizing radiation changed DNA: some people died, some people had advantages, et cetera. Makes sense, except it turns out it's not true. The amount of DNA that is self-destroyed in the aging process is orders of magnitude greater than what most ionizing radiation can do, unless you get massive doses of it.
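The contrast being drawn here, between a linear no-threshold model and the hormetic (J-shaped) dose-response that Calabrese argues for, can be sketched numerically. The functional forms and parameter values below are toy assumptions for illustration, not anything taken from Calabrese's work:

```python
# Illustrative comparison of two dose-response assumptions.
# Under linear no-threshold (LNT), any dose above zero adds risk.
# Under a hormetic model, small doses sit below baseline risk
# before large doses become harmful.

def lnt_risk(dose, slope=0.01):
    """Linear no-threshold: risk grows from the very first unit of dose."""
    return slope * dose

def hormetic_risk(dose, benefit=0.05, harm=0.0005):
    """Toy J-shaped curve: a small protective effect at low doses,
    overwhelmed by a quadratic harm term at high doses."""
    return -benefit * dose + harm * dose ** 2

for dose in (0, 10, 50, 200):
    print(dose, lnt_risk(dose), hormetic_risk(dose))
```

Under the toy hormetic curve, doses of 10 or 50 come out below baseline (negative excess risk), while a dose of 200 comes out strongly positive; under LNT, every nonzero dose carries excess risk, which is the regulatory logic the conversation is criticizing.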
But Muller won the Nobel Prize for this, and he became the head of the panels that put forth the regulations for ionizing radiation in the United States. Which were zero. Zero, correct. And so one person, and Ed Calabrese is documenting this, one person caused a regulatory nightmare that won't go away except for his dogged research now showing the generally positive effects of small doses of things that, in large concentrations, are very bad. But that's how it can happen. And that was science-based regulation. My God, Hermann Muller said it. I mean, it can't be wrong. Nobel Prize, what are you talking about, man? And this would be something behind, like, arsenic in drinking water, for example, if we think of it as a linear relationship. It's not. And under that assumption, every single time you raise the standard from eight parts per billion to nine parts per billion, you've killed X hundreds of people with that one part per billion. Linear no-threshold, unfortunately. And by the way, Calabrese has actually documented some stuff on arsenic that is very interesting; in fact, there is an effect at very low doses that's not really negative. But that's another story. How about regulations for ethanol in your blood? I mean, there are people who would like it to go to zero, okay? And if you drive with anything but zero, sorry, $10,000 and six points on your license. Well, way back when, there was research showing that people with low concentrations actually drove better, but that got suppressed. I think it was actually in Scientific American, way back when. So we make policy off of zero tolerance for radiation. We make policy based upon science that is now demonstrably being recognized as systematically biased by the desire for professional advancement. And by the way, the universities love federal money.
Anybody who doubts the story that I'm telling about bias in science, I just wish they had sat on a university promotion and tenure committee at a tier-one university to see how quickly the question comes up: How much money did he get? How much money does he bring in? How much of it is good money? Good money is federal money. How much of it is the best money? That's National Science Foundation money. These are performance metrics that come up right at the top, that is correct. And so I would argue, and this is somewhat of a wide-ranging discussion, that the origins of political correctness are not necessarily in the inherent leftiness of the faculty; the origins of political correctness are that the university is dependent upon the state and therefore believes in the state. All that overhead money from research funds is used by the university in a discretionary fashion for academic matters. So what's happening is, in the Environmental Science Department, the overhead on their money, which is 50%, means that if you put in a grant for $5 million, 50% of that goes somewhere else in the university. That money will go to pay salaries in the Germanic languages department, where tuition revenues won't support it. So the universities are addicted to this. Are they gonna promote people who are going to say, you know, the science that we're doing at this university is biased? I could just see me going to Dean Laffler at UVA when I was there, and I was there for 30 years, by the way, and saying, hey, Mr. Dean, Mel, I got a great idea. I think I can find evidence that the science that's coming out of our Environmental Science Department is biased in one direction where it shouldn't be. You think I should do that research? Will you sign that proposal? What do you think he's gonna say? Yes? I don't think so. So the institution has a culture of statism that results in part from the same thing that also creates biased science. It's a nasty picture, isn't it?
Do these sorts of biased results, are they less of an issue, then, in fields of science that are less tied to things people would regulate around? And so things that, you know, Congress is less likely to say, hey, agency, we've got this problem, we'd like you to fix it by... The mating habits of the porpoise. Yeah, or some guy who's studying ants or something. You know, is there less of an incentive, and so do we see less of this kind of positive-results bias or whatever else? Again, we're looking at Fanelli, because Fanelli is the only guy who has done a comprehensive study of the scientific literature. And I can tell from his emails that he's a really, really cautious guy. But anyway, let me just read the conclusion of his paper, the one about the disappearing negative results, and think of your question about whether this happens more in fields that have regulatory importance. In conclusion, it must be emphasized that the strongest increase in positive results was observed in disciplines like clinical medicine, pharmacology and toxicology, and molecular biology. Now, you can't say pharmacology and toxicology don't have regulatory implications, okay? Where concerns for publication bias, meaning throwing the negative result in the drawer, that's called publication bias, have a longer history, and several initiatives to prevent and correct it have been attempted, including registration of clinical trials, enforcing guidelines for accurate reporting, and creating journals of negative results. This study, the one that Fanelli just did, suggests that such initiatives have not met their objectives so far and that the problem might be worsening. So yeah, it seems to be affecting regulatory fields. So as a conclusion, we would say, or someone might be thinking, okay, so let's say government stops funding science. Not gonna happen.
And then we have ExxonMobil and Philip Morris, and we have a bunch of self-interested people, because you would say that it would be the same behavior, right? Yes. The scientist is trying to get money from Philip Morris. If scientists want to be promoted and given fancy raises and a fancy office, they're gonna do what they can within the truth that they're describing to make sure that that truth, while it is the truth as they see it, also isn't one that's gonna hurt them. And that would be true for private money as well. Private money, sure. Absolutely. So we have sources of money creating bias, which is what people often accuse climate change skeptics of, saying that the money is creating the bias. But your main point is that government money also creates bias. Of course. So what do we do about trying to fix this problem? We all admit our biases and stop pretending that government is not biased. Yes, yes, that's right. And again, I think if you broaden the bias base, at least you're gonna get diversity. Come on, the universities are all for diversity. Let's be for intellectual diversity. Let's be for funding diversity. Why do we have to depend upon the state, particularly the federal government, to throw money at the university that allows the university to run departments that are uneconomical? Huh? Why can't other people do this for the university too? So in closing, you've painted a pretty grim picture of science, of the bias that is inherent in science, in the way that it's funded and in the results that we see. What can we do about that? I would say be astute and savvy and take what you read with a grain of sodium chloride, okay? Don't jump off based upon one study, and remember that the compendia on pretty much any issue are going to suffer from the problems that Fanelli has uncovered.
Or, as we talked about at the beginning of this show, when Randy Schekman got the Nobel Prize, he said the big journals were harming science. It's just the way it is. Deal with it. If you have any questions or comments about today's episode, you can find me on Twitter at A-R-O-S-S-P. And you can find me on Twitter at tcburrus, T-C-B-U-R-R-U-S. Free Thoughts is a project of Libertarianism.org and the Cato Institute and is produced by Evan Banks. To learn more about libertarianism, visit us on the web at www.libertarianism.org.