Okay, good afternoon everyone. Welcome to the Science-Based Medicine workshop. We have a good hour and a half planned for you. There are four of us on the panel for this workshop: myself, Steven Novella, the founder and executive editor of Science-Based Medicine — I'm also a senior fellow at the JREF and in charge of their Science-Based Medicine program. With me today is David Gorski, who is the managing editor of Science-Based Medicine; Harriet Hall, who is also an editor; and Mark Crislip, who is a regular contributor and editor of Science-Based Medicine. And together we form Voltron. Sometimes.

So very quickly: this is really a primer. We've done different topics for these workshops, but this year we wanted to get really basic. Each of us picked one area that we thought was an absolutely core part of understanding science-based medicine, to go over in some detail. At the end we're also going to do a little bit of interactive Q&A. We'll ask you some questions and see if you can bring together everything we talked about and analyze some abstracts and some quotes, etc. So these are the topics we're going to be covering. David's going to tell you what science-based medicine is. Harriet Hall is going to tell you how to evaluate clinical research. Mark is going to be covering — what are you going to cover? I was going to stand there for 20 minutes. I was going to stand there and look down for 20 minutes. And then I'm going to cover logical fallacies. And then I'm going to cover the placebo effect — placebo effects. And then we'll do a Q&A at the end.

The Science-Based Medicine website has a new look; if you haven't been there recently, pay us a visit. We have over a thousand articles on there now, covering most topics central to science and medicine. Also, we have the first couple of years of Science-Based Medicine out as eBooks. You can download these for the Nook or the Kindle. And you can also purchase them for a special conference price, $29.95, all on — a DVD or a CD? A CD. A CD, all on a CD. You can get them at the SGU table at the tables in the back.

So David Gorski is going to get us started with: what is science-based medicine? Thanks, David.

Good afternoon. Let's just get right into it. I basically think the way to understand what science-based medicine is is to compare it to evidence-based medicine, which all of you have probably heard about. I know we've written about this a lot, but I'm going to try to boil it down in less than 15 minutes. We hear the term evidence-based medicine all the time, and who would think that there could be anything wrong with basing medicine on evidence? I mean, it's like mom and apple pie. But do we question it? Well, of course not. The question is: how? What exactly does it mean to be evidence-based? You know the term evidence-based medicine, but have you ever really thought about what it actually means — at least the way evidence-based medicine is practiced now? And there were lots of -BMs before evidence-based medicine, before EBM. Dogma-based medicine: that's what the textbooks say, or that's what Thor says, I don't know. Antiquity-based medicine, which you hear a lot these days: we've been doing it this way for thousands of years; acupuncture is 2,000 years old. Experience-based medicine: that's what I've seen in my practice.
Oh — you do know what the most dangerous three words in medicine are. Mark knows. "In my experience," right? That's-the-way-I-was-taught medicine. Testimonial-based medicine: it worked for me. We see a lot of this from people like Jenny McCarthy, various books on cancer, and of course Suzanne Somers. And this is Kim Tinkham, whose testimonial-based medicine did not end very well. She died of her breast cancer.

So, from the Cochrane Collaboration — I'm just going to skim through this; I know it's far away and a lot of you can't read it — basically, evidence-based healthcare is the conscientious use of current best evidence in making decisions about the care of individual patients. The current best evidence is up-to-date information from relevant, valid research about the effects of different forms of healthcare, the potential for harm, the accuracy of diagnostic tests. It's all a lot of verbiage. And Cochrane, as you know, is the Bible, the Holy Scripture, of evidence-based medicine. Their reviews and meta-analyses basically try to synthesize the medical literature. And they come up with this lovely thing here that they call the evidence-based triad, which basically says: okay, you take the best external evidence, your clinical experience, and patient values and expectations, and in that little corner there you get EBM. Which is all well and good. But what does the best external evidence mean? What is the best external evidence? How do you decide that? I think the way the current paradigm works causes some real problems.

For instance, what does Cochrane say about homeopathy? Do you all know what homeopathy is? Do I have to explain what homeopathy is? It's water, okay? So, for instance: "In view of the absence of evidence, it is not possible to comment on the use of homeopathy in treating dementia." Sure it is. I do it all the time. "There is not enough evidence to reliably assess the possible role of homeopathy in asthma. As well as randomized trials, there is a need for observational data." No, there isn't. "There is insufficient evidence to recommend the use of homeopathy as a method of induction of labor. Rigorous evaluations of individualized homeopathic remedies for induction of labor are needed." No, they aren't. And here's my favorite — this is Mark's favorite, too, I think: "Though promising, the data were not strong enough to make a general recommendation to use Oscillococcinum for first-line treatment of influenza and influenza-like syndromes. Further research is warranted, but the required sample sizes are large." Okay, you know what? Mark has his own name for this stuff. And do you know what it is? You've probably heard of it — Boiron makes lots of money selling it. It's homeopathically diluted duck liver and heart. So here's Cochrane, the guru of evidence-based medicine, saying that, well, you know, there is not enough evidence to say that ground-up duck liver diluted to nothing shouldn't be used for flu. Touch therapies, same thing. You know, therapeutic touch is basically the same thing as Reiki, more or less; the details change a little bit. They say touch therapies have a modest effect on pain relief. Really? "More studies on healing touch and Reiki in relieving pain are needed." No, they're not.

So is evidence-based medicine the same as science-based medicine? Obviously, no, not really. So here's an idealized way that I like to think of how things should come about. You start with basic science.
You look at in vitro, cell culture, biochemistry studies. You move on to animal models, then on to clinical trials. And it filters back to basic science; there's some interplay going back and forth. But the basic science is there, and you don't do anything that doesn't have a basis in basic science. But if you look at the evidence-based medicine pyramid, there's something missing. This is the evidence-based medicine pyramid. Look at what's at the very bottom: in vitro research, animal research. Look at what's above in vitro and animal research: ideas, editorials, and opinions. Then finally you get the case reports, case series, case-control studies, cohort studies. And at the top you get the randomized clinical trials and the meta-analyses. So what's missing? Evidence from basic science showing that a therapy is either highly improbable or impossible. And you could say that about homeopathy on basic science alone.

So there's a blind spot here. Evidence-based medicine has a blind spot. Clinical trial evidence is the be-all and end-all — it really is — and if it isn't clinical trial evidence, it doesn't matter. Basic science considerations are relegated to the lowest level of evidence. And this blind spot, as we argue on Science-Based Medicine almost every day, is directly contributing to the infiltration of quackery into academic medicine. We also say it's impossible and unnecessary to do randomized clinical trials for every single question; it's just logistically not possible. I love this term, quackademic medicine. It was coined by Dr. R.W. Donnell to describe the infiltration of pseudoscience and alternative medicine into academic medical centers. We have people like Andrew Weil and Dr. Oz — remember, Oz got his start as a quackademic, so to speak. David Katz, at the end there, is famous for what he said about homeopathy and randomized clinical trials: "I think we have to look beyond the results of randomized clinical trials in order to address patient needs today. And to do that, I've arrived at the concept of a more fluid form of evidence than many of us have imbibed from our medical education." Yeah.

So, back to why science-based medicine. Evidence-based medicine is a great idea. Who can argue with using evidence? Science, evidence — obviously we want our medicine to be based on evidence. But in practice it's really flawed, and it's had some very unfortunate consequences. We argue that the application of basic science can correct EBM's blind spot. I also like this term for what's gone wrong with evidence-based medicine: methodolatry. Methodolatry is the worship of a method, employing it uncritically regardless of particulars and past negative results. And I love this definition of it as applied to medicine, and particularly evidence-based medicine: the profane worship of the randomized clinical trial as the only valid method of investigation. That's a great quote.

So here's what we think happened. Evidence-based medicine was basically blindsided. CAM ignores basic science; it goes straight to clinical trials of these various modalities without any consideration of whether they're even the least bit plausible. The implicit assumption in EBM is that any hypothesis that reaches the stage of a randomized trial has good preclinical evidence to support it. So we're not saying you can base therapies just on basic science.
Basic science alone is insufficient, but it's not unnecessary. And that's the problem with CAM, and that's the problem with evidence-based medicine. It essentially relegates basic science to the lowest rung, which may be okay when you're trying to take something that has some basic science plausibility and prove whether or not it works. But when you're taking something that violates the laws of physics, like homeopathy, and relegating that basic science to the bottom and saying, well, we still need clinical trials — that's a problem. So basic science can't conclude that a treatment is efficacious, but it can conclude that the probability of it being efficacious is so low that it's not worth doing clinical trials.

And here's a little thing about clinical trials — I can only talk about this for a minute, tops. Prior probability really does matter. Take a p-value of 0.05. If the prior probability — the estimate, before you do the trial, of whether the hypothesis is true — is about 50%, then a p-value of 0.05 gives you only about a 73% chance that you have a true positive trial. Now look at a prior probability of 1%, which is still way, way higher than most of these CAM therapies deserve. At that prior, a 0.05 p-value gives you only about a 3% chance that it's a real result. Even a really highly significant p-value of 0.001 gives you only about a 50% chance that you're looking at a real result. And that's for a prior probability that is way, way, way higher than anything for, say, homeopathy or Reiki. (There's a small worked sketch of this arithmetic at the end of this passage.)

Here's another thing they try to nail you with. Plausibility does not mean you have to know the mechanism. We can't reject something out of hand because we don't understand how it could work. However, we can reject something if its proposed mechanism violates well-established laws of physics, chemistry, or biology — if it violates principles that rest on far more solid evidence than bias- and error-prone clinical trials. We're talking stuff like energy medicine, Reiki, and therapeutic touch; claims based on anatomical structures that don't exist, like iridology; non-existent physiological functions, like craniosacral rhythms; claims violating the laws of physics, like homeopathy.

Now, here's another one I like to use — I'm almost done here. This is one of the all-time greatest articles ever written in the medical literature: "Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials." For those of you in the back, I'll summarize it: "As with many interventions intended to prevent ill health, the effectiveness of parachutes has not been subjected to rigorous evaluation. Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute." I love this paper.

And this has real-world consequences. For instance: randomized clinical trials in third-world countries testing homeopathy for acute infectious diarrhea. More homeopathic trials. Treatment of childhood diarrhea with homeopathy in Nicaragua.
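Going back to the prior-probability point above: here is a minimal sketch, in Python, of why a "significant" p-value means little when the prior probability is low. This uses the standard screening-test arithmetic (positive predictive value from prior, significance threshold, and an assumed statistical power); the exact percentages quoted in the talk come from a different Bayesian calibration of p-values, so the numbers below are illustrative of the same qualitative point rather than a reproduction of the slide.

```python
# Sketch of how prior probability changes what a "significant" result means.
# Assumes the textbook positive-predictive-value formula with an assumed,
# fixed statistical power (held constant across alphas for simplicity);
# the talk's exact figures rest on assumptions it doesn't spell out,
# so treat these outputs as illustrative, not canonical.

def ppv(prior, alpha=0.05, power=0.80):
    """Probability that a 'significant' result reflects a true effect."""
    true_pos = power * prior           # real effects correctly detected
    false_pos = alpha * (1 - prior)    # null effects crossing the threshold by chance
    return true_pos / (true_pos + false_pos)

for prior in (0.50, 0.10, 0.01):
    for alpha in (0.05, 0.001):
        print(f"prior={prior:>4.0%}  alpha={alpha:<6}  "
              f"P(true positive | significant) = {ppv(prior, alpha):.0%}")
```

However you calibrate it, the qualitative conclusion is the same as the slide's: at a 1% prior, most "p < 0.05" results are false positives, and homeopathy's prior is far below 1%.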
I mean, really, we do these sorts of trials all the time, and they are basically unethical. But for some reason, because of the methodolatry and the rejection of basic science, we still end up doing them. Which is just perfect.

And as a result, quackery, as I like to say, has evolved. So we start out, say, with this — oh, come on — but I prefer this, because this is better for what we have going on now. So we start out maybe hundreds of years ago with what we might call folk medicine, which more recently might have been called quackery. But then it got renamed alternative medicine, which sounds better than quackery. Then it got renamed, say in the 90s, complementary and alternative medicine, which is even better than alternative medicine, because it complements regular medicine. And now we have integrative medicine, which, as Mark has pointed out so well, is integrating cow pie with apple pie. And I'll finish with one last thing. And there he is, Andrew Weil, as the very epitome of integrative medicine. And a lot of this happens because we don't use the basic science, which tells us which things can't possibly work, as a basis for medicine. Thank you.

So — is this working? I'm going to talk about what can go wrong in clinical research. Now, in both evidence-based medicine and science-based medicine, we're looking for evidence, and we're looking for evidence in clinical research. We rely on research studies, but sometimes they're not very reliable. Next slide. This is John Ioannidis. He wrote a famous paper, "Why Most Published Research Findings Are False," and he listed some of the factors that make findings more likely to be false. (Is that any better? I'll just hold it, then.) Okay. So there are a lot of things that can go wrong. Think about it: most published research findings are false. That's scary. And I've made a list of 20 questions to ask that will illustrate some of the things that can go wrong.

Number one: who's paying? If a pharmaceutical company is paying to have its own product evaluated, the results are much more likely to come out positive. Two: who are the researchers? Are they true believers? Do they have some heavy investment in what they're studying? If so, the results are likely to be biased. And who are the subjects? Is there a selection bias? For instance, in a study comparing a drug to a placebo, if the patients in the placebo group are sicker, it's going to falsely make the treatment look better. And the subjects might be biased in some way. There was a study done by a so-called chiropractic neurologist, a study that involved chiropractic manipulations, and he recruited his subjects from his own students in his chiropractic neurology courses. So they're all chiropractors. They all obviously believe that chiropractic manipulation works. They know what the teacher wants to find, and they want to please the teacher. So you can certainly expect some bias there. And were negative studies suppressed? We call this the file drawer effect. You study something that you're pretty sure works — you really want to prove that it works — but you do nine studies in a row, and every one turns out negative. So you do one more study, and that one turns out positive. So you stick the first nine studies in the file drawer and only submit the last one, the positive one, for publication. And since the rest of us don't have access to what's in that file drawer, we get a very distorted view of the data. Was randomization adequate?
Did they follow proper procedures? Do the data justify the conclusion? That may sound like a silly thing to ask, but I've seen any number of studies where the data don't justify the conclusion, and even where they justify the exact opposite conclusion, and the authors just put a spin on their data to make it look like something else. And what didn't they tell us? Every research study is supposed to have a methods section where they describe what they did, so that anyone else could read the paper and do the same thing in their own lab. But they can't possibly include every little thing they did, and there are things they didn't even know about that might have affected the results. They might not have told us that there was a bacteriology lab next door studying something that drifted over and contaminated their lab. How many dropouts were there? If you do a study with 10 subjects, and six of them drop out because it's not working, and of the four who remain you have three that improved and one that didn't, you'll only see the three and the one, and it'll look like you have a 75% success rate. But counting everyone who started, the real success rate is only 30%. And where was the study done? This slide lists the percentage of acupuncture trials that were positive in different countries. Science is the same everywhere around the world, so you'd expect to see the same percentage everywhere, but it varies quite a bit. And this is mainly a cultural thing. In China, for instance, if you get a negative result, that means you've failed. You lose face, and you might even lose your job. So essentially no negative studies get published in China. And I'm not just talking acupuncture studies; this is true for all of their science. Russia is another offender. And I'm suspicious of any studies coming out of China or Russia until I see them replicated in a country that has a better track record. Next: what was the sample size? There was a study of three chickens, where one-third of the chickens got better and one-third of the chickens stayed the same. If you're paying attention, you'll notice that that's only two-thirds, so you're supposed to ask: what about the other third? That chicken ran away. Obviously, the more subjects, the more reliable your results are going to be. Where was the study published? If it was published in the New England Journal of Medicine, you know they have very high standards, so it must have been a pretty good study to have passed their editorial review. But if a study on acupuncture was published in the Lower Slobbovian Acupuncture Weekly, it might not be such a good quality study. Were statistical errors made? Sometimes people use the wrong statistical test for the kind of data they're dealing with, and sometimes they make simple mistakes in arithmetic that don't get picked up before publication. Was the control appropriate? Did they compare a drug to an identical-looking placebo, or did they compare the drug to patients who were on a waiting list and didn't get any treatment? Was blinding effective? Was there any way the researchers might have been able to tell which patients were getting the real thing? Could the patients tell? Sometimes they can. Sometimes they do things like opening the capsule and tasting it to see if it tastes like sugar. So it's good to do an exit poll. After the study is over, you go back and ask the patients: which group do you think you were in? Do you think you got the placebo or the real thing?
And if they can guess better than chance, then you need to go back to the drawing board. Are there multiple endpoints? Let's say you do a study of a treatment to prevent heart attacks. Your main endpoint is whether they had heart attacks or not, but maybe you measure all kinds of other things: What was their cholesterol level? How many days did they spend in the ICU? How many times did they visit their doctor? If you measure enough things, you're sure to find a false correlation somewhere just by pure chance. There are statistical methods for correcting for multiple endpoints — did they use them? And was there inappropriate data mining? Sometimes they don't get the result they wanted, and they go back and torture the data, twist it every which way, and divide it into subgroups until they find some subgroup where it looked positive. Could there have been fraud? You've all seen the reports in the news about researchers who falsified data, or who just made up a study that was never actually done. And sometimes the people working in the lab, either consciously or subconsciously, distort the data because they know what the boss wants. Was the endpoint a lab value or a clinical benefit? Does it just lower the cholesterol, or does it actually prevent heart attacks? We like to call it POEM: patient-oriented evidence that matters. It's not the cholesterol that matters. We're not treating the laboratory; we're treating the patient. What was the effect size? If you're testing a blood pressure medicine and it lowers the blood pressure by 20 to 30 millimeters, that means a lot more than if it only lowered the blood pressure by 1 to 2 millimeters. How were the data reported? We like to look at numbers needed to treat and numbers needed to harm. In an early study of Lipitor, they found a 19% reduction in heart attacks. But if you look at the number needed to treat, you'd have to treat 250 people to prevent one heart attack, and by treating 200 people, you'd get one of them harmed by the medication. And did they report absolute risk or relative risk? One study said that cell phones increased the risk of acoustic neuromas by 200%. An acoustic neuroma is a benign tumor of the ear — and don't worry, because a subsequent study showed this wasn't true. But 200% sounds pretty scary. The baseline risk for acoustic neuroma in the general population is 1 in 100,000 people. 200% of 1 is 2, so the absolute risk was one more case of acoustic neuroma per 100,000 people. That doesn't sound nearly as bad as 200%, but it's the exact same data, just reported differently. And what are the confidence intervals? When you do a study several times, you'll get slightly different numbers each time, because it's an imprecise business. If you look at the bar on the left, it looks like they got a value of about 18, and the red bar they calculated is the 95% confidence interval. That means they got 18 this time, but they're 95% confident that the actual value lies somewhere between 16 and 20. And if you compare that to the — whoops, it's going the wrong way. Oh, okay. If you compare the brown bar to the white bar, it looks like the brown bar won. And the lowest value on the confidence interval for the brown bar is still way higher than the highest value on the confidence interval for the white bar. So it's very clear that brown really did win. But if you look at — oops, what did I do? I just tapped it and it went away. Okay, we're back. It went away again. That's what you want, right? You're right.
It's a plot. Okay, look at the bars on the right — is my microphone still working? Here, you can find points on the confidence interval for the white bar that are higher than some of the points on the confidence interval for the brown bar. So it might be the case that white actually won. Now, there are two kinds of studies that are very likely to be wrong. One of them is tooth fairy science, where you study something that doesn't exist. You can do science on the tooth fairy and calculate how much money she leaves the average kid in a rich family compared to a poor family, and you can get reproducible results that are statistically significant, and you can think that you've done good science and learned something about the tooth fairy. But you haven't, because there's no such thing as the tooth fairy. The other kind that's likely to be wrong is pragmatic studies of implausible treatments. Now, pragmatic studies are — hello? It's clicking in and out for some reason. Anyway, the alternative medicine people love pragmatic studies, because it allows them to skip the essential step of finding out whether their treatment actually works or not compared to a placebo. In a pragmatic trial, you can do something like comparing acupuncture for low back pain to low back pain that's conventionally treated. In an article that Dr. Novella and Dr. Colquhoun wrote about acupuncture, they called it a theatrical placebo, and by doing a pragmatic study, you allow the theatrics to shine. It gives full play to the interactions between the provider and the patient, and to the placebo effect and all of that. Okay, in this wonderful book, Snake Oil Science, R. Barker Bausell came up with a simple four-point checklist. Is the study randomized, with a credible control group? Are there at least 50 subjects per group? Is the dropout rate 25% or less? Was it published in a high-quality, prestigious peer-reviewed journal? If you can answer yes to all four, it's still no guarantee that the results are correct, but it makes it a little more probable. If you answer no to any of these questions, then it becomes much less likely that the results are credible. So remember: most published research findings are false. You should never trust one study. You have to wait until studies have been replicated and confirmed by other means, and you need to look at the whole body of data and check to see if there are any other studies that have shown opposite results. Remember Carl Sagan: extraordinary claims require extraordinary evidence. And always be skeptical. So — take that microphone away.

So, I'm going to talk about logical fallacies and some of the ways in which we all think badly, which is why most of alternative medicine exists, I think. This quote — I thought this was Groucho, some monkey business from Groucho, but it actually turns out to be Chico, from the funniest movie of all time, Duck Soup, where he says: well, who are you going to believe, me or your own eyes? What people do is believe what they see rather than think critically. I'm going to talk about the ways people think that make them think poorly about alternative medicine and medicine. When I started out doing this, I just thought people were stupid. I mean, I saw a pretty simple answer: they thought stupid things, they didn't know what they were doing. They believed in things like — oops, all right, this is not doing it right for me either. They believed in UFOs. They believed in homeopathy.
They believed in faith healing. People are stupid, people are uninformed; all you have to do is give them the right answers, give them the facts, and they will change their minds. It's as simple as that. And that, of course, is not true. People are not stupid. People are not ignorant dumbasses. The person who was, was me, for thinking that at the beginning. People are actually untrained in thinking, and don't think well naturally. It's the natural human condition to be a bad thinker, and consequently, when given things that are complicated, people think badly about them. And that's the problem. People rely on their experience and memory every day, in every way, to get things done. That's what I do. Where am I going to eat tonight? I know I've had good experiences at a certain pub. What movie am I going to see? I trust other people's opinions. We all use experience, and we all use the opinions of others, every day, to make judgments. And then we get to healthcare, and all of a sudden our way of looking at the world no longer counts. It's why I always say that the three most dangerous words in medicine are "I lack insh—" wait, who put that in there? Ah: "in my experience." And I say that every day to the house staff where I teach. And it's not experience with diagnosis that's the problem; it's experience with treatment. You cannot trust your experience in trying to figure out whether a treatment does or does not work. And the problem, of course, is that critical thinking is not the default mode of the brain. Nobody thinks critically at any time unless you force yourself to do it. It's an unnatural act. It's a human construct, like cement or Twinkies or plastic. Critical thinking does not exist in the wild. We have to make it. And that's the problem. That's what's on my MedicAlert badge, just in case you're interested. So how do we know what works? Well, you know, I always love this quote about how easy it is to fool yourself. But actually, as much as I love this quote about having to avoid fooling yourself, we don't think that way on a daily basis. Only Nobel Prize-winning geniuses think about whether or not they're fooling themselves. The rest of us are fooled on a daily basis. I think this quote is a little closer to reality about how we fool ourselves. I don't tend to read slides to people, but everyone knows this famous quote from a former president — I think he's a Who fan. So there are lots of logical fallacies. I hate reading slides to people. There's an old saying in Tennessee — I know it's in Texas, probably in Tennessee — that says: fool me once, shame on you. Fool me — you can't get fooled again. Roger Daltrey said it best. So there are a bunch of logical fallacies that you need to worry about. The biggest one that people fall for is association. We all say it: association is not causation. But we all think that when one thing follows another, the first caused the second. We think that every day. And you try to argue with a patient — well, discuss with a patient — that just because they did something and then got better, it doesn't mean they got better because they did it. It gets really nuts in the hospital: somebody has a fever, they give them something, the fever goes away, and they go: oh, I made the fever go away, I treated an infection. You didn't. You did A, then B happened, but the two aren't related. And that is the single most common fallacy, in physicians and non-physicians alike, among the logical fallacies that they use.
In the background there, where nobody can read it, is a list of all the logical fallacies — plus several more pages found on Wikipedia. It is jaw-dropping how badly we think, and how many different ways we can think badly on a daily basis when confronted with a problem. And most of the time we're unaware of it. My personal favorite logical fallacies — God, this is tiny. Confirmation bias: we look for things that confirm what we believe and ignore those things that disconfirm it. Fox News would not exist without confirmation bias. Illusory correlation, where we see — God, it's so tiny. I can't read it; I'm an old man, you know. What do you expect in your 50s? Jesus. Oh yeah — where we see a relationship that does not exist. And again, that feeds confirmation bias and the post hoc — sorry. The focusing effect, where we pay too much attention to one thing that occurs. And finally — oh, God, I'm sorry, I didn't realize the print was going to be so small when I did this. I need some glasses. Well, there's another one there — what does it say? The clustering illusion. Thank you. Believe it or not, these are my four favorite logical fallacies, and I can't even read them. But the clustering illusion: you see things all happening at the same time and you think, oh, this is cause and effect, when it's just the random noise of life. And people love to find patterns in random noise. Now, I'm terrible at logical fallacies. Whenever they do Name That Logical Fallacy on The Skeptics' Guide to the Universe, I never get them right. I can never figure out what they are in real time. That makes it difficult when you're committing logical fallacies yourself, because you likely won't notice them. And I've noticed that on the rare occasions when I do notice other people's logical fallacies, they rarely take it with grace and understanding when I point out, well, you know, you're using — they love that. They don't like having it pointed out that they're thinking badly. That's a problem for people who want to try to think critically and rationally all the time: most people don't want to hear from us when we tell them they're doing it. Nobody likes a know-it-all. And we all know what happened to Mr. Know-It-All. At one point he's doing a game show — I think Mr. Peabody there would probably kick ass in that competition — and the next thing you know, Mr. Know-It-All is dead. For the young people: this is from the Rocky and Bullwinkle show, from my childhood. But it's difficult when people are committing logical fallacies — pointing them out in real time, of course, they don't like having it pointed out that they're thinking badly. The other thing that combines with this — sorry, David — and I think this really explains surgical residents, is the Dunning-Kruger effect: people who don't know anything about a topic are the most sure that they've mastered the topic. They can't recognize that they're too ignorant to know that they don't know what they're doing. If you've ever seen a third-year surgical resident treat a staph infection in the hospital, you'll understand what I'm talking about. And it's really scary how they have no clue what they're really doing until they get a consult involved. But it's very real: the more you know about a topic, the less confident you are. It's a weird psychological effect in people. But Dunning-Kruger, when combined with the Peter Principle, I think pretty much explains history.
The Peter Principle is that in hierarchies, people rise to their level of incompetence and then stay there. Combine the Dunning-Kruger effect with the Peter Principle, and I think you understand the world better than by most other means. Now, the other thing that goes badly is memory — I'm doing lots of things quickly here. One of my epiphanies when I started to get into this was The Seven Sins of Memory by Daniel Schacter. He talks about the different ways that your brain misremembers the world. And I was quite astounded by this book, because I thought my brain was like a Super 8 film — that's how old I am; we had Super 8 cameras. You guys probably think in terms of YouTube or something like that, because I'm an old person. But it's amazing how bad our memory is and how poorly we remember things. For example, sin number one is that memory fades. And because it fades rapidly, what we do is construct our memories of what occurred from what we think should have happened, not what actually happened, which is pretty remarkable. We also misattribute things. We see things happen and we say, well, so-and-so did it — but it wasn't them. And we constantly do this with our memories of the past. This is, of course, a big problem in the law, where people will misremember who shot J.R., or whoever. Memory is suggestible. I always love the fact that researchers implanted in people's memories that they got hugged by Bugs Bunny at Disneyland. Of course, Bugs Bunny is not a Disney character, but still, a third of people thought, oh yeah, I remember being hugged by Bugs Bunny. If you put the right memory in someone, they will remember their past incorrectly. You really can implant false memories in people. And then, memory is biased. We remember things the way we want them, to make us look better. The movie Gigi, and the lyrics from Gigi, pretty much sum that up. If you watch the movie in this day and age, the whole "Thank Heaven for Little Girls" thing is really creepy, but it's still fun to watch these two go back and forth. And if you've ever been driving home from a party with your significant other, arguing about the events that took place — she remembers them totally incorrectly, and you remember them totally correctly, but you'd never guess from your conversation that you both went to the same party. We do that all the time; I call it the Gigi effect. We remember things in ways that make us look better. We remember things as they should have been, in ways that make us look better than we are. And of course memory has persistence: we tend to remember those things that are associated with stressors in our lives. My whole medical training is one big, long post-traumatic stress syndrome, and I tend to remember all the bad things. The homeopath remembers that their patient got better on homeopathy; I'm much more biased by these horrible memories of when things went bad, even when what I did was the right thing to do. You have to try not to be biased by that. That's five of the seven. The other two don't count, so I'm not going to mention them — but I know people in here are ticking off the seven sins of memory; those are the final two, actually. And this gets to the archetype of N-rays — I love N-rays. Back in the early 1900s, a Frenchman named Blondlot discovered N-rays. He made this machine — it had a spark, and it cast a shadow against the wall — and he could see these N-rays, and they made no sense.
And 200 physicists published 300 papers on N-rays, and they made no sense. It was like homeopathy or acupuncture. And then a smart-ass physicist from the United States named Wood came by, and when they weren't looking, he just disabled the N-ray machine. And people still saw the N-rays. And this is in physics, which is the hardest of the hard sciences. But you can do this in any study: if you think you're going to see something, you're going to see it. If you think you're going to have an effect, you will have that effect. And I think N-rays are really the archetype of this in the literature. And I always love the Penn & Teller show, Bullshit!, where they took a downspout from a house, bent it into the shape of a magnet, painted it to look like a magnet, and stuck it on this lady's arm, and she said, oh yeah, I feel better. Okay. And she did feel better. But people have an amazing ability to see things the way they want them to be. And finally, the thing that's really amazed me over the years in doing this is that facts just don't seem to matter to large numbers of people. That makes it difficult to have an argument or discussion with people who believe in homeopathy or acupuncture, because they're not interested in the facts of physics, the facts of physiology, the facts of the topic at hand. When I mentioned the word derp to my 16-year-old son, his look of disgust at me was amazing. And he said, dad, that's the most cringe-worthy thing you can talk about; never use the word derp in my presence again. So for those of you with teenagers at home: don't talk about derp with them. I think this gets down to my favorite review of my Quackcast on iTunes, where the guy said — god damn it, it's small — "harmful to the Brian." I apologize to any Brians in the audience. He gave me one star and said, I don't need to listen. He gave me one star and didn't even bother to listen to the podcast. I mean, that sort of sums up the ultimate in derpdom, if you ask me. And you really are going to fail in an argument with anyone — you know you're in trouble — when you start your sentence with "actually." "Homeopathy doesn't work." "Well, actually..." — you've lost the argument right there, because you're about to contradict the other person's whole basis of reality: experience, relying on other people, all the logical fallacies they just participated in. Once you say the word "actually," you've lost. You've got to find a better way. I'm trying to no longer use the word "actually," and I think if it disappeared from the skeptical lexicon, probably none of us would ever talk again. When you combine all of these ways of misthinking — and there are many more — all the logical fallacies, all the different ways that diseases get better anyway, regression to the mean, disease running its natural course, the psychological events — you can see why people will believe in different SCAMs. You can see why researchers will get results from SCAMs where none are there. You can see how we as human beings are designed — not designed, bad term in this group — are evolved to think badly about reality, and why critical thinking is not a natural process. So basically, human nature predisposes us to believe in SCAMs and to find that SCAMs are effective. And you can't change human nature. But you can be aware of it.
And if you're lucky and thoughtful and careful, you can compensate. And actually, I think that's what makes a skeptic: being a critical thinker. So that's the end. Standing ovation, please. That's my website, if you're interested. You can throw me roses; I will take them. Thank you very much.

Okay, thanks, Mark. All right. So we've learned a few things so far in our workshop. Our memories and our thinking suck, so we have to compensate for that with studies — but it's really hard to do a good study, and there are tons of ways in which studies can go awry, and they do. And evidence isn't enough; we need all of science. I'm going to talk to you about placebos. What are placebos and the placebo effect? In my experience, this is the most highly misunderstood concept in medicine. I don't know anyone outside of our small circle who really gets what placebo effects are. Yes, they are the big sugar pills, but we're really talking about all of the things that come into play when we talk about placebos. So here is a definition from the Program in Placebo Studies: "For many years, the placebo effect" — and already, just calling it "the placebo effect," which is shorthand, contains a misconception: it's not one effect, it's many effects — "was considered to be no more than a nuisance variable that needed to be controlled in clinical trials. Only recently have researchers redefined it as the key to understanding the healing that arises from medical ritual, the context of treatment, the patient-provider relationship, and the power of imagination, trust, and hope." That is the fairy-dust version of placebo effects. What is a placebo actually? Well, a placebo is an inactive treatment. That's the simple definition. But what we mean by placebo effects depends greatly on context. This is what I call the "part of this complete breakfast" fallacy. I don't know how dated this is, but the commercials for sugar-bomb cereals — cereals or pastries that are basically just sugar — would always present them as "part of this complete breakfast." Of course: because you have a nutritious breakfast, plus the pastry or whatever. It isn't an important part of the nutritious breakfast. It doesn't really add anything to it. So that's how the placebo effect is being marketed: part of this effective treatment. It's a completely irrelevant part, but you get to wrap whatever therapeutic ritual you want around non-specific placebo or therapeutic effects, and you get to market it as part of this complete treatment. In a clinical trial — this is important — placebo effects have a very specific operational definition. The problem is that a different definition gets used when the term is applied to the real world, without understanding what we mean by placebo effect. The placebo effect, singular, in a clinical trial means everything other than a physiological response to an active treatment. It's everything else possible that can affect the outcome, other than a treatment effect from the variable being isolated and studied. So if you're taking a medicine, the physical, pharmacological effect that medicine has on you is isolated from everything else, and everything else we call the placebo effect in that clinical trial. Does that make sense? So here is the core logic of a clinical trial.
You have one arm that measures the active treatment effect plus everything else, and then you have a placebo arm that measures just everything else. So: treatment effect plus everything else, minus everything else, equals an isolated treatment effect. That's how we know what the efficacy — the treatment effect — of the treatment being studied is. If you don't do that, you simply have no idea. (There's a toy simulation of this logic at the end of this passage.) Placebo effects are also highly variable — still talking about clinical trials here. This gives a little insight into the various things that make up the placebo effect. The more you spend on a treatment, the greater the placebo effects will be. What does that tell you? It tells you it's bias. That's what that tells you. The pill color affects the placebo effect. Pharmaceutical companies know this, so sleeping pills are blue, I think, and other pills are more effective if they're red. You're sort of maximizing the placebo effect. The more compliant you are with a placebo, the better an effect you'll get out of it: if you miss taking that sugar pill, you won't be as well off at the end of the trial as someone who took every one. The more invasive the treatment, the more placebo effect there is. This is due to what psychologists call risk justification, or expense justification. The more you invest in something, the more value you want to get out of it; you have to convince yourself that it was valuable, because you invested so much in it. The more dangerous, expensive, risky, or invasive the thing you're doing, the more you're going to be sure to convince yourself that there really was a benefit from it. Obviously, none of that affects the actual effectiveness of the treatment, your actual outcomes. This is all just biasing your assessment of how you're doing, or of the treatment. Now, the difference between therapeutic and placebo effects. Therapeutic effects are a specific response to an active intervention. There are also, however, non-specific responses to the therapeutic encounter. Those non-specific effects live in a gray zone between therapeutic effects and placebo effects. They're better thought of as placebo effects, because within the context of a clinical trial, if you're asking "does this drug work," then remember: everything other than the drug working will be measured as a placebo effect. That includes all of the very real and useful aspects of just interacting with a physician or practitioner. If you see somebody about a problem you're having, well, that gives you hope that the problem's going to get better, because you're taking an active step to make it better. You might get kind attention from that practitioner, depending on their bedside manner. All of those variables actually affect outcomes in clinical trials: the more empathic the practitioner is, the more time they spend with you — all of those variables. Those are all good; they're all meaningful. But they're non-specific. They're just what happens with the therapeutic ritual, from the very act of seeing a practitioner and being treated. They say nothing about the treatment itself. All of alternative medicine, pretty much, is a packaging of non-specific therapeutic effects as if they were a specific response to whatever ridiculous treatment ritual they're selling you that day. That's why we call acupuncture a theatrical placebo. It's just the ritual surrounding acupuncture that "works." Sticking needles into acupuncture points has zero effect.
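To make the arm-subtraction logic above concrete, here's a toy simulation in Python, with invented numbers: both arms receive the same non-specific "everything else" effects, only one arm receives the specific treatment effect, and the difference between the arms recovers that specific effect.

```python
# Toy simulation of the two-arm logic just described: both arms get all the
# "everything else" (non-specific/placebo effects, regression to the mean,
# reporting bias...); only the treatment arm also gets the specific effect.
# All numbers here are invented for illustration.
import random

random.seed(1)
N = 10_000
TRUE_EFFECT = 5.0     # specific (pharmacological) benefit, in made-up units
NONSPECIFIC = 8.0     # hope, attention, ritual, bias: present in BOTH arms

def outcome(gets_drug):
    everything_else = NONSPECIFIC + random.gauss(0, 4)  # noisy non-specific effects
    return everything_else + (TRUE_EFFECT if gets_drug else 0.0)

treatment = sum(outcome(True) for _ in range(N)) / N
placebo = sum(outcome(False) for _ in range(N)) / N

print(f"treatment arm mean improvement: {treatment:5.2f}")  # effect + everything else
print(f"placebo arm mean improvement:   {placebo:5.2f}")    # everything else only
print(f"difference (isolated effect):   {treatment - placebo:5.2f}")  # ~5.0
```

Note that the placebo arm "improves" too — by the full non-specific amount — which is exactly why an uncontrolled trial, or a pragmatic trial against no treatment, cannot tell you whether the treatment itself does anything.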
The thing that's really interesting is that 20 years ago, alternative medicine proponents were telling the world: give us the money to properly study these things, and we will show you that they are efficacious. That word has a very specific meaning. It's not the same thing as effective. Efficacious means that there's a specific physiological response to the treatment. Twenty years later, after the studies that have been done — billions of dollars from NCCAM, hundreds of published studies of many of the most popular modalities — they have not been able to show efficacy, an actual treatment effect, for any major CAM modality. Homeopathy, acupuncture, Reiki, therapeutic touch, whatever: none of them work. There's zero effect. So they've flipped the whole logic of clinical trials and medicine: that's okay, all of these things are still effective, because of these non-specific therapeutic effects — placebo effects. This is what we call the rise of placebo medicine: placebos are effective, powerful, healing medicine. No, they're not. It's just smoke and mirrors. It's actually mostly an illusion. This is now our favorite slide; you're always going to see this when we talk about placebo effects. This is a study done by a researcher who was trying to show that placebo effects are real, physical, physiological, mind-over-matter effects. So he compared real medicine to placebo medicine to sham acupuncture to no treatment — this was for asthma. And if you look at subjective outcomes — how do you feel? — all of the intervention groups reported doing better than the group that got no intervention at all. Of course. Because the no-intervention group has none of the non-specific therapeutic effects — mostly bias, reporting bias — none of that was there. But when you look at objective outcomes, there was absolutely zero difference between any of the placebo groups and no treatment. Only the real medicine had a treatment effect. That's because the real medicine has efficacy, and neither the placebo nor the acupuncture has any efficacy; they only have non-specific subjective effects. So this gets to the question: what is the placebo effect in clinical trials? Again, it's being marketed as if it's a real healing effect. But there's a systematic review of decades of research, poring through the clinical literature, looking for those trials where you had a treatment versus a placebo versus no intervention, so that you can compare placebo against no intervention. If you look at the studies that did that, what you find is that placebo effects have no important clinical effects. There really is nothing valuable to these placebo effects. They tend to be small, transient, and subjective. Patients do not get objectively better simply from mind-over-matter wishful thinking. It doesn't happen. And again, it's been studied, almost inadvertently, for decades, whenever a no-treatment arm was part of a clinical trial. Now, some proponents of placebo medicine have tried to answer our objections. One of the things we say is: placebos don't work; it's all smoke and mirrors, it's all illusion, it's regression to the mean, it's people who would have gotten better anyway — all the things that create the illusion of a treatment effect. They say: no, it's a real effect. Then we say: okay, well, in any case it's unethical, because you have to lie to patients. And they say: no, we're going to do a study to show that you can get a placebo effect without deception.
So they did a study where they told patients, "this is a placebo," gave them the treatment, and showed that you can get a placebo effect even when you tell people it's a placebo. But they didn't actually say it's an ineffective treatment. What they said was: placebo pills, made of an inert substance like sugar pills, "that have been shown in clinical trials to produce significant improvement in irritable bowel syndrome through mind-body self-healing processes." That is not "no deception," because the whole second half of that sentence is bullshit. So they didn't actually accomplish what they claimed; they were still being deceptive. Now, it's always more complicated than you think. The brain is an organ, and it's an organ that interacts with the environment; it's also, of course, integrated with and part of your body. It affects your body — there's a neuroendocrine system — and things like hope and expectation actually exist in your brain as chemical reactions. So it's actually plausible that there could be a physiological effect of "mind over matter," inasmuch as something that affects your mental state affects your brain's chemistry. Because you are your brain: your thoughts, your feelings, your expectations, your mood — those are all chemical reactions happening in your brain, and those chemical reactions are substantiated in reality. So yes, you can affect that, and sometimes there's a little crossover effect, where the dopamine being secreted because you're feeling better about yourself or about your treatment can have other measurable effects. But we are talking about very narrow, very circumscribed exceptions to the rule that placebo effects are transient, subjective, small, and not clinically valuable. Take Parkinson's disease: yes, you can reduce motor fluctuations with a placebo treatment in Parkinson's disease, because Parkinson's responds to dopamine in the brain, and dopamine can be released by feeling good. Okay, I buy that. There's a specific mechanism of action happening in the brain there. The problem is that these very narrow, interesting neurological exceptions are then marketed as: see, the placebo effect is real, and it works, and therefore it can cure cancer. No — that's a massive overgeneralization. Seriously, that's what they say: they go all the way to "it can cure anything, because it's real." It is real — but first of all, you have to ask about placebo effects in every specific context and indication. What they're doing is the exact equivalent of doing a study showing that a pill has a very specific effect — this pill treats this specific infection, because it's an antibiotic — and then saying: see, this pill works. It works! Therefore, it works for anything I choose to use it for. No: it works for the one thing the study showed it was useful for. CAM artists do this all the time. Chiropractic works — for what? That's like saying surgery works. For what? There has to be a very specific indication. I want to go through the rest of my slides very quickly, to leave some time — just to finish up on placebos. "Acupuncture works as a placebo." That is a non sequitur. That is an oxymoron. They are taking a negative study and saying it's really positive, because the placebo acupuncture is acupuncture.
No — placebo acupuncture isn't acupuncture. It's the ritual that surrounds acupuncture, minus the acupuncture. That's what it actually is. They've managed to flip the entire logic of clinical trials on its head. Don't buy it.

Okay. I want to leave about 20 minutes. What we're going to do here is a little bit of Q&A. We're going to leave about half of this time for you to ask us questions, but we're going to start by asking you a few. If you feel brave and want to raise your hand and answer, that's great. But it's really to get you all thinking: before we give you the answer, think it through yourself, to integrate the stuff we talked about today. Okay, here's a quote; tell me what you think about this. "I got rear-ended and had bad back and neck pain. Then I went to this acupuncturist, and by the next day the pain was all gone. She was great." This is a quote from an actual patient. There may be a logical fallacy lurking in that sentence. Does anyone want to hazard a guess? Post hoc ergo propter hoc: after this, therefore because of this. Of course, we don't know what would have happened if they hadn't had acupuncture. Most acute back and neck strains are going to get better on their own, no matter what you do or don't do. So that's an easy one to get you started.

Okay, this one is a bit longer. "My son was born 11 years ago with a very severe congenital diaphragmatic hernia. He spent two and a half months in the hospital. He came home on oxygen and a whole list of medications. He has constantly been advancing" — I assume that means getting better. "Four years ago, when I encountered homeopathy and felt its healing power, I took him to see a homeopath as well. That year he had walking pneumonia. He had a course of antibiotics, but his blood oxygen level would not improve. The doctor gave him five days of steroids. He improved. Two days after the steroid was finished, his oxygen levels dropped again. I really did not want him to get a long course of steroids, so I took him to" — I guess that's the homeopath. "Within seconds of receiving his remedy, his oxygen level went up and stayed up." That's a similar sort of logical fallacy, but longer, and there's something else happening in here. So yes, of course, this is all assuming causation from correlation, just in a more complicated relationship. But another way to look at it is that she's not isolating variables. There are all kinds of variables: steroids, other treatments, home oxygen, he had pneumonia, fluctuations in the symptoms. So you have this very chaotic system with multiple variables, and you're going to say that he got better over this period of time because of the homeopathy? There's absolutely no way you could make that conclusion. Anyone else want to make any other observations — Mark, do you want to? That's classic misattribution. Very good. The other thing that jumped out at me was "within seconds." Whenever you see that, it just screams placebo effect. My favorite personal experience with this was the patient with a clearly psychogenic neurological presentation who was convinced that she needed steroids — and her symptoms went away before the steroids had worked their way through the IV tube and made it into her arm. So she actually hadn't gotten the medication yet, and she was all better. So this clearly speaks to motivation and bias. She believes in homeopathy.
She is fearful about long-term steroids, and I would be too; nobody wants long-term steroids if they don't need them. The drugs have lots of side effects. So clearly there's a huge motivation to believe that the treatment is effective. Okay. Parents of recovered children, and I've met hundreds, all share the same experience of doubters and deniers telling us our children must have never even had autism, or that the recovery was simply nature's course. We all know better, and frankly we're too busy helping other parents to really care. Anyone want to guess who this quote is from? Jenny McCarthy, very good. How many hundreds of logical fallacies are lurking in there? There's at least one straw man, in that the critics don't say that their kids never had autism. That's potentially a straw man. I do think that in her specific case, she says that her son had autism, and it's not really clear whether that was the correct diagnosis. Well, there's also the issue of natural history: approximately 10 to 20% of autistic children will lose the diagnosis. Yeah, will advance. This is true generically: whenever you're treating any pediatric condition, especially a neurological one, kids get better as they age. Their function improves just as a natural consequence of aging. We often call this developmental delay. They're not flat; they're still developing. It's just that they're developing along an abnormally slow curve, and so we call that developmental delay. So of course, no matter what you do, the children are going to be better a year or two down the road than when you started the intervention. No matter what you do or don't do. So how do we know if that's a treatment effect or the natural course of the illness? Well, you've got to do double-blind, placebo-controlled trials and isolate the variables, because that natural course is also part of measured placebo effects, right? So she's basically denying that we have to take into account the natural history of the disease, that we have to properly establish the diagnosis at the beginning of any treatment trial; essentially she's saying, forget all those details that we use to know if something actually works or not, we know. So this is starting with the conclusion. All right, I think we have one more of those. There we go. Either homeopathy works or controlled trials don't. I think I remember who said that. Yeah, that's a homeopath. Surprise, surprise. Now, I don't think that's a logical fallacy that Mark mentioned. Does anybody want to hazard a guess? False dichotomy. Absolutely. There just may be other options beyond the two that are being offered. It's also possible that clinical trials work just fine when they're done well, and homeopathy doesn't work. Okay. People looking for natural cures will be happy to know there is one. Two words explain how it works: I believe. This is all from one newspaper article on placebo effects. It's the placebo effect: the ability of a dummy pill or fake treatment to make people feel better just because they expect that it will. It's the mind's ability to alter physical symptoms such as pain, anxiety, and fatigue. In just the past few weeks, the placebo effect has demonstrated its healing powers in tests of a new drug to relieve lupus symptoms. About a third of patients felt better when they got dummy pills instead of the drug. So, lots of problems with this. But this is the default understanding.
If you read anything in the lay press about placebo effects, or just talk to anybody, this is pretty much what they believe about placebo effects. Just a couple of red flags to point out. Whenever people start throwing around the term healing, that's a huge red flag; they're equating subjective symptom relief with healing. Again, there's the implication that there's a real physiological effect happening, but by definition there isn't, because then it would not be a placebo effect. There's the assumption that it's all mind over matter, not realizing that in this trial what they were measuring are things like regression to the mean, which means that whenever your symptoms are bad, they're going to regress to a more average state just by random statistics alone. That's a powerful statistical effect. Yeah, and you have to go through Harriet's twenty questions about the actual trial, including: was the placebo arm legitimate, was it a good comparison group, were there any artifacts that weren't being controlled for? But this is just a representation of how people think of and talk about placebo effects, and it's almost completely wrong.

Okay, this is an abstract of a study; this is now going to summarize Harriet's part of the talk. I'll read the whole abstract. Our systematic review of current acupuncture IVF research (IVF is in vitro fertilization) found that for IVF clinics with baseline pregnancy rates higher than average, 32% or greater, adding acupuncture had no benefit. However, at IVF clinics with baseline pregnancy rates lower than average, less than 32%, adding acupuncture seemed to increase IVF pregnancy success rates. We saw a direct association between the baseline pregnancy success rate and the effects of adding acupuncture. The lower the baseline pregnancy rate at the clinic, the more adjuvant acupuncture seemed to increase the pregnancy rate. So there's a massive, glaring problem there. This has been going around recently; it's in the news: oh, acupuncture works for in vitro fertilization, and this is the data. That's just a quick summary of the data. No fair if you read my article Monday, because David wrote about it. What do you think? Is this regression to the mean? Absolutely. So again, the clinics that were doing well at baseline had no effect from acupuncture; actually, there was a non-statistically significant trend toward a worse effect in the clinics that were doing better. Right, which is also just regression to the mean. So again, this is also what we call data mining, or cherry picking. You have a set of data, you look at all the data, and it's negative. I wonder if there's a way I could pull out pieces of this data and make it positive. People do this all the time. Half of the articles we write on Science-Based Medicine are about researchers who are doing this, in one of the couple of dozen ways there are to do it while hiding the fact that you're doing it. Harriet spoke about doing multiple comparisons; that's a way of data mining. And if you don't strictly control for the multiple comparisons statistically, it's just pareidolia; you're just looking for patterns in the data. So here, what do you think? If clinics had a below-average success rate, there's a lot of room for improvement there, isn't there? And maybe just instituting, I don't know, the strict protocols of the study, maybe that will get them back up to average. And that is being interpreted as a treatment effect. Whereas if you're doing better than average, it's not going to help you to be standardized to the average; in fact, just having to do the trial may take the edge off of your efficacy, if you're a cutting-edge, leading clinic that really knows what it's doing and now you have to standardize to the average. So everyone regressed to the mean a little bit, and they're just looking at the bottom end and saying, look at that, a real effect. No, they're regressing to the mean. It's all BS. All right. But that's what's being sold as a state-of-the-art acupuncture trial showing efficacy. And, you know, how many people in the public are going to be able to look at that? I mean, you guys can today, right? But how many people are going to look at that and go, oh, that's just regression to the mean, that's not a real effect? Not many.
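To make that regression-to-the-mean artifact concrete, here is a minimal simulation sketch in Python. It is our own illustration, not anything from the review itself: the 32% true success rate, 200 clinics, and 100 patients per clinic per year are all assumed numbers chosen for the demo.

```python
# Minimal sketch of regression to the mean (illustrative assumptions:
# every clinic has the SAME true success rate, and nothing changes
# between year 1 and year 2 -- no acupuncture, no intervention at all).
import random

random.seed(42)

TRUE_RATE = 0.32   # assumed true underlying success rate for every clinic
N_CLINICS = 200
N_PATIENTS = 100   # patients measured per clinic per year

def observed_rate():
    """One year's measured success rate: true rate plus sampling noise."""
    successes = sum(random.random() < TRUE_RATE for _ in range(N_PATIENTS))
    return successes / N_PATIENTS

year1 = [observed_rate() for _ in range(N_CLINICS)]
year2 = [observed_rate() for _ in range(N_CLINICS)]  # still no treatment

# Split clinics by their *observed* year-1 rate, the way the abstract does.
below = [y2 - y1 for y1, y2 in zip(year1, year2) if y1 < TRUE_RATE]
above = [y2 - y1 for y1, y2 in zip(year1, year2) if y1 >= TRUE_RATE]

print("below-average clinics changed by %+.3f on average" % (sum(below) / len(below)))
print("above-average clinics changed by %+.3f on average" % (sum(above) / len(above)))
```

The "below average" clinics improve and the "above average" clinics slip slightly, with no treatment anywhere in the simulation; that is exactly the pattern the abstract attributes to adjuvant acupuncture.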
All right. This is the Q&A, so this is an open field to ask us anything. Yep. Right. They actually did in the study, I forget what the... But repeat the question. The question was basically that he thought they didn't quantify it. Actually, they did; that was just a summary of the data. Yeah. So, you know, if I were the Mark Crislip of 20 years ago, I would say it's stupid. You can still say it's stupid. You can still say that. It's still stupid. But, all right. Yeah, we get that a lot. I've been hearing that question for a while: they're in the business of keeping you sick. No, no, no. Here: cancer. They don't want to cure cancer. They want... Yeah. I mean, that argument fails no matter how you think about it. First of all, physicians are people. How detached and evil would we have to be? Seriously. This is the conspiracy mindset, where people are cardboard cutouts, not actual people with real, complex feelings embedded in the real world. Nobody behaves the way the villains of conspiracy mongers claim they behave. It's just ridiculous. But also, even if our goal were to make as much money as possible, not being effective is not a good marketing strategy. We want to keep our patients healthy so that they live a long time and keep coming back to us. And it's not like our treatments are so effective that people will be in perfect health until they get hit by a bus. No, our patients just get older, and the older you get, the sicker you get and the more chronic problems you're going to have. It's just a fantasy. It doesn't make any sense no matter how you slice it. Yeah. Right. And I have to say, the whole western versus eastern... don't even get me started on that dichotomy. Yeah, that's another false dichotomy. We don't like that. There's just science. There's just medicine. It either works or it doesn't work. These false divisions are all about marketing. And currently the federal government withholds money from us, the hospitals, until we do all our quality measures right to improve your health. So actually, we make more money as hospital systems when we do all the things right to improve your health, and we lose money when we keep you sick. The whole system is set up to reward us for making you well, kind of. But in all fairness, that was not necessarily true until recently, for my career, 20 years' worth. Mm-hmm. Yeah. I believe that he believes that he's a real psychic healer, and there are two words that make me believe that very easily, and that's confirmation bias.
You can convince yourself of anything with confirmation bias, and you will feel that it's based on solid evidence, that this is what happens in the world, because our brains are very good at filtering and remembering information to confirm something that we believe or really want to believe. It's such a powerful effect. So there's no short path out of that deep, dark wood. There just isn't. You have to engage in a long-term strategy of teaching critical thinking. And I also wouldn't address his psychic healing head-on. Teach him critical thinking about other things that he doesn't have an emotional investment in, and then you have to just hope that he makes the connection, which is a long path. You might encourage him to make that connection at some point, but until he has the basic critical thinking skill set, it's really hopeless. Don't confront his belief. Just give him the skills as if they're... hey, by the way, what do you think about UFOs? And just talk to him about why you think that's BS. Whatever. Years ago, there was a palm reader, and I forget why, but he decided for two weeks to give everybody the opposite reading of what he thought he saw, and he discovered... That was Ray Hyman, yeah. Yeah, he discovered he was just as effective. You should ask him to do that with his healings. Just deliberately, for a week, ask him to tell people the opposite. He'll never do that. Hey, you never know. You might have ethical concerns about that, though. Yes. Yeah, so the Benedetti research really is... I'm sorry, I mean the process of doing that. So there's a classic series of experiments by a researcher, Benedetti, who came to the conclusion that placebo effects were real physiological effects. But they suffer from all the problems that I outlined in my section of the talk, in that he's extrapolating from subjective to objective, which you really can't do. And he's extrapolating from areas where there are exceptions, where there may be a neuroendocrine effect, for example. And then there are also the specific versus non-specific effects. Essentially what he's saying is, there are non-specific benefits from seeing a doctor. Yeah, no shit. We figured this out 300 years ago, or more, probably 2,000 years ago. Yes, people will actually get better if they're under the care of a physician, even if you're not doing anything but making them think about their symptoms. They treat themselves better; they're more compliant, generally, with other treatments. Those are the non-specific effects. So the problem is with using the placebo effect as if it were a single monolithic thing, when in fact it's multifarious; it comprises multiple things: subjective bias, non-specific effects, statistical anomalies like regression to the mean. When you conflate them, treat them as if they're one thing, and then wildly extrapolate to different contexts, that's where Benedetti went wrong. So he's doing astrology- or ESP-quality research but just doesn't realize it, because he's dealing with something that's very squirrely and he doesn't separate out those variables in a meaningful way. More recent research has actually tried to deconstruct the placebo effect into its components, and when you do that, you realize that it's really just all smoke and mirrors.
There's really no useful, meaningful, exploitable effect there beyond a good bedside manner, and you get that with anything; there's nothing specific about any particular treatment that gets you that. Right, right. You were first. Yeah, so it's very difficult. One of my colleagues was talking about physics: when you're talking about electrons and things that are measurable to three or four decimal points, even there, as with N-rays, self-deception is a problem. With medicine, it's really squirrely. It's very hard. There's no pain-o-meter, for example. It's just hard to objectively quantify certain things. And this, I think, is the big challenge for mental illness research, for things like depression and anxiety: it's hard to quantify. We have to use subjective markers for how they're doing, or quality-of-life-type markers. So you're always asking related questions, how do you feel, how are you doing in life, to figure out if this drug is having the effect you want on brain chemistry. Or you may use some biological markers, but we don't really know how they translate to the net effect on somebody's mood, for example. I also think that's why we can really only confidently measure big effects when it comes to things like anxiety and depression. One very common question I get is: what do you think about the meta-analysis that shows that antidepressants don't work for depression? And what everyone misses is that what the analysis showed is that antidepressants don't have a statistically significant effect for mild to moderate depression. They leave out the "mild to moderate." For severe depression, there's no question that they're effective, because the effect size is bigger. It's just really hard to measure small effect sizes when you're dealing with something as subjective as how you feel, and so it's hard to reject the null hypothesis with mild depression, but not with severe depression. So the studies just have to be really rigorous, because all of the biases and flaws in doing clinical trials get magnified when your endpoint is that subjective. It's not impossible. It's just much more difficult. You have to be much more skeptical, and I think the studies have to be much more rigorous before the results are believable.
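As a rough illustration of that last point, here is a toy power simulation in Python. The numbers are invented for the sketch (50 patients per arm, a noisy subjective score with a standard deviation of 10, a "small" true effect of 2 points versus a "large" one of 8); they are not taken from the meta-analysis being discussed.

```python
# Toy power sketch: the same trial design detects a large true effect
# easily but usually fails to reach significance for a small true effect,
# even though both effects are perfectly real.
import math
import random

random.seed(1)

def trial_significant(effect, n=50, sd=10.0, z_crit=1.96):
    """Simulate one two-arm trial; return True if a two-sided z-test
    (alpha = 0.05, known sd, for simplicity) rejects the null."""
    drug = [random.gauss(effect, sd) for _ in range(n)]
    placebo = [random.gauss(0.0, sd) for _ in range(n)]
    diff = sum(drug) / n - sum(placebo) / n
    se = sd * math.sqrt(2.0 / n)
    return abs(diff / se) > z_crit

for effect, label in [(2.0, "small effect (think mild depression)"),
                      (8.0, "large effect (think severe depression)")]:
    wins = sum(trial_significant(effect) for _ in range(2000))
    print("%s: significant in %.0f%% of 2000 simulated trials" % (label, 100.0 * wins / 2000))
```

With these made-up numbers, the small-effect trials come out "statistically significant" only around one time in six, so a real but small benefit looks like "no effect," while the large effect is detected almost every time.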
So the question is, how do you handle patients who are terminal or chronic, where they can't be cured, basically, by medicine? They're frustrated, and they make a false equivalence between what we do to manage those conditions and alternative medicine. You guys want to take that? I'll be happy to. That's a tough one. That's a tough one there. I'll say one thing: that's not the reason why most people use CAM. If you look at the surveys, they use it because they're ideologically predisposed to it; they don't do it because they're frustrated with the system, 85% of the time. The other 15% fall into the category you're talking about, where they're frustrated because we don't have the technology to cure what ails them. Chronic pain patients fall into that category a lot as well. So again, there's no good answer, and you have to really individualize it, I think. I usually try to feel out my patients for where they're really coming from and address their concerns. I think, first of all, if they're in your office and you're a science-based practitioner, they're already selected as being somebody who's willing to listen to the science end of things. So I personally just give patients my unapologetic science-based assessment and say: I'm a science-based practitioner; I looked at the evidence for acupuncture and pain, and it doesn't work, so I don't personally recommend it. And then, if there's anything to be cautious about, say it's not a risk-free treatment; it's also helpful to say, don't spend a lot of money. I give them some basic common-sense cautions, and they appreciate that, and that has some effect, but there's no magic answer. There's no way to steer 100% of patients away from false hope when it's being offered to them. They're just too vulnerable. You can't expect even perfectly rational, intelligent people, when they're that vulnerable (that's what vulnerable means), not to fall for that. I don't know of any magic way to steer 100% of patients away from that; you just give them your best advice. And that's why we advocate proper regulation and quality-control standards in medicine, because you can't expect vulnerable patients to be able to sort through very subtle, clever, deceptive marketing. And our time is up, so thank you all. Appreciate it.