Hello and welcome. I'm Dr. Harriet Hall, also known as the SkepDoc, and this is the first in a series of 10 lectures about science-based medicine. In this first lecture, I'll explain what science-based medicine is, why it's important, and how it's different from evidence-based medicine. In the second lecture, I'll talk about medicine that isn't based on science: complementary and alternative medicine, or CAM. In lectures three through eight, I'll cover individual topics in CAM that claim to be science-based, but that are not: chiropractic, acupuncture, homeopathy, naturopathy, energy medicine, and miscellaneous others. In lecture nine, I'll talk about some of the pitfalls in clinical research, why we can't take every published study at face value, and how to tell if a study is good science. And in the final lecture, I'll talk about how science-based medicine is misunderstood in the media and in politics. I've prepared a course guide to accompany these lectures. It summarizes the important information from each lecture and includes references and suggestions for further reading. Now, the subject of these lectures is science-based medicine, not CAM, so why devote so much time to discussing CAM? Well, there's method in my madness. There are two reasons to concentrate on CAM. First, I think that understanding what good science isn't is the best way to understand what good science is. Most of what I've learned about what constitutes a good scientific study, I learned by studying the flaws in a lot of very bad studies. Bad examples can be very educational. If you haven't learned to spot what's wrong in flawed science, how could you ever be sure that something is good science and doesn't have any of those flaws? Have you ever read books about neurologic abnormalities, like Oliver Sacks's book The Man Who Mistook His Wife for a Hat? 
When part of the brain is damaged by a stroke or a tumor or trauma, strange things happen, abnormal behavior and loss of certain abilities that we take for granted. Those abnormalities are clues to understanding how that part of the brain normally works. Studying brain pathology helps us understand normal brain function. And the principle of learning about the good by studying the bad works for almost anything. For instance, if you wanted to learn how to apply makeup, it would be helpful to see examples of people who've used too much makeup or the wrong kind. So that's the first reason. The second reason is that much of CAM falsely claims to be based on science. And we need to understand why mainstream science rejects those claims. Jenny McCarthy says vaccines cause autism. Her autistic son, Evan, is all the evidence she needs. She says, my science is Evan and he's at home. That's my science. Well, we can easily dismiss claims like that. As Christopher Hitchens famously said, what can be asserted without evidence can be dismissed without evidence. But it's not so easy to dismiss a homeopath who shows us his evidence in the form of a long list of published peer-reviewed scientific studies that he thinks have validated homeopathy. We need to understand why his evidence doesn't stand up to scrutiny. By the end of this course, when you encounter someone who's promoting homeopathy or maybe selling a miracle diet supplement through multilevel marketing, if he throws a scientific study at you and says, here's proof that it works, you'll know why his proof is no such thing and you'll have a rational basis for rejecting his claims. Now, science is a wonderful thing. It has improved our lives, it's lengthened our lifespans, and it's given us reliable tools to understand how the world works. But for most of human history, we had to make do without science. Evolution shaped our ancestors' brains to help them survive in a world without science. 
A world without newspapers, books, or the internet, without even written language. They had only two ways of learning about the world: their own experiences and the experiences of others. If you ate an unfamiliar berry and got sick, you'd learn not to eat those berries, if you survived. Or a friend might tell you not to eat them because they made him sick, and you might be able to profit from his experience. Sometimes what your friend told you might be wrong, but in general, paying attention to the stories of your friends was likely to help you survive. To survive as hunter-gatherers on the African savanna, our ancestors needed to learn from stories, and they also needed to make fast decisions. So they evolved to be very good at pattern recognition and at jumping to conclusions. Have you ever thought you saw an animal and then taken a closer look and realized you'd been fooled by a stump or a rock or something? The other day I thought I saw a dead bird out on our deck, but it turned out to be just a crumpled brown leaf. We look for recognizable patterns, but we can be fooled by objects that have a similar profile. Now let's say you're one of our ancestors on the savanna. You see something over there off in the distance in the bushes, and it looks sort of lion-shaped, but it might just be an illusion created by branches and shadows. If you see the pattern as a lion and you're wrong, you run away when you didn't need to, but there's no real harm done. On the other hand, if you miss a real lion, your mistake could be fatal. Richard Wiseman explained: The ability to find patterns is so important to your survival that your brain would rather see a few imaginary patterns than miss genuine instances of cause and effect. That's the price you pay for being so amazing the rest of the time. But we see imaginary patterns everywhere. For instance, a baseball player notices that he was wearing a certain pair of socks when his team won. 
So he develops a superstition and he keeps wearing his lucky socks. His team doesn't always win when he wears those socks, but it wins often enough to reinforce his superstitious belief. Evolution shaped our thinking processes for maximum survival advantage, but it was a mixed blessing. There were advantages to our skills, but there were also downsides. We got very good at pattern recognition, and that helped us make sense of our environment. It helped us identify threats like a lion in the bushes, and it helped us learn the effects of poisonous plants or medicinal plants. But it gave us a tendency to sometimes see patterns that aren't real and to develop superstitious beliefs about cause and effect. We got very good at making quick decisions. This allowed us to react in time, to run away from the lion now, instead of stopping to think about it. But it made us more likely to jump to conclusions before we had all the evidence. Evolution equipped us with emotions that were very handy for survival. Fear motivated us to run away from the lion fast, but emotions can interfere with judgment. For instance, the fear of death can lead cancer patients to grasp at straws and go to Mexico to try implausible, potentially dangerous treatments like laetrile. Morgan Levy said, thinking like a human is not a logical way to think, but it is not a stupid way to think either. You could say that our thinking is intelligently illogical. Millions of years of evolution did not result in humans that think like a computer. It is precisely because we think in an intelligently illogical way that our predecessors were able to survive. So evolution equipped us with minds that kept us alive in a pre-scientific world by a strategy of paying attention to what others told us, of seeing patterns that might not always be real, and of jumping to conclusions. Then we discovered science, and what used to be a survival advantage became a handicap. 
The scientific method is a marvelous set of tools that allows us to ask specific questions about how the world works and to get reliable answers. And science is a way of correcting for the kinds of mistakes our minds are prone to make. But our minds evolved to help us survive in a world without science, and thinking in a scientific way doesn't come naturally. Our minds still work the old way. We prefer stories to studies. We prefer anecdotes to analyses. If our neighbor had a bad experience with a Toyota, we're likely to remember his complaints and not buy a Toyota, even if Consumer Reports tells us that their tests found it was the most reliable brand. It isn't logical to value hearsay over data, but we're just doing what comes naturally. It takes a lot of training and discipline to overcome our natural tendencies. How do we decide if a medical treatment works? We're impressed by anecdotes, and we jump to conclusions based on our observations. If your friend says homeopathy worked for me, you want to try it. When you try it and your symptoms go away, it's only natural to assume that it really works. But sometimes we get it wrong. Your symptoms might have gone away for other reasons that had nothing to do with the treatment. You have no way of knowing whether your symptoms might have gone away if you hadn't used any treatment at all. Assumptions can be wrong. We make mistakes, and no matter how smart we are, every one of us can be fooled. Patients and doctors alike were fooled for centuries by bloodletting. Everybody knew that it worked. The medical textbooks explained how it worked to balance the humors, and everyone knew it saved lives because they'd seen it with their own eyes. But when someone finally thought to compare patients treated with and without bloodletting, they discovered they'd been killing more patients than they helped. Big oops! Even modern doctors can be fooled. 
Not long ago, doctors used to do an operation for heart disease where they opened the chest and tied off chest wall arteries to divert more blood flow to the heart. They had an impressive 90% success rate. A doctor named Leonard Cobb was skeptical, so he did an experiment with a sham surgery control group, where he just made the incision in the chest and closed it back up without actually doing anything inside. He discovered that just as many patients improved after the fake surgery. In this diagram, the two bars on the left showed that chest pain improved as much with the sham surgery, light blue, as with the real surgery, dark blue. And the two bars on the right showed that the sham surgery, light blue, reduced the need for nitroglycerin pills for pain even more than the real surgery. The differences were in opposite directions and they weren't statistically significant. Essentially, the fake surgery worked the same as the real surgery. So they thought they had this great operation with a 90% success rate, but they were wrong. When they put it to a scientific test, it turned out the operation accomplished nothing at all, except maybe to suggest to patients that they would feel better. So this was another big oops, and doctors stopped doing that operation. There are a whole lot of reasons people can come to believe an ineffective treatment works. Here are some of them. Don't bother trying to read them all now. I'm going to go over each one of them separately. One, the disease may have run its natural course. A lot of diseases are self-limiting. The body's natural healing processes restore people to health after a time. Wounds heal. A cold goes away in a week, whether you treat it or not. A few years ago, I developed shoulder pain. I was unable to raise my arm without pain and I couldn't reach behind my back to fasten my bra. After about a year, one day I suddenly realized I was back to full function and had no pain at all. My treatment? 
Nothing but a tincture of time. Whatever was wrong had run its natural course and resolved on its own. If I had been using any treatment at the time, I would have been fooled into thinking that the treatment had worked. Even serious life-threatening diseases can sometimes resolve without treatment. Before we had antibiotics to treat pneumonia, it wasn't always fatal. Some people survived, so before we could be sure antibiotics worked, we had to compare the survival rate with and without antibiotics. Two, many diseases are cyclical. The symptoms of many diseases fluctuate over time. Fevers go up and down. People with arthritis have bad days and good days. The pain gets worse for a while and it gets better for a while. If you use a remedy when the pain is particularly bad, it was probably about ready to start getting better anyway, so the remedy gets credit it doesn't deserve. That's known as regression to the mean. Here's what happens. Sarah has osteoarthritis pain in her knees. Here's the natural course of her pain during a random month. The first few days of this particular month she has little or no pain. As the middle of the month approaches, her pain gets progressively worse, and then towards the end of the month it subsides. She's not likely to take any pain pills at the beginning of the month when the pain is minimal, but by the middle of the month she needs relief, so she starts taking pills. And look what happens. As she takes those pills, the pain subsides. It's only natural for her to assume that the pills worked. But she doesn't know that she's only looking at the right side of this graph with the left side cut off. Her pain did exactly what it would have done if she hadn't taken any pills. But she doesn't have any way of knowing that. Three, we're all suggestible. Studies show that we think wine tastes better if it comes from a bottle with a higher price label. If we're told something is going to hurt, it's more likely to hurt. 
If we're told something is going to make it better, it probably will. We all know this. That's why we kiss our children's owies. Anything that distracts us from thinking about our symptoms is going to help. Four, the wrong treatment may get the credit. If you took mega doses of vitamins along with your chemotherapy, you may think it was the vitamins that cured your cancer when it was really the surgery and the chemo. Sometimes the credit is due to something you don't even realize. Maybe something changed in your life at the same time. One writer called this the spaghetti factor: maybe you happened to change brands of spaghetti sauce at about the same time, and it was some herb in the new brand that caused the improvement. Five, the diagnosis may be wrong. You've all heard those claims that X cured my cancer. When you track down the details of those miracle cancer cures, quite often it turns out that they never even had a biopsy. So we can't know if they ever really had a cancer to cure. Even if they had a biopsy, the results could have been wrong. Pathologists have been known to make mistakes in reading the slides, and there could have been a mix-up in the lab where the sample was actually from another patient. Or the prognosis may be wrong. They said I only had six months to live, but thanks to treatment X I'm still alive three years later. But doctors can only provide an educated guess about the future, and they may have guessed wrong. All the doctors really knew was that the median life expectancy for that condition was six months. But that means half of the patients die before six months, and half of them live longer than six months. Sometimes much longer. Six, temporary mood improvement may be mistaken for a cure. If a practitioner makes you feel optimistic and hopeful, you may think you feel better when the disease is really unchanged. Seven, psychological needs affect behavior and perceptions. When people want to believe badly enough, they can convince themselves that they've been helped. 
If they've invested time and money, they don't want to admit that it was wasted. We see what we want to see. We may not remember accurately what symptoms we had when. We remember things the way we wish they had happened. When a doctor is sincerely trying to help a patient, the patient feels a sort of social obligation to please the doctor by getting better. And he may convince himself that he's better when he really isn't. Patients have even been known to deny the facts, even to refuse to see that the tumor is still getting visibly bigger. In one case, a patient believed he was cured. His doctor said, look at the x-ray, the tumor is still there. The patient refused to believe it. He insisted that that was just a shadow where the tumor used to be. Eight, we confuse correlation with causation. Now this is a big one. Improvement that follows a treatment doesn't necessarily mean that the treatment caused the improvement. It's called the post hoc, or post hoc ergo propter hoc, fallacy. That's Latin for after that, therefore, because of that. The rooster crowed, and then the sun came up. Therefore, the rooster made the sun come up. Well, it's easy to see what's wrong with that reasoning. But look at this. I took a pill, I got better, therefore the pill made me better. It seems compellingly obvious to us that it was the pill that made us feel better. And maybe it was the pill. But maybe it was just a mistake, like the rooster. We don't stop to think that we might have felt better for some other reason. We don't stop to rule out all other possible explanations. We jump to conclusions, like the man who trained a flea to dance when it heard music. And then he cut the flea's legs off one by one until it stopped dancing. And he concluded that the flea's organs of hearing must be in its legs. Correlation is not causation. People noticed that the number of autism diagnoses increased about the same time the number of vaccines for infants increased. They saw a correlation and they assumed causation. 
Vaccines must cause autism. But there was also a correlation with the number of pirates. Scientists studied vaccines and autism, and they didn't find any evidence that vaccines caused autism. Science didn't study pirates to see if they caused autism. I love this graph. It shows an almost perfect correlation between autism diagnoses and the sales of organic food. Does anyone think organic food causes autism? Correlation doesn't equal causation. To find out if a correlation really indicates a cause, a scientist named Austin Bradford Hill came up with some criteria for determining causation. I'll illustrate with the example of smoking and lung cancer. There's no way we could do a randomized controlled study of smoking. You can't divide people into two groups and make one group smoke and the other not. And you certainly couldn't have a blinded study, because they would know whether they were smoking. So we had to approach the question by other routes. One, there was a temporal relationship. People smoked first and got lung cancer later. Two, there was a strong relationship. Lots more smokers than non-smokers got lung cancer. Three, there was a dose-response relationship. The more cigarettes smoked, the higher the rate of lung cancer. Four, the results of various kinds of studies were all consistent. Five, the mechanism was plausible. We know there are cancer-causing compounds in cigarette smoke. Six, alternate explanations were considered and ruled out. Seven, they did experiments where they exposed lab animals to cigarette smoke and the animals developed lung cancer. Eight, specificity. Cigarettes produced specific kinds of lung cancers. Nine, coherence. The data from different kinds of epidemiologic and lab studies and from all sources of information held together in a coherent body of evidence. Richard Feynman said, science is what we've learned about how to keep from fooling ourselves. We fool ourselves because of the way our brains work. We prefer stories to studies. 
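The organic food graph makes the point vividly, and a toy simulation shows why such graphs are so easy to produce: any two quantities that both rise over the same period will correlate strongly, whether or not either causes the other. The numbers below are made up purely for illustration, not real data.

```python
# Toy illustration: two unrelated series that both trend upward over time
# produce a very high correlation coefficient. All figures are invented.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical yearly figures: autism diagnoses per 10,000 children,
# and organic food sales in billions of dollars.
diagnoses = [10, 13, 17, 22, 30, 41, 55, 68, 80, 94]
organic_sales = [5.0, 6.1, 7.9, 9.7, 12.2, 14.8, 17.9, 20.4, 23.0, 26.7]

r = pearson(diagnoses, organic_sales)
print(f"correlation: {r:.3f}")  # close to 1.0, yet neither causes the other
```

The same trick works with pirates, cheese consumption, or anything else that happened to trend over the same decade, which is exactly why correlation alone can never establish causation.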
We have cognitive flaws and biases. We have perceptual failings. Just think of optical illusions. And we're influenced by emotions and by psychological motivations. Biases and cognitive errors lead us to false conclusions that we wish to be true. Science is the only method that systematically controls for our biases and cognitive errors and allows us to obtain reliable information. The author of this book, a doctor named Druin Burch, has called it medicine's beautiful idea: the idea that no matter how certain we are that a treatment works, no matter how reasonable a belief sounds, we still have to test it. Today, EBM stands for evidence-based medicine, and it has been defined as the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. But it wasn't until 1991 that the term evidence-based medicine first appeared in the medical literature. Before that, we had other EBMs. First, there was expert-based medicine, where we unquestioningly accepted whatever the expert said. Aristotle said men had more teeth than women. He was the expert, and for centuries everyone took his word for it. No one bothered to look in mouths and count teeth. And then there was experience-based medicine, where a doctor could say, in my experience this treatment has worked well for my patients. For centuries, doctors had experience with bloodletting to balance the humors, and they were thoroughly convinced that they were saving lives when they were actually killing patients. My colleague Mark Crislip says that the three most dangerous words in medicine are "in my experience." Experience is so very compelling, but it is so very often wrong. So we learned to rely on scientific evidence rather than on expert opinion and experience. The first modern clinical trial was done by James Lind of the Royal Navy in 1747. Back then, the sailing ships went out for years at a time and sailors had no access to fresh foods. Sailors developed scurvy. 
They became weak and unable to work; they had internal bleeding, fever, jaundice, convulsions, bone pain, and bleeding gums; their teeth fell out, and they died a painful death. Lind had heard reports of successful treatment with various remedies, and he decided to test them to find out if any of them really worked. He divided a group of 12 sick sailors into six groups and gave each group a different test remedy: a quart of cider; 25 drops of elixir of vitriol, that's sulfuric acid, and I sure hope he diluted it in water before he gave it to them; six spoons of vinegar; half a pint of sea water, yuck; two oranges and a lemon; or a spicy paste plus a drink made from barley. The winner was, ta-da, two oranges and a lemon. Lind had shown that citrus fruit worked to cure scurvy, but he didn't have any idea how it worked. Today we know that scurvy is due to a deficiency of vitamin C, but vitamins wouldn't be discovered for another two centuries. Lind thought scurvy was caused by putrefaction, and he thought citrus fruits worked because the acid in the fruit prevented putrefaction. He tried sending ships out with bottled juice to save storage space, but that didn't work: the bottling process heated the juice and destroyed the vitamin C, so the Navy went back to using fresh fruit. Anyway, the first clinical trial had found an effective treatment for scurvy. British sailors became known as limeys, and medical science was off and running. Some doctors still think that in my experience is better than evidence. Here's one: Jay Gordon, pediatrician to the stars. He caters to parents who want an excuse to delay or omit vaccinations. He says, my very strong impression is that children with the fewest vaccines, or no vaccines at all, get sick less frequently and are healthier in general. I truly believe they also develop less autism. Well, his personal experience and uncontrolled observations have led him to false beliefs. Controlled scientific studies have proved that his beliefs are wrong. 
Yet he stubbornly prefers his own very strong impression to the scientific evidence. Evidence-based medicine was a great idea: treat patients according to the evidence from scientific studies whenever possible. Unfortunately, the practice wasn't as good as the promise. We coined the term science-based medicine to point out the flaws in the way evidence-based medicine was being implemented. In 2008, a group of us skeptical doctors started the science-based medicine blog under the leadership of Steven Novella. If you aren't familiar with the blog, I hope you'll check it out. It's www.sciencebasedmedicine.org. Science-based medicine is all one word. Here is the published hierarchy of evidence for evidence-based medicine. The quality of evidence increases as you climb the pyramid. The red bar near the top is the RCT. That's the gold standard of clinical science, the randomized controlled trial. Ideally, it uses a placebo control and is double-blind. Typically, they compare an active drug to an inactive placebo, maybe a sugar pill made to look just like the active test drug. In a double-blind experiment, the patient doesn't know what he's getting and the person giving him the pills doesn't know what he's giving. The pills are prepared by a third party who isn't otherwise involved in the research. They are coded, and the code isn't broken until the trial is finished. The patients are randomly assigned to two equal groups to minimize any influence from other factors. The results are analyzed to see whether there's a statistically significant difference. Even the best RCTs are not definitive. There are a lot of things that can go wrong that could lead to false results. I'll be covering that in lecture 9. We should never rely on the results of a single study. 
We should wait for confirmation by other studies. The lowest level on the pyramid is in vitro research. In vitro means in glass, like test tubes or petri dishes. You can kill cancer cells in the lab with practically anything, for instance, bleach or a blowtorch. But what happens to cells in a petri dish doesn't necessarily happen when they're part of a living body. Using a blowtorch on a patient wouldn't be a good idea. The next level is animal research. But the results of animal studies may not be applicable to humans. For instance, aspirin causes birth defects in mice, but it doesn't do that in humans. The next level is ideas, editorials and opinions, which are basically speculation by people who may be more or less qualified to speculate. Then case reports, descriptions of a single patient. Then case series, reports of several patients with the same condition. Then case control studies, for example, where patients with lung cancer are compared to patients who don't have lung cancer to see whether there are more smokers in the lung cancer group. Then the cohort study, where groups of people are followed over time, for instance, groups of smokers and nonsmokers to see which group eventually develops more lung cancer. The EBM hierarchy puts the gold standard RCT near the top of the pyramid. But there is an even higher level. All too often different RCTs get conflicting results. To break the tie we can do a systematic review or a meta-analysis to combine information from several trials to reach a more accurate conclusion. They are the highest level of evidence. But they can only be as good as the individual studies. And sometimes they reach false conclusions because of GIGO: garbage in, garbage out. And one really well designed large study can sometimes trump systematic reviews of lower quality studies. Evidence-based medicine rigorously limits itself to considering only the evidence. But something is missing from this pyramid. 
There is no place in this hierarchy for evidence from basic science showing that a treatment is impossible or highly improbable. It doesn't give any consideration to prior plausibility. They tend to give equal credence to a study of a new antibiotic and a study of a homeopathic remedy. They don't seem to care that homeopathy is incompatible with our knowledge of physics, chemistry and biology. They don't seem to understand that we would need far more evidence than a clinical study before we could accept that a homeopathic remedy was effective. It would require massive evidence, sufficient to prove that all that other scientific knowledge was wrong. Now, we don't reject new ideas out of hand. But we do have to take prior plausibility into account. We have to require stronger evidence for less plausible claims. Carl Sagan said it best: extraordinary claims require extraordinary evidence. If I tell you I saw a robin in my backyard this morning, that's an ordinary claim. You'd probably accept a cell phone photo as evidence, or you might just take my word for it. But if I tell you I saw a fire-breathing dragon in my backyard, well, that's an extraordinary claim. And anyone with a shred of common sense would demand much more proof, not just a picture that might have been photoshopped, but something more substantial, like dragon scales or perhaps some dragon poop that could be analyzed for DNA. If a study shows that a slightly different new version of an old antibiotic works better for pneumonia, that requires an ordinary level of evidence. But if a study shows that standing on your head and whistling Dixie cures cancer, that would require a much higher level of evidence. Evidence-based medicine doesn't work well for CAM. It takes a one-tool-fits-all approach. Evidence-based medicine focuses on clinical trial results. It simply looks at the published clinical research and accepts the findings. And that works pretty well for conventional medicine. 
But it doesn't work well when you're dealing with things that lie outside the scientific paradigm, or when the scientific plausibility ranges from minuscule to non-existent. My colleague David Gorski said, I suspect that the originators of evidence-based medicine never thought of the possibility of evidence-based medicine being applied to hypotheses as awe-inspiringly implausible as those of CAM. It simply never occurred to them. They probably assumed that any hypothesis that reaches the clinical trial stage must have good preclinical evidence and basic science evidence to support its efficacy. Science is a system of inquiry, not a collection of facts and not a belief system. It's not just one single method but a toolkit of methods to figure out how reality works. This diagram shows how the scientific process works. It starts with observations, and then a hypothesis is formed that might explain the observations. If the hypothesis is true, we should be able to use it to make accurate predictions. We test to see if it does. If we observe that the predictions are not accurate, we have to modify the hypothesis. If the hypothesis does allow us to make accurate predictions, then evidence eventually accumulates and theories can be formed. The word theory is very often misunderstood. In science, a theory doesn't mean a guess or an opinion. A scientific theory is an established body of knowledge, like germ theory or the theory of evolution. Science is a disciplined process for testing hypotheses. It progresses slowly but inexorably as new discoveries build on old ones. It doesn't claim to have the truth. It reaches provisional conclusions based on the best available evidence, and it's always ready to change those conclusions if new, better evidence comes along. Studies are repeated, and they're either confirmed or discredited. A body of evidence gradually accumulates to the point that experts in the field can reach a consensus. 
Science makes mistakes, but a self-correcting system is in place. It's a collaborative effort. Studies are peer-reviewed, and scientists scrutinize each other's work. Science can be relied on to give us knowledge that works, and it can be used to make accurate predictions and to guide medical treatment. By the way, there is no such thing as Western science. There's only one science, and it's the same everywhere in the world. Some people would have us believe that there are other ways of knowing, like intuition, tradition, revelation, the stoned thinking favored by Andrew Weil, visions, dreams, extrapolations, speculation, and personal anecdotal experience. Well, those other ways of knowing have their place. They're useful when it comes to knowing if you're in love, or knowing whether you can trust your neighbor, or knowing if a picture is beautiful. But when it comes to knowing whether a medical treatment is effective, there's only one reliable way of knowing: controlled testing using scientific methods. Those other ways of thinking can lead people to strong beliefs, but we can't trust those beliefs to reflect reality. Only the scientific method can give us reliable knowledge. No matter how convincing a claim sounds, it must be tested before we can accept it as true. No other way of knowing shows the kind of progress that science does, with new knowledge building on old. No other way of knowing leads everyone to agree, and no other way of knowing allows us to make accurate predictions. They say science doesn't know everything, but as the Irish comedian Dara Ó Briain pointed out, science knows it doesn't know everything, otherwise it would stop. And just because science doesn't know everything doesn't mean you can fill in the gaps with whatever fairy tale most appeals to you. Science works. Science is probably the best thing humans have ever invented. As Bill Nye the Science Guy says, science rules. 
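Before summing up, it's worth noting that the "extraordinary claims" principle discussed earlier can be given a rough numerical form with Bayes' theorem: the same positive clinical trial should move our confidence far less for an implausible treatment than for a plausible one. The sketch below uses invented, purely illustrative numbers for the trial's power, its false-positive rate, and the prior probabilities.

```python
# Bayes' theorem sketch of prior plausibility. Assumes a hypothetical trial
# with 80% power and a 5% false-positive rate; the priors are made up.

def posterior(prior, power=0.80, false_positive=0.05):
    """P(treatment works | positive trial), by Bayes' theorem."""
    true_pos = power * prior            # works AND trial comes out positive
    false_pos = false_positive * (1 - prior)  # doesn't work, positive anyway
    return true_pos / (true_pos + false_pos)

plausible = posterior(prior=0.50)     # e.g. a new variant of a proven antibiotic
implausible = posterior(prior=0.001)  # e.g. a remedy that contradicts basic science

print(f"plausible claim:   {plausible:.2f}")    # about 0.94
print(f"implausible claim: {implausible:.3f}")  # about 0.016
```

One positive study takes the plausible treatment to near certainty, while the implausible one remains overwhelmingly likely to be a false positive, which is exactly why science-based medicine asks for far more than a single clinical trial before accepting claims like homeopathy.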
In this lecture I've talked about what science-based medicine is and why evidence-based medicine is not enough. When we set out to evaluate health claims, practices and products, we want to use the best scientific evidence available in the light of our cumulative scientific knowledge from all relevant disciplines. Science-based medicine emphasizes that some evidence is better than other evidence. It's very aware of all the ways clinical studies can go wrong and lead to false conclusions, especially when improbable treatments are being tested. It considers prior plausibility, it puts clinical trial evidence into context with preclinical research, and it looks for consistency with the rest of the body of scientific knowledge. In the next lecture I'll talk about what happens when medicine isn't science-based.