I enrolled in the study to see if I couldn't make the opening joke as funny as possible, but unfortunately I think we're in the control group. If you have some sort of accident or illness and have to go to the hospital for treatment, your doctor has some leeway in choosing what sort of treatment you'll receive, but there are some well-defined checkboxes they must tick, a standard of care you must receive before you leave their charge, lest they be guilty of malpractice. Most of these items are totally obvious. If you're bleeding, they have to stop the bleeding. If you're having trouble breathing on your own, they have to put you on a ventilator. If you have some sort of infection, they have to give you antibiotics. In these cases it's pretty clear what's wrong and what should be done to fix it, but in some instances, the standard of care mandates treatment that isn't particularly well justified. For example, in pediatric cavernous sinus thrombosis, doctors are required to give their patients heparin. In every hospital, if a kid comes in with a clot in their brain, every single time, they get heparin. We do have good evidence showing that that's the right move for adults, but for kids, we have no idea whether anticoagulants, drugs meant to prevent clotting, like heparin, are helpful. No clue. We don't know if every time we hook them up to an IV bag full of heparin, we're putting them at increased risk of bleeding for no good reason. We may be increasing costs for hospitals and adding an unnecessary step for already overworked nurses, without the least indication that we're achieving anything, yet we keep doing it, and it's highly unlikely that we'll ever stop. That seems a little bonkers. It's 2018. We have 50 years of history illustrating the effectiveness of evidence-based medicine. 
How can there be mandatory medical treatments without the backing of randomized controlled trials demonstrating their efficacy? How is it even possible that physicians are prescribing treatments without any clue as to whether or not they work? Well, we're caught in a bit of a trap here, one with both legal and ethical ramifications that make it nearly impossible to escape. Legally, if the doctor chooses not to give the patient heparin, they're no longer protected by the rubric of the standard of care, and can't claim to have done everything they should have done. If anything happens to that patient, if they suddenly have some complication or their condition worsens, even if there's no demonstrable causal relationship between the doctor's decision and those symptoms, the doctor has no legal leg to stand on against accusations of malpractice. Essentially, a doctor would have to volunteer responsibility for any negative outcome the patient suffers in their care from that point forward, even if they have reason to suspect that the heparin isn't really doing anything for them. Ethically, it's even hairier. If you have any familiarity with medical science, your first instinct might be the forehead-slappingly obvious one. Do a randomized double-blind trial. We have some medicine and we don't know if it works or not. This is the whole reason randomized double-blind trials were invented in the first place. Just do that. Well, unfortunately, if there's any suspicion that a treatment might be doing patients some good, you can imagine how it's morally dicey to start handing out a placebo instead, even in the service of confirming its efficacy. In order to perform a double-blind trial, you're going to have to deny some patients a substance that has been verified to have only a few minor drawbacks, but is suspected of increasing their chances of surviving a dangerous medical issue. 
That's a little like holding some defibrillator paddles while someone's having a heart attack and saying, well, hang on a sec, let's see if they get better on their own. Still, placebo-controlled trials are the gold standard for establishing the effectiveness of medical treatment. In order to perform such experiments ethically, researchers must demonstrate that their experimental subjects fall into at least one of four situations, which give them adequate moral grounds for denying their patients medication that might work. Again, it might not, but it might work. First, if there's no existing intervention that has the slightest chance of treating the disease. Right now, there's no known prophylaxis for Alzheimer's dementia. Exercise and diet definitely help, but there's no pill that has the slightest suggestion of preventing its onset. If a medical researcher has even a small hunch about an intervention that might help, they're in the clear to run a placebo trial on it, because, hey, we've got nothing. Second, if there are no serious ramifications for holding off on the standard treatment or skipping it entirely. If you're already bald, trying some long shot medication for reversing male pattern baldness for a few months wouldn't be the end of the world, and you might discover something better than the current best option. Third, if you've got really solid reasons for needing a placebo control in your study, and you're not likely to hurt anyone by depriving them of some existing treatment. Antidepressants alter brain chemistry in significant ways, so you generally want to ensure that any potential new antidepressant is at least somewhat effective before allowing its use in the general population, which means running a placebo-controlled study. 
The effectiveness of existing antidepressants varies wildly from patient to patient, and some have only a slightly better track record than placebos anyway, so running a trial of some new antidepressant on some volunteers probably won't put them in any worse position. And finally, if you can do your placebo trial without interfering with any existing interventions and have good reasons for needing to establish the effectiveness of a treatment. Giving the antiretroviral medication AZT to the children of HIV-positive mothers during childbirth in order to possibly prevent the spread of the disease doesn't interfere with any of the other stuff that we normally do in that situation. But AZT isn't a drug that you want to administer willy-nilly just for the hell of it. It has some nasty side effects and contributes to antiviral resistance. There are good reasons to establish whether or not it's really effective. Those are really the only cases where medical ethicists have found sufficient justification for potentially short-changing patients. Giving heparin in the event of pediatric cavernous sinus thrombosis doesn't fit any of those criteria. It works in adults, so we have reason to suspect that it might work in children. If it does work, skipping it could have deadly consequences, and although there are several known drawbacks associated with its use, there isn't enough at stake to say that we truly need to know whether it's effective, to the point of potentially endangering the life of a patient. Although it would certainly be cool to know the results of such a trial, we just don't have the ethical grounds to do a study and find out for sure. But if you think about it, there's nothing really limiting this rubric to the practice of medical science. 
Any scientific study investigating phenomena which have direct import for human welfare – education, governmental policy, humanitarian relief efforts, or economic practices – has the same potential moral quandary. If we have any suspicion that something might be helpful, might make someone's life easier, or ease some amount of suffering, how can we justify withholding it, even for the purposes of scientifically verifying that it does help? Take something as seemingly trivial as A-B testing of website interfaces. Software developers routinely implement randomized trials of various potential website designs, comparing user feedback for an A-version and a B-version of the website to see which they prefer. It seems totally kosher, right? But let's run that practice through those tests for medical research and see if there's any cause for concern. First, if the A-B test is for a totally new feature that you have no clue how to make functional, you're probably justified in testing a few different versions of it to find your footing in a new space. But if you're tinkering with some part of your website that your users already rely on to see if it could be better, it may be difficult to justify subjecting some of them to a potentially crappy new version. Second, if the A-B test is for some feature that isn't time critical, then even if some version of the site is totally broken and unusable, it's not unreasonable to ask your more frustrated users to leave and come back after the trial is over. On the other hand, if it's a feature that they need to access quickly and reliably, maybe screwing around with it, even with the best of intentions, isn't the best plan. Third, in that vein, if establishing the most effective version of some web feature is important and justifies the short-term inconvenience of A-B testing without substantial risk for the users, it can be a reasonable subject for a test. 
If a website's authentication methods aren't frustrating hackers as much as they probably should, giving some users the option of using a new security measure, like two-factor authentication, and seeing how it plays out, isn't the worst idea in the world. Abandoning passwords altogether and using some completely new and untested authentication mechanism for some users, on the other hand, that's probably the worst idea in the world, right up there. Fourth, if you can run an A-B test alongside the default webpage, allowing users to opt out of the test if they don't like either new design, you're probably in the clear. You get your numbers on which users prefer which version. They get to use the site however they want to. Locking them into a test they might not want to participate in isn't just morally dodgy, it's a good way to lose them to Facebook. Now, these criteria obviously aren't the be-all and end-all of research ethics for medicine or anything else. There are volumes of morally relevant factors to consider in every experiment, and the potential benefits of research often push us to weigh those factors carefully and rigorously. But it can be distressingly easy to overlook the possibility of causing harm with the simple act of creating a control group. We've learned that good science always controls for whatever it can, and who doesn't want to do good science? Even if confirming these hypotheses beyond any reasonable doubt would be intensely satisfying for our curiosity, sometimes ethics demands that we just keep doing what we've always done in the past, never knowing if it really helps, but comforting ourselves with the possibility that it might. Can you think of a scenario where merely creating a randomized controlled trial would violate ethics? Please leave a comment below and let me know what you think. Thank you very much for watching. Don't forget to blah, blah, subscribe, blah, share, and don't stop dunking.