John is the Director of Pediatric Bioethics at Children's Mercy Hospital in Kansas City, the Director of the Children's Mercy Bioethics Center, and Professor of Pediatrics at the University of Missouri-Kansas City School of Medicine. At Children's, John has developed a new NIH-funded program in pediatric ethics and genomics. John received his MD from the University of Pittsburgh, did his residency at the Children's Hospital National Medical Center in Washington, D.C., and then came to Chicago to train in the MacLean Center Ethics Fellowship Program. Between about 1988 and 2008, John worked closely with me as the Associate Director of the MacLean Center, and then went off to Kansas City. John is one of the few people who has served as President of both the American Society for Bioethics and Humanities and the American Society of Law, Medicine and Ethics. His research is on ethical issues in innovative therapies in pediatrics and decision-making for babies at the borderline of viability, on issues like ECMO, bone marrow transplant, growth hormone, and liver transplant. Some of his books include Do We Still Need Doctors?; Neonatal Bioethics: The Moral Challenge of Medical Innovation, which he wrote with Bill Meadow; and The Last Physician: Walker Percy and the Moral Life of Medicine. I'm delighted that John is here today, and he will speak to us on the topic of scientific uncertainties, mystical certainties, and the ethics of comparative effectiveness research. Dr. Lantos.

Thanks, Mark. Isn't this just the best conference any of you ever go to, like, ever? Thanks both to Mark, of course, and to the prior speakers, who set me up beautifully. But I also just want to acknowledge the MacLean Center advisory board members who are here too: Rachel Kohler, who's president, Stan and Ed Dudley, and many other members of the advisory board, who work closely in... I mean, if you can imagine a more thankless task than giving Mark Siegler advice.
I think they deserve a big round of applause. When I say people set up this talk nicely: they talked about the doctor-patient relationship, about costs, shared decision-making, empowerment, and solid evidence for the treatments that we use. My talk is really about how we develop that solid evidence, and particularly about the recent controversy over the SUPPORT study. The other theme, which David Rubin just talked about, was the power of the anecdote. Last August, in the sort of middle chapter of the controversy over the SUPPORT study, there was an open meeting at the Department of Health and Human Services where people were invited to give testimony. There were lots of heavy hitters speaking at that meeting, like Jeffrey Drazen, editor of the New England Journal, George Annas, the New England Journal's legal affairs correspondent, and the head of the AAMC. But the story that made all the newswires was this family, Shawn Pratt and his daughter, who held a press conference outside Health and Human Services. He said to reporters, tell me that the SUPPORT study did not hurt Dagen, my daughter, in any way. And Dagen stood there by him, wearing braces on her legs from the cerebral palsy that resulted. She had been born at 25 weeks and enrolled in the SUPPORT study, and her dad said: the SUPPORT study looked good on paper. We were guaranteed the study wouldn't hurt Dagen in any way, and we were shocked to learn that the care she received was based not on what she needed, but on some protocol. They turned her into the subject of an experiment instead of a participant in a study. I think the central question, which I'm going to try to address, and which I think is the central question facing OHRP and the regulation of research, is this: what would it mean to tell people they're not harmed by research? And specifically, can we tell Shawn Pratt that his beautiful daughter, Dagen, was not harmed by being in the SUPPORT study? Now, OHRP says no.
They said the study was risky and parents should have been warned about those risks before they decided whether to enroll, because the study involved changing the treatment of enrolled infants from treatment according to the standard of care, with attendant changes in the risks and potential benefits. In particular, they said there were reasonably foreseeable risks of blindness, neurologic damage, and death that parents should have been informed about. The advocacy group Public Citizen went beyond the informed consent form and said that the very design of the study was so flawed that it was both illegal and unethical. Any study, they said, comparing different oxygen levels would be noncompliant with HHS regulations, and the SUPPORT study was highly unethical because, in their words, it exposed 1,316 extremely premature infants to increased risks of either death or retinal damage. The New York Times agreed with this assessment and, in a lead editorial, echoed Public Citizen's concerns and called the failure to disclose these risks startling and deplorable. The title of the editorial was "An Ethical Breakdown." The New England Journal, interestingly, disagreed. In an editorial of their own, they said the informed consent documents spelled out the risks clearly and succinctly and addressed prevailing knowledge fairly and reasonably, and that the OHRP investigation was itself the ethical problem, because it cast a pall over the conduct of clinical research to answer important questions in daily practice. The NIH itself weighed in, making this a fratricidal battle within the Department of Health and Human Services, since both the NIH and OHRP report up to the Secretary, at that time Kathleen Sebelius. Here the head of the NIH, Francis Collins, and the head of NICHD, the National Institute of Child Health and Human Development, wrote in the New England Journal that the babies were, of course, at risk because of their prematurity, but that their care was never compromised for the sake of the study.
Bioethicists, it turned out, were split about 50-50. A group of 45 bioethicists wrote a letter to the New England Journal saying that OHRP's conclusion that the study exposed subjects to additional risk was not supported by the evidence. Another group of 44 promptly fired back, saying the potential risks and benefits could not be said to be the same as those for babies receiving care outside of the study. And so our field was deeply divided. I was on this side; we had 45, they just had 44, so if you just did a vote, we would win. But that may be a topic for one of Peter Ubel's studies of social networking or something. So which was it: an important, well-designed study conducted to the highest ethical standards, or an egregious violation of ethics and federal regulations? The interesting thing is that this sort of thing has happened before, many times over the years, and it raises a fundamental question about whether it's possible to do clinical research in a way that is respectful of persons. One of the famous stories in pediatrics was written up by a guy named William Silverman, who was one of the pioneers in the study of retinopathy of prematurity, then called retrolental fibroplasia. He wrote a book about it 35 years ago, actually in 1980, called Retrolental Fibroplasia: A Modern Parable. And he describes a baby he took care of in 1950, before there were NICUs, before there were ventilators, before we could measure oxygen saturation, when the treatment for tiny babies with breathing problems was to put them in an incubator and give them 100% oxygen, something that led to a very high incidence of retinopathy and blindness. And nobody knew how to treat this. He described a premature baby girl born to a woman who had had five miscarriages. They put the baby in the incubator, and at eight weeks the ophthalmologist did an eye exam and diagnosed developing retrolental fibroplasia.
Silverman had heard about and wanted to try a previously unstudied treatment, ACTH, a steroid hormone, on the rationale that this is a connective tissue disease, premature babies might not have enough of the hormone, and nothing else seemed to work. So, what the hell, we'll give it a try. They tried it, and the baby's eyes started to get better. So they lowered the dose, and the eyes got worse; they raised the dose again, and the eyes returned to normal. The treatment was stopped, and the infant gained weight and was sent home. And that was a beautiful thing. And they decided that this should be the standard of care. He reported how, over the next year and a half, 31 babies at Babies Hospital were treated with this, and 25 of them, 80%, left the hospital with normal eyes. An affiliated hospital across town had seven infants with RLF who were not treated with it, and six of them became blind. So they said: this seems to be the standard of care. And many doctors said it would be unethical to do a randomized controlled trial. But Bill Silverman was a pioneer of evidence-based medicine, and he said we really need to do a prospective randomized trial to settle the question. This was in the early 50s; you may recall that randomized controlled trials were really first invented, at least in their modern sense, just in the post-war period. The very first randomized trial in the United States was a VA trial for TB that failed because they couldn't get enough patients enrolled. The Brits succeeded with a similar trial. The most famous randomized controlled trial, of course, was the polio vaccine trial in 1954. But what you may not remember is that 33 states prohibited randomization because it was seen as unethical. Instead, what they did in those states was vaccinate second graders but not first and third graders. Whether that's block randomization didn't seem to occur to people, but 11 states did allow randomization.
Actually, there was a famous randomized controlled trial from before the 1950s. I'm sure most of you remember James Lind's study of treatment for scurvy in British sailors, where 12 sailors were divided into groups of two and received either cider, weak acid, vinegar, seawater, nutmeg and barley water, or oranges and lemons. After six days, the two who got the citrus treatment were back on their feet. Here's a report from that famous study of James Lind giving the scurvy-stricken British sailors citrus. And this, of course, is why British sailors were referred to as limeys. There's the polio study. Eventually a million and a half kids participated; only about a third of them were randomized. Because, as Silverman wrote in his book, randomization is pretty ethically complex. When he was doing the ACTH study, they did randomize patients, but he wrote that they didn't tell the parents, because the thought of random allocation to treatments in which blindness or life are at stake is, he wrote, at first flush a repugnant one. We are all prone to feel that a well-meaning guess is somehow not as cold and unfeeling as the flip of a coin. This trope reappeared in the lawsuit that resulted from the SUPPORT study, in the charges against the investigators in the class action suit against the University of Alabama at Birmingham. They wrote that the study was unethical because the amount of oxygen initially received by each subject was determined by the flip of a coin. In his comments at the HHS meeting, George Annas, who really should know better, said randomization always increases risk. How worried are we about the loss of the physician's individual judgment when nobody really knows what the right answer is? We're really worried about it. The doctor's judgment matters. We think medical education means something. Here are the results of Silverman's randomized trial: with ACTH, 33% of babies got eye disease; with placebo, 22% of babies got eye disease.
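The difference between flipping a coin for each baby and the kind of allocation the talk gestures at (vaccinating whole grades, or balancing arms in blocks) can be sketched in a few lines. This is a purely illustrative sketch of the two randomization schemes, not anything drawn from the trials discussed; the function names are mine:

```python
import random

def coin_flip_assignment(n):
    """Simple randomization: each subject's arm is an independent coin flip,
    so the two arms can end up unequal in size just by chance."""
    return [random.choice(["A", "B"]) for _ in range(n)]

def block_randomization(n, block_size=4):
    """Permuted-block randomization: within each block of `block_size`
    subjects, exactly half go to each arm, keeping arm sizes balanced."""
    assignments = []
    while len(assignments) < n:
        block = ["A", "B"] * (block_size // 2)
        random.shuffle(block)  # random order within the block
        assignments.extend(block)
    return assignments[:n]

random.seed(0)
simple = coin_flip_assignment(20)
blocked = block_randomization(20)
# The blocked scheme always yields exactly 10 per arm; the coin flips may not.
print(simple.count("A"), blocked.count("A"))
```

The vaccinate-every-other-grade scheme the states used is closer to systematic allocation than to true block randomization, which is presumably the speaker's point.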
The mortality was also lower in the untreated group, and three-quarters of the babies with early RLF showed spontaneous regression to normal with no treatment. So the issues aren't new. One of the things Silverman wrote when he was discussing this is that medicine is in the midst of an identity crisis as it struggles to become scientific, which he described as a slow and uneven shift from mystical certainty to scientific uncertainty. And he said doctors are viewed suspiciously when they ask questions, a switch from their accustomed role as providers of answers. So what was the state of knowledge about oxygen when the SUPPORT study was designed? Well, there was a paper in Pediatrics in 2002 by one of the leaders in the field, and he said: fifty years of uncertainty, and we still know very little about how much oxygen infants actually need or how much it's wise to give; the depth of our ignorance is really quite embarrassing. There had been many small retrospective studies of different oxygen saturation targets, and a Cochrane meta-analysis of these showed that restricted, compared with liberal, oxygen had no significant independent effect on death rates in premature babies. So the best available evidence at the time suggested that there was no risk of mortality. And so leaders in the field decided to do a large, unprecedentedly large, international collaboration to try to settle the question of whether targeting a slightly lower oxygen saturation, within what was then the generally accepted standard of care, that is, 85 to 95 percent, could lower the rate of eye disease. There were actually three separate studies: SUPPORT in the US, one called BOOST in the UK, Australia, and New Zealand, and the Canadian Oxygen Trial, which interestingly also had centers in the US, Argentina, Israel, and Europe. All together there were 82 sites, which is to say 82 IRBs reviewed these studies, and they randomized almost 5,000 babies.
The studies were done, and the controversy followed. So let me come back to the question that Shawn Pratt threw out at the HHS public meeting and see if we can answer it: was Dagen Pratt harmed by enrolling in this randomized trial of two different oxygen saturation targets? Well, let's look at her outcomes. The first and most important, as you saw on that slide: she survived. The main focus of criticism was that the mortality rate was higher in one arm than the other, so Dagen clearly did not suffer that harm. More generally, there's a question of whether, overall, babies who were enrolled in the trial had a higher mortality rate. It's clear that there were differences between the arms, and this somewhat complicated slide shows the three different international trials, COT, SUPPORT, and BOOST; the outcomes in terms of mortality for the two different oxygen saturation targets, the lower one 85 to 89 and the higher one 91 to 95; and then the combined overall mortality rate. The focus of concern in the United States was really that middle line, where the mortality difference went from almost 20% to about 16%, a difference in mortality that achieved statistical significance at a p-value of .05. But look at the overall mortality rates: 16%, 18%, and 19.5%. Compare those to what we know about mortality rates for babies born at 24 to 27 weeks, which come from national databases in quite a few different countries. Canada reports one; SUPPORT actually reported outcomes for babies who were eligible and approached for consent but then didn't enroll; and there's also a database from the National Institute of Child Health and Human Development Neonatal Research Network. The Swedes do a similar thing, and actually there are about 10 others, but the slide got too busy. What you'll notice is that there's no dataset of babies at 24 to 27 weeks that has ever reported a mortality rate under 20%.
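The arithmetic behind a p-value like the one cited for that mortality difference can be illustrated with a standard two-proportion z-test. The counts below are purely hypothetical, chosen only to mirror a roughly 20% versus 16% split in arms of about a thousand babies each; they are not the trial's actual data:

```python
import math

def two_proportion_z_test(deaths_a, n_a, deaths_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a, p_b = deaths_a / n_a, deaths_b / n_b
    pooled = (deaths_a + deaths_b) / (n_a + n_b)      # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))        # two-sided tail area
    return z, p_value

# Hypothetical: 200/1000 deaths in one arm vs 160/1000 in the other
z, p = two_proportion_z_test(200, 1000, 160, 1000)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
```

With counts like these, the test rejects at the conventional .05 threshold even though the absolute difference is only four percentage points, which is why a difference of this size in a trial of this scale can reach statistical significance.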
And to go back to the mortality rates in the three studies: all of them had an overall mortality rate of under 20%. So it's an interesting question whether somebody who was in the study and died could be said to have been harmed by the study. What about eye disease? Dagen Pratt had severe retinopathy. Babies in both arms of the SUPPORT trial had lower rates of retinopathy than babies who were not in the trial. And here's the data. In the low oxygen arm, about 8.5% of babies had severe retinopathy. In the high oxygen arm, it was about 18%. And in the comparable database of babies not enrolled in the study but born at the same gestational age, it was about 24%. Dagen had cerebral palsy. I'm not going to belabor this, but there was no difference in cerebral palsy between the arms of the study, and again, the cerebral palsy rates for babies in the study were lower than in comparable databases. Trust me on that; I'm going to skip over it fast. So the bottom line is that the only unexpected finding in SUPPORT was that babies in the high oxygen arm had better survival rates than any group of 24 to 27 weekers ever reported in the world literature. When I teach this to medical students, I say: get out your iPads right now, and if anybody can find one, tell me about it. Nobody's found one yet. So it's hard to conceptualize this as a risky study when everybody in the study, on average, did better than people who were not in the study. But here's the thing: the controversy is not about measurable risk. So when the 45 bioethicists who say there was no risk argue with the 44 who say that there was risk and cite these statistics, they say it doesn't matter. It's about something deeper and more intangible. It's about the idea that researchers have different obligations than clinicians, and that the compromised loyalty of the researcher is really what puts research subjects at risk.
OHRP said this in their letter, the initial finding of noncompliance that they sent to the University of Alabama. Ultimately, they said, the issues come down to a fundamental difference between the obligations of clinicians and those of researchers. Doctors are required to do what they view as best for their individual patients. Researchers do not have that obligation. Or Ruth Macklin and Lois Shepherd in the American Journal of Bioethics: doctors, not researchers, have a fiduciary obligation to pursue the patient's best interests above all other considerations. Or George Annas at the public meeting: a physician must be guided by a fiduciary obligation; a researcher has no such obligation. So the real fear about these studies is not so much actual, measurable harm to babies. It's the dark and conflicted heart of the medical researcher. And the question is: are these people who want to study this angels or devils? Are they doing a service to humanity, or are they exploiting vulnerable individuals in reckless pursuit of this thing that they value more highly than patients' well-being, that is, knowledge? This is crucial, because actual studies can be safer or riskier than conventional therapy. But if the problem is the loyalty of the researcher, then all studies put people at risk, because the subjects have no protector. And once you become aware of this trope, you start to see it in discussion after discussion of what's problematic about medical research. Steve Joffe and Franklin Miller, Joffe a pediatric oncologist, Frank Miller a bioethicist and philosopher at the NIH: in the context of medical care, beneficence requires the health provider to do what's best for patients; in clinical research, investigators have to promote social value by generating scientific knowledge. Or Sam Hellman, former dean of the University of Chicago, in the New England Journal of Medicine: researchers are required to modify their ethical commitments.
It's not even a choice; they're required to. Or Larry Churchill, who phrases it almost like a sexual perversion: the researcher-subject relationship compels and urges certain priorities, or inclinations to perceive and act in certain ways. The researcher is seen as driven to pursue knowledge, committed to a utilitarian ethic, and thus in need of constant oversight. Researchers are to be treated like addicts who can't control their own moral impulses, who will pursue truth at any cost and will thus exploit patients and harm them. But is it true? I mean, it's a serious charge. And it is, I think, the understanding of research upon which we've built our current system of IRB oversight, where every tiny change, I mean, if you move a comma in your informed consent form, you need permission to do that, because that might be a way to... But researchers themselves see it a little differently. What they say they're doing is trying to do what's best for their patients in situations where they don't know what's best for patients. So Norm Fost imagines having this conversation with a patient he's recruiting: it would not be responsible to give an unstudied treatment to you in an uncontrolled way, because neither you nor I nor future patients would ever know whether it helped or hurt. Or Keith Barrington, a neonatologist who was involved in the Canadian Oxygen Trial: yes, I have a fiduciary obligation to provide optimal treatment; I also have a moral obligation to know what optimal treatment is, and I have a moral obligation to keep trying to find out what the best treatments may be. So researchers see themselves as less compromised than they are seen by current regulations. And the other aspect of this, which I think is even more important in understanding what's at stake in the SUPPORT controversy, is that it assumes a kind of purity about clinicians that doesn't bear scrutiny. Physicians have all sorts of conflicts of interest in acting out their fiduciary obligations.
Physician-induced demand: physicians get paid for doing more. There's what we heard about this morning, defensive medicine: docs don't want to get sued. People can be incentivized by drug and device companies or by different payment systems. And as Dave Wendler and other NIH bioethicists have said, clinicians have appropriate interests that compete with providing the best care, including earning a living, helping other patients, conserving resources, training new clinicians, et cetera. But I think perhaps the most important factor that's overlooked in the sorts of critiques that have been leveled at the SUPPORT study is one that was essentially discovered by the first winner of the MacLean Center Prize, Jack Wennberg, who in the late 70s and early 80s discovered the phenomenon of small area practice variation. As I'm sure many in this room know, when he tried to publish that in medical journals, nobody would take it, because they said it simply couldn't be true. So he had to publish in Science and Nature and second-tier stuff. But he showed things like this, and this is more current data, but it's based on the older data: if you look at different counties in New England and just look at how many kids get tonsillectomies, as recently as 2007 to 2010 the rate varied by a factor of four. In Littleton, 11 per thousand; in Burlington, under three per thousand. Now, it may be that there's an epidemic of strep in Littleton that hasn't spread to Burlington and that all these tonsillectomies are medically indicated, but it seems unlikely, and Wennberg has found this everywhere he has looked. So the interesting question is: are the children who are going to see their doctors, whose doctors' clinical judgment is determining their care, at higher risk, the same risk, or lower risk than they would be if they enrolled in a clinical trial in Littleton, Burlington, and St.
Johnsbury, where they were randomized to an aggressive or less aggressive approach to tonsillectomy that resulted in a similar distribution of outcomes? Everything Wennberg looks at, he finds the same thing. Which is riskier: undisclosed and unstudied idiosyncratic practice variation, or deliberate, formal randomization with careful monitoring and evaluation? Now, OHRP, after the public meeting, promised to come out with new guidelines to clarify what is required of researchers. And this is my favorite slide in the slide set, just because this is supposed to be the clarification that helps you decide what you need to disclose: if a research study examining standards of care includes as a purpose evaluating identified risks associated with those standards of care, the identified risks associated with the standards of care being evaluated that are different from the risks of the standard of care at least some subjects would be exposed to outside of the research study are generally considered by OHRP to be reasonably foreseeable risks of research. There's really just one comma. Those draft guidelines are now online, and until December 23rd you can respond. You can read them yourself, or, if you're too busy, I would suggest you just write down this bottom sentence, go to their website, which I'll show you in a minute, and send in a comment. OHRP essentially says that any deviation from the treatment that any doctor would recommend creates a reasonably foreseeable risk. The alternative, which I think would be a preferable guidance for standard-of-care research, is to say: standard treatments cause risk; well-designed studies teach us how to understand and sometimes reduce risk. Write it down and send it to this website, because, you know, nobody reads these; they just count how many they get for or against.
The last point I'm going to make is that the battle lines on this have changed a little bit, in interesting ways. When Silverman was writing about this, he was trying to convince his neonatology colleagues that good, well-designed, prospective randomized controlled trials were appropriate and ethical, and many of the doctors said, we can't randomize our patients. What's happened today is that the neonatologists have recognized it. They designed an incredible study, and they're being criticized now by bioethicists and federal regulators and citizen advocacy groups, in a way that reminds me of the old saying that no good deed goes unpunished. As best we can tell, too, the public actually supports this research. After the controversy broke, many IRBs required SUPPORT study investigators to call all the parents who were in the trial, and the places that did that reported that the parents generally said: we understood the study, we knew what we were getting into, we understood the risks, and we don't have any complaints. So we need to get the message out, not so much with the sort of data and theory and rational arguments that I've presented here, because data means nothing compared to photogenic parents and cute kids. The way to counter this, I think, is with something like this. I'll let you read that. This is our grandson, who was a preemie born around the time of the SUPPORT study, a little too premature to be eligible for the SUPPORT study, but he might have been. The family now supports medical research in this area and has become the biggest fundraiser for the March of Dimes. So, a simple message: treatment isn't perfect; research can make it better; and consent forms should accurately explain risks by saying that well-designed and well-regulated studies are good both for the people in the studies and for people, or in this context babies, in the future. Thanks. Children's Mercy, by the way: doesn't that look like Disneyland?

Hi, Niranjan Karnik from Rush. John, great talk.
I wonder what your thoughts are about PCORI's focus on outcomes research that's more naturalistically drawn, less experimental in design, and how you think that changes, or is possibly a response to, these types of events?

So, I think one of the big things that's changed since the Belmont Report was written, and since the current federal guidelines, is the availability of electronic health records and big data, which allow a different kind of research than was ever possible before. For example, one of the questions in the SUPPORT study was: what were doctors doing in 2003, before the study was done? How many NICUs targeted what oxygen saturation? And the answer is, we don't know. I mean, there were surveys, and people said, oh, this is what we target, but now we could get actual data. That makes it possible to do all sorts of different kinds of studies, but they straddle the line between purely observational, subtly interventional, and more or less subtly interventional, and I think figuring out how to regulate those is going to be a huge and important problem.

Abe Schwab, IPFW. So I guess I wonder what you would say to somebody who would respond to this with: well, the lesson here is clear; what we need to do is get patients to trust their physicians less. So ultimately the problem here is not that people are distrustful of research, but that they're trusting the physician, because of exactly the data you're presenting.
I'd say they're exactly right, within parameters. I mean, it's not like any oxygen level was just fine; I think doctors knew very well that 100% oxygen was terrible and 70% oxygen was bad too. But when doctors get together and experts in a field say this is an important area to do research, because, after having reviewed all the PCORI-type electronic health record retrospective data, we realize that this is an important unsolved problem, then people should realize that if, in that domain, their doctor says, look, 92%, trust me, I'm a doctor, that's what's best for your baby, people should be skeptical. And that's what the informed consent form should tell them.

We'll take one more question.

Thank you, Dr. Lantos. When you look at the fact that no good deed goes unpunished, I think no good crisis should go unexamined or be wasted. So, given the crumbling of institutions and trust, the present outbreak of Ebola provides a framework for the discussion about biomedical research. I think part of the problem is that we have not taken enough time to educate the citizenry about research. So how can we step away from the argument between the physician and the researcher and just get to that sweet spot of educating the public about what research provides?

So, the question of whether the public distrusts researchers is an interesting one, and I don't think it's true. At least it wasn't true in SUPPORT that the public, as represented by the parents of babies who were actually in the study, distrusted researchers. The public as portrayed by Public Citizen distrusts researchers, but it becomes a question of the extent to which self-appointed advocacy groups can be taken as legitimate voices for the people they claim to represent.

Go ahead, September, a short one. I just have to, you know... September Williams, the surviving inaugural member of the bioethics center at Tuskegee University at its opening. And of course, these issues are where we started, right? This is where we started. I'm pretty sure a number of people
in this room have read the Belmont Report in its entirety; I know I keep going back to it. But the division between who is a doctor and who is a researcher, and the idea that their obligations are different, is a really big deal. We saw it again in the AZT trials in Thailand. How do you separate these things? And I think the fact that we have a clinical ethics center, and many clinical medical ethicists... actually, we have fallen victim to the dicing and slicing of different fields of medicine, as though people were able to be diced and sliced. When I came to the center, it was because me as a clinician and me as a researcher were the same entity, and I don't know how that got changed around; we had obligations that were the same. So that's my comment.

I mean, in some ways, the debate about the SUPPORT study comes down to the question: is this Tuskegee all over again or not? Which is asked all the time, and many of the people who spoke at the public meeting criticizing the study specifically made the analogy, usually in a sneaky, backhanded way, like: I'm not saying this is like Tuskegee, but this is like Tuskegee.

Well, the one thing that we did with the AZT trials in Côte d'Ivoire and Thailand, of course, was that the level of stringency for informed consent goes up with the probability of high risk. That question was asked, of course, around those trials, and what we came up with was three points of contact for informed consent.

Right. So the central informed consent question is: how do you accurately describe the comparative risks of the two arms of a randomized controlled study, and the risks of being in the study or not being in the study, in a way that, in a decision-aid kind of way, actually helps people understand?
So, one anecdote that I'll end with. I talked to our neonatologists. There are a bunch of other studies done by the Neonatal Research Network, and one of them is similar to SUPPORT, except instead of oxygen it's transfusion: at what level do you transfuse a premature baby? So I asked the neonatologist, how have you changed your consent form? And he goes, oh my god, yes: we have death written all over it. I mean, every paragraph says death. And I said, are people not signing up? And he said, nobody reads those things.