Ladies and gentlemen, I'm delighted to welcome you to today's session on medical professionalism. We're delighted that our speaker is Dr. Jerry Menikoff, who about 20 years ago... yes, 1991-92, more than 20 years ago... was an ethics fellow in the MacLean Center program. We were together, with Peter Angelos. What year was it? Was it 1991? Peter Ubel, Peter Angelos, Ellen Fox. What a class. Jerry Menikoff, my God. It's a tough crowd. And then after leaving the MacLean Center fellowship, Jerry went to the New York Eye and Ear Infirmary, trained in ophthalmology, and finished his five or six years of ophthalmology training in Lower Manhattan. And when did you get the JD degree? I had gotten the JD degree earlier. So that had been earlier. So as an MD/JD, Jerry went to the University of Kansas and taught in both the law school and the medical school, with research interests in bioethics and, more particularly, the ethics of research on human subjects. In part that explains Jerry Menikoff's current role as the director of OHRP, the Office for Human Research Protections, which is part of the Office of Public Health and Science in the Secretary's Office at DHHS. And in that role, as Jerry and I were chatting before we came down to the meeting, he has some thoughts about substantially revising the informed consent rules and regulations, which are not an easy thing to change; they've pretty much been there since the original 1981 regulations. Making them more consumer-protective, or patient-protective, or subject-protective, would be a goal, but there's a complicated political and social process involved. Today, Jerry is going to talk about professionalism and clinical investigation. Welcome back to the center, Jerry. Welcome back. Thank you. I guess I could just stand here. Does that work too? Thank you so much for a lovely introduction. It's a great pleasure to be back here.
As was obvious, it was a great year, and I'm sure all of us learned a lot. I suspect a lot of you who are continuing to participate in Mark's program are learning a great deal as well. I probably should talk louder, right? I will talk louder. Thank you. So I know these talks are about professionalism. Unlike a lot of the other scholars who have spoken here before, I am not an expert on professionalism. I actually used this as an opportunity to learn a little bit about the role of professionalism in the area of clinical research, and hopefully we can have some interesting discussion about it. Can you hear me in the back? Sort of? Okay, I'll try to speak up a little more. The dreaded disclaimer: these are just my own personal views, not necessarily the views of anybody within HHS. Okay. So what I want to particularly focus on is physician researchers: the overlap between being a physician involved in clinical care and also being a researcher. That is one area that, in terms of professionalism, has actually gotten a fair bit of attention, and it has raised a number of what I think are interesting and important issues about ethics and about what sorts of relationships exist in these scenarios. So that in particular is going to be my focus here. And my premise is that, unlike the clinical relationship, where I think in many ways we know the core values, it is surprising how much controversy there is about basic aspects of professionalism in the relationship between researchers and subjects. To give you an analogy, imagine that you were regularly seeing, in the major journals where these sorts of articles get published, discussions of beneficence and non-maleficence asking whether maybe this stuff is all crap. That's not at all part of the regular doctor-patient relationship.
Yet that's the sort of thing that happens in the area of research and professionalism: we in fact have major articles coming out regularly that seem to dispute core aspects of that relationship. And it's interesting to ask why this is happening. So I will try to spell out some of that. I know a lot of the prior talks in this series go through various specific elements of what it is that characterizes a profession. I'm not pretending this is an exhaustive list; I've just put a few core elements here. And by and large, these characteristics are also used in describing what it is to be a research professional. There's a specialized body of knowledge. There's a degree of self-regulation. People get together in associations. There is a self-generated code of ethics. And the final thing I want to highlight is that there is a norm of altruism, a notion of service to others. That is the element I'm going to distinguish in terms of the researcher-subject relationship, because that relationship is a little different, and it raises a legitimate question of how much these rules apply, in terms of what we think of as the core aspects that make something a profession. Is there something unique about the researcher-subject relationship? So as part of preparing for this, I wanted to see what was out there, in black and white, discussing the role of researchers as professionals. You come across a few organized associations that deal with being a research professional. There actually aren't that many of these groups, and I think that in itself is noteworthy. By and large, they're probably not the most prominent groups. We could have some discussion afterwards about how much any of you are involved with some of these groups.
So the major one, for physician researchers in particular, is the Academy of Physicians in Clinical Research, APCR. It is an affiliate of a much larger group, with which it is highly related and shares central organizational offices: the Association of Clinical Research Professionals, ACRP. The latter group involves a lot of people beyond just physicians; the former is basically the physician element of it. And let me tell you, I'm going to go through a few quotes from APCR policies and that sort of thing. So this is the dry part, and then we'll get on to the more fun stuff, but I think you have to go through some of these basics in terms of the actual ethical and professional rules they apply. This is APCR's mission: to advance medical innovation and public health by providing advocacy, promoting competence, and encouraging exchange for and among physicians involved in or affected by clinical research. It's probably a similar sort of mission to what you'd see from clinical physician groups. And this is how they describe themselves; you can take this as truthful: it is the leading professional organization exclusive to physicians that supports and addresses the unique issues and challenges of physicians in clinical research. It has approximately 1,200 members, versus the parent group, which is not mainly physicians, at about 18,000. I don't know exactly what to make of that, but my gut sense is that 1,200 is not a huge number relative to the number of physicians actually involved in clinical research. Compared to, say, the AMA's share of all physicians, I suspect most physicians in clinical research are not part of this group. And just to give you my take, for what it's worth, this gets at what you might call the pragmatic side, or maybe the bad side, of being a professional.
Many groups constitute themselves as professions to increase their prestige, to increase their authority, to create some monopoly elements by which they can regulate themselves. Well, if you're a non-physician involved in clinical research, you probably don't have the MD or other degrees that make you very clearly a professional, and therefore it's very important for you to organize yourselves as professionals. On the other hand, for most MD researchers, it is probably the MD that is the core of your professionalism; you don't need to be a "researcher professional," and therefore you don't necessarily need this other group to make the public aware of how professional you are. So in a sense that makes sense: on this pragmatic account, there probably isn't a lot of attention paid by physician researchers to being part of this organization, or to needing it to make the public aware of them as professionals. They clearly are professionals already, and they probably don't care much about this aspect of it. For the ACRP members, by contrast, it's a much bigger push. A lot of them do work that is fairly technical, the public isn't aware of them, and they do need the prestige of being professionals. So that's some of the oddity here, and a bit of an explanation for why you're just not hearing a lot about physician researchers as professionals. But as we'll see, there are some interesting issues on the ethics side of the overlap. So I want to tell you at least a little bit about this organization and what it says it's doing. It has a number of bylaws. It wants to enhance the organization's value.
Again, these are the sorts of things these groups try to do: represent research physicians, enhance their proficiency, promote the acquisition and dissemination of knowledge. And last on the list, though I wouldn't necessarily make too much of the fact that it's last, it does say it will protect the welfare of patients and study subjects. So it's in there, but yes, it is last. It also has a code of ethics and professional conduct. It doesn't distinguish which aspects are ethics as opposed to professional conduct, and often that's unclear, one of these vague dividing lines. Here it gets to the sorts of things you often do hear about in research ethics: be mindful of the important distinctions between medical practice and research. The second bullet here I want to pay particular attention to, because I'm going to say a lot more about it; this is getting at that issue of some degree of confusion out there about what this field is really about: ensure the safety and welfare of human subjects and patients as their highest goal. So there they're being very affirmative about this, but let's talk in a few minutes about whether that's really what they're doing. Execute the work in accordance with standards of scientific objectivity, very reasonable; continually advance professional knowledge; safeguard professional judgment. And the last bullet here brings in some more standard bioethical principles: respect for persons, the practice of obtaining informed consent, honored at all times in spirit and in practice. So, the standard stuff you'd expect: observe legal requirements, avoid conflicts of interest, adhere to all relevant ethical standards.
Okay, so nothing incredibly surprising, probably similar to what you'd hear from other organizations, even physician organizations that are not specifically research-oriented: abide by bioethical laws, regulations, ethical codes, etc. I just want to point out there's another organization some of you may be familiar with, SOCRA, the Society of Clinical Research Associates. SOCRA is also open to physician members. I'm not aware that it has a specific subunit for physician researchers, but it is another of these groups trying to elevate the prestige and professionalism of people working in clinical research. It has a much shorter ethics statement, only four items. It actually starts out, number one, with respecting the research participants with regard to self-determination and full disclosure. Research participants should be free from harm and exploitation, in accordance with risks and benefits; sounds good. Research participants should have the right to receive fair and equitable treatment; and you will be accountable for, and adhere to, standards of scientific integrity. So, reasonable stuff. Okay. Remember, I highlighted for you that one statement among the APCR standards that said the highest goal would be protecting the interests of the research subjects, and I want to explore that a little. Let me go through a number of quotes from fairly prominent people and organizations discussing how we treat research subjects. Here's Greg Koski, who at the time was actually running OHRP, my current office. And what did he say? We have to recognize what our priorities are. If we want benefits from science that require using human subjects, we have a moral and ethical obligation to make sure we are looking out for their interests and well-being and rights. That's got to be our first priority. Which sounds a lot like the APCR standard, very, very similar.
And you could find quotes all over the lot, on the internet and elsewhere, from people saying similar things. Here is a former secretary of HHS, Tommy Thompson: science and medical research should not take place at the expense of the people who participate in clinical trials. This next one is from a New England Journal article in 2002, from a member of an AAMC committee discussing the standards we impose on researchers: the guidelines are based on some core principles, and the first guideline makes clear that the welfare of the patient is paramount. He's using the word patient, but he's describing a patient who is participating in a research study, so a subject. I've selected all of these because they give a theme, and the theme is pretty much the same premise as the APCR guidelines: that our first, our highest priority is protecting the well-being of the research subjects. I want to explore that a little further. So going beyond quotes, here is a very prominent international standard; many people would probably say it is the highest standard out there: the World Medical Association's Declaration of Helsinki. I'm just going to give you a few quotes from it. It starts out talking about physicians. The Declaration of Geneva binds the physician with the words "the health of my patient will be my first consideration," and the International Code of Medical Ethics declares that "a physician shall act in the patient's best interest when providing medical care." So that's the clinical side. Let's see what they say about the research side: in medical research involving human subjects, the well-being of the individual research subject must take precedence over all other interests. Again, similar to the APCR theme, similar to all the quotes I gave you.
So getting back to the theme I started with: that's just not true. Those are not the rules under which clinical research operates, in the U.S. or across the world. We have a set of rules designed to provide a certain number of protections for research subjects, but it is virtually never our first priority simply to protect the well-being of the research subjects. If that were our highest priority, we probably wouldn't be doing the research. The reason we're doing the research is to answer a research question. So we have set up a number of rules designed to deal with the conflict of interest here: on the one hand, trying to answer a research question; on the other hand, trying not to take inappropriate advantage of the research subjects. There is a conflict of interest involved, and by and large you're not going to be able to achieve both goals as your highest priority. So it is interesting to see these statements out there, which seem a bit self-serving when they say that of course our highest priority is keeping the interests of the research subjects number one. You're just not going to do that. You could propose that as your goal, but if it really were your goal, you would be doing something very, very different from what we actually do in the modern research setting. Let me give you some standard examples of what often occurs in research studies, and a lot of these will occur in your average research study; we're not talking about anything unique or unusual here. First, randomization. We randomize people to two arms. There are articles out there occasionally arguing that randomization is a good thing for you.
And you may hear this about a number of these practices. But if it's such a good thing, the response is: okay, how often are you, as a clinician, randomizing your patients as part of clinical care? In general, every time I ask this, very few physicians are willing to admit that they regularly randomize patients. And that gets back to a basic point: just because there's uncertainty about which of two or three or four treatments might be better doesn't mean there aren't reasons why you might pick one of those four as the best thing for a particular patient. We often make decisions under uncertainty; in fact, I suspect most of the time we're making decisions under uncertainty. We live in a very, very complicated world, and more and more we're learning how complicated it is. So randomization is, by and large, not in the best interest of the subject. Second, standardized treatment per protocol. You have a protocol, and the protocol explains in great detail certain things that have to be done to each subject. The reason it does this is to get rid of noise, because otherwise you're going to have a great deal of variation that makes it very hard to examine the one question you're trying to answer. Well, adhering to that protocol will often mean that you're not able to do what you would have done outside of the research setting in terms of individualizing the care of that particular patient-subject. You're going to do what is best for the research study, within limits; outside of those limits you may have to remove the person from the study, but you're not going to be able to tailor and individualize things the way you otherwise might want to. Third, extra tests and procedures. These are very common in research studies.
You need a certain amount of extra information to help you answer the research question. These tests and procedures may not be all that benign, but nonetheless you do them. They may in no way affect the actual treatment of the subject, but they may involve risks. Finally, nondisclosure of interim results. It is often the case that in the middle of a research study, halfway down the road, you may have a very high degree of certainty that one particular arm is not going to win out; the odds could be huge, a thousand or ten thousand to one. On the other hand, you still haven't met the five percent standard, or whatever statistical significance level you need, because what you're trying to do is change the behavior of all physicians, and that's a hard test to meet. Well, suppose a person is dying of cancer, they're in a trial in which they might be getting a certain cancer treatment, and the sooner they get that treatment, the more likely it is to cure the cancer. If you told them they're on the arm that we know is very unlikely to be the best arm, even though we're not sure the other one will meet the five percent standard, they would run off immediately, get out of the trial, and do something else. So we routinely don't tell people. If you were a physician dealing with a patient, that is not the sort of thing you would routinely do; you would not, in that scenario, simply fail to disclose important information that a person would really, really want to know. So again, these are standard things we do in clinical research all the time. Are they good for the subjects? By and large, no. It still may be in a subject's interest to enroll in a study, but we shouldn't pretend that the number one thing we're doing is advancing the interests of the subject.
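The interim-results point rests on a statistical gap: an arm can be a heavy favorite well before the trial crosses the conventional significance threshold. Here's a toy illustration (my own invented numbers, not anything from the talk, using only the Python standard library): at an interim look, arm A leads 20/35 responses to 13/35, the usual two-proportion test still misses p < 0.05, yet a simple Bayesian calculation makes A roughly a 19-to-1 favorite to be the better arm.

```python
# Toy interim look at a two-arm trial (illustrative numbers only).
import math
import random

def two_sided_p(success_a, n_a, success_b, n_b):
    """Two-proportion z-test p-value (normal approximation)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def prob_a_better(success_a, n_a, success_b, n_b, draws=200_000, seed=0):
    """Monte Carlo posterior P(rate_A > rate_B) under uniform Beta priors."""
    rng = random.Random(seed)
    wins = sum(
        rng.betavariate(success_a + 1, n_a - success_a + 1)
        > rng.betavariate(success_b + 1, n_b - success_b + 1)
        for _ in range(draws)
    )
    return wins / draws

p = two_sided_p(20, 35, 13, 35)
posterior = prob_a_better(20, 35, 13, 35)
print(f"p-value: {p:.3f}")              # not "significant" at the 5% level
print(f"P(A better): {posterior:.2f}")  # yet A is the clear favorite
```

Real trials use sequential-design machinery far more careful than this sketch, but the tension it shows is the one in the text: the evidence a subject would care about arrives long before the evidence the trial needs.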
In fact, if we wanted to do that, there are ways to do it, but we generally don't. And just to make clear, this is not particularly mysterious; it is embedded in the federal regulations that govern the conduct of research. There's a provision dealing with risks to subjects and their relationship to benefits: risks to subjects must be reasonable in relation to benefits, if any, and the importance of the knowledge that may reasonably be expected to result. Now that's a handful, so let me turn it into a little equation. This is basically what it says: risks to subjects have to be in the ballpark of the other side of the equation, the sum of benefits to subjects plus benefits to society. There's that extra term there, benefits to society. Nothing in these rules requires that the risks to subjects be in a reasonable relationship to the benefits to subjects alone, let alone that the benefits to subjects somehow outweigh the risks to the subjects. So I want to transition now to discuss what obligations researchers have, and I want to focus on one particular type of obligation, because it is important to the integrity of our system, and it has been somewhat in dispute. It's actually surprising to me that it is so in dispute, and it has a University of Chicago connection; I think it's great that I could come here and show you how the University of Chicago has been at the forefront of discussing some of this. The issue I particularly want to talk about sounds like a fairly benign and minimal requirement, given that researchers have a whole bunch of obligations: do they have an obligation to disclose risks not created by participation in the research? That might sound a little tricky to you.
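To make the "little equation" concrete, here is a schematic sketch in Python. The function name and the numeric scoring are my own inventions; the regulation (the IRB approval criterion at 45 CFR 46.111(a)(2)) speaks of risks being "reasonable in relation to" benefits, not a literal inequality. The point it illustrates is the one in the text: risks are weighed against the sum of both benefit terms, so a study can pass even when it offers the subjects themselves no benefit at all.

```python
# Schematic rendering of the risk-benefit balance (made-up scoring).
def risks_reasonable(risks_to_subjects, benefits_to_subjects, benefits_to_society):
    """Risks are weighed against the SUM of both benefit terms; nothing
    requires risks to be justified by benefits to subjects alone."""
    return risks_to_subjects <= benefits_to_subjects + benefits_to_society

# A study with zero benefit to subjects can still satisfy the test:
print(risks_reasonable(2, 0, 5))  # societal benefit alone carries it
print(risks_reasonable(5, 1, 1))  # risks exceed the combined benefits
```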
You're enrolling in the study, and often the research creates risks. But I'm going to talk about risks that weren't created by the research: they were pre-existing, and the researcher is not making them any greater. I would view this as a kind of rule of rescue. We often invoke the notion of walking by a pond where a little child is drowning; it would not be particularly burdensome or risky for you to step in and pull the child out, so shouldn't any reasonable human being, as an ethical matter, do that? Well, here all I'm talking about is disclosing a piece of information, and I'll give you examples: a piece of information that could be incredibly valuable to somebody in terms of their health and well-being. And one of the issues that's been raised here is whether it matters if the researcher is an MD or not. So here is the scenario: if I were better at drawing PowerPoint slides, you'd have a Venn diagram with a big circle of MDs and a big circle of researchers, with researcher-MDs in the intersection. And you might ask: are there different standards in the three parts of those circles? So let me give you some details. In particular, do researchers have some sort of fiduciary duty that might generate an obligation in this scenario? What I'm going to talk about now involves Philip Hamburger, whom some of you may know; he used to be at this institution, at the law school, and is now at Columbia's law school, fairly prominent, as are a lot of University of Chicago people. He, and others at the University of Chicago, got involved out of concern about the standards by which we protect research subjects.
And just so you know the background, their concern was actually that the rules are too protective, that they impose inappropriate standards on a lot of studies that shouldn't be subject to these rules at all. As part of this analysis, Hamburger drew a line on grounds of what it means to be a professional: doctors have professional duties, and of course doctors are professionals, right? Hippocratic oath. So you'd think researchers also owe human subjects an equivalent duty, but he goes out and basically says no: non-doctor researchers are not professionals, and anybody who thinks otherwise is deluded. He's known for saying some controversial things, but let's go with it, because he makes some interesting statements here. These are all quotes from Hamburger, all from a Northwestern University Law Review symposium about how these human subjects rules were overreaching. There was a fairly significant University of Chicago contingent there; Richard Epstein had one of the articles in the volume. I actually had an article there too, as one of a few people defending the rules and saying they weren't that unreasonable. Okay, so what did he say? The fiduciary duty of a professional remains a duty to act on behalf of another, and it still arises from the professional's voluntary undertaking to exercise his conduct on behalf of his client. So you see where he's going, right? This is the theme of altruism that is part of a lot of professions and professional societies. Well, if you think that is a core value of being a professional, it sure sounds reasonable to say that maybe these researchers aren't professionals, because that's not what the research is about. And he spells it out: researchers, in contrast, act for themselves rather than those they study, and thus they are free to act on their own.
Again, they're not trying to advance the interests of the subjects, and it sounds like, by and large, he's correct about that. Within limits they have to protect the research subjects, but it's certainly not their first priority; their first priority is answering the research question. That's why we're doing the research in the first place. If we wanted to help the subjects, there are ways to do that outside of the research setting: give them experimental care, for example, and make decisions that are in their best interest. Okay. Now he goes on to say that if we look at the classic cases, and this is what I think is particularly fascinating, Nuremberg and Tuskegee in particular, the key wrongdoing in preeminent examples such as those was that the perpetrators were doctors. So that's his key point: it was doctors who did all this bad stuff. Had they not been doctors, perhaps nothing bad would have been happening. You may not believe that; I'm not sure I believe it, but let's follow it. These were doctors who failed to live up to their professional duties. And so he goes to the Belmont Report; in case you're not familiar, this is the core document on which our regulations protecting human subjects are based. That report, he says, is based on a misunderstanding about what was wrong with the Tuskegee study. So this is fascinating: we've all just misunderstood Tuskegee. And again there are big University of Chicago elements here; Richard Shweder is another person who was working with them in criticizing the rules and who takes this different approach. But I think there are some interesting arguments here. So he goes on to explain what was wrong in Tuskegee: of course it is recognized that the researchers were doctors who held themselves out as offering health care, and the breach of a Hippocratic and fiduciary duty is obvious.
Again, the wrongdoing in Tuskegee, according to him, was that these were doctors, and they had fiduciary duties. Go one step further: had they not been doctors, presumably there wouldn't have been anything wrong in Tuskegee. Now, I want to play with this a little, and I'll be interested in your thoughts. Assume his point is correct. One way out of this dilemma, or at least a way to rethink it, is: what if the physician researchers made clear that they weren't acting in their role as clinicians? They were acting as researchers, a different role from just being a physician. Is it possible, in that role, for them simply to tell the subjects: you should understand that I am not representing your best interests? They could be very forthright and say: because this is a research study, I am now acting only as a researcher, even though I have the training of an MD; don't expect that everything I do will advance your best interests as a number one priority. You probably could in fact have them do that, which would still leave us with the underlying ethical and professional issue: what about these researchers, regardless of whether they're MDs or not? Should they in fact have some responsibilities to the subjects, the sort of responsibilities that Hamburger says do not exist? And I'll just point out that a lot of people would say it's perfectly okay to hold that what legitimates our system is precisely having physician researchers make appropriate disclosures. Roles in fact matter, and our society allows people to change their roles in various scenarios. I'm giving you a number of examples here, and we could discuss them further during the discussion. If you haven't read Charles Fried: Charles Fried is a very smart lawyer who was Solicitor General of the U.S.
He was on the Massachusetts highest court. He is a Harvard Law professor. And he wrote a little book about research ethics in the 1970s; I think it was called Medical Experimentation. It is a brilliant book, and people talk about it all the time without having read it. What they will say is that Fried is often said to have come up with the concept of equipoise, and that Benjamin Freedman borrowed it from Fried, or something like that. But what Fried actually says in the book is that he doesn't even seem to care whether equipoise is meaningful or whether you need it. What he was negating was the practice, at the time, of enrolling women in breast cancer studies, for example, comparing lumpectomy with modified radical mastectomy, a much bigger procedure, without getting permission. They weren't getting any kind of consent, and the rationale people offered was: well, we're not sure which of these is better, so how could it be wrong to just assign somebody to one or the other and not tell them anything about it? Fried was actually criticizing the people who were promoting equipoise, saying this is absurd. There are many reasons, and it's still true in lots of studies, why one of these women might prefer one arm to the other. What Fried said is that you should let them know, and part of letting them know was telling them: hey, I'm not the one making a decision for you here; don't view me as your doctor; look to somebody else. So he didn't have any problem with clarifying your role. The other scenarios I want to mention involve other non-standard roles, and I'm sure you've had lots of discussion here about some of these issues: a physician involved in a state-ordered execution, for example, who may be there at the time of execution, or physicians in the military involved in interrogation.
Let's assume that in both of those scenarios this is all legitimated by our society. You may disagree with that, but presume we came up with some scenario in which it is okay. You will often see groups like the AMA criticizing physicians who participate in executions, but notice that the AMA is not against the execution itself; all they're saying is that it is wrong for the physician to participate. I think a lot of us would correctly say that is a fairly hypocritical position: a professional organization acting in its own self-interest. They're saying it's fine to have the execution, just don't involve the physicians, the people who actually have the expertise to make sure it occurs in the most ethical manner. "We don't want anything to do with it," which is of course easy, because it's not as though thousands of physicians in the AMA are going to protest that conclusion. It's very easy for that sort of group to criticize this, and by the way, there's a history of courts negating that kind of position. Go back to Karen Ann Quinlan. One of the key things the court made very clear in the Quinlan case was this: at the time, she was on a ventilator, and regarding taking her off the vent, the physicians said, in effect, "Our current standards of professionalism would never allow us to kill a patient by taking her off this machine." The court answered: you had better change your standards, because this is the right thing to do. You have to turn off that machine. And that's similar to what courts have said about physician participation in executions. I think there is a South Carolina case that said, in effect: we don't care what the medical society does; you cannot pull a physician's license because they participated in a state-approved execution. That is for society to determine. As long as they're not acting in the role of a physician treating that patient, it is perfectly acceptable.
So let me play with this for the final few minutes, and then we can discuss some of this. Since Hamburger was talking about Tuskegee, let me return to a scenario that I think is highly relevant to very real issues happening now. Tuskegee involved a lot of bad things; hopefully most of us would agree that a lot of bad things happened in that study, even taking Hamburger's criticisms into account. It involved, for example, actively preventing subjects from learning about their disease. Let's assume it wasn't that bad. What if there wasn't an active role but, as I'll explain, a more passive one? And in particular, would it matter if non-MDs had done the research rather than MDs? We'll look at that difference. So let me give you a study I'll call "Tuskegee today." You certainly could do it today, and there are real studies taking place these days that are not all that different from this. A researcher studying disease X finds a set of poorly informed subjects. They may not be aware of the details of their condition; maybe they're not even aware they have a particular disease. The researcher intends to collect long-term information from them, not revealing, let's assume this is true, that there is actually an effective treatment for the medical problem they have. For example, the researcher interviews them and says: "Look, I'm studying the health condition of people in your rural town. Would it be okay if I talked to you once a month and got information from you?" Let's assume the researcher is up front and makes it clear: "Look, I'm not going to treat you or anything, but I'd like you to help me out." And the researcher knows this is a treatable condition. Is that an ethical thing to do? Let me explain why this is a very real issue these days. There was a case called Grimes versus Kennedy Krieger Institute.
This is probably the most prominent litigated case relating to research ethics in this country, certainly in decades. It went up to the highest court in the state of Maryland, and it was about reducing the risk from lead paint to children living in inner-city Baltimore. Many, many homes had tons of lead paint; kids routinely eat lead paint, and it gets into their brains, a very bad thing. Kennedy Krieger was the Hopkins-affiliated institute that had put the lead paint problem on the map, and it was developing approaches for actually reducing lead paint hazards by trying to find cheap ways to improve the housing, like washing down the walls for a few hundred dollars. Because these homes were there, it was legal to have little kids living in them, and the health authorities wouldn't do anything about it because there was no money; it cost too much to actually repair the homes, more than the homes were worth. This was an important study. A couple of the parents of subjects sued, claiming a number of bad things happened in this study, including failure of informed consent, negligence in study design, and a number of other things. One of the issues in the case was: what duties did the researchers have to the subjects? Here's a quote from the court. Many people will criticize the case, and I will too, on many grounds; in places it was a horrible opinion. The court faulted the researchers and analogized this study to the Nazis, to the Japanese doing horrible things during World War II, and to canaries in the mines, and it was actually not that horrible a study, but let's not go there.
On the other hand, the opinion has also been criticized for creating a particular duty, so let's see what the court said: the question was whether the duty of informed consent created by federal regulations translates into a duty of care arising out of a unique relationship, that of researcher and subject as opposed to doctor and patient. And the court got very specific. It concluded that there can be a duty to warn regarding dangers present when the researcher has knowledge of the potential for harm to the subject and the subject is unaware of the danger. What they were talking about in this case, and it's unclear whether the researchers in fact failed here, was: were the researchers having the kids sit in these homes without adequately warning the parents of the risk to the kids of eating the lead paint and getting brain damage? Okay, so I want to point out that some very distinguished scholars find the Grimes opinion troubling, particularly this finding about a duty. Here is the view of two very prominent scholars at the University of Maryland: the real danger of the court's opinion is the possibility that it will significantly reduce public health studies. The notion is that if you have to tell subjects about a risk you didn't create, you won't be able to go out there and study what's really happening. And that is the question I'm going to leave you with. Here's a quote in which they pose a possible study: could a researcher study a population that exposes children to a diet lacking certain nutrients? Would the researcher be required to tell the subjects of the risks of such a diet? So you found this group where, based on your studies ahead of time, you think the kids are not getting a certain key nutrient, and you know, for example, that this deficiency may cause very serious permanent damage to these children. I think what they want is to be able to do that study and not tell the parents. Remember, the researchers
aren't creating the risk; they're not doing anything to the risk. But they're asking: do we have to tell the parents? And I guess I would ask: okay, these are major questions being asked by prominent people these days, very thematically similar to what Phil Hamburger was raising, and I'm not sure how different that is from the "Tuskegee today" scenario. What I find interesting is what this tells us, because the core rules we're talking about here stem from Tuskegee; Tuskegee was the motivating event that caused us to write all these regulations, or certainly one of the major events. So suppose we're now moving to a position where we say that, because it's important to do public health studies, it is perfectly acceptable to have researchers sitting there holding a piece of information that may be of great importance to this group of subjects, and we're not even talking about changing the subjects' behavior, just letting them know. Is that all that different from what troubled many, many people in the early 70s, when the Tuskegee headlines got out, that we had US researchers sitting there with subjects who had syphilis, knowing there was a cure? Now, granted, the Tuskegee researchers went further than that, but let's assume they hadn't. So I think I'm going to leave it with that question, and thank you.

Sure. My opinion would be that there is such a duty, but it doesn't come from the researcher role; it comes just from the fellow-human-being role. You shouldn't let... sure, okay, that's my opinion.

Right, and I think that's a great point. But that gets back to, say, rescuing the little girl. It is an ethical duty, and I think most people in ethics would certainly say it's there. But as a society, in terms of laws and regulations, which carry penalties for failing to do something, we in general don't enforce those sorts of duties. Duties of rescue are not something that is usually enforced in this country. And we can get into an interesting
discussion of the countervailing arguments, as to why we may actually have a bigger problem with people rushing in to rescue others and harming themselves than with people failing to rescue. But with our regulations, the issue might be: should our regulations be interpreted as imposing such a duty? Because that goes beyond just ethics; it says that we as a society, in terms of the integrity of research, are not going to allow those studies to take place anymore. So that's going one point beyond what you said.

But it's not exactly analogous, because the researcher is benefiting from the study. It's like: if it's your pond, then you might have a duty to rescue the girl.

So you're saying there are reasons why the researcher in that scenario should have a greater duty than even a reasonable bystander. Okay, absolutely. But all I'm saying is that this buttresses the notion that we as a society should interpret our current rules as saying: even if you could do more studies by not adhering to this rule, we don't want those studies done. We think one of the minimal duties that exists out there, and that is enforceable, is that you cannot sit there and not disclose an important risk item when you know it could be of use to the subjects.

And it seems like you could still do the study; it might just complicate the analysis of the data.

Right. And I think Grimes is a good example. If you look at the consent form, it's actually on the web, they do indicate that they are not going to eliminate all of the lead hazard. I suspect, again, these were expert lead paint researchers; they were not trying to hide the fact that there was a lead paint hazard there. But you're absolutely right: you should still be able to conduct such studies.

On the issues that were just raised, and your distinction between what's legally required and what everybody, I think, would probably agree is ethically required: what about Good Samaritan laws?
Because they do apply to specific professionals, who could be defined either as the researchers, or as the MDs, or both.

So, my understanding is that there are actually very few Good Samaritan laws that require somebody to actively aid. Most of them instead reduce your liability: they set a higher standard, like gross negligence, before you can be held liable for the aid you do give. And, interesting bit of six degrees of separation: David Hyman, a law professor at the University of Illinois, was one of the people who got into this issue, alongside Richard Shweder and Phil Hamburger, as critics; he's one of the major critics of the human subjects rules, along with many other distinguished people. In fact, there was the American Association of University Professors report; the MIT professor who created the scenario of the abortion and the violinist you're hooked up to was a member of the committee, and I think Hyman wrote that report. He also wrote a very interesting paper analyzing who rescues, and his paper is the one that says, if you look at the numbers, far more people are injured because they rush toward cars that are about to explode than are injured from not being rescued. I don't know whether it's true or not, but he does these freakonomics-style analyses. By and large, though, as a legal matter we don't force people to act; we just set higher liability standards so they can't be sued on the theory that, knowing the situation, they should have moved in.

It seems to me that one way to at least address this in the design, and this goes back to what Robert was suggesting, would be to build into such a study some equivalent of a rescue rule. So let's say you randomize people to a palliative care intervention versus usual care, and you discover that in the usual-care arm there are people who are getting a 10 out of 10 score on their
pain scale. One way to respond is to say: well, for science we're just continuing this; I didn't cause that pain, it's being caused by the people who are caring for them. But a way to do the study and still, I think, get data, though it would be more complicated to analyze, would be to have a sort of stopping rule: if people in that arm get a 10 out of 10, or an 8 out of 10, score, you then set out to rescue the person at that point. Then you've still got data that can be analyzed; it allows this kind of study to go on and can give you meaningful data, but it also meets, I think, the standard of providing an opportunity to rescue the person.

So I take it your starting position is that you wouldn't need to get consent for the initial intervention? In the consent you wouldn't, I think, have to say anything more than "usual care."

Right, because you don't want to tip people off in a way that would undermine the study, as long as you've got that safety mechanism built in. And you could put language into the consent, though it would be vaguer: "If it looks like you are going to be harmed, we will intervene." That's the kind of approach, I think, along the lines of what Robert was suggesting, that would allow this kind of study to go on without creating the Hawthorne effects that might damage the research, yet fulfill the responsibility of the researcher to rescue an individual who is harmed.

It's a tricky scenario, because then you're back to: how good is your rescue, and how is our society going to view it? You're back to the question of how unconscionable it is, and that really depends on what the piece of information is and whether you're studying their behavior. This is a big issue, by the way, in research generally. Cluster randomized trials, where you're randomizing people by the different locations where they live, or different hospitals, or different
buildings, are a very big item right now, and a number of groups are working on this; the Canadians have a research ethics group studying it, because it's very hard to get consent in those scenarios, and does it create a Hawthorne effect or something else? We're actually on the cusp of a lot of interesting areas, different types of QI/QA research that are hard to do otherwise, and there are no easy answers to these questions.

Could you say, in general terms, your goals in changing the regs and the likelihood you'll be successful?

I will not put any predictions on likelihood. Clearly, Health and Human Services at the moment is supportive of trying to change the regulations. As for our goals, this is all out there, probably more than you want to find. If you Google OHRP, one of the first two or three results will be OHRP's main website; on the very front page there's a big blue button at the bottom, and if you click on it you'll get to the actual proposal, which went out for public comment from July to October of 2011. There were over 1,100 sets of comments, which are searchable, from pretty much every major organization in modern medicine, including a lot of fascinating groups that you may not interact with day to day. I personally am very interested in things like information use: Latanya Sweeney, if you're aware of her, is the professor, now on leave at Harvard, who found that the Governor of Massachusetts' medical records were in a public release of information on the web, because she was able to link the data with a few other pieces of information about him. So: information privacy experts, all the usual groups, the American Cancer Society, the AAMC, you name it. I don't know whether the University of Chicago made a comment, but it is quite fascinating to read all of this. There are seven proposals. One is whether there should be harmonization of the interpretation and guidance on the
regulations across the various federal agencies; the way the system works, the same set of rules is administered by 18 different federal agencies. Another is improving informed consent: could we do a better job of writing consent forms? The proposal gives some specifics that we think are reasonable ways to improve consent forms and make them a far better decision-making tool for a person making an informed decision about being in a study. Others: in multi-center studies, having one central IRB for the domestic sites; creating a uniform floor of privacy rules for most studies, so that the IRB doesn't have to spend much time figuring that out; and a number of rules relating to the various categories of IRB review, enlarging the number of studies that can fall into the exempt category and eliminating the need to always have an IRB or an administrator review many very low-risk studies. For example, for a competent adult who is being asked to answer questions, even questions seeking some private, identifiable information, is there any reason you really need somebody to review this? We're asked questions all the time without a review panel, so researchers sometimes appropriately complain: why is what they do any more evil than the marketer who wants a bunch of information from you? So that gives you a flavor of some of this. Some of it is fairly complicated, and again, the comments have been hugely helpful; if and when this goes out again for comment, I'm sure there will be some changes made because of them.

So I was thinking: even if you agree that there is a duty to warn, suppose that for a particular study it was important that subjects not be warned. Could it be in the consent that there would be no warning, even if the researchers were to become aware of some risk? And would that be acceptable?
That is a great question. You get into a tricky issue: to what extent are you getting adequate consent when you're telling somebody that you're not giving them certain information? The key issue is what information they do have that helps them interpret the missing piece. I'll go back to my wonderful year here. There was a case called Leah's case, a real consult at the time, about a young woman from a particular Orthodox Jewish community who had come over here with her father. John Lantos, who is a wonderful writer, wrote it up. The woman was engaged. She had some sort of uterine tumor, and she wasn't aware that she had a tumor. What the doctors were proposing was a hysterectomy, which would have made her non-marriageable under Orthodox Jewish law. Her parents hadn't told her that, and the doctors hadn't told her much of anything beyond: you have a problem, and your father, who is the head of the family and usually makes the decisions, wants to make the decisions for you. She knew all this, and in the meeting she would say, "That's okay with me; I want to let my father make the decisions." I think the concern of most of the people in the very large group reviewing this was that she had no clue what was going to happen to her. So even though she was saying "I'm okay with your not telling me," it was not informed enough, because all she knew was that some medical procedure was going to take place.

What about asking: do you agree to let your father decide, even if one possible decision he might make is that you could have a hysterectomy and be made unmarriageable? That wouldn't be telling her that this was in fact the plan; it would just be an example.

No, I understand that. But the tricky thing is: how do you give somebody enough of a piece of information to inform them, when the whole purpose is not to give them that piece of information? It's the same scenario as
a person consenting ahead of time, at the start of a medical relationship, by saying, "If I'm ever dying of cancer, don't tell me." At that point, at least, there is no specific problem to worry about. The problem here was that there already was a specific problem whose details everybody knew. Yes, you could conceivably have given her a list of ten such possibilities, but probably the moment you gave her that list, they would be equally concerned that she would then commit suicide, or whatever. It was a great case, but it raises exactly your issue: how do you get appropriately informed consent when the whole key is not giving too much information? That's very, very hard to do.

I think that case was from this institution.

It was right here. It was Herbst.

I want to thank Jerry for coming today, and to wish you good luck as you go back to Washington.

Thank you, it's my pleasure. It's the election season. Thank you.