The speaker for this series is Lynn Jansen, who received her PhD from Columbia; I then personally made the best administrative decision ever made by recruiting her to St. Vincent's and New York Medical College. She now holds the Madeline Brill Nelson Chair in Ethics Education at the Center for Ethics in Health Care at Oregon Health & Science University. And Lynn is going to talk to us about perceived controllability and therapeutic optimism in clinical research.

A long-standing ethical issue in clinical research has been whether trial participants fully understand the nature of the trials in which they participate. If they lack adequate understanding, or if they inaccurately process important risk and benefit information that is provided to them, then their informed consent would seem to be invalidated, or at least called into question. My own research on the optimistic bias has contributed to this more general discussion. With my research team, I've documented that patient-subjects in early-phase cancer trials in particular often manifest an optimistic bias with respect to their susceptibility to the risks and benefits associated with those trials. A few years ago I presented some of that preliminary research to you here at the MacLean Center conference, and today I want to give you a brief update of sorts. I want to talk in a little more detail, and share some information that hasn't been presented before, about the factors that might explain the optimistic bias, at least in the population of subjects we've studied.

So let me start with a brief overview of the optimistic bias, since some of you may not be familiar with it. Commonly referred to as unrealistic optimism, the optimistic bias has been extensively studied in the social psychology literature. Indeed, a recent survey article on the bias reports that over the past 30 years an average of 21 scholarly articles per year have been published on it.
And in his influential book, Thinking, Fast and Slow, Daniel Kahneman observes that, in terms of its consequences for decisions, the optimistic bias may be the most significant of the cognitive biases. The bias is very prevalent, and it has been shown to distort risk-benefit information in a wide range of populations. Across a range of contexts, people tend to overrate their susceptibility to benefits and underrate their susceptibility to losses or harms when they compare themselves to similar others. So in a typical study, subjects are asked to rate their risk of experiencing a certain hazard, such as being in a car accident, for example, or getting divorced. In comparing themselves to their peers, subjects often rank themselves as less likely to experience these negative events, even when there is no evidence to suggest that they are, in fact, less vulnerable than their peers.

Yet while the optimistic bias is common, it is also event-specific. People don't tend to exhibit the bias with respect to all events. And this last point is really important, because we don't want to confuse the optimistic bias, which is event-specific, with a more general optimistic outlook on life, which is referred to as dispositional optimism. Interestingly, research on optimism has tended to find no correlation between the two types of optimism, unrealistic optimism and dispositional optimism.

Now, I said a moment ago that the optimistic bias is relevant to informed consent to participate in research. Since the bias distorts how people apply risk-benefit information to themselves, it can compromise the informed consent process, and it can do so in two ways. Biases have the potential to provoke decisions that are not in a person's best interest by interfering with her ability to apply relevant information to herself.
Biases also have the potential to undermine autonomous decision-making by interfering with the rational processing of information. And I just want you to notice that both of these ways in which biases interfere with the informed consent process target what is referred to as the appreciation component of informed consent, not the understanding component, which tends to be linked more commonly to the therapeutic misconception.

Now, it's tempting to resist, and some people have resisted, the emphasis that I like to place on rationality, autonomy, and informed consent. After all, all of us here are subject to biases, and few of us are in a position to make a perfectly rational decision across the range of decisions we need to make. But in taking informed consent seriously, we don't need to demand perfection; we can, however, demand improvement. And if we can improve the informed consent process in a way that is not too costly or demanding, then I believe we certainly ought to do so.

Of course, we needn't have the same concern with biased decision-making in all contexts. I think, for example, that informed consent in the physician-patient relationship is commonly held to lower standards than in the researcher-subject relationship. And the reason for this is actually pretty simple. When the doctor-patient relationship goes well, and I believe it goes well most of the time, the physician is motivated by beneficence: he or she is attempting to promote the good of the patient. But this beneficent motivation isn't primary in the researcher-subject relationship. Even when that relationship goes well, the researcher is not aiming to promote the good of the patient.
He or she is concerned to promote generalizable scientific knowledge. So given this orientation of the clinical researcher, we ought to demand not just minimally acceptable consent, but high-quality informed consent in that context. And if this is correct, then it becomes really important to understand what factors evoke the bias. By digging deeper into the causes of the bias in the clinical research context, we may come to a better understanding of how to address it.

So with this background in place, I can now discuss our research. We sought to understand what particular factors evoke unrealistic optimism, the optimistic bias. And we did so by looking at the same factors that have been shown to evoke the bias in other, non-research contexts. These include a person's past experience with the event; a person's mental image of the kind of person who is likely to experience the event; whether the person has knowledge of the factors associated with the event; and whether the person believes that the hazard is something they can control. That last factor is perceived controllability.

In this preliminary study, we found that the optimistic bias was present in our population of 72 subjects with respect to a range of events, and that there was a significant correlation between the optimistic bias and perceived controllability. You can see on this slide the three events for which we found the correlation. Subjects believed that they could control whether their cancer was controlled by the drugs they received in the study they were enrolled in. They believed that they could control whether they experienced a health benefit from the drugs being tested. And they believed that they could control whether their cancer would be cured by the drugs they received in the study.
And again, the bias was present, but so too was the belief that they could control whether these events actually happened to them in the study. Now, this is interesting, because perceived controllability actually works in tandem with one of the factors on that last slide, namely egocentrism. The social psychologist Neil Weinstein, who is a member of our study team, explains the link in the following way. If an event is perceived to be controllable, it signifies that people believe there are steps one can take to increase the likelihood of that outcome. Because they can more easily bring to mind their own actions than the actions of others, people are likely to conclude that desired outcomes are more likely to happen to them than to other people.

And in fact, many of the respondents in our studies seem to commit the same kind of conflation that Dr. Weinstein describes, focusing on their own actions and ignoring the actions of other people who are similarly situated in the trial. They thought that they would be able to take special steps to control the outcome of the research for themselves. So, for example, one patient said that factors such as attitude, nutrition, and staying active play a big part in recovery: "I think I'm more active, pay more attention to nutrition, and have a more positive attitude than most people." Another patient-subject said: "I think I am more disciplined in taking the medicine every day and taking care of myself at home than the average patient, following protocols and so forth. And that to me means I have a better chance than the average patient."

Now, we're in the process of analyzing data for a larger study of 171 patients, an R01 study that we're conducting on this same, though expanded, topic.
And I can say, very preliminarily, that it looks like these findings of a strong correlation between UO, unrealistic optimism, and perceived controllability will be supported in this larger study. So we have pretty good reason to believe that the optimistic bias, at least in the population we studied, is linked to this factor of perceived controllability.

There is also evidence from other sources that supports this perception-of-control hypothesis. In a multi-center investigation designed to assess the decision-making process of patients enrolled in early-phase oncology trials, Agrawal and colleagues found that 44% of participants reported that participating in the cancer trial gave them a sense of control over their own disease. They concluded that the desire to actively do something to fight their cancer appears to motivate these participants to enroll in phase one oncology trials. Furthermore, they also found that patient-subjects in these trials overwhelmingly reported that they expected to personally benefit from their participation, while also judging that the majority of the other participants would not benefit. This is strongly indicative of the optimistic bias; although Agrawal and colleagues didn't call attention to it, the results of their study provide further evidence for the link between perceived controllability and unrealistic optimism.

So, in the remaining time I have, I'd like to say a word or two about the upshot of our findings. The first question one might ask is: should we even be concerned if participants in early-phase cancer trials believe that, by enrolling in a trial, they're taking active control over the course of their disease, or that, once they've been accepted into the trial, they can do something to improve their particular chances of benefiting?
Well, the first and most general thing to say is that the perception that an event is subject to one's control may not actually reflect an error. After all, many events are controllable. Nevertheless, the perception of controllability can generate two kinds of mistakes, and I want to talk about these briefly. The first mistake is to view an event that is not subject to one's control as if it were. So, for example, a gambler may come to think that she can control the roulette wheel by engaging in certain rituals. This is a mistake I'll call an illusion of control. The other mistake is to exaggerate the extent of control that one can actually exercise over an event that is in fact controllable to some degree. Whether I experience a heart attack is partly subject to my actions, but I can exaggerate the extent to which this is true, minimizing or ignoring the contribution of other factors, such as my genetic makeup, that are not subject to my control. This sort of mistake is what I'll call an exaggeration of control.

So does the perception of control by participants in early-phase cancer trials reflect either of these kinds of errors? The answer may seem straightforward, because the purpose of these trials is not to provide therapy to the patient-subjects who participate in them. As we all know, phase one cancer trials are designed to test the toxicity of the drugs in the trial; they're not designed to provide health benefits to those who receive them. If the trials are not even designed to benefit those who participate in them, but are instead intended to generate generalizable data, then a person who thinks she can control what happens to her in the trial is committing an error in this sense. On this view, the perception of control is an illusion of control. But of course this view can be challenged.
One can't infer from the fact that a trial hasn't been designed to benefit its participants that it cannot actually provide them with some real prospect of benefit. The purpose of a trial and its probable effects aren't the same thing. And furthermore, there is some evidence to suggest that participating in early-phase cancer trials provides some, admittedly very low, prospect of therapeutic benefit. Given this, patient-subjects could rationally believe that these trials offer them at least some prospect of benefit, and they could rationally believe that by participating in them they were exerting at least some kind of control over the course of their disease. Yet even if this claim is accepted, it remains true that many of the patient-subjects in the trials we've looked at have an exaggerated sense of the degree of control they can actually exercise in this context. As I've already pointed out, patient-subjects overwhelmingly tend to think that they'll benefit personally from participating in these trials, and that their prospects of benefit are substantially higher than those of most others who are participating in them. So their views manifest an exaggerated sense of control, even if you want to say that the perception of control isn't, strictly speaking, illusory.

So, to recap and conclude: I've argued that the optimistic bias is relevant to informed consent in the context of clinical research, especially when it comes to early-phase cancer research. I've argued that this is a context in which we should demand high-quality informed consent. And I've provided some preliminary evidence for the perception-of-control hypothesis, the claim that the optimistic bias is engendered by an illusory or exaggerated sense of control. The next step for my research team and for me will be to develop interventions to address the optimistic bias in this population.
And to develop effective interventions, we're going to, in all likelihood, need to think about how to address the perceptions of controllability that trial participants seem to bring with them to the informed consent process. Perhaps I'll have the honor of discussing our work in this area at a future MacLean conference. I'd like to acknowledge that the work on these projects is funded by the NIH, and this is my study team. Thank you very much.

Questions? I'll just observe that we began the afternoon with the Garrison Keillor of bioethics, and now we've got the Lake Wobegon phenomenon, where all the children are above average, in research. So, time for questions. Yes, please.

Thanks for the nice talk. I'm sorry, Abe Schwab, IPFW. One question I have is: isn't it possible that it's sort of like getting a bonus? You have the drug, which may or may not have any benefit, is unlikely to have benefit, but could in a small subset of cases. And then you get something like the placebo effect on top of it: you think things are going to work out well, and so they work out well. So the question is, is there actually a meaningful harm that comes from participants having this optimistic view? That is, is there something we can identify as, here's how things went wrong for this research subject because they had the optimistic view?

Right, so yes, that's a very good question, and there are two responses. First, there is a potential moral harm that can come from that, because when we enroll patient-subjects into early-phase cancer trials, or any form of research whatsoever, we're trying to make sure that they're voluntarily and willingly participating in this venture with full understanding and appreciation.
And what our research suggests is that if you're coming into that context with a bias, and you're not appreciating how the risk-benefit information applies to you, then you're not informed in the right kind of way; you're not applying that information to yourself. So the moral harm is that you haven't given your informed consent, and we're bringing you into a trial that is not for your benefit.

The physical harm, and potentially the social and psychological harm, is this: there are other things people can do with their lives besides being research participants, and especially when we're talking about early-phase cancer research. For people enrolled in phase one trials in particular, there is often no other intervention available. So their options are this trial, which is a perfectly fine option if they're fully informed, or going home and doing other things in the time that they have left. So that's another harm: if they're participating in the trial and they're not fully informed, they could be doing other things, and they're robbed of the opportunity to do those other things. Does that make sense to you?

Sure. I mean, since you asked, I would say, on the first response: that would apply to everybody, though, right? That wouldn't be specific to research subjects. All informed consent would then have that problem, right?

Well, I did argue that I'm not so concerned there. We generally don't tend to be so concerned with the presence of biases in a non-research context, because in the clinical context there's assumed to be a general beneficent motivation, right? So we're not as obsessed with making sure everything is perfect.

Thanks.

One more question, yes.
I just wondered if you could comment on whether you see any difference between optimistic bias and hope. Meaning that, if I were going into a phase one clinical trial, I could rationalize that it's for testing safety in human beings and that fewer than 10% of patients are going to have a perceived benefit. But if I were in the act of dying, I would still hope that I was the one who would receive the benefit. And if that's the case, when you go into the intervention phase, I'm wondering if you're looking at linking the literature in social work and chaplaincy and palliative care on reframing hope in decision-making.

Yeah, so hope is itself an entirely different construct, and we haven't studied hope as its own independent construct. We've studied dispositional optimism alongside the psychological construct of UO. And dispositional optimism is different even from hope, right? So it's complicated. We have not looked at the hope construct; we've looked at dispositional optimism and unrealistic optimism. So I don't really have anything to offer you on that. Sorry.

Thank you very much, Lynn. Thank you.