All right, let's get started. Hi, everyone, thanks for joining. I am Dan Kramer. I'm a cardiologist at the Beth Israel, where I lead the electrophysiology and digital health section at the Smith Center for Outcomes Research, and I'm delighted to welcome you all to the Contemporary Books and Bioethics seminar. We are particularly delighted to have Professor Jennifer Blumenthal-Barby, a philosopher and bioethicist with an incredibly timely and topical confluence of expertise that crystallized nicely in her new book, Good Ethics and Bad Choices. Just by way of background, Professor Blumenthal-Barby earned a PhD in philosophy from Michigan State, and her meteoric rise since then has included several prestigious awards, including selection as a Greenwall Faculty Scholar in Bioethics and multiple grants from the NIH and the Patient-Centered Outcomes Research Institute to study the ethics of decision making. She's currently the Cullen Professor of Medical Ethics and the associate director of the Center for Medical Ethics and Health Policy at the Baylor College of Medicine. Jenny's work focuses on fundamental questions around autonomy, agency, judgment, and decisions, with an approach that blends theoretical analysis in the philosophical tradition with empirical research that's deeply embedded in clinical care. As we'll hear, she's increasingly explored the contribution of behavioral economics and cognitive psychology to our understanding of decision making at the bedside as well as in health policy settings. I think it's clear that this work has never been more vital to public discussion and analysis: global events have forced everyone to think about the mechanics of public health mandates, nudges, implicit and explicit rationing, and gatekeeping, all with the goal not just of advancing critical public health goals, but of doing so with moral authority and democratic legitimacy.
At the same time, we've been bombarded with revelations emerging from tech companies about the scale and scope of personal data being collected, and that's brought new attention to the way these sophisticated tools are not just monitoring what we do and how we make decisions, but actually shaping the choices that we make as we navigate this increasingly connected world. So we're really very fortunate to have Professor Blumenthal-Barby guide us through these muddy waters where public health, personal medical decisions, big data, choice architecture, and normative ethics all converge, and I thank her again for joining us remotely in this sort of funny form that we have. Just to highlight a few nuts and bolts before we hand the microphone over to Jenny: if you have questions, you can use the Q&A feature, which is on the controls at the bottom of your screen. I will try to navigate the Q&A and chat towards the end of the presentation so we get some sort of live feedback from our speaker. If you have technical issues, please use the chat feature as well. And please keep an eye on email and Twitter for upcoming events, news, and other programs coming from the Center for Bioethics. So with no further ado, I will hand the presentation over to Professor Blumenthal-Barby.

Thank you so much, Dan, for the introduction and also for inviting me to speak in the Contemporary Books and Bioethics series. Let me just start my screen share. As I said, I've been working at the intersection of behavioral economics and decision making for a while, and a couple of years ago I figured it was time to pull everything together and write a book on the topic. The book I'm going to be talking about today is titled Good Ethics and Bad Choices: The Relevance of Behavioral Economics for Medical Ethics. Here's a snapshot of the book's structure; there are five chapters in the book.
I just put this up to give you a high-level overview of what is covered in the book and how the argument flows. The first chapter is really all about decision psychology, so it's very empirically oriented. It's got a lot of really great, fun examples of decision-making quirks, heuristics, and biases and how they play out in medical decision making. The book then takes more of a normative turn: chapter two argues that what all of this means is that we have really significant concerns for decision making, particularly patient decision making, in the domains of autonomy and decision quality, with implications for patient well-being as well, and I'll be walking through some snapshots of these arguments during the talk. This paves the way for thinking about the use of nudges and choice architecture, so the third chapter is a qualified defense of nudging: I try to advance several arguments in defense of using nudges and choice architecture in medical decision making and defend those arguments against objections. Chapter four gets a little bit more into the weeds of some philosophical distinctions, thinking about different kinds of nudges, transparent nudges and non-transparent nudges, and also asking some philosophical, theoretical questions about how we ought to think about nudges: should we think about them as manipulation, and what does manipulation even mean? So there are some fun philosophical puzzles and more theoretical musings in chapter four. Chapter five was a really fun chapter for me to write; I call it nudging in the weeds.
That chapter is really a case study of how nudges get implemented in real clinical medical decision making. It's based on my time being embedded in several clinical contexts, fetal surgery, pediatric intensive care units, prostate cancer decision making, and psychiatry, observing how nudges were employed, knowingly or unknowingly, in those contexts to influence patients' decisions. So that gives a basic overview, and now I'll dig into some of the content. As I said, the first part of the book is really asking this question: how do patients decide? Let's start with a quote from Carl Schneider's really great ethnographic work on the practice of autonomy. In that book he articulates a decision-making ideal that I think all of us who work in bioethics and medicine will recognize. What Schneider says is that over the past 40 years there has developed an assumption that the physician's principal task is just to remove impediments to the exercise of autonomy. If those impediments are gone, people will naturally gather evidence about the risks and benefits of each medical choice, they'll apply their values to that evidence, and they'll reach a considered decision. Now, behavioral economics is a really fascinating field of research, but one of the things it has done is pose, I think, a significant problem and challenge for that ideal that underlies so much of medicine and medical ethics and the aspirations we have with regard to decision making in that space. In particular, the field of behavioral economics has shown that patients typically use all sorts of mechanisms that deviate from this ideal: they use a lot of intuition in their decision making, and their decisions are also impulsive.
They employ a lot of decisional heuristics that can lead to several identified cognitive biases in their decision making, and I'll be giving some concrete examples of these findings in the next few slides. Let me say a little bit about decision making via intuition and impulse; I'll spend most of the time talking about heuristics and biases because I think they're very fun. One of the things Carl Schneider wrote about in his ethnographic study of how the practice of autonomy actually happens in real life is that he found that, quote, even patients who were sufficiently well educated and reflective to write memoirs frequently described themselves as having no decisional process at all. Instead they invoked intuition, instinct, and impulse. Here's a study that illustrates this. It was done with 130 living kidney donors, and the researchers were attempting to understand how they made the decision to donate. What they found is that 62% of the donors described themselves as making an immediate choice to donate; only 25% described themselves as having engaged in deliberation about whether to donate. We found a really similar thing in research that we did several years ago to understand how patients make the decision to get a left ventricular assist device. This is a very significant, life-changing decision. We did in-depth qualitative interviews with 45 patients who had made this decision, were in the middle of making it, etc. What we found is that 30 out of the 45 people reported really quick and reflexive decision making, 28 told us that they didn't feel like they even had a decision to make, that they didn't have a real choice, and 22 of the 45 deferred very heavily to their clinicians. So that gives a glimpse into this world of decision making that tends to be more intuitive and impulsive.
I'm going to spend the bulk of the time digging into the heuristics and biases, because I think this is one of the really important contributions that behavioral economics has made to the discourse on decision making that's especially relevant for medical ethics. One of the most well-studied and most talked-about biases is something called the loss aversion bias. Here we see the phenomenon where losses loom larger than gains for people. In the context of medical decision making, for example, there was a study where they presented patients with a hypothetical decision about angioplasty. One group of patients was given the information in a loss frame: they were told that 1 in 100 people have complications from this procedure. The other group was given gain-framed information: they were told that 99 in 100 have no complications. Now, this is the exact same information, just framed differently. They then asked people what they would decide, and they found that in the loss frame 49% of people said they would refuse the angioplasty, compared to only 15% in the gain frame. That's really, really significant if you think about the implications for medical decision making. Here's another bias, sometimes called the frequency bias, and what is meant here is just that risk information framed as frequencies is more influential, more salient to people, than information given in terms of percentages. Let me tell you about this study; it's interesting because it's not about patient decision making, even though most of my book focuses on patient decision making and the implications there. This is a study that looked at clinician decision making, because of course we all have these biases too. In this study, they gave psychiatrists a risk profile of patients.
One group of psychiatrists was told that the patient's risk of violence was 20 in 100; the other group was told that it was 20%. Again, same information, framed differently. What they found is that 41% of psychiatrists said they would refuse to discharge the patient when given the frequency frame, and only 21% said they would refuse to discharge the patient when given the percentage frame. This really demonstrates that frequencies are more salient and powerful; you can almost imagine that 20 in 100 allows somebody to vividly picture 20 real people out of 100, as opposed to a percentage, which might feel a little more abstract. Another relevant one is what's sometimes called comparative risk bias. Here the finding refers to what happens when you give people not only risk information about themselves, but that information alongside the average risk of other people. This was a study looking at women who were at risk for developing breast cancer. One group of women was given their risk score, their Gail score, their absolute risk of developing cancer. The other group was given that information alongside the comparative risk information for most other women, and this was a higher-risk group of women, so they could see that they were more at risk. The thing to note is that the people who received the comparative risk information were more worried, and they actually engaged in more screening as a result of receiving that comparative information. Another sort of comparative phenomenon we see is something called the comparison contrast bias, sometimes called the decoy effect, and this refers to what happens when you introduce different numbers or kinds of options and where you position those options relative to each other.
The way I like to think about this is that when I go to Starbucks, I will tend to just order the grande because it's in the middle between the tall and the venti. This is a tendency we all have to choose what seems like the middle-of-the-road option. It's really relevant in medical decision making because we now know that if you're giving patients options and you think a particular option is best for them, you could position it as the middle-of-the-road option between a lesser option and a more extreme one, and that would nudge them towards that particular choice; they would have a bias towards it. Another pair that I think is really important for us to know about are what are called recency and primacy biases. These refer to the finding that whatever people hear first and last is most influential in their decision making. An illustration of this is a study that was done, again with women who were at risk for developing breast cancer, who were thinking about taking the preventative drug tamoxifen. What the researchers did is they just changed the order in which women heard about the risks and benefits of this drug. One group of women was told about the benefits first and last, and another group heard about the risks first and last. They found that the group who heard about the benefits first and last was more interested in taking the drug than those who heard about the risks first and last, because for the latter group the risks stood out as salient and the benefits were sandwiched between the risks. Another relevant bias is something called omission bias. This is really relevant in current times with COVID and vaccines; for example, there's been a lot of work done in this space on omission bias and understanding how it influences vaccination decision making.
The finding here is that people view bad outcomes that result from nonaction as more acceptable than ones they cause directly through action, even if they're worse. This was a study asking parents what their tolerance would be for a particular risk of death, a really low risk of death, and the researchers just varied the cause. You'd think, objectively, the risk is the risk, so you should be tolerant of it, or not tolerant of it, to some degree regardless of what the cause is. But they found that when the risk was described as coming from vaccination, parents were less tolerant of that very low risk than when the risk was described as coming from the illness. Now, on the flip side from omission bias is something called commission bias, also really relevant for medical decision making. We see this in cancer care, where some studies have looked at it, but in other contexts as well, of course. Here the bias is feeling that you just have to try something, that deciding in favor of trying something is better than doing nothing. A really cool study here gave people a hypothetical cancer diagnosis and said, okay, you have a choice between treatment and watchful waiting. People were told that treatment has a 10% chance of death and watchful waiting has a 5% chance. Now, you'd think that objectively you would choose the option with the smaller chance of death, but what they found is that more people, 65%, said they would choose treatment. This was taken to be an illustration of the commission bias, this drive we have to decide in favor of doing something.
A really relevant bias, one where every time I give this talk my colleagues who work clinically say, oh my gosh, you see this all the time, is the availability bias. The idea here is that you can give people statistics all day long, but at the end of the day, what is particularly influential in people's decision making is what is most vivid, most recent, most memorable, because that's what's most available in their mind, that's what stands out, and that's what really plays an influential role in driving their decisions. Here I'll point to work by Angelo Volandes, who has done a lot of work in this space of making certain contexts salient, or available, in people's minds. This particular study looked at the impact of giving people a visual image of what life with dementia is like and making that really readily available and salient in people's minds. One group of people was just given textual information about what advanced dementia is like: you won't be able to eat, you won't be able to ambulate, you won't be able to communicate. The other group was given that same textual information, but they were also shown a very vivid image, a two-minute video of a patient with advanced dementia who was not able to do any of those things. They then asked people to indicate their advance care planning preferences, and they found that of the people who saw the video, more said they would want comfort care only in such a state, because that video, that particular story, was really available and influential in their decision making. Professor Volandes has gone on to extend this work to CPR decision making, showing people short videos of resuscitation and things like that, which has been both interesting and controversial work.
One that many of you will have heard of, if you've dipped into the space of behavioral economics at all, is the default bias. Often this is talked about at the public policy level; organ donation is the most famous example of the workings of defaults, because the US has an opt-in system. That's our default: if you want to donate your organs, you've got to do something to donate them, and so we have relatively low donation rates compared to some European countries where the default is an opt-out system, so people are defaulted in to being organ donors unless they opt out. But I'm going to give you an example that's more in the clinical space; I think it's a really fascinating one. This was work looking at the effect of different default settings in advance care planning documents. For one group of people, the researchers set no life-sustaining treatment as the default; for the other group, they set provision of life-sustaining treatment as the default, and anybody could go in and override that. But what they found is that the default had a powerful effect on people's decision making: when no life-sustaining treatment was the default, only 20% favored treatment, but when life-sustaining treatment was set as the default, 38% of people ended up favoring treatment.
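As an aside, one simple way to think about the size of that default effect is with a toy "stickiness" model: suppose some fraction of respondents simply accept whatever the form pre-selects, while the rest deliberate and follow a baseline preference. This model and its parameter names are my own illustration, not from the study or the book; the only numbers taken from the talk are the reported 20% and 38% figures.

```python
# Toy model (an assumption, not from the study): each respondent either
# deliberates (probability 1 - s) and follows a baseline preference p of
# favoring life-sustaining treatment, or sticks with the pre-selected
# default (probability s).

def choice_rate(p_baseline: float, stickiness: float, default_is_treatment: bool) -> float:
    """Expected fraction favoring life-sustaining treatment under a given default."""
    deliberate = (1 - stickiness) * p_baseline
    stick = stickiness * (1.0 if default_is_treatment else 0.0)
    return deliberate + stick

# Reported figures: 20% favored treatment with "no treatment" as default,
# 38% with treatment as default.
observed_no_default = 0.20
observed_treatment_default = 0.38

# Solving the two equations in this toy model gives
# s = 0.38 - 0.20 = 0.18 and p = 0.20 / (1 - 0.18) ~ 0.244.
s = observed_treatment_default - observed_no_default
p = observed_no_default / (1 - s)

assert abs(choice_rate(p, s, default_is_treatment=False) - observed_no_default) < 1e-9
assert abs(choice_rate(p, s, default_is_treatment=True) - observed_treatment_default) < 1e-9
print(f"implied stickiness: {s:.2f}, implied baseline preference: {p:.3f}")
```

On this (admittedly crude) reading, roughly one in five respondents would be going along with whatever the architect pre-selected, which is one way to see why the framing of the form matters so much.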
Other people have gone on from there; for example, one of Dan's and my colleagues, Scott Halpern, has also done a lot of work in this space more recently. Scott often talks about how the effect is so strong that in his research, when the researchers went back to people afterwards for debriefing and told them, you were part of a study, your choice might have been manipulated by the architecture of how we set the default, so do you want to change your answer, to change what your preference is in your advance care document, nobody changed their answer. The anchor of what was set as the default was that powerful. Here are some more examples before moving on to talk more directly about the ethical implications and ideas around nudging. This is, I think, another really important one for us to be aware of in bioethics and medicine: the impact bias and associated forecasting errors. The idea here is basically that behavioral economists have shown that we do a very, very poor job of anticipating what future events will feel like or be like for us. In particular, we have an impact bias where we overestimate the negative impact of some future event, and we underestimate our ability to adapt. There have been a lot of studies here on, for example, how people adapt to quadriplegia and paraplegia and disability, so it's really relevant there, but the particular study I have here was done with dialysis patients. What the researchers did is they went to healthy control patients and said, just imagine that you had to be on dialysis, and asked them to rate their quality of life, or actually more their mood, on a minus-two to two mood scale. And the people said, oh gosh, when I think about that, I think it would be horrible.
And I guess I would rate a negative mood; they gave it something like almost minus 0.2. The researchers then went to the actual dialysis patients and said, all right, do the same thing, and the actual dialysis patients were positive, a little above 0.6, and it turns out that was pretty close to non-dialysis patients as well. So the patients had adapted to life on dialysis, but the people who are not on dialysis don't really grasp the idea that they would, in fact, adapt. And I think this is the last one I'll talk about; it's really important for medical decision making. The idea here is something called the sunk cost bias, sometimes also related to what people might call cascade effects or escalation effects, and the idea is that once people have already started going down a path, they have a tendency to continue down that decisional or action-oriented path just because they started down it, even if it is clearly not giving them any utility and might even be giving them disutility. The classic example outside of medicine is that you go to a movie, you buy a ticket, and the movie is awful. If it's awful, you shouldn't take into consideration that you bought a ticket; you should just suck it up and leave. But people tend to stay, and the reason they give is that they bought a ticket, which is not a, quote unquote, rational response; you should just leave. You can imagine a million ways this plays out in medical decision making, but one of the studies that looked at it asked why patients continue with a particular physical therapy regimen even when it's not working for them.
What the researchers found was that the reason could be explained by a sunk cost bias: basically, the patients were saying, well, we've already invested money here, we've already started to pay for this physical therapy, so they just kept investing more money and more time. Okay, so hopefully that gives you a flavor of some of these heuristics and biases that have been illuminated by behavioral economists, decision scientists, and behavioral scientists. A lot of these have been studied in medical decision making. Several years ago, and this is a dated review now, so I'm sure there are more, a research assistant of mine and I did a systematic review and at the time found over 200 studies of these heuristics and biases in medical decision making, covering a total of 19 different types of cognitive biases. In 90% of the studies that were looking at whether a bias was at play, the presence of the bias was confirmed. So they are very influential; they are real things that influence decision making. So what ethical issues does all of this raise? For me it raises two big ethical issues, and the first is how we think about the impact on informed consent and autonomy. If you think back to the basics of how we think about informed and capacitated decision making, you think about Appelbaum's criteria for basic capacitated decision making, and you're talking about things like understanding, appreciation, reasoning, and a clear and consistent choice.
And you think about what we mean when bioethicists talk about autonomous action, or philosophers talk about autonomous action: we're talking about people being self-governed, acting on their own preferences, which on typical philosophical accounts of autonomy requires intentionality, understanding, and the absence of controlling influence. Then you start to bring in all of these errors in forecasting future preferences, the dependency of preferences on context and framing effects, the biases in decision making that I've been talking about, the very intuitive decision making, and I think one conclusion we might reach is that maybe people are not so autonomous, not as autonomous as we typically think. I'm not going to develop this fully here; there's an entire chapter of the book dedicated to developing out this argument, but I think you can probably get a sense of the tensions and where the argument is headed. One could actually argue that these heuristics and biases mean that people are not capacitated. That would be a pretty radical argument; I think you might actually be able to make it, but I don't go that far. I just make the argument that they pose challenges and questions for the extent to which we think people's decisions and actions are really autonomous in the way we typically understand autonomy in the bioethical context. The second major ethical implication of all this, I think, is that we should be thinking about the impact not just on the process aspects of patient decision making but on the outcome aspects: do these heuristics and biases, and the various things we learn about how people make decisions, lead them to make harmful decisions? One of the arguments I advance in the book is yes, I think we have a lot of cause for concern here, and I'll just put out a few examples.
Imagine a patient who decides not to undergo a surgery because someone she knows died during a similar surgery; we now have a name for this, it would be the availability bias at work. Imagine a patient who refuses a life-sustaining treatment because she mispredicts the impact of an illness; again, we have a name for this now, it would be the impact bias, or a forecasting error. And those of you who work clinically, I'm sure you see these things happening, and I think what behavioral economics is doing is saying: this is a phenomenon that we have identified, studied, and named. Imagine a patient with low-risk prostate cancer who chooses immediate treatment with a risk of impotence; that would be a commission bias at play. One of the things I do in chapter two of the book is lay out a bunch of these examples, over 20 examples of real clinical decisions people have made that could be tied back to a heuristic or bias. And you might be wondering, since this argument is that these heuristics and biases raise concern about patients making harmful decisions, what do we mean by harmful decisions? One of the things I do in the book is recognize that there are a lot of different philosophical accounts of harm or well-being, but argue that regardless of what your theory is, you should be concerned. So I walk through it: if you mean health-related harm, patients making decisions that result in unnecessary pain and suffering and hospitalizations, yes, there are a lot of examples and cases where heuristics and biases pose concerns for that. If you mean something more subjective by harm, harm to the patient's own identified desires, goals, and values, what's important to them, I argue that yes, these heuristics and biases pose a lot of concern for that too.
Or take objective views of harm: some philosophers like to come up with lists of objective goods and objective harms, and one of the things that is on almost any of those lists is health and well-being, because it's a kind of primary good that is essential for pursuing other goods, such as the goods of relationships and things like that. I'm talking not about perfect health, of course, but about basic aspects of health and well-being. So I won't go through this in detail here, but just to give a nod to it: I think that regardless of what philosophical account of harm one might have, there's going to be a story to tell about why we should be really concerned about the impact of these heuristics and biases on patients' decisions. So what do we do? I've tried to raise concern, to say ethicists, bioethicists, and clinicians should be concerned about this; what do we do, what's the positive response? I think there are three options, and you can see all of them as sitting on a spectrum. One response that people sometimes have when I give this talk is almost a call to return to really old-school paternalism: oh my gosh, well, patients' decisions just aren't autonomous, so we shouldn't be worried about autonomy anymore; we should just be really paternalistic and override what patients want in favor of what we think is good for them. I think that's a very extreme response, and it's not the direction we need to go in. Then you have the folks whose reaction to this is, okay, well, we just have to find a way to strip people of all of these heuristics and biases and return them to a really neutral, informed, rational baseline. I don't think that's possible, and I'll go into that argument a little more when I get to the next section on the ethics of nudging.
But I'll just say that there's been a lot of literature showing that debiasing is successful in some contexts, like getting rid of statistical biases; you can do a better job teaching people how to reason statistically. But a lot of the biases I talked about are really psychologically deep-seated, so it's going to be really hard to get rid of all of them and strip people down to this idealistic, neutral, rational decision maker. So instead, the approach that I advocate for, and that other people have advocated for, is that we really try to understand decision psychology, we really try to understand the heuristics and biases that influence people's decisions, and we try to harness them to make sure that they are leading patients towards decisions that are actually in line with their values and goals. This is the whole idea of nudging and choice architecture. The way I like to think about nudging, at a basic level, is trying to use this knowledge of decision psychology and behavioral science to shape people's choices and lead them towards decisions that are more in line with their values and goals; engaging in this activity is sometimes called choice architecture. Thaler and Sunstein recognized this and wrote about it in their book Nudge, which really kicked all this off back in 2008, where they say a choice architect is somebody who has the responsibility for organizing the context in which people make decisions, and that many real people turn out to be choice architects without even knowing it. They actually explicitly name physicians, or clinicians, as a group of people who are inevitably placed in the position of being choice architects. Now, when I use the term nudging, that might lead you to think that nudging is one thing, but really there are many different tools of nudging, many different ways that you can nudge.
To think about it broadly: using knowledge of any of the heuristics and biases we just talked about, and there are many, many more, to shape people's choices would be characterized as nudging. Anytime you're using one of those insights, that would be an example of a tool or mechanism or instance of nudging. I would also refer people to a really nice report called the MINDSPACE report. This was put out by the UK government back in 2010, and it's a very comprehensive report on nudges. It organizes them in a nice mnemonic fashion: each letter of MINDSPACE stands for a different type of nudge mechanism. N, for example, stands for the use of norms; D is for defaults; and so on. I like to refer people to that because if you're trying to get a framework, it's really good. For those of you who are philosophers in the group: Yashar Saghai is a philosopher who developed a more philosophically detailed definition of nudging, which I think makes a lot of sense, and it's largely the definition that I use and adopt in the book when I'm talking about nudging and the ethics of nudging. On this definition, where A and B are persons: A nudges B when A makes it more likely that B will phi, that is, make some decision or do something, primarily by triggering B's shallow cognitive processes, while A's influence preserves B's choice set and is substantially non-controlling; in other words, it preserves B's freedom of choice. The terms highlighted here in red are the ones doing the philosophical work, such as shallow cognitive processes.
By shallow cognitive processes, Saghai means all the so-called system one factors: the heuristics, the biases, the quick ways that people might make decisions, in contrast to the more lengthy, deliberative laying out of arguments. Substantially non-controlling means that the person could fairly easily do otherwise, or choose otherwise, if they wanted to. So what is the defense of using nudging in medical practice? This is a lot of chapter three of the book, which is probably the lengthiest chapter, but if I had to boil it down to three arguments or defenses of nudging, these would be the three. The first defense is the idea of unavoidability: the idea that nudging is unavoidable, that we've got to frame things one way or another and we can't just turn a blind eye to it and pretend we're not doing it, so we'd better think carefully and reflectively about how to do it well. The second argument or defense of nudging in medical practice is the idea of decisional improvement, the idea that nudging can actually improve people's decisions. That's bolstered by a couple of additional ethical principles. One is the principle of easy rescue: understanding how people make decisions and using smart nudges or smart choice architecture is not a huge extra effort, and it can improve people's decisions and prevent them from making really catastrophic choices. It's a pretty easy lift, and if something is easy and we can help people, we ought to do it. In addition, we're in the realm of medicine, where physicians and clinicians also have beneficence-based obligations, so there's an extra obligation, by virtue of being a clinician, to promote and protect patient welfare.
So it's not just that any of us can improve patients' decisions; clinicians have this extra obligation to do so. Another major defense of nudging in medical practice that somebody might employ is what I call justified soft paternalism. The idea here is that nudging doesn't really violate people's autonomy because, as I've tried to argue in the first part of the talk and of the book, people aren't really making super autonomous decisions to begin with. So actually, by intervening with a well-positioned nudge, you're preventing them from making a non-autonomous decision that is potentially harmful to them, and that is the classic case of soft paternalism, which is usually justifiable: you are intervening to prevent someone, in a moment, from making a decision that's harmful and not particularly autonomous or voluntary. And in doing so you can actually protect autonomy; you're protecting somebody from acting non-autonomously. So those are the three big-picture arguments or defenses of nudging in medical practice. Now, each of those comes with major objections, so let me lay out the objections, and then the last part of the talk will be me trying to respond to some of them in defense of nudging. The major objection to the claim that nudges are unavoidable goes like this: somebody might say, okay, fine, we've got to frame things one way or another, and that's maybe going to influence people's choices, but we can certainly avoid intentional nudging. It's true that our decisions about how to order things or frame things will inevitably have some kind of effect on patients' decisions, but it's a different ballgame when we do it intentionally, and maybe when we do it intentionally it becomes more ethically problematic.
Another thing this kind of objector would say is that we also need to admit that more or less nudging can take place. For example, it's true that as a clinician you've got to decide the order in which you present options; okay, some nudging gets into that decision context. But when you add on a bunch of extra nudges, for example if you also show somebody that video of the patient with advanced Alzheimer's as a way to nudge them towards a particular choice, you didn't have to show that video; that's an extra nudge. So nudging isn't really an all-or-nothing thing, contrary to the point about unavoidability: we can do more or less of it. And then people in this space are also going to say: what about debiasing? I get that we should be concerned about people's decisions not being autonomous, I get that we should be concerned about people's decisions potentially being harmful, but why don't we just debias? Why do we jump straight to nudging? So let me move to some of the potential responses to that line of objection. The first thing I wonder about is this idea that intentional nudging is avoidable. One thing that I've, I don't want to say struggled with, but found morally interesting, is that the more you learn about how choice architecture and decision psychology work, the harder it is to say that you're not being intentional in the way that you lay out a choice. Now I know all these things; now I know that if I tell people a story, or if I talk about the risks first or talk about the risks last, I'm going to influence their choice. So I can tell myself a story where I say, oh well, that's not intentional; to use a famous philosophical distinction, I merely foresee it but don't intend it.
And I guess I'm not particularly convinced by that "I merely foresee it but don't intend it" line in this context. The other thing I would wonder is: even if intentional nudging were avoidable, what is the moral relevance of unintentional versus intentional? If we're going to influence the patient's choice, we're going to influence the patient's choice, and assuming we're doing it for good, trying to do it in a way that's in line with their interests, what is the particular moral relevance of saying I did it intentionally versus unintentionally? But I do totally agree with the claim, and this is something I emphasize in the book, that nudging can happen more or less. I think that's something Thaler and Sunstein, and a lot of other people, kind of skate over with this whole line that nudging is unavoidable. So one of the things I try to do in the book is develop considerations and ethical guidelines for identifying contexts or markers where less nudging should take place, where we should lean towards implementing less nudging compared to other contexts. So let's go back to the second major argument, about nudging improving patients' decisions. I think the major objection here, and some of you are probably predicting it, is this question: okay, you're saying that nudging can improve patients' decisions, but how do we define good? How do we define improving decision outcomes? What do we mean by that? And even if we got it right, even if we knew what we meant by a decision being better for a patient, how do we deal with the fact that clinicians, as I showed in the psychiatry example, are also fallible? They have their own biases. Now we've got the problem of the fallible choice architect.
So how are we going to improve people's decisions if the nudgers, the choice architects, have all of these biases too? What I try to say in response is this. One thing I point out is that whatever your theory of the good is, when we say a good decision, a good outcome, or a patient being better off, we can mean a lot of different things by that. But there are certainly cases of convergence: whether you're a hedonist who thinks what's good for people is happiness, or a subjectivist who thinks what's good for people is satisfying their own goals or desires, or an objectivist who thinks there's some list of things that are just objectively good for people, there are going to be cases where, if those people were sitting in a room together, they would agree that the particular decision the patient is making is not in line with any of those things. And even though this gets pretty theoretical, there are just cases where reasonable people can agree that a patient is making a decision that's not really good for them, that we could improve it, and that this is what it would mean to improve it, this would be a better choice for them. In response to the fallible choice architect worry: I think it's a real concern, but there's some interesting evidence that people are less subject to these kinds of errors when they are deciding for, or engaging in choice architecture for, others than when they're making decisions for themselves, and I talk about some of that in the book. And, very importantly, one thing to give pause to people who are really concerned about the fallible choice architect: well, what's the alternative? The alternative is that we don't do anything.
And then we leave patients susceptible to all these random heuristics and biases, or to ill-intended nudges, like a drug company trying to get them to decide in favor of a particular drug because it will give the company more profits. That's one of the things we would be countering, and it seems to me that even a fallible choice architect, one with good intentions and a reasonable process to try to combat error, might be the better alternative. One thing I think we should do, though, is use what I call in the book the principle of minimizing nudging when there is low expected utility gain. The idea is that if, for various reasons, we're not certain which choice would actually improve a patient's well-being or make them better off, that's a case where we should pull back and use less nudging, to the point about being able to do more or less of it. And now we get to the final argument, the defense of nudging from justified soft paternalism. I think the objector here is going to say: whoa, whoa, whoa, this argument moves way too fast. Okay, I grant you that maybe what behavioral economics is showing is that people aren't so generally autonomous when it comes to their decision making, and maybe we've got some cause for concern. But when you enter in with nudges, you're not really protecting or promoting autonomy; you just continue to disrespect it, because you're failing to step in and fix it, or foster its development, or promote it. That's the disrespect. Nudges don't really respect autonomy; they disrespect it, because the thing to do if you really respected it would be to get in there and try to fix some of these problems rather than just employing nudges.
In response to this objection, and this is sort of my last slide before I stop and open things up for Q&A, I try to ask: what exactly do people mean by fostering or promoting autonomy when they make this objection? The answer is usually that you're not enabling people to decide more on the basis of intentionality, deliberation, and digging into deeper values and goals. This is what's known as a reasons-responsiveness view of autonomy: the idea that autonomy, and respecting autonomy, is all about getting people to articulate reasons and be responsive to their reasons in their decision making, and that this is the ideal we should be working towards. I try to develop some responses to this. The first points to the practical problems: again, the neutrality myth, and the data showing that debiasing is not very effective. This might be a nice philosophical ideal, but there are practical problems, and empirical data, that don't point to its being a particularly achievable goal, if that's what you really mean by this narrow sense of respecting autonomy. There's also an interesting line of argument drawn from the work of George Sher, in his book Beyond Neutrality. It's an older book, but it makes an argument about why the state shouldn't be neutral, about the state being non-neutral and using various mechanisms, nudging wasn't around then, so he doesn't talk about nudging, but mechanisms similar to it, to influence citizens' choices about how to live their lives. One of the points he makes is that non-rational influence can enhance a person's later capacity to see or respond to reasons.
This is an idea I pick up and develop in the book. The idea is that maybe at first a person is just nudged towards a particular choice, and they don't make it on the basis of any reason, but over time they come to see the reasons behind that particular medical decision, or why it was good for them. So the choice might not come from reasons in the moment, but the person can later develop a capacity to respond to the reasons that were always there, and to see them. Another thing I try to develop is the idea that nudging, done well, reflects a choice that has good reason behind it: you nudged a patient towards something because they had a particular set of goals or values. In that sense, it's not reasons-responsive in the way we typically think of reasons-responsiveness, where there was a reason and the patient appealed to that reason in reaching their decision. But it's not as if reasons are just out of the picture; reasons still have a role to play, and there's still a kind of reasons-responsiveness going on when we nudge somebody. For people who are particularly worried about losing the role of reasons and reasoning in the decision-making process, I think there's still an important role for them to play, even in the realm of nudging. And the last point concerns how we think about autonomy. Some philosophers have written about autonomy as really a meta view; these are called second-order theories of autonomy, where a patient's choice is autonomous so long as they endorse the process that they used to come to their choice. This explains why deferring to somebody else could be autonomous, even though we might initially think, wait, is that autonomous? They just deferred to somebody else.
The second-order theories are really inspired by the work of Harry Frankfurt. A person's choice could be autonomous if they have a second-order desire, on reflection, that somebody else make their choice. And I think we can say a similar thing about nudging. My idea is: what would the person's attitude be towards being nudged to make a particular choice? If they would endorse nudging as a process by which they came to make their final choice or decision, then on second-order theories of autonomy there's a compatibility; we can say nudging is totally respectful of autonomy. Sunstein has done some really interesting empirical work looking at whether people actually endorse nudges: do they have that kind of second-order endorsement of nudges, or do they resist them as a process by which they came to make a decision? What he found is that people endorse being nudged when the end is viewed as legitimate and important. Those are the two markers people generally use in viewing nudging as acceptable. So those are some ways to think creatively about the compatibility of nudging and autonomy, because I think there are a lot of ways to think about that compatibility, and we don't have to jump straight to the notion that nudging is incompatible with, or disrespectful of, autonomy. And then, just to give a nod to the last couple of chapters of the book, for those of you who want to pick it up later and dig into a particular chapter, because I know people don't necessarily always read whole books anymore: these are some of the topics covered in chapter four, so if any of the topics in this word cloud interest you, those are the conceptual topics I dig into in chapter four.
And then chapter five, as I said, really gets into the weeds, looking at how nudging operates in very specific clinical contexts. For those of you who have an interest in that, I think it would be a fun chapter to jump into. I also want to acknowledge the funders of this work, not necessarily the writing of the book itself, but the projects that contributed to the ideas and case studies used in it, and to thank a couple of people. Bernie Lo has been especially supportive of this work and this book. Baruch Brody, who has sadly passed away, really encouraged me to do a book project. And I have a couple of decision science colleagues in Houston who were just really great, because I came in as an ethicist and philosopher who didn't know much about decision science, and they really welcomed me into the decision science world, so I'm particularly thankful to them. I will stop sharing, and that's all I've got.

That was an incredible overview. As someone who has read the entire book, I encourage those who are listening to dive more deeply into each of those arguments, counterarguments, and some of the practical applications, for clinical care in particular. Folks can submit questions through the Q&A function at the bottom of the Zoom grid, and I have a couple just to start. You mentioned towards the end some research on how individuals feel about being nudged when they learn about it. That question can be asked, of course, like all questions, in different ways, including the way you mentioned in Dr. Halpern's work, where patients were told that their advance care planning documents were structured in a way that certain defaults may have influenced the decisions they made.
And I'm curious whether your work, or others' that you could highlight, has identified whether patients seem to care not just about whether they were nudged but about the way in which they were nudged. That is, does it matter which heuristics were intentionally brought to bear to help shape their decisions, or does their assessment of having been nudged depend more on the context and the consequences, or on how different, on reflection, their decision making might have been?

Thanks, that's a great question. I will say up front that I think there's a lot of room for really interesting research in this space, because from a normative perspective the view people have on being nudged is really important, and to my knowledge not a ton of empirical work has looked at these different questions: is it the type of nudge, is it the context, is it the intention? As I said, the work Cass Sunstein has done, this is in his book The Ethics of Influence, where he reports a lot of empirical work on nudges, really seems to boil down to the role of the person, whether they should have a role as the nudger or choice architect in that particular context, and what their intentions are: are they intending good for you, or are they intending to benefit themselves? Those have been some of the really important markers of endorsement, at least in the work Cass has done empirically. And to the point alluded to at the beginning: even asking how people feel about particular nudges or nudge processes in a particular context is challenging, because it's so meta. We know that the way in which we frame the questions to gauge endorsability is going to influence whether people say they endorse it or not. We just know that.
A particular challenge in this area is that if we really want the truth of the matter about what people feel about particular types of nudge mechanisms in particular contexts, we have to do a lot of work to understand these effects and ask the questions in a non-leading way, or in a way that minimizes the influence as much as possible, so that we get honest answers. So I think that's a really important and challenging point about studying this question.

A related question that comes up: of the different heuristics you described that can be leveraged to nudge or mold individual decision making, the one that bothers me the most is the default bias. What bothers me about it is that it's pervasive; it manifests in everything we sign up for online, in retirement plans, and in lots of health care decisions as well. And it's very powerful, as you illustrated with some of the research you cited: quantitatively, it really clearly makes a difference, just as much as the ordering of things on a list makes a difference for what people choose, or how you frame things relative to the different sizes of coffee cups that are available. And that holds even for decisions of great magnitude, like the selection of life-sustaining therapies as a default, or choosing to donate an organ or not. But it's also incredibly simple: it's very easy for a programmatic or structural actor, in public health for example, to structure a default as a nudge. It can manifest, either intentionally or not, as part of group affiliations, whether individual religious affiliation or politics or sports affiliation; any of those kinds of tribes can influence the default decisions that people either make or can be induced to make. And I wonder, if not in your book as much, then in other work that you've done:
Have you focused on the default heuristic in particular? It's so pervasive, so powerful, and so subject to utilization in nudges that I wonder if it deserves particular philosophical scrutiny.

Yeah, it's an interesting question. I have not focused on it individually, on whether there are special issues going on there. I think that just because it is powerful does not necessarily mean that it is morally problematic. One of the things that's really important when we're talking about defaults is the ability to overcome the default. For example, part of the discourse around the ethics of defaults in organ donation has been: how hard is it for somebody to opt out if they really want to opt out? That's something that is really relevant. What particularly bothers me about a lot of defaults is that they're very pervasive, they're used in a lot of contexts, and it's often just very hard to opt out. Take the companies that sell your data, for example: if I wanted to get around that default, I would need to read a really long document, understand what's going on, and then take all these extra steps, and so I don't do any of those things. So if you're concerned about defaults, one approach is to look at the conditions under which someone would need to opt out or get around the default. But I don't think the bare fact that a default is particularly influential makes it particularly problematic or bad. I think, for example, of the work on the use of defaults in research ethics that Neal Dickert has been doing.
What he's been looking at is the default that, right now, when people enter a clinical care setting, it's assumed that they don't want to participate in research, and they have to do all these really specific things to opt in to research studies and consent to them. He wonders: what if we had a default of at least consent to contact, where we assume by default that it's okay to contact you to talk about particular research studies and whether you want to participate? That has turned out to be extremely powerful, and it has really changed the default. But has it changed the default in a problematic way? After all, the previous default was just set in the other direction. So when you think about the ethics of defaults, yes, they are powerful, but a lot of the time there's another default setting already in place, and that's part of the moral equation to think about: what's the current default that's already operating?

We have, I guess, a related comment from the Q&A board, which is a proposed challenge to your reasoning about the unavoidability of nudging as one of the justifications for intentionally leveraging nudges in public health or medical decisions. I'll try to do it justice. Suppose we find that options placed earlier on a list tend to be chosen more frequently than those placed later. The libertarian paternalist thus places the more prudent options first, taking care to preserve all options. To be sure, the options had to be presented in some order or another, but to see that this nudge isn't innocuous, think of the same strategy used by someone who is trying to sell you something.

Well, I guess it would depend: are they trying to sell you something against your interest or in your interest?

It's certainly in the interest of the seller; I suppose it may or may not be in yours.
That's right. So I think what is particular about the medical decision-making context is that we're assuming the intentions are to benefit the patient. That's also the idea of libertarian paternalism: the main intention is to do something that makes you better off, in contrast to something like libertarian welfarism, where you're trying to nudge somebody in a way that makes other people, a population or a group, better off. So it depends: if somebody is trying to sell me something with the intention of making me better off, and they're putting certain options earlier in the list to nudge me to choose them, and those options actually will make me better off, that would be the classic libertarian paternalist setup, and I wouldn't see a problem with it. I think the problem comes in when the intention is not to benefit the person being nudged; then it's not really a libertarian paternalist motivation, because it's just somebody trying to sell stuff and benefit themselves. So I think that would be the contrast.

The next question concerns the clinical context in particular: if you are involved in clinical ethics consultation, how does this affect the way you assess clinical situations during those consults? Do you address nudging there, implicitly or explicitly, when you see it and realize that the nudger may or may not be aware of it, particularly when the patient seems to be benefiting from the nudge itself? How ought that to be navigated by those of us who participate in bedside clinical ethics consults?

I think that's a great question, and I think it should be talked about more explicitly. If we have a language to understand as a team how nudging is happening, and we see it happening, I think we should call it out and talk about it. It's not always an ethically bad thing.
That's one of the points of the book: whether it's an ethically problematic or an ethically good instance of nudging depends a lot on the context of what's going on. What are we nudging towards, and why, and how do we know? How are we coming up with the end that we're nudging towards? One of my hopes for this book, maybe an idealistic hope, is that people would have a language for this and could see certain types of nudges or biases when they're happening, whether arising organically in decision making or being used to influence a patient's decision, and that there would be more explicit conversation about the ethical aspects of that. That would be a great hope. I once gave a talk to a supportive care group that, as a team, had developed such a language; they had learned about these different heuristics and biases, and I remember them telling me that they use this language all the time and call each other out. They'll say: are we as a team just falling prey to the sunk cost bias? And they would have explicit conversations about the role it was playing in their decision making or in counseling patients. I just thought that was such a great thing; I think it would be great if we did more of that.
And I'll highlight, for people who have not yet read the book, that there are some wonderful and very moving vignettes from neonatology and pediatric intensive care in particular, where qualitative research gathers really heartfelt views from frontline clinicians who recognize and wrestle with the ways in which they are clearly nudging families towards really devastating and incredibly difficult decisions, based on their own experiences and their best medical judgment, mindful of the fact that they are nudging because they clearly have a view towards what even that patient's, and likely the family's, best interests ought to be, what they think the outcome ought to be. So there's some really powerful work in the book that describes how that plays out at the bedside.

And just to jump in quickly, Dan: there are also some really reflective accounts of people who regret not engaging in nudging. I was thinking in particular about the qualitative work on pediatric intensive care, and I remember one quote from a critical care physician that really stood out. She is looking back and reflecting on the role of nudging in her interactions with patients, and she talks about one particular case, a child with very devastating neurological injuries, where she thought tracheostomy was not the best option for this patient and this family. Looking back, she uses this language: under the guise of neutrality, I allowed the family to walk off a cliff. And I kind of wish I had shaped their decision making more.
So I think it was also moving to see that struggle: not only, I know I do it, and how should I do it, but also looking back at times where maybe a nudge wasn't offered and there was regret around that. I want to switch gears very slightly towards another area where I know you've done a lot of empirical work, including prospective clinical trials with patients, which is shared decision making for particular interventions. It's a timely topic in part because in my field, I'm a cardiologist, there are growing numbers of mandates to use shared decision making very directly. These are mandates coming from the Centers for Medicare and Medicaid Services, and they have the force of law: if we don't document, for certain procedures, that we have participated in what is characterized in the statute as a formal shared decision making encounter, then we are subject to various penalties or non-reimbursement, so this has real teeth. And it's hard to overstate how strongly the cardiology community has reacted against those mandates, for all the reasons you'd expect: from people who thought they were already practicing some ideal of shared decision making, to people with a more evidence-based-medicine set of objections, that we don't know enough about what these tools do or how well they work. And how can you layer something that's really more of a mandate than a nudge onto something that is necessarily itself a bundle of nudges, whether it's videos or paper tools or both? And I know you've worked to develop some of these decision making tools, and I wonder, do you think we know enough, even for a reasonably well characterized decision like left ventricular assist device implantation? 
Do we know enough about the choice architecture that is critical to that specific intervention and that patient population, not only to design a shared decision making instrument that is useful, but then to mandate its use? Because those are different things. No one has to mandate, for the procedures that I do, that I wash my hands or that I use peri-procedure antibiotics, because the evidence for both is self-evidently worth following. And yet someone has decided that they have to mandate that I use particular shared decision making tools before certain kinds of defibrillator implantation. So I'm just curious what your views are on that particular topic, because it is hotly debated in cardiology. Yeah, thanks. So I think that is a very important pause point and criticism. It seems fine and good to just mandate that people use shared decision making tools like decision aids, but the point is a really valid one that I don't know if everyone who is developing these decision aids knows enough about the choice architecture in the decision aid and what's going on. And so I can imagine, if I was a clinician and someone said you have to use this with patients, I would ask, do we know how it influences their choice? The main thing that people who develop and measure the impact of decision aids, as a kind of booster to shared decision making, look at is how a decision aid affects knowledge. So the main outcome measure is whether it improves patients' knowledge about the choice. And in many cases it does, but you might think that's kind of a low bar, because it might improve their knowledge but it might also really heavily change the direction of their choice in ways that the developers have not thought critically about. 
So I think that there's a lot of work to be done there. Now, one of the other quality indicators for these kinds of shared decision making tools like decision aids is what's called values-choice congruence. In an ideal world, you would look at whether these decision aids point people towards or away from what they value. One potential check, for people who are worried about the choice architecture in these tools, would be some kind of assurance that, well, we've looked at this and it has values-choice congruence. That could help with that concern, but it's often not measured; it can be really conceptually messy and practically hard to measure, and people typically don't do it. So, the one that Dan mentioned, the LVAD decision aid: my team worked for a long time to develop and do an RCT of an LVAD decision aid. Another group did the exact same thing, and while both of our decision aids improved knowledge about the device, we found really different outcomes in terms of the number of people who ultimately chose the LVAD, depending on which decision aid they got. So, to your point, what is it about each decision aid? For our decision aid, a lot of people still chose the LVAD; there wasn't much of a difference between the intervention group and the control group. The other group found a pretty significant difference between the control and the decision aid, where more people who received the decision aid actually chose not to proceed with the LVAD. One might argue it nudged them against the LVAD. So what was it about? I mean, you have two decision aids about the same decision, both following all the criteria for development of these things. What was it about one that nudged choice in one direction and one that nudged choice in the other direction? 
And so, yeah, I totally hear your point, but if I was a clinician, I would say, let's look into this more before I'm mandated to use a particular tool. I didn't know that there were such disparate outcomes in terms of patients' willingness to move forward with such a profound procedure, one that has such important quality of life tradeoffs and potential survival advantage as well. Do you have a sense of what it was about one or the other LVAD decision tool that appeared to nudge patients in a different way? Was it that one really focused more on education and was a little less attached to the statistics about bad things happening or good things happening, or what was it about them that led to that disparate set of results? My hypothesis is that it has to do with the testimonials in the decision aid. There's been a lot of work showing the impact of patient and provider testimonials, of stories, on people's decisions, and that's a very common thing to be integrated into decision aids. We profiled four patients who had generally positive experiences with the device and one patient who had a negative experience, because that roughly matched the general odds. The other decision aid, I would say, had a different emphasis: they had a much bigger palliative care section, and they had a video of a patient who had declined an LVAD explaining their reasoning. So my hypothesis would be the difference in the testimonials, although I don't know. I mean, I think it would be a fascinating project to try to understand a little more about what was going on there. Your point about testimonials dovetails with another question that's come in through the Q&A, about the way in which, if you're designing a set of videos, for example, like Dr. 
Volandes has done, you have to be incredibly careful, I'm sure, about whose stories you sample, who's telling that story, and how they're telling it. This relates to a question that came in about how nudges function across and between cultures, languages, even ages, and all the different categories that we think can have both clinical and maybe normative importance when healthcare providers and their patients are interacting. And I'm wondering whether any of the work that you cited in your book, or your own work or your own experience using these tools, which are predominantly but not only available in English, has gotten into some of those cross-cultural concerns in particular. I think that is an area where there needs to be a lot more research: really understanding the cultural differences in different nudges, their effectiveness, and also their perception and acceptability. I don't think that there's a lot of careful, detailed work in that space. I think another thing that is complicated, and uncomfortable to think about, is that the more you know about a person, for example, if I'm from a particular cultural or religious background and you start to study me and you know what's important to me in that background and you use that in a targeted way to influence my decisions, there's more of a discomfort with that kind of targeted approach for some people. So I think it's a bit of a double-edged sword: the more we understand about differences in the role of culture in nudges, the more information and understanding we have about how people make choices. But if people aren't using that information responsibly and respectfully to shape choice, then it can be a problematic, or perceived as problematic, area of inquiry as well. 
And I'll just say quickly about the Volandes video, it's interesting, on this question of what we highlight in a testimonial: I was on the GeriPal podcast with Alex Smith a couple weeks ago talking about the book and about this particular example, and what I didn't realize is that apparently one thing that was really controversial about that video, that particular intervention, was that they showed the patient at a particular point with her tongue hanging out. It was an effective nudge, but there was a lot of questioning of whether that was the right thing to portray, and why did you show that? It's fine to show that the patient can't communicate or ambulate or eat, but that image struck people as the nudge, and as a problematic aspect of the nudge, that particular shot of the patient's tongue hanging out. So it's also very interesting to think about: you've got a nudge, but then you've got all these micro-nudges within the nudge that people might find problematic. Sure. And I think this alludes to something else that someone had commented on, which concerns those of us who were medical residents at one point. 
One of the thankless overnight ICU roles is figuring out what an individual's code status might be, in circumstances where everybody knows it probably ought to be do not resuscitate for a variety of clinical reasons, and there are lots of ways you can present that set of choices, usually to family members and healthcare proxies who are making that choice. In my residency training we got some discussion about how to have those conversations. We didn't use the terminology around nudging, but what we were told to think about is, when you talk to people about, say, the violence of CPR, are you describing that just with your words, or acting it out, which I'm sure some people have seen. I think your book includes vignettes where people talk about the vividness with which you describe what it's like to receive CPR, how vivid that story becomes, and how it actually influences the decisions that people make. I was just gonna say really quickly, I think this gets to the distinction between understanding and appreciation that is so often made in bioethics, right? You can understand, I can tell you in words what it means to have my loved one resuscitated, but you might argue that you don't really appreciate it until you see it, and often people will say, oh my gosh, that's what it means. So I think appreciation is a really important, although slippery, concept. 
That's great. And the other thing that occurred to me when you were describing the controversy about that particular aspect of the Volandes video is the weight and the responsibility carried by people like yourself and other groups, like Dr. Volandes's, who have tried to design these tools: every little decision you make about the order in which certain things happen, where you choose to put in specific statistics, and where you use just broad generalities like this is unusual or this is common. If I tried to write one of those decision making tools now, I would feel paralyzed knowing that every line of it, however many pages it turns out to be, plausibly functions in one way or another, that every line you choose has to be considered in the context of choice architecture and the possibility that you are, whether you like it or not, in fact nudging people. I would find that almost paralyzingly frightening. And I would just say, not to freak you out, Dan, but every line that you say to your patient: I'm like, oh my gosh, I'm glad I'm not a clinician, because every conversation and every word that comes out of my mouth would have that same effect, knowing what I know about decision making and how it might influence their choice. Yeah, well, on that compelling note, I suppose, about the responsibility we all have to think about the way we talk to our patients and the way we design tools intended to make that process more effective. 
Just on behalf of the Center for Bioethics, we thank you for participating in this unusual format, which I hope doesn't change the way we perceive your message at all. I would encourage people who are interested in this topic to read the book. I've found it really engaging, and I think it provides a deeper exploration of each of the issues that you were able to cover for us today. So thank you, everybody, for participating, and thank you again to Professor Blumenthal-Barbie for sharing your time and your insights. Absolutely. Thank you, Dan, and thanks to Harvard and thanks to everyone. Bye. Thanks, everybody.