Our next speaker this morning is Dan Brudney. Dan is a professor in the Department of Philosophy and in the College at the University of Chicago. He's also an associate faculty member in the Divinity School and co-chairs the Human Rights Program here at the university. Dan writes and teaches in political philosophy, philosophy and literature, bioethics, and the philosophy of religion. Dan is a regular teacher in our Clinical Ethics Fellowship Program and has been working with the MacLean Center for the last seven or eight years. He's been a great addition to our center. Today Dan's talk is "The (Herd of) Elephants in the Room." I didn't know if I should pronounce "the herd of"; it was in parentheses. Welcome, Dan.

Well, first I just want to thank Mark for bringing me on board to the MacLean Center seven or eight years ago. By trade I'm a moral and political philosopher, and what that means is that I deal with concepts. I deal with ideas. I deal with the sorts of things, for instance, that John was talking about: whether we can actually make sense of the thought that despite diversity in ethical beliefs we can reason rationally, and perhaps give each other good reasons, and maybe even convince one another. I think Alasdair MacIntyre is a little too pessimistic about that. What Mark has enabled me to do is to watch how ethical concepts are brought down to particular cases. He enabled me to see what Aristotle called phronesis, practical wisdom, in action. And for that I'm very, very grateful.

And that's what I wanna talk about today. I wanna talk about what I take to be a problem in clinical ethics, one that's growing. I'll suggest a couple of solutions; neither would be easy to put properly into practice. Doctors often invoke the maxim "first, do no harm." I just wanna start by noting that what counts as a harm in the medical context has changed over time. So, just a few examples. First, with the modern emphasis on patient autonomy, it's now possible to perpetrate a new harm.
Namely, you can harm someone's autonomy. No doubt that happened long ago, but it was not thought of as a harm. Second, and here's an old idea; Plato mentions it in the Republic. Certain forms of illness are such that a life afflicted by them would not be worth living. That is, quality of life can be assessed against life itself. Now, advanced technology can aid us in overcoming apparent handicaps, but it can also open the possibility that people will be kept alive when this is no longer in their best interest. So in terms of the patient's quality of life, the opportunities to benefit but also to harm have increased. A third example would be that with the advent of the possibility of keeping the patient alive in a massively debilitated condition, the patient's quality of life might be so low, the condition might be so bad, that it's proper to speak of a new harm, a harm to the patient's dignity.

Well, that's background. My real topic is this. The biggest change in what counts as harm is, I suspect, not a change in the kinds of things that count as harm to the patient but a change in the range of objects, or subjects, of harm. The object of harm has always been thought to be the patient. Recently, it's been argued that there are other potential objects of harm present at the bedside. Let me list a few possibilities. To begin with, there are vulnerable third parties, say in the case of infectious disease. This harm is hardly new, but the issue now arises in new contexts. Most obviously, a pediatrician must decide whether, though the risk to the child himself is small, parental refusal to permit the child's vaccination poses a sufficient threat to the child's friends and classmates that the pediatrician has an obligation, that is, an obligation not to the patient but to those friends and classmates, to pressure the parents to permit vaccination. Public health interests might be relevant in a similar way.
For instance, when parents refuse to permit vaccination, and while there might still be herd immunity for the disease in question, the pediatrician believes that she has a public health obligation, now an obligation to the community as a whole, to pressure the parents to permit vaccination. Or suppose close family members need to travel to be at the patient's deathbed and so ask that a dying, and perhaps suffering, patient's life be prolonged beyond what might otherwise seem strictly proper. Perhaps their interests are relevant. And while, of course, we try not to talk about it, resource issues are often in the background, whether it be a question of dollars or of scarce beds in a unit. My guess is that all of you physicians can come up with other cases in which there's a question to be resolved about balancing the physician's obligation to the patient with the physician's obligation to a range of possible others. Those others are the elephants in the room; that's part of my title.

Now, I wanna be clear that I consider these other elephants in the room, these stakeholders, to be legitimate stakeholders. By that I mean that they do in fact have ethically relevant interests. I'm not talking, for instance, about situations in which a physician is asked to balance her obligation to the patient against something clearly unethical, say against a hospital's interest in maximizing revenue. I'm talking about genuine ethical conflicts, in which the physician feels, correctly, that whatever decision she makes is likely to do someone harm. So if these sorts of conflicts arise, how should the physician or the team decide them? How are they to be resolved? Philosophically, this is the stuff of introductory moral philosophy courses. I teach this to freshmen all the time. You go through a certain number of options. So to begin with, there's utilitarianism.
The utilitarian says that we should use some form of consequentialist metric; for the original utilitarians, like John Stuart Mill and Jeremy Bentham, it was pleasure and pain. And our task is then to determine which of the available choices will maximize the balance of pleasure over pain. That's how you resolve the conflict. The deontologists look at things differently. Someone like the great English philosopher Sir David Ross says that we should determine the variety of what he calls prima facie obligations at issue, doing this by rational intuition. And then, once more via rational intuition, we should see where the balance of obligation lies. Each obligation has weight. And while a deontologist like Ross denies that we can reduce all values to a single metric such as pleasure or pain, nevertheless he believes that we're capable of seeing, in a given case, where our all-things-considered obligation lies, where the balance of reasons lies. Finally, and slightly different from Ross, there's virtue theory. The person of practical wisdom, Aristotle's phronimos (that's the Greek word), is capable of determining from the morass of morally relevant data what one's overall moral obligation is. If one has practical wisdom, one can see in a given case what the person of practical wisdom would do, and that's the thing to be done. Now, you'll note I haven't mentioned Kant, because Kant actually never talks about conflicts of obligations in his writings, but there are commentators who have formulated rather complex Kantian ways to handle such things. If that interests you, I can discuss it in the question period. The point I wanna bring out is that each of these philosophical proposals has the following, actually quite unsurprising, feature. Each requires fine-grained judgment applied to the individual case. You can talk about maximizing pleasure over pain, but of course you're gonna have to figure out which of the available options does that in a given case.
You can talk about balancing one prima facie obligation against another, but you're gonna have to judge which is the weightier obligation. So in the end we come back, as so often, to Aristotle. In order actually to make a particular decision in a particular case, you're going to have to have practical wisdom. Of course, this has always been the case in medical ethics; it's always been the case at the bedside, even when the focus is solely on the interest of the patient. My point is that as more interests become relevant, the scope for the required exercise of practical wisdom at the bedside has increased. And that then, of course, raises the question: who in the clinical context is likely to have practical wisdom? One might think that it's the physician. I don't think this is obvious. After all, there's actually little about the selection of candidates into medical school that gives us reason to think that, on average, physicians are more likely to have practical wisdom than anyone else in the room. You might say that physicians do make these ethical decisions; they actually are empowered to make them. And so, if they make these decisions, one might think that over time they become better at them, in the same way, I take it, that a diagnostician becomes better over time by seeing the results of what his or her judgment has been. Aristotle thinks practical wisdom comes with experience; perhaps this is true of physicians. I do think there's something to this, but I also wanna ask whether it's enough. The reason is that the analogy to the diagnostician rests on two premises. The first is that, as with diagnosis and treatment, the physician or the team hears about the long-term consequences of their ethical decisions. Only with such knowledge will one be in a position to know whether one's decision was, in the end, the proper one. Only with such knowledge can one's experience be part of a learning process.
The second premise is that when physicians do hear about such long-term consequences, they spend time asking what they can learn from the case and engage in sufficient reflection to internalize any lessons. So my question for the thought that the physician is likely, through her experience, to become practically wise is this. No doubt physicians see short-term consequences of their decisions: is the family pleased or dissatisfied, and so forth. But if the conditions for ethical learning are to obtain, they must have long-term experience not only of the physical consequences of their decisions but of what one might think of as the ethical consequences, and they must be in a position, in their busy lives, actually to reflect upon those consequences when they learn about them, to see what improvements in their judgment can be made. You will let me know whether that happens often enough that we can simply have confidence that physician experience is a sufficient instructor in practical wisdom.

I'd like to propose a couple of things instead of just relying on physician experience. One is already in place in hospitals. I think it's a great thing if done right, and Mark has brought me on board so that I've experienced it now for seven or eight years, but it is important to see that to do it right requires very demanding conditions. This is the idea of an ethics committee. When ethical issues are brought to a committee, there's discussion, and if the discussion operates in the right way, we actually might think that the output is practical wisdom. But the right way is quite demanding, because of course what has to obtain is what, in the passage that John quoted, MacIntyre says doesn't happen. Namely, people give arguments, they give good arguments, they're open-minded, and their minds are actually changed by the force of the better argument. If that were to happen on a regular basis, we would have what Jürgen Habermas calls an ideal speech situation.
We might even have the conditions under which Condorcet's jury theorem obtains, which says that under appropriate conditions a reasonably substantial majority of a group will actually be correct, will be statistically very, very likely to be correct. So under those ideal conditions, one might think that an ethics committee actually generates practical wisdom. Of course, those are demanding conditions; one can wonder how often they're really instantiated in real-life ethics committees. But that's one possibility. And not only are the conditions demanding; presumably one can't take every case to an ethics committee.

So I'd like to propose something else, something that works as part of background learning and can then, one hopes, have its impact at the bedside. In many legal systems, say our system, there's a process to resolve conflicts. One goes to court, and if one just keeps at it, one might reach an appellate court, and there the decision will often involve the application of some sort of standard. The standard needs application to the particular case, and so practical wisdom is still required, but the standard narrows and focuses the role of individual judgment. The whole idea of introducing a standard is to limit the discretion of the judge. The judge has to apply the standard. Discretion is still there, but it's not absolute. As an example, let's take the constitutional standard of strict scrutiny. The Supreme Court has applied the standard to laws or policies that impinge on a right explicitly protected by the Constitution, such as the right to vote. Once it's clear that the standard is to be applied, the state must show that the law or policy being assessed is necessary to achieve a compelling state interest, and if the state can show that, it must then demonstrate that the legislation is narrowly tailored to achieve the intended result. So there's still judgment that's needed by the court.
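A brief aside on the jury theorem mentioned above: its core claim can be illustrated with a short numerical sketch. This is my own minimal illustration, assuming each committee member votes independently and is correct with the same probability p greater than one half; the function name is hypothetical, not anything from the talk.

```python
from math import comb

def majority_correct_prob(n: int, p: float) -> float:
    """Probability that a strict majority of n independent voters,
    each individually correct with probability p, reaches the
    correct verdict (sum of binomial tail over majority outcomes)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Condorcet's point: with p > 0.5, the majority's reliability
# rises toward certainty as the group grows.
for n in (1, 5, 11, 51):
    print(n, round(majority_correct_prob(n, 0.6), 3))
```

Under the theorem's idealized conditions (independence, shared competence above chance), even a modestly reliable group outperforms any single member; the demanding part, as the talk notes, is whether real committees satisfy those conditions.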
The court has to judge whether or not a compelling state interest is at stake and whether the particular bit of legislation is narrowly tailored, but the judgment is focused. I don't think clinical ethics as yet has a useful array of standards to apply to the range of conflicts that tend to arise at the bedside. We ought to try to formulate such an array of standards. It seems to me this is a way of not having to invent the ethics anew every time. For instance, if the case is one where further medical treatment is futile but there are family members with an emotional stake in further treatment, we need some sort of standard to guide clinicians about how to weigh the family members' interests against, in this case, perhaps the interests of the patient. Individual judgment will of course still be needed, but the hope is that the scope of judgment will be usefully narrowed. So I'm just gonna throw out a fairly obvious standard for such cases. I have no stake in its being the right one; it's just something to shoot at. So here would be a standard: delay imposing a unilateral DNR if, one, delaying is unlikely to cause much suffering to the patient; if, two, imposing a unilateral DNR now is likely to cause significant suffering to the family; and if, three, a resolution is likely to come of itself before very long. So here there's still judgment, judgment in three places, but the thought is to restrict it, to give some structure to the clinicians in their decision making.

Now, unfortunately for this proposal, there are differences between what a practicing physician does and what an appellate court judge does. The latter's daily work consists in applying standards to particular cases, and while one can be cynical about how courts work, it's not wildly irrational to hope that over time appellate court judges will become better at applying standards. That's what they do on a daily basis, constantly.
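As an aside, the three-part DNR standard proposed above has a simple conjunctive structure, which can be made explicit in a sketch. This is purely illustrative; the function and its inputs are my own labels, and each boolean input is itself the output of clinical judgment, not something computable.

```python
def may_delay_unilateral_dnr(delay_spares_patient: bool,
                             dnr_now_burdens_family: bool,
                             resolution_likely_soon: bool) -> bool:
    """Illustrative encoding of the proposed standard: delay imposing a
    unilateral DNR only when all three judgment calls come out affirmative:
    (1) delaying is unlikely to cause the patient much suffering,
    (2) imposing the DNR now would cause the family significant suffering,
    (3) a resolution is likely to come of itself before very long."""
    return (delay_spares_patient
            and dnr_now_burdens_family
            and resolution_likely_soon)
```

What the sketch makes visible is the point of having a standard at all: discretion is confined to three named questions rather than left entirely open-ended.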
By contrast, the clinician adjudicates between conflicting ethical claims more rarely, relative to her overall job activities. Moreover, the clinician does not have access to a large history of relevant prior cases the way an appellate court judge does. This last deficit could, at least in principle, be remedied. It would in principle be possible to generate an appropriately de-identified archive of usefully cataloged prior cases that could be available to physicians and to medical students and so forth, preferably with information about the long-term outcomes of the cases. So my proposal, as part of ongoing physician ethical education, is that clinical ethics produce an array of standards to focus judgments in particular cases and an archive of cases to learn from. The main point, though, that I really wanna press is the thought that we need to think about these conflicts at the bedside and to think about how to find an ethically reliable way to resolve them. Thank you.

Consensus is great when it's consensus for the right reasons. In any kind of moral conflict, if we can get the conflicting parties to come to agreement, that certainly has a great deal to say for it as a way to handle the conflict. Is that the only criterion for a proper handling? Certainly not. One could easily imagine that a physician sees a way to get the parties to agree and thinks that what's going on is, by some metrics, quite wrong. Whether that's a sufficient reason to try to forestall mediation and forestall agreement is itself going to be a question for practical wisdom in a particular case.

Laura and Jim.

Dan, thanks for the talk. So my question is going back to Aristotle. How important is it that the committee that's putting together these cases, and deciding which ones and which features of those cases are the ones to be taken as morally significant and to be applied and so on?
That committee is itself made up of people already well formed to understand what is good and what is not as good. So are we not back in the same problem?

That is exactly the question. That is, if we have a discussion in which either everyone has the exact same view and does not question it, or we have multiple views but all that's happening is that people are stating their views and not really engaging, then there's no particular reason to think that the committee process has made any kind of ethical advance on where things started. Any thought that an ethics committee is actually doing something other than mediation, and other than providing some psychological benefit for its members, must assume that the course of discussion can actually make at least some degree of progress. And that requires, as I say, people who are in good faith trying to find the truth, listening to one another, and willing to change their minds. I leave it to you to decide how frequently this happens. One of John Stuart Mill's most searing critics, Fitzjames Stephen, the uncle of Virginia Woolf, addressed Mill's principle, the harm principle that we often use in clinical ethics. Mill says the principle obtains only for people who are mature enough to be capable of listening to arguments and changing their minds. Stephen said: oh well, then it's okay, because that's never gonna happen.

I was wondering what you consider the relationship, or the nature of the relationship, between developing and being aware of the taxonomy of cases one develops locally and developing standards. Insofar as, I would imagine, in North America we have a diversity of communities, if you develop a local communal sense of taxonomy, does it have to become a standard?

So let me just see if I understand.
So the first question then, the first sort of desideratum for my proposal, is that there be an appropriate, as you put it, taxonomy of cases, so that these count as futility, these count as something else, and so forth. And is your question that, given cultural diversity, there is likely to be disagreement about the classifications, or about how a particular case falls into a classification? Again, I mean, I know this is what drives people crazy about Aristotle, because in the end all that one can say, though I should hope clinicians actually find this less disturbing, is that one must struggle to make a judgment in a particular case and to give reasons as to why a given case falls into a given category. Now, there's nothing wrong with thinking that a given case might fall into more than one category, and so that more than one standard might be appropriate for trying to deal with it, and that will call for further judgment in trying to figure out which is the more important standard in a given case if they happen to be in conflict and can't in some way be put together.

Dan, thank you so much. Thank you.