Good morning and welcome to our ethics conference. Thank you for rising early. I particularly want to acknowledge Megan Johnson for reminding us all to wake up early and for coordinating this. If you haven't noticed, Grand Rounds is running quite smoothly, and that doesn't happen by accident; there's a lot of work behind the scenes, so thank you, Megan. We also have an upcoming M&M conference. Dr. Zogg, how are we on cases? We have one. Okay. We could use another if somebody has one. So Dr. Zogg is calling on all of you to get your cases to Brian. Your cases, of course, always make for a really good discussion, and there are many opportunities to learn and to improve our systems and our care. So I am very excited for this particular ethics conference. This is something that has always been in the back of my mind, and I've never really had any specific training with regard to adopting a new technology: how do you go about it ethically, what are the ethical considerations of adopting a new technology or using a medication that hasn't been used previously, and how can we do our due diligence to ensure this is done correctly? So I'd like to introduce our speakers today. First of all, by the way, I have no financial disclosures. I'm not a participant, but at the very least I should mention that. This is Dr. Jay Jacobson. Dr. Jacobson does not have a strong social media presence, so I was unable to find an online photo of him; he may be one of the few people on earth without one. So I did get a great picture this morning; thank you, Jay, for posing. He is the former chair of the Division of Medical Ethics at the University of Utah. He's a professor emeritus of internal medicine and infectious disease and continues to have a strong presence within the medical ethics community, not only here at the university but statewide, and certainly his reach extends beyond our state. We will also be welcoming as a panelist Dr. Alan Crandall.
We see Alan so often, sometimes we forget that he actually is the senior vice chairman for the Moran Eye Center. He's also the director of global outreach, among many other things, and both he and Nick are past presidents of ASCRS. And Nick Mamalis, our other panelist today, is the director of comprehensive ophthalmology. He's the co-director, along with Liliana Werner, of the Intermountain Ocular Research Center, also the editor of JCRS, and the past president of ASCRS. I said past president; I'm corrected from the floor: president-elect. President-elect, thank you. I just kind of assumed you'd done it already. Coming back for your second term, thank you. And with that, here is a resource specific to ophthalmology. Since we're having a discussion on ethics: the American Academy of Ophthalmology actually has a well-developed, robust website with topics such as global ophthalmology, adopting a new technique, learning new techniques in residency, the ethics of informed consent, and being an expert witness, and there are advisory opinions. So if you are looking for a resource specific to ophthalmology, this is where that would be. So without further ado, I'll turn the podium over to Dr. Jay Jacobson for an introduction of the topic. I'll then briefly present two ophthalmology-specific examples for discussion with the panel and then discussion with the audience. Jay, the time is yours. First of all, let's see if I can get my slide up. It was there, correct? Yeah, I saw it. All right, Ethan. But I don't see it on this screen. Sorry, Ethan's working his business. Excellent. Wonderful. So the first comment is, first of all, to thank Jeff, who's really a collaborator in this presentation. We independently started thinking about this and then had kind of a conference call of two and delved a little bit deeper. And of course, his help to me is invaluable because he can make what I know about become a little bit more relevant for ophthalmology.
And the first difference that you'll see is Jeff's title, which includes the phrase "with the best intentions." And that actually is going to be a theme throughout our discussion today. Both in the law, particularly in criminal law, and maybe that's not coincidental, the most important thing prosecutors have to think about is intention. So, for example, if you think about the difference between manslaughter and murder, both involve the death of an individual. The critical difference is intention. And intention is often actually the hardest thing to discover in law. I think that theme is a very, very important one. And another phrase that is probably helpful for you to remember, and you know this, is the one about the road to hell being paved with what? The best of intentions. So intention is hard to discover, but even good intention doesn't always mean a good result. It's actually quite a complex concept. So let's move ahead and briefly reiterate some history, some of which is familiar to you, but I'm going to guess some is not. That's actually one of my goals today: to help you see where this whole enterprise of regulated medical research, and to a lesser degree, but similarly, the regulation of medical practice, came from. The first thing to say about medical research is that there was not a lot of organized medical research until the 19th century. In my field, for example, that's really the beginning, right? Pasteur and Koch, most of whose research was in the laboratory, but some of it involved human beings, and in some cases themselves. So another really interesting thing to think about throughout the tradition of medical research is doing research on ourselves. I'll just tell you that smallpox vaccine began with a father inoculating his son. So research began as a very small enterprise in the 19th century, with a lot of important breakthroughs, particularly in infectious disease.
And it's an enormous enterprise now, and one that is regulated, but the regulations are relatively new. How about the philosophy of doctors doing research? You can see that in the 19th century, Claude Bernard said the principle of medical and surgical morality consists in, and I emphasize, never performing on man an experiment which might be harmful to him to any extent, even though the result might be highly advantageous to science. I think you have to read that as almost a prohibition on any sort of research, or on anything that was unknown in terms of its outcome and consequences. So there was a widely held view that medicine is kind of received wisdom from our mentors and trainers: we do what they do, and we need to be very cautious and never try something new that could hurt someone, even if it might result in benefiting lots of people. In 1907, Osler, I think very well known to American physicians, said researchers can experiment on patients only if direct benefit is likely and only with full consent. There are two important parts of that phrase: only if direct benefit is likely, and even the language of direct benefit finds its way all the way into the 21st century, and then only with full consent. I share that with you just to say that doctors were beginning to talk about consent as early as 1907, but it wasn't formalized in terms of law or standards until much later. In 1916, a physician and physiologist urged the American Medical Association to mandate informed consent for research, but that organization refused, and this is the substance of their refusal: misconduct was a problem of rogue researchers, not of research itself; trust, not regulation, would foster better research and clinical care. So organized medicine, at the turn of the century, was very resistant to outside control and regulation. I'm going to show you on the slide that follows this one a kind of timeline, just to make you aware of the history of medical research in the United States.
In 1932, a study began, sponsored by the United States Public Health Service, that you're familiar with: a study to determine the natural history of syphilis in African American individuals who didn't know that they were in a study, think back to informed consent, and who weren't permitted to seek treatment when treatment became available. So an important study; it began not so much as an intervention study but as an observation study, but a very important part of the study was the exclusion of treatment. 1946: you're familiar, obviously, with the horrors that occurred to individuals in concentration camps at the hands of doctors who, I might add, in many cases were volunteers. They included academic physicians who saw that patient population as an opportunity to ask and answer questions free and unfettered from any rules or oversight. And then the last point on this slide is something that happened in the U.S., importantly, in 1964, after the Nazi doctors trial and after the Nuremberg Code was promulgated, which insisted on voluntary informed consent. The Nuremberg Code also included a lot of prohibitions on the kind of research that was likely to harm people, even if voluntary informed consent was a part of it. So a very important code, very reactive to the situation that occurred in Germany. But in 1964, the release of information about this research project garnered a lot of attention: researchers injected live cancer cells into patients, many of whom were demented and all of whom were uninformed, at the Jewish Chronic Disease Hospital in Brooklyn, New York. So that's the background for you to think about as we move into the 1960s and 1970s. I also found an example of something that I thought would be relevant to you. One of the physicians in the Nazi camps was Josef Mengele. It turned out that he was particularly interested in genetics and did a lot of experiments where the control was an identical twin.
He was particularly interested in conjoined twins, and he actually created some conjoined twins surgically, if you can imagine doing something like that. He was also fascinated by eye color and, as you can see, tried to change the eye color of patients by injecting dye into their eyes: methylene blue for children that were not blue-eyed and other dyes for others, usually resulting in painful infections and even blindness. So this slide is one that you would read from left to right. You've already seen that in 1932 the Tuskegee syphilis study was underway in the United States, and it continued until 1972. The Second World War, we've alluded to, and it was terribly important in shaping medical research regulation. The war ended in '45. The trial ran on for several years and resulted in the Nuremberg Code. Several events were happening in the United States shortly after that. While people had been aware of both the Nuremberg trial and its code, you'll recall the observations about birth defects in the children of mothers who had taken thalidomide in the 1950s through '60s, which I think added a lot of emphasis to the idea of federal oversight and federal regulation in the United States. The drug was being used in Europe before it was used in the U.S., but it was beginning to be used in the U.S. before it was licensed. So there was a very strong reaction to what can happen when a new agent is used without careful study. As time went on, Dr. Beecher, who was an anesthesiologist, reported in the New England Journal both on that Jewish Chronic Disease Hospital study and on 21 other studies that he had found in the literature which he felt had violated the norms set out after the Nuremberg trials: studies that didn't include voluntariness or informed consent, or that in some cases resulted in serious harm to patients that could have been anticipated.
So that drew a lot of attention, and by the way, a lot of criticism for Beecher coming back from the American research community. People were quite aggrieved by the publication of this article, feeling that perhaps he didn't fully explain all of the things they thought made their trials acceptable and appropriate. On the right-hand side, you can begin to see all of the policy steps that were reactive to this history. In 1974, Congress passed the National Research Act. By 1978, we had the Belmont Report, which people like Nick, who serve on the IRB, are very familiar with. That's kind of the watershed moment: a rule that applied to all research done in the United States, but particularly research funded by the federal government. That's the connection, but it's still the pattern for conducting research even when it's not federally funded. So what I wanted to share with you were literally the words of some of the people involved in setting that policy. You would know Edward Kennedy; he was working with Senator Javits of New York and Senator Mondale, and they were reacting in part to the publication by Beecher and in part to the publication of that experiment I mentioned at the Jewish Chronic Disease Hospital. He was chairing a committee, and in 1973 he scheduled hearings on human experimentation, partly in response to those scandals that I've mentioned. He proposed that medical innovations, and again a very important use of a word here, he used it, medical innovations, should be used in clinical medicine only once they were sufficiently validated by biomedical research. Even that, you can picture, looked like intrusion from the inside of medicine, because innovations in medicine have been going on for a very long time. But importantly, they were called innovation, or advanced medical practice, or modified techniques. They weren't called research. And so this kind of language raised some alarms.
He went on to say: the absence of sufficient quality control in the practice of medicine, coupled with the almost unlimited freedom of action which physicians have in the treatment of their patients, encourages the development of patterns of medical practice that may well be premature and based on inadequate understanding of the new technique or new drug. Remember Nick's, sorry, Jeff's title. There is no malice involved in such a situation. The question is whether or not we can tolerate a system where the individual physician is the sole determinant of the safety of an experimental procedure. That's a lot of words, but a lot of thought, I think, that you can read into it. First of all, this is not a senator suggesting that physicians are somehow inherently evil or badly intended. He's acknowledging that their intentions are actually good, but also that without some additional level of regulation or validation, there is some risk to patients. So very driven by the idea of safety and protection. Jay Katz, a physician and psychiatrist at Yale University, and maybe as a result of this recognized as one of the very early physicians involved in medical ethics, was one of the witnesses in these congressional hearings. Kennedy asked him to list the most difficult and complex problems facing what he hoped would become a national commission on bioethics. And Katz said: which interventions should be designated as research and which as therapy? What are the boundaries between research and therapy? And he said to Kennedy, a physician is given a great deal of authority, a lot of latitude, in the exercise of his therapeutic functions. Remember, now you're looking at the 1970s, after the Second World War, the beginning of awareness and the thought that maybe we need to regulate.
He says there is now an increasing trend to label certain studies not as experimental but as therapy, because they do not fall within the existing guidelines. We have to figure out what falls within the jurisdiction of the National Commission. So I share that with you to have you understand that in many ways, at the very origin of the regulation of research in America, is this very big question that we're talking about today. We have to figure out what falls within the jurisdiction, meaning: is it research or is it therapy? So it sounded like a dichotomy, as if these two things could be easily separated. Stephen Toulmin, a philosopher who also testified and then served on the commission that was to draft the regulations following this congressional report, said: I suspect Congress lost its way at this point, when they presented that boundaries mandate, that is, you all need to decide what are the boundaries that separate practice from research. I believe Congress had the idea that there was something intrinsically more risky or more hazardous about research than about normal practice, and they wished for criteria so they could recognize that which is intrinsically risky. The next quote from Toulmin, I think, is extremely helpful. It's very real, and it's something that I don't think we think about very often. Toulmin went on to say that certain medical practices, such as those based on innovative therapies, are potentially more risky than well-designed research. I'll just say that I think medical practice per se, independent of innovative research, can be very dangerous. You've actually read, right, the IOM report about the number of deaths that occur from medical errors, and sometimes even without errors, in the course of applying things properly: so-called adverse reactions to things that are well known and well validated. Very dangerous.
Some people have suggested that incidents related to medical therapy may be the fifth leading cause of death in the United States. Most of it would be called mistakes, but if you add the adverse effects of licensed, validated therapy, it turns out a lot of people are harmed. If you think for a moment about deaths from medical research, I think you'll appreciate that's a very unusual event. But not unrelated is the fact that in Germany, many of those experiments were designed to result in death. So I think that hangs over the idea, the fear that research is so dangerous. He goes on to say that in the end, the commissioners, the people writing the Belmont Report, realized their sole responsibility was the protection of research subjects, not the protection of patients in medical practice; the risks of practice faded into the background, and the risks and benefits of research dominated the deliberations. I think that's terribly important: we live in a world that's highly regulated, but maybe on a peculiar premise, on the notion that what we're doing in research is terribly dangerous, and more dangerous than what we do without regulation. A very important place to begin. So the Belmont Report, and this is going to be foundational for the rest of our discussion, attempted to separate practice from research. The P's here refer to practice, and they put in this phrase, "for the most part," that's a quote: for the most part, practice refers to interventions where the purpose is to provide diagnosis, preventive treatment, or therapy. By the way, you could look at the word intention there: the purpose, the intention, is to provide diagnosis, preventive treatment, or therapy. Secondly, it's designed solely to enhance the well-being of an individual patient or client. Remember the words of Claude Bernard and Osler: in some ways, practice is tied to the idea of direct benefit to that individual.
And then it goes on to say, and the parentheses are part of their report, though benefit to other persons is sometimes the goal. So we're helping you, but maybe our goal is to help other people; even though that's the goal, what we're doing is helpful to you. Are you with me? And by the way, it can also happen that our primary goal is to help you, but what we learn from helping you could be advantageous to other people. So, not simple. The third point: the intervention has a reasonable expectation of success. And they acknowledged that there might be exceptions to those rules, which is why they said "for the most part." Look at these examples on the bottom line. When I vaccinate you, there may be a small benefit to you, but if it's a relatively uncommon disease and you're a pretty healthy person, the benefit to you might be very small while the benefit to the community might be enormous; you're actually protecting other people that might be more vulnerable than you are. Same thing with blood transfusion: about zero benefit to you from the medical procedure of taking your blood. Step it up a level to taking an organ like a kidney: no benefit to you; there's even risk, right? So we fail the test of direct benefit, we fail the test of safety, but we do it. They weren't talking about those kinds of practices, but they acknowledged that they existed, and they also didn't label them research. Research, now: two conditions, and the first one is not necessary for all forms. If you go to the IRB, for example, you may have a project that doesn't include a control group; there are observational studies; you might not have a protocol, but you're doing research. So the first one is not always necessary; the second one is. The first: in a typical research project, there is a formal protocol designed to test a hypothesis. By the way, that's an intention statement. The purpose of the research is to test a hypothesis.
Notice the purpose is not directly to benefit a patient, even if you thought it might; the purpose is hypothesis testing. Second, there is an organized design to develop or contribute to, and these words are really going to be critical today, generalizable scientific knowledge. So, two criteria for research. I'm actually going to skip this in the interest of time, but let me show you how that background gets codified into regulations. This is a form, probably similar to one available from the University of Utah or any other medical school. If you notice, it's the answer to a question. The question is: do I need to submit this idea or project to the IRB? It's a question lots of investigators have. So in particular, if it's an innovative procedure, treatment, or instructional method, do I need to submit it? If it's, remember what we just said, a systematic investigation of, here's the word, an innovation in a diagnostic or therapeutic procedure or instructional method using human participants, and it's designed to test a hypothesis, permit conclusions to be drawn, and develop or contribute to generalizable knowledge, those words quoted right out of the Belmont Report, then that's what needs to be submitted. If it meets those criteria, yes, it needs to be both reviewed and, in fact, followed by the IRB. How about the alternative? The use of innovative interventions that are designed solely to enhance the well-being of an individual patient and have a reasonable expectation of success, where the intent of the intervention is to provide diagnosis, preventive treatment, or therapy to the particular patient, and there is no plan to generalize results or publish the findings. And then they give some exceptions, and the exceptions actually come from the FDA. So if you have an IND, for example, and you're only going to study this in one, two, or three patients, you might not otherwise need IRB approval and a protocol, but the FDA requires that it be done.
So that's kind of the fine print. The important thing I'll tell you is that that last point is even controversial: no plan to generalize results or publish findings. Well, sometimes doing something new, different, or innovative leads to a spectacular observation. And I will tell you the medical community has actually said that it would be critical of a physician who discovers something, a very, very important discovery, and who doesn't publish that information. So here again, one of the questions might be: what was the intent? Was the intent initially to share that, or was the publication the result of something that was unexpectedly found? There's still a lot of debate over the idea of publishing something that might involve a new use of something that existed. I'm going to just stop there and let you know that there's a lot of important work that begins to focus on this distinction we've always made between practice and research, with some authors even arguing that we've gone astray: that maybe what we should focus on are things that are dangerous to patients and think about ways of limiting that, whether through informed consent or procedures that limit risk, rather than artificially separating innovation and research. The last thing I'll mention is that Stanford has something called the Innovative Care Committee, and they've actually developed guidelines that are available for researchers who find this to be a perplexing question, which it is. So we have some help today. Jeff is going to talk about some examples that are very relevant to ophthalmology, and I think you'll begin to see how complicated this is. That is, is it research, for example, when physicians make their own use of a drug that's licensed for other uses? A drug that's been tested in controlled experiments and licensed by the FDA, but someone says, well, it might work in this other situation. We see that practice very often.
And a great question for you is: what boundary is involved here? Why is it that because it's licensed for X, we can use it for Y? Why couldn't we have used it for X before it was licensed, if we thought it was a good idea? The other one, of course, is about procedures and literally doing things differently. What constitutes enough of a difference, or enough of an intent, to take a new procedure from an individual choice by a physician to something that needs to be shared with an IRB? So Jeff, why don't you help us with that transition, and then we'll open this up widely. We can also take a comment or question now during this turnover if anyone has one. Roger Harry. So the question is regarding the concept of placebo. I am not the ethics expert, so I will not venture to answer that question, but rather let Jay. So first of all, the theme of benefit to patients lands heaviest on practice. It would be very hard to justify what's called a practice intervention that did not have, at the minimum, the intent and expectation that it would help a patient, which is a huge problem in terms of practitioners using a placebo. So I want to just share with you that this has happened in practice, right? You have a patient, they have a serious complaint, you're dubious about the organic origin of that complaint, you want them to feel better, and you know there's a literature that says placebos often make people feel better. No question about any of the science there. But people would challenge, I think, a clinician using a placebo there, partly because of the lack of informed consent, right? In that example I gave you, that's a practice example, and let me emphasize, by a well-intended physician. The way that a placebo works depends on the absence of informed consent, right? The patient believes they're getting a drug which has activity and is medically indicated. So that would be subterfuge in the practice of medicine. It happens. Let's acknowledge that.
But it would not be sanctioned by most people, who would argue that even medical practice requires informed consent. On the research side, the burden of benefiting everyone is very small. The issue in research is not to unduly risk anybody in the project, but the goal is not to help the individual. So if half the patients in a research trial will receive an inactive drug, and all are told that this may happen but won't know which group they're in, that's an acceptable approach to testing a hypothesis. So they're very different situations, and again, the heaviest distinction that the Belmont Report drew is that the practicing doctor's goal is always to help the individual patient. The researcher is busy answering a question but has a duty to inform patients, right, of the risks of the procedure, which could include getting a placebo. Thank you, Jay. And then, Nick, to mute or unmute, it's just the bottom button. So, one real-life ophthalmology example right now. It's worth mentioning that Craig Chaya has actually reached out to an innovator, Dr. Yamane. And it looks like Craig has been able to get Dr. Yamane to come as one of our visiting professors next year and also speak at our Utah ophthalmology conference, which is really significant. I mean, this is someone who's developed something really cutting edge. Now, as cataract surgeons, anterior segment surgeons, we all work with lenses: we insert lenses, we remove lenses, some of us sew lenses or put anterior chamber lenses in the eye. And Dr. Yamane came up with a really interesting innovation. I will just let you watch and let me orient you. There are a couple of needles in the eye; those are the yellow hubs. And what's happening here is that, with the needle and then an instrument, the haptic, the arm attached to the lens, is actually being inserted into the lumen of the needle. That's already been done in the top left. And now what Dr.
Yamane is going to do is actually pull these out and externalize those haptics. Now, again, this is something that had never been done before. He presented his research not at this most recent ASCRS, but the year before. And then, with the haptic out, he actually applies some heat to create a bulb so that the haptic will not slip back into the eye. And then those bulbs are tucked underneath the conjunctiva. So this procedure is an innovation; it absolutely is an innovation. As surgeons, we're going to be consistently, and thankfully, faced with these types of techniques, procedures, advances, and innovations in our practices. How do we approach them? When does something require more than just an attempt? What can we do to make this safe? Who are the stakeholders? And then, switching gears a little bit, to Avastin. Avastin was a medication approved for use in GI cancers, an anti-VEGF medication. It was well studied, and after its release an off-label use was found: it turned out to be useful in macular degeneration. Many of us will remember a former faculty member, Kang Zhang, who would very much be described as an innovator, on the leading end of that spectrum: well, perhaps this would work in this vascular eye condition, or in another vascular eye condition. He really was someone who pushed the envelope. And so, with those two examples, I turn the time back over to Jay and then also to our panelists. And I do ask for forgiveness; I have to leave just a couple of minutes early. So I want to make sure that we take a moment to acknowledge Jay and thank him for his time today. So thank you, Jay. Thank you. And I'm very indebted to Jeff.
And I think those are great examples to get us thinking, within the field of ophthalmology, about two, I would describe them both as innovative uses, one of a technique, one of a drug, and to ask the question: what steps need to be taken to comply with existing regulations? And are those smart steps? Do we defend them? So Nick, if you'd come up, and Alan. Nick, I think, brings to the panel the expertise of both an investigator who has submitted protocols to the IRB and an IRB member who actually grapples with the very question that we're thinking about today. And I might actually ask Nick to begin with that. It is said that not every project needs to come to the IRB's attention, so we're already thinking about the distinction between research and innovation. If an investigator thinks that their project is not research, right, that what they're doing today is designed to help the individual patient before them, but they're modifying a procedure or using something new, the question would be: does that need to go to the IRB? And other investigators, who clearly understand that they're testing a hypothesis, might believe there's almost no risk to the patients in this particular project. I'm posing a question here, but at the end of the day, nobody is at risk. They also wonder: does that need to go to the IRB? So I think Nick might help in terms of what kind of an ophthalmology project might come to the IRB and be turned back. That's actually a pretty interesting question. That is, occasionally it happens that somebody submits and the IRB says either this is not research, it doesn't fall into our purview, or there is so little risk in this study that you may not even have to have a so-called research informed consent, and the IRB doesn't need to follow it. So how does this look from the IRB side?
You know, I think the important thing to remember when you're grappling with these issues — is this something that I may need to bring to the IRB or not? — is that you're not just working in a vacuum. There are resources available. First of all, the IRB is very available to answer questions. And they have criteria for whether you need full approval, whether you need just minimum risk approval, or whether you don't need approval; there are guidelines on the IRB site that can help you. But more importantly, right here at Moran, because we do a lot of research, the clinical research group, guided by Deborah Harrison, and the people who are there can actually help you before you even get into that. So if you've got a question about whether the particular project you're working on will require IRB approval, or maybe just minimum risk approval, go ahead and go to Deborah and her group and say, hey, listen, this is what I'm proposing — do we need to go to the IRB? Or go to the IRB itself, because they can answer things ahead of time. And it's a lot better to find out ahead of time: do you actually need to go through the entire process, or is there a way you can get information, either from the IRB or from our clinical research group here, that says, no, this project does not need any kind of full approval? Because once you're going down the path of full IRB approval, it's not onerous, but it's very involved. And it's very important that, if you're going to do that, you actually need to do that, and that you know ahead of time. So the first thing I would recommend is: go to the IRB — you can go to the website, they've got a lot of information there, you can even call them — or go right to Deborah Harrison and work with the people here in our clinical research group, who will let you know: is this a project that needs full IRB approval?
Does it need just minimum risk approval, or does it need no approval at all? That's the best place to start before you jump in and try to go further with your project. So, we've talked a lot about how these regulations evolved to protect patients. I want to say a word about protecting physicians. And I want you to do a thought experiment with me: look at a physician's position if you are innovating, or trying something very new and different, and things go wrong. The first thing to think about might be: what protection do you gain, as a researcher or clinician, by having your protocol or your experiment or your research project filed with and followed by the IRB? The first thing I would say is that you have some shared responsibility. That is, you're no longer alone. If the flaw was somehow in the concept of what you were trying to do, you had a whole group of people, whose job was to protect research subjects, who accepted the premise that this was an acceptable project. So if things go wrong, you actually have institutional support at some level. It's not a happy situation when something goes wrong in research. But think of that first, and then think of the situation where it's alleged that you're doing a research project and a person is harmed in the context of that project, but there is no IRB approval. I'll just share with you that, from the investigator's standpoint, that's a very different position. In a sense, you're accused of two things in that situation. One, doing research in an environment which the person alleging would claim needs to be regulated and approved, and you didn't do that; that's a failure. The second is that there's a harm, and that's not very different from a clinician who harms somebody in the course of their work, right? But it is a little bit different, because it's not identical to malpractice, right?
Where you're judged by the standard of care — it's now alleged that you were doing research which should have been approved and wasn't. So, from the investigator's perspective, there's a lot to think about there. But that's only about protection. It's not an argument that every project should go to the IRB, and certainly not that it should go there if the intention is somehow to protect yourself. So this is complicated. Alan, maybe you could comment on the situation that all of us encounter, particularly when we travel internationally and don't have available to us the things we would consider the standard of care, and we often have to be innovative or creative, trying new things. And I'm sure that's not limited to international experiences. I can think of situations in the U.S. where people who know how to do this the best and most tested way find that it's not available, but they have a patient who needs help. Well, before I answer that question, if I can just mention something about the Yamane technique: what Jeff showed is absolutely correct. That's an innovative technique that involves two needles into the eye. So when you look at it the first time, you say, that looks pretty easy. It turns out it's not easy. It's very, very difficult, and it takes time. And part of his paper — he had 105 patients long term. Those are fairly large numbers for something that we don't do all the time. So when you first see something like that, the first thing you have to ask is, one, can I do that? Is my skill set okay to do that? And two, what's it going to replace? Now, as to what it replaces: we're confronted, especially here in Utah, because we have a lot of pseudoexfoliation and we're a big referral center, with patients who present with dislocated or subluxed lenses, things that we have to deal with.
We can't just leave them in the eye. So you have the choice of taking them out and putting in an anterior chamber lens, which is relatively easy to do. Anterior chamber lenses have problems; we all know that. So if you're somebody who's over 80, you might say, okay, this is a risk that I'm willing to take, because they're not going to be around for 50 years. Whereas if you're dealing with somebody who's 25 or 30, it might be a different story. So one of the things it replaces is a technique that we use that involves four punctures — one of the standard techniques, where you fixate the lens through the haptics or through other devices. So you're potentially replacing that with something you can do through small incisions. The issue then becomes: since this is not an easy technique, what do you do? Well, what we did is we used simulation eyes and tried the technique, to see that we could do it. We did Miyake views looking at the technique. We worked with MST, and in about two months there'll be a set of instruments coming out that will make it relatively easy for everybody to do, with the acknowledgment that it can still be very tricky. So that's one way. I think when you see something that looks promising, you really have to analyze it in a number of different ways. You just don't go to the OR and try it, because things like that, when they look easy, are not necessarily easy. But it's worth looking at, and I think it's a valid technique. Then, to the issue of what you do internationally: it's very interesting, because most of what I do, and most of our group does, is cataract. There are 39 million people in the world blind from curable diseases. The biggest of those is glaucoma — I mean, excuse me, cataract. We know how to do a cataract.
But the average cataract in the United States is removed at — and Nick, you know the literature better than I do — probably 20/50 or better visual acuity. A patient like that is not going to get on the table internationally. When we go there, we send in a crew from our outreach division — they do great stuff. They go in, they figure out the size of the rooms, the number of patients, the average patient acuities, that type of thing, so that when we get there, we're ready to go. The patients we treat are usually bilaterally blind, because that's the issue, obviously. So we do the first 100 or 200, or whatever, one eye only, and we circle around to the second eye if we have time and conditions allow. The other thing is that these are not cataracts you routinely see in the United States. So what we find is — and I think it's unethical — people will come into an area and do 20 cases, sort of. They mean well, okay, but they don't have the skill set, they don't have backup, they're just doing it. And then they go out; they don't follow up, and they don't do the kinds of things that in the United States we would consider mandatory. So there, you actually have to be able to do the old procedures. You can't do some of the modern ones, because we do those on 20/40 cataracts, and these are light-perception cataracts. So they're different things. Another new technique this has led to, which is very interesting, is the miLOOP. What is the miLOOP? It's a modernized version of what's known as a snare. Most snares were done in India. What they used to do is take a guitar string — good steel — put it in, make it a loop, pull it through some kind of tube, and it would cut a hard cataract. So this is a modernized version of that. But it's a technique that involves new thinking and new ways of doing things.
So in terms of international versus the U.S., it's a very large difference, I think. So I want to follow up, Alan, with a question that ties back to what we've talked about. You've learned that even before the Nuremberg Code, there were physicians who were arguing for informed consent, both in research and then later, in our country, in clinical practice — we tell patients risks and benefits, alternatives, et cetera. Talk to me a little bit about what your approach was, and maybe your colleagues', in terms of talking to patients about a new technique that involved two needles and attachment of a lens. Well, what I do is mention to them: you have a dislocated lens, and we have to deal with that dislocated lens. There are a number of different approaches one can take, and when I go to the OR, I'll go there with five plans, because you have to have backups. You run into situations — there's a tilt, you can't get to the right position with the Yamane, you can't get to the right position with sutures — and they've got to go home, and they've got other issues. So that's what I tell them, and I tell them: I'm hoping to do this, which is a new procedure, but we really like it. And that's about all you can say. I mean, you can say that it's new, that you've done it, that you've practiced it, and that it works if you can do it. If not, you go to plan B, C, D, or E. Fair enough. Let me ask Nick to comment on what might be different in the process of informed consent for a physician doing something new — I'll pick Alan's example: it's something that he's heard about, that others have used, he's practiced it and perhaps is even modifying it a bit, planning to do a number of cases and perhaps share that information.
Would that be something where the IRB would say, thanks for calling, we need you to fill out all the proper forms — not onerous but rigorous — or would the IRB say, well, I understand what you're doing, it sounds like innovation, and congratulations on sharing informed consent; we don't need to oversee that? You know, I think one of the issues is the dichotomy between what requires approval and what doesn't. Certainly, if it's a new medication or a new device — an intraocular lens, a glaucoma device — those have to go through full FDA approval, and those would require the IRB. But if it's a technique, that becomes different, and oftentimes with a technique you don't have to go through IRB approval. Where it gets a little bit sticky is in the area of off-label use, and one of the examples we have in ophthalmology is when we're suturing in an intraocular lens. One of the sutures that's been used to keep an intraocular lens sutured to the sclera is Gore-Tex, and not only is Gore-Tex not approved for ophthalmology, it says on the box: not for ophthalmic use. And so this is where the issue of informed consent becomes critical. If you're doing a technique and you're using a suture that's not approved for use in the eye, lawyers will have a field day with that if you have a complication, because you're using something that's not approved for the eye. So that's where informed consent becomes critical, and I think that, especially if you're doing a complex procedure, there is more risk of complications, and as a result you need an even more extensive informed consent ahead of time. I'll even tell patients, if I'm using this suture — this Gore-Tex, for example — we're going to use a suture in your eye that we have found is the best for keeping this intraocular lens from slipping.
However, this suture is not approved for the eye; the company never did go through the mechanisms for getting it approved, and we're going to use it even though it's not approved for the eye, because in our experience this is the best for you. This is not something that requires an IRB, but it is something that requires extensive informed consent, and that's where the level of informed consent becomes really important. So I think Nick's point — and of course Alan's examples — are really very useful, and although we're talking today about the origins of regulations in research, there's also a contemporary story about ethics in clinical practice. The thing they share is this focus on informed consent, and it's particularly American. Our European and Asian colleagues still look a bit askance at what Nick has just recommended as almost standard practice, if I can call it that, where you're using something either off-label or in a new way. I think it comes from this strain of individualism and autonomy in the United States, and it also grows out of the rights movements that many of you can recall, right? Civil rights, consumer rights, women's rights, gay rights — all of which are predicated on the fact that we're adults, we're capable of understanding, and we should be able to exercise our preferences as long as they don't constitute a risk to others. Medicine's take on that has been: you have rights as a patient. I think the assumption now is that you do have a right to know what I'm doing, why I'm doing it, what the risks and benefits are, and what the alternatives are. So, for example, Alan's point about going in with backup — five possibilities — that's not too far from what some patients might ask, which is: what are the five options for me, and can I pick among those the one that's best? And you can appreciate, I think, even in his example, how complicated that is, right? Option one doesn't even work for you and isn't needed.
Option three might work, but because of your particular status it wouldn't be as good as option four. That's all part of an exchange of informed consent, and it's predicated on the idea that patients now come to these encounters with the understanding that they have a right to know. That doesn't mean that every one of them wants to know all of that detail, or even to exercise their choice. Some come to Alan and would be bored if Alan told them that there are three possibilities. It's more like: well, Dr. Crandall, I came to you for you to do the thing that you think is best for me. What I want to underline is, number one, the connection with the idea of liability. It's not chance that Nick mentions lawyers having what I think he described as a field day, and I think the background for that is actually the background of informed consent. What I will tell you is that some doctors have come to understand informed consent as, in some ways, a shield — and it is. It's a shield against the allegation of violating the right to know. If what you have done is tell patients about the newness or the riskiness of a particular procedure, you've actually told them. It doesn't shield you from an adverse outcome of that procedure. That is, if, after telling them, it's discovered that they're allergic to a drug and you didn't ask them about the allergy, the fact that they gave you their informed consent doesn't shield you from doing something wrong. It does shield you from the allegation that you failed to provide informed consent. So what I wanted to underline is that I think many of you are used to that now in your clinical practice, for things that are not experimental. Take cataracts, quite routine, done in the conventional way. I'm sure you're telling your patients what the risks and benefits are, what the reasons are. So you're doing that.
I just wanted to underline that this is a shared aspect of research and clinical care, and that responsibility doesn't go away because you're doing something innovative. In some ways, it's reinforced. That is, there's a greater obligation to share, as Nick did, the newness of this, or the fact that it's untested. Let me just have you imagine if we were doing this conference for OB-GYNs and not ophthalmologists: every day they're prescribing drugs to pregnant women that have not been carefully evaluated in that situation. So they live in that zone of: I'm doing something, I'm well intended, I don't have the scientific basis for this, I'm telling you what I do know, and I'm sharing with you the zone we're in. Do you want to add anything to that, Alan? Well, one of the things that I think is important, too, is just what Nick was saying: why did the company put that on their label? Because they knew we were using it. And getting approval for something that's going to be done 1,000 cases a year, or whatever it is, is very expensive for the company. So it's much easier for the company to say: we know you're buying our suture and you're using it in the eye, so we're going to tell you you can't do that. But we're able to use it because of other experience that has shown it works — you might ask, how long have we been using Gore-Tex-type sutures in the lid? This is the business of buying smaller companies, raising prices, and so on. Gore is threatening to withdraw the Gore-Tex suture now, because they're making enough money — it's usually used in cardiac surgery — so nowhere does it say it's approved for this use, which is interesting. So they're all covering themselves here and there. And the other thing that I think is important is to really understand the literature that's involved.
Now, there have been some recent studies, and some older studies, that have looked at the validity of papers published in journals. And it's scary, because they claim — and I'll get the papers for you, because we're doing a protocol for glaucoma that involves looking at that kind of data — they claim that six out of every ten papers that are published are false. I'll get those references for you. So you also have to take into account what you're taught and what you learn when you do it, not just what's in the literature. Yes. No, I know that. I'll get you the actual references; you can look at them. Yeah, but I wasn't sure if you meant... No, they do use Gore-Tex, if I may answer. They don't feel that the patient needs to know the specifics of every technique that they do. And that's really true in Germany, and I know it's true in Japan as well — probably even in most of the Asian countries. Patients just want to know: are they going to get fixed or not fixed? They don't have the same kinds of questions that our patients do. They don't demand it. It's just the mode of operation in those areas. But it's a terribly important point, and I brought it up because we're a little different. That doesn't mean we'll stay different. And it also doesn't mean that ethical standards are in some ways permanent, right? They do change. That's part of this presentation today: we've gone from a point where things were highly unregulated to what some people would say is a highly regulated — some would say overregulated — environment, but it's not universal. And so it's possible that we may change, or, for example, that the standards for clinical practice become much more like the standards for research, so that we don't keep making that distinction. It's possible that our European colleagues will change. One way of thinking about this is from the standpoint of utility, right?
Who does it help, and how much does it help? In some situations, it turns out to be really valuable for patients to know what's going on. I'll just comment on that, and I'll say, for you all, it's also valuable. A good example about informed consent: if you talk to patients about anticipated adverse reactions — this could happen to you, not because I've made a mistake, but because this drug can produce this symptom, or these sutures can cause irritation, or you may see some redness here; it's nothing to be afraid of — you talk to them about the possible consequences and why you're doing what you're doing. On the patient side, what that alleviates is anxiety when those things happen. That is, they're not as alarmed by the problem. From your standpoint, it's fewer phone calls, and from an insurer's standpoint, it's fewer visits to a clinic or an ER for a complaint which is transitory and expected — something that almost always happens to people post-surgically or on a new drug. So there are arguments just for the consequential advantage of informed consent. But not always. That is, you can't always show that a discussion like the one Nick and Alan might have — that this is still an unapproved item for use in ophthalmology — clearly does something for the patient. Frankly, it's not clear what it does for you either, but it's hypothetically an advantage for you, because you have met the standard of informed consent. That's a nice way to think about it, that we have that standard. The Europeans would not be punished for not telling; it's not so prevalent there that there is this right to know all of the alternatives. There is a right to be treated well, properly, and competently. But again, you can look at the larger environment for a patient who suffers a severe adverse event, or an error, or the use of something unapproved that in fact shouldn't be approved, because it doesn't work well.
In the U.S., the responsibility for taking care of the complication is on the patient or the insurer. In Europe, that's taken care of the same way the first event was taken care of; they can go back to the same doc or a different doc, and they're no worse off financially for it. So the environments have a lot to do with why we create the system that we create, and it's complicated. So I want you to be thinking about that. When you're doing informed consent, I think your first motivation, as a clinician or a clinician-researcher, should be doing something that you feel is helpful and important for the patient. You should be looking over your shoulder, as Nick implies — it may take a call to the IRB to ask, do I need to do more than what I'm doing here verbally. And finally, I guess you could think about your self-interest, right? Life is complicated; we do things for more than one reason. Just back to that original question about a placebo: just so you know, even at the level of the World Medical Association, which drafted something called the Declaration of Helsinki, again after the Second World War, they struggled with the idea of the placebo, and it's a particular problem in the third world. I'll explain how that works. In the third world, where there's almost no standard of care — as Alan pointed out, getting your cataract fixed at 20/50 is not happening — well, for many drug treatments and diseases, there's no treatment at all. So it's much easier to answer a research question if you test an active, potentially effective drug against a placebo. But we wouldn't do that in the United States — back to that question — we would test the new drug against the, quote, standard of care. But the environment makes a difference. If you could argue that there's no standard of care anyway, then you might be able to get a study approved that uses a placebo, and you'll answer your question more quickly, because the differences will be more apparent.
So the idea of international difference is very important, both in clinical practice and in research. Are there other questions for Alan or Nick? Yeah. Rupi? So, residents, there are two major problems with research in 2018. I want to belabor the point that Alan just brought up briefly: the pressure to publish, both here and elsewhere, has created crises. All three of us are editors of journals. And a couple of years ago, Nature withdrew a third of all Chinese publications in Nature — not because of mistakes, but because of wrong publication, meaning made-up results. Which brings us to the other major crisis with research: the current problem is that many techniques are not reproducible. In other words, a 2016 cure for cancer came out — gene splicing — and when others tried it again, they couldn't prove it. Now, Santayana said that those who do not remember the past are condemned to repeat it. So, who in this audience knows Pons and Fleischmann? Put your hand up. Zero percent. The biggest scam of the 20th century was performed 500 yards from here. The biggest liability was 500 yards from here. These two people at the University of Utah said, I can make energy — like Dire Straits, money for nothing — I can make heat for nothing: cold fusion. And this bloody university went and registered it, to make money out of it, without even having it published. So call your seniors out when they make mistakes, and learn from this history. It's recent — it's 1989. So this goes with time. You have to query this new technique of putting the haptics outside. Do you know what really matters? It's what the results would be in the hands of the average surgeon, not the person who invented it. And so many of these techniques are not reproducible without complications of one kind or another.
You can get intraocular hemorrhage. In the hands of the average surgeon, you may just have those. So always consider that. And do look at the history. Just because somebody said it's great, be cynical about it; make sure you look at results by other surgeons before you start adopting these things. And be careful of being invited out with a free meal to be sold some drug which costs five times as much — which is an ongoing process. Last point: the numbers game. And I'm going to call the university out on this one. Publish or perish. This university last year changed the rules: you cannot be promoted from assistant to associate professor unless you publish 20 papers as a clinician, in many departments, and to go to full professor, 40 papers. Every year this university asks me and Alan and others to send in a list of publications. Nobody bothers reading my papers; they just want the list. So we, sir, are promoting publish or perish. And we should look in the mirror and say, this is not right. I know this was brought on by previous problems, whether in China or at this university, but we haven't called this out. This pressure to publish is creating the problem of publishing papers that are not worthy of publication. So I appreciate your comments. A couple of ways to bring this back. First of all, your comment about publication — if you think about the struggle at the development of the Belmont Report, the question was, how do we know when things are validated? Because we want to call those practice, or standard practice, right? And I think your comment is, we need to be careful even when we seem to be meeting the standard, right? There may be a dozen, quote, case-controlled studies that have the same conclusion. They may or may not be correct, right? Some of those that were published are later retracted.
I don't have an answer for that. I think the concern there is appropriately about patient safety, and that's true for the researcher as well as the clinician. So I think it is on us to read critically, and, again, maybe to share in a straightforward way. That is: many papers are reporting that this is a helpful, useful, safe drug, but it's only been out for several years, and I'm just letting you know. In a way, you're being generous — you're sharing with the patient your own reservation. I certainly agree that we want editors to be careful, et cetera, because those journals are terribly influential in our environment. They're controlling at every level. Many of us wait until we read those studies, and the representatives of drug companies are famous for citing the studies that are advantageous to their point of view and maybe not mentioning the others. So I think the burden on the researcher is really significant. I think this idea that we're talking about today, informed consent, is actually very important. And the question might be: once we do that, why do we also need an IRB? That is, suppose I'm straightforward and I tell a patient: this actually is not only untested in eye patients, it's never been used. We have a good idea here. We're studying 100 people, and perhaps half of them, if it's a drug, will get a placebo — we're even doing things like sham surgery now in the context of procedures — but I lay that all out. Maybe we could get Nick to share: what does having IRB approval add? I'll be the advocate. I'm saying, well, informed consent is a really good thing; it does a lot of work. What does going through the IRB add that would either benefit patients or protect them? You know, that's a really good question, because when we, as researchers, look at a particular project we're doing, we say: okay, we're doing this safely. We have our patients in mind.
We don't need a bureaucracy telling us what to do. But the role of the IRB is not only initially approving your project; it's ongoing, and you do have to file regular reports. So I think what's critical with the IRB is patient protection. The IRB will do the follow-up to make sure: did you follow the procedure you said you were going to follow, the one approved during the initial approval of the research project? So I think the IRB is critical for patient protection, and the IRB will ensure that the methods you put forward were actually followed through. There are other things the IRB does, too, in that you report adverse events. So if there are unintended — not unintended, unanticipated, I guess, is a better word — unanticipated adverse events, those will show up in your reports to the IRB. And the IRB will follow up and say: were there unanticipated complications? Were there problems with this? Eventually they can even make recommendations on whether the study should be continued, or should be looked at in terms of stopping it early. So I think the role of the IRB, given all this history we've had with the abuses done through the years, is primarily patient protection. That's the role of the IRB. And then a very good question is to think about what the AMA said in the beginning, when Cannon proposed that we should all have informed consent and, in theory, should all have our studies reviewed. They said, no — trust is very important; we want our docs to be good docs. The AMA of that era might be very happy with the idea of trustworthy informed consent without IRB oversight. So I'll just add a couple of parenthetical things. One thing for an investigator that can actually be helpful at the IRB — and we'll get Nick to respond — is that much of what the IRB does, I would say, is look at the informed consent document. Most of you know that many of those documents now run many pages.
They are incredibly complete, and there is a template for what must be in them. What you also probably know is that most patients don't read them. That is, as technically correct as they are, sometimes adjusted to a sixth-grade reading level, they're often ignored. So at the end of the day, while the IRB may feel it has done an excellent job of protecting patients because of what's in the informed consent, nobody actually goes back and tests patients for recall and comprehension. So we haven't yet reached the standard where the IRB, in its effort to protect, makes sure that you know and understand. I will just tell you that we know from many studies that people don't know and understand, and often don't even understand that they're in a research project if the investigator is also their clinician. So for example, and I'll just pick Alan, and we can hear how he deals with it, or Nick: when you are someone's ophthalmologist, and you have seen and treated them for their condition in the past, uniquely and as a benefit to them, and then you propose a research project, one that might even include a placebo arm in a drug trial, the patient's perception is that Dr. Crandall or Dr. Jacobson has only put me in this study because they're there to help me. They just can't believe that, if there were a placebo, I would actually put them in the placebo arm. I'm overreaching a little bit. What I'm talking about is called the therapeutic misconception. Many, many patients in research, where the principal investigator is also their clinician, believe the goal of that project is to help them. Do you remember what I said in the beginning about the distinction? That's the definition of practice, not research. So there is a lot of misunderstanding. So what's the good? Well, I would say that sometimes as a researcher you get incredibly good feedback from the IRB, because there are other research investigators there, and one of their jobs is to answer this question.
Will the design answer the question the investigator proposed? Is this a well-designed study? They often have very good suggestions about a change in design, or may even point out the inadequacy of the present design. I would say that's direct help to the investigator. The other benefit, honestly, is sometimes a conversation with someone else in the field who can advise the investigator. By the way, you know that much research is funded by the pharmaceutical industry. So, back to Dr. Patel's point, a lot of those protocols are written by the drug maker and provider. Their goal is to have a study that most likely shows a positive effect; their goal is not necessarily perfect science or patient benefit. That's understandable; it's not their goal. But the IRB's job is to make sure that generalizable benefit is secured. If the design is so poor that it's really not useful, if it would result in a publication that can't support the claims it makes, the IRB is actually the place where that should stop. So that's available. It's not always the major function, but I think we shouldn't overlook it; that can be very good feedback for the investigator. The other point I wanted to make for the residents, and I think Dr. Patel brought it out, is that when studies, whether of drugs or procedures, move from a controlled research environment to the larger public community, we often see things go awry. Certainly at the technical level, the first time something is done it is not as likely to be as safe as the fiftieth time. And with a drug, it's usually about using it in a broader population that is not as carefully screened as the research group. So one thing to think about is that even when something reaches the standard of scientifically established and licensed, that doesn't mean it will work well. And I think sharing that with patients is a really good idea.
And as a trainee, if we were using the word experiment today rather than research: the first time we do anything is, in a way, an experiment. And actually, I'd ask your opinion about that. I think the residents have a special burden. Do you think it's part of informed consent, or expected, that you would tell your patients that you haven't done this procedure before, or haven't done it independently before? What do you think? I'll point to you if you don't volunteer. Seriously, let's try it another way. What has been your practice? We've talked about differences between Europe and the US in what counts as informed consent. As a resident, have you ever told a patient that you haven't done the procedure you're doing before? So I'm going to take that as an answer unless I hear an exception. I've done that: the very first PRK procedure I did was on my sister-in-law, and I told her, I've watched a lot, and I've done a lot of other surgeries that prepare me for this delicate procedure, but you're the first one. Yeah. I don't know if that counts, though, compared with a patient you're never really going to see again after a year. A patient who doesn't know you, who has no protective feeling toward you, is immediately going to say, that's sensible, but I'm sorry, I don't want to be your first. That's true. A family member is a somewhat biased trial. I would say that, and I would expect you to say that. So let me take advantage of that, because I think it's very important. In this question of what we tell patients, sometimes we make our decisions based on what we think they'll say or do once we tell them. There is some work on this, and I'll share it in a minute. But let me first ask: have any of you had a patient decline a procedure that you explained to them? So that's actually pretty interesting, because when we define or explain, we usually include risk, right?
We say there's a small risk; for an ophthalmologist, blindness, I think, is always on the list of things that could, but rarely do, happen, right? For me, it's about adverse effects of the drugs I use: it's possible you could have a very bad reaction to this, and I won't go into all the details. But it's very unusual that, after hearing the risks, patients decline the procedure. Why do you think that is? They trust the doctor. So I think trust is really useful, and something to think about with trust, which is very strange, is that when we disclose to people things that are potentially harmful to us, they actually trust us more. It's a very strange phenomenon, but it's been well studied. That is, when I reveal to you, for example: I've been up all night; I was on call in the ER; I'm really exhausted, but you're also an emergency patient, and I'm just letting you know, if I seem to be nodding a little, that I'm really exhausted. You would think that many patients would say, get out of here, bring me a doc who's slept. Oddly enough, they sort of appreciate the confidence, and it's actually trust-building. I'm in the zone of social science now, but this is well understood. So I wanted to collect a couple of other comments on this idea of revealing, and then I'll tell you about some studies where people have looked at just that. Yeah. I was completely transparent about that, but in the process, before I could even tell him, the veteran said to me, well, at least it's not your first surgery. And I said, actually, this is going to be my first surgery from start to finish. I've done all the steps, and I've practiced each skill for this one situation. And then he said, well, you know, I'm sure at least... He kept trying to reassure the situation, and I kept correcting him: no, actually, this was my first case. Yes, this was my first time.
And at the end of the case, he said, you know what, I still feel okay about you doing it. And after the surgery was completed, he sat up, looked at me, and said, so how'd it go? He was more interested in my experience being positive than in his own vision. I mean, it was very startling: as I was vulnerable and transparent, people became invested in my training. It was interesting. That's one of the reasons I love coming to this meeting; I couldn't say that better. What you've described is very much affirmed by the studies, and I'll get right back to one, but I just want to see if other people have a different experience or want to share something different about revealing your own level. By the way, it's not exclusively a resident question; as you point out, as attendings we're doing many things for the first time independently. So it's a very important question. Anyone else with an added comment? If not, here's the story, and this applies particularly to trainees. People have studied what happens when a young physician in training tells a mother that they haven't done a lumbar puncture for meningitis yet. If you remember your training, you'll probably remember experiences like that. So this would be a medical student saying to a mother, I need to tell you, as they would, all the other things: what we're going to do to prep the child, what the risks are, et cetera. But I also need to tell you that I'm a medical student and that I haven't done this before. They also presented that in the context that I will be supervised. I need to tell you that; it's an important distinction. I'll be doing this, but Dr. Jacobson will be looking over my shoulder, so to speak. So the story is that about 90% of the moms said, that's fine, why don't you go ahead? There are some moms who will back away at that point, and what they might say is, I'd rather the attending do it, or I want someone else with more experience to do it.
But I love that study because it tests some of the assumptions we make all the time, namely that it's not a good thing to tell people, this is a new drug not used in this situation, or, I haven't done this procedure before, but I've done the simulations, et cetera. I would just tell you that 90% is a pretty high number. In fact, it's obviously more likely that you would have had an experience like the 90% than like the 10%. So at the end of the day, I guess one of the questions would be, what's the worst-case scenario? That is, if someone did decline the procedure or the drug or the technique, they would go to something that's either better tested and better known, or they would come to the realization that maybe, for their condition, there isn't anything that's been fully validated yet. So I really appreciate your comment. It's a good one to think about, and in general that experience fits with the trust. When you share with people where you are, but also what your preparation is, and maybe what your backup is, they're generally pretty supportive. So there's not a big downside. The reason to talk about that is the general case: sometimes people feel it's very burdensome to do informed consent, but at the end of the day it isn't, even in research. Maybe Nick wants to comment. It's very unusual that you say to somebody, here's this 12-page informed consent; we've never done this research in human beings before; here are the risks, including the unknown risks, right? That's the drill in research. How many patients decline studies like that? You know, it's interesting. It's the same thing as the resident telling them it's their first case. A lot of times it just depends on the trust of the patient. The patient will say, I understand, or at least they'll say they understand, even though they don't, but it's very rare to have someone actually decline.
And even given the fact that there are unknown risks, and that there are certain benefits and certain risks you have to go through with the whole informed consent, it's very rare. Again, probably about 90% will go ahead and still consent. Maybe a last comment, Dr. Patel, and then we'll wrap up. I think it's time to call a spade a spade. Look, my residents recognize this. Our patients almost always tell us, I don't want a resident or a fellow operating; I only want you to operate, you're the physician. Yeah. Thank you so much. Thanks very much. And then the attending does the surgery. So I think a study needs to be done: why are we not like average patients? We've seen the residents work. Right. I mean, I'll take just two seconds to say something here. We've spent an hour and a half talking about ethics. This is not a partisan political statement, because I'm neither Democrat nor Republican. But we're talking about ethics today, and yet we have a physician who let the president write his own medical history and submitted it for public consumption. We have a president's physician described as drunk on the job. Why are we talking about ethics here when, nationally, we're not having an ethical debate? That's my question. Yeah. So it's more than a question, and we can't really take it up now, but we can do another conference if you all want to begin to talk about some of those issues. What I want to do is just thank all of you and thank our panelists. Would you join me in thanking Alan, who had to leave. Thank you. Thank you. Thank you.