Here we go. How many climbers in the room? A few. So most people are not climbers. Most people think climbing is dangerous, right? And we'll come back to this climber when we come to some illustrations later. I want to talk about how we should think about integrating the domain of activities that occupies most of us right now, which is business processes, CQI, Six Sigma, Toyota. How do we get better value? You hear the dean speak about value, and everybody's supposed to know exactly what that means. I want to deal with clinical research results and computer protocols, and how we can target Six Sigma performance, which is about three errors per million interactions. Just think about that. Three errors per million interactions. And I want to deal with the role of closed loop control: getting the clinician out of the way so that some decisions are made automatically by rule systems. Now, this is the reason why we need to be concerned about this. Here are two things that came across my desk just a few months ago. A Massachusetts survey done by the Harvard School of Public Health indicated that a quarter of the residents had encountered preventable medical error. A quarter of the residents in Massachusetts surveyed. And here, even more compelling, from the Journal of Advanced Nursing, are the results from overstretched neonatal ICU nurses. Can you think of a more vulnerable population than neonates in an ICU, where sepsis is a catastrophe? 25% of them, in their last shift, missed appropriate IV site care. They knew they had to do it, but they couldn't get to it because they were overwhelmed. Well, this is a system that's waiting for catastrophic outcomes. Now, I'm going to discuss three major foci. One is the transactional unit, which may be new to some of you; the second is the challenge of scaling; and the third is the clinical performance target. So what do you think this is? What's the unit of analysis? Is it the patient? For example, if I ask you, what's the outcome of a patient with glaucoma? What would you say?
So you may be reluctant to speak, but many people say, oh, this is the way it happens. But the key is that it depends upon where the patient is. A patient with glaucoma in Salt Lake City has a certain outcome. Same disease, same patient who finds himself in a rural village in South Africa who has to walk eight hours to get to a rudimentary medical clinic has a completely different outcome with glaucoma. There's no way to answer what's the outcome of glaucoma without understanding the context. And the context has to do with this blue box in which the patient is interacting with a clinical environment, with a clinical caregiver. That's called a transactional unit. There are transactions between the patient and clinician. And that is the unit that determines the treatment regimen, whether it's individualized, patient specific, and the outcome. Makes no sense to ask what's the outcome of a disease. No more than it makes sense to answer the question that some of the residents were asked years ago. What's a normal cardiac output? I mean, that question has two answers. If the professor asks, the answer is under what circumstances? If your peer asks, the answer is that's a stupid question. It's just as intelligent as what's the normal dress code. So context is determining. And we will come back to that. Now, the reason that this unit determines the treatment regimen is that each patient expresses unique patient-specific clinical data over time. Patients aren't the same. They're nonlinear, complex, biological systems. And you can't tell exactly what they're going to do. So let's ask, how are the clinicians doing? How are we doing in our contribution to this transactional unit? And that takes us to a very important issue, which is that we are all profoundly limited from the cognitive perspective. Now, that's an important thing to say, because we all think that we are at the pinnacle of biological evolutionary development. 
But our brains are, on the one hand, remarkable structures, and on the other hand, quite limited. So let's ask the question, how good are our perceptions for identifying credible evidence? That's another way of saying, how good are we as sensors for determining how the world works? Because that's what we're after. If you know how the world works, if you know how the lens works, then you can do some things about it. If you don't know how the lens works, then you're out of business. So take a look at this. These two squares are identical shades of gray. Yet everyone in this room will put 20 bucks on the table and say that they are different. Now I'm going to show you that they're the same. And we're going to still see them differently. So watch this. Oops. This square is being moved over here. And it'll be moved back. And when it's moved back, it will overlap a little bit so you can see whether they're different. But they're the same. Now even when you know they're the same, we see them as different. And the reason, as I've just learned from an interesting neuropsychologist in London, is that our brains see the shadow from this cylinder. Even though these two squares have the same luminosity, knowing that this one is in the shadow, and should therefore be darker, makes our brain compensate and perceive it as lighter. Sounds literally crazy. But Neil deGrasse Tyson, the director of the Hayden Planetarium at the American Museum of Natural History in New York, an astrophysicist, said something very important. He said, rather than calling these optical illusions, we should call them brain failures. And it is really true. This is a failure of our brain to interpret the world as it is. And that's very important to understand because all of us think we're experts and all of us think we're well-trained. In fact, when was the last time at a national meeting you ever heard somebody volunteer that, well, I was actually not in the top 5% of the class.
I was in the bottom 5%. Or, I was not well-trained, when someone else says, well, listen, I was well-trained and this is what I want to do, right? So this is very important for moving ahead. Now, what are the consequences? Well, these consequences were brought to my attention by one of my colleagues who gave a grand rounds on hypertension just a few weeks ago in medicine. First of all, physicians don't adhere very well to hypertension guidelines. Hypertension is a huge medical problem and we don't do what we're supposed to do. And second, he said, the only way we're going to get proper measurements, accurate and credible measurements of blood pressure in the office, is to use automated devices. We've got to remove the humans from it because we muck it up. Very interesting. If we can't measure blood pressure properly, well, what about visual fields? What about, you could fill in the blanks. So one of the consequences of our inability to deal with situations that are complex is that the guidelines we're given are not adequately explicit. They have inadequate detail. Guidelines are very high level. They contain statements like reduce world famine, right? Make peace in the Mideast. Raise healthy teenagers. I mean, nobody's going to argue with those, but how do you do it, right? I mean, the details are not trivial. They're crucial. And the decision maker must do two things: must add information and must make the right decision. Well, almost always, the right decision is unknown. I mean, science is an ongoing dynamic process of moving towards better understanding of the world. It's not as if we know what's absolutely true. So let's consider glaucoma, which I consider an overwhelming problem. You've got visual acuity, visual fields, pachymetry, which I learned is the measurement of the thickness of the cornea, right? Isn't that right? Yeah. The eye exam involves variable judgment. You've got people saying, oh, yeah, I see this, or I don't see this, and arguing about it.
Pressure threshold: when I was taught in medical school, there was a threshold above which glaucoma was very likely. And then when I became a physician, I learned that there was low pressure glaucoma. So what's the threshold? I haven't the vaguest idea. I'm guessing the threshold is like blood pressure. It's the lowest pressure you can sustain that still maintains the physiologic function of the globe. Just like the best situation with regard to blood pressure is the lowest blood pressure you can have without fainting or getting dizzy when you jump out of bed. Then you've got drops and pills: beta blockers, alpha agonists, carbonic anhydrase inhibitors, prostaglandin analogs, each with, according to my expert consultant, about four variations. That's four times four times two. That's already 32 possible things that you can touch, without worrying about combinations. And without worrying about not only combinations, but the sequence of the combinations and the doses of the combinations. And then you've got trabeculoplasty, conventional surgery. And then I looked at a website, which I've listed down here. I counted about 90 systemic and about 150 local risk factors for glaucoma. There are thousands of combinations of these variables for glaucoma. So that leads to a very important question. How many variables can humans manage when they're making decisions? That actually is a very interesting subject that's been studied extensively for the last 80 or 90 years. The number of conceptual objects, the psychologists call these chunks, that can be retained in short-term or working memory before decisions are degraded, and we don't want degraded decisions, was originally reported in a review in 1956 by Miller, a professor at Harvard, as seven. The title of the paper is The Magical Number Seven, Plus or Minus Two. But more recently, in Behavioral and Brain Sciences, it's been estimated to be four plus or minus one.
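The arithmetic behind that mismatch can be made concrete with a short sketch. The counts are the ones given in the talk (four drug classes, about four variations each, two forms); the cap of three agents per regimen is my own illustrative assumption:

```python
from math import comb

# Counts from the talk: 4 drug classes x ~4 variations x 2 forms (drops, pills)
single_agent_options = 4 * 4 * 2
print(single_agent_options)  # 32 single-agent choices before any combining

# Unordered combinations of up to three agents drawn from those 32 options
# (ignoring sequence and dose, which only multiply the total further)
regimens = sum(comb(single_agent_options, k) for k in range(1, 4))
print(regimens)  # 32 + 496 + 4960 = 5488, already thousands of regimens

# Against a working memory of roughly four chunks (Cowan, 2001),
# that is a three-orders-of-magnitude mismatch.
print(regimens // 4)  # 1372
```

Adding sequencing, dosing, and the roughly 240 published risk factors only widens the gap; the point is the order of magnitude, not the exact count.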
Thousands of combinations just for glaucoma, and our brains are limited in short-term memory to just a few constructs when we're making a decision. So this is a huge mismatch. Even though medical students are still told that the way to make the best decision is to get all the information, look at all the lab data, look at all the tests, and then use this computer between your ears to make the best decision for the patient. Something which for over 100 years we have known is humanly impossible. So we still teach medical students to do something that's impossible. So here is a reflection of that. This is a patient enrolled in a study of extracorporeal CO2 removal, where a heart-lung machine is used. Yes, we'll come back to that. Right, the chess master, that's a good example. Actually, I think psychologists would argue that the constructs the chess master uses are very data rich, so that each one includes a whole bunch of possibilities and patterns. Whereas the beginning chess player is sitting there and she's thinking, let's see, the pawn moves just one square, or is it two, right? So the content of the constructs can be very different. So the expert ophthalmologist will have a very rich content, but the number of constructs that appear to be manageable in short-term memory is limited to four to seven. It doesn't really matter whether it's four, seven, 15 or 20. It's trivial compared to the enormous amount of data. Let me go back and we can come to that again. The subconscious involves pattern recognition. If you're familiar with Kahneman's book, Thinking, Fast and Slow, the subconscious is system one thinking, where pattern recognition is operative. So here is the patient, right? Here are the lines connecting the patient's groin vessels with a heart-lung machine over here, 10 IVs, a mechanical ventilator, and the patient's on a bed scale. This is a complicated system.
So this is where we first developed a computer protocol to generate patient-specific or personalized instructions. And the issue here is that in this room, this patient has many more than 236 variable categories. I went to a room of a patient and I just counted the categories that were readily available: systolic blood pressure, diastolic blood pressure, urine output, breathing rate, and so forth. I ignored everything that was hard to touch. I ignored CT scans, echoes, consultation notes, doctors' notes, nurses' notes, therapists' notes. And I counted 236 variable categories right off the bat. So there's this terrible mismatch between information, which overloads the clinician, and clinicians' observations. Now, can that lead to problems? Let me give you one example. These are two clinical observations from the extracorporeal CO2 removal clinical trial I've just illustrated. The first patient is a 53-year-old lady who had acute respiratory distress syndrome, that's a life-threatening lung failure. After she was enrolled in the clinical trial, she had a sudden neurological catastrophe. The bedside clinician recognized that she was brain dead, and a CAT scan revealed air embolism. And you can see these black, serpiginous, and oval shadows represent air in the cerebral vessels. This lady had a catastrophic consequence which is known to be a complication of heart-lung machine use. All those connections of the tubes are places where air can leak into the system. And so what is the conclusion? Well, if I was presenting this at an international meeting: you better be damn sure you're careful if you're gonna do heart-lung machine support. This is not for any Tom, Dick, and Harry. Know what you're doing, right? This is an example of how you can get into trouble. And here are just two of many illustrations. She had air in every organ in the body. Here's air and blood coming out of the vena cava. And here are bubbles of air in the omentum.
There was air in the coronaries. There was air subcutaneously. This was a disaster, right? So here's the second patient. This was a 27-year-old woman with acute respiratory distress syndrome following pneumococcal pneumonia, ordinary pneumonia. She deteriorated for five days in a Wyoming ICU and arrived in gram-positive septic shock. Now, for those of you who remember blood gases, her arterial PO2 was 26, with a positive end-expiratory pressure of 27, while breathing 100% oxygen, with a systolic blood pressure of 60. When the positive end-expiratory pressure is higher than the PO2, you know you're dealing with a catastrophic illness. I left the ICU after enrolling her in the clinical trial at eight in the evening and thought to myself, she's going to be dead before I get back in the morning. She flew home to Wyoming 10 days later. She satisfied all of the criteria that are published about de jure death, the kind that allow you to do whatever you want to the patient because the patient is a dead duck, can't possibly survive. So presenting her to an international meeting says what? Yes, this is why you have to do extracorporeal support. And we have very few observations in clinical medicine that are more compelling than these two. So here they are in summary: be careful because you can hurt people, and do it because it's life-saving. And the interesting thing about these two patients is they were both enrolled in the control group of the randomized clinical trial. Neither patient had any contact whatsoever with a heart-lung machine. So are observations a good reflection of the way nature behaves? Be cautious, because they can frequently lead you absolutely in the wrong direction. In fact, here's a quote from Michio Kaku, a physicist who writes for the general public: if appearance and essence, or truth, were the same, we would not need science. We wouldn't have needed the Reformation, the Renaissance, the Enlightenment, right? Bacon and all of the contributions they made.
So one's opinion, which is based upon what we see and how we perceive it, like those two gray squares that look different shades when they're actually the same, is no guarantee of proper behavior. Remember this young man who's very proud of his performance, and we tell you that he's really on target. Opinion and arrogance lead to error. That is maybe why the medieval church included arrogance, or pride, as one of the seven deadly or capital sins. And Deming himself said, experience by itself teaches nothing. Experience without science: very, very flaky. So let's turn to the challenge of scaling. What do I mean by that? Well, this is a question that I wanna put on the table. Can important current initiatives lead to favorable clinical decision support at the patient encounter scale, where you are encountering a patient and have to make a decision, like for this lady who had trouble seeing because of the veil? Are there emergent properties? Properties that are not predictable, but emerge when you take simple things and join them together to make a complex system. What are current initiatives? Well, our most important current initiatives are initiatives from the business side. Quality improvement, lean, zero patient harm, things that take a lot of our attention these days, because most health care institutions want, more than anything else, to avoid bankruptcy. But I'm gonna deal with two different scales. One is the reductionist scale, where we break things apart to their simple components and study them. So you can think of genomics in the biochemical laboratory as being down at the reductionist scale. And the other is the integrated or holistic scale, where we deal with business initiatives. So let's consider that we can study parts separated. Isaac Newton, you may remember, said if I knew the behavior of all of the celestial bodies in detail, I could predict the behavior of the universe. That's the reductionist mentality. People no longer believe that's true.
So you go from parts separated and studied in special laboratories to the integrated or holistic system. And let me show you how I view the scale of inquiry and the laboratory that's appropriate for that. First, if you're interested in chemistry, in genetics, for example, you need a biochemistry laboratory. If you're interested in cell physiology, you need a cell culture laboratory. Remember, each of these laboratories has its own environment and its own tools. No reasonable person would try and study the heavens with a microscope, or cells with a telescope, right? They have tools that are configured to operate at the scale of inquiry. If you're interested in pathophysiologic changes, then you need human measurements that are credible. If you're interested in medical outcomes, you need a clinical environment that can function as a clinical laboratory. Think if the environments you enjoy can function as a clinical laboratory. If you have societal interests, you need large databases. We frequently use administrative databases, like Medicare databases, to answer questions for which they were never designed, like what's the best therapy at the lowest possible cost, which on the surface seems rather silly. And then, if you're an evolutionary biologist, you look at populations over time. Now, I'm gonna argue that you should recognize that translational research involves the movement between these different scales. Now, the NIH, when it talks about translational research, almost always only means, well, what kind of genomic information can we move up to the medical scale? But in fact, that's a very limited view. Translation has to do with movement in both directions. And the key here is, what kind of information do we need at the medical scale so we make good decisions? So that the lady with the veil over her eye that blurs her vision doesn't get a laser treatment that enables somebody to fill the tank of a BMW but doesn't help her.
So let's start with biochemical information. Can we move from the biochemical level with confidence up to the medical scale? So here is the scale of inquiry, going from biochemical. And I'm gonna discuss with you a disease in mammals, including humans, that's associated with inheritance of only one of the two elastin genes. So the individual mouse or human is hemizygous for elastin. You'd expect, with only half of the elastin genes present, at the cell physiology level you'd get half the elastin. And in fact, that expectation is confirmed. Here are mice, homozygous and hemizygous with only one elastin gene. And you can see the black elastin lamella here is about half as thick as the lamella in the homozygous mouse. So everything is okay, but now comes the problem. It's not possible to anticipate an emergent property. What is the emergent property that comes at a higher level of scale? Well, the organ that's subjected to stress, like the aorta that's pulsatilely expanding and contracting, winds up with an increased number of lamellae. Here's the mouse with two genes and the mouse with only one gene. Look at the large number of elastin lamellae. Well, that's not anticipated from knowing something at the biochemical level. That's a new property. And more important, at the pathophysiological clinical scale, who could ever have anticipated that people would develop a disease called supravalvular aortic stenosis? Why? Well, here is the normal control aortic wall, and here is the wall of the hemizygote with only one elastin gene, with the increased number of lamellae that increases the wall thickness, so the lumen is compromised, and the patient behaves like they have aortic stenosis, but they have a normal aortic valve. This is an emergent set of properties that can't be predicted from the biochemical scale. So the answer to the question, can you go from the biochemical level to the patient clinician encounter scale reliably, is no, you can't do it reliably. Can you score sometimes?
Well, sure. My wife and I just came back from Las Vegas. We could score sometimes, but I can tell you that we could not reliably score. What about meta-analysis? Right at the clinical scale, we have large numbers of studies that people try to aggregate, using statistical tools, to lead to compelling conclusions. Well, here is an interesting study published now almost 20 years ago by LeLorier: 19 meta-analytic results that preceded a subsequent randomized controlled trial. How good were they at predicting the results of the scientifically credible study? Well, they were okay two out of three times. They were only wrong one out of three times. The kappa score was rather low, and that means that the false positive and false negative rates are one out of three. Right, would you put a lens in somebody's eye if you figured one out of three times you'd screw it up? I doubt it. We can come back to that, and we'll talk about concordance of results when we come to electronic protocols. That's an important perception. So the answer is, once again, not reliably, right? Well, what about business process initiatives, where most of us are now focused, right? CQI, lean, Toyota, zero patient harm, can they lead to favorable clinical decision support at the patient clinician encounter scale, right? So can you go from business process? I've now enlarged the clinical scale to include societal, because business processes cross a number of societal domains, but that's not really too important. Let's ask if we can go from there to the clinician patient encounter. Now here it becomes crucial to recognize that within this transactional unit, in which the clinician caregiver and the patient are interacting, they're both surrounded by a context, right? I mean, if you work in an environment that doesn't have a slit lamp, your options are constrained. If you don't have an operating microscope, your options are constrained.
If you're in a clinic in a rural part of India or Namibia and you've run out of topical anesthetic for the eye, your options are constrained. So the context is crucial, right? And the context enjoyed by the clinical caregiver may be different from the context enjoyed by the patient. In India, in some clinics, they don't have beds. So the patient sleeps on a mat. So if you wanna order an air insufflation bed to prevent bed sores, you're out of business. So where do the business process models that we're so invested in work? Well, they work primarily on the context. They don't touch the clinical caregiver directly, but they work on the context. How do they do that? Well, they map the process. We deal with workflow, clinician decision flow, governance, training, strategy, and standardized treatment. You may have a different perception than I, but I have yet to meet an administrator who knows what standardized treatment is. It's commonly touted: we wanna standardize the treatment. I don't think they have a clue, but that's what business process deals with. Now, does this unburden the clinician? Remember, the clinician is overwhelmed. The nurse who, one out of four times, can't attend to proper care of an IV site in a neonate in an ICU can't do it because she is overwhelmed. And what about clinicians? I mean, I can tell you the primary care clinicians get 10 minutes to see a patient in the clinic. It took me more than 10 minutes, when I was in clinic, to dump out all the patient's medications from two big brown bags on a stretcher and go over the meds and find out that they had three active digoxin prescriptions and multiple lists and so forth. So, I mean, how do you do the job you're expected to do? Impossible, you can't. And Larry Weed has been writing about this since the 1970s: that we're asking clinicians to do things that can't possibly be achieved in an unaided environment. So, these are high level aids, and the clinicians must still make every decision, right?
These can rearrange information presented to the clinician. They're desirable, I encourage their use, but they can only get us so far, because the overwhelmed clinician still has to make every decision, right? I will talk about training, just to point out to you that we tout training and education. But I'll remind you that we've been training about hand washing and proper hygiene for over a century, since Lister introduced antiseptic technique in England in the 19th century, and the compliance with proper hand washing procedures, in two surveys done last year in major U.S. networks, varied from 6% to 71%, right? How many of you would be excited if your child came home and said, oh, listen, I had a 33% improvement in my performance on the math test. I got 15% right last semester and this semester I got 20% right. Hardly a stimulus for familial celebration, but that's what we deal with in medicine. And we have to recognize it. So business process, translation to the patient clinician encounter: not reliably. So what does this mean? Well, we have to obtain the information here. This gets back to the question of, well, what about trials that have dissimilar results that only propagate confusion? And that's most studies. Well, we'll come back to that. How do we obtain the information here? I'll talk about tools for obtaining the information reliably. So after we obtain the information, what about performance of decision support tools? If we had decision support tools that collated the information for the management of glaucoma, and it returned to the clinician, or even automatically made for the patient, decisions about what changes in medication and what changes in doses and so forth, what kind of compliance do we need to make this happen well, so that we're performing at a level which comes close to airline performance, right?
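To keep the talk's numbers straight, it helps to convert compliance percentages into the errors-per-million currency that Six Sigma uses. The conversion below is simple arithmetic, not data from any of the studies cited:

```python
# Convert a "percent correct" compliance rate into errors per million interactions.
def errors_per_million(compliance_pct):
    return (100.0 - compliance_pct) * 10_000  # each 1% shortfall = 10,000 per million

print(errors_per_million(99.0))      # 10000.0 -- a 99% correct environment
print(errors_per_million(99.36))     # ~6,400 -- the four sigma figure in the talk
print(errors_per_million(99.99966))  # ~3.4 -- the Six Sigma target
```

So moving from 99% correct to Six Sigma is not a 1% improvement; it is roughly a 3,000-fold reduction in error rate.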
Alan Crandall just told me he just got back from India, after getting back from some other place, I don't remember, it may have included Mars. But he wouldn't get on a plane if he thought the chances of having an unfavorable outcome on the plane were anything close to the chances of unfavorable outcomes in clinical environments. So here it becomes important to distinguish two kinds of decision support: single event versus time series. What's a single event? Well, reminders, right? It's time for mammography. It's time for your repeat colonoscopy, given your age and background. Order sets, admission and discharge processes, and even some simple protocols. Somebody comes into the ER, we have a protocol with the Ottawa ankle rules, so that you don't have to do x-rays unnecessarily, and you go through a particular examination, right? What about time series? What about the management of the patient with glaucoma that's refractory to ordinary therapy? What about the management of the patient with chronic rheumatoid arthritis, or inflammatory bowel disease, or heart failure, or atrial fibrillation that's persisting? Where you're seeing the patient time and time again, and you're now going through a series of titrations and responses to the patient's expression of the disease. Well, those can only be managed, as far as I can see, with detailed computer protocols that capture multiple levels of reasoning. And you could also use detailed computer protocols to do the single event, but whether it's worth the effort or not is not clear, because you can get a pretty good response from other things. So what should our target compliance be, right? What compliance is needed to make medical care safe enough? So let's turn to the clinical performance target. And for that, I wanna present to you some data from an ICU at the Hebrew University Hospital in Jerusalem. A closed ICU, closed staff, run by very, very vigorous and careful and domineering directors, right?
The study was done, in collaboration with the clinicians, by experimental psychologists from the Technion, the Israeli counterpart of MIT. They observed that for the patients in this ICU, there were 176 interactions on average per day. So the nurses and doctors were doing 176 things on average to the patients per day. They were performing these correctly 99% of the time. Ask yourself if you can point to a clinical environment where you can confidently conclude that the people in that environment are operating on average at a 99% correct performance level, day in and day out. I don't know any. Now, this translates into about a four sigma, a four standard deviation, performance. That is about 6,400 errors per million interactions. Now, this means that there are about 1.7 errors per day. That's 1% of 176. Those 1.7 errors per day were distributed into two groups. Most of them were minor and had nothing important to contribute to any outcome, but 29% were a major threat to life or limb of the patient. That meant, on the average, every patient was subjected to a major threat to life or limb every other day. That's with a clinical environment operating at a 99% correct performance level. Now, let's come back to the climber. This is a Japanese climber climbing a cliff on the Mekong River in Laos. They're setting new routes. If he makes a mistake, he may fall like this Chinese climber, but you'll notice he's roped in and there's somebody at the bottom controlling him so he doesn't get hurt. Now, how do these pictures get taken? Well, here's the photographer, right? He's in that sling for three or four hours. The Hmong tribesmen who protected this climbing group built this bamboo strut that takes him out, projects him away from the wall, and he's supported from above for three or four hours. And this is what he's picturing. There he is. Here's the climber, and here is the belayer. Now, most people think that's very dangerous. Here is a belayer controlling one and three-quarter climbers. There she is.
There's the fetus, right? Right? She and all the belayers are responsible for life and limb. Well, that sounds like what we're responsible for. Life or limb, life and limb, right? Now, I wanna show you the product of this gestation so you'll remember this. This is one day, one month, and about a year and a half, right? Do you think any of those people would put on a harness and tie into a rope if they thought that every other day, on average, they would be subjected, due to belayer error or equipment error, to a major threat to life or limb? Not a chance. So here is the interesting conclusion: in an ICU operating at a 99% correct rate, that's one error out of 100, patients are exposed to a much more dangerous environment than climbers doing what you just saw. 99% correct clinical care is not likely achievable with current methods, and if achieved, it's certainly not adequately safe in the ICU. Now, what compliance do you need in the ophthalmology clinic? I have no idea, and that's why we need to study this. Maybe you need 99%, maybe you only need 80%. I'm guessing you need more than that. So we need to study this in different contexts. Now, how can we achieve the goals of Toyota and Six Sigma? That's 99.99966% error free. That's 3.4 errors per million. And here is one of the mnemonics for doing that. Well, I think we can do that with tools like e-protocols, these electronic protocols that enable consistent clinician decisions linked to evidence. You cannot do it with guidelines or common protocols; 99-point-something percent of them are just too high level and lack detail. These electronic protocols that my colleagues and I have developed have targets, and guidelines have targets too, like reduce world famine. But the electronic protocols also contain detailed rules to reach the targets, and there's about a 95% clinician compliance in an open loop servo control manner.
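To make "detailed rules" concrete, here is a toy open loop rule in the spirit of the blood glucose protocols discussed in this talk. The target range of 80 to 110 mg/dL is the one cited; the thresholds, dose steps, and recheck intervals are invented for illustration and are not the actual LDS Hospital rules:

```python
# Toy open loop glucose rule: returns a patient-specific instruction that the
# clinician may accept or override. Doses and intervals are illustrative only.
def insulin_instruction(glucose_mg_dl, current_rate_u_hr):
    target_low, target_high = 80, 110  # target range cited in the talk
    if glucose_mg_dl < target_low:
        return "hold insulin, recheck glucose in 30 minutes"
    if glucose_mg_dl <= target_high:
        return f"continue {current_rate_u_hr} U/hr, recheck in 2 hours"
    # Above target: hypothetical proportional step-up in the infusion rate
    new_rate = round(current_rate_u_hr + (glucose_mg_dl - target_high) / 50, 1)
    return f"increase to {new_rate} U/hr, recheck in 1 hour"

print(insulin_instruction(180, 2.0))  # increase to 3.4 U/hr, recheck in 1 hour
```

The real protocols encode many more levels of reasoning (trends, nutrition, insulin sensitivity), but the shape is the same: explicit, executable detail rather than a high level target, with the clinician free to override (open loop) or, where the evidence supports it, removed from routine steps entirely (closed loop).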
So the protocol returns an instruction about exactly what to do, and the clinicians accept it 95 up to 98% of the time. And I'm going to argue that we should close the loop, because closing the loop, for those things for which we have enough evidence, enables us to unload the overloaded clinician, so that she has time for other things, because clinicians don't have time to do everything they need to do right now. So how does this play out in this transactional unit? Well, we've developed computer protocols for mechanical ventilation, intravenous fluid, and blood glucose control. We use protocols for standardized interpretation of pulmonary function tests. We're now working on a protocol for parts of heart failure management. They standardize decisions for the caregiver. They reach in and touch the caregiver; it's not just a matter of changing the context. Let me show you one example of that, for management of reactive hyperglycemia, stress hyperglycemia in the ICU, high blood sugar. There's a simple guideline that came from Tufts University, a bedside paper protocol from the University of Virginia, and a bedside electronic protocol that we used at LDS Hospital. They all target the same thing. Here are the percent of blood glucose measurements, from zero to 9%, on the vertical axis, and the blood glucose, from zero to 350 milligrams per deciliter, on the horizontal axis. And here is the target range, 80 to 110. We're talking about tens to hundreds of thousands of measurements. Here's the performance of the simple guideline at Tufts University. It's as simple as it comes. It's a closed unit. The guideline says get the blood glucose to between 80 and 110, and that's it. There's no argument from nurses or physicians. Here is the performance, in the dashed line, of the bedside paper protocol at the University of Virginia, and here is the bedside computer protocol performance, in the solid line, at LDS Hospital.
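The open-loop pattern just described can be sketched as follows. A rule set takes the patient-specific data and returns one instruction, which the clinician then accepts or declines; the decision, not the raw data, is what's presented. Everything in this sketch, the glucose thresholds, the dose-adjustment formula, the function name, is invented for illustration and is not the actual eProtocol-insulin rule set:

```python
# Illustrative open-loop eProtocol step: the rules propose a single
# patient-specific instruction; the clinician must accept or decline it.
# Thresholds and dose steps are hypothetical, NOT the real protocol.

def propose_insulin_instruction(blood_glucose_mg_dl, current_rate_u_hr):
    """Return an instruction string from (hypothetical) rules,
    targeting the 80-110 mg/dL range discussed in the talk."""
    if blood_glucose_mg_dl < 80:
        return "Stop insulin infusion; recheck glucose in 30 minutes"
    if blood_glucose_mg_dl <= 110:
        return f"Continue infusion at {current_rate_u_hr} units/hr"
    # Hypothetical dose-escalation rule for hyperglycemia
    new_rate = round(current_rate_u_hr + (blood_glucose_mg_dl - 110) / 50, 1)
    return f"Increase infusion to {new_rate} units/hr; recheck in 1 hour"

# Open loop: the instruction is displayed, and the clinician decides.
instruction = propose_insulin_instruction(180, 2.0)
clinician_accepts = True  # in practice 95 to 98% of the time, per the talk
if clinician_accepts:
    print(instruction)
```

The point of the pattern is that the clinician reviews a decision rather than deriving one from the data, which is the "unburdening" described later in the talk.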
Now, these are clearly statistically very significantly different, but the important thing is that, based on the best evidence at the time this study was done, we expected a 6 to 9% absolute mortality difference, so that if Tufts had a mortality of 40% in the ICU, we'd expect that LDS Hospital had a mortality that could be as low as 31%. If you had a drug that reduced mortality by 6 to 9%, the first thing you'd do is call your family members and have them buy stock in the company. This is a home run. So this is a big deal; this is not trivial. So we exported this electronic protocol to these other two sites. Western is LDS Hospital, Southeast is Virginia, Northeast is Tufts, and we also exported it to the National University Hospital in Singapore, and all of the differences disappeared. This gets back to the question of, well, what about different studies? Different studies use different methods. These four sites now use the same method, and this demonstrates that it's possible to export a method from a development site to other sites and have it perform the same. Now, in addition, we also embedded this method, embedded the rules, in the Intermountain HELP system. So here is the research result I've shown you before from our research ICU, from almost 500 patients and 21,000 measurements, and I'm going to show you now the results of its use in clinical care, after an announcement to the intensivists at Intermountain that if you're interested, there's a protocol for managing blood glucose; you can use it or not, among all the other protocols available. Almost 2,300 patients, almost 110,000 measurements. Now, these are statistically significantly different, but the real signal is that they're almost identical, and here for comparison are the results from Tufts and the University of Virginia, and this demonstrates that it's possible to actually export the tool from the research environment to the clinical care environment and translate research results into usual clinical practice.
Now, to respond to the question about conflicting results, here are results from 12 different studies during a decade. This involved millions of dollars and a lot of effort on the part of the healthcare community. Here are the insulin infusions in units per hour on the vertical axis, as a function of high to low blood glucose, and this gray bar is the range of insulin infusions from the 12 different protocols used in 12 different studies that produced inconsistent results, and this is the electronic protocol I've just shown you. Now, what does this mean? Well, it means that if Toyota or Mercedes built cars with this kind of variation on the production line, they wouldn't be called Toyotas or Mercedes; they'd be called Yugos, right? They wouldn't have a viable business. This is no way for us to use our limited healthcare resources, but we do it because we have no healthcare system, and there's no direction, there's no coordination, right? Now, this would not be acceptable in the rat laboratory, and in fact I've included an editorial from Marcia McNutt, who is the editor of Science, pointing out that Science, Nature, Nature Medicine, the New England Journal, the British Medical Journal, a number of other prestigious outlets, and the NIH are very concerned about the fact that this is the kind of reproducibility they're seeing in reductionist research. They're worried about the fact that the same laboratory can't reproduce its own results when it subsequently does a study. How much greater a problem is that likely to be in the clinical environment, where we have the problems that we've broached? So, we didn't have the funding to pursue that. The mortality, well, the results are variable.
The biggest study, which now drives decision making, came from the Australia and New Zealand Intensive Care Society in collaboration with the Canadian Critical Care Trials Group, and they indicated that a preferable target was identified because it was associated with a lower mortality, but they used a different method than we did. Oh, that's based upon other data, not on direct measurements of mortality in those studies. Yeah, it's no longer used there, because the FDA required us to remove the protocols, and the nurses at Virginia went to the director and said, what the hell are you doing? This is a better way to care for patients. And he said, we can't do it, because the FDA doesn't allow us to. Funding was not adequate to pursue that. So e-protocols are clearly feasible, they can be exported across cultures, they can be used to translate results to usual care, and I briefly mention these things for which I've not presented data. So what does this mean? Well, we can standardize 95% of the decisions of the clinical caregiver by reaching in, and where does that lead us? Well, because the data come from the patient to these rules that are captured, we unburden the clinician. We present decisions, not data. So the clinician still has to either accept or decline, but she has to accept or decline the decision, not look at the data and make her own decision. That's unburdening. But it's not as great as the unburdening that would occur if we closed the clinician loop and got the clinician out of the way, so that we reduce the number of decisions. That would unburden the clinician because it reduces the decisions necessary; it limits the cognitive burden, which is already excessive. And I think, I don't know this yet, I think, and I was encouraged by the director of our medical informatics research program to include this because this is what administrators want to hear,
I think that it would reduce visits, reduce readmissions, allow the patient to assume some care responsibility, something that is already recommended in medicine, and increase patient satisfaction, but these have not been systematically studied. We can take the data, and we have done this in two patients for 850 hours. We have millions of hours of use in the open loop mode, but we've taken two patients, 850 hours, and closed the loop, so that nobody has to look at the instructions; the data go right to the computer, and the computer adjusts the mechanical ventilator. And for those 850 hours in these two patients, it worked like a charm; there were no problems. I'll come back to that. For glaucoma, how would this work, if someone were interested? Well, we collect the necessary data from the patient, and we generate patient-specific instructions after going through a knowledge engineering process to capture the reasonable way to make decisions given the data presentations, and this would be presented to the clinician. And we could omit the clinician by just presenting the instructions to the patient with telemedicine: just send to the patient, increase the drops in the bottle with the brown cap, preferably the brown bottle because caps can sometimes be switched, from two drops twice a day to two drops four times a day, do that four times, and then come in to the clinic to have your eye examination at 11:45; the appointment's already been made for Friday morning. That would remove that decision from the clinician, and all we have to have for that is a reasonable set of rules, which I suspect many people in this room are quite capable of building, if they were willing to invest the effort to make that happen.
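The closed-loop mode described above can be sketched schematically: measurements stream straight to the rule engine, which adjusts the device setting directly, with no instruction ever displayed to a clinician. The SpO2 thresholds and FiO2 step sizes here are invented for illustration and are not the actual ventilator protocol rules:

```python
# Schematic closed-loop step: sensor data go straight to the rules,
# and the rules act on the device. All numbers are hypothetical.

def adjust_fio2(spo2_percent, fio2):
    """One hypothetical closed-loop adjustment of inspired oxygen."""
    if spo2_percent < 88:
        fio2 = min(1.0, fio2 + 0.05)   # raise oxygen toward a ceiling of 100%
    elif spo2_percent > 95:
        fio2 = max(0.21, fio2 - 0.05)  # wean oxygen toward room air
    return round(fio2, 2)

# Closed loop over a stream of monitor readings: no instruction is
# displayed and no one accepts or declines; the controller acts directly.
fio2 = 0.40
for spo2 in [86, 87, 90, 96, 97]:
    fio2 = adjust_fio2(spo2, fio2)
print(fio2)  # settles back to 0.4 after the desaturation resolves
```

The contrast with the open-loop mode is only in who executes the decision; the rule set and the patient-specific data flow are the same.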
So even if we close the loop, the patient would still express unique patient-specific clinical data over time, and the treatment regimen would still be individualized, personalized, patient-specific. When we present this to many clinicians, they say, with a knee-jerk response, well, I was taught in medical school not to practice cookbook medicine. But in fact, we do. Some post-operative care is cookbook. You know, whether you're 80 or eight, you get 40% oxygen until you wake up, and post-operatively after cataract implantation, you get certain treatments that are the same for everybody. That works, but that's not what this is. And just so you know I'm not talking about something very innovative: Lou Sheppard, who was an engineer at the University of Alabama, published in 1977 closed-loop control results in 8,500 patients in the post-operative cardiac surgery ICU. 1977. It worked like a charm, it was done for clinical purposes, and it unburdened the clinicians. Where have we been? Somewhere else. So, to summarize, maximizing clinical quality and value requires expert clinicians; we're never going to get rid of clinicians. People who are afraid of losing their jobs are just being unrealistic. Can you imagine, even if we knew everything there was to know about infectious disease, what would we have done when HIV was introduced and exploded in San Francisco? What would we have done with Middle East respiratory syndrome, or SARS, or Ebola? And if you think that's the end of those things, you and I have very different perspectives. So we're never going to get rid of clinicians. Business process tools are needed to improve the context, and these can address single events.
What you do when a patient comes into the clinic and how you enroll them indirectly unburden the physician. But replicable time-series methods, like the e-protocols I've shown you, that close the loop and directly unburden the physician are where we should be putting some effort, and this is absolutely ignored by the NIH, by the NSF, and by others. And with that, I'll close, and we have about six minutes left for discussion. Yeah, there's a good example. Dr. Olson mentions TheraDoc. TheraDoc is a company developed by people who originally did the antibiotic assistant at LDS Hospital in the 1990s, published in the New England Journal, seen as a major advance. The antibiotic assistant was in fact an assistant; it wasn't a replicable method such as those I've described here. I tried to get my colleagues excited about making it replicable; I was not successful. And there were some real challenges, like getting information from X-rays. Even now, you can't get coded information from X-rays, because the PACS systems were all independent, but that's changing. And knowing something about infiltrates in the lung, and pleural effusions, was very important for making decisions about antibiotics. But that's a good example. The antibiotic assistant was actually followed at LDS Hospital about 62% of the time, not the 95% I'm talking about, but TheraDoc has built a business on this, and they're absolutely right. There is no way a practicing clinician can keep in her head all of the things you need for this, and other examples are obvious as well: diabetes management, with the three or four new oral agents for diabetes; anticoagulant management, with the new oral anticoagulants. It's just impossible to keep it all together. Okay, so the validity of the input data, of course, is crucial, but that's true of decision-making whether I make the decision myself or whether it's made by a set of rules.
I mean, if the data are bad, then how can my decisions be any better than a protocol's decisions? So what the protocol does, as far as I can tell, is help us consistently link decisions with evidence and with reasonable rules, as opposed to letting me fly unaided, in which case you can anticipate fully that the decisions I make will not be consistently linked to evidence and will vary. And when I say me: if we took 10 of us, we would all make different decisions, because there's a lot of very well understood inter-decision-maker variability, but there's also intra-decision-maker variability. So you can imagine that if you had to do some eye surgery and had one bad outcome, your decisions for the next several months might be dramatically colored by that bad outcome. It happens all the time, to everybody. So the issue here is consistency and reliability, which will, I think, answer the question: how can we do clinical studies better, so that the results of the clinical studies are more credible and also might be transferable to clinical practice? If the methods in the clinical studies are captured in electronic rules, then with all of the effort going into EMRs, we're likely to be able to introduce those rules as web services and get people to use the exact method that was used in the clinical study. But your point is well taken. Sure, so the question is, doesn't eProtocol have to be standardized in the same way, with randomized clinical trials as the ultimate determinant of whether it's better or not? It's only been done once. In the 1990s, we exported our mechanical ventilation protocols to 10 institutions in eight cities in five states, none of which were involved in the development of the rules. And we didn't have enough power to look at mortality, but unfavorable consequences like new barotrauma, air leaks in the lung, were reduced in the computer protocol group versus the unaided clinician decision-making group.
And in that RCT, the bedside carts that included a computer and a mechanical ventilator were identical in both groups, both the intervention group and the control group. The only difference was that the intervention group received instructions and the control group received none. But the data were incorporated in the same system, and the same mechanical ventilators were used. So that's, well, we only did 200 patients, and we didn't have enough power for mortality. That was done in the 90s. I'll have to look back and see if we had ICU days. But it reduced barotrauma, which is one of the unfavorable consequences. If anybody is interested, I'm very pleased to chat with you about this. My email is alan.morris at imail.org; that's the best one, rather than the university email.