Well, it looks like we have our quorum here, so I'm going to go ahead and get started. Today's topic is diagnostic errors in medicine. By way of background, a landmark study was published by the Institute of Medicine in 1999 entitled To Err Is Human, and it focused on preventable errors in medicine. The findings were gathered from two large studies of hospitalizations in the United States, covering hundreds of thousands of patients. They determined that adverse events occurred at a frequency of 2.9% in the Utah-Colorado cohort and 3.7% in the New York cohort, and of these adverse events, death occurred in 6.6% of the Utah-Colorado group and 13.6% of the New York group. This data was alarming, and when they extrapolated to the 33.6 million hospitalizations that occurred in 1997, they concluded that between 44,000 and 98,000 people die every year as a result of medical error (I'll sketch the arithmetic behind that range in a moment). You've probably heard these figures in the lay press; this was really a wake-up call. The errors they defined fell into four categories. Diagnostic errors: error or delay in diagnosis, failure to employ indicated tests, use of outmoded tests or therapy, and failure to act on the results of monitoring or testing. Treatment errors: errors in the performance of an operation, procedure, or test; error in administering a treatment; error in the dose or method of using a drug; avoidable delay in treatment or in responding to an abnormal test; and inappropriate care or care that was not indicated. Preventive errors: failure to provide prophylactic treatment and inadequate monitoring or follow-up of treatment. And then other errors: failure of communication, failure to hand off patients well amongst colleagues, equipment failure, and other system failures.

At the Annenberg conference on patient safety in 1998, the year prior to the publication of this study, Nancy Dickey, the past president of the AMA, said the only acceptable error rate is zero. At the same conference in 2001, Gordon Sprenger, CEO of Allina Health, said: let's be absolutely clear on this, the goal of the patient safety movement must be to eliminate all errors. This is like climbing Mount Everest, but it must be our goal and it can be done. That led to two questions in my mind. Number one, is it reasonable to have a zero-tolerance policy towards medical error? And number two, could the statement "the goal of the patient safety movement must be to eliminate all error; this is like climbing Mount Everest" possibly be the worst analogy ever offered? That got me interested in the actual risks of climbing Mount Everest. Here's an article entitled "Effects of age and gender on success and death of mountaineers on Mount Everest," and what they found, basically, is that climbing Everest is very risky, and so, by analogy, is practicing medicine. The overall death rate is about 5% for climbers over age 60 and about 1.5% for younger climbers, with no gender difference. And you're more likely to die on the way down, apparently, than on the way up: the death rate among summiteers during descent is a whopping 25% for climbers over 60 years of age, versus a more modest 2.2% amongst younger climbers, again with no gender difference.
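Going back for a moment to the To Err Is Human numbers: here is a minimal back-of-the-envelope sketch in Python of where the 44,000 to 98,000 range comes from. The adverse-event and death rates are the ones quoted above; the "preventable" fractions are my own illustrative assumptions (the report states only that over half of adverse events were judged preventable), so treat this as a sketch of the extrapolation, not the IOM's actual method.

```python
# Back-of-the-envelope extrapolation behind the 44,000-98,000 estimate.
# The adverse-event and death rates come from the two cohorts quoted in
# the talk; the preventable fractions are illustrative assumptions only.

hospitalizations = 33_600_000  # US hospitalizations in 1997

cohorts = {
    # name:           (adverse-event rate, death rate, assumed preventable share)
    "Utah-Colorado": (0.029, 0.066, 0.68),
    "New York":      (0.037, 0.136, 0.58),
}

for name, (ae_rate, death_rate, preventable) in cohorts.items():
    deaths = hospitalizations * ae_rate * death_rate * preventable
    print(f"{name}: ~{deaths:,.0f} deaths/year")

# Prints roughly 44,000 for Utah-Colorado and 98,000 for New York.
```

The point of the sketch is simply that multiplying a few small percentages by 33.6 million hospitalizations lands you squarely in the tens of thousands of deaths per year.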
Those Everest findings on age got me interested in the effect of age on death and complications during hospitalization. Here's a study published out of Great Britain, looking at all of the hospitalizations and negative outcomes in England and Wales over a full decade, so again we're talking about millions of people. They found the following: "Our study shows that older patients are more likely to experience a misadventure during surgical and medical care and more likely to die as a result." That seems pretty obvious. "One explanation for this is the frailty of older people, which alongside the presence of comorbidities is likely to affect their capacity to survive misadventures relative to younger patients." Again, kind of obvious. "A fragile health status is also likely to account for the elevated mortality observed in children less than one year of age." There's a graph of mortality by age, and we see that it climbs markedly as we grow older. "The higher rate of misadventures in older adults is likely to be a consequence of greater exposure to health care." So the risk factor is simply being part of the system, and exposure increases with advancing age. "Furthermore, our study suggests that older age groups have experienced a greater increase in the number of procedures performed than younger age groups." Despite being more vulnerable to misadventures, older individuals are also being exposed to more procedures than in previous years, and it is also possible that older people undergo a greater number of complex, risky, and invasive treatments in which they are more prone to error. Belaboring the obvious, perhaps, but clearly showing that age is a risk factor for complications and for death as a result. We can conclude, then, that as the general population ages, we can expect both a higher rate and a higher total number of misadventures or complications, and with respect to zero tolerance, the aging baby boomer cohort is a major driver towards an increase in reported medical errors.

But today we're going to talk about that first category, diagnostic errors in medicine, and a couple of comments up front. It's important to note that most misdiagnoses result from our failure to consider the correct diagnosis as a possibility, not from a lack of knowledge of the correct diagnosis. In fact, a full 96% of errors have to do with interpretation of the data rather than not knowing the true diagnosis. Also, diagnostic errors occur most frequently in primary care settings such as family, internal, and emergency medicine, and are less likely to occur in specialty settings. So back to the question of whether we can eliminate diagnostic errors. This is a review that considers the feasibility of reducing or eliminating the three major categories of diagnostic error in medicine. The first category is no-fault errors, which occur when the disease is silent, presents atypically, or mimics something more common. It's a relief to hear that they give us credit that sometimes we simply can't make the diagnosis at all; these are no-fault errors. These errors will inevitably decline as medical science advances, as new syndromes are identified, and as diseases can be detected more accurately or at earlier stages, but they emphasize that these errors can never be eradicated,
because new diseases emerge, tests are never perfect, and patients are sometimes noncompliant. What they mean here is that patients sometimes just don't give you the information you seek, or don't describe their disease well enough for you to make a diagnosis, and physicians will inevitably at times choose the most likely diagnosis over the correct one, illustrating the concept of necessary fallibility and the probabilistic nature of choosing a diagnosis. The second category is system errors, which play a role when the diagnosis is delayed or missed because of latent imperfections in the healthcare system. These errors can be reduced by system improvements but can never be eliminated, because improvements lag behind or degrade over time, and every time we try to fix something in the medical system, the fix creates the opportunity for novel errors. There are always trade-offs in trying to improve things, because resources are limited and are simply shifted from one area to another, so we'll never be able to completely resolve the issue of system errors. And finally, the main topic: cognitive errors, which reflect misdiagnoses from faulty data collection or interpretation, flawed reasoning, or incomplete knowledge. The limitations of human processing and the inherent biases in using heuristics, which are just rules of thumb that help us make quick decisions, guarantee that these errors will persist. Opportunities do exist, however, for improving the cognitive aspect of diagnosis by adopting system-level changes such as obtaining second opinions, implementing computerized decision support systems, and enhancing access to specialists, and by training to improve cognitive awareness. So in summary, diagnostic error can be substantially reduced but never completely eradicated.

So how do we make decisions? That's really the topic of discussion here. Decision making is the most important thing we do in our lives: we decide where we're going to go to school, what we're going to major in, our careers, who we're going to marry, et cetera. So the question of how we make decisions is a very important one. Through the history of Western thought, from the Greek philosophers up to the present, there has been a great emphasis on making decisions through a logical and reasoned process, and the use of intuitive thought processes has generally been discouraged because they're thought to be too impulsive, emotionally based, and rooted in more primal, evolutionary forces. So there's been an emphasis on logical reasoning through all the great philosophers, and it is generally accepted that we become better at logic through culture, education, and intellect. There are various schools of thought on decision making, whole lifetime areas of research, and they fall into two categories: the intuitive versus the analytical approach. The traditional thinking in medicine is that with training in logic and mathematics, physicians should and will use the analytical approach. They should obtain a complete history, perform a thorough exam, consider the differential diagnosis, then order tests and arrive at a diagnosis using principles of logic and the application of conditional probability. I became interested in this topic after hearing Dr.
Jerome Groopman on National Public Radio speak on how doctors really think in the heat of battle. He holds a chair of medicine at Harvard Medical School, is the chief of experimental medicine at Beth Israel Deaconess, and is also a staff writer for The New Yorker magazine. He describes clinical decision making in five categories, and I'll go through his thoughts quickly here. First, intuition. These are his comments: the more experience you have, the greater the temptation to rely on intuition when making medical decisions, but this is fraught with potential for error, and you really have to remind yourself to remain systematic. Another approach is pattern recognition, which is based in heuristics, these rules of thumb, and in certain cognitive biases that are really built into us. This is the flesh-and-blood decision-making process. All cues to a patient's problem, from history taking to examination, radiology, labs, and in ophthalmology visual fields and OCTs, come together at once to form a pattern in our mind. This pattern forms within seconds, largely without conscious analysis, and does not arise by a linear, step-by-step combining of the cues; rather, the mind acts as a magnet and pulls in the cues from all directions, hence pattern recognition. Clinical algorithms, he says, can be useful for run-of-the-mill diagnosis and treatment, but they quickly fall apart when the doctor really needs to think outside the box. He's a lab researcher primarily, but once a year he spends a month on the general medicine ward, and he became very concerned sometime in the 90s, when he wrote this book, because he noted that many medical students and residents had been trained to use clinical algorithms, and he felt that they weren't thinking well and weren't able to think outside the box. In such cases, he states, algorithms discourage physicians from thinking independently and creatively, and instead of expanding a doctor's thinking can actually constrain it. His thoughts on evidence-based medicine: it's rapidly becoming the canon in many hospitals and medical schools, treatments outside the statistically proven are considered taboo until a sufficient body of knowledge can be generated from clinical trials, and this rarely mirrors the reality at the bedside. And finally, Bayesian analysis is a method of decision making favored by those who construct algorithms and strictly adhere to evidence-based practice, especially people who design medical records and our clinical decision support systems. It relies on mathematics to model diagnosis and treatment, and most importantly, rarely do we have the high-quality studies from which a decision analysis can derive a probability; you have to have very well-designed studies in order to apply these methods. I'll show a small worked example of the underlying conditional-probability calculation in a moment.

So in reality, we live in a medical world of pattern recognition using heuristics, and we definitely become more error-prone when we believe we can rely on intuition alone to make decisions. And given that we rarely have complete information, it's a real question whether we've become more error-prone as we move in the more analytical direction. In the past few decades, there has been a confluence of data from a variety of fields, including cognitive psychology, neurology, neuroanatomy, neurophysiology, genetics, and philosophy, that all support the central role of intuitive, non-analytical decision making.
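Here is that promised worked example. Since Bayesian analysis comes down to conditional probability, this minimal Python sketch shows the kind of calculation involved: turning a pre-test probability into a post-test probability given a test's sensitivity and specificity. All the numbers are hypothetical, chosen only to make the arithmetic concrete; they are not from any study.

```python
# Bayes' theorem for a diagnostic test: P(disease | positive result).
# All numbers below are hypothetical illustrations, not from any study.

def post_test_probability(pretest, sensitivity, specificity):
    """Probability of disease after a positive test, via Bayes' theorem."""
    true_positives = pretest * sensitivity               # P(D) * P(+|D)
    false_positives = (1 - pretest) * (1 - specificity)  # P(~D) * P(+|~D)
    return true_positives / (true_positives + false_positives)

# Hypothetical: 10% pre-test probability, 90% sensitivity, 80% specificity.
p = post_test_probability(pretest=0.10, sensitivity=0.90, specificity=0.80)
print(f"Post-test probability: {p:.1%}")  # ~33.3%
```

Note that even a positive result on a decent test can leave the diagnosis more likely absent than present, and the whole calculation hinges on sensitivity and specificity estimates that, as Groopman points out, rarely come from studies good enough to supply them.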
These recent data suggest that many decisions in everyday life are made quickly and reflexively using heuristics and useful biases, and that this is due to built-in neural architecture that has evolved through Darwinian natural selection. These are viewed as efficient mental strategies for dealing with an uncertain and ambiguous world, and on most occasions they work, but occasionally they fall apart. So we really need to be aware of the conditions under which our fast, intuitive thinking falls apart. Two Nobel Prizes have been awarded in this area, both in economics. Herbert Simon won in 1978 for his work on decision making in organizations, and he developed the concepts of bounded rationality and satisficing. He also won a Turing Award, which is amazing; he's the only person to have done both. The Turing Award is really the Nobel Prize of computer science, and his was for basic contributions to artificial intelligence and the psychology of human cognition. Daniel Kahneman won the Nobel Prize in 2002 for the psychology of judgment and decision making, and he described the cognitive biases that lead to human error when we use heuristics. In psychology, heuristics and biases are viewed as efficient mental strategies with which to deal with an uncertain and ambiguous world. They work, as I mentioned previously, on most occasions, and though they do occasionally fail, they are not considered to be intrinsically bad. A cognitive bias is the human tendency to make systematic decisions in certain circumstances based on cognitive factors rather than on the evidence. An example is the availability bias: if we see 13 patients in a row with a flu virus because it's traveling through the community, then when the 14th patient walks in with the same constellation of signs and symptoms, flu is the most available thought in our mind, so we may just make that diagnosis, and it may turn out to be something entirely different. These cognitive biases are thought to be very valuable evolutionarily, because if you see a threat on the savannah or in the jungle, you learn it and you respond very quickly to it; you don't stop to think, well, could that really be a lion? You respond. Heuristics, then, are experience-based techniques for problem solving, learning, and discovery. We use heuristics when an exhaustive search is impractical, to speed up the process of finding a satisfactory solution, and an example of using heuristics is what we discussed previously, the use of pattern recognition in medical diagnosis.

As an aside, I found this little story about Daniel Kahneman interesting; it's where he describes why he ended up going into psychology. He was originally from Israel, and in late 1941 or early 1942 his family was living in Nazi-occupied Paris, where Jews were required to wear the Star of David and obey a 6 p.m. curfew. He was a little boy, and he had gone to play with a Christian friend and stayed out too late, so to walk the few blocks home he turned his brown sweater inside out. "As I was walking home, I saw a German soldier approaching. He was wearing the black uniform that I had been told to fear more than others, the one worn by specially recruited SS soldiers. As I came closer to him, trying to walk fast, I noticed that he was looking at me intently. Then he beckoned me over, picked me up, and hugged me.
I was terrified that he would notice the star inside my sweater. He was speaking to me with great emotion, in German. When he put me down, he opened his wallet, showed me a picture of his boy, and gave me some money. I went home more certain than ever that my mother was right: people were endlessly complicated and interesting." I found that quite moving, and that, again, is what moved him to go into the field of psychology.

The experimental data demonstrate that in busy clinical settings, physicians primarily use pattern recognition in diagnostic decision making, because it's fast, efficient, and often correct. It's most useful under conditions of stress, fatigue, and sleep deprivation, because in these conditions we lose our ability to use analytical reasoning, and we move towards the use of heuristics and biases in order to get through the day. It's also the most efficient strategy under conditions of clinical uncertainty, when the data is incomplete or ambiguous. Here is a quote I found quite interesting from David Eddy, a professor of health policy at Duke: "Uncertainty creeps into medical practice through every pore. Whether a physician is defining a disease, making a diagnosis, selecting a procedure, observing outcomes, assessing probabilities, or putting it all together, he is walking on very slippery terrain. It is difficult for non-physicians, and indeed for many physicians, to appreciate how complex these tasks are, how poorly we understand them, and how easy it is for honest people to come to different conclusions." So under conditions of time constraint, when the data is unclear, and when we're stressed and fatigued, we move away from logical thinking towards intuitive thinking processes, in particular pattern recognition.

The role of context here is very important. Context is the milieu in which a case presents and the decision is made. For example, contrast these two case presentations: a complex diagnosis made in a busy emergency room at 3 a.m. by an overworked, tired, stressed-out resident who is using pattern recognition and other heuristics to get through the night, and the same case discussed at grand rounds, taking a more logical and reasoned approach using analytical decision-making patterns. It's understandable that under these different circumstances we could come to completely different conclusions. So is context misleading or adaptive? Well, the brain is hard-wired to interpret information via the context in which it is presented. Here we have a shape that we read as the letter B when it's surrounded by letters, and the same shape understood as the number 13 when it's surrounded by numbers; we arrive at very different conclusions depending on the context in which information is presented to us. The brain uses context to interpret meaning, which can be adaptive but also misleading. And here we have two vertical black lines of identical length, but you'd swear that one is much larger than the other, because the brain interprets the scene as an architectural space. Again, this may confer an adaptive advantage, but it can also lead to wildly different conclusions based on context. So context influences intuitive decision making, with significant potential for error, but has little effect on rational decision making.
So again, under conditions of stress, time constraint, and fatigue, we move towards intuitive decision making, which is highly influenced by context and very prone to error. Consider this example as well, an actual headline from the National Post in 2008, to the effect that a man had fatally shot his wife and gotten away with it. On day one, that was the headline, and there was understandable outrage because the perpetrator had been released from prison. The following day we learned that the accused was an elderly man, diagnosed with terminal cancer, who was the sole caregiver for his wife with end-stage Alzheimer's, and that he died two weeks later. That changes everything about how we interpret the case. So again, context has great impact on how we make quick decisions when we're simply interpreting events as they're presented to us. There's an article entitled "Context Is Everything," written by Pat Croskerry, who is a big thinker in this field, and he states that retrospective investigations, such as root cause analysis, critical incident review, morbidity and mortality rounds, and legal investigations, all suffer from the limitation that they cannot faithfully reconstruct the context in which decisions were made and from which actions followed.

In the past few decades there has been a confluence of data, again the same concept, from a variety of different fields, and essentially everyone who studies decision making has converged on this model, dual process theory. It basically states that there are two ways we think, which is what I've been presenting to you: an intuitive system one and a rational system two. We've discussed the properties of each system: fast, heuristic-level thinking versus deductive thinking. Awareness in system one, the intuitive system, is very low, versus high when we're thinking analytically. System one is reflexive and automatic; it's fast, the effort is minimal, it's low energy, and it works at 3 a.m. It has a low psychological cost, but it's very vulnerable to biases, has low reliability, makes a lot of errors, has low predictive power and low scientific rigor, is highly dependent on context, and is probably hardwired into the brain. So this is dual process theory. If we're presented with a patient, the information moves into a pattern-processing section of the brain, and we interpret all the data very quickly. If we recognize the pattern, we move immediately into fast system one thinking. In contrast, if we don't recognize the pattern, we should move along the system two path, where we use our intellectual ability, education, training, and critical thinking skills to try to arrive at a logical conclusion. Over time, as we learn a particular area, it becomes repetitive and moves into more of a system one process. And at any time, if we find ourselves questioning our system one thinking, we can rationally override it. That's really a main concept in this field: we should be aware of which system we're in when we're thinking about patients. The concept is metacognition, being aware that we're using rules of thumb, heuristics, and rapid heuristic thinking patterns, so that we can override the system, step back and say, okay, wait a second, let me think logically, let me break this down, and rationally override the process.
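To make that routing concrete, here is a toy sketch in Python of the dual process flow just described: a recognized pattern triggers fast system one thinking, an unrecognized case drops down to slow system two analysis, and metacognitive doubt can override system one. The function names and the little "pattern library" are my own illustrative inventions, not anything from the psychological literature.

```python
# Toy sketch of dual process routing. Illustrative only: the pattern
# "library" stands in for experience accumulated through repetition.

known_patterns = {
    # constellation of findings -> diagnosis learned through repetition
    ("elevated IOP", "disc cupping", "field defect"): "open-angle glaucoma",
}

def system1(findings):
    """Fast, automatic, low-effort pattern recognition."""
    return known_patterns.get(tuple(findings))

def system2(findings):
    """Slow, deliberate, high-effort reasoning (stubbed for illustration)."""
    return "build a differential and work up: " + ", ".join(findings)

def diagnose(findings, doubt=False):
    answer = system1(findings)
    if answer is None or doubt:  # unrecognized pattern, or rational override
        answer = system2(findings)
    return answer

print(diagnose(["elevated IOP", "disc cupping", "field defect"]))  # system 1
print(diagnose(["elevated IOP", "disc cupping"]))                  # system 2
```

The `doubt` flag is the metacognitive override: even when a pattern matches, questioning the quick answer routes the case back through deliberate analysis.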
So dual process thinking is a really big model now in various fields to describe how we think and how we decide, but it is, as I've mentioned, very vulnerable to error. So how do we overcome these vulnerabilities when we're in this fast-thinking mode, which is the mode we're typically in while working in busy clinics? Back to Jerome Groopman and his book How Doctors Think: the majority of errors in physician thinking occur because of a cascade of cognitive errors and, as I mentioned, are rarely due to technical mistakes or lack of knowledge. Ten to fifteen percent of clinical diagnoses are inaccurate, and in one study of 100 diagnostic errors, only four resulted from inadequate medical knowledge; the rest fell into cognitive traps. And the major risk factor is being rushed. There are really at least 50 different cognitive errors; I'm going to go through about eight of them, the most prominent in medicine.

Anchoring is a shortcut in thinking where a person doesn't consider multiple possibilities for the diagnosis, but quickly and firmly latches onto the first one that comes to mind. It's very common. Again, these are evolutionarily adaptive ways of thinking, but they don't always work in the complex, modern world we find ourselves in. Anchoring is prematurely closing the differential diagnosis, and one of the strongest safeguards against cognitive errors such as anchoring is to make a short differential on every patient. If you can train yourself to do this, you're going to avoid this major form of cognitive error. And for every differential, you should always ask yourself: what's the worst thing this could be?

The availability error, which I discussed briefly earlier, describes the tendency to judge the likelihood of an event by the ease with which relevant examples come to mind. What's available in your mind strongly colors your thinking about a new case that has some similarity to your frame of reference. Groopman gives an example in the book from his internship at Highland Hospital, an inner-city hospital in California. He had seen a whole series of patients come in that evening under the influence of alcohol, and when the eighth or ninth patient arrived, also apparently intoxicated and unconscious, he just assumed this was another drunk. It turned out the man was a physics graduate student at Berkeley who had been the victim of violent crime and had been completely misdiagnosed as being drunk. So the availability error is one where the most recent events are the ones that come to mind most easily and set our frame of reference, and it may cause you to ignore the major differences in a particular case and come to an incorrect diagnosis. Groopman says that being quick and shooting from the hip are indications of anchoring and availability, and that these are the two most frequent cognitive biases in the emergency department. Often they are all the doctor needs to hit the mark and make the correct diagnosis, but they can also be wildly off the mark.
You really need to remember that haste is what forces you down this pathway, be aware of what type of thinking mode you're in, and try to use some techniques to overcome cognitive biases.

The confirmation bias is attention to data that supports the presumed diagnosis while minimizing data that contradicts it. It's driven by an expectation that the initial diagnosis is correct, and it involves selectively surveying the data: once you've arrived at a diagnosis, you pick out the pieces of data that support it and unconsciously ignore pieces of data that might refute or contradict it. To avoid the confirmation bias, we need to pay attention to data that does not fit. Always look for pieces of data that just don't quite fit what you think is going on, and then chase them down and figure out why they're there, rather than simply dismissing them.

Improper framing, this is an interesting one, occurs with patients who are referred to you. If somebody says to me, I'm sending you a patient with primary open-angle glaucoma, I may just accept that frame and fail to recognize that this is in fact an angle-closure patient or a pigment dispersion patient. It often occurs with specialists: once an authoritative senior physician has fixed a label on a patient, it usually stays firmly attached, and self-aware physicians know that accepting the frame can lead to serious error. So when you receive a patient who's been labeled, start over, and try not to fall victim to this framing effect, which is quite common.

In the representativeness error, thinking is guided by a prototype, and we fail to consider features that contradict the prototype. For example, if a healthy, fit man appears with chest pain, that doesn't fit the prototype of coronary artery disease, so we might dismiss it or diagnose it as muscle pain or something else. Or perhaps a young person with a large cup-to-disc ratio: it must be physiologic cupping, it can't possibly be glaucoma, they're too young. But you have to be prepared for the atypical, and most importantly, as I've learned in practice, don't reassure every patient that they're okay just because they don't fit the prototype, because it may not be the case.

An affective error is the tendency to prefer what we hope will happen over less appealing alternatives, and this tends to occur with people we like. This is why we don't take care of family members: we selectively survey the data because we like the person and want a positive outcome for them. An example might be a patient I like who has somewhat suspicious discs but good visual field results, so I overvalue that information, report the great news, don't bother to get an OCT that might have shown me early glaucoma, and fail to follow up appropriately; I've committed the affective cognitive error.

And I think this is the last one here: the attribution error is the tendency to overemphasize personality-based explanations for complaints or symptoms. When a patient fits a negative stereotype we say, he's a complainer, the pain is all in his head, she's an alcoholic and she's just drunk.
These things may be true, and we may genuinely feel this way about a particular patient, but the patient may still have a real underlying diagnosis. So you have to learn to recognize negative feelings towards patients and then plant a red flag in your mind that says: I have personal feelings about this patient that may be coloring my ability to see what's really going on.

So in summary: the publication of To Err Is Human in 1999 cast a dark shadow on the American medical system. Approximately 50,000 to 100,000 deaths per year were attributed to medical mistakes, and this resulted in a major call to action to reduce medical error. The challenges, however, are daunting. Like climbing Everest, medicine is inherently risky and unpredictable. Diagnosis and treatment are often obscured by uncertainty; uncertainty plays a huge role in medicine that we are often unwilling to acknowledge. We have an aging population that is increasingly susceptible to bad outcomes, and as that cohort ages we're going to see more, not fewer, bad outcomes unless we make dramatic changes. It's important to remember that when we look at mistakes, retrospective analysis entirely misses the context in which decisions were made. Retrospective analysis, like grand rounds, is done using logical system two thinking, so we can't necessarily understand how mistakes occurred unless we can recreate the context in which the decisions were made. The expectation of perfection, zero tolerance for error, is simply unreasonable. Error cannot be eliminated, because there are no-fault errors, system errors, and cognitive errors, which I went through earlier, that are simply built into the system, and there's no way around them.

To reduce the risk of cognitive error, we have to remember that being busy and stressed is really the number one risk factor for error. We need to recognize when we're under these conditions, which is basically whenever we're in clinic, and we have to structure our thinking and make a conscious effort not to commit cognitive errors. We need to always create a differential diagnosis and consider whether we're just anchoring on the very first idea that comes to mind. We need to always ask ourselves what the worst thing this presentation could possibly be, and whether we're dismissing or ignoring information in the presentation because we're focusing on what we think the answer should be, slowing down enough to say: wait, why doesn't this fit? And then chase that down. We also need to ask ourselves whether we've uncritically accepted an improper frame when a patient is referred to us; basically, start over with your workup whenever you receive a patient. And this is a hard one, because it's really built into how our brains think, but when we're in an environment where we're being exposed to one type of case consistently, we should ask whether we're just jumping to conclusions when we make that diagnosis in the next patient. Is my thinking erroneously guided by a prototype? Am I considering atypical presentations of common diseases, or am I fitting everything into the common prototype?

In conclusion, then, remember that to err is human and to forgive divine, and I thank you for your attention.

[In response to an audience question:] The first thing I would advise is don't blame yourself for mistakes; focusing on mistakes is really the best way to learn. Unfortunately, the medical culture, the societal culture surrounding medicine, holds that we're supposed to be error-free, so I think that's where the tension lies.
There are a number of biological explanations as to why learning occurs best by focusing on mistakes; there isn't time to go into that, nor am I an expert in it, but the prime example is great chess players. The way they become great is that they go back over their games and focus on their mistakes, and eventually, through mistakes, their thought processes become intuitive. So it's counter to what we're taught, that we're never supposed to make mistakes and that everything can be learned in a linear, step-by-step fashion, but it turns out it doesn't always work that way. The corollary, well, thanks again for all the