[unintelligible] ... where people sit on this spectrum and what's effectively their answer to one question. You can go and measure the behaviour according to randomized, double-blind, controlled experimentation, which is a bit of a mouthful, but I was told to make sure I could get it across: this is rigorous social science.
The behaviour turns out to be radically different, and in what remains of this presentation I want to try and unpack what these two mindsets do in terms of driving institutional behaviour, particularly in the context of aviation. [unintelligible] Not because we lack intelligence, but because of the complexity of the phenomena when you're trying to keep a non-linear dynamic system safe. And this orients the minds of the professionals in the industry to the learning opportunities that are always out there, but are so easy to neglect. So as you know, when pilots almost hit each other in midair or on the tarmac, a near-miss event, they openly, voluntarily and transparently submit a report. Dozens are filed around the world every hour, and these are analysed in their statistical totality to figure out what the weaknesses in our assumptions are that are leading to these near-misses. What could we rationally do differently to avert an accident before it's even happened? So second by second, minute by minute, hour by hour, they are learning the lessons to drive a dynamic process of change. And beyond health care, you'll have heard about the importance of adaptability and agility, and my pitch to you really is that these organisational qualities emerge quintessentially from underlying cultural and psychological assumptions that map onto the growth mindset. And what if, God forbid, there's a crash, the most serious form of system failure?
If you're anything like me, it's quite difficult, isn't it, to confront and interrogate our failures, particularly if we're supposed to be smart and talented and senior and highly competent. But early in its history, as was said in that kind introduction, aviation created a mechanism that makes its failures data rich: the black box. One box records the electronic flight data; the other records what's called the ambient sound, how the pilot and co-pilot were interacting in the build-up to the crash. So when an accident happens, the investigation branch can rescue the boxes from the rubble of the crash, deconstruct what went wrong, and once again put reforms in place to ensure the same mistake never happens again. That to me is the anatomy of a learning organisation. And as you know, it's had an incredible effect on the key metric, which is the accident rate, which was very high at the beginning of the last century. In 1912, more than half of US army pilots died in crashes in peacetime. The early fatality rates at some of the military training schools were close to 25% annually. But decades of institutionalised learning, driven by this culture of continuous improvement and openness, led to a remarkable situation last year. I don't know if anyone caught this on the news: the accident rate for the major airlines globally was zero. The accident rate has now dropped to one crash for every 17 million take-offs. And that's partly about the talent, but it's fundamentally about the culture. And I think what I've gleaned from being here the last couple of days is that healthcare doesn't quite have the same level of transparency. And a lot of the content has been about the problem of a very punitive culture. When clinicians anticipate being unfairly blamed, or struck off, or put on trial for culpable homicide for completely honest mistakes created by more subtle systemic weaknesses, they're less likely to volunteer that information.
And I was very honoured to work with Jeremy Hunt on the creation of the Healthcare Safety Investigation Branch and other reforms that are trying to build a better cultural consensus around drawing a rational line between blame and accountability. And perhaps we can, well, you've already heard a lot about that. I want to bring up a related but separate issue. And I hesitate to say this, but I'm going to say it anyway. Which is that there's something quite deep in clinical culture. I don't know if you'd agree with this, but when you get to the top of the hierarchy, when you're a consultant, the insinuation is that you're that talented, that brilliant, that audacious, you've had that long and expensive an education, that you don't really make that many mistakes, or you don't make mistakes at all. James Reason, the great safety campaigner, said that one of the problems in clinical culture was this notion of the infallibility of the senior doctor. I think that's one of the things that makes it difficult for junior people in the team to speak up when they can see something going wrong. That hierarchy gradient is partly constructed upon the idea that the person at the top of it is the big cheese, the person who gets the answer right. And that sounds not too bad as far as it goes. Isn't it a good idea to have somebody there who is audaciously brilliant in their reasoning, clinical judgments, and so on? But I think it does hint at a potential cultural danger, which is what happens when there is a suboptimal outcome, when something goes less well than it could have done. Is it fair to say that's basically everything? You can improve on any given dimension of performance through time, with a few caveats. The most serious form of suboptimality, of course, being death. Each suboptimal outcome is an opportunity to learn the lessons to make sure that the same mistake doesn't happen in the future.
But you'll be fully aware of the observational studies of senior doctors talking to patients' families, where there is a tendency partly towards concealment, which has been dealt with, I think, very brilliantly at this conference, but also towards self-justification. And think of the psychological dynamic. If I've been positioned as an infallible clinician, then that accident and that mistake can't be anything to do with me. It must be an unavoidable death, or a complication of the procedure, or just one of those things. This phenomenon of the fixed mindset making it difficult to create meaningful adaptation exists in many different industries. Perhaps a good way to highlight it is the domain of economic forecasting. There's a very interesting objective finding by Philip Tetlock, the great American academic, that the high-reputation forecasters, on average, make the worst predictions, which sounds paradoxical. Why are the high-reputation forecasters making less good forecasts? By the way, high-reputation as measured by how often they visit TV studios. And you can see the problem. What is an error of forecasting? It's an opportunity to revise or enrich one's theoretical assumptions, because the model that drives the prediction is not the system itself. It's a simplification that permits prediction, and an error is a signpost about how the model can be reformed to make it more robust and predictive in the long run: the scientific method. But the high-reputation economists don't like to admit to their mistakes, so they come up with these tortuous ex-post rationalisations for why they were right all along. And because they're intelligent, these self-justifications have superficial plausibility. They delude both the economist and his or her clients and torpedo the adaptive process.
There is a rigorous finding in social science of a negative correlation between some talent metric, like IQ, and performance, or between seniority and performance, when people are in the wrong mindset, because sub-optimality is not seen as an opportunity to grow and adapt and make deeper sense of the phenomena that one is engaged with, but as an opportunity for creative self-justification. The growth mindset isn't saying that talent doesn't matter. We want to have talented people. It's about liberating the talents to constantly find ways of improving through time, in the way that science has changed our world. Am I going off on one here? Just to take us out of the healthcare context: I spent a day at the end of October with... Does anyone know who the chief executive of Microsoft is? Used to be Gates, right? Then Ballmer. Does anyone know who the current CEO of Microsoft is? I didn't know who it was. Satya Nadella. It's kind of gone under the radar. He took over in 2014. Since then, the market cap of Microsoft has gone up by $360 billion, which is not bad, right? You'd like to think that's a big deal. I said, how did you manage to get this extraordinary change? It's a remarkable transformation. He said it's very simple. We changed the culture from a fixed mindset to a growth mindset. And he put it in these terms, and forgive the slightly naff language: Microsoft, because of its historic success, had become an organisation of know-it-alls. Everybody wanted to look like the smartest person in the room. If you want to look like the smartest person in the room, what's the last thing you want to hear about? You don't want to hear about the deficiencies in the product line. You don't want to hear about what competitors are doing better, because instead of seeing that as an opportunity to pivot onto something even better yourself, you think it shows they're smarter than us.
If you've got a manager who's getting prickly and defensive about any aspect of sub-optimality, can you see how that destroys the bottom-up flow of information that is so crucial to innovation in the high-tech age? He said we need to change the culture from an organisation of know-it-alls to an organisation of learn-it-alls, because if you're a learn-it-all, you want to know where the deficiencies are in the product line. You want to know what the competitors are doing better. You want to hear all of that rich information, because you see success not as static, not as having to defend the status quo, not seeing expertise as knowing it all already, but expertise as growing dynamically through time. And when you're in an area which is involved in quantum computing and mixed reality and deep neural networks and machine learning, you've got to have that attitude of adaptation. You've got to be prepared to make those changes and not to get defensive when somebody, a junior nurse say, says maybe you could be doing this better.