Good morning. I couldn't write too much on this small piece of paper, but thank you, President Berwick, for your comments. Over the last 10 years, cybersecurity has risen as an imperative for patient safety across the healthcare ecosystem. In the last few months, weeks, days, minutes, as Mike said, artificial intelligence, built on a platform with all the predictive analytics systems we have put in place over time, has risen in much the same way cybersecurity did, as a requirement. But how does it affect us? Does it sustain patient safety? Can it improve patient safety? And how do we mitigate the risks of AI using the predictive analytics data that we get from all of our systems? So, with gratitude, we have a great panel here to provide a multidimensional perspective on predictive analytics and AI. We'll just go through each panel member, you guys riff and give us your perspective, and then we'll take some questions at the end. So let's see.

Thank you. Thank you so much. AI and ML have the potential to make a positive impact toward zero harm, if done intentionally. And what do I mean by intentionally? In the design, making sure that we've got the right, diverse stakeholders at the table; in the testing, making sure that it's tested on the right data; and in the implementation, making sure that there's continuous monitoring and accountability for the results. If not done intentionally, it can be one of the greatest threats we have to patient safety. And one of the greatest threats we are seeing today when it comes to patient safety is the disparities in healthcare that have been growing and getting exacerbated over the last several decades. So here is what good looks like when we take a patient safety issue, like being able to provide innovative therapies to everybody equally, and actually leverage AI and ML. At Oracle Health, we have a learning health network with over 100 healthcare delivery systems and over 100 million patients. Leveraging algorithms, we're able to identify the patients within this database who could benefit from innovative clinical trials, whereas before they would have been left without any options. As a result, we're bringing three times the diversity to clinical trials compared with what we see in national clinical trials. We have King Faisal in the Middle East, which leveraged algorithms for sepsis in pediatric patients and was able to reduce pediatric mortality from 29% down to 17%. We've got Fort HealthCare, which, leveraging algorithms, was able to reduce the number of prescriptions for opioids by 53%, and for benzodiazepines, another addictive medication, by 49%. So algorithms set up with intentionality have tremendous power. We have to take on that responsibility and accountability to make sure that we set them up in that way.

If I may ask a question: how many of you have gone to ChatGPT and asked it a question? Were any of you confused about whether a person did or did not write the answer? The reason I pose that question is that, in preparation for this panel, I thought, well, I'll try to become a modernist. So I asked ChatGPT how AI could influence and promote patient safety. Unfortunately, ChatGPT came up with, in seconds, a rather interesting two-page dissertation, but it did not provide the insight that I believe is going to be important as we move forward to harness the power of artificial intelligence.
I may be the contrarian in the group, but I'm convinced that we have to integrate ChatGPT into the culture of the delivery of healthcare, not only in its technical aspects but also among the users of artificial intelligence, who will be the members of the healthcare community and its patients. We have to create the environment in which we trust and implement appropriate algorithms of care and truly believe in them. And I would agree that algorithmic care, as we've heard, works. The recent COVID crisis actually gave us a stimulus. Through that time of confusion and hysteria, we ended up practicing more sensible care across this nation, more standardized care. And as clinical trials tried to tell us what drugs would and would not work, we actually ended up with more of a continuous quality improvement movement rather than relying on randomized, double-blinded clinical trials. The standard of care became the platform. The deviation from the platform became the experiment. The ability to analyze the results depended upon sharing the information from multiple institutions, and that enabled us to decide early on which drugs worked. Within the United Kingdom, dexamethasone became an obvious answer pretty quickly. Within the United States, within the first 100 days, despite its well-understood or at least theoretical advantage, we understood that hydroxychloroquine was probably not a standard of care. So we have succeeded with the combination of platform, artificial intelligence, and rapid interpretation of freely diffusible data, and I think the future is very bright. But we need to understand that the culture of the organizations, the commitment of the personnel in those organizations, and the partnership with the patients they treat are ultimately going to be continually important.

Thank you, Phil. I suppose, really, I would ask: what are you trying to get predictive analytics and artificial intelligence to do? Are you trying to get it to help you make a decision, or are you trying to get it to do your job for you? There's a fundamental difference between the two. We talk about new treatments all the time, but if you look at outcomes, if we did the things we already know well, we would improve outcomes far more than any new treatment that will come along in the next 10 years. I'll give you an example. Can you imagine a pilot flying a plane who says, my altimeter doesn't work, the weather is bad, but hey, I'll just give it a go? Yet every day we have people in hospitals who will use equipment and data that are less than accurate, and base decisions on that, because they think it's the best they can get. I'm an intensivist. I'm a neuroanesthesiologist. And I can tell you that less than 10% of people undergoing anesthesia or sedation in critical care have their brains monitored. But there's plenty of evidence to show that if you don't monitor the brain, those patients suffer. And we still shrug and go, hey, it's just the way it is. I think what COVID did is create a burning platform for a lot of healthcare systems. It wasn't intentional that they wanted to send patients home and look after them remotely; there was no other option. And I think that's a bad way to practice medicine. To wait for a disaster to happen before you look at your safety, to wait for a disaster to happen before you implement a safety system, is retrograde.
So I think what we need to do is get the predictive analytics, artificial intelligence, IT system to help us make the right decision ahead of time, not take the decision out of our hands. Far too often we expect somebody to make the decision on our behalf because we're not quite sure what the right decision is. The slogan says do no harm. It doesn't say do nothing. And that's the bit that we really need to follow.

That's great.

When I think about the coming prospect of artificial intelligence and what we've learned with machine learning, I wonder how we will apply it. I'll ask a question too, but I'll make it rhetorical; we don't need a show of hands. If you work at a hospital, if you work at a healthcare system: what is your number one patient safety issue? What is your most frequent temporary harm event? What is your most frequent high harm event? And then think about where that data comes from. Is it from voluntary event reporting? At most of our hospitals, that's where we rely, that's where we gather that information. When I think about President Berwick's charge of addressing one in four, and that is accurate, one in four patients have a harm event while they are an inpatient, does that match the data that you have at your hospital or your hospital system? I bet it doesn't. No. So before we ask AI to tell us how to fix patient safety, we need to measure patient safety. We need to understand what heroes like Ruth Ann Dorrill at the OIG have told us for years now: that we have a bigger patient safety problem than we even realize today. The research is way ahead of operations in hospitals today. We've got to get our hospitals, and again, thank you, Dr. Berwick, for charging us to do that, to keep up with the research that exposes these patient safety problems so that we can prioritize them, understand them, and craft improvement techniques and teams that will go and help address those problems. So before we apply all of these novel learning techniques, I hope we will take a deeper look at what our actual safety problems are, because that's just not exposed today in a common way. You said earlier that you would be a contrarian; you have another contrarian here, I believe. We've got a lot to do, and I sure do wish we would hear our current president or current leaders charge us in the way we heard earlier. I think that would be incredibly beneficial.

Building on what has been said, I want to move a little bit toward how we can really get something actionable for patient safety out of the potential that exists in predictive algorithms and AI, for we hear, rightfully, concerns. And as users of AI: many of us came here by plane. None of us, I assume, would board a plane with a broken autopilot if we had the choice of one where it's working. We trust that AI works there. It's an autopilot, for heaven's sake; I prefer the one that works. But in healthcare we are conservative, and putting new technologies, even evidence-based practice, into the mainstream of how healthcare is done and delivered takes us years, sometimes decades. Now AI is knocking at our door, but will we wait 10, 20 years before we possibly reap the benefits? I think that would be a crying shame and for sure not what this movement is about: making it actionable. I think we can learn from other industries besides aviation, by the way. Look, for instance, at the automotive industry in Germany. I love cars.
And I learned to drive a car with a stick shift; my kids learned with an automatic. Going forward, the automotive industry is moving toward mobility, that's the endgame of it, and it will get there with assisting systems. I still want to drive, by the way, but automated driving will be here sooner rather than later. And we're being led there by accepting and learning the benefits of smart systems. I wouldn't want to lose my smart distance control or my speed control. I like very much that I get an indication, don't overtake right now, because there is a car coming up behind. I hope safety features in automotive will help us, as users of healthcare, start the same way with predictive systems, Basel, to take up your point: not taking away the decision making, but making it easier to move into accepting the benefits of clinical practice being improved with AI. And I think we need to do it step by step. Thank you.

So a couple of weeks ago, we got together and talked about our different perspectives. Half of us were in Europe and half were on the left coast; you guys got up at six o'clock in the morning. One of the things that we talked about, and I think Mike Ramsay told me this again this morning, is that China just came and asked that we develop standards on AI. And so, just as we have the worldwide Patient Safety Movement, I'd ask each of the panelists: what's your take on governance for the use of AI or predictive analytics across the world healthcare ecosystem, and is that possible? We'll start with you, Peter.

Honestly, again pointing to Joe, you mentioned this morning that open sharing of data is a prerequisite. You said, David, that we need to truly measure quality, or else we cannot improve it. I think this is a hygiene factor. We need to get over thinking that data is something private, that incidents shouldn't be reported, that we keep them under the carpet. We need to be very, very transparent, first and foremost, with the information we have: the good, the bad, and the ugly. And to really make artificial intelligence become reality, and eventually even move it one level further, into an autopilot, into automation, we need to talk about interoperability and open systems. We need to talk about bidirectional communication. Data is the foundation, but if we want to get into the practice of AI, we need to really think about how we make systems work seamlessly together to provide the clinical users with the help they really need.

And I want to push that one step forward. When we talk about data today, we have an 80% problem: the data we collect covers only the 20% of what determines someone's health outcomes, the part that happens inside healthcare delivery systems. Eighty percent of what determines someone's health outcomes is based on where they live, where they work, and their health habits, and that data today is not collected in an effective way. So when we come to ensure safety and make sure that we're caring for people from birth through the transitions at the end of life, we really need the entirety of the data. We need a platform that, to your point, is open, connected, and intelligent, but it actually needs to have all of that data, because outside the four walls of our healthcare delivery systems a lot of safety and quality issues are happening that we are not capturing, larger in number than what we capture within the four walls of the healthcare delivery system.

I wonder if I could...
I think one of the things here: you're talking about data, you're talking about data, but in fact what we're actually talking about is information, because data is just stuff that goes around and gets analyzed by clever people in the lab. What the average person understands is information. So what we need to do is turn this data into useful information that can prevent me from doing something wrong, that can help me get to the right decision. And this obsession with data: I mean, all my movement in my car is tracked. Google tells me where the road blockage is. I use my apps to tell me when the airplane is landing and which gate it has changed to. So this obsession with data security is really overblown. I don't think anybody's interested in whether I've had my hernia repaired in Hong Kong or Singapore, and yet there's more security on my healthcare data than there is on my bank account. I just wonder whether we are emphasizing the wrong things. What we need is information, so that clinicians and patients can make an informed decision, as opposed to these words, data analytics, artificial intelligence, that people just repeat. It's almost like laser 20 years ago: everybody wanted to have laser surgery and didn't understand what it was, but it was laser. Then it moved to robotics, and now we're moving to AI and predictive analytics. Genuinely, a lot of people don't understand what it means. We have to turn it into something that's understandable, that's tangible, that can make a difference to you. And that's where it is. And in terms of regulation, I'm sorry, but international regulation for predictive analytics and artificial intelligence will be the same as regulating firearms: every country has a different way of looking at things, and it's impossible to get the regulation to do the same things everywhere. The way you get there is to make sure that the way you use it is beneficial to you and your patients, and then people will adopt it.

I think one of the clues may be international. I would say that if I'm flying from Los Angeles to Mauritius, which I did two weeks ago, and I also made the return flight, some of you may not wish that, but LAX and Sir Seewoosagur Ramgoolam Airport in Plaisance didn't change their locations, and they were specific. An A380 aircraft, it doesn't matter whether Emirates or Qantas or anyone else is flying it; it is an A380. So for the pilots, the standards to land at any international airport are exactly the same. To say that we cannot have standardization and interoperability with highly reliable, AI-guided systems is, I think, a fallacy. The question is, do we have the courage in healthcare to have the same rigorous standards as they do in aviation, standards that are translatable around the world? A pandemic has just proven to us that none of us is an island. It may be easier to have international standards than it will be to standardize care across Los Angeles.

It depends what you mean by standards. The landing may be standard, but the service may not be standard, the food won't be standard, the way your luggage is handled won't be standard, the Wi-Fi won't be standard; some Wi-Fi will work, some won't.

So it's not likely to kill you?

No, no, but the reality is that some things are not likely to kill you, but they're likely to damage you over a longer period. Delirium is a really serious issue, but people pay less attention to delirium than they do to mortality. Length of stay is important; infection in hospital is important.
So you can regulate, you can standardize, but you have to be able to show people what matters at their level. Infection in hospitals may be important for us in the UK or the US, but if you go to a desert area, or an underdeveloped or poor area, it may not be the first thing they think about. They think about water, sanitation, maybe food. So the question of standardization is important, but regulation is different to standardization.

Can I take a step back to what I think is a little bit more of a basic concept? Maybe I'm speaking to myself as a patient safety researcher, and to other patient safety researchers who may be here, but the standardization I hear you all speaking about is: when this happens in the clinical scenario, the next step is that; evidence-based practice. One of the things that I think needs to happen in the patient safety research world is this: we don't use terms the same way. Our vernacular is applied in different ways all across the world. What is an adverse event? There is not one common definition of an adverse event in the United States, certainly; I don't think across the world either. Preventability, same issue. We heard a little bit from Dr. Berwick about the bias of the information that we receive. We know that reported events are biased. We know that disadvantaged groups have a higher incidence of safety events. So before we start regulating, or at least concurrently, we need to make sure that we're speaking the same language, that we have a harmonized patient safety language. Right now we don't.

So do we take the reverse Pareto principle that you brought up, and then standards, and what we define as standards, then taxonomy, and then utilization: do you drive a stick, or do you let your Tesla do the fully automatic driving? Probably not right now. But thank you for that dialogue.

Can I add a little bit to that? We talked about regulations, we talked about standards. One of the things that we've realized in AI and ML is that just because we have standards, the output that you get isn't always regulated. You can have a particular output which in one population actually gives you positive results and in a different population actually gives you negative results. So I would say that, in addition to regulation and standards, there has to be a continuous feedback loop, because in the dynamic environment of healthcare these algorithms are dynamic and live, so you have to continue to feed back and continue to improve them. I wanted to add that on. The other piece, outside of the regulatory part: I want to go back to a point you made when you were talking about data and information. You kept referring back to "I," which is the consumer. In healthcare, we've done a very poor job of engaging the patient and the consumer. And I think, as Don pointed out, there's a huge gap in what we need to do to actually be able to engage them. Having information should also play a significant role in how the consumer engages with healthcare to ensure safety. When I was a hospitalist practicing at a tertiary, quaternary academic medical center, I would look at the labs for my thrombocytopenic patient, and the platelets were not back yet, and we didn't know if we were going to do a transfusion or not. Then I would go into the room and the patient would say, hey, I'm looking at my labs, my platelets are low, how much are you transfusing me today?
That enabled us to actually give them their transfusion several hours earlier than would have happened if I had had to wait until the end of my day, and that prevented a spontaneous bleed in the patient. So having algorithms, but also that close partnership with the patient to be able to ensure safety, is, I think, going to be a critical part of this.

I'm going to check the Slido to see if we have...

I think that's right, but the reason the patient checked the platelets... the system shouldn't need the patient to check the platelets.

No. There are multiple paths, right? I understand that. There's the physician, there's the nurse, there's the automated system, but there's also the patient. They should have the right to do that.

Absolutely. But there's a difference between having the right to do that and having to do that because somebody else hasn't done it. It has to be quite clear that what we're doing is...

That's the point.

...not putting the emphasis back on the patient to say, hey, tell me when things aren't right. When I go to my lawyer or my accountant, I expect them to tell me, this is what I suggest you do, not, hey, what do you think you want to do? Which most of them do anyway. But the reality is that you need to be able to arm the patient with your advice and your experience so that they can make the informed decision. Not to say, what would you like to do about your chest X-ray? What would you like to do about your CT scan? That would be unfair, I think. And I think that's what's scary about a lot of remote monitoring and remote patient reporting: the patient feels, I'm alone, I'm going to have to make decisions. What we need to do is reassure them: you're not alone, we're looking after you continuously. In Saudi Arabia in particular, when they introduced remote patient monitoring for patients at home, they found that customer satisfaction went up by 50%. Why? Because patients were involved in their care, but they weren't left alone. So instead of saying, you're going home a day early, you're saying, you're going home a day early, but we're going to keep an eye on you for another three or four days because we really care about you. That changes the narrative. And I think that's what we need to do: get patients to be comfortable that we're making the right decisions, or helping them make the right decisions.

I think that open, transparent data is critical for that partnership to happen. When we end up being the ones who hold all the data and the information, you can't have that two-way conversation. And to your point, there are different levels of patient engagement, but we have to provide open data to patients for them to be able to engage. In the end, this is really the basics.

But I think, to Basel's point, it's about starting with the why. Start with the why. It's the same with patient safety: we need to make the business case. And I would really take the approach of how you eat an elephant, cutting it into little pieces and going for them one after the other. Also, some of our adverse events are more amenable to being solved and addressed with predictive algorithms than others. So I think we need to be really choiceful and bring examples: success, success, success; fail fast, too, if it's not working; adapt the algorithms, totally with you, and fine-tune them in a learning way. But I think that's the way to make it actionable.
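To make that continuous feedback loop a little more concrete, here is a minimal sketch, in Python, of one way a deployed clinical algorithm's output could be checked against observed outcomes for each population it touches, so that a subgroup where performance is slipping gets flagged for review and retuning. This is an illustration only; every class, function, threshold, and subgroup label is hypothetical and does not describe any panelist's or vendor's actual system.

from collections import defaultdict
from dataclasses import dataclass


@dataclass
class SubgroupStats:
    """Running counts of predictions versus observed outcomes for one subgroup."""
    true_positive: int = 0
    false_positive: int = 0
    false_negative: int = 0
    true_negative: int = 0

    def record(self, predicted: bool, observed: bool) -> None:
        # Update the confusion-matrix counts for a single case.
        if predicted and observed:
            self.true_positive += 1
        elif predicted and not observed:
            self.false_positive += 1
        elif not predicted and observed:
            self.false_negative += 1
        else:
            self.true_negative += 1

    @property
    def sensitivity(self) -> float:
        # Fraction of true events the algorithm actually caught.
        denom = self.true_positive + self.false_negative
        return self.true_positive / denom if denom else float("nan")


class FeedbackMonitor:
    """Tracks how a live algorithm performs in each population it serves and
    flags subgroups whose sensitivity falls below an agreed floor, so the
    algorithm can be reviewed and retuned rather than left to drift."""

    def __init__(self, sensitivity_floor: float = 0.80, min_cases: int = 50):
        self.sensitivity_floor = sensitivity_floor
        self.min_cases = min_cases
        self.stats: dict[str, SubgroupStats] = defaultdict(SubgroupStats)

    def record_outcome(self, subgroup: str, predicted: bool, observed: bool) -> None:
        self.stats[subgroup].record(predicted, observed)

    def subgroups_needing_review(self) -> list[str]:
        flagged = []
        for subgroup, s in self.stats.items():
            cases = s.true_positive + s.false_negative
            if cases >= self.min_cases and s.sensitivity < self.sensitivity_floor:
                flagged.append(subgroup)
        return flagged


if __name__ == "__main__":
    # Hypothetical stream of (subgroup, model prediction, observed outcome).
    monitor = FeedbackMonitor(sensitivity_floor=0.80, min_cases=2)
    events = [
        ("pediatric", True, True),
        ("pediatric", False, True),   # missed case
        ("pediatric", False, True),   # missed case
        ("adult", True, True),
        ("adult", True, True),
    ]
    for subgroup, predicted, observed in events:
        monitor.record_outcome(subgroup, predicted, observed)
    print(monitor.subgroups_needing_review())  # ['pediatric']

The design choice mirrors the panel's point: the loop does not retrain or act on its own; it only surfaces where a live algorithm is underperforming for a particular population so that people can decide how to adapt it.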
Not trying to solve world hunger and world peace, as much as we'd like to, but maybe going after delirium, maybe going after acute kidney injury. Let's focus on a couple of distinct examples and really make the case there. I think this is what will drive adoption and change.

World hunger and world peace by 2030. Yeah. So, we've received some questions from our esteemed audience here. I'll read the questions and then you can start with the answers. So: we failed to use technology to our benefit with the electronic health record. What makes the use of AI a better experience?

Oh, that's a great question. That's a great question. I'm old enough to have both trained and practiced on paper and pen and to have gone through the transition from analog to digital, to electronic health records. And I remember distinctly, when we implemented electronic health records, that the promise was all the stuff we're talking about: higher quality, safety, open data and transparency. And we have really failed to do that. Our electronic health records have become, for the most part, a static repository of lots of information, much of which we can't access effectively. We're going to need to change that. The two largest electronic health records around the globe run on four-decade-old technology. We're working on modernizing one of them so that we can actually leverage the data effectively. We have to modernize the technology across the board so that the data can actually be usable. To me it goes back to intentionality. If we keep that purpose clear in terms of what we're trying to do as we transition our electronic health records, and we have better data, then on that we're going to build better algorithms and be more effective. But we have to keep each other honest. We have to keep coming back to that and continue to iterate on that process to really get the results that we want in quality and safety.

To add to that, I think this is where we also need to align government and healthcare incentives. We were forced into electronic health records before they were ready for prime-time use, because meeting that standard became a fundamental condition for reimbursement to hospitals. And we actually used your 40-year-old technology; better stuff was available, but we weren't given enough time to implement it appropriately. When regulation outpaces the ability of technology, we lose.

I think electronic health records were invented for billing purposes to start with. They weren't really analytic systems. They weren't designed to look at outcomes. They weren't designed to teach people anything. And so we have an electronic health record that does exactly that. But you also have to remember there's a learning curve, and implementing a system like Epic or Cerner takes so much effort that once it's just in, everybody goes, great, it's done, now it's working, people have stopped complaining. So it's about moving beyond that. And again, it's no different to remote patient monitoring. We have this idea of digital inequality, that by introducing digital technology you may disadvantage some people who don't have the ability to use it. And you go, well, actually, we take care of 80% of the people that way, and we concentrate on the 20% who can't use it, so we give them more care.
So the reality is about creating a system that is fit for purpose. Electronic health records are not analytic systems and they're not artificial intelligence, and we need to adapt them to do that.

Charlie, I might take it in a little bit of a different direction, rather than being the patient safety fundamentalist I've been in my earlier responses. I think we can adapt, and we have examples of areas where there's a lot of success with feedback loops. Treating type 1 diabetics with continuous monitoring and continuously adjustable infusion rates is a great example, right? I'm a pediatric intensivist, and several of us up here are intensivists. I would love to see the day, hopefully not very far off, when our ventilators can keep up with patient improvement once patients start to improve. We've got continuous oxygenation. We've got continuous ventilation parameters. Those should be driving our changes. One of the things that happens in both of the ICUs I've worked in is that many times, when a child is better, we don't keep up with their improvement; therefore they stay on the ventilator longer, and therefore they're at more risk. I think we can apply those types of things, and that doesn't seem to me to be insurmountable even today. Those are the kinds of things I think we could do quite readily.

David, you just emphasized again my plea for open communication between systems and devices. That's absolutely the way to go to make your dream become a reality, not years out but hopefully a lot sooner. But to come back to the original question a bit: what failed in the EMR implementation, and what will be different around AI? I think we need to show the success, the why, early on, and then also bring the users along. What's in it for me? What's in it for a nurse or doctor to use an electronic record? I'm not sure we have really answered that question successfully. So I think we need to clearly show what's in it for the patient, for the outcome, but also for the user.

Some of the questions that we're getting from our colleagues out there align perfectly with what you've all presented. I'm just going to pick one here: how can a healthcare system improve the quality of the data collected? I think we've talked about that. That's really good. And this is the social one: patients need to have full access to their data, which is the 20/80 perspective, yet we need to be careful with the do-your-own-research perspective too. So, as physicians, what's your perspective on having our patients and families more empowered with more data, as long as the data is good? How do you balance that out?

I'm always so appreciative. I can remember back when the internet was first coming to be, and patients were looking up their own diseases, or their children's diseases in my case, and at first it was threatening. But very quickly I started to feel that we all needed to embrace that, to be asked, hey, Dr. Stockwell, I just read about this condition, is this something that would help me? And as I embraced it more, and I've had a child who's also had some serious illnesses, you think, of course that's where we have to go. We have to embrace those high-level conversations, because everyone will benefit from that.

You know, when Joe and the team started the movement, the data highway and the sharing of data, it's improved over the last 10 years, but it's still not where it's supposed to be.
You know, Cerner was one of the first; Philips and GE and a bunch of other companies stepped up to sign the open data pledge. We're still a long way from that, but it has gotten better. Having been on the administrative side, looking for interoperability with medical devices as well as the sharing of data, bringing that forward so it's front-facing with the patients and their families, it's a move. Where do the algorithms, or some of the AI components, accrete into that world now too? Do you see that from your perspectives? We have a minute and three seconds to answer that.

I think, as you say, we have to adopt these things. We all Google: what should I do when my BMW doesn't start, or when my iPhone does this? We all do that. So the reality is that people Google, but what I say to them is, yeah, sure, you can Google that, but based on my experience of 20 years in critical care, this is what I would do; if you want to choose the internet, fine. We have to remember this isn't karaoke; you can't just choose the way I treat you. You can choose to be treated or not, but ultimately we have to do what we think is right, taking the patient's opinion into consideration. I wouldn't do anything wrong, or do a treatment that I don't think is beneficial, just because the patient wants it; that wouldn't be something I would do. So I'm not scared of people looking things up, but I'm also entitled to have my own views on things. It's always good to have that conversation, I think.

And the conversation helps. The example we had from Martha: if there had been a culture of open dialogue, right, that's what prevents these cases, and we need to be open as physicians to have that conversation.

Is that why they call it OpenAI now? Open-ended, yeah. So, we're done. Thank you, panel members. It says, Charlie, throw to a 10-minute break, so I'm throwing everybody to a 10-minute break. Thank you, thank you, guys. Thanks. Great. Thank you.