Questions in the chat box on Webex would be perfectly fine; we'd love to have you share them that way. We will open it up at the end of the 40 minutes with Kristin and Sasha and see if we can let people speak, but sometimes it gets pretty noisy, so I'd encourage you to use the chat feature if at all possible. Moving on. So, the Patient Safety Movement Foundation has a very bold and audacious goal of zero preventable deaths by 2020. And yes, we realize it is 2019. We're getting close, but we believe that zero is the only acceptable goal to have. And how we work to get closer to zero is by fostering new efforts and building onto existing patient safety programs through commitments to zero. So we're a commitment-based organization that is excited about new innovations related to patient safety, but we also want to celebrate what is already happening that has been successful. Moving on. So, we work with a few groups of people; a few stakeholder groups are super important to us, and we encourage action, because without action we can't get to zero. First, we encourage hospitals and healthcare organizations to make commitments. These are an online web form where they share what they're doing to improve patient safety within their four walls or, if they're a hospital system, within their system. And so we encourage them to share details about their process for reducing infections or maternal complications, and if they're able to, we ask them to share their lives saved. The second group that we work with are committed partners. These are nonprofit professional societies, associations, other nonprofits in the patient safety realm, and advocacy groups. And we know that there are a whole bunch of other groups besides us that are waving the flag of patient safety, especially during Patient Safety Week. We know that there are so many other groups out there that we can count on to help us get to zero, and so we come up with unique ways for each group to partner with us. We call these commitment to action letters, and we have about 65 of those to date. The third group that we work with are healthcare technology companies, and we believe that this is a really unique part of our stakeholder group. We ask these companies to sign an open data pledge, which says that they will not knowingly interfere with, block, or charge on top of what their product is purchased for, in the name of patient safety, in order to improve interoperability and data sharing. And then the fourth group is patient and family advocates. Patient safety is all about patients as partners, and so we do a few different things. We allow people to share their stories so they can inspire change, and we have also created resources, and found wonderful resources, through these people who have had these incidents and experiences in the hospital. So those are just a few ways that we encourage all these organizations to take action and really become a huge part of our movement. Moving on. Our actionable patient safety solutions, I mentioned these. These are evidence-based best practices that we've created since 2013. To date, we have 18 overarching topics, but when you look at all the subcategories underneath each one of these, we actually have 34. So it's quite robust, and we believe that if hospitals implement these actionable patient safety solutions, they can get to zero, hover around zero, and maintain zero.
So, while we don't have an APSS on human factors, human factors leads into each one of these topics in many ways, and so we felt that it was important to have a webinar and a workshop today to focus on the human factors of patient safety. If you'd like to check those out, they are on our website. Just to share a little bit of good news, our impact to date has been growing steadily since we started in 2013. This is just a graph showing you our impact to date related to the hospitals that are in our network that have committed to getting to zero. Over the last year, we had about 150 or so new hospitals join us. That was a lot smaller than in previous years, but that's because we really focused on getting those 4,598 hospitals to update their commitments and commit to more. So we focused more on the quality, not the quantity, over the past year. Moving on. And those 4,710 hospitals, over the last year, saved 90,146 lives. This is self-reported data provided by hospitals, but many of these hospitals have provided us with really robust methodology on how they're measuring the improvements that they've been implementing in their hospitals. So we're really excited to have seen the steady progress over the last seven years, and we hope that that number goes up a lot more in the next year. Next slide.
So with that, that's just a brief introduction to the Patient Safety Movement. Again, we're really excited to have Kristin and Sasha. I'm going to introduce them both right now, but they're going to split up their slides. To introduce Kristin Miller: she's a scientific director of the National Center for Human Factors in Healthcare at MedStar Health and associate professor at Georgetown University School of Medicine. Kristin is a clinically oriented human factors researcher focused on medical decision making and behavior, informatics, and the assessment of medical interventions and practices, with an emphasis on usability, human error, and patient safety. So we welcome you, Kristin; thank you for being here today. Our second presenter is Sasha Burns. She's a human factors specialist, also at the National Center for Human Factors in Healthcare at MedStar. Sasha has worked in human factors and ergonomics consulting for the last decade, assisting organizations with applying customized human factors and ergonomics strategies to improve safety and efficiency. Her research is focused on optimizing caregiver workflow and improving patient safety through the integration of EMRs. So with that, I will turn it over to Kristin.
Thanks so much, Ariana. This is Sasha. We're actually going to flip our order and have myself go first, and Kristin will be our second speaker. Sounds good. As Ariana introduced, my name is Sasha Burns. I'm a human factors specialist with the MedStar Health National Center for Human Factors in Healthcare, and my presentation today, with the time that we have, is going to be an introduction to human factors: Human Factors 101, Engineering the Future of Healthcare. To let you know about my background and why I speak about human factors: my education is solely in the realms of human factors, ergonomics, and biomechanics, and I'm very involved with our education efforts here at the National Center for Human Factors in Healthcare. I also try to be engaged in a number of different outreach opportunities, including the Board of Certification in Professional Ergonomics, the Center for Health Design, and the Human Factors and Ergonomics Society.
I sit on the National Ergonomics Board of the American Industrial Hygiene Association, the Curriculum Committee through PSMF, and I'm adjunct faculty with Kristin at the Catholic University Department of Biomedical Engineering. To give you some background about why Kristin and I are speaking today and what our center does: our center, the National Center for Human Factors in Healthcare, sits within the MedStar Health Institute for Innovation. We started in 2010 as a team of only two people, but we have grown dramatically over the years, and we're now an operation of more than 30 people with a highly multidisciplinary team: specialists in everything from human factors to health equity, clinicians, usability specialists, environmental design specialists, and safety specialists, a team that allows us to achieve our multiple goals. And what we do at our center is four main things. First, a significant amount of our work is funded through contracts and grants from government, foundations, and industry, and we like to be able to publish, deliver presentations and interventions, and, being located in Washington, D.C., provide recommendations for healthcare policy. We have a division that focuses on usability services, which works both within the MedStar system and externally to help our healthcare clients. We have a team that focuses on safety integration, making sure that we have human factors representation and considerations on safety consults and serious safety event reviews. And then one of our other large efforts is toward education and outreach. That includes webinars like the one we're speaking on today; we teach a number of academic courses in human factors in healthcare, as well as host a number of internal and external education efforts, talks, trainings, and workshops. As I think you'll see today, Kristin and I are both very passionate about human factors in healthcare, and we like being able to talk about and disseminate what human factors is to as many people as possible.
As a matter of introduction, when we're talking about human factors, this is what we're talking about, and this is really one of our taglines: we don't redesign humans, we redesign the system within which humans work. We are not trying to turn people into robots. What we are trying to do is make it easy for good people, for humans, to go to work, to make it easy to do the right thing and difficult or even impossible for them to do the wrong thing. With a broader lens, what human factors is actually about is understanding information about human behavior, abilities, and other characteristics, and then applying that to the design of tools and machines and systems and tasks and jobs and environments. And when we do that successfully, it ends up improving productivity, safety, comfort, and efficiency for the humans in that system. We're really focused on designing for the person. In the realm of human factors, some of the work we do is in the physical realm. If you'll take a look at the picture on the left, this is an effort to use anthropometry, anthropometric data about the measures of different sizes of people, to design a touchscreen interface. Here, to design for as many people as possible, about 95% of the population, we can look at the smallest users, about a 5th percentile female, smaller than all but 5% of women in her population, and a 95th percentile male, larger than all but 5% of his population, and design for the overlap so that these tools are physically usable by as many people as possible.
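To make the percentile arithmetic behind that touchscreen example concrete, here is a minimal sketch in Python. The overhead-reach means and standard deviations are invented placeholders, not real survey data; an actual design effort would draw on a published anthropometric database, and scipy is simply one convenient way to get normal-distribution percentiles.

```python
from scipy.stats import norm

# Illustrative (made-up) overhead-reach statistics in cm. A real design
# would use a published anthropometric survey, not these numbers.
female_mean, female_sd = 185.0, 7.0   # assumed female overhead reach
male_mean, male_sd = 205.0, 8.0       # assumed male overhead reach

# 5th percentile female: only 5% of women have a shorter reach.
p5_female = norm.ppf(0.05, loc=female_mean, scale=female_sd)
# 95th percentile male: only 5% of men have a longer reach.
p95_male = norm.ppf(0.95, loc=male_mean, scale=male_sd)

# A control placed no higher than the 5th percentile female's reach is
# reachable by roughly 95% of the combined population, which is the
# "design for the overlap" idea described above.
print(f"Highest usable control height: {p5_female:.1f} cm")
print(f"Design envelope spans {p5_female:.1f} to {p95_male:.1f} cm")
```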
In human factors, we also work largely in the cognitive realm. If you'll take a look at the lines of text on the right there, "I am sans" and "I am serif," we often ask people which they think is going to be read faster, the sans serif text or the serif text, and despite what you may think, it's actually the serif text that reads quicker. The reason is that the serif markers on the serif text, circled in this image, are trained into our brains to represent the letters, so we don't even need to read the entire letter before our brain understands the significance and the representation of the serif, and we can read that text faster. So in the world of human factors, we largely work to combine efforts in the physical realm and the cognitive realm to make spaces more comfortable, more safe, more efficient, and more productive for all users of that space or tool or environment.
Now, what I'd like to do in my portion here is give you a broad view of what human factors is, with an application to healthcare. But the way I'd like to frame this is actually to have you think for a minute about the airline industry, and specifically what we think about, or what we read in the media, once there's been a plane crash. You may have heard the phrase that it was pilot error. We've probably heard the phrase user error very many times, especially when there are large, significant safety events in the national news. But it's maybe with a bit of misunderstanding that the media uses phrases like human error. If you think about your own lives, think about a door, a door that has a handle on it. Largely, when we see a door with a handle on it, it's an indication to pull rather than push. Yet how many of you have ever pushed a door where you were supposed to pull, or vice versa? Is that an example of human error, or is there perhaps some design of the system that encouraged us to behave in a certain way? Think again about a stove, especially a stove you're unfamiliar with. Have you ever turned on the wrong burner, thinking that the controls were mapped to different burners? Is that again user error, or is that again something in the design of the system that could be influencing or encouraging us to work and act in a certain way? When we think about the airline industry, if you're a nervous flyer at all, it may comfort you to know that plane crashes and these sorts of safety incidents usually don't happen because of one single event. Usually there are about seven different things that have to go wrong, in sequence or in parallel, for something catastrophic to happen. And that's exactly what I'd like to talk about today, because the same thing is true in healthcare. When we apply a human factors lens to healthcare, and especially to patient safety in healthcare, we're largely not looking for user error or human error or one sole cause.
We're looking at all the different factors of a system that could influence users' behavior, their perceptions, and their actions, so that once we understand all of these influencing factors and sources of hazard, we can work to systematically design them out of the system. The way that I'd like to talk about this is by what I've shown on the screen here, introducing you to the Swiss cheese model of causation. The basic principle is that all of these slices of Swiss cheese are potential sources of hazard, and if there are just enough holes aligned in just the right way that the hazards can make it through all of them, that's how we get to an accident. However, if the safety barriers actually are working, yes, there may still be gaps in some of our systems and hazards may still get through and occur, but they're not going to all occur in succession in a way that leads to an accident. This is when safety barriers work. Now, what are these different layers and different possibilities for hazards to occur? We have the environment, the policies, the institutional culture, the teams, the individual, and the equipment. And what I'd like to do for the rest of my time is walk you through some examples we see in practice and in research of where human factors can help us explain where these hazards come from and help us design them out.
I'll give you an example right here for interruptions and noise. In this study, the difference in percentage of error between the uninterrupted condition, the control condition, and the interrupted condition, when there was an interruption of a few seconds, was a ten-fold increase. What this means is that just being interrupted in a task for a few seconds can increase the error rate ten times. If we extend this further, so that it isn't just an interruption of a few seconds but an interruption length of up to 60 seconds, the error rate rises 30-fold. If you think about the work that needs to be done in healthcare, there are constant interruptions, whether from coworkers or patients or families or alarms or any manner of things. The more interruptions there are, and the longer the interruptions are, the more likely a user is to make an error. By understanding this about the environment, we can work to design these sources of error out and create a safer system for everyone.
Another example I'll share is the influence of an organization. We know that organizations influence behavior, and we know that organizations are largely made up of good people just trying to do the right thing and do their job. Accidents are rarely preceded by bizarre behavior; people act within their cultural norms. When organizations have policies, enforce those policies, and influence their workers' behavior, it's much more likely that the entire system is going to be safer. And in addition to this, in human factors we look at the culture of the institution: not just what the policies of the organization are, but how the organization adheres to those policies and behaves within them. I'll give you another example from healthcare here: double checking and inspection. We know that double checking is a standard practice in so many areas of healthcare, but there's actually very little evidence to support its efficacy.
Here are a few examples of where errors can occur when inspection is the only thing relied upon. First, if one person makes a mistake, it's entirely possible that another person is going to make that same mistake. Second, diffusion of responsibility: the phenomenon where a person is less likely to take responsibility for an action or inaction when others are present. So again, errors can pass through. Third, confirmation bias: the tendency to search for, interpret, favor, and recall information in a way that confirms your pre-existing beliefs or hypotheses. And fourth, deference to authority: yielding or submitting to the judgment of a recognized superior. For all of these reasons and more, we can't depend upon inspection and double checking as the sole mechanism to reduce safety events and improve safety in healthcare. And here's a further example of that. This picture shows the general manager of a linen company with surgical tools that were found on the sort line by employees, where linen from healthcare facilities is sorted. The linen facility staff work on the assumption that all of the laundry they receive is free from debris, even needles. But we know that's not the case, because we can't rely on the power of human inspection alone.
Let's take a look at some of the possibilities for hazards and error related to the team. First, we know that in healthcare, multitasking is a very common part of the job. In fact, in an emergency department it's common to see an average of about nine multitasking events per hour, but on hospital wards that number jumps to 17 multitasking events per hour. And the reason this can be a human factors hazard is that there's actually no such thing as multitasking. It's not possible to focus on and complete two tasks at the same time; it's not really multitasking. What we would call it instead is tasks in parallel, or an interrupting task. And we know that when interruptions happen, they can increase the user's risk of committing an error. It also means that it takes longer to complete a task, the user is under increased stress, and there are possible memory lapses, all of which can lead to subsequent errors in actions.
Now, one of the things we try to get people to acknowledge through human factors is that humans are fallible and error can be predictable. Let me show you a few examples of where error can be predictable. First, in this study, 70% of VA root cause analyses cited communication failure as one of the contributing factors for an adverse event or close call. We know communication in teams is still very frequently cited as a contributing cause of serious safety events. We know, again, that stress and fatigue lead to errors, and we can start to predict where errors might come from when our staff are under high stress or fatigue. In this study, the risk of errors almost doubled when nurses worked more than a 12-hour shift as opposed to a shift of less than 8 hours. Stress and fatigue can be provoking factors for error. We know that shift turnover can lead to more errors; handovers can lead to a higher risk of error for the patient. And high time pressure can also be a provoking factor for errors. Once we acknowledge that humans are fallible and even the best humans make mistakes, we can start to understand where errors might occur and redesign systems to design them out.
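To see how the figures above compound, here is a back-of-the-envelope sketch. The baseline error probability and task count are invented for illustration, and treating the interruption and shift-length multipliers as independent factors that simply multiply is itself a modeling assumption; only the multipliers come from the studies cited in the talk.

```python
# Back-of-the-envelope model of how the cited multipliers compound.
BASELINE_ERROR_RATE = 0.001  # assumed: 1 error per 1,000 uninterrupted tasks
TASKS_PER_SHIFT = 500        # assumed workload per shift

INTERRUPTION_MULTIPLIER = {  # from the interruption studies cited above
    "uninterrupted": 1,
    "interrupted a few seconds": 10,
    "interrupted ~60 seconds": 30,
}
SHIFT_MULTIPLIER = {         # risk roughly doubles past a 12-hour shift
    "<8-hour shift": 1.0,
    ">12-hour shift": 2.0,
}

for interruption, i_mult in INTERRUPTION_MULTIPLIER.items():
    for shift, s_mult in SHIFT_MULTIPLIER.items():
        expected = BASELINE_ERROR_RATE * i_mult * s_mult * TASKS_PER_SHIFT
        print(f"{interruption:28s} {shift:16s} -> {expected:5.1f} expected errors")
```

Even with a generous baseline, the combination of long shifts and long interruptions pushes expected errors from a fraction of an error per shift to dozens, which is the sense in which error becomes predictable.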
A couple more examples here within this Swiss cheese model of causation. The individual: this is where we often hear "user error" being tossed around. But as we know, user error isn't solely the user's fault. As I just mentioned on the last slide, we have to acknowledge that humans are fallible: even the most perfect worker under the most perfect conditions commits about five to seven errors every hour, and under stressful or emergency or unusual conditions, we see an average error rate of 11 to 15 per hour. And this is just you and me, every day. We know that performance is worse when someone's under time pressure or workload pressure, or trying to quote-unquote multitask, or doing a highly repetitive job over and over. Even the best workers are fallible and make mistakes. We can't depend on the perfection of our users, and we can't pretend that user error is all the fault of the user. In fact, one of the root causes of this error is the human brain: we have limited attentional resources, and our brain can only concentrate on maybe two to three things simultaneously. We have a limited working memory, and most people can only really remember five to seven chunks of items at a time. Yet we ask people to remember things all the time. If you're in the ED and you need to write an order for medication, there are so many other things you're doing that you might order the wrong medication. Verbal orders go in all the time; asking people to do things verbally and relying on their memory, it's very easy to make, and to anticipate, a mistake.
There's an interesting side effect of a high cognitive or mental load. In this college study, students were asked to remember either a two-digit number or a seven-digit number. The study lead then asked the students to walk down the hall to another room to complete the study, but along the way nonchalantly mentioned that there were some snacks in the hallway: there was fruit, and there was chocolate cake. The students given a two-digit number to remember chose the fruit and the cake about evenly, but the students given a seven-digit number almost overwhelmingly selected the cake. The explanation was that the higher your mental or memory load, the more likely you are to make worse decisions. Think about any healthcare worker: a job in which high cognitive loading, relying on memory, and relying on focus and concentration are par for the course. A high cognitive load can lead us to make worse decisions, and this can impact safety and care.
Very frequently, when we see safety events happening, the phrase thrown around is that we need to be more vigilant, or the policy is that we need to make sure we remain vigilant to prevent safety errors. Well, vigilance is pretty poorly understood. Vigilance refers to the ability to focus on a situation for an extended period of time, and the goal is usually detecting something, a sensory event or a signal. And again, even the best worker, the best user, has a vigilance decrement: the decreasing probability of detecting a signal the longer the time goes on. In this study, which was performed back in the Army, the subjects were asked to identify enemy submarines on a radar. What you can see here is that after the first hour of this vigilance experiment, the users' error rate was fairly low and their performance was pretty close to 100%. But as time went on, even in this study, they saw a vigilance decrement, and after the fourth hour and beyond, the users' error rate started to approach 50%. And this was under study conditions; you can imagine how this plays out in the real world, where we have time, workload, and pressure constraints. Now, in some industries we understand this well. In fact, the TSA's practice is to rotate screeners frequently, so that no one is doing scanning for an extended period of time. But in healthcare, it is common practice to have someone working in a pathology lab for eight hours whose sole job is to double check. We don't yet see this understanding being fully executed in healthcare.
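The decay just described can be caricatured with a simple curve. The functional form and rate constant below are assumptions chosen to pass through the two data points mentioned, near-perfect detection in the first hour and detection approaching chance by hour four; this is an illustration, not the study's fitted model.

```python
import math

# Toy vigilance-decrement curve: detection performance holds near 100%
# for the first hour, then decays toward chance (50%). The functional
# form and the constant K are assumptions picked to roughly match the
# two data points cited in the talk.
K = math.log(10) / 3.0  # decay per hour after the first hour (assumed)

def detection_probability(hours_on_task: float) -> float:
    """Assumed probability of catching a signal after a given time on task."""
    if hours_on_task <= 1.0:
        return 1.0
    return 0.5 + 0.5 * math.exp(-K * (hours_on_task - 1.0))

for hour in range(1, 9):
    print(f"hour {hour}: {detection_probability(hour):.0%} detection")
```

Rotating people off the task, as the TSA does with its screeners, effectively restarts this curve near its top, which is why rotation is a design intervention rather than a vigilance exhortation.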
The last thing I'm going to do before I turn it over is talk about the equipment. We've looked at factors of the environment, the organization, the culture, the team, and the individual, and Dr. Miller is going to talk a little bit more about equipment next. But let me ask: if we think about the design and signage of our equipment, think to yourselves how many of you have ever purchased the wrong grocery item because the labels look similar. On the screen here, all of these different cans of tuna are different, but they look very similar. And to yourself, you don't have to say it aloud: how many of you have ever unintentionally walked into the wrong restroom? If you were to see a sign like this one, it's not hard to see why. If you pay close attention, the arrow pointing toward the word "men" is actually pointing toward the female figure, and the arrow with the "women" text is pointing toward the male figure. This signage does not lead us to the right solution. And this is a picture that was taken at LAX Airport. The text here does say "women," but the black text is obscured by the blue figure of the woman, and it's very easy to quickly read this as "men." Those are all in good fun, but what would happen if you confused the following products? EpiPens, oxygen and air, potassium chloride and sodium chloride: these sorts of errors, which have their roots in the design and signage of the equipment we're using, can significantly impact patient safety.
So the last thing I'll say here: strategies for supporting human performance. Whenever possible, try to decrease distractions and decrease cognitive loading. Reduce chances for confusing signage and confusing design. As much as possible, make sure that the status of the system is visible, so we don't have to rely on a user's memory or assumptions, and make sure that the execution of any action is visible so the user understands the status. Consider the design of an environment, a tool, a system, a culture, to make sure that cues and reminders are part of that culture and part of that design. And consider the use of warnings or alerts as part, not the only part, but part of these strategies for supporting human performance. So before I hand it off to Dr. Miller, let me say again: I encourage everyone on the call to think about and understand that humans are fallible, but this fallibility has been well described. We know that humans cannot be fully vigilant, and we know that humans cannot rely fully on their memory, even the best workers under perfect conditions, so we have to redesign the systems to make it easy for our good workers to do their work: easy to do the right thing and harder to do the wrong thing. As a next step, I'm going to hand this over to Dr. Miller. Dr. Miller, take it away.
Thank you, Sasha.
So my presentation today is Human Factors 201. Now that y'all are human factors experts, I'm going to build upon the foundation Sasha so nicely laid out and give you some concrete examples of how you can apply human factors engineering in your work environment and to your patient safety initiatives. By way of background, I am formally trained in human factors engineering and ergonomics through the Texas A&M University School of Public Health, though, it being A&M, largely by industrial engineers, and so I learned about human factors and system safety and wanted to apply that in healthcare. I am a certified professional in patient safety and have worked in a number of healthcare systems, including Johns Hopkins, the VA, Christiana Care, and now MedStar Health. And I have a number of academic appointments, including Georgetown University, Catholic University, and the Association for Computing Machinery's SIGCHI Summer School for eHealth and mHealth, and I serve as a key advisor to the newly formed Dental Patient Safety Foundation.
As Sasha discussed, human factors includes a wide range of disciplines, primarily cognitive psychology and industrial engineering. Thinking in these two buckets: from a cognitive psychology perspective, we can see a variety of expertise that can be applied in our work, including anthropology and ethnographic approaches; informatics techniques applying visual design and user experience; and data science thinking, things like big data analytics and predictive analytics. From an industrial engineering perspective, we apply principles of biomechanics, ergonomics, anthropometry, anatomy, and physiology. As human factors researchers, there are lots of things we can measure: attitudes and motivation, preferences, knowledge and expertise, skills and abilities, both physical and mental, aptitude and potential to perform and succeed, and then exciting things like emotion and physiologic state. In this example here, you can see the results of a wearable tracker capturing heart rate variability. This was a study of emergency physicians in trauma situations, trying to understand the physical stress that comes with that work. We also measure things like tasks and processes. These are the results of an observational study where we tracked different tasks performed by different clinicians, by task type and duration. We also look at behavior. This graphic represents an eye tracking study using heat maps of gaze patterns to better understand how clinicians interpret digital EKGs and identify anomalies in the patient's heart rhythm.
One thing I'd like to point out is the value of collecting both preference and performance data. We need to understand whether an intervention or a patient safety initiative is going to improve performance, but we also need to understand the end user's preference. For example, if we design a great new decision-making tool and we show that it is improving clinical outcomes, but providers hate it, we know that it's never going to work, because the providers will never use it. Alternatively, if everybody loves our tool but it's not actually changing decision-making behavior or producing any sort of clinical improvement, that's equally important for us to know. So we need to recognize the trade-offs between preference and performance. You can see in these examples a few ways that we measure this. Thinking about preference, we could run a user survey to capture subjective preferences. And in the bottom example, we can evaluate difficulties and failures in an infusion pump usability evaluation to understand performance related to device use; I'll talk more about device use in just a moment.
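One standard instrument for the "user survey" side of this trade-off is the System Usability Scale (SUS), a ten-item questionnaire with a fixed scoring formula; the talk doesn't name a specific survey, so this is offered as one common option, and the sample responses below are invented.

```python
# System Usability Scale (SUS) scoring: ten items answered 1 (strongly
# disagree) to 5 (strongly agree). Odd items are positively worded and
# even items negatively worded; the standard formula rescales to 0-100.
def sus_score(responses: list[int]) -> float:
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = 0
    for item, response in enumerate(responses, start=1):
        total += (response - 1) if item % 2 == 1 else (5 - response)
    return total * 2.5

# Invented responses from one participant evaluating a tool or device.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```

Pairing a preference score like this with performance measures such as task success rate and time on task is what surfaces the trade-offs described above: a tool users love but that doesn't change outcomes, or one that works but that no one will use.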
There's a range of methodologies that we apply to capture these elements, and different types of study designs. When we conduct descriptive research, that's the type of work we can do from our desk: we're evaluating data, we're creating mathematical models, and we can design and evaluate products and interventions by applying design standards or conducting surveys. There are also different types of simulations we can use, including computer simulation, to understand things like resource allocation or patient flow, and human simulations like usability testing in the simulation center. We conduct ethnographic research, where we go into the field and observe practice and behavior. And we also run field studies, including pilot studies, to understand what happens when something is introduced; we can evaluate the physical environment, the introduction of a new device, or a change in the electronic health record, and in these field studies we can evaluate in real time the impact on structure, process, and clinical outcomes. For our center, we likely have more bandwidth and resources to conduct these types of studies, so today we wanted to share with you some concrete examples of things that you can do in your own work environment to apply human factors engineering. I selected five core concepts based on Institute of Medicine concepts in quality and patient safety: avoiding reliance on memory and vigilance, attending to work safety, involving patients in their care, incorporating user-centered design, and purchasing for safety.
The first principle is to avoid reliance on memory and vigilance. As Sasha demonstrated, human vulnerabilities, including being easily distracted and unable to concentrate on repetitive tasks over long periods of time because of cognitive limitations, create vulnerabilities in the healthcare system. If we want to ensure clinicians get things right the first time and notice when things go wrong, we need to better support them so as not to rely on any individual's memory or vigilance. One strategy is to standardize and simplify the structure of tasks to minimize the demand on working memory, planning, or problem-solving. We want to improve access to accurate and timely information at the point of care when necessary, and simplify key processes. This image here represents a central line insertion kit. Prior to initiatives like this one, if a clinician were to place a central line, they would need to go to multiple places in their unit to acquire the necessary equipment, and they would then need to remember the order and the instructions for each step. In this healthcare system, a fairly simple initiative collected and organized all the right materials: they labeled each pocket with a distinct step and provided instructions for clinicians, if needed, to support a standardized process. And they looked at outcomes like time efficiency in gathering materials, saw higher satisfaction from providers in having all of the equipment readily available, and also improved self-efficacy for novices to complete this task without errors.
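The insight in that kit is that the order and content of the procedure live in the artifact rather than in anyone's working memory. A minimal software analogue, an ordered checklist that must be confirmed in sequence, might look like the sketch below; the step names are abbreviated placeholders, not a clinical protocol.

```python
# Minimal ordered-checklist sketch: steps must be confirmed in sequence,
# so the procedure lives in the artifact, not in working memory.
# Step names are illustrative placeholders only.
STEPS = [
    "Perform hand hygiene",
    "Apply full barrier precautions",
    "Prep the site",
    "Insert line per kit instructions",
    "Confirm placement",
]

def run_checklist(confirm) -> bool:
    """Walk the steps in order; stop at the first unconfirmed step."""
    for number, step in enumerate(STEPS, start=1):
        if not confirm(f"Step {number}: {step}"):
            print(f"Stopped: '{step}' not confirmed.")
            return False
    print("All steps confirmed.")
    return True

if __name__ == "__main__":
    # Example: confirm each step interactively from the console.
    run_checklist(lambda prompt: input(f"{prompt} - done? [y/n] ").lower() == "y")
```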
Another example, more specific to clinical decision support and informatics initiatives in the electronic health record, is to improve access to accurate, timely information and to ensure information for decision-making is available at the point of care. This image is a mock-up that we are working off of to support decision-making in the context of sepsis. In providing clinicians with patient-specific information, including indicators of organ dysfunction, we're able to highlight missing data to convey to a clinician that they might need to order a certain test, visualize shock type to inform treatment, and also show them projected mortality, which we think can assist in forecasting outcomes and appropriate resources.
The second principle is attending to work safety. We know that when there's a mismatch between the physical requirements of the job and the physical capability of the worker, it can result in things like work-related occupational injury and adverse events for patients. So one strategy is to consider the physical environment for both providers and patients, including things like accessibility, usability, and the design of equipment. Here you can see some simple examples that improve the visibility of work hazards. As Sasha mentioned, other industries have gotten pretty good at this sort of thing. Specifically, manufacturing has done an excellent job of marking the physical space: in the image in the top left, they're indicating to keep the aisle clear. They've also used techniques like shadowing, which you might have seen on a cork board where somebody has drawn the outline of a hammer and colored it in, and the employees know that's where to return the hammer. That's a quick and easy technique to designate specific locations for every item in the workplace, and we've seen it starting in the healthcare industry as well. We can apply similar strategies to show things like which way a door opens and the correct way to push it open. There are additional interventions to improve work safety in thinking about emotional and cognitive well-being, with a very important focus on clinician burnout; there have been initiatives like mindfulness and meditation spaces. And in thinking about physical well-being and reducing occupational injury, lifts and other equipment can be used to reduce the need to lift patients, which reduces the likelihood of injury but also protects patients from adverse events like falls. It's important that in the physical space we think about that equipment being available. Many of you have heard the anecdote where a new piece of equipment was introduced but is kept two units away in a closet. We know that it's then not at the forefront of clinicians' minds, and if it's not visible to them, they might not remember to use it or have the time to go get it. So we need to make sure that when we introduce these things, they're easily accessible, to ensure use.
The third principle is to involve patients in their care. In doing so, we can help close the feedback loop in patient-provider communication, but it also builds in another layer of defense, another slice of cheese if we think back to Sasha's discussion of the Swiss cheese model of causation. Wherever possible, we should invite patients and families to become part of the care process. There are a number of studies suggesting that safety is improved when patients and families are well informed about their condition, their treatment, and also the technologies used in their care.
One great example of how to involve patients is the Daily Plan, an initiative led by the Department of Veterans Affairs National Center for Patient Safety. This is a document that provides patients and families with information about what to expect for their care on a single day. It enhances patient safety by encouraging the patient to ask questions about their health care, especially if something seems different than planned. The document is populated from information in the electronic health record, making the process efficient, and it strengthens communication between patients, the family, and the health care team. Through populating the Daily Plan, but also through open discussion, everyone involved in care can identify errors of omission, where a patient thinks they were supposed to have a test done but it's left off of the care plan, but also errors of commission, where extra things are to be done during the day that weren't expected or planned for.
Another example of involving patients is through pharmacy discharge programs. In the past few years, health care systems were noticing that patients were experiencing poor outcomes, including readmission, due to a lack of medication adherence, and when they looked deeper, they saw that many patients were unable to acquire their medication once they were discharged. This was happening for a number of reasons: a lack of transportation, or a lack of time or resources to get to the pharmacy due to competing life interests. So in response, hospitals have started pharmacy discharge programs, a partnership between the outpatient pharmacy and inpatient units to ensure all discharge prescriptions are filled before the patient leaves the hospital. These programs have demonstrated improved clinical outcomes, like reduced readmissions, but also improved satisfaction, which we think could potentially be linked to improved HCAHPS scores and reimbursement for the health care system.
The fourth principle is to incorporate user-centered design. When preparing to design a patient safety initiative, teams should employ user-centered design, remembering that the design revolves around the central importance of the user. Designers should understand how to reduce errors by defining likely sources of error, what we think might go wrong, and pairing them with effective ways to reduce them, or mitigation strategies. The design process is iterative, as you can see in this example, which focused more on web technologies: there's a plan, there's a design, then there's some sort of prototype or mock-up, and then review. It's an iterative process where, once you put something in front of the end user, if they don't interact with it in the way you anticipated, maybe you go back to the design phase and create a new prototype. So you're thinking about the end user and incorporating them into the design.
This next example was the result of an adverse event review at MedStar Health, and it has been published. In that story, they say: flush with fever, an 11-month-old girl was rushed to a MedStar Health emergency department, where she very nearly suffered the impact of a harmful medical misunderstanding. After a nurse recommended giving the infant five milliliters of ibuprofen based on a common dosing form, the one on the left, the baby's father almost gave the little girl far too much of the medicine, because he had bought the commonly available infant drops, which is a much more concentrated version of the painkiller.
Before he did, the infant's mother, who is a MedStar associate, realized that the nurse's recommendation had instead been based on the children's version, which is a weaker formulation. This was a close call, but it's the kind of example that human factors can be applied to address through redesign. You can see on the right the redesigned form, and this one is much simpler and much clearer: it shows the different doses based on weight and medication type, distinguishing between an infant formula and a child formula and whether it's a tablet or a liquid, making it much easier for the patient and the family to understand, and driving the design toward the user.
In the second example, we demonstrate the complete redesign of medication alerts in the electronic health record for a community healthcare system. Prior to this redesign, all of the pop-up alerts looked and functioned very differently. There was no standardization: some alerts said high alert, others said warning, caution, danger, or critical; the type of information was different in each, and it was displayed differently. So a committee of experts, including pharmacy, health IT, end user clinicians, and human factors engineers, evaluated all of the active alerts and created what we called an alert framework to standardize design. Each alert was redesigned, as you can see in this image, by risk and action. The framework included the following information: why the alert was triggered; what the risk was to the patient, which was often left out of the old alerts; the recommended actions, using a signal word, so you can see "warning" in this example; allowing corrective action, since many of the alerts were just acknowledgments and didn't direct you to change your behavior; and suggesting alternative therapies. So you can see in this example: the alert was triggered because the medication could lead to a fall risk; in the "why" category, the clinician is informed that this medication is inappropriate as a sleeping agent in the elderly population; the risk is an increased risk of delirium and falls; and the action is to cancel the order and consider alternative therapy. The clinician can then choose to cancel or continue the order.
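If that alert framework were encoded in software, every alert might carry the same standardized fields. A minimal sketch, assuming nothing about any particular EHR vendor's alerting API, with the field values paraphrased from the fall-risk example:

```python
from dataclasses import dataclass

# Sketch of the alert framework described above: every alert carries the
# same standardized fields, so clinicians always see why it fired, the
# risk to the patient, and a corrective action.
@dataclass
class MedicationAlert:
    signal_word: str    # e.g. "WARNING"; one consistent severity vocabulary
    why: str            # why the alert was triggered
    risk: str           # risk to the patient (often missing in legacy alerts)
    action: str         # recommended corrective action
    alternatives: str   # suggested alternative therapies

    def render(self) -> str:
        return (f"{self.signal_word}: {self.why}\n"
                f"Risk: {self.risk}\n"
                f"Action: {self.action}\n"
                f"Consider: {self.alternatives}")

alert = MedicationAlert(
    signal_word="WARNING",
    why="This medication is inappropriate as a sleeping agent in the elderly.",
    risk="Increased risk of delirium and falls.",
    action="Cancel the order.",
    alternatives="Consider alternative therapy.",
)
print(alert.render())
```

The design point is the uniform structure itself: when every alert exposes the same why/risk/action fields with one signal-word vocabulary, clinicians don't have to relearn each alert's layout.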
And then the last principle is to purchase for safety. We've spoken today quite a bit about poor design. People tend to blame themselves or others when things go wrong, hence the term human error, but it's often bad design that leads people to make errors. When we think about medical devices, many of these errors are due to inappropriate designs for user interaction rather than mechanical failures. So you can proactively evaluate medical devices prior to purchasing, to ensure the devices selected support patient safety but also occupational safety in the healthcare system. One way to do this is through usability testing, defined as an effort to ascertain the degree to which something has met the usability needs of its intended user base. One caveat is that usability is difficult to evaluate and measure, but by simply giving an end user the opportunity to evaluate a device, you will get a wealth of information. The pictures here show a really complex usability evaluation of infusion pumps run through our simulation lab, but even in a simpler evaluation, if you ask the device manufacturer to provide a sample, bring in end users, let them handle the device, and ask them to perform tasks similar to how the device would be used, you will get a lot of really important feedback about their experience. The goal of that infusion pump study was threefold: we wanted to identify common usability issues and provide design solutions, determine the aspects that could be included in our screening regimens, and provide usability information to inform future procurement decisions. In this example, the hospital had already selected the infusion pump, so it wasn't about purchasing but about a safe rollout; by evaluating the device under controlled conditions to see what errors clinicians were making, we could then support that rollout in a much safer way. The second example was a decision between two different defibrillators. In this evaluation, a research team identified which criteria were most important, thinking about design elements like real-time CPR feedback but also logistics like battery life and cost. We evaluated the different devices, conducted a comparative evaluation between the two, and used this information to create an informative evaluation that we gave to the purchasing committee. I think it's important for folks to recognize that even if a device has met FDA approval, it doesn't mean there aren't important usability considerations that could present patient safety hazards.
So, to summarize: errors are inevitable, even for healthcare providers, but we hope we have demonstrated today how the application of human factors engineering can improve patient safety. Wherever possible, develop solutions that address latent errors in the system and make it easy to do the right thing. We thank you for your time, and we're happy to take any questions.
Wonderful. Well, we really appreciate both Sasha and Kristin and your expertise on the subject. It was super interesting; I was taking notes myself. So again, thanks for being with us today. Let's move on to the next slide. Perfect. For those of you who are on the phone or on the web, we'd love it if you could submit some questions through the chat feature. We don't have any questions yet, just lots of great comments about the webinar and where people can find the materials later. But we did actually just get one: someone has privately asked, could you share the reference for your data on greater-than-12-hour shifts?
Yeah, I think we can certainly find that study. That one was a little bit older and was trying to understand shift duration; I think there have been other, more recent studies looking specifically at residents as well, so it's certainly a hot topic that has been argued multiple ways in terms of how shift lengths can impact errors. But yes, we can certainly find some references for that.
Great. And we have another question from William: do you have any human factors work around sepsis detection or decision support tools for the ICU?
Yeah, I'll take that one as well; this is Kristin. The bulk of my research portfolio is in clinical decision support, and specifically in sepsis and acute physiologic deterioration, so we've been doing a lot of work with the same systems approach that we talked about today. On the back end, that means thinking about the predictive analytics, how you can move the thresholds to be more sensitive or specific, and which features you would use to predict sepsis. That's a really important component, but it's not all of it. Then there's the front-end informatics component, which we're also really interested in: how do you provide information to clinicians in a way that they can understand, but also correctly respond to? We think that visual display component is important as well. And then the third dimension is usability and acceptability: understanding things like workflow, and when you would present an alert to someone at a meaningful time in their workflow so that they could act upon it. So yes, we're actively working on that, and there are probably some things we could share with you. I think there are lots of people trying to figure this out, and I'm happy to see more thoughtful approaches that are, again, considering the end user and making sure that the ultimate solution is strong on the data side but also strong in terms of development and being a useful tool.
Great. William says: good, I'm with Mayo Clinic Ambient Clinical, we should talk offline. So Claire from the Patient Safety Movement can connect you two. We got another comment just saying great presentation, so you're getting some kudos out there. Any other questions? We still have five minutes. While we wait for people to hopefully submit a few more questions, I have a question from the foundation. Sasha, Kristin: is there one action or thing that you would suggest frontline staff or leaders within healthcare lead with when they go back to their institution? Maybe Sasha can start if she has some good ideas.
No, that's a great question. Does anything come to your mind immediately, Kristin?
Yeah, I'm thinking about this: Sasha and I do lots of education, as she mentioned, and we've been doing presentations, and I think about the way we talk about this. We often have much more interactive sessions where people experience failure, in things like vigilance, where you have the audience saying, "I would never make that mistake," and then we create an environment where they make those mistakes. I think it helps people to see that, yes, humans are fallible, and we can see these errors, and to convey that to frontline staff so that they know this is not all their fault; a lot of this is poor design. But with that, have some sort of avenue where they can report that information back. I recognize lots of patient safety teams are very small and resource constrained, but find a way to gather that information, because if it's happening to one person, it's likely happening to lots of people. We know that serious adverse event reports come in, and that's a great avenue, but be able to collect things that aren't even a close call, right; close calls are important, but even somebody struggling with something, or seeing how a device could lead to an error, being able to capture all of that and then obviously try to address it. Make sure that there's a place where people feel like they can be heard and they can share that
information, because it really is the end users who have all of that really important knowledge.
It looks like we have a few more questions. We have a question from Bonda: can you point to research that looks at overriding orders? What do we know about how people tend to feel more confident than technology and override systems that are meant to ensure safety?
Oh, that's like a whole other workshop; I think it's a really, really important thing. Again, I think back to the design of these tools and the way that they're introduced. Going back to the sepsis example, we had done some work to look at the sensitivity and specificity of alerts and had found, in one example, a 70% false alarm rate. In those instances, yes, people are going to override the alert, and correctly so. So how can you make sure that what you're putting out there has some context? For the sepsis example, saying that we think this might be sepsis, but it could be something else, and giving people the important information they need, as opposed to what we're doing right now, which is very binary, yes or no, with people being punished or getting in trouble because they're overriding something. I think having more context and more thought around this is going to be really important moving forward.
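To see where a false alarm rate like the 70% figure mentioned above can come from, here is the standard predictive-value arithmetic. The sensitivity, specificity, and prevalence below are invented for illustration; they are not figures from the MedStar work.

```python
# Why alert systems can have high false-alarm rates even when the
# underlying model looks good on paper. All three inputs are invented
# for illustration only.
prevalence = 0.05    # assumed: 5% of evaluated encounters truly are sepsis
sensitivity = 0.90   # assumed: P(alert | sepsis)
specificity = 0.85   # assumed: P(no alert | no sepsis)

true_alerts = sensitivity * prevalence
false_alerts = (1 - specificity) * (1 - prevalence)

ppv = true_alerts / (true_alerts + false_alerts)
print(f"Positive predictive value: {ppv:.0%}")                   # ~24%
print(f"Share of alerts that are false alarms: {1 - ppv:.0%}")   # ~76%
```

When roughly three out of four alerts are wrong, overriding them is rational behavior, which is exactly why the answer above argues for context-rich rather than binary alerts.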
Great, thank you. And we have one final question, from Umar: do you have a repository of the solutions that you have developed so far for problems that hospitals have faced, for example the pediatric medication form that was designed to prevent those errors? And sometimes courts of law punish you due to breach of duty when actually human factors issues are involved; can you comment on what should be done in such cases?
Yes. For the first part, I think that brings up a really good point. There are lots of people doing this work, patient safety folks and human factors folks, and I don't know of one great repository where all of it goes. Maybe, and I don't know if this is in the scope of the Patient Safety Movement Foundation, but there could be a way to crowdsource the great initiatives that people are doing. If we wait for things to be published, we're waiting three or four years, and I don't think that's the best venue, so we should be able to share some of this work. That medication form, I think, is a great example that we obviously love to share, and other people could take it and use it as well, so maybe collectively we can think about a good forum for that. The second question is more on the risk management side. I know MedStar has a really good relationship with our risk management team, and having human factors folks sit in on adverse event reviews to be able to talk about design error is really important, but I personally do not get into the more legal side, which, yeah, is certainly a concern.
Yeah, and I'll just tag on to that from the foundation side: the Patient Safety Movement does have a pediatric adverse drug event APSS. I will say that it doesn't include that tool you shared, Kristin and Sasha, so I may reach out to you; I think it would be a great resource to add as an appendix. So I love that idea, and thank you, Umar, for that. We're at 10 o'clock on the dot, and we just have one more slide, to show you what's going on in the world of the Patient Safety Movement. Our next e-newsletter will be released on Monday, April 1st, so make sure to check that out; we have spotlights on hospitals and health care organizations that are improving patient safety, and we provide those examples to encourage you all to do more in your institutions and to celebrate those doing great work. The second item is our mid-year planning meeting, one of the two main events that the foundation hosts. It is going to be Tuesday, September 17th, which will also coincide with World Patient Safety Day, planned for September 17th, so we're excited about that. It'll be here at the University of California, Irvine in Irvine, California, and you can request your invitation today on our website. And then we actually have two upcoming webinars: one on May 8th, at this same time, on our new patient safety curriculum designed for all health professionals; and then on June 12th we have kind of an interesting topic, reducing ED boarding time, hospital lengths of stay, and inpatient mortality for hospitalized patients after implementation of an electronic throughput dashboard. Kind of a long name, but an interesting topic; that'll be Brandon Lau from Johns Hopkins. You can register on our website for those upcoming webinars and our events, and we will be posting this webinar online within 24 hours, so if you want to catch it again or forward it on to a colleague who missed it, we'd love for you to do so. So again, thank you, Kristin and Sasha, so much for your expertise and knowledge. It was a really well attended event with lots of great questions, and we hope to catch you all on our next webinar. Thank you.