Hello. My name is Robert Lobato. I'm the chief of the division of multi-specialty anesthesiology at the University of Nebraska Medical Center. I'd like to talk to you today about how run charts can supercharge your quality improvement program. Now, as we know, the goal of quality improvement is to refine a process. There are three fundamental steps for doing this. Step one is to reduce variation in how a process is performed by establishing a standardized process, usually using evidence-based guidelines. Step two, deploy the new practice in a test environment and measure improvements. Step three, expand adoption of this practice and attempt to sustain improvements over time. So how do you know that your quality improvement program is working? And how do you communicate your progress to stakeholders? I believe that run charts are an answer to both of these questions.

So what are run charts? Run charts are a graphical method for visualizing process variation. They're also helpful in identifying trends or changes in performance over time. The image here is an illustration of what a really basic run chart might look like. Run charts were developed in the 1920s by Walter Shewhart of Bell Labs. They were used extensively in manufacturing throughout the 20th century and made their way into business operations in the 1990s. In fact, Don Berwick, one of the gurus of healthcare improvement, said this in his 1995 Institute for Healthcare Improvement address: "If you follow only one piece of advice from this lecture when you get home, pick a measurement you care about and begin to plot it regularly over time. You won't be sorry." Powerful words from a leader in the field.

So how do we construct a run chart? First, we take an outcome of interest and plot it against a relative time interval. Then we calculate and plot the median for the sample. That's pretty much it. It sounds too simple to be useful, right? But it isn't. Run charts can improve QI by illustrating the variation in an existing process. They can also help detect changes in performance over time.

So what do I mean when I talk about variation? In this context, variation is the difference between an ideal and an observed process. Really, what we want to understand is how consistent a process is. In a run chart, we can interpret variation as the dispersion around a central tendency: in this case, how far points lie above and below the median for that sample. The larger the distance between the median and the points, the larger the variation. On the left-hand side of the figure below, you can see a process with moderate variation. On the right side of the figure, you can see reduced variation following a process improvement of some sort. Both of these samples have the same median, but the distance the points lie above or below the median has been drastically reduced. So why is variation important? Variation can result from differences in any part of a process. The goal of quality improvement is to minimize variation wherever possible. Run charts can really help us do that.

The other thing that run charts can be really effective at is detecting changes in performance over time. We start by pre-defining a number of consecutive events above or below the median that constitutes a run, or what some people call a shift. When a run is identified, a new median is calculated; some people call that rebasing the median.
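To make that recipe concrete, here is a minimal hand-rolled sketch in R of a basic run chart. The monthly values and the run length of three are invented purely for illustration, and this is plain ggplot2 rather than the package we'll meet in a moment.

```r
# Minimal run chart sketch with made-up data: plot a monthly outcome,
# overlay the sample median, and flag any run of three or more consecutive
# points falling on the same side of the median.
library(ggplot2)

dat <- data.frame(
  month = seq(as.Date("2021-01-01"), by = "month", length.out = 12),
  rate  = c(0.91, 0.89, 0.92, 0.90, 0.88, 0.91, 0.90,   # illustrative rates,
            0.93, 0.95, 0.96, 0.94, 0.95)               # shifting upward late
)

med <- median(dat$rate)

# Group consecutive points by which side of the median they fall on,
# then mark any group long enough to count as a run (threshold chosen up front).
side       <- sign(dat$rate - med)
run_id     <- cumsum(c(1, diff(side) != 0))
dat$in_run <- ave(side, run_id, FUN = length) >= 3 & side != 0

ggplot(dat, aes(month, rate)) +
  geom_line(colour = "grey40") +
  geom_point(aes(colour = in_run), size = 2) +
  geom_hline(yintercept = med, linetype = "dashed") +
  scale_colour_manual(values = c(`FALSE` = "black", `TRUE` = "red"),
                      guide = "none") +
  labs(x = "Month", y = "Outcome",
       title = "Basic run chart (illustrative data)")
```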
Now, as I just mentioned, it's important to pre-specify the number of consecutive events above or below the median that constitute a run. This number will be specific to each individual quality improvement program and is based on what an expert would consider a meaningful change. It's going to be different from program to program, and it also depends on how often your outcome is measured. An outcome that's measured on a daily basis may require several consecutive events before a run is identified. An outcome that's measured less frequently, maybe weekly, monthly, or annually, may need fewer consecutive events to constitute meaningful change.

So let me give you an example from clinical practice. My day job is as an anesthesiologist at the University of Nebraska Medical Center. We're a level one trauma center located in Omaha, Nebraska, and we're Nebraska's only anesthesiology training program. We utilize an anesthesia care team model involving anesthesiologists, nurse anesthetists, and anesthesiology residents. Together we administer approximately 25,000 adult anesthetics per year.

The quality improvement program we undertook centered on postoperative nausea and vomiting. PONV is a relatively common phenomenon after anesthesia, and it's an important patient-centered outcome. In fact, most patients report that they would rather be in moderate pain after surgery than experience nausea and vomiting. In some cases, PONV can be quite severe. PONV is associated with increased healthcare costs and has well-recognized risk factors, and there are evidence-based society guidelines for preventative treatment. So, all in all, PONV is an excellent candidate for a quality improvement program. In fact, PONV has become such an important patient-centered outcome that CMS has designated it a MIPS measure. They've identified it as number 430, which they call the Prevention of Postoperative Nausea and Vomiting Combination Therapy. Now, for those outside the United States, CMS is the Centers for Medicare and Medicaid Services, our primary government payer, responsible for about 20% of U.S. healthcare expenditures. MIPS is CMS's Merit-based Incentive Payment System, a financial reward for institutions that perform well on specific quality improvement measures.

So how is our institution doing? We went back and pulled data from the first nine months of academic year 2021 to use as a baseline. Our median during that time was roughly 91% of patients receiving treatment consistent with the PONV guidelines. That's not bad, but we have room for improvement. As I look at these data, I see an outcome of interest and a time interval for each measurement. This looks like a perfect job for a run chart.

My favorite method for generating run charts in R is the runcharter package written by John MacKintosh. Mr. MacKintosh is an analyst for the NHS. He gave a talk at R/Medicine 2020 in which he introduced both the runcharter package and spccharter. The QR code to the right of the slide is a link to that talk; I highly recommend it. So let's step through how we'd use the runcharter package. The primary function is runcharter(). The first argument of interest contains our time interval; for us, this is the surgery month. The second is our outcome of interest; in our case, this is our MIPS 430 success rate. The next argument is the number of observations used to calculate the initial baseline. A sketch of what such a call might look like is shown below; the remaining arguments are explained just after it.
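In this sketch, the data frame and its column names (surgery_month, mips430_rate, plot_group) are hypothetical stand-ins with made-up values, and the argument names reflect my reading of the runcharter documentation rather than our actual project code, so do check the package help files before adapting it.

```r
# A sketch of a runcharter call, assuming hypothetical data and column names.
# Argument names follow the runcharter documentation as I understand it;
# verify against help(runcharter) before reusing.
library(runcharter)

# Made-up monthly data: nine baseline months followed by six more months
ponv_df <- data.frame(
  surgery_month = seq(as.Date("2020-09-01"), by = "month", length.out = 15),
  mips430_rate  = c(0.92, 0.90, 0.91, 0.89, 0.92, 0.91, 0.90, 0.91, 0.92,
                    0.95, 0.96, 0.95, 0.93, 0.92, 0.94),
  plot_group    = "Department"   # dummy grouping variable (a single constant group)
)

out <- runcharter(
  ponv_df,
  datecol   = surgery_month,  # time interval of interest
  yval      = mips430_rate,   # outcome of interest (MIPS 430 success rate)
  med_rows  = 9,              # observations used for the initial baseline median
  runlength = 3,              # consecutive points that constitute a run
  direction = "both",         # look for shifts above and below the baseline
  grpvar    = plot_group      # grouping variable used for faceting
)

# The result is a list; its ggplot component (named runchart, if memory
# serves) can be re-themed and re-labelled like any other ggplot object.
out$runchart +
  ggplot2::theme_minimal() +
  ggplot2::labs(title    = "MIPS 430 success rate by month",
                subtitle = "Baseline median with rebasing after a sustained run")
```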
In our dataset, we had nine months of historic data that we wanted to use to create a baseline, so we set that baseline argument to nine. The next argument contains the number of consecutive events that define a run. Our experts felt that three consecutive months of changed performance would be clinically meaningful, so we set this number to three. We're interested in shifts both above and below a baseline, so we set our direction to both. The last argument is a grouping variable used for faceting. In our case, we only had one group, so we set this to a dummy variable called plot group, which is just a constant.

This is what the default runcharter output looks like. It has two components. First, there's a ggplot run chart. Second, there's a calculation of medians for each interval of interest, with rebasing. As you can see, runcharter does a very nice job by default of generating an attractive plot. It recognizes that our time intervals were in date-time format and labels them appropriately, and it gives us a nice y-axis scale with a reasonable number of significant digits. This is our institution's baseline performance on MIPS. What we can see is a relatively stable process marked by what those in statistical process control would call common cause variation, meaning small differences in patients, in locations, and in provider practice patterns that cause fluctuations above and below the baseline over time.

What I really like about the runcharter package is that its output contains a ggplot object. That means there are a number of things I can do to change the way this object is presented. I can do things like remove faceting, apply a custom theme, and reformat the axes and the way they're labeled. I can also add a title and a subtitle. This is the finished product after my small customizations. I think this looks pretty nice, and it's perfect for a large group presentation.

Next, we implemented a department-wide QI program. We started with large group education through departmental grand rounds, where we reviewed the current guidelines on the prevention and treatment of postoperative nausea and vomiting. We also chose to include our MIPS 430 performance in our value-based quality incentive program, which is an enterprise-wide effort to give a small reward to the department in exchange for meeting or exceeding pre-specified quality goals. Our first quarter performance, three months later, looked like this. That looks pretty good. The runcharter package has identified a run, meaning we have three consecutive events above the previous baseline median. The package has rebased a new median for us, and everything has been presented using our updated theming through ggplot. Before bringing this back to our quality group, I thought I'd add a little more detail with ggplot annotation layer geoms, custom fonts, colors, and parsed formatting. This is super cool. So this is the final plot we presented back to our department as an interim update on our QI project. We were feeling pretty good about that.

Three months later, our QI group got together to examine our second quarter performance. Here's what it looked like. We all looked at each other and said, wait, what's happening? Clearly there's been a substantial decrease in our MIPS performance rate over the following three months. Now, for folks who are familiar with QI, this is not going to come as a big surprise. It's not uncommon for a new quality improvement project to be marked by an initial burst of enthusiasm and increased performance at the outset.
The following months are often marked by a return to baseline performance. In fact, some people call this the sustainability challenge. This particular publication reports that in the National Health Service in the United Kingdom, up to 33% of quality improvements are not sustained when evaluated one year after completion. Other studies that have looked into this phenomenon have concluded that multiple interventions are often required in order to generate sustained change over time. Without repeated intervention, it's not uncommon for practices to return to their baseline performance.

In response to this, we decided to redouble our quality improvement education efforts. On top of our large group education through departmental grand rounds and the inclusion of our metric in our value-based quality incentive program, we began a monthly metric review at our quality conference. This gave us an opportunity to review our performance in front of the large group and remind people of the importance of this metric. We also began to generate individual clinician-level quarterly reports of metric success.

This is an example of the clinician-level performance report we created. This particular image is from the overview report for our nurse anesthetists. The right half of the image, which is really not intended to be readable to the audience, lists each individual nurse anesthetist's performance and ranks it in comparison to his or her peers. There's also a red line indicating our aspirational performance for the group as a whole. Each clinician receives his or her individual report with their peers' names blanked out, so that only that clinician's name appears. This gives them the opportunity to see how they're performing relative to their peers and relative to our group standard. It was tremendously impactful in generating change.

So let's see how we did on our third and fourth quarter performance. As you can see, the increased intensity of feedback restored our performance gain. The third and fourth quarters were marked by successively improving performance over time. In fact, during the fourth quarter, a new run was identified and a new median was rebased. I've also added the first quarter performance from academic year 2023, which appears as the last three dots in this plot. One of the really gratifying aspects here is that our performance gains have been sustained over the last six months, and our new rebased group median continues to hold.

One of the lessons our quality improvement group took away from this is that one intervention usually isn't enough to sustain change. For us, multiple interventions and frequent personal reminders were necessary to sustain change over a long period of time. We're confident that eventually, with continued reminders, our performance will no longer be viewed as the new way of doing things and will just be viewed as the way we do things. At that point, the change will have become permanent.

I'll close by summarizing some of the ways I think run charts can help supercharge your quality improvement program. Most people are used to seeing time series data presented in a format similar to that of a run chart, so when we present run chart data, people have an intuitive understanding of what both progress and setbacks look like over the course of a quality improvement project. Run charts can display both performance change and process variation over time.
By establishing at the beginning of a project how many consecutive events constitute a meaningful change, we give our audience the advantage of being able to understand when a true change has taken place. The run chart algorithm will identify these changes, rebase the median automatically for us, and display it as a new median on the run chart. This is tremendously powerful when presenting data to an audience. The runcharter package in particular offers ggplot output, which is tremendously customizable. This allows us to increase stakeholder engagement by changing themes, fonts, and shading, adding annotations, or even adding animations along the way.

I'd like to thank the R/Medicine 2022 conference organizers for inviting me to speak today. I'd also like to thank Mr. John MacKintosh, who authored the runcharter package, as well as the authors and contributors to the thousands of packages that make R such a powerful tool. Thank you.