Hi. So now that we've talked about implementation science in general, and we've talked about the multitude of frameworks and models that are out there, I'm going to talk specifically about RE-AIM, how we can operationalize it, and relate it to genomic medicine implementation. And just as David talked about, the point of implementation science is: how do we get evidence-based practices, the research, into practice in the real world? Or, more pragmatically speaking, what works for whom, when, under what conditions, and in what contexts? And so I'm going to give examples of RE-AIM and how we can operationalize it that way. Oh, this is automatically advancing. So just quickly, who has, before today, heard of RE-AIM? Okay, most people. Who has actually used RE-AIM? Used more than reach and effectiveness? Yes, that's the point. So in deference to Russ Glasgow, one of the creators of RE-AIM, I must say, as we've mentioned, there are a lot of models and frameworks out there. All models are wrong; some are useful. I'm going to talk about how RE-AIM can be useful. So RE-AIM is an acronym: it stands for Reach, Effectiveness, Adoption, Implementation, and Maintenance. It is a framework, as Lori mentioned; it doesn't tell you how those things go together, but it gives you the constructs to work with. RE-AIM was developed about 20 years ago and has been used for a very long time. It was developed initially as an evaluation framework, but has since been adapted over time to be used in program planning as well. The point was to provide standard ways of measuring the factors that are important to broader application in public health, really to facilitate the translation of research into practice, importantly, in the real world, in our clinical settings.
And it does that by encouraging attention to those dimensions, adoption, implementation, maintenance, and by emphasizing not only internal validity, like our effectiveness trials, but also external validity: what's going on in our settings. That basically gets to what David was talking about with the fish-and-bicycle conundrum. Let's take into account the settings in which our programs are implemented. How is it working for them? The staff who are implementing: what are their needs? How are they doing with this? Again, it emphasizes the real world, that setting and context, and it encourages multi-level thinking, so it's not just the patients and their health outcomes, but how the healthcare system deals with it, and the external forces that impact how we can implement these programs. It's also concerned with cost, not cost-effectiveness, but cost to the system: the healthcare costs, the personnel costs of implementing, the costs that are important to healthcare system decision makers. And it's also concerned with the adaptations that get made over time. As Lori was mentioning, we all do something in our own system and report on the adaptations that we made, but RE-AIM lets you focus on those adaptations over time. We all know: you've seen one healthcare system, you've seen one healthcare system. Now, the traditional definitions for each RE-AIM construct. Reach is the absolute number, proportion, and, importantly, the representativeness of the individuals who participate. So reach is at the individual level. Adoption is just reach at the setting level: the absolute number, proportion, and representativeness of the sites and staff that participate in your program. Effectiveness, or efficacy, we all pretty well know, but it's important to also focus on negative effects, those unintended consequences, and to make sure you report on those so we can all learn from them.
Implementation is generally at the setting level. That deals with your fidelity: how well do the staff follow the program as intended over time? What adaptations do they make? And what are the costs of implementing? And then maintenance is really that long-term thinking, typically defined as six months or later. At the individual level, how are those health outcomes maintained over time? At the setting level, can the program be sustained? The point of maintenance is really to begin thinking about integration into the system over time. So I'm going to talk first about how we operationalize RE-AIM in a research setting with Geisinger's MyCode Community Health Initiative. Some of the background reading Mark provided covers this, but hopefully everybody knows about Geisinger's MyCode initiative. It's our biobank. Patients consent to the biobank, give a sample that is sequenced, and also consent to have medically actionable results returned. So when we look at reach for MyCode participation, the consenting into MyCode, we typically report a pretty high consent rate: about 85% of those approached consent. But RE-AIM gives us the structure to report the important pieces behind that number. Our target population is essentially everyone: if you get care at Geisinger, you can consent to MyCode, so there are essentially no exclusion criteria. RE-AIM then walks us through reporting on each piece of the reach construct. The total number approached: I didn't have those numbers until last week, but this is where we say we get an 85% consent rate. As of last week, we know that over 200,000 have consented, have actually signed a consent form. However, to actually participate in MyCode, participants have to give a sample. As of last week, a little over 140,000 had actually participated, or about 65% of those who have consented. But we also want to look at representativeness.
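To make the reach arithmetic concrete, here is a minimal sketch in Python of how the reach proportions just described can be computed from counts of patients approached, consented, and sampled. The counts below are illustrative round numbers, not Geisinger's actual figures:

```python
# Illustrative RE-AIM "reach" arithmetic; the counts are hypothetical
# round figures chosen to mimic the rates described in the talk.

def reach_metrics(approached, consented, participated):
    """Return the reach proportions as fractions of each denominator."""
    return {
        "consent_rate": consented / approached,          # share of approached who consent
        "participation_rate": participated / consented,  # share of consented who give a sample
        "overall_reach": participated / approached,      # share of approached who participate
    }

metrics = reach_metrics(approached=250_000, consented=212_500, participated=140_000)
print({k: round(v, 3) for k, v in metrics.items()})
# → {'consent_rate': 0.85, 'participation_rate': 0.659, 'overall_reach': 0.56}
```

The point of separating the three denominators is exactly the one made above: a high consent rate can mask a lower sample-return rate, and RE-AIM pushes you to report each step.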
So what is the representativeness of those who consent and provide a sample versus our general Geisinger population? We've published some of that. In some of our initial looks, we examined age and the number of conditions: we know participants are a little older, and we know they have a few more conditions than the general population. For effectiveness, we can look both at the effectiveness of the consenting into MyCode, the population screening, the number of pathogenic and likely pathogenic variants that we find, and at the effectiveness of our return of results, the genomic screening and counseling program. So we've started to look at the number of conditions that are found and at changes in medical management. We can also look at misunderstandings; we've started to illuminate some of those at both the staff and the individual level. And we can start looking at costs to the healthcare system; we have studies ongoing there. Again, adoption is reach at the setting level. As far as I know, none of our clinics have said, absolutely, you cannot put a consenter in our clinic. But for the return of results, the screening and counseling program we've set up, we can look at the processes we've set up for the providers and whether they actually use them or not. For the implementation construct, which again deals with fidelity, we can look at, for the consenting, how consistently staff present MyCode, what the costs are to implement that, and what adaptations we make to that process over time. And for the return of results, our genomic screening and counseling implementation: what is our fidelity to the process we set up? What adaptations do we make to it over time? And what are the costs to keep it implemented? And then maintenance, again, at both the individual and the setting level.
So we're finally starting to look at those longer-term outcomes, the patient impact over time, as well as how feasible it is to sustain some of this over time. And we're also looking at doing this with some of our eMERGE outcomes, across the eMERGE network. But since we're talking about operationalizing RE-AIM, what I want to focus on is how you can also operationalize it for planning and evaluation of a clinical program, a clinical implementation. As some of you may have heard in the news, Geisinger is also looking at clinical implementation of sequencing. The RE-AIM developers have recently published on the pragmatic use of RE-AIM. These are the same RE-AIM constructs, with the definitions I showed before, except now framed for pragmatic use: the questions that our clinical colleagues often think about. Essentially, this is the who, what, where, when, why, and how, putting those terms to the RE-AIM constructs and defining them that way, rather than in the more researchy terms like "what is the number and proportion of participants." So as our colleagues planned the clinical sequencing, we were able to put the RE-AIM framework around their planning process. Who are they going to do this for? Geisinger Health Plan members, initially; that's who they want to benefit from this. What do they want to do? They want to sequence, and return results for, the ACMG 59. That's their effectiveness. Where? They're going to start at two clinics. These are our early adopter, innovator, champion clinics; we're going to start there. How do they want to do this? They want to use the clinical infrastructure: basically routine visits, and then our infrastructure for the MyCode genomic screening and counseling program. And when?
That's the maintenance piece. We're not looking at maintenance yet, but we can consider scale-up from the get-go. That's how we put the framework around the planning process. But then, as they've started gathering data, they're adapting and reiterating over time, and the RE-AIM developers have published on how you can use the framework this way over time. Now, this may look familiar to those of you who have done any quality improvement work: it looks kind of like a Plan-Do-Study-Act cycle or a Lean process. And that's essentially what you can do with this. You can use RE-AIM to help you iterate over time, but still put structure around what you're looking at and show the impact of those iterations over time. So, some of the adaptations that have already been made to date, as they started looking at their data for the clinical implementation. First, eligible patients didn't seem to be identified and asked to participate. Their solution: give a list of eligible patients every day to the front desk staff. And that seemed to be working; now the front desk can easily identify who's eligible. They've also noticed that maybe they need to expand the inclusion criteria, because they may not have enough eligible patients with appointments in a reasonable time period. Expanding the criteria also serves to fix additional issues with identification at the front desk, workflow issues, which are some of the things they really want to figure out. So, again, the pragmatic use is just answering these questions from the standpoint of those doing the clinical implementation. They're still asking the same questions we showed in that initial RE-AIM table.
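One way to put structure around the adaptation tracking just described is to record each change with what was changed, when, and why, tagged to the RE-AIM construct it targets. A minimal sketch in Python, with a hypothetical schema and illustrative dates (the two adaptations are the ones mentioned above; the dates and field names are mine):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Adaptation:
    """One adaptation made during implementation (hypothetical schema)."""
    when: date      # when the change was made (illustrative dates below)
    what: str       # what was changed
    why: str        # why it was changed
    construct: str  # the RE-AIM construct it targets, e.g. "reach"

log = [
    Adaptation(date(2019, 3, 1),
               "Daily eligible-patient list given to front desk staff",
               "Eligible patients were not being identified and asked to participate",
               "reach"),
    Adaptation(date(2019, 5, 1),
               "Expanded the inclusion criteria",
               "Too few eligible patients with appointments in a reasonable period",
               "reach"),
]

# Summarize adaptations by construct for periodic reporting.
by_construct = {}
for a in log:
    by_construct.setdefault(a.construct, []).append(a.what)
print(by_construct)
```

Capturing the "why" alongside the "what" at the time each change is made is what lets you report on, and learn from, the iterations later, rather than reconstructing them from memory.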
They're just asking from the more pragmatic standpoint: what percentage and what types of patients did we reach, who did we reach, things like that. So what I want to show is one of the important things about RE-AIM: utilizing the data that is already collected, especially when we're talking about clinical implementation. Here's where we put the RE-AIM framework around the data already being collected for the clinical process. For reach, they're already looking at the total eligible patients daily, weekly, monthly. They're also looking, all the time, at the patients who are actually approached versus those who are eligible, and at the patients who are tested or who declined. This other column here is the additional information that could be gained, with a little extra time and effort, out of that same data that is already being collected for clinical purposes. So we could easily get the representativeness of those who participate versus those who don't. For effectiveness, they can look at the number and types of results that come back; they're already looking at that, and with a little extra time and effort, we can get some more information. For adoption, again, reach at the clinic level: both clinics are participating, but we can look at whether there are staff who consistently do or don't offer the testing, and with a little extra work, we can look at the representativeness of and differences among those staff. For implementation, we can look at fidelity, how well the whole workflow process is adhered to, and what adaptations need to be made. And they're tracking those adaptations over time. I just mentioned two that have been made so far, but they are tracking the adaptations that are made, when they make them, and why. So all of that is being collected and can be reported on.
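As a sketch of the "little extra time and effort" representativeness check, one could compare the distribution of participants against the eligible population from that same already-collected data. The age bands and counts below are hypothetical, chosen only to illustrate the comparison (here, participants skew older, as in the MyCode findings mentioned earlier):

```python
# Hypothetical counts by age band for all eligible patients vs. participants,
# illustrating a simple representativeness comparison (not real data).
eligible     = {"<45": 4000, "45-64": 3500, "65+": 2500}
participants = {"<45": 1500, "45-64": 2000, "65+": 1800}

def proportions(counts):
    """Convert raw counts to proportions of the group total."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

p_eligible, p_part = proportions(eligible), proportions(participants)

# Report over- or under-representation for each age band.
for band in eligible:
    diff = p_part[band] - p_eligible[band]
    print(f"{band}: eligible {p_eligible[band]:.2f}, "
          f"participants {p_part[band]:.2f}, diff {diff:+.2f}")
```

A positive difference means that band is over-represented among participants; this is exactly the kind of summary that falls out of data the clinic is already collecting.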
And then, again, while we're not looking at maintenance at this point, we are considering how we can sustain it and how we can scale it up. So, in summary, for clinical implementation, or what's usually called the pragmatic use of RE-AIM: the focus on the real world enables utilization of the data and outcomes already available in the clinical setting. So instead of going to our clinical colleagues and saying, you have to record and manage a whole bunch of data you don't really want to collect, we can help them think through the questions they're already thinking through and put them into the framework. RE-AIM also helps us look at the iterative evaluation and the adaptations over time that they're doing as part of this clinical implementation. And one thing I want to point out, a recent learning: using these pragmatic questions, thinking through the who, what, where, when, why with them, seems to be really helpful for those clinicians and innovators and champions who will sometimes emphatically tell you they don't do research. They still think in terms of who are we reaching and why are we doing this, and that fits perfectly into the RE-AIM framework. And what's good about that is, because RE-AIM is an accepted framework, it helps with the reporting and the scientific rigor, which researchers can then use to help identify what interventions need to be made. And if more clinical implementations are presented in the RE-AIM framework, you could do meta-analyses, things like that, over time. So again, overall with RE-AIM, each dimension provides an opportunity for intervention. It can be used in any type of study: an observational study, an effectiveness study, or an implementation study.
All dimensions can be addressed, and in fact they should be addressed in any project, but you don't need to intervene on all of them at any given time. And if you look at the RE-AIM website, there are methods, like I showed with reach, for reporting on and summarizing each of these constructs in a methodical and systematic way. So as an outcomes framework, it's also useful for planning and for ongoing, iterative evaluation over time. And here again, the RE-AIM website is the most useful resource out there for RE-AIM. You can also get to a longer version of this presentation, and some of the others, from the website that Lori mentioned as well. Any clarifying questions?