Thanks very much. I believe the last time I was at this gathering it was the second of the genomic medicine meetings, and it's because of all the work that all of you have done that some of the things I'm going to talk about, and some of the advances in our thinking, have been possible. So thank you for having me back. Very quickly, I want to make sure we're on the same page, or at least give you the perspective of many of us at NIH who are engaged in implementation science: what challenges are we grappling with, what activities are we trying to deal with, and where is the field moving? The summary is that this is a great time, because I think genomic medicine has a chance to lead. It gets beyond something that, I would argue, implementation science has been stuck with: the assumption that we know everything about the evidence and then we implement it, which isn't necessarily true in any case. So I think we have a chance around this table to lead a broader community, but I also wanted to make sure we're on the same page about what I mean when I say implementation science. By show of hands, who in this room has seen this particular slide? Okay, a couple of folks, but maybe many have not. This slide summarizes work from 2000 that basically asked: let's assume that implementation happens after we complete our study, complete our publication, and submit it for peer review. What happens then? What you can see here is a rough publication pathway. On the left side, you see all the ways in which we lose that valuable evidence. Maybe we have a negative result, and it's hard to get a negative trial finding published. Maybe it doesn't get picked up in various reviews, guideline development, or textbooks.
And as a result, that valuable evidence doesn't make its way into implementation. The right side just approximated how long each of these steps would take. The tag line, and I'm guessing by show of hands people have seen this one, is the 17 years, 14 percent. So if anyone hadn't seen this study but has heard that it takes 17 years for 14 percent of original research to benefit patient care, this is where it came from: the summary that we lose the vast majority of our evidence before it gets to benefit patient care. The other tag line, by the way, is that it was 17 years to turn 14 percent of original research to the benefit of patient care, and the threshold set for "implementation" in that study was only 50 percent, so even then you were as likely to get an evidence-based practice as not. Just think about that. The challenge we have calls to mind a famous saying, "a blank needs a blank, like a fish needs a bicycle." It calls to mind this idea that we create a lot of bicycles, and we don't necessarily think about who we are creating them for. We may find that those who are demanding something to help their health or their health care are not those who are able to use the things we are designing. So we have to think not just about how we get these various tools, these various innovations, into care; we need to think about the relevance of those innovations, and particularly how they are designed, so that they can be used by the range of different stakeholders. So let's say we have a genetic test that can help us identify optimal treatment for a particular illness and reduce risk. What happens if we have that perfect test, but only half of our insurers choose to provide it, choose to reimburse for it? And let's furthermore say that only half of our health systems, and again, remember, this is that 50 percent that constituted success in that pathway.
Let's say only half of our health systems, and that would actually be pretty good, choose to train clinicians to incorporate this into their practice. And let's say only half of the clinicians at those health systems are in a position to prescribe that test, and that they get to half of their patients. Again, this would be pretty good for most of health care if we got this far with any evidence-based test, because often we don't do nearly as well. Even if, in this scenario, we have none of the problems we know exist around access, testing, and follow-up, we're down to a fraction of the benefit that test could actually have. So the likelihood that we can deliver optimal treatment based on this ideal, this perfect genetic test, loses a lot of luster when we start to think about the different elements of the system we need to address, with regard to implementation, in order to have a better outcome. This is basically the argument, and you'll hear a lot more in a little bit from Alana, for RE-AIM: while we've made progress in moving our research beyond traditional efficacy trials into effectiveness, that's only one piece of the broader impact we need to have. We need to be thinking about how well our interventions reach our target population, and what supports we are providing so that organizations can adopt these interventions, use them effectively, and, importantly, maintain them over time. It makes no sense to implement something that stops being used a few months down the road, particularly if it's effective. So it's this RE-AIM framework that Russ Glasgow and others developed that helps us think beyond just the evidence around the test.
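The halving scenario above is multiplicative, which is why the "perfect test" loses its luster so quickly. A minimal sketch of that arithmetic, using the illustrative 50 percent rates from the talk (the step names and rates here are hypothetical, not data from any study):

```python
# Each delivery step reaches only a fraction of those who could benefit;
# the fractions multiply, so overall reach collapses quickly.
steps = {
    "insurers reimburse the test": 0.5,
    "health systems train clinicians": 0.5,
    "clinicians able to prescribe": 0.5,
    "patients actually reached": 0.5,
}

reach = 1.0
for step, rate in steps.items():
    reach *= rate
    print(f"after '{step}': {reach:.1%} of patients still benefiting")

# With four 50% steps, reach = 0.5**4 = 6.25% of the test's potential benefit.
```

Even with rates that would be "pretty good" for most of health care, only about one patient in sixteen sees the benefit, before accounting for any access or follow-up problems.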
So, the terms I'm mentioning here, just to make sure that people are on the same page with how we at the NIH have defined this space. Implementation science has been the broader term for the study of methods to integrate research findings, evidence, and interventions into health care policy and practice. We at NCI and others define health care policy and practice pretty broadly: basically, the settings where people are trying to maximize health and receive health care. At the NIH we've done a lot of work around dissemination research and implementation research, because traditionally we haven't done either of them all that well. Dissemination research recognizes that we need evidence to be integrated into the practice of different audiences, and that we need to understand how that knowledge is best spread and sustained, because passive dissemination efforts are often insufficient and, particularly as technology changes, we need improvements on that front. Implementation research focuses on how we get these different tests, tools, and interventions integrated within clinical and community settings so that we can actually improve patient outcomes and benefit population health. So, a couple of examples. I assume everyone around the table is familiar with Lynch syndrome. Hopefully many? Okay, so some, maybe not everybody. Lynch syndrome, for us in the NCI world, represents a recognition of greatly increased risk for colorectal, endometrial, stomach, and ovarian cancers. But it's not just that we are able to identify the individual who has colorectal cancer and carries the mutations that correspond to Lynch syndrome.
For implementation science, it's how we take that knowledge, that ability to identify that person, and scale it up among all of those who have colorectal cancer, which doesn't currently happen. Furthermore, since the gain is not only for the individuals already suffering with cancer but also for family members who might be at higher risk, how do we scale up testing to those at that broader risk and get to them earlier? But not just that. Implementation science says that once we identify those folks, we need to be thinking about what to implement in terms of the screening, monitoring, and treatment that correspond to that identification of increased risk. And importantly, on the implementation science side, we need to be thinking about the capacity of our workforce and its training needs: how do we make this standard practice if the current understanding of Lynch syndrome, and the things that need to cascade from it, aren't already in place? Second example: precision medicine. You've heard a lot about the early success in launching this broad cohort across the NIH, the All of Us cohort, so that we can learn incredible amounts about a million or more people within the U.S. Here we have an opportunity not just to think about the discovery aspects, about what we can learn about the population that may someday help folks in terms of their health care and their health. For All of Us, and for precision medicine overall, we have the opportunity to think about how precision medicine findings can be incorporated into clinical practice in a steady stream. The traditional way of thinking is that we complete our research and then we think about implementation. But with precision medicine we now have a new opportunity to ask how we get these things implemented, understood, and optimized as we continue to learn.
But importantly, how do we implement, and this was Robert's question before, evidence that is still evolving? How do we think about training and supporting a workforce that is ready and primed, always asking what we have learned most recently that can improve practice, improve health, and improve health care for the population? And importantly, as was mentioned before, how do we understand and plan for the necessary reimbursement and coverage, so that anybody can benefit from these findings? Third example: personalized medicine, or something broader, because depending on how we define what it is we're implementing, and I'll get to this in a moment, we start to think bigger about the picture, the task, the challenge in front of us. Okay, so this is Eva; she's about six months old in this photo. She's developing, she has some GI issues, but her parents start to get concerned when she misses those early developmental milestones. The pediatrician is the first place to go. Pediatrics over time says we need more expert understanding of what's going on here, so she gets referred to diagnosticians. They recognize that there are certain symptoms that need to be dealt with, and so a range of different specialists are involved. Over time, the county's infant and toddler services jump in with potential services. There are public and private therapists who do their best to understand, symptom by symptom, what's going on. Over time, there may be behavioral analysts, because there may be certain aspects that behavioral training is assumed to help with. There's dialogue with the insurance company over time. There's an educational system that comes into play, first in terms of preschool and then of course more as she gets older.
When there are disagreements, sometimes the legal system has to be involved, and certainly over time, as this child has some pronounced communication and, in this case, movement disorders, different equipment manufacturers come into play. All of this is a cascade of care around this baby, and this is personalized: it is different for her than for another child, and another, and another. The question for you is: who is ultimately in charge of implementing that care plan? Any guesses? Okay, no one. If there were someone, who do you think would be expected to take care of this? Okay, I heard parents. Okay, mom and dad. Here's mom and dad, right? What are mom and dad dealing with? A lot of implementation challenges. They're dealing with challenges around finances; around their own training and the training of all those providers they're trying to make sense of; around time; around information exchange across this cascade; around adverse events that may occur; caregiver burden, which they're acutely aware of; sometimes opposition among clinicians or between systems; multiple diagnoses, multiple disorders that may be competing for attention; travel challenges; varied opinions, because not everybody agrees on what's best for this little girl; and of course major challenges in terms of access and policy. So what this gets me to is that, depending on what we consider to be the thing we are implementing, we have to draw our circles wider and wider. We need, though, to start by asking the question of the what. Are we talking about the test? Are we talking about the efforts needed to make sure those test results are effectively disseminated or effectively implemented? Is it the monitoring, the follow-up, the preventive care, the treatment that needs to follow, that takes advantage of the information we gained?
Or is it all of the above? We don't have to set this boundary in the same place in all cases, but it's really important that we don't just jump over the what. So when we think about studying implementation, again, we're trying to draw the contrast between the what, which you'll see on the left here, the empirically supported treatments, evidence-based practices, quality improvements, and tests, and, on the right, the health outcomes we're ultimately trying to affect. The usual trial focuses on that left end and that right end: we enroll a sample of patients, we deliver a particular intervention, and then we look at the health outcomes. Importantly, we want to make sure we don't jump over the how: how do we get that intervention, that test, to be delivered? And importantly, we need specific implementation outcomes. We need to be thinking about how acceptable and how sustainable these different interventions are, how faithfully and with what level of quality we can get them implemented, what the associated costs are, and what level of uptake we achieve. When we think about it, again, we're drawing a contrast: we've hopefully done some work on the what, but we're now focusing on how we make that standard care for everybody. The argument is that if we do this well, we should have knock-on benefits for our service systems and for health, not just one person at a time but at the population level. And of course this is back to what the NIH mission is about: transforming discovery into broader health for everybody. Thanks to folks in the room and others, the paradigms for dissemination and implementation science have moved over time in good ways. For a while we talked about a generally linear pathway from basic research to clinical research and then into clinical and community practice.
Over time we've recognized many more translational hurdles and more reciprocal opportunities, although we still don't do a great job of making sure that what we learn in the clinic and in community practice helps shape the next round of discoveries. But the point is that you all, we all, I think, have collectively made some progress. You'll hear about a number of different models. This is just to say that we did a review back in 2012 and, at the end of it, came up with 61 different models, frameworks, and theories focused on implementation science and on understanding those processes. Some are more about dissemination of information; some are more about implementing interventions; and some go from the individual all the way up to how we change policy. But the point is we don't have to create these anew. Here is just one of those. This is called the EPIS framework. It would be amazing if you could read it from wherever you are. It happens to be locally born: Greg Aarons, who's at UCSD, developed it with colleagues. It basically says that there's a lot going on in the local setting and in the broader ecology around practice that is important, and that we need to think about those factors differently as we move through the phases: Exploring whether to introduce a new intervention, Preparing to get it implemented, actively Implementing, and then Sustaining it over time. Just so you know, NHGRI, NCI, and many others have been engaged in a trans-NIH set of program announcements around dissemination and implementation research. It's how we've tried to organize the D&I research agenda over time: more than 200 grants, but only a fraction of them have really been around genomic medicine implementation. So it would be great to see more in this area.
And program staff, Ebony Madden from NHGRI, is the key contact, but any of us are happy to talk with you about your concept papers and your specific aims, because we recognize how much more we can do in this area. A couple of the key priority areas: we'd love to see more on local adaptation of evidence-based practices. We know that tests and interventions are used differently in different settings, and we need to keep building the evidence base around that. We need to learn more about the sustainability of our different interventions, more about scale-up, and, increasingly and particularly in this space, if we recognize that current practice is not optimal, we can't just think about implementing something new. We also need to think about de-implementing what currently exists, and how do we study that? We haven't done nearly as much there. So, just to finish up, we have a number of ongoing implementation science training opportunities for anyone who's interested. TIDIRH, the Training Institute for Dissemination and Implementation Research in Health, has been our broad NIH one, and we have people in the room who have gone through it. NHLBI has recently had a whole series of institutional K awards focusing on implementation research. There have been other R25 research education grants in cancer and mental health that have also tried to build a new cohort of investigators, and we recently launched the cancer version, TIDIRC, very creative, the Training Institute for Dissemination and Implementation Research in Cancer. We would love to have more of you join us in December for the 11th, it's the year of elevens, the 11th Annual Conference on the Science of Dissemination and Implementation in Health. It has a whole area on the future of implementation with precision medicine right up front. We'd love to see you engage there.
And then finally, it's been very exciting to have the Cancer Moonshot, launched a couple of years ago, because it recognized that implementation science is important, and that genomics and implementation science, as an intersecting field, are important. So we've had an RFA, it's back on the street right now, focusing on that cascade screening and what we can learn related to it. A couple of years ago, we did a thought piece in JAMA that said: rather than thinking about precision medicine, implementation science, and the notion of ongoing learning within health care systems independently, maybe we should figure out how to braid these pieces together. So, final slide. Eva, who is now eight, is in the center there. Her brother Jordan, who is ten, is to the right. This is why I'm so engaged and so inspired by the work that you do, and why I want us to do more. And I know all of you have your own reasons to be inspired as well. So thank you very much. Thank you, David, for a great kickoff talk for this session as well as for this meeting. Unless there are any clarifying questions, we're going to have an interactive period after the three talks with all three speakers. So are there any clarifying questions for David before we move on to the next talk? I always have one. You have a clarifying question. I do. Okay, Terry. So, David, you mentioned implementing when the science is evolving, and obviously the evidence is always evolving. Maybe we can save this for the discussion, but could you think about, or help us think about, how we justify something not being too preliminary or too tentative to start to implement? Sure.
So, very quickly, and giving credit where credit is due, because Laurie, who's about to jump up, said this to me while we were listening to the opening discussion: we've seen an increase in the number of hybrid trials, where you're asking questions about the effectiveness of the underlying intervention simultaneously with questions about implementation. And that can take different forms, depending on how uncertain the evidence is. It can range from focusing primarily on the what, on whether there is evidence for this particular intervention, while gathering early information relevant to implementation; to a more equal focus on the effectiveness of the intervention or test and on implementation; to a design that skews so the primary focus is on implementation, but there's still a need to check back on the evidence base. So that's one of the things we can talk more about, if you'd like. Thank you. And actually that question came up in some of our pre-discussions before the meeting with the three speakers. So thank you, David. We're going to move on to the next talk, by Laurie Orlando from Duke University, who's going to speak on frameworks, models, and genomic medicine. And I noticed during David's talk that people were taking pictures, which is fine. The chairs haven't said this, but I'm going to assume that all speakers will provide, if they're willing, copies of their talks, either edited or in PDF form or whatever, so that we can. They're available essentially, you know, with the video. Yeah. But I just wanted to make sure that people understood that they didn't have to take multiple pictures if they didn't want to. Laurie.