All right, we have about 30 minutes for discussion and Q&A. What I thought I would do is use a system where if you put your card up like that, it means that you want to speak, and I'll do my best to get to you in the order in which you did that. Since my card is up first, I'm going to take the opportunity to pose a question, and well done in putting our three speakers together as a mini panel. Thanks to NHGRI, we have a really prolific ecosystem of genomic medicine implementers with IGNITE, eMERGE, NSIGHT, and CSER. And at least in IGNITE, I know we have 17 affiliate groups, and I'm sure there are many others across the other networks. And then there is implementation going on that we simply don't know about, and there's a global community that's implementing genomic medicine. So my question is really, how do we harness the knowledge that's being gained across those communities in a way that allows us to be more effective going forward? And I'd also like to pose the question of what action we can take coming out of this meeting that will give us something tangible to begin harnessing that wealth of knowledge. I don't know if David, Lori, or Alanna want to respond to that.

So I think often getting folks together to talk about what common measures we can identify that we could potentially use across different observations, across different studies, would be a great thing. At the NCI, one of the things the Cancer Moonshot gave us was the impetus for thinking about big data and what a cancer data ecosystem would look like. And the broad discussion has been primarily at the individual biologic level as opposed to the system level.
I would love to see getting folks around the table and in the broader community together to say, what would that ecosystem look like that enables us to capture similar data across different health settings, across different populations, et cetera? I'm not sure the degree to which that's happened to this point. Folks around the table might be able to tell me, but I would say that's a logical starting point, because until we get there, we're all measuring different things at different times and we can't pool it together as well.

So I know a number of the networks that I mentioned are also developing standards. I'm not completely convinced that they're working across one another, to your point. So thanks for that. Any other comments from Lori or Alanna on that? We've got a lot of other questions around the table, if not.

Yeah, I completely agree with David's comment. That was the whole point of what we were doing with the IGNITE network: are there things that are common to genomic medicine implementations, regardless of what the specific intervention is, that we could measure? I showed you the list of constructs, and we were actually able to get a lot of those from our sites once we were able to sit down and talk about what they meant, why they were important, and how we would collect them. We were actually able to do that, which I think was a remarkable achievement given that we didn't go into that grant with that idea, but it evolved over time. And I think that could happen at the broader community level as well, for sure.

Yeah, so again, the same thing holds true as we've started putting a framework around the eMERGE network as well, because everybody is implementing the return of results, but using slightly different processes, adapting it to their own setting. Putting a framework around it allows us to look at that across the sites instead of it looking like everybody's doing something totally different.
Thanks. Okay, so I've got Rex, Bob, Howard, Bob, these two guys over here whose names I can't read, and then Stephen. So let's start with Rex.

So one of the things we've heard several times in the presentations was this idea that if you've seen one implementation, you've seen one implementation. And we're talking about implementation of genomic medicine, which often occurs in an N-of-one situation. So I'm struggling with how you match the N-of-one environment and the you've-seen-one, you've-seen-one paradigm, and then how you go about implementing. Maybe the answer is these common measures that you've just been talking about, but it still seems to me that's one of the biggest challenges we face: how do we match this N-of-one world with a paradigm that allows you to make cross-system comparisons?

My first thought would be contextualizing it, right? So your N of one is probably pretty similar to somebody else's N of one, which is similar to somebody else's N of one. And if you were to put the framework around it, so that you measure these things and I understand what's going on with that particular patient and how that pathway happened, other people who are in a similar situation would be able to take advantage of that. Now, the adaptations might be slightly different from individual to individual, but with some common understanding of what's happening, they will converge over time.

Yeah, and I think there's also a lot to be gained from people trying quite similar strategies in different settings. So there are common ingredients of context that can be measured, and there are also common ingredients of the approach that individuals are taking to trying to get this taken up, setting by setting by setting.
And so I think there is convergence, in the same way that these 61, now 70, 80, 90 different models started from different places but are often saying the same things, based on observations of a variety of different settings, illnesses, clinicians, et cetera.

Just to follow up on that, do you have any sense of what N is needed for this convergence to actually occur?

So I think it's always gonna depend on what's the question that we're asking. What's the N? It depends, I suppose. But again, in the same way as we're used to powering trials that focus on individual patients, we have trials that are powered focusing on other individuals, providers; we have trials that are powered at the clinic setting. The challenge, of course, is that at some point our N of one becomes the universe, and it's hard to randomize the universe. Maybe people will try.

I would just expand on that a little bit in saying it's sort of a different way of thinking. Rather than discounting that N of one, by using standard frameworks and reporting structures and outcomes, and comparing those N of ones, as David just said, until you get your bigger universe, that's the point of looking at it this way: so that that N of one doesn't become just, oh, that N of one, we can't use your data. That data is just as important as this other N-of-one data and this other N-of-one data, and that's what's gonna move the field forward faster and help us implement genomic medicine, or anything, more efficiently and effectively.

Let's move on to Bob Nussbaum.

Those were wonderful talks, I learned a lot, thank you. I was struck, David, by the comment you made about who it is that coordinates all this care. And what I was struck by in all of your presentations was that they were very dispassionate, but the patients were all passive receivers of interventions, and I didn't see how the patients were actually being factored in as active movers, because they are.
Yeah, I think it's a wonderful point, and again, one of the challenges is that sometimes we have days and weeks to talk about these things, and so in 20 minutes one is selective. But there are other ways in which exactly that point has been front and center. I've used this cartoon of a person selling snow cones in a very cold place, and someone else taps them on the shoulder and says, it's not just supply, it's supply and demand. It's exactly to make the point that too often we make the assumption that we know exactly what's needed, as opposed to starting with the person who presumably all this is intended to benefit and asking, what is it that would be most helpful? So, yeah, it's a great point to say that it was an end run around the patient, I guess.

And I would also say that while a lot of our talks did focus on the more organizational and setting level, part of the point of these models and frameworks is to remind us to move beyond the focus on just the intervention being effective at improving a health outcome for an individual patient, and to look at whether it is also feasible in the context of your healthcare setting. So it's not an either-or; it's all of it: the patient and the setting and the providers and the staff and the broader environment. And PCORI is doing a lot with that, making sure that patients are engaged in this process, so they're another place that we can look to.

Thanks. Howard?

Thanks, Jeff. Thanks for making me hungry for a Brazilian steakhouse, where when you put your card up, it means you want another portion.

Oh my God, I was wondering where that was coming from. I know we have a break coming up anyway.

Yeah, you can have chicken, though you can skip the chicken hearts. One of the things in oncology that we've had a problem with is that rapid adoption is usually more driven by economics and champions than it is by process.
And we suffer from that in terms of almost everything we have now on the therapeutic side. But how do we move away from a semi-panic implementation, which is, let's get something going, which is why we have so much disparity even within an institution, towards something that's more process oriented? And I think the frameworks are nice if you have the time to sit there and talk them through. But typically a study comes out, KRAS testing is now mandatory if you want to prescribe cetuximab and get it paid for, and therefore it is done. And certainly precision medicine is just full of that sort of thing. I'd love to get your thoughts on how we practically move away from this lack of strategy.

That's an easy, no, an incredibly challenging question. So a couple of things come to mind. One is, to what degree can we at least use these rapid experiments to learn from? How can we, by studying the ways in which we are doing the best that we possibly can given these external influences, learn from them in terms of what's going well? Usually it's a longer process than that initial step, so I think we can use those natural experiments to try and learn from. But the other thing, and that was that very quick shot of the learning healthcare system and precision medicine and implementation science coming together, is to think about, over time, how our systems are built to take on these things, which I don't think we've always done such a great job of. It may not be the latest crisis or the latest thing, but to some degree expecting that the next thing we have to deal with is gonna pop up in the next couple of months pushes us away from just saying, oh, this is a crisis that's already on us.
So again, I think there have been other industries that have said, how do we re-engineer our system so that it is able to account for the uncertainty that's coming in the future, and I don't know that we've done that as well. So those were just two quick thoughts.

Actually, your second point was exactly what I was gonna say: you put your infrastructure in place because you know you're gonna need the ability to do specific genetic tests in order to drive the chemotherapy for your patients. You put the framework into place, and we know that champions are great, but they are definitely not sufficient; on their own, it just makes things very scattered and dysfunctional. So if you understand the entire ecosystem, you understand what it takes when you need to add a new genetic test. Who in the lab needs to be involved? How does that process happen? Who needs to have the buy-in? If you have that whole process thought through, then each time they come out with some new paper that says, by the way, you need to add this, you already have an implementation infrastructure in place and the buy-in from the health system, and you're gonna be able to adapt more quickly. I'm not saying it's gonna be easy, but you have a process for adaptation that's at least controlled.

That's great for the places that are in this room. I'm thinking of, at our place we have an implementation department, and it doesn't matter if it's a new scan or a new test. But most care for everything, not just cancer, is out in the community, and there just is not the time to think, much less do.

Yes, but they're also not doing KRAS testing most of the time.

No, they are, because they don't get paid otherwise. Precision medicine has been adopted in the community way faster than in the academic setting because of these other drivers, so that's some of the challenges that are there.
So I would add on to that, and my question to you would be: do you have the infrastructure in place so that you have at least some of that data? Because you have to know which patients got the KRAS testing so you can get paid for it, do you have the ability to pull that and say, yes, we know that all of our patients are getting KRAS testing, or that they're not, so you know that there are some providers who still aren't doing it, who apparently don't want to get paid for it? But again, the other half of this is the evaluation. Using that data that you're already looking at, because you're already saying, okay, if I have to use this to get paid, I have to know that my patients are getting it so that my community setting is getting paid, you can report on that and what's happening. And that helps the broader community with the one thing that David touched on too, which is de-implementation: what isn't working?

Thanks for that. One take-home message I got out of that discussion was how much financial pressures may be a significant driver of implementation, particularly in the non-academic community. Next is Bob Olden.

Thanks also for all those wonderful talks. I'm really interested in digging into this deeper, and I'm so glad to see the links and so forth on the slides. I recently was reading a paper where an implementation framework was used, and it seemed very complicated and challenging to me because I'm not used to it, and I began to think about what the evidence is that a framework like this actually helps things move forward as opposed to, say, guided chaos. And I think if I was trying to do something in my own institution and went to a clinician and said, I need you to participate in this because it's a systems thing, they would say the same thing they say when I go to them and say, let's do pharmacogenomics: what's the clinical utility? So that's my question.
What's the evidence? One of the things I wanted to say to your question is that there's a lot of evidence that the frameworks, and the constructs within those frameworks, work and are useful for implementing evidence into practice, or implementing any change. But one of the other things you brought up was this: you go to a clinician and they're gonna tell you, I don't do research, I don't wanna do this. And I think that's why there's been so much work on the pragmatic use of RE-AIM. Full disclosure, RE-AIM is my first implementation science language; I worked with Russ Glasgow, so I've sort of ingrained it into my being. But it's very simple, it's very approachable, and they've developed over time this more pragmatic structure to facilitate that conversation. Rather than going to a clinician and saying, hey, we need to use this framework and look at specifically these outcomes, it's just questions: who are you trying to reach? What is the benefit you're trying to create? And you can then show how everything that they wanna do fits in this framework, and here's how we can turn around and present it so that it's useful information not only for you clinically but for the researchers and the broader generalizability of it.

I would just say there are over 100 studies that have shown benefit from using an implementation framework; it's very heavily used in mental health. The ICU infections work is sort of the perfect example of how that's useful. Also, there have been a number of cost-effectiveness studies for blood transfusions where they used an implementation framework and greatly decreased the cost for the health system. So I'm happy to give you a whole bunch of them, but there are a lot out there. They just aren't always tagged as, this is an implementation science study, and so you don't always know that the benefits were due to that.

Okay, I've got Peter, Lincoln, Steven, Dan, Bruce, Terry and then Casey. So, Peter.
Thank you. So great to start with implementation science, and this dovetails a little bit with Howard's comments that a lot of this is happening at the community level, and that includes our organization. As we try to implement different pathways, we've been trying to bring more rigor to that, both internally with implementation science, and it's interesting too with this concept of de-implementation, because we try some things and realize, hey, that didn't work the way we thought and we need to pivot a little bit. My challenge is, we do have some internal funding and we're looking for external sources, but the typical NIH cycle for funding these types of projects, or projects in general, really isn't realistic for an organization like ours that needs to make a decision: hey, we need to shift gears because of other pressures, as Howard mentions. Is there a novel way, or a thought process in this roadmap, where there may be other sources of funding in this space to help us bring this information into the general academic knowledge as well?

So, great question, and I think even when some of our institutes and centers at NIH have done these sorts of rapid mechanisms, it's not as rapid as we would really like. One of the concepts that we've presented publicly, so that we can talk about it, as part of this Moonshot effort, is to create these broader implementation science centers, and I mentioned on one of those slides that precision medicine might be a really lovely example of where this could be particularly useful because it's evolving so rapidly. Part of that is the idea of setting up natural laboratories to study implementation, and it would mean thinking about what kinds of community or clinical sites might be willing to partner together and with researchers, and have much more fluidity in terms of being able, as a center, to identify pilot studies and move things forward.
So that's along those lines, and I think it's an ongoing challenge for all of our agency folks to say, how do we balance between the work that needs a little bit more time to be fully fleshed out and the emergent need that comes up in community and clinical settings? And it's probably useful to any of our strategic planning efforts over these days.

Just along those lines, I know that Muin is not here today, but he published data a few years ago suggesting that the funding of T2, T3 and beyond research at the NIH was 2% or less, maybe even at NCI. How has that changed, from your perspective or perhaps Eric's? Has there been a shift to the right, in a sense, of the funding along the translational pathway?

So I guess the quick answer is, we have seen increases broadly in terms of funding. We do portfolio analyses that look at the trajectory over time, and certainly we've seen increases, as well as more people whose primary question may be around the effectiveness of an intervention but who include questions about implementation. So it's both expanding the studies solely focused on implementation and expanding the implementation focus within other areas. But it's still gonna be relatively small, and the biggest driver, from my experience, has always been: what's the denominator of studies coming in that we can fund? Across NIH we typically have a success rate and typically percentiles. The higher the denominator is in a given area, the more ability there seems to be to fund studies; but if we have too few studies, or if people say, I'm not sure this would get funded, and don't apply, then there's no way they can be funded. We also do have a standing study section that specifically focuses on implementation, and so that's been an effort to try and drive more. I don't know if others wanna make comments, but typically, the larger the size of the community, the more work we can get done.
Yeah, I think to date much of this has been institute initiated rather than investigator initiated, for many of the reasons that David elucidates: the peer review system sort of isn't ready for this, or isn't seeing it as ready for prime time. And that's a real challenge, but the only way to get it there is to keep knocking on the door.

Right. Lincoln?

So there's been this ongoing debate, and we even talked about it this morning, about how much evidence is needed to really justify implementation of genomic medicine in various verticals, and I'm increasingly suspicious, and I think others are, that there isn't a single randomized phase three study we can conduct that will convincingly result in the data we need to implement genomic medicine. Increasingly, I think it almost has to be something of an executive strategy at some of these institutions, where they decide, we think the aggregate data is sufficient that we're gonna do this for patients. And so I'm really intrigued, Alanna, by what you had to say about the experience at Geisinger, using the 59 ACMG genes, and that you're gonna be sequencing patients and targeting just the GHP patients initially. Did you go and have a conversation with the GHP leadership? How did you decide, I'm guessing, that GHP is gonna pay for that, and what was that conversation like, and how did you decide on that strategy to target those patients first?

So this was a strategy decided by the group that wanted to implement this program, to try it and see if it could be done. And the health plan had said, well, we need to take on that risk, because we can't ask other health plans to pay for this if it doesn't work. I don't know if there's a better way to say that.

Yeah, it's actually beyond the health plan. It's Geisinger employees who are covered under the Geisinger Health Plan.
And I think the piece that we forget about is that most of the insurance decisions made in this country are made by employers. Employers like Geisinger that are large are self-insured, so they're actually their own insurer. And so when we think about this, we think about where the biggest return is, what we call the sweet spot. If a health-related issue might impact a person's ability to be at work, or to be effective when they're at work, the presenteeism and absenteeism, that's a cost to the system. There's the actual medical cost, the insurance cost to the system. And then there's the personal cost to the individual and to that individual's family. So we think that if we can operate in that sweet spot, that's where we can show the biggest return on the investment. And that's why the decision was made, when we wanted to launch this clinically, to go to the health plan. The health plan has something called a quality initiative. They recognize that when the system provides better care, they actually run a higher margin. And so part of their contribution back to the system is to say, we want you to do more of this. We think that this is a virtuous cycle, so they invest in it.

And this gets to Peter's point too, which is, we don't look to the NIH to do this sort of rapid cycle, because they're not set up to do that. If we have an interesting research question, then we'll go to NIH. But this is an institutional commitment, which I think, Lincoln, you were also alluding to. We have to invest in this, but the payoff is that if we actually invest in this, we do it better, and we can actually measure the return to the system. And piecemeal implementation always costs more than thoughtful implementation from the system perspective.
So in some ways it's a cultural change that we've been through, and Intermountain has been through, and that other systems haven't quite gotten to yet. But that's the way we approach it, and that's the reason we have willing partners who say, we want to go ahead and move forward with this, because the research project that you'll hear more about this afternoon has convinced us that there's real value here. And I actually think this is a huge key to how genomic medicine can be implemented at scale. There isn't a single research study that any of us can publish that is going to convince everyone to go and implement this now; the barriers are really significant. So it becomes a question of, can you present sufficient data to your partners on the payer side that this is worth implementing, with the promise that you will measure outcomes and return those outcomes back to the payer? It does become that virtuous cycle, and then you gain enormous credibility and can begin to argue for additional implementations.

I think patient demand is gonna be a driver in this space too, because we certainly see it. Eventually, if enough patients are demanding this, right or wrong, we have to guide them, but that's gonna be a significant factor. So how do we educate the patient on the promises versus the hype?

That's right. And that's part of where the executive strategy comes in: executive-level decisions can be made based on consumer demand, right? And they're responsive to that.

Yeah, actually, in the first genomic medicine meeting, GM1, one of the key drivers that we found across the communities that were already beginning to implement genomic medicine was the fact that the C-suite, the CEOs or CSOs of those organizations, had embraced it and said, we're gonna take the risk. So, I've been told by the boss that I can go over time a little bit, since there are some changes in the schedule a little later today that will not make us late for anything.
So I'd like to see if I can get as many of the additional questions in as possible. Next, I think, is Steven or David; I can't remember, I didn't see whose sign was up.

Thanks, Jeff. Really awesome stuff, and I look forward to getting those slides. It really strikes me that this field of implementation science is ripe for NHGRI engagement. As I've listened and looked through some of the URLs that you suggested, it does seem to me that there's a lack of interoperability amongst these various frameworks. And at least for genomic medicine, it does seem to me, under this NHGRI remit, that there ought to be certain characteristics that define optimal framework choices, and something that might be a helpful offshoot of this conference would be a gathering to actually determine what the critical components of a framework for genomic medicine implementation are. It strikes me that we need both cost and outcomes, that those are really critical. We need both dissemination and implementation, and we need to stretch from individual benefits all the way up to policies. And as I look at the choices that there are, I'm not sure there's something that's really tailor-made. So I guess I'm supposed to ask a question. My practical question would be, given all of that: you've given us two examples of genomic medicine initiatives, in IGNITE and Geisinger, where you did choose a singular framework, with CFIR and RE-AIM. Could you explain how you made those choices, and to what extent you would agree with me that we need maybe a custom, interoperable set of guidance, in terms of where we as a community should go with this, as opposed to selecting one off the shelf?

So I will say, I completely agree with you, right? That's what we kind of did when we went through CFIR, and we said, oh, look, the patient is really missing from this part. And, sorry, I can't look; I know I'm supposed to be speaking into the microphone. That's really awkward, I'm sorry.
But so, yes, we completely agree that there needs to be some customization, and so we kind of tried to start that process. And I should have said that CSER also is in the process of doing this, and they were kind enough to want to learn from the work that we did in IGNITE, and so we've incorporated the things that we were doing in IGNITE with the things that they're doing there, to try to come up with a new, sort of updated genomic medicine model. And I know Carol's working on a paper that I'm supposed to be writing, so. Anyway, I think that bringing the communities together to continue to build this knowledge would be the ideal way to go. And we picked CFIR just because it was very broad and didn't limit the constructs that we could look at.

And so, to answer the last part of the question first, how did we choose RE-AIM? Sorry, I can't turn toward both the speaker and the microphone either. For eMERGE, it was because it helped to answer the question of, everybody's doing something different; how do we look at those adaptations and measure them across the sites? And so it was a framework that helps us do that. Same thing with the clinical sequencing: it was a way to show how RE-AIM fit, how we can take the questions that they're already asking and the outcomes that they want to use and put them into a framework that is then reportable, with the guidance to say, here are the things that you need to report, that you're already collecting data on, and how you can report them in a way that makes sense to you. So it was really giving a framework so that instead of it looking like the Wild West, like you were just throwing it out there, here's a framework that helps you really put some structure around it. And that's also what I will say about what's going on with the other networks using CFIR, all of this.
There are lots of models out there, and I don't know that it's a matter of finding the one model. It's more like Lori's checklist idea. The point isn't the checklist, the point isn't the model. The point is putting the structure around it and making sure the right things are measured and reported upon; the framework or structure that fits may vary based on your question or your study or the project that you're implementing.

I just have one other thing. Based on the conversations that we were having earlier, I think one of the challenges genomic medicine has is that it's very different from anything else. Most of these implementation studies are of things that are not expensive to do, handwashing in the ICU is not an expensive intervention, or of things that will decrease the cost to the health system, where there's no reason for a payer to be involved in the decisions around these implementations. I think it's very rare to be as beholden to the payer as you are in genomic medicine, because it's so expensive and there's such a large infrastructure that needs to go around it, and because of that uniqueness of genomic medicine, I think it really does take something at a higher level than the individual. So in your case, thinking about the community of people, if there was some sort of broader network or infrastructure that was available to them at a higher level, then when these things came out, they could just tap into it and say, okay, now I've got my built-in pathway to do what I need to do. That's a different level of commitment, but based on what people are saying, I think that's what you really need to make this work. That's just my thoughts.

Thanks. So we've got Dan, Bruce, Terry and Casey, and then I think we'll be at the end. So, Dan?

So I've been listening to the back and forth, and I'm not sure what my question is, but I couldn't resist the opportunity to say something.
So one thought was, my initial question was going to be to Elana, and that is sort of the mechanics of implementing the ACMG 59 in the Geisinger health system: how many people, who pays for it, how do you return results, what do you return, those kinds of things? I'll be going through that in a second. Okay. So I would echo what I think Lori just said, and that is that this appears to only happen in institutions where the leadership has drunk this Kool-Aid along with us, and each one of us has sort of a different twist on that. So you have your ACMG 59 and we have PREDICT, but the idea is that we're all sort of learning these lessons in a different way, as opposed to the cancer space, where there's this tight link to therapeutics: you can't prescribe expensive drug X without doing genetic test Y. So the problem I think we have in implementing genomic medicine from cradle to grave is that we don't have a single-payer system, we don't have a person who's going to be a steward of that genetic information across the span of a lifetime, and we don't know exactly what to use when. This is a problem in pharmacogenomics, it's a problem in the rare-variant genomic stuff, and so I think we're ending up being a fragmented community. Maybe one way forward is to have meetings like this where we decide as a community where we put our oomph. I'm not sure how to do that, but that's the fundamental problem that I see right now, and I wish more people would drink that Kool-Aid. Yeah, and we might also learn from other systems outside the United States. I hate to sort of be a complete pessimist, but there are systems that do have single payers, for better or worse, and that do have mechanisms to aggregate these very, very large data sets.
I mean, that's the other part of the cancer business: by making people contribute data to very large common data sets, everybody gets smarter about what variants do what, and we're not quite there yet, even in the pharmacogenomics space. Dan, I'll just jump in here, though, and say I agree with you completely, but the problem is those people are not in the room today, and we need to figure out how to address that, I think. I mean, we can all go back and be as persuasive as we're capable of with our guys, but yeah. Okay, Bruce, I think was... yes, Bruce, you're next. Yeah, thanks. It was wonderful. So I think the discussion has focused on implementation into the existing paradigm of medical care, and maybe that's the scope of this meeting, but I'm wondering where disruptive innovation fits into the picture. I guess that could be things like direct-to-consumer testing or app-based medicine or social media. And I suppose we could let natural selection take its course for those things, realizing it's possible we'll be the victims of it, or is there a way to harness the implementation science approach in this larger sphere, beyond just existing medical care systems? I think so. I think that all across medicine, most of the NIH institutes have studies that are looking at this particular app, or this newer approach, or direct-to-consumer advertising, in some limited scope, and we haven't necessarily thought about how to put those together. It's such a rapidly evolving and expanding base. And the idea that the best way to inform a population is through that initial peer-reviewed publication, if anyone still has that mindset, obviously we're a few years past that. So I think it fits entirely into the space. I mean, broadly, I think implementation is really trying to understand what is able to be supplied, what is demanded, and how we try to use all of the knowledge we have to drive toward better health and better health care.
And so I think it is fragmented. There are so many different ways in which we can approach this. It's also understudied. I think we don't do nearly enough in our NIH portfolio to focus on dissemination and how information is transmitted through a whole range of different technologies, as we should. In cancer we have a health communications branch; most of NIH doesn't really think, at least with as vibrant a portfolio, about the ways in which communication is changing, the need for care, the request for care, et cetera. So yeah, to me that's all part of this space. Okay, Terry. So I want to agree with what David said and maybe build on it a bit. We ought to think a little bit about what we're asking for, because it seems like early this morning we all sort of decried the fact that payers aren't paying for this, and so we need to do research to show them that they should pay for it. Is that really the NIH's role? And then we heard from Howard what happens when a payer mandates something: there's this whole infrastructure that needs to be built. And so if the holy grail is actually getting people to pay for it, do we also need to think about what happens once we have the grail, and how we drink from it, in order to get this to actually work for patients' benefit? Any thoughts on that, David or others, or Howard? The first time I ever thought about that, it was: oh my goodness, once they're paying for it, then we have a whole other set of problems to deal with. Yeah, I mean, I think again, from our observations across a whole range of different health topics, we have a lot of things that are paid for and people don't get them. There have been tables we've sat around where the issue has been: all we need to drive for is that payment decision.
But particularly as we cascade outward and ask what it is we actually need to provide in order to make this information used to the benefit of people, it's far more than what we often start with, right? So yes, I think we see the economic aspects as necessary but insufficient, and if we assume that's our one audience, then we miss all of the other ways in which care is suboptimal and health is suboptimal. And I think also we can use this to our advantage. There's been a massive improvement in our region for Lynch screening, and the driver is not the implementation of Lynch screening; the driver is that MSI testing allows you to get immunotherapy. And so now everybody is getting Lynch screening not to find Lynch but to find eligibility for immunotherapy, which finds a bunch of Lynch patients. So can we harness the evil for good, if you want to call it that? If I knew how to do it, we'd already be doing it, but there are some opportunities emerging to, maybe inadvertently, achieve our goals through a process we didn't expect. If I can just add to that, though: I think one of the things we consistently encounter, and maybe it's an oddity of an academic health system, is that we're all the true believers, right? But then you try to go to the primary care providers, and it's really hard for them to even think about how to do this. We've all done experiments in our health systems where we try to implement pharmacogenomics, for example. You're biasing the 4:30 debate.
Isn't part of the fundamental problem, and I'm looking at Eric now, because this is sort of part of the fundamental problem of genome science, that we're looking at large populations and trying to find small subsets of those populations that, because of their genetics, deserve a different approach to care? And that's going to be the fundamental problem with the payers, because the payers want a population approach; payers are not interested in doing a thousand genetic tests to find one person who is different, or a million genetic tests to find one person who's different. So that's the problem we end up grappling with and need to think about in terms of a strategy going forward. Although I think the 'ome' in our name answers the question. So yeah, you do a thousand genetic tests and you find people who are at risk in a thousand different ways, but then you have to have a thousand different implementations. I know you know this; I'm speaking to the camera. So, guys, we just need to get to some closure on this session. Robert, do you have a quick follow-up to that? Really quick. Just a quick thing. I mean, the challenge, and I love the theme of fragmentation, is how do we take a public health perspective when this is fragmented across specialties, so OB-GYN, cardiology, pharmacogenomics, all of that, and across time? Because so much about genomic medicine will land across the decades of a person's life rather than in some narrow window like a treatment perspective. So I just want to keep those fragmentations in mind, and I'm curious whether implementation science has precedent for looking across specialties and across decades, in the way that I think some of us believe genomic medicine will really find its true value. Yeah, so you were not allowed to ask a question, just kidding, but if you want to respond to that, that's fine.
So yes, I think there has been limited work, and again the caveat is this is within some of the stuff we've done at NIH, because we end up being siloed into different mandates, different institutes. But there is precedent for it, and I would say the best example is all of the efforts to try to manage care for various chronic illnesses, in some cases multiple chronic conditions, which requires thinking about Ed Wagner's chronic care model and thinking broadly across different systems: primary care, specialty care, allied health professionals, care managers, et cetera. So I think there is precedent, but the question is how broad and how complex a piece are we willing to take on. We need to, but it's a question of whether there are manageable components so that the study doesn't have to be the universe. I think that's where our challenge has been. And for the last question of the session, we'll go to Casey. Keep it short, since we have a session tomorrow where we get to talk a little bit more. Let's get closer to the mic, please. So my question is around how health care technology fits with these frameworks. I often do start with these different frameworks and models when I'm thinking about technology interventions, and I run into the challenge of figuring out, okay, if it's not a perfect fit, can I make an adjustment? And so I thought it was great that the paper Lori brought up actually proposed additions to those models, and I'm just curious how that went over, and what's the process when you have to make those adjustments? Well, first, one of the studies that came out while we were working on the CFIR draft model was ERIC, where they looked at all the different implementation strategies that are possible and then categorized them. And technology is very heavily used as an implementation strategy.
Clinical decision support, they claim, is an implementation strategy: if you want to spread awareness about how to respond to a particular patient, and you get a pop-up alert in the EMR, that's actually a way of implementing that change in care for the patient. So we strongly believe that implementation and technology go hand in hand, particularly around genomics. But in terms of actually adapting the model, I still get people emailing me all the time saying, I'm interested in using this, this, and this, what do you think? And we take that information, and at some point, like we are now with CSER, we publish an updated version of what this looks like now that people have used some pieces of it and have feedback for us. Excellent. So in 30 seconds I'm just going to summarize what I heard in the last 50 minutes. The headlines being: standards, where we started off, with standard measures across the various implementing groups, including the patient voice; generalizability and dissemination; the need for infrastructure for implementation science; the need for funding for implementation science; having an executive strategy that engages the payer community; providing guidance to our community on which implementation frameworks might be optimal for genomic medicine; encouraging disruptive innovation outside of more inertia-bound health care systems; and probably most importantly, the alignment of the economic incentives to implement. So I think that was some of the gestalt for today, and I want to thank our panelists, and also all of you for your engagement, for a terrific first session of this meeting. Thanks. And now we're going to take a break, but I don't know when it ends. Yes, thanks very much, Jeff. I will tell you when it's going to end. So we're going to take a break and we'll come back at 11:15 to reconvene. But thanks, everybody, this was fabulous.