Thanks. So I guess I would like to ask the crowd to reflect on some of the comments and themes that we just heard. I'd be interested in knowing: is there any program right now that is engaging the health economists, the payers, the policy experts at the point of designing the program, as opposed to as a secondary thought or something of that nature? I would also like to know whether any groups have really achieved this vision of the learning health system that requires the bi-directional flow and the virtuous cycle of learning. And then the third theme I think we heard is this notion of contextual evidence and contextual decision making. I see a couple of hands, so I'm going to go with Mark and then Rex. So, I did sponsor the first meeting, to my knowledge, that focused on health economics related to several of the funded programs that were in the genomic space. Terry was at that meeting, and a couple of the others were. Now, it was not at the inception of the program, it was more after the fact, but I know that eMERGE III has as its intent to have economic analysis more up front. The comment I would make about economics, having worked in the U.S. payer system quite a bit, is that it's not something that U.S. payers pay attention to. They are much, much more interested in what the impact is on the patients, the clinical utility. And if there's demonstrable clinical utility, then they generally figure out a way to pay for it. Now, CMS is somewhat of an exception to that, but it's very different than in national healthcare systems, where you have a global budget and you have to make decisions. That's just not the way it works in this country. So I don't think we should overemphasize it; I think it's very important, but the clinical utility evidence is much, much more relevant to them than the cost effectiveness is. 
The second thing I would just point out, in terms of programs that in some ways could affect issues relating to training, but also I think in terms of design, is that there is an NIH Common Fund program in dissemination and implementation science that NHGRI actually contributes to. David Chambers at NCI is leading that, some of us have engaged with him over time, and they're very interested in working with us and figuring out how to do this. There are funding opportunities through this, and there are training programs that are funded through it as well. So I think that's an important resource that we haven't leveraged quite as much. And I think I'll just stop there. So before Rex, I just want to first thank you for bringing up the dissemination and implementation programs. I think that's important to recognize at this meeting. I also wanted to ask you whether you think the Affordable Care Act's Accountable Care Organizations will shift the current paradigm from what you describe, of clinical utility focusing on clinical and patient outcomes versus economic ones. It has the potential to do that. But again, I think it's early enough that the demonstration projects are still kind of finding their way and figuring out how to do it. And the reality is that, as little evidence of utility as we have, we have way less evidence of cost effectiveness. And in fact, you can't do cost effectiveness without data on utility. So I would still argue that at this point, more emphasis on the impact of the interventions on patients would be a priority as opposed to the cost, although they are two sides of the same coin. So if we do think about this intentionally at the outset, then you can develop ways to capture that. 
There is one other thing that I wanted to come back to, and that is one of the things I was very impressed with as I was reviewing the program summaries, with IGNITE, and that I'm going to emphasize in panel four: the idea that you guys came together and decided on a set of common outcomes at the outset. I thought that was extremely important, and it is definitely something that I would highlight as a takeaway: if we think about commonalities in how to do this across all of the programs, it's going to increase the speed at which we generate evidence. Thanks, Mark. Let me just make one quick comment and then go on to Rex and then Howard. Rather than wait for the ACOs to mature, we might have an opportunity as the genomic medicine community to directly engage them, because their mandates are to achieve cost neutrality or better, and they're obviously very sensitive to the management of their costs. So it's, I think, a system similar to the Canadian system and others, just on a very small scale, but we should think about engaging them as an action, at least. Rex. So you said something that I guess I'd never really thought about before, but it really struck an important chord, especially as we think about learning health care systems and how we really capture evidence. The thing that you said that really struck me was about quality improvement projects and how they often don't get published. And that really struck me because, in some ways, I know people do quality improvement projects because they don't have to deal with IRBs and a lot of the other complexities that come with publishing. So maybe one of the things we should be thinking about is strategies that maximize people sharing the outcomes of their quality improvement projects without putting them in jeopardy for all the things that maybe make research a little more complicated. 
And I know that's one of the goals of the idea of a learning health care system, but of the many great things you said, that one really struck me as something that at least I hadn't spent a lot of time thinking about. And maybe if we focused on how to capture evidence that comes through quality improvement projects and make sure it gets disseminated, that would be a really important advance, I think. Thanks, Rex. I certainly wholeheartedly agree. Howard? So I was on a panel last week in the UK, and this topic was brought up as well. And there was a part of this that I actually hadn't thought about. I don't know if this makes the conversation better or worse, but I'll put it out there: the health care economists tend to think about QALYs as a way of measuring the cost of a test versus what you get in terms of quality-adjusted life years afterwards. But indeed, this other panel member mentioned that a lot of the cost that's associated with health care is actually outside the health care system. Social services and families actually pay a tremendous amount. And so if we're thinking about the economics, I think somehow we also need to think beyond just the health care system, because it's a much, much bigger number when you think about social services, long-term care, and the finances around this. And maybe we're looking at this a little too narrowly. I think that makes it a bit more complicated, because we're trying to figure out how to do this already with one area. But if what we're trying to show is the economics around this, then I think we need to potentially bring in some other people who can help beyond just the health care dollar. So in our shop, Duke, we're self-insured. So we try to engage the employer side of the house in helping us figure out about absenteeism and the days away from the workforce as another type of measure. And I think that's sort of what you're advocating. Is that right? OK, I see a number of hands. 
Let's see, let's start with Stephen, and then we'll go on to Gail and then to Mike. Stephen. So I'm glad you started off with evidence, Jeff, because it really is becoming, I think, the critical barrier to broader implementation of genomic medicine. In our NSIGHT study, as a stretch goal, we did include a health economist to design a cost-effectiveness analysis of genome sequencing in NICU babies. And now that we're into that, about a year and a half, we understand how difficult this is going to be. First of all, you think, well, if it's true evidence, it has to be prospective, not retrospective. And then we have to think about randomization, and what's the control arm, and is that ethical. So how do we structure that? And then one of the problems with precision medicine, at least in the NICU setting, is the diversity of outcomes. It's very different from a traditional design where you have a singular intervention. Instead, in genomic medicine, we have multiple outcomes related to myriad genetic diseases. So thinking about how to bundle and design those studies, and how to have them be testable and powered in a way that can actually show, at the end of the day, clinical effectiveness, is not easy at all. Thanks, Stephen. And before we go to Gail, I want to ask maybe Pierre or others on the panel. I'm thinking specifically about Genome Canada and the GAPP: are you developing standard ways to think about the economic analyses? Are they program specific? Are you sharing the learnings across your programs that inform study design and the kinds of outcome measures that Stephen just mentioned, and some of the other issues? Yeah, so the complications are huge. And each type of disease setting will have its own issues in terms of articulating the value. And maybe I was too direct in my thoughts; it's not only to do with the health system. There are obviously social values that we need to integrate as well. 
So come back to the rare disease issue, because it appears to me that, at one end of the spectrum, that is a fabulous model system for looking at some of these things. What one can do, and we're doing this within the project on rare diseases, is to articulate the value in terms of how many dollars have been spent within the health care system on the diagnostic odyssey in these families. And some of those, as you know, go on for 10 years, 15 years, whatever. So what does that dollar figure look like compared to what it would be if you implement whole genome sequencing? That's a very favorable scenario in most cases. But as well, you have the social, emotional, and other impacts that genomic medicine can have on the families, and the way they live, and so on and so forth. That is much more complicated to put an ROI on, I agree. But I think it can be articulated in a way that gives patients quite a strong weapon to say, you know what, in terms of access, as a group we should have access to this technology, and to change policies in that way. And that's an angle that we're very keen on in Canada because of the socialized medicine organization that we have there. So yes, it's very, very challenging to put a health care dollar figure on some of these. But I think mobilizing patient groups can be very powerful in asking, well, why would you not do this? Give me some arguments why you would not incorporate some of this new technology in some of these key areas. And if the rare disease issue is at one end of the spectrum, then you can see that things around pharmacogenomics, and things around cancer, which are very progressive at the moment, would give us some very interesting arguments, as well as, hopefully, a positive ROI. That's very helpful. Thanks, Pierre. Gail. 
So I have to say that, in my experience, insurers are very interested in the cost analysis. What we get pushback on when we try to order panel tests in clinic is they tell us very directly, we don't want a panel. We know one gene at a time will potentially cost more, but we think the VUSs you find on your panel will cost us more money downstream. And so we prefer to test one gene at a time, which is not very efficient. So the kind of cost outcomes analyses that compare, like we recently published, panel testing versus smaller panel or single gene testing, not only inform the insurers, but they inform practice guidelines, which is how the insurers make some of their decisions. To that end, in the CSER consortium, at least two sites, including ours, have a randomized controlled trial that includes cost outcomes as one of its outcomes, as well as patient-reported outcomes, surveys of what patients' preferences are, et cetera. But I think that building those in upfront, if you're going to ask how we use this technology in a clinic, and evaluating it in every way possible, can be efficient. Do you think the payers would actually co-invest? In those actual studies? Yeah. I don't know. That's an excellent question. I think Mike was next. So in this rush to fill the gap with evidence, in some areas there seems to be a settling for lower quality data, in this PatientsLikeMe-style approach, where databases are emerging and clinicians are beginning to use data from smaller and smaller cells of clinical databases as if it's gone through strict, rigorous evaluation. We're beginning to see some of that in this learning health care system, where they're beginning to use data, whether it's QI that's done rather rudimentarily, or in the cancer panels, where there are small cells of patients that happen to look like a given panel, that is pushing toward clinical decisions. 
And to me, it seems like we need to caution against the rush toward using whatever evidence comes down the pike, and that there needs to be a more traditional, rigorous approach to the evaluation of some of this data that's coming from large systems before it will allow us to do that kind of clinical utility and then cost utility analysis. Where is the epicenter of actually making the decisions about quality of evidence, the size of the data structures? Where do you see this happening? I know we're talking about it, but is this the group, or should we, I'm just trying to understand, how do we take those concepts forward and get a little bit more organized in our thinking? So I think that a general discussion about the development of evidence needs to consider the full range of the types of evidence that we're using for clinical implementation. And it's not just pharmaceuticals; there are those who are selling diagnostics, and sometimes in the area of diagnostics, as long as we can measure something, it gets an approval, and there's much less work done downstream on the clinical utility of a diagnostic. And in some of the precision oncology programs there's heterogeneity from vendor to vendor to vendor, where they're marketing different forms of cancer profiling, and there's variability in the precision oncology databases, the size and the interpretation of the data; there's certainly no standard. People are using different types of profiling. And I think it's wonderful to move the field forward, but that data needs to be assessed very carefully in terms of long-term clinical utility before it gets pushed to a payer, and before you can begin to even think about doing the kinds of cost-benefit analyses that we're suggesting here today. I think this will be the last question before, Terry, you were flashing me a warning sign. So you have a question, or? Okay. So you have a conflict of interest, but I'm gonna go over here. 
So I think we have to be careful about the term payer. I mean, as someone who orders genetic testing, the payers don't agree. As Gail said, there are some that say, just give me a single gene. There are other payers that say, if the exome's cheaper, we'll pay for that rather than all these panels. There are others that say, order a panel. So I think part of the problem is there's not a payer; we're not in a national health service. And so I think it's important to mandate that future studies look at cost effectiveness, but I think we have to be a little bit careful that there's not a payer. And the other thing I would say is, context again really matters. The public is really pushing not to require a double-blind phase three trial for every FDA approval, and for rare diseases, that's almost impossible to do. And so I think, depending on the medical context, the level of evidence may differ. Yeah, no, you're echoing things that we fully agree on. And also, I think we learned from GM4 or GM5 about the heterogeneity of the payer community, and I apologize for lumping them all into one group. Okay, Terry, less than a minute? Yeah, well, so two things. One is, are you gonna do the summation from up there? Because we could go a few minutes longer and stretch into the break if it's a good discussion. I'm willing to give it a whirl if you want to. I'm willing to try. Yeah, no, that's great. So I just wanted to be sure that one of us was doing it. So my comment is that evidence, its quality, its types, and its definitions came up at our GM6 meeting, or global leaders meeting: a number of them said, well, you know, who is it who defines what evidence is, and shouldn't we have some criteria for it? And I don't think that we've pursued that, and perhaps it is something that we should consider needs a broader discussion, maybe in one of these meetings or elsewhere. 
So I think I'm a little short on time, and I want to give the panelists one last shot in a game called Password, which many of you are too young to remember. There was something called the lightning round. So I'm gonna ask for a lightning round of 30 seconds, plus or minus, to give us your top-level thoughts on the key messages that came from this discussion. So I'm gonna play a little jingle while this is going on. No? So let's start with either the right or the left. I'll start. Okay, thanks, John. Test performance: we need to find out what it is for genomic medicine. Clinical scenario, and what evidence is necessary in those settings. Yeah? No? Okay. All right. So I think one of the things was the whole evidence threshold, and teasing out the rare disease from the common disease, and the importance of understanding what the payers' needs are, even if there's no single payer that you're talking about. And I would just say that I think the role of the patients in this is going to be a major thread going forward. And I think they're going to be very important in changing policy, which will allow some of this technology to be more accessible. Thanks. So I'm gonna try to do some synthesis of what we talked about, along the lines of the outline that Terry put up in one of her last slides. The first is the critical knowledge and gaps for evidence. I think what we talked about today was understanding where the goalposts live for the various stakeholder communities. So that's a gap; I guess an approach would be the kind that Genome Canada has taken, to assemble many of those key decision makers up front in the process. And it also sounds like some of the programs, like CSER and perhaps others, are taking a similar approach. The harnessing of health system data has come up, particularly for comparative effectiveness research, as a gap, I would say. And that kind of ties into the lack of unified systems. 
As was just mentioned, we have gaps in test performance and in contextual thinking and in patient engagement. Those, I think, would be some. One gap that we didn't talk about, but that I think is important and probably will come up later in the day, is just having deep phenotyping associated with the genetic information, as well as longitudinal data that allows us to know how to tie the genetic information in an evidentiary way to outcomes. That may tie into the lack of a learning health system framework that most of us suffer from. Then key barriers to implementation: the fragmentation of the communities, the lack of IT infrastructure and standards. I think what came up in a sort of tangential way is who pays for all this evidence generation. We're fortunate to have NHGRI paying for it, or the Canadian government or other governments funding it through research initiatives, but is that really aligned with those who ultimately are going to receive the benefit of our doing so? And I know we can't afford to pay to support the data generation around the millions of variants that are being generated on a regular basis. Approaching the gaps: I think Pierre recommended that we really embrace the rare disease model and the learning from that, and then move from rare disease to more common disease or some other areas like pharmacogenetics and pharmacogenomics, as well as more strongly engage the patients. I guess we will hear across the day, and maybe it's in our booklet, but I'm not sure how many of the programs are really optimally using patient-generated information. I know we heard from you, Gail, but maybe others are not quite as adept. In terms of training needs and approaches, we listed some of our recommendations. I think Mark added to that by encouraging that the dissemination and implementation program become an opportunity for training. We didn't really have a chance to discuss that at any great length in the course of the discussion. 
And then lastly, I guess Terry had asked us to comment on facilitating the virtuous cycle. I think what it comes back to is really enabling the learning health system idea. I'm glad Rex raised the QI opportunity. Where those programs exist, they're probably very opaque to most of us, unless you know that something is going on in your own institution, and you may not even know that. But since you are the leaders of genetic and genomic medicine in your institutions, I'd guess that if a QI project were being done in that area in your hospital or health system, you would know about it. I think QI clearly is an important component of learning health systems from an administrative point of view; can we actually embrace it from a genomic medicine point of view? And I think the notion of developing some larger surveillance programs did come up. I think, Dale, you probably have written about this, along with others: to think about how to monitor what happens after a test is commercialized, and really get the breadth of information that is being generated in day-to-day clinical care, outside of the research programs that are actually generating the evidence to bring things to the marketplace. There's a wealth of data that isn't being captured by laboratories and by commercial firms once the test is fully developed and on the market. But I think those are my thoughts from what you said. I'm sure I missed a few of the key points, but I know Terry and Howard have been meticulously capturing everything you said. And last comment, Howard, do you have something to say? Well, there is time if others have comments that come out of your conclusions. I think there is no test equivalent of pharmacovigilance that is currently widely applied. And so your comment about what happens post-marketing is a good one. 
Some companies are better than others in terms of trying to do those studies, but they're often driven more by marketing than by truly assessing utility and such. And so I think there are some opportunities for us to create that sort of thing at some level, and it's never easy. It's not like pharmacovigilance is running perfectly in this country, so we can screw up the test version of that as well, I guess. But I think there are some opportunities to really ask those questions. The other thing, which even your slide touched on, is the high value or the low value, the high bar or the low bar. The vast majority of the questions are in the middle. Almost always in medicine, it's a choice amongst equals as opposed to awesome versus not awesome. And I think there are some opportunities to go in because, coming back to Mike's comment, the level of evidence there is very different from the question of whether you should use something amazing. And so I think there are some things we can push on, both now and throughout the remainder of the sessions, to try to really get at how we tackle some of those sorts of things, because that's where a lot of the decisions are. Thanks, Howard. I guess just one reaction to your pharmacovigilance analog for genomic medicine: it would seem that the Precision Medicine Initiative, even though we don't know much about its ultimate form, would desperately need that as a component, and maybe even have some resources to support it. Just a thought; I don't know if that's gonna ultimately be true. I want to thank all of you for your input and engagement in the first panel, and the panelists and Jonathan for their insights and participation, and I hope the rest of the day and tomorrow are equally productive. Thanks. All right, well, we will reconvene at 10:44, to give us a minute to settle in for our 10:45 start.