So maybe I'll start off with that, and people in the back, please just come to the mics. You know, Mark, I was really struck by your list of the forms you already have post-disclosure. CSER2 is just getting underway, and many of us are returning these results, but CSER2, for example, is heavily pediatric. So two questions. Number one, how can we make sure the different consortia are using each other's forms? If you've already harmonized your metrics, we should harmonize with you. And then a question that came up in the first session: how are we dealing with NIH-funded projects that are focusing on pediatric populations?

Great, thanks. Before I answer the question, I want to make sure that Eric gets credit online for a portmanteau neologism that almost came out. I think "cloud-sourcing" is a brilliant word, and so I'm going to make sure I add it to my lexicon. So thank you for that, Eric. Okay, so in terms of answering the questions: we have worked not so much with CSER2 but with IGNITE on trying to generate standardized outcomes across those networks, and IGNITE has a toolkit that will include a repository for outcomes that are being used in eMERGE, IGNITE, and presumably also CSER2, so that these will be available. I think your point about the pediatric aspects is appropriate, although eMERGE does have the advantage, as we heard earlier, that we have some pediatric sites, and they are in fact developing pediatric outcomes forms. So we'll at least have a couple of examples, but that's a very important thing to note.
So, take familial hypercholesterolemia, for example. There is now in fact evidence showing that having the genotype is an independent risk factor for coronary artery disease, and if that's the case, and if that might be due to longitudinal exposure, then a pediatric outcome might be very different. And as a pediatrician, it's embarrassing that we've really not followed through with the lipid measurement recommendations that have been put forward by the AAP. So those would be real opportunities. But I think we do have at least a nascent infrastructure in place to be able to share outcomes in a standardized format, using a standard REDCap, which would improve generalizability. But right now we're sort of like the Whos in Whoville: no one knows about it.

Yeah, I'll just add that the eMERGE outcomes have been walked over to CSER by David Veenstra.

Well, it seemed quite quaint to see you show a slide where the assessment measures were "click on this PDF and download a piece of paper," which raises the immediate question of how many of the things on those pieces of paper, those data collection forms, actually already exist in the EMR. If you view outcomes assessment as just another phenotype, you're just watching for a downstream event to occur. How much of that is already amenable to automation if you put the focus there?

So that's a great point, Dan, and I can tell you that behind the scenes, though I didn't go into detail on this, there certainly are several aspects of many of the outcomes forms that will be captured automatically using the phenotype algorithms that have been established.
It's not as robust as we would like it to be. I would look at this more as a pilot implementation as opposed to something we know we can do, but it's another scientific question that I think is very important to be able to answer. The reality is that most of the quality world still relies on manual review and manual abstraction; the implementation of automated methods to reliably capture data is still relatively limited. But that is an area where we could definitely explore that as a scientific question.

Well, I think this room is a little bit jaded toward people who actually get all their health care in the same system. I'm an adult with several chronic medical conditions, and my data is in four different instances of Epic. So I do think we also have to be careful about not generalizing to the real world, where for many of us the data may be in an EHR, but whose EHR?

That's why we invented S4S.

Yes. Okay, great, we've got a bunch of questions lined up. Please go ahead.

Hi, I'm Maren Scheuner. Can you hear me? Okay, oh, I have to hold it. Okay, I'll try to figure this out. So I'm a clinical geneticist and a health services researcher at the VA in Los Angeles. I really appreciated the talks. I'm not very familiar with eMERGE; I'm one of your external people who was invited today. I have a question about this area of clinical utility and cost-effectiveness research. I was assuming that the clinical intervention here would be the return of results from the testing that's being done, and I was wondering what study designs are under consideration to assess clinical utility and cost-effectiveness. Do you have a comparator within the eMERGE network? Is there a control population, say a population of individuals who were offered testing and decided not to pursue it? Might that be a comparator, for example? Or do you have individuals who are genotyped and haven't received their results?
Might that be a comparator? I'm just wondering, because how are you going to show that there's really a net health benefit from any of this genetic testing without that comparator group? So I'll stop there; that's one of my questions.

Okay, you stopped because there are people behind you.

Yeah. So I'm going to ask Josh Peterson to make his way to the microphone, because I'd like to get his perspective on this as well, if he's still here. You'll notice the assiduous absence of anything related to cost-effectiveness in the slides that I presented; I said economic outcomes and economic models. We are not configured in such a way as to really be able to do cost-effectiveness analysis. I'd also push back a bit, in the sense that health utility is not tied to cost-effectiveness; it's a piece of it. I think we can measure what the health utilities associated with this are, which then begs the question: can we afford those health utilities? That is a cost-effectiveness question. I did cut a slide for length that Josh developed, which began to talk a little bit about how we could use the 25,000 people in eMERGE as that type of comparator, and I'd like Josh to just comment on that briefly.

Before Josh goes, I wanted to say I wasn't conflating utility with cost-effectiveness; I was just responding to the question.

Well, we don't need to argue. I'm just reflecting what I heard.

Yeah, if I could before Josh jumps in, I just want to add that there are site-specific projects. For example, at our site we're doing a randomized controlled trial of family communication, so half the people get it and half the people won't, and we are following outcomes in detail, including cost outcomes. So there are consortium-wide experiments going on, and then site-specific ones, and I think more of the cost outcomes you might find at the site level.
I would just add that, as Gail said, there are other sites doing these comparator studies. Mayo is doing one on cascade screening of FH, and I believe Dr. Green at Harvard is also doing a study where he is disclosing results related to FH, with a comparator group to whom results are disclosed after a delay. So there are site-specific projects related to just what happens. Okay, I think Josh...

Yeah, I think some of this has already been answered, but you can take advantage of the patients who had a negative result to try and subtract out the background rate of some of the health services that are delivered. For example, with ECGs, which are very commonly ordered, you can look at the patients who had an arrhythmia variant returned, look at the ECG rate, and then subtract out the background rate to get an estimate of what's truly related to return of results. That's one way to do it. The other way we're doing it, kind of manually, is just to look in the documentation and say, here's a variant that was returned, and the assessment and plan reads "here's this variant returned, I'm going to order this, this, this, and this," and that's a clear link. And for a lot of the studies, which are very specialty-oriented, the background rate is extremely tiny, and you can be fairly certain that it's related to results. So there are a couple of ways to try and get at how much of this cascade testing, and the potential health outcomes related to it, are associated with the returned results.

Great, but I think that's not necessarily the issue. You can follow what happens, but maybe all of that would have happened without those results.

But that's what he just answered, right? They're looking at people with negative results, and they're looking at actions that are extremely unlikely to have occurred otherwise. I think people are trying to address your question.
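The background-rate subtraction Josh describes can be sketched in a few lines. This is a hypothetical illustration with invented numbers, not an eMERGE analysis; the function name and counts are my own.

```python
# Sketch of estimating service use attributable to result return:
# subtract the rate observed in variant-negative patients (the
# background rate) from the rate in variant-positive patients.
# All numbers below are invented for illustration.

def attributable_rate(events_pos, person_years_pos, events_neg, person_years_neg):
    """Excess events per person-year in variant-positive patients."""
    rate_pos = events_pos / person_years_pos
    rate_neg = events_neg / person_years_neg  # background rate
    return rate_pos - rate_neg

# e.g. 120 ECGs over 200 person-years in carriers vs. 30 over 300 in non-carriers
excess = attributable_rate(120, 200, 30, 300)
print(round(excess, 2))  # 0.6 - 0.1 = 0.5 extra ECGs per person-year
```

As noted in the discussion, this only works when the comparator group is otherwise similar; for rare specialty services the background rate is near zero and the subtraction matters less.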
Okay, Richard.

So I think my question is timely. I wanted to make a few statements of the obvious and then get to a question. The obvious: I think there is a gap between the grand vision and the operational issues that we've all been engaged in. That gap needs to be filled by explicit intermediate goals, and the goals surrounding return of results are great, but they're not sufficient to fill out the entire program, and I hope that through the course of the day we get to some other goals that do map to this intermediate space. I see the emergence of other grand visions in parallel to this program's grand vision, so the other programs don't satisfy that need. And then, at the other end of the spectrum, when I look at the slide I gave to Eric, my eyes glaze over. It's very hard to get people interested in the real nuts and bolts of what this program has delivered, and yet these small, very targeted questions that get answered are actually what drive the process forward. So how do we get a program to have intermediate goals that invoke the need to do those nuts-and-bolts things, and still maintain the enthusiasm and good science to support that bottom-level stuff? So my question is: do you agree?

Well, I was just going to say that that's also the issue for ClinGen, which is that we have all of these committees. We are in the weeds trying to generate effective lists of pathogenic variants to then try to address this. It's interesting: people are either really interested in that problem, and often work almost for free on it, or they are not. So I would agree with Eric's comment about trying to crowdsource the people really interested in variant interpretation, to help automate that process as much as possible. It's just one piece of that diagram. And the one problem with that is that what's missing is the actual data with which to interpret the variant, right? That's what we need to generate.
It's not a matter of "there aren't enough people looking at the data"; the data doesn't exist.

Well, I would just push back a little bit...

That's fair.

I would push back a little bit. If you look at members of, for example, the ClinGen committees, they frequently include people from Invitae, from GeneDx, from Ambry. It's really quite amazing how many of these tests they're doing.

No, I'm talking about the eMERGE space.

No, no, but I think that raises an issue for eMERGE, which is that a lot of this analysis is going on in the clinical private sector, and finding the problems that are specifically good to address in this space, compared to what's going on clinically, I think is an important issue, right?

But most of the clinical tests are phenotype-directed: the person has the phenotype. So running these tests across people who don't have phenotypes is the way to get at penetrance, right?

Yeah, except they're all running many more genes than those, but they don't have the data. The clinical data is limited.

And I would just add one additional piece. I think this gets at the point I was making in response to Ken in the initial session, which is that what you've pointed out is extremely important, but it's almost never defined within the funding announcements that this is critical work to be done. In the way that proposals are graded, it's about significance and innovation, and this hard work, this heavy lifting that needs to be done to really make sure there's coordination so we can move the field forward, is not well suited to the type of application process that we use. So that's something that has to be recognized as well.

Okay, Dave's been waiting quite a while. Today, the sequence is the gift.
It keeps on giving, right? So I'm wondering if the programs have looked at ways to re-examine the sequence at intervals after the first session, so that as new risk variants and new disease genes are discovered, that information can then be passed on to the subjects. And not only how to do that, but how to pay for it.

Yes. So I think that's something I mentioned as an opportunity that is not really being addressed, given our sequencing timeline. Right now we're getting sequence for another year and several months, and that doesn't give us a lot of time to reanalyze it before we're at the end of the program. If we had the sequence earlier in the program, we could do that, or we could take sequence that was generated in a different phase and redo that. I think it is a really interesting question and an opportunity, but it doesn't work with our current timeline very well. And, specific to eMERGE but less relevant to programs like CSER: we have a specific panel of 109 genes, so the only thing we would be able to go back and reanalyze would be information around those 109 genes, 68 of which we understand reasonably well, with the obvious issues of penetrance that Gail raised. So our opportunity there is limited, as opposed to having a full sequence, where we would have whatever 22,000 minus 109 is, and could go back and answer literally any question that came up within the context of a research proposal.

Tim.

So I wanted to sort of reiterate what I think I've heard, and just amplify it.
First of all, Eric Boerwinkle was concerned about whether doctors are going to start to treat genotypes instead of phenotypes. In the electrophysiology world, they do that already, and it's really depressing, because the interventions are big and ugly. I'll tell you a story offline, but it is really frightening.

For the record, I'm not concerned; I just wanted the group to have the discussion.

Well, I'm concerned. A second obvious issue that I'm not sure has been stated crisply: if you want to figure out what the phenotypic consequences of a variant that occurs in 10% of the population are, you can do that. And if you want to figure out the phenotypic consequences of a variant that occurs in one in 10 million, you have to have an exotic phenotype or a family. But if you want to figure out the phenotypic consequences of something that occurs in one in 10,000 people, you can do that in the contexts we're talking about, if your denominator is a hundred thousand or five hundred thousand or a million. So that's the advantage of these very large data sets: you can go back and start to get variant interpretation out of the EHR, and we're starting to see that already. And then the third and last comment I want to make is that somewhere in all of this there is something that is not a statistical exercise involving the EHR: there is actual functional genomics that needs to be coupled into this. We have variants that you can characterize in exotic, or not-so-exotic, in vitro systems, and coupling that into penetrance estimates and rare variant estimates, and then the statistical methods to figure out how to evaluate the contribution of two or three or four or four hundred variants, are outstanding challenges that eMERGE won't necessarily need to, or can't necessarily, address, nor can all of us. But we have to keep our eye on that.

Okay, I think those are more comments than questions. So, Richard,
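The denominator argument here comes down to simple arithmetic: how many carriers of a variant you can expect to see at a given cohort size. A minimal sketch, with a made-up 1-in-10,000 carrier frequency:

```python
# Illustration of Tim's denominator point: expected carriers of a
# variant in cohorts of various sizes. Purely arithmetic, no real data.

def expected_carriers(cohort_size, variant_one_in):
    """Expected carriers when 1 in `variant_one_in` people carry the variant."""
    return cohort_size // variant_one_in

for n in (10_000, 100_000, 1_000_000):
    # a variant carried by roughly 1 in 10,000 people
    print(n, expected_carriers(n, 10_000))
```

With a cohort of 10,000 you expect about one carrier (useless for phenotype association); at 100,000 to 1,000,000 you expect tens to hundreds, which is where EHR-linked interpretation becomes feasible.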
and then I wanted to bring up a different topic.

Very quickly, I just wanted to pick up on Gail's point. Indeed, the design of this phase did mean that we had to wait till year four before every I was dotted and every T crossed on every report. But I think we'd be missing an opportunity if the only thing we got out of that was a lesson for the next phase, if there is one, to build more sequence data earlier. In fact, right now, part way down the track, we have the opportunity to freeze two-thirds of the data and ask the group: why can't we annotate it more quickly and just put it through the narrow portal of the current clinical interpretation mechanism we have now? If we can work on that and solve it, even in the next six months, I think we'll have a completely different complexion to look forward to than we have now.

Does anyone want to comment on that? So, because we have a couple of other minutes, I wanted to bring up what some of you know is my least favorite topic: the ACMG 59. eMERGE has really focused on it, and the original definition of that list was genes which, in a clinical setting, one should consider returning in an adult. There was a lot of parsing of disease genes where it was thought a diagnosis might already be known, like NF1, even though it's often missed, and I'm sure there are other examples. So from the point of view of eMERGE, which is really not necessarily about the actionability of a clinical test, or an incidental or secondary finding: is that the best platform? And if you're thinking about eMERGE and beyond, are there other types of lists?
Just genes with very high penetrance for a severe disease, whether or not the diagnosis is already made? I mean, are there other ways to view the genes you would tackle in a similar set?

Yeah, I think there's maybe too much focus on the word "actionable." For us, we needed some sort of actionability standard, honestly, to justify to IRBs the patients getting these back. And so the fact that we had consensus across the network that there could be a clinical utility to knowing this information was very helpful in saying these are things we can give back. And honestly, we have to give them back to collect the kind of data we want about these people when we find these things, particularly as an incidental finding. So I don't think we're trying to compete in the "what's an actionable gene list" space so much as to have agreement to get the same data across all the sites. We want everyone to align to this. So instead of just one site returning familial Mediterranean fever with rich phenotyping, which is I think what we would have had until they convinced us to go ahead, now we have all the sites doing that, and we're collecting the same data for the same genes, and I think that's the opportunity.

Yeah, I would just add to that by saying that the questions Gail pointed out are really important. The ACMG list was based on a consensus process of experts, if you want to define the group that sat around the table as experts; I like to think, since I was sitting around the table, that it was an expert group, but that may not be the case. But we had no data, at least in the population.

Right, right. I'm not trying to justify the list. I'm asking, moving into a new network...
Yes, what kind of genes?

I want all the genes. In the next iteration, see, the problem we had was that we knew we had to focus on the 56. Given that we were going to be on a sequencing platform, that constrained how much additional content we could get in there, and so each site was able to nominate genes they had particular interest in, that we thought would move the science forward. And Terry's looking at me, and I know we're just beating a dead horse on this, and I apologize for that, but I think moving forward, any compromise off of everything, whether we define everything as a whole exome or a whole genome, constrains what we are able to do, and constrains it to what we currently know as opposed to what we could know. And really, particularly for the question about reuse: we look at the exomes we're generating at Geisinger as ones that will be there for the entire remainder of that patient's life and care at Geisinger. It's an ongoing resource, and even though our projects are circumscribed in four-year blocks, we shouldn't be thinking about the resource in a four-year block, other than for some of the pragmatic choices about what will be prioritized. I think the resource needs to be looked at as durable, so that individual sites can say, hey, beyond funding, what can we use this for? I think that also goes back to Dave Valle's gift that keeps on giving. Basically, first you have inheritance; that's one way there's continued information. The other is longitudinal phenotyping: heart disease patients get cancer later, cancer patients get heart disease later. And the third is the continuous reinterpretation of the sequence. And so at the end of the day, even though it's a big swallow today, it's probably more cost-effective to think about larger genotyping or sequencing platforms than particular targets.
Yeah, Marylyn.

I'm just looking at him. I think this conversation touches a couple of points, maybe the one that Dan made and another one earlier. One of the other strengths of eMERGE that we've had historically is that we've straddled discovery and implementation. So we've always had projects that were not only developing or finding evidence about implementation in genomic medicine, but also discovering what the interesting things are that we might want to implement: thinking about polygenic risk scores and how you might use those, or starting to look for gene-environment interactions, which is some of what we'll be doing now with the geocoding. But the 109 genes definitely limit our ability for discovery to those 109 genes, which is why a lot of eMERGE sites are still going back to some of the GWAS and imputed GWAS data that we have, so that we can still straddle discovery and implementation. But to agree with Mark, in a future iteration I'd want all the genes again, and I would love to have not just coding regions, because what we're learning from the Epigenome Roadmap and ENCODE is how important the regulatory regions are for disease risk. If we still want to continue to straddle discovery and implementation, I think we're going to need more genes in the future.

Right. And part of what motivates my question is that if you look at illness in adults, rheumatologic disorders, pulmonary disorders, almost none of those are covered by the current list, and one can only imagine that there must be important genetic changes that maybe don't mean a diagnosis but, going back to what you were talking about, severity: they might mean more severe COPD or not.

So, I mean, I think obviously the cost equation for this has changed from four years ago.

I will say that there was a lot of thought put in on the SNV side as to what else, you know: the PGx,
and very, very extensive HLA capturing. For example, we have a very aggressive HLA working group. So I think we understood what our limitations were, and we got around them to the best that we could at the time. And hopefully, I'm all for a genome too. But I think there's a focus on the 109 things that are getting sequenced, and people tend to forget about all the other stuff that's also on the platform that we're getting.

So my question is, why do we limit ourselves to sequencing? Even though sequencing is something that keeps on giving, there's all kinds of new information becoming available in epigenetics that, while it involves sequencing, is a different strategy, and is clinically important. I think looking at methylation or chromatin immunoprecipitation is going to have a huge clinical impact, and be far more complicated than the germline sequence that we're working with. So that looks like a future opportunity for us that we seem to be ignoring.

Well, I don't know that it's being ignored, but, John, I would question, and this is I think a viable question for eMERGE IV, whether this is an appropriate question for eMERGE to answer. Because, as Marylyn pointed out, we're trying to straddle this implementation and discovery piece, and I loved Eric's representation: this is a virtuous cycle, or even in some ways a manifestation of a learning health care system. That's really important. But in a learning health care system, I really don't want stuff about which we have very little knowledge of what impact it has on the health of the people that I take care of.
I recognize that that's important, and I recognize it's something that people have to figure out. But my question is whether it's eMERGE's role to figure that out, or whether we should focus more on things we have a little bit more knowledge about in terms of clinical utility or actionability or whatever term you want to use. So it's not an answer; it's just a philosophical question. Go ahead.

So I'm really thrilled that Geisinger and eMERGE are working their way through a lot of these things in advance of our efforts in the VA. We've gotten considerable pushback from the primary care community. They were offered, for a project at low cost, to do the 59, and they said, well, we don't think there's any clinical utility to these at all. And I wonder... what I like here is the ability to measure, but I don't know if I want all the genes returned to everybody all the time. I think it's about setting some very serious priorities, where the big win would be demonstrating that clinical utility question, and even the cost-efficacy questions, in a few select cases that are highly likely. When you say that the penetrance we have is unknown for this cardiomyopathic gene, and so we have to return it to see what happens, it sounds a little bit...

Well, fair enough, in a research setting.

Fair enough, in a research setting. But remember those clinical consequences, and measuring a process measure, like how many echoes we did, I'm not sure gives me an answer to "did I save any lives, or did I help anybody?" I have some of the concerns that I think Dan was trying to express: us cardiologists can't see an echo and leave it alone if we see something, and then we'll be doing things that we don't know if we should. So it's got to be more than just measuring the process measure.
So I think some key examples, where you could really demonstrate a big win, would go a long way with some of the skeptics that we're seeing in our clinical community.

Yeah, I couldn't agree more. We need the evidence, and I think eMERGE is developing some of that evidence, and other groups are going to be developing some of it. I just want to clarify one thing that you said: when I say I want everything, I did not say I want to return everything. Okay? That's a very important point. And this also reflects my answer to John Harley, which is that nothing is going to touch my patients unless it reaches a certain threshold of "this is relevant to them." That could mean a clinical threshold, which is to say, I know enough about this BRCA1 variant that I know exactly what to do clinically. Or it could be, as Gail pointed out, in a clinical research setting, saying we really don't know enough about cardiomyopathy or long QT, and there are research questions around this that need to be answered. But anything that's not that proximate needs to be kept away from our patients and our providers, because it's just going to distract. And as you say, when a clinician gets something, it's act or not act; there is no try, right? You have to do something or not do something, and I think we're predisposed to do something just because of the concerns about liability and things of that nature. That's a simplistic explanation. So I think that, to me, is where it sits, and what it really requires in something like eMERGE is to say: we are going to have to define the threshold above which we think this is something that could be studyable within an eMERGE project.

Right, but I would point out that there are genes on that list where there are good data. Lynch is probably the best example, where there are adults who were screened and had lower cancer mortality
than groups that weren't. For BRCA, there are very good studies showing that BRCA1 carriers who've had oophorectomies have lower mortality from cancer than those who haven't. So I think the list is very long, and what may get muddied is that there are much rarer genes, and genes for which we know less. And, you know, the University of North Carolina, for example, has picked a much smaller list, I forget, it's like 10 genes, for which they've tried to do projects showing population screening would be effective. That's a project goal. So I do think it's important, when we're talking to clinicians, that we make clear the range of evidence around each of those genes. It's huge.

I completely agree, but we do need to measure that, even in those settings. I mean, there's an Air Force project that took the Coriell platform and returned the genes; they returned 44,000 results to 4,000 people, 11 per person. They had a woman almost scheduled for a double mastectomy because she had a variant, and this was not BRCA1 or 2, until someone finally said, wait a minute, we have no idea what this means for you. So we need to measure it. And then it also begs the question in other areas: a calculated BMI that a patient doesn't know tells you more about their diabetes risk than a single variant, so do I have to return that too? Do you know what I mean?

I think we're missing a very important thing, which is that this list of genes was supposed to be high penetrance, and "likely pathogenic" was supposed to mean a 90% chance or more of being penetrant, and our data is showing wrong and wrong. Okay? And that makes it very valuable data. We didn't go out to abuse the patients; we went out with something we thought was going to be useful, and we're learning it's not so useful. Incredibly critical data. Absolutely.
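The penetrance check being described reduces to simple counting in an unselected cohort: what fraction of variant carriers actually show the phenotype? The sketch below uses invented counts and a function name of my own, purely to illustrate the comparison against the assumed 90% figure.

```python
# Crude penetrance estimate from an ascertainment-free (biobank-style)
# cohort: affected carriers over all carriers. Counts are invented.

def observed_penetrance(affected_carriers, total_carriers):
    """Fraction of variant carriers who show the expected phenotype."""
    return affected_carriers / total_carriers

# 'Likely pathogenic' was supposed to imply >= 90% penetrance; an
# unselected sample can show far less.
est = observed_penetrance(affected_carriers=12, total_carriers=80)
print(f"{est:.0%}")  # 15%, well below the assumed 90%
```

A real estimate would need to correct for age-dependent expression, incomplete phenotyping in the EHR, and relatedness; this is only the headline arithmetic behind "wrong and wrong."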
I completely agree that that penetrance information...

Yeah, okay. So I'm going to cut you off, because there's someone who's been waiting, and I think these are maybe the last two questions.

Hi, this is Ali Gharavi from Columbia. I just wanted to say, Mark, I agree with your assessment that we do need to look at the entire everything, however you want to define it, because if we want to be at the leading edge of implementing the science, we have to do this. Clearly the price point is there, other sites are doing it, your group is doing it, and the patients are getting these data as well; people are getting this on the commercial side. So we have an opportunity to develop the basis for returning these results and looking at the outcomes. I think the other opportunity is to engage other constituencies here. For example, the payers are people who also need to be engaged in looking at these outcomes, and so we can also work with CSER and other consortia on engaging payers as we're trying to define what outcomes we want to look at. And also regulatory agencies, for example the FDA, when we're looking at the impact and the outcomes of genomic data. So that's, I think, an opportunity to interact with the community and engage all of us and other industry networks.

Right. And do you have a brief comment or statement behind you? Oh, that's it.

Yeah, Eric Larson, a colleague of Gail's. I just want to pick up on Gail's observation, and Dan's, and Eric's (the other Eric, or one of the three or four Erics I think are in the room). Evidence: what is the source of the evidence?
I think that's so important, and what I think Gail was saying was that when you look at a population-based sample, as opposed to a convenience sample, you can get very different results. One of the challenges that the field, and all of us, need to ask ourselves is: how can we understand the different results we've gotten when we've looked at different populations? And I'd invite any of the panelists to think about this: as we implement, we're going to be implementing to a population, and it's going to be population management. How does the evidence generation affect that, beyond the virtuous cycle, which I believe in very much? If we don't have the right source of the information, we may get ourselves into trouble.

I think the first thing is you have to know the source of the information, and be aware of it, and be aware that if you apply inference from one source to a second source, you may be misled.

Okay, last question.

If I may, I have a very brief comment. It's just in response to the...

Please introduce yourself, and you're going to have to hold the mic; we can't hear you.

Okay. Actually, I was saying that I have a very brief comment. I'm the representative from FDA, so this is to address the previous comment: we are trying to work with eMERGE, strongly, and we could probably provide you more details. The goal is to clarify pharmacoepidemiologic and pharmacogenetic applications, and to develop what we call methodology for in silico biomarker discovery, with the biomarkers being applicable to clinical and regulatory studies. I would be glad to provide more details. Thank you.

Great. Thank you.