So the goal is, as we said, actually when we started the meeting, that we wanted to actually think about research directions that would be driven by what we've heard at the meeting to date. We've heard a lot of stuff at the meeting to date, so I think there's probably no shortage of potential research directions. But in order to help frame our thinking about this, we've got a couple of presentations scheduled. And then we'll have some time for discussion after we've heard those presentations. So I'll invite Mark Williams to give us his thoughts about research on the pre-testing phase. Thank you, Rex. I'm sorry, and I'm just going to moderate, Rex. So we're going to have the presenters for this session present from their seats, and they can make some modifications to their slides as they go on. Right. So if during the presentation I have not captured something, or if I've missed something completely, then we'll have time after the presentation to make sure and go back and edit that. The other thing is that as I was going through this, we had to go through an exercise of, well, is this pre-testing? Is this testing? Is this post? So I would not necessarily assert that what I've identified as pre-testing is necessarily absolutely pre-testing. So at any rate. So it's a little bit sad that all of our engagement people are not engaged today, because as I thought about this, I think this is really, for me, the most important takeaway. And I included a concept that Karriem Watson introduced in his talk at the USAG meeting, where he talked about engagement science. And I think this is kind of a transformational concept, much the way that the introduction of the term implementation science changed how we approach implementation. That there's actually a science around how we do engagement. 
And so as I think about research directions, I think we should try and embrace this emerging framework of engagement science and think about engagement across a number of different stakeholder types. So certainly patients, I think, is essential. Clinicians and other stakeholders, which would include systems, payers, public health, et cetera, et cetera. And there were two graphics that I wanted to steal here. One we saw yesterday in the All of Us presentation. This was also in Karriem's presentation at the USAG. And I knew when I saw it that I needed to present it at this meeting, because I think it's a really nice outline of the framework that could be used in terms of including engagement through the nested levels of individual, interpersonal, institutional, community, and culture and society, and that we have these different components: outreach and awareness, education, training, capacity building, bridging communities, and knowledge mobilization. So I think this is a really interesting framework that a lot of the pieces that I'm going to be talking about later could be mapped to. But then George Mensah yesterday talked about this dynamic relationship of meaningful community engagement, which is something that's being incorporated through the National Academy of Medicine. And I went onto their site and looked at this, and I think that there are also some things from this framework that could be applied. And I think the community engagement, while we might tend to think about that as, again, engagement with patients, I think we could define a whole number of different communities, patient communities, clinician communities, payer communities, where we could apply these core principles and develop alliances, expand our knowledge, improve our programs and policies, and ultimately help them to thrive. So these are two that I wanted to include in my deck so that when that gets moved into the meeting materials, we can have those as reference. 
So these will be the last two interesting slides that you'll see from me. And then we'll get to the more text-based slides. So here are some things that I took away from the meetings. Research into a standardized approach to assessing the chain of probability to inform inclusion or exclusion of genes and variants for population screening. So from Les' talk, the practical probabilistic model of population screening, I immediately thought, well, this would be awesome. We could call this eBayes, but somebody already stole that name, so we can't call it that. The second is a comment that Ned came up with as one of his questions, which is the idea that we need to have evidence-based medicine 2.0. Well, what does that look like? What should that look like? I think that's a really interesting concept because as I flash back to all of our time on the EGAPP Working Group, we constantly beat our head against the wall of evidence-based medicine 1.0 and realized that it really couldn't accommodate the sorts of things that we're doing. So I think that's another interesting area to explore. I think we've heard a lot of talks about how we need to define and harmonize outcomes and costs. And I think in the last discussion here, we heard about how it's going to be very important to define cost from different stakeholder perspectives. Quality-adjusted life years is a very different concept than per member per month, but those are going to resonate with different communities, so we need to understand that. I've got a bar in front of my thing here that's, oh, there we go. Okay, so definitions of thresholds of evidence. We talked about clinical utility, which must include benefits and harms. They're two sides of the same coin to inform inclusion or exclusion of genes and variants. 
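[Editor's sketch] The "chain of probability" idea can be illustrated as a product of conditional probabilities: benefit to a screen-positive person requires every link to hold. The function name and all numbers below are hypothetical placeholders for illustration, not values from Les' talk:

```python
# Hypothetical sketch of a "chain of probability" for deciding whether a
# gene or variant belongs in a population screening panel. The chance that
# a reported positive ultimately yields benefit is (roughly) the product of
# the probabilities along the chain, so it erodes even when each link is strong.

def chain_of_probability(p_analytically_valid, p_pathogenic, penetrance,
                         p_effective_intervention):
    """Probability that a reported positive ultimately yields benefit."""
    return (p_analytically_valid * p_pathogenic * penetrance
            * p_effective_intervention)

# Placeholder values: a strong chain still multiplies down to ~0.38.
p_benefit = chain_of_probability(0.99, 0.95, 0.5, 0.8)
```

The point of the sketch is the multiplication itself: a threshold for inclusion has to be set against the whole product, not any single link.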
And again, the idea that we have to have different perspectives on these utilities. We need to understand clinical test performance and the specific aspects beyond sensitivity and specificity, including positive predictive value, negative predictive value, and number needed to screen or treat. We need to understand the penetrance and natural history of conditions that are identified by genomic screening. And as we heard today, we need to understand the timeline whereby that benefit is going to be realized, because if we're dealing with a payer community, they may be looking at a one- to two-year timeline, whereas a public health community could tolerate, from a societal perspective, a 20- to 30-year timeline. We have to basically, I'm sorry, I'm getting boxes in my way here, do research on what's needed for comprehensive and equitable implementation of population screening. We heard a lot about that, and it was a theme that continued to rise up throughout our discussions. I think part of that is an opportunity to do pre-implementation research using some type of an evidence-based implementation framework, like CFIR or something of that nature. So we basically, as best as we can, map out terra incognita before we begin to explore. Recognizing that it'll be only an approximation, and it'll be significantly wrong. California is not, in fact, separated from the United States. Only Baja California is, as an example. We have to, and I thank Krystal Tsosie for her help in crafting this next bullet, focus on equity across multiple dimensions, including innovation equity, deployment equity, contextual equity, and equity metrics in the context of population genomic screening, through the many perspectives of different populations and communities. It was a very powerful theme, I think. 
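[Editor's sketch] The reason metrics beyond sensitivity and specificity matter so much at population scale can be shown with a few lines of arithmetic; the test characteristics and prevalence below are hypothetical, not from any presentation:

```python
# Hypothetical illustration: at low population prevalence, positive
# predictive value collapses even for an excellent test, which is why
# PPV, NPV, and number needed to screen belong alongside sensitivity
# and specificity when judging a population screening test.

def screening_metrics(sensitivity, specificity, prevalence):
    """Return PPV, NPV, and number needed to screen per true positive."""
    tp = sensitivity * prevalence              # true-positive fraction
    fn = (1 - sensitivity) * prevalence        # missed cases
    fp = (1 - specificity) * (1 - prevalence)  # false alarms
    tn = specificity * (1 - prevalence)        # correct negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    nns = 1 / tp  # screens needed per true positive detected
    return ppv, npv, nns

# A 99%-sensitive, 99.9%-specific test at a 1-in-500 prevalence:
# roughly a third of positives are still false positives.
ppv, npv, nns = screening_metrics(0.99, 0.999, 0.002)
```

Running the same function with a payer's one- to two-year horizon versus a public health 20- to 30-year horizon changes none of these numbers, which is exactly why the timeline question is a separate axis from test performance.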
I love the idea that Ned put forward, which comes out of the newborn screening community, the idea, can we develop pilot studies for population screening for near tier one conditions, where we can get that last mile of evidence that's needed to potentially move them to tier one, or to say, no, this evidence does not support inclusion. So I think that's a really interesting thing to explore in the pre-testing screening phase. We need to engage with the prevention research community to co-develop genomic prevention research projects. That was something that, again, was very interesting to me. People that do research in prevention have a very different approach to how they do study design than those of us that are primarily in interventions. And so I think that's a really fruitful area for pre-testing research. Population genomic screening in research settings compared with implementation in public health settings. Different rules, different regulations, different policies. So there's a policy research agenda to explore these differences within the context of implementation research. We heard today about developing some sort of a learning or sharing network to facilitate shared knowledge. How do we move from a successful project to wide-scale implementation? I think the projects that are going to be involved in the new learning healthcare system cooperative agreement could be a potential test bed for some of these ideas. And then research, this is sort of a stray cat that did come up, and I thought it did need to be represented there, but it's somewhat more limited, which is research into problems associated with using peripheral blood for screening, such as mosaicism and clonal hematopoiesis of indeterminate potential, and the use of other samples. 
So I think that is the list of things that I took away, so I'd be very interested, in the couple minutes left related to my presentation, whether there's any modifications, anything I've missed, anything people want to add or subtract. Mark, I think I'll start. One of the things that I think came up pretty clearly in the discussions yesterday was the need for us to think about research programs that were studying the biology of penetrance, and that certainly I think would fall into this category, maybe more broadly even. Terry? Yeah, I was wondering if somebody could talk about, either you or Ned or anyone, about what does evidence-based medicine 2.0 look like and who's going to develop it. I think that's the research question. What does evidence-based medicine 2.0 look like? And Ned, I think you have ideas, but I don't think you have an answer. No, I think over the past, now maybe seven or eight years, I've been working with the National Academy, and they always call me up whenever they need to redesign some evidence review and evidence-to-decision framework for something we don't have one for. And what that's done for me is kind of open up the door to thinking about where the problems with GRADE are, for example, in terms of bringing context into the message, how to deal with qualitative data, how to do qualitative data analysis and how that fits into an evidence-to-decision framework, how analogy and mechanistic data might fit into a recommendation. So I think GRADE has done a good job of thinking about new things, but without actually fleshing them out or giving them much room in their very tight RCT-based framework. And I think the National Academy was interested in saying, are there other ways to think about other kinds of evidence and bring them together in such a way that your certainty that you're doing more good than harm is high enough to get you over the hump to move forward? 
So I think there are new changes in evidence-to-decision frameworks, and you're right, thinking about what needs to have a fit-for-purpose approach for genetic testing would be a good research question. I think it's implicit in what you said, Mark, but I wonder if it might be better explicit, which is I think one of the things we have to do as a genomics research community is move from disease-based cohort research study design to less biased ascertainment study designs, and shifting our research base towards genomic ascertainment is what we need to do in the research context to model what we want to do in clinical contexts, which is find these diseases in populations, and you can never answer that question if you only study people who have the disease at the outset. Anyone else? All right, with that then, thank you very much, Mark. And then the next presenter is Heidi Rehm, who's going to talk about what tests to recommend and how they are offered and what types of results are provided. Okay, great. So I'm going to talk about research on the testing phase, what tests to recommend, how they are offered, and what types of results are provided. So I was starting just to try to define the different types of tests in terms of intended use, and I defined three different buckets: a test with a single specific purpose, like a diagnostic panel or genome sequencing, a carrier screening test, the CDC Tier 1 test, single-drug pharmacogenomics, et cetera, versus what a lot of us do now, which is the opportunistic use of content from one test for a distinct purpose. So all of our secondary findings returns from exome/genome are really just an opportunistic use of that data, versus what's now becoming the increasing interest and focus, albeit with challenges to get there, which is broad tests intended for multiple uses. 
So we run an exome or genome with the hope that we can use that in multiple ways, like virtual panels for distinct indications, be they symptom-based or screening, and those could be done sequentially or in parallel. So I think an area of research focus is really to define these, or others I've missed, types of tests and scenarios: how do they differ in sensitivity and cost, under what context is each type offered, to really think about how we offer population screening type tests and in what context, and do we gain this efficiency, or do we say, that just creates complexity and we need a dedicated test? We had a lot of discussion about that throughout the last two days. So I think that's an area that really has to be tackled. We also heard a fair bit about, you know, the uptake may relate to test complexity and the length and complexity of the consent process itself. So is this a very targeted hereditary breast and ovarian cancer test that a breast cancer surgeon orders, or is it the whole cancer panel, and then all of a sudden you get NF coming up when you started with breast cancer, and that complexity? Or, you know, I talked to lots of the physicians who were ordering pharmacogenomics tests and asked, why don't you order the broad panel? You could use it for other things, and they say, I do not want the responsibility, the liability, for all the other results. So like, how do we think about that? And then, of course, we've also heard physicians who simply aren't ordering exome or genome sequencing because they do not want to deal with the secondary findings return consent process, forgoing a useful test for that reason. So how do we think about reducing that complexity? And as we were talking about earlier today, standardizing the consent process to a minimum, using CDS tools. So I think that's a whole area of work and research to define how we can really do that effectively yet simply. 
Also, when is a genetic counselor or genetic counseling assistant needed? Do they have to be embedded in a clinic and hired by that clinic, or can we use external, you know, counseling resources that may be in the same healthcare system but not with the same clinic, sort of models of providing genetic counseling when needed but not when not needed. And how can we most efficiently transmit phenotype and the indication for testing? That's a huge barrier to useful test interpretation. Less important if we're just talking about screening, but certainly if you're tagging on diagnostic aspects, that becomes critical. So that's a whole area I think that's important. Also, what level of certainty is needed to return results? And today we have this general sort of bifurcation. If it's symptomatic testing, you return VUS, likely path, and path. If you're doing screening, secondary findings, incidental findings, you only return likely path and path. And that's sort of typical, but we also have to think about... Sorry, I just need to get rid of this thing that's obscuring half my slides. Okay, there we go. Thinking about the fact that most of the variation that we detect and return to patients is rare or unique to a family. Actually, I just calculated it: 78% of the over 2 million variants in ClinVar have only a single lab submission. And if we return only pathogenic, those really well-supported variants, it will have a major negative impact on underrepresented populations. So how do we think about equitable interpretation and return of content? Also, should we indicate the presence of a VUS on a screening report? Most labs do not do this, but I just more recently discovered that Color Health is indicating the presence of a VUS on screening reports. And they do it in a way that isn't the specific variant, it's sort of a sub-note: by the way, you have one or more VUSs that haven't been interpreted and put on this report. 
And at first I was like, oh my God, what are you doing? You can't do this. We don't return VUSs in screening. But they found that the shock of patients when a variant was updated was so significant that they needed to sort of preempt that expectation. And so they have started with this. But this is an area ripe for research, right? What is the best way to do this? They've tried it one way, most others don't do it that way, but what's the right answer here? And I will also say, in the next sequence variant classification guidelines, as Les knows well, we will add VUS sub-tiers. And so this big massive bucket in the middle between likely benign and likely path will get subdivided. So it's a huge opportunity to study what do we do with each of the sub-tiers? Do we put the VUS-lows on reports? Do we put them in supplements? There's all sorts of things that can be done. And we have an ACMG work group that I'm co-chairing to put some guidance out there, but most of the guidance is going to be, somebody needs to research the effective approaches to deal with these sub-tiers. So that's a huge area for research. And I will just put in some data, so this is data that we're assembling right now from Baylor, Mass General Brigham, and Quest, all of whom have already been using VUS sub-tiers, showing the dramatic correlation with reclassification towards pathogenic or benign. So if you're in the VUS-low, almost no variant has ever moved to pathogenic. So this really can help us as our clinics are getting inundated with reclassifications. 70% of variants at MGH in the cancer clinic got updated, and it's sending the physicians into a frenzy. Okay, there's also this really complex scenario, and if we were to think about carrier screening as an element of the population screening focus, the ACMG standards suggest that you report path and likely path variants, except that you should report VUS when the partner is positive for a path or likely path variant. Talk about complexity. 
So most carrier screening happens sequentially. Mom, when pregnant, comes in, gets carrier screening. If positive, then the dad gets testing, and then he could have VUSs reported, but not the reverse, if you did it in the wrong order. So how do we think about supporting an entire ecosystem of carrier screening that's effective and follows guidelines, that needs to be couple-based, and what if the partner changes? Then you have to go back again. So I think there's a whole area of research just around effective carrier screening: how to move it to preconception and how to do it couple-based. So then there were a number of discussions around how do we make test results most useful and understandable to the ordering physician, especially as we engage more non-genetics providers, defining the value of, we talked about this just in the session this morning, the value of EHR integration for improving the utility of genetic testing. Also, labs cannot provide care recommendations that are specific to a patient, yet the physicians all the time ask, what do I do with this result for my patient? So what are the options, and how can we study ways to actually achieve the guidance to a physician that they want, without overstepping by a lab that doesn't have the expertise to make care recommendations? So are there models to pair lab reports with physician consultation? What clinical decision support tools could be developed to guide decision-making after a test result? And that guidance may differ based on the variant-level evidence, based on the confidence of causality, does this finding correlate with the patient's phenotype, as well as guidance based on patient choices. And we talked about, you know, the same result, two different patients may want to do two different things based on lots of factors, perceptions of risk, importance of certain outcomes. 
So I think, you know, really studying how we make results useful and understandable and consumable by both physicians and patients is a critical area. And then we certainly touched on supporting a genomic learning healthcare system. How should patients be consented for genetic testing to ensure the most robust learning from the data, ensuring that ClinVar submission can happen? But going beyond that to, and this is really not happening today, sharing case-level data, the genotype and the phenotype. Right now, the phenotype sits in the healthcare system and the genotype sits often in external reference labs. How do we marry that data and have it flow into a genotype-to-phenotype repository? And this is critical for defining the phenotypes of rare diseases, their expressivity, their penetrance. A great example I put here is the DECIPHER database. This is based on real patient data. It's publicly accessible to see aggregate phenotypic data on patients with pathogenic variants. So that's a model, but how can we really funnel all data and develop systems for that? We need to create variant-level queries and share data in real time through federated approaches. So how can we implement this kind of network of sharing? And then Bob and I were talking yesterday around consent to share data with family members, which is critical in genetics. So how do we implement that as well and learn across family members? And also, we have really no mechanism today for how physicians or patients can provide data back to the lab after they've done follow-up and clarified the significance of a variant through, perhaps, clinical testing, like an enzyme functional assay, imaging, metabolic studies, or even results of segregation testing, where they don't always give the phenotype on the family member back to the lab. And how do we transmit that in a useful way that doesn't burden the physician, doesn't burden the whole system? Which is definitely an issue. 
Then we touched briefly on the notion of genome reanalysis, or really also reuse, as we think about genomes being used for multiple purposes. So what type of infrastructure is needed to most effectively support reanalysis and reuse of existing data? What if data was generated in one lab but secondary use or reanalysis happened in another lab or clinic? Do we need to develop universal quality metrics to determine when data is analytically valid and doesn't require orthogonal confirmation? So how do we think about that? Or do we simply share raw data and you just go back to the primary data? What are the best approaches? And what results can be used directly from a genome, like well-validated pharmacogenomic variants queried upon drug ordering, versus something that requires a professional interpretation based on new symptoms, for example. And this came up earlier this morning. How long is an exome or genome useful for before technical advances indicate running a new test? And there's lots of models being developed for this long-term use of genomes. But what is the utility? How long is that? Is it two years? Is it three years? Is it five years? What are those sorts of approaches? So that's everything I could think of. I'm happy to take feedback as well as gaps in what I might be missing. We have a couple minutes for thoughts about Heidi's comments. Christine. Yes, I definitely agree with everything that you just said. The bi-directional communication between providers and the laboratory is something that has been very difficult to establish. We did it with the UDN, and it's been very successful sharing raw data and then hearing back, because they have firsthand information about the patients and current information, and seeing if that changes the interpretation. So we'd love to better understand that and have better guidelines. 
And then reclassification and recontacting, especially just effective mechanisms for recontacting patients, because of losing contact and patients changing physicians, is also really important. Yeah, I forgot the recontact. I'll add that to the list. Good point. Can I just follow up, Christine? How willing do you think laboratories are to have to do the extra work to get that information back? I think they'd be very amenable to that. It could be through a portal. Basically, you deliver the results through the portal, the physician reviews it, and then checks, yes or no, additional phenotype information that maybe has just come up. I think they'd be very interested, and obviously it will help with variant classification in the future. And some labs do this. Well, we would only do segregation testing, and in fact, the way we did it was we did it for free for VUSs if they provided phenotypes. So there are some ways to force that. And there also is an application, I think, built through the CSER program from the Mount Sinai group called, I think it was Genome Diver, where it was a portal for the physician. And they initially used it when there was a set of variants that were candidates for the phenotype, that were dependent on indications or phenotypes that might be in the patient, and the physician could go in, kind of review the six candidates that came up and say, definitely not this. Oh, maybe that. I'll do a test and see, and provide that feedback. But that's a pretty intense sort of way to do it. Okay, thanks. So Bruce, we're really making you earn your... Yeah, Josh. Go ahead, Josh. I really liked your recommendation to tie indication to the test. I wanted to ask if the downstream benefit of that would be to customize the lab report to include sort of critical information, like absolute risks or something that the providers would use, or does that really need to be... 
that interpretation needs to be a separate layer, and based on laboratory rules, you really have to keep reporting the raw data. So, I mean, I might not totally follow your question, but in the standard framework for diagnostic or symptomatic genetic testing, there's clear guidance that you may interpret the variants for pathogenicity, but then you need to provide a separate sort of, did I answer the physician's question, if they were asking for a diagnosis for a symptom, and that uses terms like positive, negative, inconclusive, different from pathogenic, uncertain, et cetera. So there is that notion of an indication-specific interpretation, but with respect to the sort of probabilities, that varies based on what test it is. It used to be that something like residual risk was always provided for carrier screening, for CF, for example, because you knew all of the allele frequencies and what the variants you were testing covered. Now, for the expanded carrier screening, we just don't have that data on so many of the genes, so you can't really effectively provide residual risk, but there are clearly some areas where you can provide that for certain diseases, and so I think that would be useful, but I don't know how often it's done. Bruce, before I turn it over to you, did you have your hand up for a question for Heidi? Actually, I did, yeah. I was wondering whether it's realistic to consider having some kind of standardization of the formatting of reports across labs. One thing that has struck me over time is each lab has its own favorite way of presenting information, and if you're especially not an expert, which increasingly will be the audience, you could easily get lost by just not knowing where to look for the critical things, and if there were some kind of standardization, including, like, psychologists and design experts who are good at knowing how to draw the eyes to the relevant things, that might actually reduce the possibility of error. 
It's a great question. I think it's been a real challenge because of the number of laboratories that use commercial software platforms, whether directly through the electronic health record, like we had to use our own pathology system that AP and CP used for reporting. Other labs use commercial platforms, whether they be Fabric or Imagine or whatever. So I think there is some challenge. It doesn't mean that we couldn't get all these groups together and really try to push on this, but the systems that have to share with AP/CP systems, where pathology is a bigger player than genetics, may make it a little challenging for those integrated with an AMC sort of system. But certainly so much of the testing is done by big external reference labs, most of whom actually probably design their own, that you could imagine trying to tackle this area. So it's a good point, Bruce, and I think some dedicated research into designing how to display information. And my point earlier about can we suppress VUS-low or things that you shouldn't spend time on? I think that would be part of that research of designing the best report and where you put information to make sure it's seen, or in some cases suppressed unless you are really savvy. Okay, well, thanks. Bruce, I guess for your second presentation for today, we're anxious to hear about your thoughts on research follow-up to testing. Yeah. So a couple of points to make. One is inevitably there will be some overlap with what you've already heard. It's not so easy to kind of split this all out. Secondly, I had to step out a couple of times for other obligations. So it's possible, in fact, it's probable that I've missed a few things, but I'm sure others will help to fill those in. So, oops, where did my slide go. Oh, I see what's happening over there. There we go. All right, so the first is on uptake of interventions. 
And although there's overlap with, I think, what Mark was talking about, before you get into what are you going to do with the intervention, you do want to make sure that realistic expectations have been set. And that brings us back to the consent process and the need for community engagement and recognizing the risk of false reassurance and making a distinction between clinical testing and screening. And, you know, I think there could be value to research into mechanisms of obtaining consent and making sure that there's an alignment between what the participant expects and ultimately what comes back. I think this point was made earlier also, but I'll just emphasize it. You know, we have an opportunity as we look at these various kinds of cohorts that are subjected to the research-based screening to generate data on penetrance. The data we currently have, which largely is derived from family studies, is likely to be biased. But also we have to be a bit more precise about penetrance for what? I mean, penetrance is generally accepted to be an all-or-none thing. You either have whatever it is you're specifying to be the phenotype or you don't. But then again, what do you specify to be the phenotype? And when we say somebody is non-penetrant, maybe that means they didn't develop cancer, for example, or they don't have a clinical arrhythmia, but it doesn't mean that they don't have something that may or may not be important in their care. And so we have an opportunity, particularly as we identify people on the basis of screening, to follow them longitudinally and begin to learn what exactly is the natural history of these conditions. And somebody who is thought to be non-penetrant may not be so non-penetrant if you really kind of look carefully into the phenotype. I also wonder, as we talk about generating screening protocols, to what extent will the All of Us dataset be a context in which you can model this? 
So in principle, now, you know, a million is a large number from some perspectives, but it's maybe not such a large number when you consider the entire population. But it is a very diverse dataset. For some of the kinds of things we're talking about, the risk is low, but it's not vanishingly low; it's in the couple-of-percent range, maybe. And one could imagine that you could take a kind of virtual population defined from the All of Us dataset and ask the question: well, what if we had screened these individuals for anything you might want to consider, given the kind of genomic data that's available in that dataset? And realizing that the electronic health records that have been shared are probably, in fact certainly, incomplete. It doesn't mean that because something wasn't mentioned, it wasn't there, but it still does give you at least some window on what the potential utility of having screened for that would be. So it just strikes me that it could be available for that kind of use. I think the point maybe has been made, but to emphasize it: cost and value are likely to be a moving target, because costs change as technologies change. And so what's not cost effective now might turn into something that is cost effective over time. I think the point's been made that we need to standardize outcome measures and come up with ways to compare outcomes across different studies. And I guess implicit in that, too, is that many of the things that are currently done with population screening, as certainly was true in our project, don't necessarily have complete control over being able to follow outcomes. And there may be some value to modeling this in settings where part of the consent process also involves looking at longitudinal outcomes, so that you don't lose track of the participants as easily as otherwise might be the case.
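The kind of back-of-the-envelope modeling described here, how many carriers of a given variant a cohort of a given size would be expected to yield, can be sketched with a simple binomial expectation. The 1-in-6,000 carrier frequency below is an assumption chosen purely for illustration, not a figure from the meeting.

```python
# Sketch (illustrative only): expected carrier count in a cohort of size n
# for an assumed carrier frequency, with a rough normal-approximation
# 95% interval. The frequency used is a hypothetical placeholder.
import math

def expected_carriers(n, freq):
    """Return (expected count, (lower, upper) 95% interval)."""
    mean = n * freq
    sd = math.sqrt(n * freq * (1 - freq))  # binomial standard deviation
    return mean, (mean - 1.96 * sd, mean + 1.96 * sd)

# ~1M participants, hypothetical rare condition at ~1 in 6,000:
mean, (lo, hi) = expected_carriers(1_000_000, 1 / 6000)
print(f"expected ≈ {mean:.0f}, 95% interval ≈ ({lo:.0f}, {hi:.0f})")
```

Even in a million-person cohort, a rare condition yields only a couple hundred carriers, which is the numbers problem raised in the discussion below.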
And then finally here, the need for point-of-care decision support tools, which can include genetic counseling, which I think often is the case in the way studies are currently done. And it can also include ways of integrating data into electronic health records, because clinicians will likely expect that anything they're going to use for clinical decision making is just going to be there in the electronic health record. And a point I made yesterday, which I really think is something that could be ripe for research, is the development of AI-based systems, because, as I said the other day, I just don't believe that, as we do more and more wide-scale screening, the genetics workforce is going to be scalable to meet that need. And so coming up with ways to build systems that can handle most of the common situations matters. It doesn't mean that there won't be a need for people doing counseling, but it may expand the workforce in a way that we otherwise are going to have a very hard time accomplishing. The second area I was asked to look at was cascade testing, and I think there could be a lot of work done on how to facilitate communication of information through the family: how much of that can be done electronically, what are the best ways to equip somebody to communicate to the family (because inevitably that has to be done through the individual who is screened; who else is going to know who their relatives are?), and to look at what effect this has on family relationships. Obviously family relationships will impact the ability to communicate this, but then, after the fact, it could affect family relationships as well.
I think this point has already been made, but just to put it in this context: if you have family data, you potentially have the ability to look at the question of why the penetrance is greater in some settings than in others. Are there modifying genes, for example, that could be relevant? And it struck me, thinking about this, that All of Us really is based on the enrollment of individuals, and although you can encourage your family to enroll, I don't think that there is any current way of linking that they are your family. But I wonder if something that should be considered is an option to enroll family members where you do create linkages, so you can actually begin to study this within the All of Us context. I'm actually asked to head a working group for All of Us on the application of the All of Us dataset to the study of rare disorders, and this is one question I think we may put on the table. How it will be received I don't know, but it does strike me as an opportunity. And then third, much of this has already been talked about, particularly by Heidi: transportability through the lifespan, this question of reanalysis versus re-sequencing (the technology surely will evolve over time), the fragmentation of health care, and then the question of where sequence data should live. In a health system, which currently I think often might be the case? But most people probably don't go through their entire life connected with just a single health system, if in fact they have any health system that they are connected with. For those that participate in research, the data may exist within that research enterprise, let's call it, but how available that will be for use outside that context is probably quite limited. It's possible that you could, you know, provide the data directly to individuals. Of course, it's a big dataset, but there may be ways that could be done, on some sort of memory device. Whether that's practical, and will people remember where they put it, and that
sort of thing, is certainly an open question. There could be so-called, what I'm calling anyway, DNA data banks that come into existence which span institutions, so you don't have to worry, if you move to another city and establish your care somewhere else, that the data that were collected about you at the last place are unavailable to the next place. Or there could be, and maybe this is encompassed in that, commercial entities that exist specifically to ensure that data are available whenever and wherever you tell them you want them to be shared. The last thing I'll do, and you know, you probably can't read this, I actually can't, looking at my computer screen, but I'll tell you what it is, and it was done on the fly, so I wouldn't claim it's in any way complete: it's looking at things in kind of a flow chart. It takes into account, on the top here, the population, and, I can't quite read it because it's obscured, but it was the laboratory, and then the health provider, and a geneticist, speaking broadly, whether it's a counselor or a geneticist. And then horizontally is the process of engagement, I guess I could say engagement and consent, then screening, then the outcomes, and then future use. It sort of tries to establish a flow chart, and in the margins are some of the kinds of research questions that can be addressed that are relevant to these different areas. And like I say, it was done on the fly, so it's almost surely going to be incomplete. Some people find this to be a good way to organize information; some people don't like it at all, because it sort of forces you to put things in boxes, which don't always fit really well. So, you know, I present it just as a possible way of organizing information, and we can see how useful it turns out to be. So I'll stop at that. Thank you. Thanks, Bruce. Are there any specific questions or thoughts related to what Bruce has just said? Just really quick, I want to highlight two challenges with using prospective cohort studies to
study the penetrance of pathogenic variants. The first is just the numbers issue. I was sitting here during Bruce's talk running through my head how many carriers there are likely to be, even in All of Us, which is a really big study. I was doing it for the NCI's Connect for Cancer Prevention cohort, which is much smaller, and the numbers, I mean, they're there, and they're both very diverse datasets, which is a great opportunity, but the numbers will still be small. Again, it's a surmountable challenge. We have 15 to 20 years of experience conducting collaborative studies, so no one study is going to be able to provide this, but across many we can get that information. And the second is that every cohort is going to deal with its own ascertainment issues. So I don't know that All of Us is going to be strictly representative of the US population; who enrolls is going to be just a little different. But that again is a surmountable challenge: by linking back to the enrollment population, you can either see what happens in the US general population, or, if your target population isn't the whole US population, maybe it's folks who are living in New York, you could probably reweight the results to be appropriate in that setting as well. So challenges, but not insurmountable. Bruce, a reaction to that? I think it's certainly true. We took a look, since I have an interest in neurofibromatosis, at how many people with NF are in All of Us; the answer is like 160 or something. So if you're going to do big modifying-gene studies, that's probably not enough to be statistically meaningful. But on the other hand, you know, you've got 160 whole genomes now in this population, and it's worth something. And I think for some of the traits we're dealing with, the numbers may be pretty significant, and for sure, for others they won't be. So, you know, there's no study we can do, short of sequencing everybody everywhere, that is going to get completely around that challenge. Erin? Thanks, Bruce. You know, I have
to say, you're really doing an incredible service to us, sitting up in a hotel room, locked away, with a very small genome invading your own, so thank you so much for doing this. I wanted to react to your comment about using the All of Us research program to model some of the conjectures that we currently have, and maybe point to a paper in today's New England Journal. I'd love to take credit for, you know, having read today's New England Journal, but actually this was sent to me by Gail, and Sharon Plon wrote an editorial on it. It's entitled "Actionable Genotypes and Their Association with Lifespan in Iceland," and what the Icelandic group did was to take the ACMG 73 (at the time, because that's all there were when they were starting this effort), and basically, this is the way I've interpreted it, please forgive me if I've gotten it wrong, take everybody there who had pathogenic or likely pathogenic variants and put them in one box, and then everybody who didn't in another box, and compare lifespan. And there was a three-year difference in lifespan, which is really quite interesting. I'm not quite sure what it means or how we would act on it, but it seems to me that's the kind of research that could be done with some of these large biobanks, and I was just curious as to whether people thought there were other kinds of questions that might be answered in databases like this. So the Icelandic study was in about 60 or 70,000 people. Sorry, Dan, you have to get... The Icelandic study was in about 60 or 70,000 people? Yeah, it's maybe 200,000; they have the advantage that they follow them for 100 years, so they know about mortality. 57,900 or so. So eventually, you know, All of Us and all the other biobanks that have large whole-genome sequence sets should be able to do this. But Kári Stefánsson has been doing this for a long, long time and has all those records, so we should be able to do it. You're right, true, but you
know, can we go beyond that analysis to do some others, maybe more? You know, they've then separated out the cancer genes and said those are the people who had those variants and died of cancer. So what other questions would you want to answer, Dan? Cardiomyopathies. They allude to the cardiomyopathies, but I think most of the data were in cancer. But they have a graph that sort of follows people from the day they're born to the day they all die in their 90s, so that's a pretty impressive, detailed, and long-term follow-up. So I think it'll be a while for some of us to be able to do that. Yeah, I suspect a fair amount of that is modeled rather than actual, but again, I'd have to look at it more carefully. Mark? Yeah, I think it's also important to recognize that not all penetrance is created equal. What I mean by that is that effect size can sometimes trump numbers when we're looking at a biased versus an unascertained population. Just to demonstrate that, a little audience response here: how many people think that the incidence of infertility in Klinefelter syndrome males is above 90%? Show of hands?
Okay, is that just "I don't know"? Or how many don't know? Okay. Well, I can tell you that in medical school we were pretty much taught that individuals with Klinefelter syndrome are infertile. Well, when we did our copy number study at Geisinger and identified a couple of tens of Klinefelter males, nine of them have families, and we don't think all of that is adoption. We haven't demonstrated that definitively yet, but it raises the idea that we only had 30 people, and yet we saw a signal of a phenotype that we didn't expect to see that was really quite dramatic. So in any case, the point of this being that if we think about what the purpose of screening is, then perhaps these big-impact phenotypes are really the thing to be looking at. And the other point related to that was the "penetrance of what" that was raised in one of the comments. This is something that's come up in the ClinGen Actionability Working Groups, where we have genes, and we have several different things that are potentially actionable based on outcome and intervention. As we discuss them, we say it's this one, this outcome-intervention pair, that really drives whether or not this gene potentially has clinical utility and actionability, and the other ones, while they may have importance to the individual, are not the driver for where the health benefit could be. And as we assess the complexity of the gene-phenotype relationships, we need to be thinking about the ones that are likely to have the biggest impact on health outcomes, recognizing that other health outcomes may in fact contribute small amounts to the benefit or utility of an intervention. Okay, just to ask the broader crowd here: is there anything that you can think of that wasn't covered by the three presentations that we've heard today that would be good to add to this list? I can't remember if Heidi included it or not, but the conversation we had with Bob about better aligning the standards in the EHR space, the clinical space,
with the GA4GH research space, to facilitate the genomic learning healthcare system. Was that in there, Heidi? Okay. Anyone else? Yes? Sure. Something that I think came up a lot more yesterday, and forgive me if it was in one of those presentations, is the concept of false reassurance, and just the number of negative results, and the power that studies could have to dig more into how big of a problem that is, what's happening, and also how you can help mitigate those risks better. Yeah, I'll echo that. I was really struck yesterday by the number of times people mentioned the false reassurance problem, and obviously, in the context of "do no harm," false reassurance is not a good thing. Great. Okay, I think that was in my list here somewhere. Yeah, and I might add just a teeny caveat. And I've forgotten, was it Mike who was talking about how incomplete penetrance is not only genetic, and pointed to smoking? I think we do have many non-genetic examples of false reassurances that patients take away: you know, your blood pressure is normal today, your blood sugar is okay right now, whatever. And there's not a lot of angst in the clinical community about that, and, you know, there probably should be, but there's not. But again, I think we do ourselves a disservice, and my friend to my right will probably speak to this, in focusing so much on the harms that are not unique to genetics. So yes, we should focus on harms, and yes, we should avoid them, but let's not make it appear as though this is such a dangerous thing compared to everything else we do in medicine. Mark? The way I might frame that, again taken from Les's talk, is that this is not exceptional, and yet, because we've come out of 30 years of thinking about this as exceptional, we tend to default to an exceptional position. And so I think an overarching theme to remind ourselves of is that, you know, this is medicine, and so we shouldn't treat it as
exceptional unless there's extraordinary evidence to suggest that it is in fact exceptional. If I could just follow up on that, I'm reminded of the missing heritability days, you know, back in 2009. I saw it flash by here once or twice, and I remember presenting this to our advisory council, and the non-human geneticists said, why are you guys getting so excited about this? I mean, in Drosophila you'd have explained 30% of a trait, or in this model organism or that one. But at any rate, I think we do a little more self-flagellation than perhaps is good for the field or for the science. I mean, I think this just generally points to the need for better education and, you know, communication of what the results really mean. And I suspect if we looked at other screening tests that doctors do, patients misunderstand their negative results, or don't even know if they had negative results, or what they meant at all. So I agree it's probably not exceptional, but we could model good behavior for the rest of our colleagues in how to communicate a negative result. Maybe the research is comparing our experience to some others. Yeah, the way I like to say it is, I have never met a surgeon who's afraid of scalpels. We are the only specialty in the academic healthcare center that seems to be more afraid of its own technology than anyone else that practices medicine. It's just utterly bizarre. We have a very robust, well-funded industry of people whose job it is to come up with any hypothetical negative risk or harm of genetics and genomics, and risks and harms are real. If you do anything, if you touch a human patient, you have a risk of harms, and the only way to avoid harms is to not take care of patients, which is another harm. So, you know, the Isaac Asimov quote, I think, applies, right? The uncertainty that comes from ignorance is not the same as the uncertainty that comes from knowledge. And we can reduce harms if we take them all into account; we can absolutely do that. Just one
brief comment: unlike blood pressure testing and colonoscopy, there are a lot of tests that one expects to do again, and there are also tests where, if the result comes back weird, you just say, well, let's run another one. I don't think genetics is in this category, and that does mean that it has to be treated a little bit differently. Not entirely, but in some cases it's correct: the persistence of the data. I mean, there are times, I'm sure, Christine, when you get second requests, and, Heidi, when you get repeats: you know, "this can't be," and so you send it back. So, okay, thanks, everybody. I think we're going to move into the final wrap-up here, and so I'm going to turn it over to Teri. Good. So this should look like laying the groundwork, and yes, I know I'm screen sharing, I don't care. There we go. So we talked a little bit about this early on, but these are a series of things. It's too dense, I'm sure, for you all to read. I can put it in full screen, but then I won't be able to see it at all, so sorry. Just a moment, I'm trying to get rid of a box that won't go away; my colleagues had the same problem. So anyway, what we will do is pull these together and share the slides with you, as well as a white paper, but just to kind of run through them before you all take off. We heard about basic principles of testing for disease versus risk being different, and genetics is now moving into, or could move into, risk testing, which is different from testing for disease; and the importance of disease prevalence in the predictive value of tests, which has been recognized in clinical medicine for a long time but is not always appreciated. The whole of genetic testing and reporting, we heard, would shift from sort of consoling people that now you have this diagnosis and you need to adapt to it, to actually motivating health behaviors to reduce the risk that we've identified genetically, in the cases where that can be done, and in many cases it can. We did hear some consensus around screening strategies addressing tier 1 conditions. We also
heard that hypertrophic cardiomyopathy has been added to the tier 1 conditions; I'd like a reference on that, because every time I pull up the CDC tier 1 list I get those three. Yeah, okay. And then every now and then you see HFE hemochromatosis as another one. So clinical utility, I think everybody recognized, needed to be the sine qua non for implementing genomic screening, recognizing that if you return these findings, they should prompt interventions that result in improved health. Medical actionability: just because we can do something doesn't mean we should do it. So engaging the population and engaging expertise should drive the list for screening. Mike made the excellent point that APOL1 and TTR are more important in some populations than in others. It's hard to choose, or to know, who exactly is going to have these variants and who isn't, but I think those are important considerations at least to keep in mind, especially as we heard yesterday that American Indian and Alaska Native groups found only five of the genes on the ACMG list to be relevant to their population; how one defines that is a matter of debate. Timelines and milestones for population screening will be important, and I'd love to hear people suggest what those should be. Also some standard criteria, we've been calling them the Richards criteria, for selecting screening tests and populations. Then research on when we do and don't need either genetic counseling or, as Heidi mentioned, genetic counseling assistants, which is another area; how to incorporate individual patient preferences. We did hear some suggestions about AI and online tools and how those might be used, and I think we do want to try to harness those, particularly given that genomic data are in some ways uniquely relevant to AI, because they are so massive, and a human can't manage all three billion base pairs; on the other hand, you can manage the five or six variants important to your patient. And we heard several times about studying the biology of
penetrance and the evidence needed to prove pathogenicity. So I kind of ripped through those. Is there anything up here that gives people real pause, or that I got completely wrong, that you'd like to modify? Maybe it's the Richards criteria; do you mean the Wilson and Jungner criteria? No, I think the comment was made, and I guess I can't get at it. We do variant interpretation using a standardized set of classes, so we do the classification, and the probabilistic assessment that you had suggested indicated that there could be pieces that you could plug in. And I didn't want to call them by the name of criteria that already exist, but that was the idea: that we could create a scoring sheet, if you will, where you would say, here's the information you need to plug in to be able to make a determination. That was the point. So everyone's okay with this? Sorry. One of the things that I heard today that really resonated, and that I hadn't really thought through, was why HBOC screening is so much more effective than anything else, and it's because of the efficacy of the downstream intervention. If you'd asked me yesterday, I would have said, well, we should add TTR, because obviously there's something there and there's an intervention, but whether that intervention is anywhere near as effective as an oophorectomy in a 25-year-old who's destined to get ovarian cancer remains to be seen. So I think that's something I learned today. That was an excellent point. Absolutely. Great. I'll see where we can work it in. Moving on to screening technologies, we heard a number of lessons learned, and slides will be available on the website.
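The earlier point about disease prevalence driving a test's predictive value can be made concrete with Bayes' rule. The sensitivity and specificity below are assumed values chosen for illustration, not figures from the meeting.

```python
# Sketch of Bayes' rule for screening: positive predictive value (PPV)
# as a function of disease prevalence. Sensitivity and specificity here
# are hypothetical placeholders, not numbers from the discussion.
def ppv(prevalence, sensitivity, specificity):
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Same test, very different PPV as prevalence drops 100-fold
# (falls from roughly 0.91 down to roughly 0.09):
for prev in (0.01, 0.001, 0.0001):
    print(prev, round(ppv(prev, sensitivity=0.99, specificity=0.999), 3))
```

This is the same arithmetic behind the later comment that screening younger, lower-prevalence populations yields more false positives.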
I hope those of you who prepared slides knew that we were going to provide them publicly, but at any rate: lessons that have been learned in optimizing high-throughput screening and testing, and also the need to be very careful about adopting technological advances and establishing quality metrics, which we've done for research sequencing and clinical sequencing and should do for other tests as well. We need research, as I mentioned this morning, on assessing compound heterozygotes. I hadn't really even thought of that, Bob, until you brought it up, and I think it makes perfect sense, especially when the second variant is one that has not been well interpreted. Distinguishing early- from late-onset forms of disease, in terms of when one screens and what one does about it, as well as the potential for pre-symptomatic management, where obviously you'd love to catch somebody before they develop disease; that's the whole point of prevention. There's vast inequity in variant interpretation, as there is in almost everything else we do in medicine, but in particular I think this is an area where genomics stands out, because of the terrible databases that we currently have in terms of underrepresented groups, something that I think will improve over time. We don't yet have a 500,000-person African database, but Africa is developing one under Ambroise Wonkam and other groups, and Meharry is working with, I believe it's Regeneron, to develop one within medical schools. T4C, yeah. So those will help with African ancestry and African-American representation. There are still, as you know, four-sixths of the world's population that isn't covered by that and really needs to be. Let's see, need to figure out the handoff.
Oh yes, this came up multiple times: once you have a positive test, how do you make sure that people actually get into care, and what is that handoff? That, I think we heard, could be researched: should it go to primary care, should it go to this group or that group, whatever, but that's a real opportunity, I think. Better estimates of the prevalence of monogenic diseases from biobanks, and Jonathan may have mentioned that. Recognizing potential harms of false positives, which we've heard about several times, and considering the number needed to harm, which I don't think we look at as much as perhaps we might; it may be much smaller or much greater than we think. A lot of talk throughout the two days about negative testing: how people interpret it, what's the role of genetic counseling in it, and what kind of follow-up is needed. Dan made the point that patients with a very clear phenotype, a long QT interval, whether they have known genotypes or not, you're still going to follow them; somebody who is having many, many polyps is still going to need colonoscopy whether they have known variants or not; but perhaps those without a phenotype don't need either genetic counseling or much in the way of follow-up, and those again are researchable questions that could be looked at in existing biobanks. And then the importance of linking back with our basic science colleagues; we had a whole genomic medicine meeting, number nine, on linking basic and clinical scientists, and, you know, engaging with some of the large-scale databases that are trying to interpret variants and their relationship to function, high-throughput assays, et cetera. So is there anything here that gives anyone pause, or that you'd like to add? You have your little thing. Okay, did I exhaust you? Yeah, okay. Nothing from anybody, no hands in the queue. Okay. Yeah, and those on the web, just speak out if you want to. Logistics of populations. Sorry, Teri, taking a second to digest; you
went through them quickly. I did. In quotes, where you have "consider the number needed to harm," can you say that differently? When someone's reading this without that understanding, it's sort of the number needed to cause harm, so you just might want to... Oh yeah, yeah. So, Ned, this was your idea; I can't think of how to say it, because there's number needed to treat, number needed to screen. If you say number needed to treat for a benefit, then you'd say number needed to treat for a harm. Yeah, okay, thank you. Still, it does seem awkward. I'm sure there's another way to say it, like the number of screenings needed for one harm. You could say the attributable harm risk, if you wanted; it's important to specify the harm and who bears it. One term is disutility; that's a little bit dry. Okay, all right, we'll work on that, just so that when this ends up in the Washington Post we won't be embarrassed. I'll put a phrase together that has the attributable risk and, quote, number needed to harm. Yeah, that's a little bit easier; that way it puts it in the same vernacular as number needed to treat for a benefit. Yeah, I'll send you something. Oh, you'll send me something even better, okay, great, thanks. Other comments on this? Going into logistics of screening: a lot of discussion about who manages the results, which was alluded to earlier in terms of who follows up on these things, and the excellent point that was made that barriers to specialty care are even greater than barriers to primary care in disadvantaged communities, something that I think we don't always appreciate. Primary care providers shouldn't be expected to become genetic counselors. Looking at multiple models, not only for clinical decision support but also for referrals and that sort of thing. Consider a meeting, this was Carol's suggestion, that perhaps we should have a meeting of both
genomic medicine believers and primary care leaders, because we were sort of reeling about, gosh, how do we get primary care providers to take this up. Well, let's try to understand a little bit of what the various perspectives are, how we weigh the evidence that's available, and how we look at the benefits and harms of this, to try to address the skepticism, which would be an interesting thing to do. We do have convening power at NHGRI; we could consider it immediately. We heard from many, many stakeholder communities: guidelines need to be simple and easy to follow for patients, providers, payers, clinicians, everyone. We need meaningful engagement of communities; we've heard a lot about this. The point was made that disparities start in utero, you might even say preconception, but at any rate they are pervasive and need to be addressed with the communities affected. The excellent point was made, and I think we heard about this through the PennChart initiative, that integration with clinical care can improve lots of things: if we make these data available to all the clinicians that touch a patient within a system, as well as to the patients themselves, it can improve access to screening and follow-up. We heard about identifying and examining past harms that have happened, particularly with genomic screening, and then avoiding, if we can, building them into future systems and research. Gaps, again repeatedly: the need to incorporate social determinants of health, which we usually cannot get from the electronic medical record but need to find some ways of collecting; identify and limit barriers to participation; and incorporate these principles throughout. So I think with that, if anyone wanted to change anything, I can take a breath and take a sip of water. Any additions or modifications? I only have 12 more slides. Did I skip one?
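The "number needed to harm" wording debated a moment ago is, in the same vernacular as number needed to treat, just the reciprocal of the absolute risk increase. The harm rates below are placeholders for illustration, not data from the meeting.

```python
# Sketch: number needed to harm (NNH) as the reciprocal of the absolute
# risk increase between screened and unscreened groups. The rates used
# are hypothetical placeholders, not figures from the discussion.
def nnh(harm_rate_screened, harm_rate_unscreened):
    absolute_risk_increase = harm_rate_screened - harm_rate_unscreened
    if absolute_risk_increase <= 0:
        raise ValueError("no excess harm observed")
    return 1 / absolute_risk_increase

# e.g. harm in 0.5% of screened vs 0.1% of unscreened (hypothetical):
print(nnh(0.005, 0.001))  # ≈ 250 people screened per one excess harm
```

Framing it this way keeps the benefit side (number needed to treat or screen) and the harm side on the same scale, which was the point of the exchange above.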
Community engagement, yes. Well, I think we talked a lot about community engagement. I'm not going to walk through all of these, but as was mentioned earlier, only five ACMG genes were relevant to one population, which I had not realized is as large as 5.2 million in the U.S., so a group that we definitely need to address. And the point was made: when a patient is referred out of that community, are they getting the tests that they need, are they getting tests that are even relevant to them, testing the variants that are appropriate for them, et cetera. The fact that there are other underrepresented communities that do not have the kinds of organized consultation and governance structures that are characteristic of American Indian and Alaska Native tribes and communities; so how do we, I don't know that we can address that, but who is it that is the community, and how do we engage them? And the point being made that the policies that govern research and public health are quite different. Bob made a point on one of our pre-meeting calls, and I paraphrase, that when the state mandates something, the state takes a lot of responsibility for what happens and needs to be very, very careful to avoid and reduce harms. Am I saying it correctly? Excellent, okay, great. And the sadness about evidence-based medicine methods that have remained unchanged ("unchanged" didn't fit, so I put "fixed") for more than 12 years: at least, what evidence do we need to get to more conditions? And the very intriguing idea of finding appropriate healthcare systems that could pilot-test those that are almost ready. One of the challenges being that whenever you look for the really appropriate, ready, well-resourced institutions or healthcare systems, there's a whole community of people that they do not serve, that we need to understand how to serve, which could exacerbate inequities. So is that something where we could work to strengthen those systems, or should we reject them?
The importance seems pretty obvious, but I'm not sure I've seen it emphasized: the importance of engaging patients in developing educational materials and results reports, so that they're interpretable and understandable and don't convey completely the wrong messages; and that engagement is not enough, we need true co-creation and participation throughout a project. Anything here that people would take issue with? All right, just two more. Evidence needed to support screening: we heard about value from Mark this morning, and a nice definition of it, and the need to identify health outcomes that are important to patients as well as to clinicians. The point was made multiple times that our systems essentially define outcomes based on the powerful people, the people who drive the system, and that's often the clinicians and not the patients. Not that the patients' view should be determinative, but it's something to at least be considered. I thought it was interesting that David made the point about confirmatory testing: the cost is really relatively trivial after you test everyone, because you're only pulling in 1 or 2% of the people. As he said, amortized, a $100 confirmatory test would only be $1 or $2 per person screened. That's a good thing to recognize, because I think we tend to see a $250 cost for whole genome sequencing, or whatever, and then $250 for a confirmatory test, and we go, oh my god, $500 times 350 million people, we can't possibly do that. But that's not how the math adds up. We've also heard about how to combine conditions to get good value, because testing for more than one condition is clearly more cost effective; we saw that in the Guzauskas paper and in others. Interestingly, engagement of leadership is a strong predictor of clinician satisfaction with the effort. I might have thought the other way around, that if the leadership is forcing you to do something you don't want to do... but maybe I'm just cranky. At any rate, I think part of it is that good leadership would actually make sure that it works for people rather than, you
know, and would set up whatever resources they have available to make sure that it's smooth and feasible and all of that, as opposed to coming up from the grassroots, where you may not have the authority to make the changes you need to have something smooth that works in your workflow. Carol made some good points about the perils of paternalism, and how clinicians consistently predicted much worse outcomes than patients actually reported: clinicians predicted 35 to 40% would have great concern or reduced quality of life or whatever, and only 5 to 8% of patients actually reported those things. I think it is important to keep in mind whose perspective we want to listen to here. And she made the point: who mistrusts whom? It may not all be in one direction. We've heard for years, I believe, about how we get genomic or sequence data to follow a patient across medical systems and throughout their life, and we're still struggling with that. I have a vision, having seen pharmacogenomics testing being done in Europe as part of a clinical trial: patients had a QR code on their phone, and they scan the code and basically have access to their data. We must be able to do that. Amazon can tell you everything I've ever bought from them for the past 10 years, so we've got to find ways to do this. And noting, among the payers, that if there is clinical benefit, cost is less of an issue; but really, reducing harms is one of their biggest concerns. And if you screen people at younger ages, where prevalences are lower, you're going to have more false positives (that's our friend Reverend Bayes), so how do we make that balance? And then there was the interesting idea of, well, gee, you could sequence at age 18, or you could sequence at birth but not provide the results until that child is of the age of majority and ask if they're interested. Mark made the point that that's when you have the least interaction with them in the medical
care system. I would posit it might even be until age 30 that you don't see much of them, at least the males. How about disconnecting that from the healthcare system entirely, doing it when you go to vote, or get a driver's license, or whatever? And then some suggestions on post-testing interventions for high-risk people: finding the right follow-up models, and helping patients and providers understand the recommendations and increase uptake, since that seems a major implementation barrier we need to address. So, are there things in here anyone would like to modify? I'm not sure the younger ages thing is correct, because I think actually the opposite is true, right? It's the reason why, if you're looking at Wilms tumor screening, screening for WT1 in 80-year-olds, the false positive rate is high, because they should already have had the disease. So I think that's not correct, right, because the genome doesn't change; the prevalence of the risk factor is the same. Okay, silly me, that's all right. So can we say it might have more downside? So we're not even going to say it's wrong; somebody made this point, I swear. Thank you, Dan. So, did anyone make the point that screening at younger ages could have a downside? Because of the quality-adjusted life years: you get more years of life added if you detect the disease at a younger age. So he showed breast cancer screening was effective at 20, but it wasn't effective at 40 years of age, right? So that's not a downside, that's an upside; the reason for doing it is the years of quality life you add, not that the prevalence is different. So I'm the one who, I'm the reason for that. I mean, yes, that's all true, but it sort of depends on what the intervention is. What I had in mind was, as opposed to prophylactic surgery, if you were going to recommend that people get screened, like mammography screening: because the incidence is so low between 20 and 30, there might actually be false positives, right? So I think there is a potential risk for looking too early. I mean, it's sort of like, give the
example of looking past the age of onset, if that makes any sense. And the flip side is that, if it is a later-onset disease, there might be anxiety; there's nothing to do, you're just waiting for the shoe to drop until you're 35. Maybe you could... there we go. The cost-effectiveness study did actually show that it's slightly less cost effective if you were to screen at 20 versus 40, and the reason is that you're doing screening interventions well ahead of when you anticipate the results mattering. So again, think about it condition by condition: for familial hypercholesterolemia you'd want to do that screen in the first decade of life, but for hereditary breast and ovarian cancer syndrome, if you screen too early you incur costs without any incremental benefit in that period of time. So, right-sizing the timing of screening interventions; and I think I captured that on one of the slides I sent you, about understanding the timeline of prevalence and tailoring the screening to that timeline. Yeah, there's also a practical issue that my friend Ellen Clayton always points out, because she's the mother of two boys: men between the ages of 18 and 40 just don't go to doctors and just don't do anything. Women have gynecologists, but men don't do anything. So you have to think about, if you're going to do it at 18, how to counsel people to pay attention; that's a big challenge, I would think. Maybe move it to bars. There you go, yes. Yeah, absolutely. Okay, great, thank you. And then the obstacles to screening session, which was our last session before the three earlier summaries: we did hear again that payers care more about health outcomes than about cost savings. Maybe not all of them are acting that way, but anyway. It was noted that when you're doing screening, you're asking the payers to take on the upfront cost of the screening while maybe getting very little of the benefit, if people are only in their system for two years. On
the other hand, they do pay for hypertension treatment and cholesterol lowering, the benefits of which accrue over decades, so there's a little bit of a disconnect there, and it'd be interesting to try and figure out why. And how many years, 30 or 40, Dan, of research into lowering cholesterol before people were convinced, and some are still not convinced? And hypertension treatment, et cetera. We don't want to wait that long for this. So everyone would prefer simple, low-cost, and low-risk screening, if only. And the point was made that the USPSTF is weighted very heavily by these groups, which is good to know, and it was great to have a colleague from our Office of Disease Prevention in the meeting; we can continue to work with them. There are a number of research opportunities on data and terminology standards, types of derived data, and knowledge management. And we heard, with many thanks to Kate and Dan, basically a roadmap to a genomics-enabled EHR in the PennChart Genomics Initiative, and they have made many of those materials available, which we very much appreciate. There's a website that we can probably add; Kate had sent it to me, so we can include that as well. And she listed a very nice list of challenges, both anticipated (okay, we need to start small and walk it forward) and unanticipated (there actually was a lot more demand for dissemination than they had expected). And then several points about incorporating equity into implementation science, which is a useful thing to do. Anything here we'd like to modify? You look thoughtful, Jonathan. Are you... great. I won't go through these; these are the ones we just heard, so we'll leave those aside. I do want now, with my remaining few minutes, to give many, many thanks to the planning group, which was Jonathan, yes, Jonathan, Gail, Bruce, and George, who are, I'm sure, with us in spirit; and then many thanks to the people who helped make this meeting possible: Jenna Cohen from
the ACMG; Alvaro Brandon, whom I didn't see at the meeting today, but we appreciate his background support; McCoolen and Gerald; and then Jonathan Narula and Riley Wilson from my group, who are taking what will be, I know, an excellent summary; Meredith Weaver from the ACMG; and Emma and Lindsay from UNC for the PDF booklet. Oh yes, Lindsay, yes, up there, hi Lindsay, great. And thanks to all of you for coming and for staying to the bitter end on what I understand is a very nice day outside, so we should get out and try to enjoy it. Rex, do you want to have the last word, just to thank everybody? I thought it was a great meeting, and I'll see you all next time.
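As a note for the meeting materials, David's amortization point from the evidence discussion above can be checked with a quick calculation. The figures (a $100 confirmatory test, a 1 to 2% screen-positive rate) are the ones quoted in the session; the sketch itself is illustrative only, not from any presenter's slides.

```python
# Amortized cost of confirmatory testing: only screen-positives get retested,
# so the per-screened-person cost is the test price times the positive rate.
def amortized_confirmatory_cost(test_cost: float, positive_rate: float) -> float:
    """Expected confirmatory-test cost per person screened."""
    return test_cost * positive_rate

# Figures quoted in the discussion: a $100 test, 1-2% of people pulled in.
for rate in (0.01, 0.02):
    cost = amortized_confirmatory_cost(100, rate)
    print(f"{rate:.0%} positive rate -> ${cost:.2f} per person screened")
```

This is why the naive "$250 sequencing plus $250 confirmation per person" framing overstates the cost: the confirmatory test is paid only for the small fraction who screen positive.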
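Similarly, the "Reverend Bayes" point raised above, that screening where prevalence is lower yields more false positives, can be made concrete with positive predictive value. As the discussion clarified, this applies to phenotypic screening such as mammography rather than to the genotype itself, which does not change with age. The 99% sensitivity and specificity below are invented purely for illustration.

```python
# Positive predictive value (PPV) from Bayes' theorem: at lower disease
# prevalence, a larger share of the positives are false positives, even
# for a highly sensitive and specific test.
def ppv(prevalence: float, sensitivity: float, specificity: float) -> float:
    """Probability of disease given a positive screening result."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Illustrative values only: 99% sensitivity and 99% specificity.
for prev in (0.01, 0.001):
    print(f"prevalence {prev:.1%}: PPV = {ppv(prev, 0.99, 0.99):.1%}")
```

With these illustrative numbers, a tenfold drop in prevalence drops the PPV from about one in two to roughly one in eleven, which is the balance-of-harms concern the payers raised.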