So the scariest thing about being asked, toward the end, to co-chair with Dan was leading this session. We've both been taking separate sets of notes, but Dan's are more organized around major points, so he kindly agreed to use his laptop while we have this discussion. Just as a reminder: we're going home tonight, so unlike some of these workshops, where everyone comes back on a second day to synthesize, this session is our chance to synthesize, and since we're ahead of schedule there should be ample time. So, Dan, I think you're going to walk through what you considered the major points, and then we'll open them up for discussion. Yes, these are what I thought were the notable observations and recommendations through the day; let me scroll through in case you didn't see yours on the first screen. It's organized around the topics of each of our panels: evidence generation, data analysis and interpretation opportunities, reporting and return of results, assessment of the effects of return of results, and then the overall discovery-to-translation process. These are things at the level of how eMERGE has run to date over the last decade. Let me go back up to the top. And you combined electronic phenotyping with evidence, right? Yes, evidence generation opportunities; phenotyping is a key form of evidence used there, so I rolled those together. And again, this was focusing not so much on operations, or the way eMERGE has historically done things, but on ways things might be different. Dan, is it possible for you to project that on the WebEx? All you have to do is get on the WebEx and then share your screen.
Okay, because this is a really important thing to capture for people to be able to see. Okay, let me open the network login page; somehow I got logged off, so let me log back in until I get back to where I was. While he's doing that: for the electronic phenotyping, were there specific points that people felt came out of that discussion? They're probably on his list, but in case they're not. Well, Sharon, since you asked about electronic phenotyping: I think it was Ken who raised the concern that there weren't any EMR vendors at this meeting, and we always struggle with this, because how do you bring one without bringing many? If you have any suggestions on the best way of doing that, it would be wonderful to hear them. I would say in CSER we struggled with this as well, because of the idea that we had to have at least several of the major vendors; we at least had some phone calls where different vendors were represented on different calls, but I would agree that it was an issue. Dan, you probably have more experience with this. Well, just to say, in eMERGE as well we had at least one meeting where we had the major EHR vendors, Cerner, Epic, and GE, come, and it's limiting, because they don't want to talk in front of each other; that's another problem you have to face. They were really looking to us to tell them what to do. (It doesn't quite look like what you're showing; we're still seeing your choice-of-screen view. I think you have to select that window. Wait, this is cancelling; let's cancel that shutdown, we don't want to do that. I have to disconnect the projector. Am I still sharing my screen? It's not up here. The technical issues are being worked out.) I think you could just invite everyone, and if they don't come, they don't come. One way to quote-unquote invite everyone is to go through the EHR Association; they'd be happy to be the middleman and put a call out. The question of whether they come, and whether they engage, depends on whether they view the use case as really important. Some vendors may already have a genomics working group, so at the very least you should be able to get them to join; a lot of vendors will just make a cost-benefit analysis of whether it's worth sending a person and paying their way, but the WebEx option definitely opens it up for more people to potentially come. Given the notion that vendors don't want to tell each other what's going on, there might be separate discussions with specific vendors, including the vendors' customers; and if you poll the eMERGE sites, I assume it's primarily going to be two vendors anyway, so you can just have two vendors that you hold separate calls with. And it's a generic problem; it's not just the EHR vendors. There are two large genotyping and sequencing vendors, and you can't invite one without inviting the other, and they don't want to talk about each other; and there are about a million companies trying to develop software solutions for all the stuff we're talking about here. Who do you invite, and who do you not invite? So I'm happy to be talking amongst ourselves for a while, but your idea of going through a trade organization is not a bad one; that's worth pursuing. Okay, all right. (Oh, I see, I took it away from myself; duplicate; all right, now you can share your screen; now I can share my screen.) Okay, where were we? So, highlighted among the evidence generation opportunities was the general category of better phenotyping methods and technologies, and first among them, increasingly automated phenotyping, since it was clear the current approach just doesn't scale; it's too labor-intensive. So: better ways to do this that are increasingly automated, but that incorporate elements such as the ability to do longitudinal phenotyping, particularly exploiting time to create more accurate phenotypes; the ability to find not just binary states, absence or presence of disease, but a continuum of states in between; the ability to infer hidden physiological phenotypes; and, a recurring theme throughout, the application of machine learning methods, including the learning of latent states using deep learning, and methods that accommodate the bias that's known to occur across and within institutions in the clinical data. I would just emphasize the continuum of states; I think that was a recurring theme, that you need to encode both the degree and the severity, and that may also make it more attractive to physicians if
you're not simply saying disease or no disease, but perhaps a genotype predicts the severity of an illness. Could I ask, before you go on from there: increasingly automated phenotyping is something we've been saying for lo these ten years, if not longer. Have we made progress in the past four years on increased automation? I would ask the Joshes and George. The example I showed from Harvard, that Shawn has: the hard work was creating the 300 cases, but then the thing was at around 0.95 or 0.96 PPV at the end of it. So you still had the hard part, and we're doing research into how to reduce the number of cases you need to achieve that; that's still state of the art. But we're already curating 100 cases just to do the evaluation, so if increasing to 300 lets us learn a spectacular phenotype, that's not bad. How long did it take to do the whole phenotype, Shawn? I'll repeat what he says; how long did it take to curate the 300 cases, I guess, is the question. Well, it happened over a long period of time; it stopped and started, and stopped and started. Beth, what would you say? About three months, because you'd be like, oh no, we need some more cases; oh no, we need some more cases. So it takes a while. The other thing is, of course, that we're exploring new ways to use fewer cases: a cluster method, what we call a silver standard, to assemble an initial set of attributes we can use out of the box, which are really counts of ICD-9 codes and counts of mentions of synonyms of the diseases. That gives you something that's close, and then you apply a denoising algorithm to pull in the rest of your features while diminishing the noise, and you can do this in an increasingly automated fashion, which I think Josh has actually been writing about. Exactly; silver standards, and then Peggy's slide on active learning, are where we're headed to try to reduce that curation time. Okay. I just want to break that problem into a couple of parts, too. The other part is how long it takes to transport a phenotype from one place to another, and how much work it takes to get all the covariates out. I think we've actually taken a big step forward with what we're doing with the covariates across the entire set, and that will make a lot of those trivial to get, as opposed to each of us running things separately. And moving toward a common data model should make everything but the NLP relatively plug-and-play to transport, instead of all the time we've spent implementing things. So those things have sped up. Back in eMERGE 1 we were talking about a year to 18 months per phenotype; I think it takes us considerably less than that now. We're only halfway through our phenotypes and we're two years in, though a lot of our phenotyping team's time went into picking our sequenced examples, too, so those didn't necessarily count as phenotypes, but they were phenotyping work. The other component is that our phenotypes are getting harder. I'm not sure things like the statin MACE algorithm, or some of these others we've done, can generally be done in the grab-bag machine learning fashion yet; it's new work. Can I just comment: the other thing, Terry, is that if you look at the more recent phenotypes, they're much more complicated, so there's a big difference compared to eMERGE 1. Okay. I don't want to rehash the whole day.
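The silver-standard bootstrapping just described (seed noisy case/control labels from counts of ICD-9 codes and NLP-detected disease mentions, then refine with a classifier and a small curated gold set) can be sketched roughly as follows. This is an illustrative sketch, not the actual eMERGE or Harvard pipeline: the function name, the weights, and the median threshold are all assumptions made for the example.

```python
# Sketch of "silver standard" phenotype labeling (illustrative only).
# Assumption: per-patient counts of relevant ICD-9 codes and NLP-detected
# disease-name mentions have already been extracted from the EHR.

def silver_labels(patients, code_w=1.0, mention_w=1.0):
    """Assign noisy case/control labels from surrogate counts.

    A patient whose weighted surrogate score is at or above the cohort
    median is provisionally labeled a case (1), else a control (0).
    These labels are "silver": cheap and noisy, intended to bootstrap a
    classifier that is later denoised and evaluated against a small
    gold-standard chart review rather than hundreds of curated cases.
    """
    scores = {
        pid: code_w * p["icd9_count"] + mention_w * p["mention_count"]
        for pid, p in patients.items()
    }
    ordered = sorted(scores.values())
    median = ordered[len(ordered) // 2]
    return {pid: int(s >= median) for pid, s in scores.items()}

# Hypothetical cohort: many surrogate signals suggest a likely case.
cohort = {
    "p1": {"icd9_count": 9, "mention_count": 14},
    "p2": {"icd9_count": 0, "mention_count": 1},
    "p3": {"icd9_count": 4, "mention_count": 6},
    "p4": {"icd9_count": 0, "mention_count": 0},
}
labels = silver_labels(cohort)  # {"p1": 1, "p2": 0, "p3": 1, "p4": 0}
```

In the approach described in the discussion, these out-of-the-box labels would seed a model over the full feature set, with a denoising step pulling in the remaining features; chart curation then shrinks to a small evaluation sample instead of the full 300-case training set.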
We should keep going. A short comment, and that is that we're obsessed with saying how many phenotypes we've done. Well, age is a phenotype, and gender is a phenotype, and the highest cholesterol you've ever had measured is a phenotype, so we have a million phenotypes; the phenotypes you're talking about are the complicated ones. It's not finding diabetes (I think we've cracked that part); it's finding the diabetics who then take a medicine, who then have a biomarker measured, and who then go blind. Those are the complicated ones, and I think we're developing the standard methods to do that while at the same time exploring these newer methods, the machine learning and what have you. So I think it's a little unfair to say, well, you've only done 40 phenotypes, or 27, or 46, or whatever, because I think we've done a lot more than that, and the phenotyping group does a lot of other things it isn't recognized for, like picking out the patients we're going to study. I'm not on the phenotype working group, but I obviously have an interest in this. A concrete example: an autoimmune phenotype with 43 separate diseases being identified. Is that one phenotype or 43? It tells you which of the 43 diseases you have. It took several months to create, but it covers 43 different diseases that in the old days we would have called 43 phenotypes. Impressive, I feel; but it is still the case that the diagram of how it's done looks just like the eMERGE 1 diagram of how it's done, and when you compare 175 phenotypes to the 8,000 diseases that human beings get, growing in the era of rare variants, it looks like we're falling behind. Other discussion? I think holding out the notion that there could be entirely different approaches to phenotype extraction seems like a good thing, rather than just optimizing so that you can row
even faster in the next iteration than you're rowing now. Okay. You'll see the new data sources interlaced through multiple headings here, but one we heard in the context of phenotyping was the ability to do phenotyping based on gene-by-environment interactions, and, on the outgoing side of evidence, eMERGE as a source of evidence for ClinGen and other genomic medicine resources, as well as these novel sources that are outside of eMERGE now: incorporation of environment, direct-to-consumer genomic data, social media (we just heard the notion of peer-to-peer phenotyping as it exists in the Undiagnosed Diseases Network), family history and other patient-reported measures, and online disease-focused patient communities. All of those are new classes of data, but they have special relevance to phenotyping, which is why they're included here; you'll see them under other headings as well. Dan, I would just add that with the geocoding (we saw at least one slide about that today) we have the opportunity to really execute on the environment piece, probably in a pretty robust way, though certainly with lots of work to be done. Okay, so we'll put environment, especially in the context of geocoding. Other comments about evidence generation? Again, we rolled two panels up into that one. Okay, then turning to data analysis and interpretation opportunities, which we heard throughout each of the panels. The first is the notion we just heard from Heidi of real-time variant interpretation that incorporates some of these new data sources, such as patient data, as well as the traditional way of matching publicly available knowledge sources to the patient's own variants. The next is a major issue, since so little is settled knowledge in medicine now: methods to efficiently do continuous data reinterpretation over time are front and center as a challenge
for this consortium, but for all the other genomic medicine consortia as well. Then: semi-automated to fully automated interpretation via analysis pipelines, for which there are some promising early developments; methods for assessing pathogenicity and variant penetrance, such as family cascade testing, and the application of those methods to the fundamentally difficult problems of pathogenicity and penetrance in complex traits; methods for efficient collaboration with other research consortia, particularly, as we heard earlier today, for pragmatic trials and variant characterization; and the opportunity for eMERGE to add a genomic dimension to other research programs, such as All of Us or MVP, with All of Us as the archetype. Critically, we've heard through the day that all of this depends on standards-based data exchange, so that needs to be a fundamental bedrock of eMERGE as it moves forward. Also: opportunities to do public health genomics, which we heard in our novel and disruptive technologies panel, including the linkage of eMERGE data to health information exchanges; formal approaches to representing and accommodating uncertainty in the analysis and interpretation of the data; big-omics data linkage, which was mentioned today, that is, the joining of EMR-derived phenotypes with other classes of omics data; the inferring of genotypes, which we heard from Matt, from non-traditional data sources such as drug experience, images, and even internet search histories, as supplements to help triangulate on genotypes and their relationship to real-world manifestations; the application of deep learning to the characterization of variants of unknown significance, and specifically drug target and toxicity predictions associated with the primary genomic data; a variety of speculation about how we might do annotation, either crowd-sourced or cloud-based; and then high-performance computational methods, things that are within reach now with advanced supercomputers but that were previously just not computable. I'll stop there for other observations about new opportunities in data analysis and interpretation that people had in mind, or heard, in case this wasn't actually the right version of what was said. Well, Dan, after we do this I think we should go back a little to evidence generation, because we talked about the phenotyping, but there was also a lot of discussion about cost-effectiveness versus clinical actionability that I think we need to capture as well; but let's go through this, and some of it is captured in here. Okay, so other comments on that general topic of data analysis and interpretation? Again, these are the new opportunities where we think there are grounds for eMERGE to really make progress, some of them quite novel. Okay, so let's move on. Well, if no one else will, I'm happy to. Sandy talked about one of the challenges with accessibility of clinical data: if physicians have to spend 20 minutes finding something, they're not going to do it. Is there a way we can use NLP or other tools to synthesize information and present it to the clinician at the point of care, when they need it? Sandy, I may be paraphrasing you wrong. Yes; the one thing I'd modify is that I don't actually think it requires NLP. I think it just requires pulling the data from the different sources where it sits into one place to be used. So: a variety of tools to synthesize and present at the point of care. Yes, Jesse, about the clinical decision support: I think Sharon made a point earlier about how many people have their care in many different places, and until we solve interoperability, clinical decision support is going to be at a
huge disadvantage, because no matter how good you are at looking at the data that's in the system, there's a lot of data not in the system that would be impactful on that decision. CDS is coming up next. Oh, I'm sorry; maybe we're there? No, it's coming up next; data analysis and interpretation was different from CDS, so save that comment. And go find a mic, Josh; we already passed evidence generation, but yes, on evidence generation: I think it's worth spending a few moments thinking about pre-coordination of study design. By that I mean that in our current iteration of eMERGE we have a lot of different study designs; some are fairly naturalistic, some are RCTs. (Would you mind picking up the mic? You're going in and out. Thanks.) Okay, so my point is that right now we have a wide spectrum of study designs across the different institutions in eMERGE 3, and I think if we are going to pre-coordinate a study design for the next iteration, it's going to have to be in the RFA. And I think this gets into some of the general themes from evidence generation, so maybe this should be a separate bullet point: we need to come to terms on what effective cost-effectiveness research in genomics looks like, and on standardized formats for clinical trials, both for what you're talking about and also for clinical utility. In many other areas of medicine now there's a checkbox where you say, "I'm following this format," and someone has given deep statistical thought to those formats in advance, which I don't think we have yet. So that relates to having more standardized formats across the consortium, and also across other consortia. I just wouldn't use the term "coordinate," because the point I made earlier was that different study designs looking at the same problem in different ways is actually stronger; it's more a synthesis of different study designs. If you have a pragmatic trial in IGNITE and an observational study in eMERGE looking at the same issue, that could provide different sets of data that would be useful for the generation of evidence. If it's a standardized format for which there's been serious thought about what the study design itself will provide, I think that was intrinsic to my comment, but I accept it as an explicit addition. Well, I would say about genomic medicine: I've also spent a lot of time in clinical cancer trials, and those have very specific designs; people have given much statistical thought to how you do them, and people will compare across designs. But we're in a new phase here that I think does not always lend itself to the same degree of infrastructure. The extension I would make is that one of the things PCORI has done is actually fund methodology. We may not have all the methods available to us that we need to do these types of studies (the n-of-1 design was a particular issue Eric brought up earlier), so is there a place in this for funding novel methodologies to address these problems? Okay. And can I just add to that: I think one of the problems, and it's a debate where I don't know the answer, is that standardization only works when you know what the right answer is, and I'm not sure we know what the right answer is yet. So there is still some value in experiments between different places doing different things. We need to make sure we've gotten 80 percent of the way through the space of those experiments before we say, okay, we're going to lock in on a particular design, or even a particular set of designs. I'm agnostic about where we are in that spectrum, but we at least need to have the discussion about whether we've hit the point of diminishing returns on experiments between different strategies versus too much standardization. So I
think it's just a balance we need to pay attention to. So shall we put "but not too much"? Well, I think this gets back to Mark's comment: we have to have thoughtful ways in which those different methods are being attempted, so that we don't just wind up with heterogeneous study designs without a real reason. Attempted and evaluated, yes. We've been saying throughout eMERGE 3 that we've been experimenting with a variety of ways of doing a variety of things, and we can learn from that; I think we ought to begin to synthesize some of the lessons. So I would concur: not so much the rigidity of standardization, but by the time we're beginning eMERGE 4 we want to have some ideas of best practices, of the best ways to do things. We can return to that when we move on. Let's go to reporting and return of results, where I'm not exactly sure how collaboration dropped into this topic; it seems to transcend lots of them. Focusing particularly on EHR integration: this was the panel that focused on CDS. There are opportunities for user-centered design, in the creation of both display-centered and event-based CDS for specific conditions, but also, along with that, a reasonable mandate to build foundations that promote shareable CDS. Embedded within that are the challenges of knowledge representation for complex decision-support knowledge objects, which was put in the context particularly of open-source specifications; and the assessment of genomic CDS that embodies deep knowledge, rather than the simple, superficial, drug-drug-interaction-style rules that cause such high rates of overrides and annoyance; as well as the next issue, which was confronting alert fatigue and (I generalized it) other usability issues for genomic CDS. We heard about the opportunity-slash-tension of viewing externalized CDS services as essentially an ancillary system: whether you run it inside, connected to your own executables, or call a web service, is one design of experiment that eMERGE could well run, because of its technical sophistication in creating CDS, its track record of having done it in its own institutions, and its understanding of the evolving technologies for these cloud-based services. Another version of that was the idea of genomic apps as supplements to clinical systems, and, if I heard it correctly, the idea that you might have natively executable code for Epic or the two or three major systems, where you could just plug it in if you wanted to own and operate the CDS service yourself; and the incorporation into clinical decision support of provider preferences, for those systems that give their decision support to providers. The next major topic was actually part of the disruptive and novel panel: direct-to-participant, or direct-to-patient, technologies for communicating both the data and the interpretation of the data. That confronts the fundamental problem of non-scalable, limited resources for expert consultation: the use of board-certified genomic medicine specialists and genetic counselors just doesn't scale broadly to a community setting. So, confronting that: how much of the delivery, not only of the data but of its interpretation, can you automate, and do in a way, particularly with participant-facing or patient-facing technologies, that is sensitive to health literacy and incorporates patient preferences, just as we incorporated provider preferences in the clinical system? And we heard specifically the facilitation of family sharing as a goal that could be addressed in this direct-to-participant mode of communicating results. So let me stop
there. Oh, a whole bunch of hands; you were first, and then you're next. I just want to respond to the genetic counselor workforce issue. I think I'm the only genetic counselor here, and I understand that's an issue that's talked about a lot, but rather than just saying there's a problem with the workforce, we also need to address how we responsibly return and deliver results. It's not just about a workforce; it's about what's the right information to get to patients and what's the right way to do it. What information can we maybe deliver with other kinds of technologies, and what does need a one-on-one counseling session? So I guess I'm just looking for a deeper thought process about that. And you'll see it in the next heading, which happens to be assessment of the effects of return of results. I think you're next. I agree; I think the consensus of the group was that shareable CDS across different groups, as foundational infrastructure, was something we want to do, but I think Ken's point is really important for us to capture: there will need to be resources that go into that, and that probably implies we need to narrow our scope to a particular kind of CDS. I just want to make the point that we do need, in some ways, to focus, and I think that's the implication of choosing to do this kind of sharing. So this actually isn't an implementation group, and if you were in the business of writing FOAs, if one were ever to be written, you'd want to create it in such a way that respondents would thoughtfully look at it and say, well, one of the things I have to confront is constraining the problem so it becomes doable. So I would put that more on the respondent side than on crafting an FOA that predetermines it; there should be some way of addressing that issue, because it's a general issue when you've got too much to tackle at once. To address Maureen's point about genetic counseling and how it can be applied: it would seem to me that the genetic counselors become application experts for how to use these technologies, making sure they're done as well as we can possibly do them while also providing scaling. It's a small community relative to all the information that's going to be out there, and people are going to really struggle to swim in all the kinds of data they can find; having someone who can guide them through it will be really valuable, though guiding them through it may not necessarily be person-to-person so much as the modern way, through social media connections. And actually, I intentionally said "provider preferences"; that means all the providers who would be using this, probably primarily counselors. Well, at ASHG, a week or two ago, there were actually a couple of population-based examples, one from Israel and one from the UK, where they're really limiting the use of genetic professionals to the individuals with positive results, and they showed data from a couple of different trials using short educational materials for physicians and patients for consent, and very short delivery of negative results. We still probably can't fully scale that, but there was data from a couple of different countries showing they could pretty effectively reserve the expertise of the genetic counselors specifically for individuals with positive results for highly penetrant genes in population-scale testing. Yes, and one other quick thought on the foundation layer: it came up a couple of times that there are existing knowledge repositories and existing work that can be built upon. I think CDS implementers will be attracted if that dimension is there as well, and then that can be translated into
you know focused pilot projects for use cases so enhancement of existing enhancement of existing knowledge repositories may I add something about the genetic counselor yes yeah at the ASHG this beaker presented a randomized clinical trial for maternal results between genetic counselors and the web of return and then he found that for 500 people or something like that there's no significant significant difference between genetic counselor return results and the web uh return of results or what kinds of results I mean for positive and negative uh for I think it's positive both positive and negative okay no more yeah I mean it strikes me that all the things that we've sort of brought out here are really um getting at the issue that we need clinical re-engineering uh that this is really about how we can get out of the traditional way of doing things and into a new paradigm that actually will scale and will be reliable and and so this again comes back to uh to me the fundamental question for Emerge 4 which is you know should we really embrace that idea and say that this really needs to be about implementation science and clinical re-engineering as opposed to iterating against something where we know what the barriers are and we're going to inevitably come up against those barriers if we continue to try and do the same thing okay well actually we're going to get down to overall discovery of translation process and and implementation science and such there so thank you for reminding me we have two more topics to go and and the next one is the assessment of effects and I we sort of anchored it on the effects of the return of results but it's actually a more general assessment of of effects the first idea that we heard actually attended to your presentation of presenting pdfs as the the forms at which you'd captured data to assess outcomes is the opportunity to do develop as is really as part of a phenotype specification is the phenotypes that represent outcomes and process 
measures, which are themselves, in essence, detectable phenotypes in many cases. I connected that to the notion that this satisfies the criteria for what's called closed-loop decision support: you give the guidance, and then, instead of having a separate chart review to see whether it was followed, you actually prime the next part of the rule to detect the good and bad outcomes that might result. So it would reinforce the idea of closed-loop CDS, and it would particularly help inform, at a national level, a learning healthcare system, because it has the attractive property that you learn whether or not the users followed the guidance. All right, so the general notion of evaluating the effects of clinical decision support featured prominently. We heard just now, in the most recent session, that correlating attitudes and beliefs with the actions taken by recipients of results, in their interpretation, is an important frontier, I think, for assessing the effects of return of results. Also: wearables as a source of quality-of-life data, and the ability to do sentiment and mood analysis, as a subset of looking at the effects of returned results; the financial impacts as experienced by health systems and/or patients; the evaluation of patient engagement with science, viewing it as an outcome of an intervention that a patient becomes more engaged with the science of understanding their own disorders; the spontaneous, if you will, development of communities as a consequence of genomic medicine findings becoming available; and the evaluation of the effect of both standard and non-standard approaches to delivering results on provider-patient relationships. Encoded in that is the question of whether traditional provider-focused CDS changes provider-patient relationships; and then you take the disintermediation approach, where you just deliver direct-to-patient
information: what effect does that have on provider relationships? It's been looked at in other forums, but I don't think it has really been looked at in a setting as complicated as genomic medicine. And then the idea of partnering with participants for both research design and reporting approaches, which isn't really about the effects of return of results so much as an overall thematic approach for, perhaps, eMERGE 4; it is prominently represented in All of Us, where from day one participants were co-equal partners in doing the research. So, does that stimulate ideas or rebuttals? Just to try to stimulate: a question in terms of the financial impact on health systems and patients. The question that comes up a lot in CSER is, what information do third-party insurance companies require, essentially, what kind of data do they require, to approve payment of genomic testing, and whether people think addressing that is a goal of eMERGE or not. Yeah, so I think we could add impacts on health systems, patients, and payers; we actually did add payers down here, in the last category, as a new recipient of eMERGE data. I don't see other hands, so since Sharon's intent was to be provocative, I'll be provoked. I certainly think it's an important thing, and this comes up a lot, but the reality is that we shouldn't tie the success of what we do to whether or not payers pay for something. It's an unfortunate reality that the decisions made relating to payment in this country are rarely based on rational, evidence-based decision-making; there are a lot of other factors that are out of our control, not to mention the fact that it's very difficult to engage payers, since there is no one source of truth when it comes to payers; there are thousands of payers. So while I think it's really important to think about that, the reality is that if we can get the health care systems and the patients engaged around
these questions, then, as we've actually done in some work previously, when I was in Utah, using business case analyses and these types of things to develop metrics that the administrators and the business people pay attention to, you can actually implement things in a way that is agnostic to what the payers are doing, because you can say this is the right thing, and we can make a financial argument that it makes sense. So we should have it on there, but I really wouldn't want to tie the success or failure of the program to getting payers to pay for things. Okay, point well taken. Yes, so I think there was a suggestion, I think from Mark, about encouraging patient-centered data governance and transportability, and I don't see it. Actually, I put that in the last section, the overall discovery-to-translation process; maybe we can move on there, because on short notice I was just sticking these under what I thought was a reasonable heading, and I obviously didn't hit them all. Okay, well, so where did you put patient self-phenotyping? Self-phenotyping was in the developing-apps item; I think it was Marylyn, no, her group, who suggested developing apps for patient self-phenotyping. Okay, so let me put it in there; it was under data analysis, but I know I didn't type the word self-phenotyping, so let's actually put it under the better phenotyping methods, and I'll just add another bullet here so we have it. Down here, in the last one, let me just go through these. For the overall discovery-to-translation process, we made the observation that what eMERGE is doing, in a variety of ways, doesn't scale; and viewing scaling as its own research problem, incorporating implementation science in particular for principled approaches to scale, is clearly a different vector than eMERGE has taken to date. The discovery-to-translation process, we've heard repeatedly, is
critically dependent upon data standards, and eMERGE has the opportunity to create new types of standards for new types of genomic medicine data objects. Things that were particularly called out were family history, and what might be called a next-gen VCF: variant call format files that would incorporate things such as quality information. Also important in the discovery-to-translation process is patient-centered data governance versus healthcare-institution-centered governance. And here payers appeared again, under engagement of the consortium with new partners, which included public health agencies; in that same session we heard payers mentioned as potential partners. Then the general methodology of having small pilot studies, particularly for highly novel or disruptive methods; partnering with patient groups and other non-traditional organizations, such as pharmacies; and participant-provided supplemental data for phenotype characterization. That's sort of a subset of self-phenotyping, but the idea is that there might be genomic PROMIS measures that could be contributed to the overall PROMIS patient-reported measures, developed and validated by this consortium. Okay, so that was the end of that, and also the end of my list of notable things. So, I clearly must have forgotten something, and Mark remembers it. Well, it's near and dear to my heart. I don't think we've really captured the really important point that David Valle made, that the sequence is the gift that keeps on giving: how do we use sequence longitudinally over time, how do we use it recurrently over the course of the patient's care? I didn't see that really captured. And in particular, in the CDS discussion, the point I brought up about using real-time CDS to identify individuals where it would trigger you to go back and look at the sequence, for use cases beyond just "I'm going to prescribe a drug, and is there
a pharmacogenomic variant?", which is a really banal sort of use case, although hard to implement nonetheless. That, I think, is really critical: ultimately, the value proposition for sequencing is going to be having a reliable way to use the sequence over the course of a patient's lifetime. Zak Kohane published something a few years back saying that if you really amortize the cost of the sequence over a patient's lifetime, it's about 50 cents a year. That's not an unaffordable metric; but if we do things the way we always do, which is one-offs where we have to redo it all the time, then it's a problem. Now, there are issues: are we going to get better at sequencing? Yes. Are we going to update the knowledge? Well, it's difficult. But those are all interesting scientific questions that could be addressed through projects like eMERGE. Oh, Richard? Yes, Richard. The question of the changing role of the physician: I didn't see it captured anywhere in particular. I think we saw the data governance item, patient-centric versus health-institution-centric, but that doesn't really express the thought. Yeah, so it was this effect of standard and non-standard approaches to return of results on provider-patient relationships, but to me that's a little too cryptic, so if you'd like to add some additional words or clarification to that. How about the role of the physician? I think I would like to generalize that to the role of providers. I think Richard is partly addressing what, I think it was Gail, or I forget who, noted in one of the introductory comments: that all results were being given by genetics professionals, and that's different from the patient's physician, generally. So I do think we have a lot of words up here about really engaging patients more, and we have stuff about the electronic health record, but really, how do physicians interact with genetic data for patient care moving forward,
who are not the genetics specialists? That, I think, is an important issue. Does that capture it? Well, actually, that's an additional thought; there's also the route that doesn't involve the physician or the genetics professional at all. Oh, right; well, I think we've tried to capture some of that in the direct-to-patient items. I would like to add to the point you just made. Sure. Sitting here listening to the discussion today, the thing that strikes me is how little of an effort there seems to be to involve general physicians. The genetics professionals are going to buy into this, but I think Medicine with a capital M, if you want to call it that: it seems to me that if genomic medicine is really going to work, we have to involve physicians broadly, and that's not an easy thing; that's a very, very difficult thing to do, I hate to say, because I'm a physician. But I think innovative ways to present the sequence information to them, to show them how the sequence information improves the care they can offer and piques their curiosity, if they have any curiosity left, are really vital to this whole endeavor, from my point of view. Yeah, a point well taken. I think at no point did we actually differentiate that the level of genomic literacy is highly variable across clinicians as well, and we didn't actually incorporate that in our research agenda. I mean, we have to come to them with solutions; we can't come to them with science projects. We frequently bring science projects that we think are cool, and they say, "I've got a lot of stuff to do; I don't have time for this." But if we can actually identify the problems that need to be solved that involve genetics or genomics, problems they may or may not realize involve genomics, that's where you'll begin to get the buy-in: when you start solving problems they're struggling with. Yeah, and I would just add, since we duplicated provider
preferences and patient preferences: where we put health literacy for participants and patients, we just need to put health literacy for physicians too. It isn't that different. That's true; I think that's a key issue, and you need to present pragmatic solutions. One nice thing, the point was made earlier this afternoon: by continually engaging the physicians as new connections between the sequence and their patients' problems show up, they will be reminded that patients really are different from one another, and that this information influences the way people get sick and how they respond, and all that kind of stuff. And I would also emphasize educating, or putting special emphasis on, younger physicians; you may have to cut your losses at some point. Good point. All right, we're almost to five o'clock straight up, which would be the normal time an NIH meeting would end. Yeah, but you listed this as ending at 5:30. Right, I did, but I do have a couple of things. Yeah, so we have a little bit of time left. So, I'm not sure where this fits in your rubric, and I really appreciate your doing this; it's really hard to do on the fly, so thank you very much, Dan, for pulling this together. Somebody mentioned, it might have been Gail, the opportunity with the 97% of negative reports that we get; only a few sites are reporting those now. Partly that's because not all the sequence data were verified for the negative reports, but about half of it was, and that could be very useful clinically. So trying to exploit that might be a useful thing, and I'll let you type that. I have one more. Can I react to that? Yeah. Because I didn't during the session: we've clearly made the decision not to return negative results, but I think it's acceptable to put this on the table as something that would be really good to study. I think that this is a really
interesting scientific question about what the value is. I think there's a lot of concern on our part about what negative really means: certainly, what does negative mean if you do 109 genes? What does negative mean when we have a lot of unanswered questions about how well sequencing performs? And what does negative mean in an indicated versus a non-indicated test? But I think that's a scientific question for which you could develop a methodology to actually answer it, and that I'd be really interested in; and I don't think you were necessarily implying that we should just begin returning it all and collecting the data. And I would just remind everyone that we are a research organization, and those are important research questions. Absolutely. And just to be clear, we are returning negative results at Northwestern, and part of the goal there is to understand what you need to do to educate participants about what a negative result means and what it doesn't mean. Just as people are pretty comfortable seeing a normal range around any analyte measurement, there's a normal range around whether your genome has told you something or not; but it's an understanding that just because your genome said you don't have a BRCA mutation doesn't mean you're not going to get breast cancer. So I think it's a really important research question, and I think there are some sites in eMERGE 3 that are really tackling that. I was just going to say, we're also talking about the future, right? So we're not trying to re-evaluate a prior decision; I think the issue of learning from people who test negative is an important one that could be addressed in eMERGE 4 or in the future.
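As a brief aside on the amortization point raised earlier, where the cost of a sequence spread over a patient's lifetime was quoted at about 50 cents a year: the arithmetic is simple to sketch. The dollar figure and lifespan below are illustrative assumptions only, not the inputs behind that estimate.

```python
# Illustrative only: amortizing a one-time sequencing cost over the years
# it can be reused. The cost ($1000) and lifespan (80 years) are hypothetical
# assumptions, not the figures behind the ~50-cents-a-year estimate quoted above.

def amortized_cost_per_year(sequencing_cost: float, years_of_use: float) -> float:
    """Spread a one-time sequencing cost evenly over its years of reuse."""
    return sequencing_cost / years_of_use

# A $1000 sequence reused over an 80-year lifetime:
print(f"${amortized_cost_per_year(1000, 80):.2f}/year")  # $12.50/year
```

The point in the discussion is that the per-year cost falls the longer the same sequence is reused, which is why one-off, redo-every-time testing changes the economics entirely.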
Yeah, and I think there were a couple of things about family history and family testing. One had to do with apps for family history, and more standardized, or at least best-practice, methods. Another that I thought was really intriguing was contacting relatives directly, rather than trying to go through the patient; again, that would need to be looked at. I'm glad Mark isn't listening, because he'd jump all over that, but you know the question: what legal and ethical challenges does that raise? There are some health care systems that do this outside of this country, and maybe that's something we should look into: direct relative contact and its legal implications. Well, and I think it came up a little bit in the disruptive-technologies discussion, but you could imagine a Snapchat equivalent for families that allows you a private way to share some of that information, so I think there are some interesting approaches there. That's pretty cool. Okay, well, thank you very much; we think it was a really effective day, and we got through quite a busy itinerary. We do have the summary slides, and obviously the slides from each group. I'm sure if you have some burning thought that did not come up... Shethul, should Rong Ling be the source of any other commentary? Or, okay. Yeah, I just want to ask: I don't think your slides are confidential, right? Because we webcast. Yeah, so they're not. In that case, your slides will be posted on the website, since we have the website for this workshop; so if you don't have any objections, we'll post your slides there. Well, and also, typically after these meetings we write up a summary of recommendations and send it around to the participants, and we'll just ask you to add, modify, suggest, etc.; we'll hopefully have those in a couple of weeks.
So, I think, as the senior NHGRI person here, I should adjourn us. Thank you all very much for coming and for your help with this; we'll keep you posted. So, thank you.