Sandy and I are going to be talking, in part, about our experiences in the eMERGE EHR integration work group and some of the areas we've discussed with the group as future directions for eMERGE 4. One thing that came up earlier is that eMERGE has been a great test bed for discovery as well as implementation, and for eMERGE 4 it may also be a good test bed for the idea of a learning health system, where clinical practice and research coexist, where we can provide the most up-to-date evidence to healthcare providers to use in their care, and where we can learn from what's being delivered by measuring outcomes over time, such as those discussed in the last session, and keep improving care. So what we're talking about is decision support as a way of delivering genomics knowledge so that clinicians can better act on it, with the outcomes coming from the outcomes group reported on a regular basis.

In terms of implementation science, some of the work we've done within eMERGE 3 has been fairly informal: understanding what the needs are around the formats delivered by the sequencing centers. Through informal interviews we've found that people want the raw data, the structured data, and the PDFs, and each site is taking a somewhat different approach to what it does with those data. For return of results we were asked to define our terms, and I consider clinical decision support broadly; one form of decision support is consultation, which is roughly what we're doing currently, where the study teams act as a consultation service supporting healthcare providers in delivering the findings from reports. Another piece of implementation science in eMERGE 2 was a paper by Tim Hur that captured the barriers to implementation, and we've also asked participants what barriers to implementation they anticipate in eMERGE 3. What we're doing now is roughly capturing, at every monthly meeting, the barriers people are running into.

When we were asked to go through the challenges we see as well as future directions, the four main things that came to mind for me were reproducibility, timing and data quality, diversity, and replicability. I won't define them here because I'm going to go through each one briefly.

In terms of reproducibility, some of this came up with the phenotyping work group, where we have phenotypes being developed across the sites and we want to be able to implement them broadly. We might consider a similar model for clinical decision support, where we already have data being shared among the sites. There was a point earlier about DNAnexus and how data from across the sites are being stored on the cloud, and if we're also able to apply decision support models to those data, there's potential to make them accessible at the different sites. There are several possible models, one being "bring your own data," where you bring your data, train the models, and perhaps return them; rules can also be shared. The example we've pursued the most within eMERGE has been the docuBuild platform, where groups can share content and each site can brand it locally and add local information. For example, if you have laboratory results, or an interpretation of those results, the insurance coverage for that test may be a little different at your site, so you may want to include that local information and brand the content a little differently.
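To illustrate that shared-content, locally-branded model, here is a minimal sketch of how a network-authored alert might be combined with site-specific branding and coverage notes. The card fields, site names, and SITE_OVERRIDES table are all hypothetical assumptions for illustration, not the actual docuBuild format.

```python
# Hypothetical sketch of "shared content, local branding"; field names and the
# SITE_OVERRIDES table are illustrative, not the actual docuBuild/eMERGE format.
from dataclasses import dataclass


@dataclass
class SharedCdsCard:
    """Network-authored CDS content that every site can reuse as-is."""
    rule_id: str
    summary: str
    recommendation: str


# Each site layers on its own branding and local details (e.g., coverage guidance).
SITE_OVERRIDES = {
    "site_a": {"branding": "Site A Genomic Medicine Service",
               "local_note": "Covered by most regional plans; prior authorization rarely needed."},
    "site_b": {"branding": "Site B Precision Health Program",
               "local_note": "Check payer-specific prior authorization before confirmatory testing."},
}


def render_for_site(card: SharedCdsCard, site: str) -> str:
    """Combine the shared recommendation with site-specific branding and notes."""
    local = SITE_OVERRIDES.get(site, {})
    lines = [
        f"[{local.get('branding', 'eMERGE Network')}] {card.summary}",
        card.recommendation,
    ]
    if "local_note" in local:
        lines.append(f"Local note: {local['local_note']}")
    return "\n".join(lines)


card = SharedCdsCard(
    rule_id="cyp2c19-clopidogrel",  # illustrative identifier only
    summary="CYP2C19 poor metabolizer result on file",
    recommendation="Consider an alternative antiplatelet agent per local guidance.",
)
print(render_for_site(card, "site_a"))
```

The design point is simply that the clinical recommendation is authored once for the network, while anything payer- or workflow-specific stays in a small local layer that each site controls.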
So we're starting to explore that idea a bit. We were also asked to talk a little about standards. A lot of that was discussed earlier, but one of the main challenges is that institutions have multiple systems: there's the EHR and there are ancillary systems that often hold the genomic data, so having controlled vocabularies for bringing those data together is a concern. Talking to some of my own colleagues, one thing that's come up is that for long QT syndrome, for example, treatment is informed by the genetics and by the EKG, so you want both of those as part of the diagnosis; but in the EHR you might just have "long QT syndrome," and so how do we resolve that in the context of decision support?

Another consideration is data quality. As I started thinking about it, I realized there are multiple aspects of data quality that affect decision support. First, decision support can happen at multiple time points: before, during, and after the decision is made, and the inputs may differ depending on where you're coming from. Thinking about the patient and side effects, some patients may tolerate side effects differently than others, or may weigh quality of life differently, so how is that factored in when you provide decision support? That's one consideration. Within eMERGE we've also done some work providing decision support for pharmacogenomic use cases, where you have a disease indication, a medication is ordered, and then an alert fires saying the patient is at risk of a side effect or adverse drug reaction, and the order is changed. In terms of decision support timing, though, that alert fires after a decision has already been made, so what's come up in our work group is that we might want to bring it upstream, so decision support fires based on the disease indication. This is where the phenotyping work group's efforts come into play, but as others have brought up, understanding how we weigh sensitivity and specificity, and how we actually implement those screening algorithms, will be important. It may be that approaches like the CDSKB could be improved to also include these measures of timing, inputs, and data quality requirements so that sites can use them effectively.
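To make the sensitivity/specificity point concrete, here is a minimal sketch, with made-up numbers rather than measured eMERGE algorithm performance, of how a site might estimate alert burden and positive predictive value before moving decision support upstream to a screening use.

```python
# Small worked example of why sensitivity and specificity matter when a phenotype
# algorithm is used to fire CDS upstream, before a medication is ordered.
# All numbers are illustrative, not measured eMERGE algorithm performance.

def screening_yield(sensitivity: float, specificity: float, prevalence: float,
                    population: int) -> dict:
    """Estimate alert counts and positive predictive value for a screening algorithm."""
    true_cases = population * prevalence
    non_cases = population - true_cases
    true_positives = sensitivity * true_cases
    false_positives = (1 - specificity) * non_cases
    alerts = true_positives + false_positives
    ppv = true_positives / alerts if alerts else 0.0
    return {"alerts_fired": round(alerts),
            "true_positives": round(true_positives),
            "false_positives": round(false_positives),
            "ppv": round(ppv, 3)}


# Hypothetical: an algorithm with 90% sensitivity and 95% specificity applied to
# 100,000 patients where the target indication has 2% prevalence.
print(screening_yield(sensitivity=0.90, specificity=0.95, prevalence=0.02, population=100_000))
# -> roughly 1,800 true positives but ~4,900 false positives (PPV ~0.27), which is
#    the kind of trade-off a site would want documented before deploying upstream CDS.
```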
The third point is around diversity. The main point I want to make is that there are now capabilities to use digital approaches to recruit study participants. This is upstream of decision support, but if we're considering the learning healthcare system and we're including only the study participants who are willing to participate, then what we learn from those patients may not reach as broad a population as we would like. Some of these approaches are being explored in the All of Us program, where different digital recruitment strategies are being tried, and whether we recruit under a research protocol versus an operational protocol might be considered here as well.

The final point is around replicability, meaning that for any point in time you might want to know what data were used and what evidence was available, and be able to get the same answer, that is, to replicate what your analysis was at that point in time. A clinical practice example comes from work eMERGE 2 did with CSER to classify different types of genetic test results, how they might change over time, and why that matters clinically. We think about replicability much more in the research realm, but in this clinical example a 43-year-old female patient with a personal and family history of breast cancer has a variant of uncertain significance (VUS) reported in BRCA1; it's reported as such, so there is no recommendation. Nine months later a revised laboratory report reclassifies the variant as pathogenic, and the recommendations change in response. A clinical care provider may want to know why something happened in the past and understand why it was later changed, so it matters to be able to track the provenance of changes, when they were made, how they influence retrospective data analysis, and their impact on patient care and research conclusions. For all of these there are approaches that already exist or are being explored. GeneInsight, which is already part of eMERGE 3, has approaches for notifying clinicians of updates to the evidence, so that's one approach to replicability.

Sandy will go more into the scope considerations. I know we've been talking about scalability, but scoping in terms of engineering considerations, if we want to have a broad impact, will be important. In terms of reproducibility, that means having agreed-upon standards and a model that enables sites to use the same CDS. In terms of timing, inputs, and data quality, if we want to do the upstream patient screening approach, we may need to narrow down to one specific timing of CDS to focus on, decide what kinds of inputs and data quality requirements we want, and document them. In terms of diversity, we may want to support a range of digital strategies depending on our goals and research questions. And for replicability, it means choosing standards and services that can be accessed across the network. The main point is that these considerations are not new, and existing approaches should be assessed to determine whether they're sufficient, whether they should be improved, and so on. So with that I'll pass on.
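For the replicability point, here is a minimal sketch of the kind of time-stamped classification record that would let a clinician or analyst ask what was known about the BRCA1 variant on a given date. The record structure, dates, and the classification_as_of helper are illustrative assumptions, not GeneInsight's actual model.

```python
# Sketch of provenance tracking for the BRCA1 reclassification example above:
# keep every classification with its effective date so "what was known at time T"
# can be answered later. Data model and dates are hypothetical.
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass(frozen=True)
class ClassificationEvent:
    variant: str
    classification: str  # e.g., "VUS", "Likely pathogenic", "Pathogenic"
    effective: date      # date the laboratory report took effect
    source: str          # which report asserted it


# Illustrative history mirroring the example: a VUS later reclassified as pathogenic.
HISTORY = [
    ClassificationEvent("BRCA1 variant (placeholder)", "VUS", date(2016, 1, 15), "initial laboratory report"),
    ClassificationEvent("BRCA1 variant (placeholder)", "Pathogenic", date(2016, 10, 20), "revised laboratory report"),
]


def classification_as_of(variant: str, as_of: date) -> Optional[ClassificationEvent]:
    """Return the classification that was in effect for a variant on a given date."""
    known = [e for e in HISTORY if e.variant == variant and e.effective <= as_of]
    return max(known, key=lambda e: e.effective) if known else None


# What was on record when the patient was first seen, versus roughly nine months later?
print(classification_as_of("BRCA1 variant (placeholder)", date(2016, 3, 1)))   # -> VUS
print(classification_as_of("BRCA1 variant (placeholder)", date(2017, 1, 1)))   # -> Pathogenic
```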