Great. Thank you. Thank you very much, and thanks, George, for the intro. What we thought we'd do is open it up to community questions; we have a few here, and we'll wrap up as time runs out. I think there's sort of a transition here. We saw the examples from the CDS and HL7 world, and with EHRs, that Ken presented as paradigms for thought. As we think about phenotyping in each of our research stories, we've seen emerge a growth from what we've done in research cases and PheKB published algorithms to their use in clinical applications, both within our institutions for finding populations and for other decision support metrics. I know many people probably have thoughts or questions, so let me open it up to folks for ideas and questions.

Yeah, so for those of us not in the field, I thought Ken's talk was quite interesting, so I'll give one example. He mentioned HL7 Clinical Quality Language. How does that relate to what eMERGE has done or is doing?

Yeah, so there will be many perspectives.
I'll take the first crack. So a lot of that work is in the space of using an EHR for reporting metrics. When you think about these two processes, you know, most of us in eMERGE have actually built parallel research repositories, and a lot of times the clinical world is looking at numerators and denominators for reporting characteristics and may have different premises around sensitivity and specificity than the research world. The other thing that's interesting is most of them are contextualized within current practice, so they don't have to deal with the decades of different mappings of lab codes and things like that. But I still think they're useful paradigms to think about, and they provide some examples of what's happened in the contextualization of current quality reporting on EHRs, which don't have to deal with, you know, 644 different albumin codes like we have to deal with at Marshfield, with I don't know how many decades of EHR records, not quite that long, that have highly evolved over time, as they have at Columbia and for everybody else here.

So I'll note that CQL is basically an expression language. It's for when you want to say, if this is true, then consider an exclusion criterion met, et cetera. It's like, if they've had a diagnosis of this within this time frame, then consider them to have met the diagnosis requirement. It can be used for any kind of phenotyping.
It's specifically usable for things like clinical decision support as well. I do agree with the data mapping issue, but that's kind of orthogonal to the issue of the Clinical Quality Language; that's more an issue with the data model, and I think CIMI is probably going to end up being the solution that actually fixes it. The place where the vendors are currently focused is these US Core FHIR profiles. The other quick thing would be the difference in the use of NLP, just to highlight that, which you brought up.

PheKB is an awesome resource, and currently it's generally in Word docs. I was wondering if there are plans to make it computable, or if that's even worth it because of the issues we've talked about, that it has to be done differently for every different place. So why bother?

I guess I'll take that. A couple of responses to that. The PheMA architecture that George mentioned is actually built into PheKB as a beta, and that was built off of some of the National Quality Forum based XML standards, which probably could evolve to eCQMs. That's one answer. We also have OMOP modules that George and others have put in as SQL that could be directly executed, and that is now searchable, so you can search for all the OMOP entries. So those are two ways in which we're trying to promote reuse of computable things.
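To make the kind of time-windowed criterion described above concrete, here is a minimal Python sketch, not actual CQL, of a "diagnosis of this within this time frame means the requirement is met" rule. The diagnosis codes, field layout, and lookback window are hypothetical, for illustration only.

```python
from datetime import date, timedelta

# Hypothetical rule: "type 2 diabetes diagnosis within the last 5 years."
# The ICD-10-CM codes and the window are illustrative, not a real phenotype.
T2DM_CODES = {"E11.9", "E11.65"}
LOOKBACK = timedelta(days=5 * 365)

def criterion_met(diagnoses, as_of=date(2018, 1, 1)):
    """diagnoses: list of (code, diagnosis_date) tuples from a patient record."""
    return any(
        code in T2DM_CODES and (as_of - dx_date) <= LOOKBACK
        for code, dx_date in diagnoses
    )

record = [("E11.9", date(2015, 6, 1)), ("I10", date(2010, 2, 3))]
print(criterion_met(record))  # True: a qualifying code falls within the window
```

CQL encodes the same idea declaratively; the point of the sketch is just that the rule is a pure function of coded events and dates, which is what makes it portable across sites.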
There are also nine modules that have been uploaded which are computable and have been shown to transfer between different EHRs.

Just to point out, ironically, the phenotyping workgroup never talks about EHRs, because the presumption is that the academic medical centers, the ones that are eMERGE collaborators, have abstracted their data from the EHR and stuck it in a data warehouse, and it's about consistency in the conversion Josh is talking about.

Could everyone on the WebEx please mute? We're hearing some background.

And then there's a step that I didn't mention that I should have, which is that we have our local databases, and then there's the central database, the eRecord Counter, where we're actually putting more and more variables centrally that we can all share. That actually is defined to be the same data model, because in fact it's shared among us. We counted it earlier as 15 variables, but actually, if you look at it, each of those variables has a zillion different levels, so it's really thousands of variables that are being included and put centrally that we can work on. And we've curated it, so that we've gone through our decades of data and figured out that we used to call our lab test this, then we called our lab test that, and we changed it probably six times in the last 20 years. We've done those mappings, so when it goes into the eRecord Counter we're all going to the same format, and as Michael pointed out, that's where a lot of the work is. The pain of phenotyping is getting into a consistent format. I think this will work.

So, a couple of you have mentioned that for managing and analyzing drug use, RxNorm is useful for at least saying what medication we're talking about, but that there are not standardized ways of documenting route. Is that an easy-win task that could be taken on, to ask EHR vendors and others involved in getting these primary data to come up with standardized terms for route, so we could at
least manage that?

So, I can address it. There are actual standards for route. There are things like the FDA route codes, and there are SNOMED codes. The issue is that it just hasn't made it up into the US Core FHIR profiles. I've had this conversation with folks at Epic and Cerner, and they're totally open to these; they just want the use cases and the justification of why it's needed. This actually came up because we've been working with the CDC to get opioid management guidelines supported, and that was a stumbling block where we identified a need to map locally. And they're totally open. So I think, as a pragmatic matter, the approach that other initiatives can take, that eMERGE can take, is to take important use cases that are understandable to them and to their client stakeholders, us basically, and say, this is why we need it. You know, getting it into the standard is the easy part, actually; it's getting the vendors to adopt it that's hard, especially because a lot of these codes are not already encoded internally. So again, like labs, we say, well, that's great, some of them are already mapped in our EHR system, but whenever we do analyses the mappings are oftentimes wrong or absent. So it's the kind of thing where the standard is only the first step; the degree to which it's actually supported by your vendors is probably really critical.

Can you hear me now?
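As a small illustration of the local route mapping just described, here is a Python sketch that normalizes free-text route strings to a single standardized term. The local variants and target terms are made up for the example; real mappings would target FDA route-of-administration or SNOMED CT codes.

```python
# Hypothetical local route strings mapped to one standardized term.
# The keys mimic the variation found in real medication records; the
# values stand in for a standard code system's preferred terms.
ROUTE_MAP = {
    "po": "oral",
    "by mouth": "oral",
    "oral": "oral",
    "iv": "intravenous",
    "intravenous push": "intravenous",
}

def normalize_route(raw):
    """Return the standardized route term, or None for an unmapped value."""
    return ROUTE_MAP.get(raw.strip().lower())

print(normalize_route("PO"))          # oral
print(normalize_route("IV"))          # intravenous
print(normalize_route("sublingual"))  # None, flags a value needing local curation
```

The `None` path is the important part: it is exactly the "mappings are oftentimes wrong or absent" case, where a site has to curate before analyses can trust the field.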
So, you know, one of the tasks that could be part of the next round of eMERGE is essentially scaling up outcome assessment as we return results to a larger and larger cohort of patients clinically. So I wondered if you all had some discussion about how to take your existing phenotype definitions and turn them into outcome assessment tools. That might add, for example, the timing of the emergence of different kinds of phenotypes.

I'll jump in there. I think it's a great use case, because some of our phenotypes have combined other phenotypes and used them almost as modules. If you look at diabetes, if you look at myocardial infarction in the setting of drugs, so pharmacogenetic phenotypes, which is one of the group efforts that the outcomes group has worked on, they have often included compositions of multiple other eMERGE phenotypes. And that's a use case we've seen among other people that have adopted the eMERGE phenotypes. I remember when we looked a couple of years ago at type 2 diabetes, for instance, we saw that 40 groups outside of eMERGE had used that phenotype in various different ways, and some of those were as outcomes, such as new-onset diabetes after transplant. They may just have to be modified to improve their sensitivity in some cases.

My question goes back to something that Josh touched on: the difference between the amount of phenotyping needed to drive research discovery versus what is the ideal in the clinical collection.
And I would say that one of the lessons we've learned here and in other arenas is that in research discovery a little bit of phenotyping can go a long way, as long as you can actually access it. So my question is, is there an opportunity to calibrate the difference, to, I guess, determine whether what I'm saying is actually true, and then to build that into the future planning, lest we try to do the perfect and lose out on the good?

That's an interesting thought, and maybe one way to do that is to survey the different eMERGE sites for which ones truly do have this research repository, an abstraction of their EHR, and which really are trying to use their EHR as it is. They maybe have a mirror version, but they're kind of running their scripts off the mirror of the Clarity database, as opposed to a full abstraction of Clarity and all the other databases within Epic into a research database. We could see if there are differences, because I think there are some eMERGE sites that have that full research abstraction and others that are trying to pull from the mirror version of the EHR. So I think that's an interesting thought, and I don't think we've done a lot of that yet.

This is Shawn Murphy from Harvard. My question is about these incoming EHR conglomerates. There's been activity at Epic, for example, to create something called Cosmos, which is going to put together all the EHR data from all the different Epic sites, or whoever signs on, and then something is going to be done with that data repository. My question is, has anybody in this room ever participated in an EHR-based conglomerate such as that which actually worked out? I've heard of many of these being initiated, and I just don't know of any, and I always hear this suggestion that we do this and so forth, but I've never heard of one that actually worked out. And you might think, why wouldn't that work out?
And it's actually because EHRs do things in many different ways. I mean, there are like a hundred different types of flowsheets you can have in Epic that could be managed in different ways, and they're not necessarily going to be put together in the same way. So I'm just wondering, has anybody done that? The closest success I can think of is when GE abstracts data from their set of EHR customers and creates a database that they share, but they're only taking a subset that was shareable, and they don't interact with the sites; it's just part of their contract that the data gets thrown in there.

So have you seen it done for science, George? Have they validated it?

So, you know, I don't know that they've validated it. It's been used in some studies; I don't know how it's validated.

Okay, well, that it's used in some studies counts, because what I see a lot, and I'm not defending Cosmos, but quality, right? They define quality by what the code is, period. There's no validation; it's just, that's what it is. And I just don't have a good sense of whether it's useful for doing science.

And I would just note that the bar for doing science and the bar for clinical care and financial management for a health system... I mean, it's not that research inherently needs higher-quality data than actually taking care of the patient does. So it really just comes down to what's the quality of the data that got entered by the user.

My experience, at least with Epic, is that if you share Clarity queries with others, you can reuse them. Now, there are value sets that are institution-specific, so you need to do multiple iterations, but I've been pretty impressed by how easily I can share queries across institutions, at least on Epic.

Yeah, I wanted to respond to that as well.
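The point about reusing shared Clarity queries once institution-specific value sets are swapped in can be sketched as follows. This is a hypothetical Python example; the table name, column names, and codes are invented for illustration and do not reflect an actual Clarity schema.

```python
# A shared query template that each site fills in with its own value set,
# mirroring how a query can be reused across institutions once the
# site-specific codes are swapped out. All names here are hypothetical.
# (String formatting is fine for a sketch; real code should bind parameters.)
QUERY_TEMPLATE = (
    "SELECT patient_id FROM diagnosis_events WHERE dx_code IN ({codes})"
)

SITE_VALUE_SETS = {
    "site_a": ["250.00", "E11.9"],   # legacy ICD-9 plus ICD-10, for example
    "site_b": ["E11.9", "E11.65"],
}

def build_query(site):
    """Instantiate the shared template with one site's local value set."""
    codes = ", ".join(f"'{c}'" for c in SITE_VALUE_SETS[site])
    return QUERY_TEMPLATE.format(codes=codes)

print(build_query("site_a"))
# SELECT patient_id FROM diagnosis_events WHERE dx_code IN ('250.00', 'E11.9')
```

The "multiple iterations" mentioned above are essentially rounds of refining each site's entry in the value-set table until the shared template returns comparable cohorts everywhere.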
I mean, this is really an implementation science question. We've done, I think, studies within eMERGE that have looked at things from an implementation perspective, where we've mapped the variability of implementation and looked to see whether there are best practices, but we've never used an implementation science framework like RE-AIM or something of that nature to be able to study it systematically, in a reproducible, reliable, and valid way. And so I was struggling, frankly, with Ken's talk, because while, if Chris Chute were here, he would clearly offer a thank-you for the attention to standards, the reality is that if we look at projects like eMERGE, the primary goal here is discovery and implementation, and the amount of effort that it takes to actually promulgate standards exceeds the available resources and would be an opportunity cost relative to the primary goals of implementation and discovery. So it's a real challenge to figure out what the scientific question in the standards issue is. But I think it could be approached in eMERGE IV by taking an implementation science approach and saying, well, what's the cost of not doing this, and what's really needed?
Where are the gaps? That, to me, would frame a scientific question. Mark, do you want to respond to that as well?

Right, so I was just going to say, you guys have a couple of minutes, so Marilyn, if you want to respond, and then, Marilyn, if you and Josh want to do a summary.

As I've been sitting here listening to the conversation and thinking about synergizing with other groups, adopting some of these other standards, and kind of changing the way we do things, that would mean we can't do 27 phenotypes the old way in eMERGE IV, right? So in order to transition and try out different machine learning methods, or do some of these other things, we would have to take from somewhere else, unless eMERGE IV would have both a do-it-the-old-way budget for phenotyping as well as a try-some-new-things phenotyping budget, and my guess is that is not what we would be talking about. So I do think we should think a little bit about whether there is a way to combine the two, of maybe experimenting with some of these alternative implementations of phenotyping strategies. But we wouldn't be able to do that and at the same time generate 40 new phenotypes. We'd have to think through maybe generating a fewer number, but coming up with faster, more efficient ways to do them. I don't know, so that's kind of what I've been thinking about. Josh, do you want to?
Just because it was mentioned in a couple of comments, I would caution against us moving in a direction where we're using the EHR as our research database. Most of us have changed EHRs; when we switch to Epic, in two years it's going to have only one or two years of data. So you need to have that separate longitudinal store, and that's what we're doing queries against, because I don't want there to be confusion about that.

I wonder... a lot of the challenge that I'm hearing about phenotyping is figuring out how to take the kinds of measurements that we currently use and put them into some sort of electronic, homogenized database that we can then ask questions of. These are, by the time they are collected, retrospective data, and it seems to me the future of phenotyping is going to be digital, real-time physiologic monitoring using micro devices and nano devices. I wonder to what extent you are thinking about those kinds of things.

Sites are doing pilots on that; different sites are doing pilots. I don't think there's a network-wide study.