In terms of the outputs of this conference, we will be generating a paper, but I think also we want to be able to, if there are things that people want to work on, whether it be as working groups or whatever, after many of these meetings we've established groups to work on specific projects, and so we will just kind of see how those things develop over time. There's a little bit more free-for-all, not free fall, I hope, still waiting for the coffee to take effect. And so the other thing we're going to do is, Blackford and I are going to try and tag team this morning, and so we'll see if this doesn't turn into an episode of Whose Line Is It Anyway, but at any rate, hopefully that will be worthwhile. So thanks, Mark, and any allusions or jokes about Laurel and Hardy or Simon and Garfunkel or anything, please keep those to yourself. Frick and Frack, that'd be good. You know, I thought the conversation yesterday was incredibly rich, and I appreciate everyone's time and interest and enthusiasm and insight being offered so freely. You know, as someone who's been working in CDS for more than 20 years, it strikes me that when we consider genomic CDS, we may need to address many of the complexities and issues that have been inadequately addressed in CDS to date. So in a way I see a greenfield, a clean-slate opportunity. We may not get to solve everything, but hopefully we'll have a picture of what the entire solution space would look like for genomic CDS, and I think that may then back-inform or fill in some of the deficiencies we have with the current state of the art in CDS, which is typically the cross-sectional, one-encounter-based alerts and reminders, and all the resultant fatigue and related issues. So we're going to summarize the discussions in brief form. I think we have enough time to invite you to ask questions as we go.
Mark and I will split this up, and if there are key themes or key points that are missed or overlooked, please do jump in, and we're willing to capture those as well. Yeah, so Jackie, I don't think we actually... Can you send a copy of the slide deck to Jackie? Since we can't make notes on the slides, we can have Jackie do that if we do want to make some adjustments. Thank you. So Mark will be playing the Phil Donahue role, roving in the audience and asking you what's up, and I'll stick here. So you recall the objectives. We'll come back to this at the end when we try to summarize. But we wanted to compare the current state with the ideal state, identify and engage U.S. and international health IT communities that might be interested in this, and finally get to this notion of the prioritized research agenda. I think number two has largely been accomplished. We had a variety of different initiatives described, and potential engagement across these initiatives would be very exciting. It seems, in fact, like there's an opportunity at hand, where the confluence of many forces may actually force the issue around genomic CDS in ways that it hasn't happened for other types of CDS to date. The NHGRI- and NIH-funded projects like eMERGE and CSER, and of course the work in newborn sequencing, ClinSeq, CPIC, are potential examples and initiatives to relate to. The IOM Action Initiative, which I've heard just a little bit about, the ONC and AHRQ ongoing CDS initiatives, the VA, the CDSC, Health eDecisions, the work at the ONC to describe use case one and use case two for CDS knowledge representation. The OpenInfobutton and OpenCDS work that Ken Kawamoto and others have been leading, SMART on FHIR. That just sounds so attractive, actually, SMART on FHIR. But, you know, I think we could get some marketing mileage out of that and maybe others as well.
You recall the key questions. I won't belabor these. And in number one, Dan just gave such a nice and eloquent, as usual, overview of what the key issues are and what an ideal state of genomic CDS might look like. And Dan described the ideal GCDS for users as being current, with knowledge repurposable to different settings, health literacy- and numeracy-sensitive, providing explanations and recommendations, and learning and adaptive, so that it is a closed loop with an assessment function that returns impact and outcomes and process change data to the knowledge engineers or the CDS implementers so that they can update the knowledge and make it more appropriate and fine-tuned. Equally important, of course, is the notion of what healthcare organizations need today. And they need a system, or a system of systems, which allows us to improve quality and reliability, tracks the GCDS events and follows up whether the guidance was followed or not, and allows continuous local and national learning. And interesting questions were raised about operations research and clinical research, but the notion that GCDS may have to span that spectrum, I think, is clear, and provide value to the healthcare organizations, which came up later in discussion in the ROI comments, as you recall. Building blocks: the notion of decision support packages, knowledge packs is a word I've used to describe these before, but recognition logic for genotype and phenotype in the EHR; the guidance, of course, for the clinician, the patient, or the family; recognition logic for the closed-loop decision support. How do we take that outcome assessment of process change, quality change, cost impact, what have you, and feed it back to update the knowledge, recognition logic, and knowledge structures? We need authoring systems, of course. Ideally, these would be multi-author systems where collaboration could occur and consensus could be managed and uncertainty could be managed in the authoring environment.
Event monitors embedded within EHRs and PHRs to listen and look for those triggering events, which we had another rich discussion on. And then system-generated alerts at the teachable moment, which may or may not be the same thing as the clinical decision moment, but both are related, and it would be nice for this logic to be available to the clinician, whether he or she is perusing a record or actually seeing a patient. Automated tracking of outcomes versus the user decisions: whether you abide by the guidance or not, it's critically important to know and understand, and to upload your experience to this notion of the CDS public library, the quid pro quo. If you download, you must donate, and you must share, to create this virtuous learning cycle in the learning community. Actually, it might be worth stepping back to that slide. One of the things that we wanted to do as one of the key objectives was to try and define what is the ideal state. And I think that is a little bit challenging to envision, but I think this is at least an attempt to say, what would the key elements of the ideal state look like? And so it might be worth pausing here for a little bit of discussion to say, are there things that are represented on this list that either are not represented as you would envision them in an ideal state, or that perhaps shouldn't be here, but more importantly, are there aspects of what we think would be the ideal state that we haven't represented? So take a couple of minutes to look over that, think about it, and if anybody has any comments, we'll just take a few minutes for discussion around this and add and subtract as need be. Terry? I'm wondering a bit what you're thinking would be the content of the upload of outcomes.
So would that be something that could somehow be automated, that there are summaries of what happens, or is it for each individual use of the CDS, or maybe that's a little bit too mechanistic, but I guess I'm not seeing how that could be made easy for users, and you'd want it to be really easy for it to happen. So ideally it's something that the user is completely unaware of and uninvolved in, and has to take no action other than to do his or her clinical course of duties. And the kinds of measures that are useful to upload include the recognition of the state in which the alert could fire, the firing of the alert, the acknowledgment or not of the alert, and then a process or downstream measure of whatever the alert recommended: did you do it, and did it have a cost or quality impact? And I think that the question that you're asking then is, you know, we know from the work that's being done in eMERGE-PGx and others that the process measures there, the firing and the actions, can be very easily captured, and in fact most commercial EHRs with their decision support are able to develop those types of logs. The challenging piece is, you know, what happened to the patient, and how do we aggregate data? And I think we heard Ken talk yesterday about some of the early work looking at how we retrieve quality measures from electronic health record data. Again, not requiring clinicians to somehow enter it, but pulling it from there. And we're very, very early in our ability to do that, and I think most of us would think or say that the quality measures that are being pulled at the present time are relatively trivial and certainly not granular enough to really be able to track back to particular decisions. But I think that we at least have the opportunity to begin to explore how it could move in that direction. So it's not an impossibility. Dan, do you want to add to that?
Yeah, so I think the starting point here, sort of the null hypothesis to reject, was the empiric discovery by eMERGE that if you use EMR data, you can find a combination of structured data, codes, labs, meds, and then NLP analysis of provider notes to identify a phenotype. And if you think of the desired state of responding to a decision support intervention as essentially a phenotype, then you could reuse those same elements and just have a decision rule that basically says, if this event already occurred, if the rule did fire, now we're looking for this second set of codes, labs, meds, and an NLP description of some kind of state that would be meaningful with respect to responding to the rule, or not responding to the rule. So does that address your question, Terry? Well, that to me sounds more like the first stage, the process outcomes: did they respond to it or didn't they? What I was a little more concerned about was the outcome. So how do you assess the impact? And it's hard for me to imagine how that could be automated, but again, maybe that's a bit more in the weeds and we could talk about how... This is an area, actually, where the quality measurement, quality assessment world and the clinical decision support world really should try to come together. In the NQF work around the Quality Data Model framework, you know, the whole vision was that we would define kind of these atomic objects, or subsets, or classifiers that could be used in numerators or denominators, whether it's on the CDS side of the coin or on the quality measure side of the coin. That connection is rarely, if ever, explicitly drawn. And of course the actual measures aren't the same, or the cohort definition in CDS does not exactly equal the quality measure definition, but they should be related. So I've got JD and then Chris.
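Dan's closed-loop idea, reusing eMERGE-style phenotype elements to detect whether a fired rule was responded to, can be sketched in code. Everything below is illustrative: the record fields, the code values, and the `responded_to_rule` helper are assumptions for this sketch, not part of any real eMERGE specification.

```python
# Illustrative sketch: after a CDS rule fires, reuse phenotype elements
# (structured codes, labs, NLP-derived flags) to decide whether the
# desired "response" state occurred in the post-alert record.
# All field names and code values here are hypothetical.

RESPONSE_PHENOTYPE = {
    "codes": {"Z79.899"},           # hypothetical structured codes
    "labs": {"creatinine"},         # labs whose presence signals follow-up
    "nlp_flags": {"dose reduced"},  # concepts an NLP pipeline might emit
}

def responded_to_rule(record):
    """True if a post-alert record matches the response phenotype."""
    if not record.get("rule_fired"):
        return False  # only encounters where the rule actually fired count
    return (
        bool(RESPONSE_PHENOTYPE["codes"] & set(record.get("codes", ())))
        or bool(RESPONSE_PHENOTYPE["labs"] & set(record.get("labs", ())))
        or bool(RESPONSE_PHENOTYPE["nlp_flags"] & set(record.get("nlp_flags", ())))
    )
```

The same elements that define the trigger cohort thus double as the outcome detector, which is the reuse Dan is pointing at.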
One thing to think about with respect to closing the loop for the outcomes data of using the rules is to look at it the way we view public health data today, in terms of infection control and what have you: to say, okay, every EHR system can then provide a download of its rule activity, based upon the content it was using, to feed back in and close that loop long-term. Second thing, with respect to using all of these quality measures to feed CDS work: that came up at the AMIA meeting a couple weeks ago, and wow, we've got this HIPAA logjam in between those two that keeps all this great research happening over on the quality side from being used over here on the research side. Yeah, I think those are both very good points. Chris. It really gets into what is the definition of your Lego piece. When we did SHARPn, for example, we recognized that clinical decision support, quality metrics, and cohort identification were at the end of the day all effectively cohort identification, because quality is a numerator and a denominator. They're both cohorts. And decision support is, for whom should the rule fire? What is that cohort? In the context of getting interoperability, consistency, and scalability, ONC, I think, has started to recognize that, for example, their Health eDecisions, which was based on the virtual medical record, was dissonant with their view of what hQuery, HealthQuery, would look like. They were working with different Lego pieces. That, in turn, of course, was different from what was happening in meaningful use with CCDs and the like. I think if this is going to generalize, the community really needs to have a consensus on what are the data element pieces that we can agree upon, and from there assemble quality, assemble decision support, assemble various logic types.
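Chris's point that quality measures and CDS triggers are both cohort identification can be made concrete with a toy sketch. The patient records and predicates here are invented for illustration; real eCQM and CDS logic is far richer, but the structural identity is the same.

```python
# Toy illustration: a quality measure is numerator-cohort over
# denominator-cohort, and a CDS trigger is simply membership in a cohort.
# The same cohort predicate can drive both uses.

def cohort(patients, predicate):
    """Return the set of patient ids satisfying a predicate."""
    return {p["id"] for p in patients if predicate(p)}

def quality_measure(patients, denom_pred, numer_pred):
    """Numerator-within-denominator rate, or None if denominator is empty."""
    denom = cohort(patients, denom_pred)
    numer = cohort(patients, numer_pred) & denom
    return len(numer) / len(denom) if denom else None

patients = [
    {"id": 1, "diabetic": True, "a1c_tested": True},
    {"id": 2, "diabetic": True, "a1c_tested": False},
    {"id": 3, "diabetic": False, "a1c_tested": False},
]

# Same building blocks, two uses: a measured rate, and a CDS firing set.
rate = quality_measure(patients,
                       lambda p: p["diabetic"],
                       lambda p: p["a1c_tested"])
should_fire = cohort(patients,
                     lambda p: p["diabetic"] and not p["a1c_tested"])
```

If the community agreed on the cohort definitions (the "Lego pieces"), the measure and the trigger would stay consistent by construction.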
And the leading candidate this week is the partnership between CIMI, the Clinical Information Modeling Initiative, and FHIR, which is effectively a physical implementation of that logical model. But I think we're not going to make a whole lot of progress in these kinds of building-block idealizations until we have agreement on the fundamental units, the molecules, if you will, of the underlying phenotype characterization. Chris, I couldn't agree more, and I guess one thing I'd throw out is to think about the CIMI work and FHIR work and the work that predates it just a little bit but is still ongoing (and Betsy, I think, is in the room): the NLM Value Set Authority Center. When we were building the CDSC rules, we were creating value sets all over the place. Everybody does the same thing. You have to define a diabetic, that cohort, with a specification, and if we don't agree upon that specification, we're stuck, as you say. And I wonder, is this part of something like the Value Set Authority Center, or is it something where in the library there are atomic building blocks which we use in our authoring environments to do either quality measure or CDS specification? Betsy, do you want to respond to that, as Blackford kind of puts you on the spot? Well, I didn't regard him as putting me on the spot, but... Obviously, if we decide on what the building blocks are, then we will need repositories of the building blocks. And NLM has initially been focused on the value sets, and we have now done some prototyping work on the next step up, which would be the common data element part. And as we know, a variety of organizations, NHGRI in the lead with PhenX, have looked at common data elements, and there is work ongoing to look across these different efforts and see if we can bring the NIH groups to greater agreement across them. And I think that's possible.
And then the issue is how does that marry up with the common data elements that people are interested in outside, in various types of patient safety reporting and quality measurement. So I certainly do think that working toward a common definition of how these should be defined, and then making them available to people for reuse, is important. Yeah, so I've got Josh and Clem and Chris. And so Josh, I'll go with you first, since I'm assuming you're probably going to be talking a little bit about some of the PhenX and other work of that type, or maybe not. Two points. So the first was related to this idea of building blocks and how we would do this process and the closed-loop part of it. To my knowledge, and I certainly am not an expert in this, so I would invite correction here, I don't think there's a standard around how we report that micro-level querying of what happens when a user interacts with CDS. So I don't think these are queryable elements in the NQF formats or HL7 or that sort of thing. I think everybody has their own thing. So even the ability to aggregate up and do that would be hard today. A lot of movement has obviously happened around the quality forums and how to get quality metrics. And I was just going to say on that, in regards to value sets and that sort of thing, that the way you define these things depends on which circle you sit in. In eMERGE, we want very highly accurate, precise phenotypes, and we don't worry as much, generally, about getting great recall. A lot of the quality metrics don't necessarily worry about having a PPV that's quite so high, but maybe want more recall, and a systematic way of approaching it that can be applied across many systems is preferred.
So it's interesting, when you think about the value sets, as we've done work with value sets and representations using HQMF compared to eMERGE phenotypes, that you'll look at the same problem different ways, and I don't think that's necessarily a problem. I just think it's something to be aware of. A couple of things. It smells a little bit like the exceptionalism that we heard about yesterday when you make this a separate thing, recognition for genotype and phenotype. You know, it's just finding patterns in the record, and of course the phenotype is just a general way to say everything you want to find in the clinical record, and it's very ill-defined. And I think the element that you'll find in records now is closest to an OBX. It's like the little element within CIMI, and there are panels in records which might be analogous to what's in CIMI. And there are issues then. So you'll find that in every medical record system: these little things with name-value pairs which say, this is a diastolic blood pressure, this is the result of the prothrombin gene mutation test, and those things can be aggregated with the logic. So I think we have to be careful about locking too much into the building blocks higher up, but having a good logic that can put them together is sort of what Chris was talking about, because it's the same thing looked at many ways, and if we make subsystems that are all different, no one will be able to support it. Now there's a second question, which is trying to standardize the elements, these little bitty elements that are in all these records, across records, and we've tried to do that with LOINC. The common data elements effort is another activity, especially for research kinds of things that aren't always found in medical records, to do the same kind of thing. So I don't know if that helps.
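Clem's observation, that the reliable common atom in records is the OBX-like name-value pair, with higher-level states assembled by logic over those atoms, might look like this in miniature. The two LOINC codes are the standard blood pressure codes; the threshold logic and record shape are illustrative assumptions, not a clinical specification.

```python
# Miniature illustration: OBX-like name-value observations are the atoms;
# a "phenotype" such as hypertension is logic aggregated over them.
# LOINC 8480-6 (systolic BP) and 8462-4 (diastolic BP) are real codes;
# the thresholds and data shape are illustrative only.

observations = [
    {"code": "8462-4", "name": "diastolic BP", "value": 95},
    {"code": "8480-6", "name": "systolic BP", "value": 150},
]

def latest(obs, code):
    """Most recent value for a coded observation, or None."""
    vals = [o["value"] for o in obs if o["code"] == code]
    return vals[-1] if vals else None

def hypertensive(obs):
    """Aggregate the atoms with logic, rather than baking logic into atoms."""
    sbp, dbp = latest(obs, "8480-6"), latest(obs, "8462-4")
    return (sbp is not None and sbp >= 140) or (dbp is not None and dbp >= 90)
```

The point is the layering: the atoms are standardized once (LOINC-coded name-value pairs), and many different aggregating rules can be written over the same atoms.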
And I think that clearly some of the things that we've been looking at, in terms of assembling phenotypes from more elemental particles as opposed to starting with the phenotype and then building up, because again, we see these things like blood pressure that are present in so many of the different phenotypes, so to have a library of those that can be pulled from I think is one of the things that could make this potentially easier. I've got Chris and then Lee. Can I just respond to Clem quickly too? It's a great point, Clem, and I'm reluctant to jump head first into the quicksand of the curly braces problem, and for those of you who aren't informaticians living in that world, this is where local representation has to somehow be mapped to a standard form, and I think that there's an implementation detail here that might be able to move beyond the curly braces problem with sort of an edge case and whatnot, but we can come to that later. Okay, Chris and then... Okay, I have three points, the first two dealing with dogs and tails. For those of you that know my career, nobody could be more committed to vocabulary than I have been, and value sets of course are the best thing since the dawn of something. However, Stan Huff has taught me, much to my chagrin, that the value sets are the tail, and the dog is really the clinical model to which you bind specific value sets. Value sets without context are meaningless, and the context is defined by those micro-models, so in the CIMI and FHIR stuff, it's value sets that are bound to those structures that become relevant, and those are the things that should be curated and maintained. Having value sets without any binding to a model is a purposeless exercise, so value sets are the tail. The second thing is this relationship between data elements in research, be it for genomics or cohort identification or whatever, and standards and other relationships in clinical practice.
It's become my mission to try to persuade the research community that they should give up making data elements, and that they should adopt the data elements that are being created in the clinical space, and if those don't work, have the clinical standard changed to accommodate the research use case rather than make research data elements, because it is inevitable that we will have a dissonance between a research perspective on clinical data and a clinical perspective on clinical data. And if we think about ACOs, if we think about quality management, if we think about all those things, they're really research methodology applied to clinical data, albeit not necessarily for a research use case, and the underlying physical structure should be identical. Finally, Josh's point that we can't all aggregate things similarly today because we have different baselines and structures: I could not agree more. However, that's not where we want to be, and to the extent that we can define a common API, to the extent that we can make, if you will, all EHRs at the end of the day black boxes that will respond similarly to queries and inquiries, ideally premised on something like a CIMI/FHIR query model, then it doesn't matter that we have differentiation within, because if they're using the same common elements, if they're using the same query interfaces, that is the evolutionary state we have to aspire toward. And I think it would be a lot easier if the research community were to partner with the clinical community to define what that state might be. Thanks. Lee?
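Chris's "black box" aspiration, EHRs that differ internally but answer the same cohort queries through a common interface, can be sketched abstractly. The class and method names below are hypothetical, invented for this sketch; they are not taken from CIMI or FHIR.

```python
# Abstract sketch of a common query API: each EHR keeps its own internal
# representation but must answer the same cohort query identically.
# Interface and class names are hypothetical illustrations.

from abc import ABC, abstractmethod

class EHRQueryInterface(ABC):
    """Any EHR implementing this answers cohort queries the same way."""

    @abstractmethod
    def query(self, value_set, field):
        """Return ids of patients whose `field` intersects `value_set`."""

class ToyEHR(EHRQueryInterface):
    def __init__(self, records):
        self._records = records  # internal shape is this system's own business

    def query(self, value_set, field):
        return {pid for pid, rec in self._records.items()
                if value_set & set(rec.get(field, ()))}

ehr = ToyEHR({"p1": {"codes": ["E11.9"]}, "p2": {"codes": ["I10"]}})
diabetics = ehr.query({"E11.9"}, "codes")
```

The design point is that callers depend only on the interface and the shared value sets, so internal differentiation between systems stops mattering for aggregation.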
I want to make a point on the first point, knowledge representation. I'm wondering whether the report, for example the interpretation of the genomic results on the report, should also be part of the knowledge representation, because I see all the elements you put here right now are still trying to standardize the terms in the EMR, but what I'm saying is, the interpretation of those genomic tests, would that also be a part of the standardization as well? Yeah, I think that's a very good point, and so, Jackie, this would be for the first bullet: add a sub-bullet, and I'll just kind of frame this on the fly, as representation of genetic and genomic results, related to the LIMS, or laboratory information management system, or something like that, because we recognize that while we use the overarching term EHR, those of us that do this know that the LIMS and the EHR frequently don't play well together, and that is an issue, so I think that would be worth adding. I wanted to just scroll back for just a bit. First of all, Terry, I'm sure you're really glad you asked your question, given that you were avalanched under a volume of acronyms, as we just so love to do, but I think that was really a key question, because to me it really talks about something that there is considerable passion about around the room, and in some ways I think is very helpful from a prioritization perspective. So with that editorial comment, any other comments? Ken, a reflection? Apologies for being late. I was putting out a work fire all night. Apologies if this overlaps, but just responding to what Chris said about the models being where it's at and not overlapping work: I'd say, having worked in standards for a long time, probably worse than having no standard is having different but similar standards. It might even be worse.
And we are dealing with so many consequences of having to try to take similar but different standards and try and merge them, because you simply can't map them one-to-one. You almost never can do it. My recommendation here is, if you think you're creating a similar but different standard, don't. Work with the group that has the standard and have that changed, because otherwise we'll have another effort five years from now where we're saying, well, we have so many standards to choose from, let's start over again. So we might want to add something. We don't have any sort of caveats and lessons learned, but maybe we need to create a slide of caveats and lessons learned. And what I'm really taking away from this, from your comments and from Chris's and others, is that while we have these funded research projects where we're creating phenotypes, what we really need to do at our individual institutions is to not separate ourselves from the clinical work, but say, okay, this is what we're trying to do, here's what's happening in the clinical space, and try and reconcile those so that we have a shared understanding of what that phenotype is, used on either the research or the clinical side. Is that a fair way of stating that? I think so. What typically happens is people come from slightly different perspectives. Like, here's the clinical research realm, here's the decision support, quality measurement, data exchange realm, and we all have slightly different approaches. And honestly, it's usually not because there are actually different requirements. It's usually just because you're working with different groups. I think that actually, practically testing that can be a challenge.
I know in the CSER group we were discussing potential future projects, and somebody said, hey, why don't we just, I think it was me actually who said, hey, why don't we just take our data and see if we can hand it off to another CSER group and see if they can do the same thing that we're doing on our data, or we can do the same thing with somebody else's data. And it was roundly dismissed. And I think it was because people realized how little interoperability there was in anything that we were doing, where we're talking about genomic medicine. If the genome really is the one test to rule them all, then much of what we're talking about relies on real interoperability, and that needs to be tested not just by the people who developed it, but by somebody else who didn't develop it, and who has a different data set, who can look at whatever standards were created and say, does this actually work at a different institution? And I think that's dramatically different than how projects are done and grants are given. Now maybe that goes into the next hour of discussion, but defining what interoperability means is really key, because if I can define what interoperability means for myself, then in the end, I can be like Epic, which just hired a lobbyist to convince Congress that they're interoperable. And that's one definition of interoperability. But I think that real interoperability is something that needs to be tested, and needs to be tested on multiple systems, and you can't just say it's interoperable because we've defined it to be interoperable within our own system. Yeah, I think that's a good point. I think one of the things that was really very prescient about eMERGE was the idea that right from the get-go, the phenotypes were going to be created, they were going to be tested, and at least across that consortium they would be interoperable.
Now a lot of those phenotypes are available through publicly accessible resources, but I don't know that we've systematically looked to say, are others using them? Do they actually work outside of the eMERGE group? And that would be something that, again, could be, I think, relatively easily tested, at least as a proof of principle: that even these types of phenotypes that are developed in a research setting could in fact be developed in such a way that anybody with a certified EHR and a data warehouse could use them. I think eMERGE really is a very good example of a strong attempt at interoperability, but could even, like you said, go the next step to see, can it even work outside that group? Yeah, I agree, Dan. So to carry on with eMERGE and respond to Josh's comment, it seems to me that the major opportunity, and/or blind spot, if you will, in phenotype specifications are things that would be kind of intermediate physiological states. So you have a rule to reduce the risk of renal insufficiency due to antibiotics or something, so you have to be able to recognize, is renal function deteriorating? Most of the phenotypes to date are kind of diagnostic entities. And so these are things where you don't have the likelihood, for example, that there would be an ICD code that gets you in the neighborhood. But with respect to the idea that eMERGE was not worried about false negatives, right? So you were trying to go for high specificity and positive predictive value. You didn't really worry about the ones you missed. It seems to me we would have that problem if you're trying to make sure you identify every case of the dependent, downstream, observable set of characteristics, i.e., the phenotype. But it's kind of healthy in the sense that what it does is give a conservative bias to the interpretation of the impact of the rule, right?
Because it may be that if you can't find the salutary effect, that breast cancer didn't occur or the patient lived, then what happens is it's interpreted as the rule having less effect than it really had, for almost all of those cases where you miss them, the false negatives. In other words, you don't find them, but they actually were a good outcome. That may be... I mean, you have to look at the use cases of classes of these downstream outcomes of decision support. And you could have sets of them to see how robust phenotype detection would be for different kinds of states in the EMR. But it's kind of a nice research problem that extends the work of eMERGE, which was at the level of diagnoses in most cases. Sue? I just want to make a comment that, you know, interoperability has different levels. And I think, you know, if we don't exchange useful information for clinical care, right, then it's easy to say we accomplished interoperability, but not on very useful information. So I would hope there's a separate discussion, or some discussion, in terms of what's the useful information that we should exchange for the care of the patient, especially in this area of genomics. I think in the EHR world, it's quite simple in that we try to exchange everything. But given the volume of information in genomic medicine, right, it's often not possible to exchange everything. And given the implications of the time span, right, that it's not episodic but over a large period of time, it gets even harder, as we have multiple encounters with the patient, trying to determine over the lifetime of the patient, right, what information should we appropriately send, or what information should we appropriately request, for the care of the patient. So I think, you know, that's something that some effort should be spent looking at. Good. I think that we're... I'm sorry. Quinn, go ahead.
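Dan's example of an intermediate physiological state, deteriorating renal function, suggests trend logic over serial lab values rather than diagnosis codes. A minimal sketch, assuming a 0.3 mg/dL creatinine rise from baseline as an illustrative (KDIGO-like) threshold; the function name and data shape are inventions for this sketch.

```python
# Sketch of an "intermediate physiological state" phenotype: detect
# deteriorating renal function from serial creatinine values instead of
# looking for a diagnosis code. The 0.3 mg/dL threshold echoes KDIGO-style
# criteria but is used here purely for illustration.

def renal_function_deteriorating(creatinines, rise_mg_dl=0.3):
    """creatinines: time-ordered list of (day, value_mg_dl) pairs."""
    if len(creatinines) < 2:
        return False  # need at least a baseline and a follow-up value
    baseline = min(v for _, v in creatinines)
    _, most_recent = creatinines[-1]
    return most_recent - baseline >= rise_mg_dl
```

A CDS rule about nephrotoxic antibiotics could fire on this trend state, which no ICD code would capture while it is still evolving.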
Well, I'd like to weigh in on both Ken's comment and the interoperability discussion. I think we so often make up these parallel worlds, and everybody says, well, we'll map them together later, and it doesn't happen, and it can't happen, and so I just couldn't reinforce it more strongly: stop being unique across the world. And I think there's a big risk of it in this area because of the exceptionalism. But in terms of interoperability, I think the word is kind of useless, because in its complete form, you've got to have your business rules and everything else lined up, and it doesn't happen. So if you've got some order going over to a hospital, you're assuming there's somebody on call who's going to handle the problems when something happens, and it may not be the same in another hospital. But if we focus down, maybe it's more toward the point of, what does the data look like? Just sending the data, not worrying about the timeframes or the reactions or stats and all these other variations. I think we can actually get agreement on things, but it's a big, huge word, and everybody throws it around, and I don't think it's very helpful unless we narrow it a little bit. Great. So I think it sounds like we're pretty good with these. It sounds like no one has identified anything that we've completely fanned on, and I think we've identified a couple of things that are obviously of great interest to the group that we'll circle back to at the end when we start to talk about potential projects. So why don't we go ahead and move through some of the individual session syntheses and go from there. Okay. So for question two, Bob and Jim talked about the data issues, based upon a very rich discussion, some of which we've touched on again this morning. Try to establish this hierarchical set of knowledge representation and technical standards. The relationship between the data issues and the knowledge management issues obviously is closely intertwined.
Define these standard trigger events: what will be the event triggering GCDS? We talked a lot about that. Define methods to maintain provenance of the data and knowledge; have sufficient metadata around the data constructs and the knowledge artifacts so that their provenance and lifespan can be well defined and managed. Assure the interoperability (sorry, Clem) of data elements between record systems, recognizing patients are mobile: data must be available where the patient is being cared for, and family members and descendants may have an interest in these data as well. And one of the things I like to say is that not only should we have interoperability around data but around the knowledge as well, so that the right data and the right knowledge are available wherever. Assess the current and future legal, regulatory, and policy environment and address the obstacles. We had a rich discussion about when to exchange data across care environments, and about the political and financial barriers to those kinds of things. The business case came up. And then the public health role: how do these data pertaining to the state or inference around genomics relate not only to the individual patient but to the longitudinal care of that patient, and even the generational impact of those inferences or those data, and does that therefore potentially impact public health considerations for different genomic states? Anything to add there? No, I think that's a good overall summary. Brandon? I think one thing I want to point out when we're talking about defining the trigger events for genomic CDS is that we shouldn't reinvent work that's already being done out there. I think what we need to do... we don't need another Adam Wright paper on types of decision support.
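The provenance and lifespan requirements described above can be made concrete with a small data structure. The following is only an illustrative sketch in Python; the class name and fields (`source`, `effective_date`, `review_by`) are invented for this sketch and are not drawn from any published standard, and the citation string is a placeholder.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical minimal metadata wrapper for a GCDS knowledge artifact.
# Field names are illustrative, not taken from any standard.
@dataclass
class KnowledgeArtifact:
    artifact_id: str
    version: str
    source: str                       # who authored/curated the rule
    evidence_refs: list = field(default_factory=list)
    effective_date: date = None       # start of the artifact's lifespan
    review_by: date = None            # stale knowledge gets flagged

    def is_current(self, today: date) -> bool:
        """An artifact is usable only inside its defined lifespan."""
        return (self.effective_date is not None
                and self.effective_date <= today
                and (self.review_by is None or today <= self.review_by))

artifact = KnowledgeArtifact(
    artifact_id="cyp2c19-clopidogrel",
    version="1.2",
    source="(curating body placeholder)",
    evidence_refs=["(citation placeholder)"],
    effective_date=date(2013, 8, 1),
    review_by=date(2016, 8, 1),
)
print(artifact.is_current(date(2014, 1, 1)))  # → True
```

The point of the sketch is that lifespan checks become mechanical once provenance is captured as structured metadata rather than free text.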
What we need is to look at the different workflows that will have genomic information and figure out how we can plug in or utilize these different types of decision support triggers so that they can be supportive of genomically guided care and use genomic information. So instead of reinventing the wheel, just kind of using the wheels that are out there and shaping them for genomics and genomic information. Yeah, and thanks for that. Again, when we get to the end, that was one of the things we did pull out as a potential area for research investigation, looking at it precisely the way you just articulated it, which is: we have the toolbox, but what we really don't know is which tool to use when and under what circumstances. So that is an area that clearly, from the discussion, people were very interested in, and it would be something that could probably be moved forward relatively quickly. Ken? Just to add to that, there are some existing lines of work, and I think beyond just defining the events, we need a mechanism to actually consume and publish those events. So standards-wise, the Infobutton standard includes a set of contexts for doing that kind of decision support. So that's a starting point that's already a standard. There's also an event communication and subscription service standard that's now a draft standard, just balloted in September, and that includes an open-source implementation created by the VA in a sandbox. So we already have not only a draft standard in this area but actually an implementation that's open source. So I'd recommend that this community work with that group, probably just within HL7, and piggyback on what's already been worked on for about two years.
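The consume-and-publish mechanism Ken describes is, at its core, a publish/subscribe pattern: a workflow event (an order, a result, a problem-list change) is published, and subscribed GCDS modules react. The sketch below is a minimal Python illustration of that pattern only; the `TriggerBus` class, the event name, and the payload keys are all hypothetical and are not the HL7 Infobutton or event-subscription APIs.

```python
from collections import defaultdict

# Illustrative publish/subscribe dispatcher for CDS trigger events.
# Event names and payload keys are invented for the sketch.
class TriggerBus:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subs[event_type].append(handler)

    def publish(self, event_type, payload):
        # Every subscribed GCDS module sees the event and returns advice.
        return [handler(payload) for handler in self._subs[event_type]]

def pgx_check(order):
    # A subscribed module reacting to a medication-order event.
    if (order.get("drug") == "clopidogrel"
            and order.get("cyp2c19") == "poor metabolizer"):
        return "ALERT: consider alternative antiplatelet therapy"
    return "no action"

bus = TriggerBus()
bus.subscribe("medication-order", pgx_check)
print(bus.publish("medication-order",
                  {"drug": "clopidogrel", "cyp2c19": "poor metabolizer"}))
# → ['ALERT: consider alternative antiplatelet therapy']
```

The design point is that the ordering workflow never names the genomic modules; new checks subscribe to existing trigger events rather than being wired into each screen.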
Right, and so again, one of the points of putting up on the second slide the fact that our second objective was accomplished and we identified all of these different opportunities: I would say that underlying anything we're suggesting here is that we would immediately seek out and collaborate with those groups that are already in this space, whether they're in it from the perspective of, if you will, general CDS versus genomic CDS, so that we can take advantage and not, you know, reinvent the wheel, to use a clichéd term. So let's assume that is a given for the purposes of discussion going forward. As we begin to drill down and identify specific projects, then we may want to specifically call out, okay, here are the other groups that we need to involve in a project that we all as a group decide should move forward in some way. I guess, Brandon, just one follow-up comment on our paper on the Partners CDS types: you know, I would keep an open mind, and I'm not a geneticist, but it seems to me that some of the uncertainty management issues will be different than kind of the classical inference that we do for all the rest of decision support, and that might suggest, you know, a new paradigm for things that we may have to figure out. Okay, so this is about the knowledge management. Right, so Josh led this, and again the key questions were: what are the necessary elements of knowledge representation to achieve the ideal state, what standards exist, what type of decision support architecture is needed, and governance issues. And this was a very fascinating discussion; the first 75 minutes or so was without form and void to some degree, and then we suddenly had a let-there-be-light moment, I think, and were able to pull out some synthetic elements that I think are useful.
One is to study the implemented genetic information to develop standardized ways to represent knowledge, and we identified some specific areas that seem to be ripe for this. The IOM Action Collaborative: by the way, I should inform the group as a whole that Blackford and I were kidnapped by Sandy and JD to be a part of this, so we can chalk up as an action item that there will be a direct relationship from this meeting to the IOM Action Collaborative; may as well declare victory on that one as well. Data sourcing and portability: the portability issue is obviously a recurring one. The representation from AHRQ and ONC, and the fact that they're standing up a national CDS for immunizations that will be able to be consumed by any certified EHR, and the invitation we've received to say, hey, if you have a few genomic CDS use cases, let's see if we can put them up there and see what happens; I think that's a really interesting opportunity. This next one was somewhat provocative, and we didn't necessarily talk very much about it, but we were talking about a time, not now but in the future, where if we do in fact understand some of the elements of successful genomic CDS, then for projects funded by NHGRI or NIH, much as we're required to deposit the genomic data in dbGaP, if we develop CDS as part of the work there would be a requirement that it be deposited in an open-source repository. Again, a lot of presumptions: one, that we know how to build them in a standardized way; two, that there actually is a national repository. But I think this is something that at some point down the road could be very powerful in terms of moving more things into the space.
There was a lot of interest in the idea of where the data come from: traditional genetic testing, panels, direct-to-consumer testing. But I think there's a recognition that more and more we're going to be having data coming out of exomes and genomes, and so the question is, if we have that type of information, how can we feed it to CDS systems across a variety of different questions? And so we could develop some test cases across the different funded projects to explore how this might be done, taking certain elements. We've got PGx projects in IGNITE, eMERGE has a PGx project, and CSER and the newborn sequencing projects are pulling other types of genomic data, so could we link that? And then we have the Undiagnosed Diseases Program, which is going to be looking at rare and ultra-rare disorders, so how could that be used to potentially generate CDS? So there are a lot of different use cases that could be developed out of these different projects to kind of span the globe of possibility. And then, I think, and this was reflected in where we started on the first slide...
The need to really do this end-to-end project where we go all the way from standing up a CDS rule to seeing how it really impacts patient outcomes: what are all the steps needed to successfully do this, some of which would be relatively easy and we have tools for today, some of which will be quite challenging and we're going to have to sort things out. But this is ultimately where we need to go, because it doesn't do us any good to continually say, well, we think this is going to be really beneficial for people based on the fact that we're smart people and we have a strong belief system that this is going to work. You know, in God we trust; all others bring data. Time to generate a little data in this area. And so this is something that I think is really rising to the top. It will be challenging, but one of the things we should be thinking about is, you know, if we think about projects that involve, say, cancer prevention, well, now we're looking at a timeline of patient outcomes that is decades, whereas there may be other projects where we could test how to do this with an impact that could be realized over a much shorter time frame. So I think selection of what the pilots could be here will be critical. So that's the synthesis around key question three. Any questions, comments, discussion around this? Terry. So I'm curious about the end-to-end project based on CDS, because when you think about what question you're testing, it's going to be hard, I think, to distinguish between did the CDS work or did the genetic test work. And maybe you don't need to distinguish between those, maybe it's one big package, but, you know, if we wait for the genetic test to be proven to have an impact on outcomes, we're waiting a long time, I think, before we can implement CDS. So do you have any thoughts on how and whether we need to distinguish those questions? So one of the things in classic CDS research is to look at a process
impact, which can occur hopefully near-term after the CDS intervention and may or may not relate to a quality outcome downstream; and process measures, things like number of adverse drug events or number of CPOE orders, adverse drug events detected, near misses, serious misses, etc. The same kinds of things could apply here: the genetic inference, was it acknowledged, was it acted upon, was a process event measurable? And that's all before you get to actually a clinical impact, right? So it seems that the process outcomes would make it pretty easy to distinguish the test impact versus the CDS impact, but when it comes to ultimately did you make people better, or did you avert a death or whatever, I don't see how you can distinguish the two. So maybe I can help. You know, sort of linking this in a binary fashion assumes we have one observable downstream state and we get to choose one and only one. But you could imagine, for example, prompts related to cancer, or any chronic disease that has mortality; that might be one of the hard endpoints, but it occurs after a lot of other things happen. So having a kind of nested, multiple-occurrences model for things you could look for that would link to the decision support event would allow you to combine a mixture of hard endpoints, which might represent somebody lived or died, as well as intermediate process variables. So it's not an exclusive-or where you do one or the other: you could devise the data structures and a kind of event-based surveillance that could continue to look for those things over long periods of time. The library continues to accrue value as people live out their lives, and so it doesn't have to be cross-sectional; it benefits from the inherent longitudinal capabilities of EMRs. Josh.
Yeah, I just wanted to say that it's really important to not only measure process outcomes but understand why people take particular actions, because the reason there may be outcome differences based on genetic variation is often dependent on the potential confounding by indication at the point of taking an action, so you have to understand both phases. I think the point is important that if we choose an example where we're saying, well, we really don't know if this genetic test predicts this outcome, and we're going to use a CDS methodology to sort of test that, it may be much more difficult to separate out was it the test or was it the process, as opposed to taking something where there's what we would deem a sufficient amount of evidence to say there is an impact on outcomes, and then looking at the implementation. So I think there are different ways you could approach it. At the end of the day, I think about it from the patient perspective: if the outcome is better, to some degree who cares exactly what it was that got you to that point? And that's, in a very superficial and somewhat pejorative way, the difference between quality improvement and research. In research the question is we really want to understand precisely why we got the outcome we got, whereas with quality improvement we're more about making the system work so that we get the outcomes we want, and we're less interested in what different components of the system contributed to that particular outcome. Paul. It's likely already planned, but that end-to-end project you're suggesting would be a great way of determining the standards, the knowledge standards, that sort of thing.
I would think that all the discussion about figuring out the standard workflow and standard triggers and defining the use cases would come first, and then this would also test those standard approaches. Thank you. Ken? Just a note on the confounding of decision support with the genetic testing: I think it depends on the intervention you're talking about, but if it's a simple genetic test, the docs can just remember what to do when the algorithm has fewer than seven pieces of information being used at one time. Then I think it makes sense to test the genetic test separately from the decision support. It's in the cases where, using those genetic tests and figuring out how to use them, you can't really conceive of how you could do it without decision support... well, then decision support is part of the intervention. The caveat is that a lot of decision support interventions have just failed, so it has to be really closely tracked. I think if we had the capability to really track whether or not it was followed, then you could disentangle. At least you could say maybe this failed because people didn't follow the decision support. On the other hand, if it failed and they perfectly followed the decision support, then you go back to: your genetic test probably doesn't make a difference. You're absolutely right about that, and that's why, as Dan pointed out, it's critically important to capture all of that information, because then you are able to learn more rapidly. I'll go to Dan and then Jim. I still think actually when you collect these outcomes you may have the ability to infer causality: it was the decision support alert that caused the favorable downstream.
But even if you can't establish causality, it becomes essentially a kind of biomarker of process: the decision support was associated with one set of factors that caused perhaps a favorable set of outcomes, and you may not be able to assign causality directly to the decision support, but in a sense I'm still happy, because now you've been able to measure things and know that, even if the CDS wasn't directly the cause of the favorable improvement relative to people that didn't follow it, you're still in a good position of having been able to do a kind of health services research relative to quality. That was a long-winded way of saying I think getting unduly hung up on having to establish a clear causal relationship for the intervention before we believe it's useful and valuable is probably holding ourselves to an unnecessarily high standard for whether we've made the world a better place. Although it may reflect the fact that our payer friends tend to hold us to that standard, which sensitizes us. Jim? So when I see process standards, I'm wondering if you include standards for the representation of the processes in a computable way, so that, for instance, I don't want to write a medical logic module for every drug; I write a medical logic module that says somebody is ordering this drug, what do the genetic information and the knowledge tell me I should do in terms of dose modification recommendations or something, and have that sort of a dynamic process. It would have to be standardized to do that. Yeah. Josh? That's a really interesting point. I think it remains to be seen in some ways how much we could generalize it, and it also gets to the idea of all these external knowledge bases like PharmGKB and things like that, which aren't necessarily held in a computable format but could be, and what the future applications could be.
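Jim's idea of one generic module consulting a computable knowledge base, rather than a hand-written logic module per drug, might look like the sketch below. The table, its key structure, and the recommendation strings are invented for illustration; they echo PharmGKB/CPIC-style content but are not clinical guidance.

```python
# Sketch: a single generic handler consults a (drug, gene, result)
# knowledge table instead of one hand-coded module per drug.
# Entries here are illustrative only, not clinical guidance.
DRUG_GENE_KB = {
    ("clopidogrel", "CYP2C19", "poor metabolizer"):
        "consider alternative antiplatelet agent",
    ("warfarin", "VKORC1", "A/A"):
        "consider reduced starting dose",
}

def on_drug_order(drug, patient_genotypes):
    """Generic handler: look up every (drug, gene, result) triple
    the patient's genotype record makes applicable."""
    advice = []
    for gene, result in patient_genotypes.items():
        rec = DRUG_GENE_KB.get((drug, gene, result))
        if rec:
            advice.append(rec)
    return advice

print(on_drug_order("clopidogrel",
                    {"CYP2C19": "poor metabolizer", "VKORC1": "A/A"}))
# → ['consider alternative antiplatelet agent']
```

Adding a new drug-gene interaction then means adding a row to the table, not writing and validating a new logic module, which is the scalability point raised about PharmGKB-style knowledge bases.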
Right now, if you look at what happens within PREDICT, there are all specific things carefully worded for every drug-genome interaction, which is obviously not scalable if we think about all the central things we eventually could learn; eventually it could be our drug interactions. The other thing I was thinking about, kind of building off what Josh and Dan have been saying, is the specific case Josh was talking about. Yesterday we talked a little bit about how, if you look at PREDICT over a course of time, about 55% of people that are CYP2C19 poor metabolizers on clopidogrel get switched to an alternative agent. But there are known contraindications for prasugrel, which is the alternative drug that's been around the longest, and when you remove those people that are older or have had prior strokes, things like that, that number is closer to 70% getting switched. And what strikes me about that particular story is the fact that the EHR has the information within it to answer the question as to why people didn't follow the decision support recommendation. And I think the cool thing about the potential for doing this kind of closed-loop process is that as you accumulate decisions, you can look at what associates with those decisions in a more automated way and build up temporality metrics as well as diagnostic and demographic metrics, and this could all run in a kind of real-time data mining, networked fashion. And as you discover those things in the back end, which you can't necessarily do once it's aggregated out, like what happens in quality reporting, you certainly can do it locally, and you actually could do it when it's aggregated out, depending on what you share. Great. Jeff? Two quick comments. One, on bullet three, I'd like perhaps to capture the notion of an economic analysis as part of this, to build the business case, and also the notional idea of doing this, whatever the projects end up
being, as pragmatic clinical trials, so that we understand the value proposition. And on bullet two, I'd like to expand the suite of potential areas to include cancer, which is not really represented in that series of projects, and also infectious disease: the notion of microbial sequencing for diagnosis is important. And thirdly, things like risk stratification through family history. Jackie, I'm going to give you some specifics: for bullet three, add economic/business case and pragmatic trial methodology, and then for bullet two... can I expand that to somatic sequencing? I just wanted to note that three of the CSER projects are actually doing somatic sequencing. Yes, yes, thank you, but I think it's better to be explicit than implicit there, so we'll add the somatic, and then the second one was microbial sequencing. So those are additional use cases that would be desirable. Quick comment: this notion of how we measure or assess the impact of the CDS intervention goes well beyond even all the things we've been chatting about. Certainly the IOM report on building safer systems describes the socio-technical context: environmental, training, physical, as well as the EMR and its usability issues. So one notion is, do we really want to treat the EMR as a black box, or do we wish to define sort of the standard instrumentation that would apply to each and every EMR and define the outputs that we need for these kinds of analyses? In other words, could the EMR produce usage logs that are then standardized in some way for those key things we wish to measure from the EMR use itself? I don't think we've ever really thought about that. Certainly every EMR has some kind of logs of use and whatnot, but if we were to define that set of instrumentation measures we wish to have for every EMR, it might really support the research we're considering here. Yeah, definitely. If you look at it from the eyes of... I brought up the notion of public health reporting:
if you look at that same type of vehicle, okay, here are the data sets that we need, which most likely are captured in every EMR today; then you have a convenient vehicle to get that data expressed in a timely fashion. So, Jackie, let's add that. It may not be most appropriate to be on the slide, but just so we make sure we capture it: it would be standardization of EMR process measures. Casey? So I really like the idea of going end to end for some of these projects. I think what we'll find is that there'll be a lot of unstructured data that we'll be interested in that will need to be structured, so maybe engaging the NLP community in some of this, so that we can get at the data we need to trigger the decision support. And I'm not sure if this falls under the data issues or under this knowledge management. Yeah, and I think that's a really good point. Just for a placeholder, Jackie, add that as a sub-bullet under bullet three: just say study of unstructured data, something of that nature, as a placeholder. Mark and then Ken. Thanks, Mark. A couple of things. The logging systems in EHRs are primarily for troubleshooting, so the requirements should raise the bar in terms of what those logs need to capture and how accessible they are. James and I were talking about at St.
Jude, where to bring a troubleshooting resource up to a research resource there's definitely some work that would be required, so some further thought is needed around that. Second thing is that years ago we did a study on HIV genotyping as a prototype for genome prescribing and found a high frequency of variant medication orders that were contraindicated by the viral genotype. But we went back and surveyed those physicians: in many cases it was an error, and they admitted that; in other cases it was salvage therapy. That was in the absence of decision support, but I think it was a good way to start to understand the dynamics, and that's referenced in the book. And the third point, to the notion of teasing out and testing: most CDS systems are not an all-or-none rollout, so you can have a clinic where the CDS is running and another clinic where it's not running, but all clinics have the genetic test available, so you can do experiments of that nature; or you can have one modality in one clinic and another modality in another clinic. So the infrastructure can support doing that type of analysis. Right, and that maps very naturally to what Jeff was talking about in terms of the pragmatic trial, cluster randomization, and these sorts of things, so that's a really good point. Ken?
I really like the logging issue, but just a few caveats. What's logged depends on what options there are for people to say, oftentimes, this is what I did, or I think it's wrong, and that's usually custom-defined, so just be aware of that. The other issue, one that seems to happen a lot, is people see an alert or reminder and they actually cancel out, so if you just looked at logs you might think it had no impact, but then they're actually checking some more data and then actually doing what you recommended. So it's just a caveat, and perhaps what we really want is the other data, so we can actually tell whether, in a sequence of time after something was shown, the intended outcome occurred, because if somebody just said, okay, I'll do it, and pushed a button to make it happen, or cancelled, you'll oftentimes get misleading results. I think that's a good point, and one of the recurring elements I've heard over the course of yesterday and today has been that we need to make more investment with the user and take advantage of some of our tools: qualitative research methodologies and some of the quantitative methods and user interface studies, which we haven't done as much of in the context of the projects we're currently doing. Look at the targeted audience and then begin to understand how they're thinking about these things. It's a pretty standard practice in EHR research; we just haven't done it as much in the context of some of the genomic projects that have been done. And that is something that also moves to our last slide about more overarching projects that could be taken away. Clem?
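Ken's caveat, that a clinician may cancel the pop-up and yet still carry out the recommendation, suggests judging an alert by whether the intended action appears within a follow-up window rather than by the button click. A minimal sketch of that check follows; the event records, field names, and seven-day window are all hypothetical.

```python
from datetime import datetime, timedelta

# Sketch: did the recommended action occur within a follow-up window
# after the alert, regardless of whether the alert was dismissed?
# Record shapes and the window length are invented for the sketch.
def alert_followed(alert, actions, window_days=7):
    """True if the recommended action happened soon after the alert,
    even when the pop-up itself was cancelled."""
    deadline = alert["time"] + timedelta(days=window_days)
    return any(action["code"] == alert["recommended_code"]
               and alert["time"] <= action["time"] <= deadline
               for action in actions)

alert = {"time": datetime(2014, 3, 1, 9, 0),
         "recommended_code": "order:alternative-agent",
         "dismissed": True}   # the user cancelled the pop-up...
actions = [{"time": datetime(2014, 3, 2, 14, 0),
            "code": "order:alternative-agent"}]
print(alert_followed(alert, actions))  # → True: the intent was carried out
```

Evaluating on the downstream event stream rather than the dismiss/accept click is exactly what avoids the misleading-logs problem described above.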
To the first point about having standardized processes for what the computer is doing: I think that's the right direction, but I think what I've heard is we really want a standardized, or some minimum, set of data captured per event, and then the process of how you analyze that to discern what happened will be inventive, things that people will figure out, recognizing that you can't get everything you want. And then the idea of doing parallel studies has been my pet thing for a long time, if not randomized, and it is possible, especially because rollout usually is in waves, and I think we ought to think hard about it because it solves a lot of the problems of some of the other realities. And the third thing is, decision support usually doesn't know everything it needs to know, so one approach is to say, well, we'll make the physician put it in, and of course then they get home at midnight instead of 11:30 at night, and so there's a reaction. So the systems really have to build in something like nursing gathers this or that if you really want these things hit on the nose. We only had one study that was 90% penetrant, and that was the one where the infectious disease guys collected all the data related to MRSA, and it was perfect and everybody knew it was perfect; but anything else was less than perfect, so we got 50-40-30% responses. Great, why don't we move on to KQ4. So this was implementation issues; Ken and Casey ran this group. We've already talked about some of the workflow issues, so I think we're going to again see some recurring themes, and this is actually a nice transition. We talked earlier, when Brandon brought up the idea that we have different approaches, about what we need to do to test these in different cases and also look at what the end user needs are and learn from that; so we've really discussed that pretty well. We've talked over the ideas of what are the return on investment, the business cases, what problems are we trying to
solve, and again what are the extensions, the lessons learned on genomics workflow, and then the involvement of the patient, which is something we haven't talked about quite as much, at least this morning. But the idea is that the patient is the constant actor, so what is the role? And we also know that patients in many cases are very interested in having a role in maintaining data and helping, perhaps even up to and including interpretation in some cases. I think that leads to some really interesting opportunities, and it's been my observation that we're beginning to see some synergies between the NIH and PCORI. There have been a couple of RFAs jointly issued, from the National Institute on Aging and PCORI, around patient-centered projects to answer some key questions, and this could be a potential opportunity to think about patient-centered, patient-engaged comparative effectiveness trials that could be co-sponsored by PCORI and NHGRI; obviously much to be decided there, but it just seemed like a potential opportunity in a very different kind of space. And then there was a lot of discussion yesterday afternoon about the idea of wouldn't it be nice to have this sort of developmental certified EHR environment and toolkit, or sandbox, and of course the questions of, well, we've been talking about this for quite some time, but how would this actually happen? But it clearly was something where there was a very strong endorsement of the idea, so it seemed worthy to bring forward for, at the very least, inclusion and discussion. So, questions about the synthesis of key question 4? Ken? I think another thing we were discussing was that we should apply and evaluate existing, non-genomically focused decision support efforts. I think it's sort of in that last point, but, for example, the various ONC- and CMS-related standards being proposed: this community should evaluate whether they would work, and really point it out if they don't.
Okay so I think we can represent that's probably worthy of its own bullet and that would be I would characterize that as evaluation of existing CDS standards to test their feasibility within genomic use cases to basically determine do they work or are there gaps to be filled. Alex? I'd like to mention there was the dual use item to be discussed during the implementation in other words the utility of genomic data for research purposes and how the dual use will be implemented and we discussed two levels of quality one is validated data which is clinically actionable versus larger amounts of data that is of lesser quality but still of a high enough quality to be used for research purposes. Okay so we'll also represent that. It comes up I think later. Okay so it is somewhere. We just looked at these about 30 minutes before we started talking so I don't have perfect recollection of all the slides. Dan? I was trying to look for the announcement of the new BD2K centers but they're sort of envisioned as being topically interdigitated with a range of issues related to data that may include things like patient acquired measures and devices and such that would be a natural alliance for providing a state-of-the-art platform for doing CDS at scale so to speak both inside clinical environments but also at the PCORI end of patient kinds of things. They're not announced yet. That's why I couldn't find them. That's why you can't find them. They will be announced very shortly. So Jackie maybe going back to that early slide where we talked about all the different potential partners that in that NHGRI NIH funded just specifically add BD2K as part of that parenthetical grouping so we don't forget to look. Explicit question mark. Okay. Go ahead. 
So the next one: in the overall synthesis, after the discussions, we tried to pull together the rich conversation and identify the key research areas, and we list them here; on the next slide we'll try to tie them back to the fundamental key questions. The business case has come up several times; it is something relevant to CDS in general, and therefore to GCDS as well, I would suggest, in all the ways that have been described. Get an understanding of the clinical epidemiology of genomic practice and decision making, the current state, so that we can make assessments of what the delta would be with implementation of GCDS. Work through these standards issues at the multiple levels that have been described, from the terminology, data structures, knowledge representation, and uncertainty management to the transaction layer. No one actually addressed, except a little bit at the very beginning, what would be the ideal presentation layer standards, and that's something we might want to add to this list: presentation layer. We've talked about the CDS engine... I realize looking at that it's not nearly literary enough, and Chris said something earlier: for whom does the CDS toll, or something like that, should be how we... then it doesn't end in a preposition as well. That sounds much more like Chris. I think so, yes. For whom does the CDS engine toll? Working with HL7 and other efforts underway has been discussed. Then this idea of a demonstration project; ideally, it strikes me that we're going to need to do a couple of things in this demonstration arena. One is to think about laboratory design, evaluation, and assessment across the whole spectrum, and then field assessment, as Wyatt and Friedman like to say, so we can get as much understanding as we can in laboratory assessment before we go into the clinical environment and begin to test these tools. At the same time, collect best practices from CDS implementers.
People are doing this across the country in different ways, in different states. I mean states of... not only the states where you live but states of implementation. We talked a little bit about the role of the public and public health, the public's health, this generational issue, screening issues, and portability and interoperability. Sorry, Clem. Going back to the sub-bullet on best practices: to start, there is the CSER/eMERGE work that Brian is leading in terms of collecting, at least, the representation of data in the EHR, then the work that the eMERGE EHRI group is doing to collect information, and the outcomes work that Josh and I are leading around how CDS is being done. So we'll at least have some preliminary data to inform that particular piece. It's probably not a bad idea, Jackie, to again put in a parenthetical statement and say eMERGE-PGx and then eMERGE/CSER, so that we can distribute those data once they become available. Clem? That list of things, I think, are all good things to do, but some of them aren't typically supported as research things. For example, working with HL7: it would be good if it were, but I think it's very important, and rather than say "for synergy," I think it should be "within HL7"; otherwise you're going to do it differently, and it's open to everybody. I'd also think we should put a bullet down there to say concordant with the national standards that we've got already in meaningful use, because you do want phenotypic data, lab data, X-ray data, and other sorts of stuff to help with this whole business. I think that'd be an important bullet, so we don't forget. So, changing sub-bullet 2 to "work within HL7," and then the third sub-bullet would be "consistent with existing standards through meaningful use," etc. I would also say SNOMED and LOINC and all the other organizations. All of the standards and terminologies. Sandy? I would just say I totally agree with the need to work with these groups and the need for a transaction model and data model.
I do think that there are a lot of efforts going on surrounding this that can be very synergistic, including the ClinGen activity working on the data modeling group, and the IOM working on contributing requirements to these efforts. But I also think this is an enormously challenging area that requires work to define the fields in depth, and there could be a need for some sort of coordinating body across all of these different things. There is overlapping membership between these groups, but I think there may need to be some kind of funded effort that provides the resources required to make this really happen in as robust a way as we need it to. Ken? Just to follow up on Clem's point: a lot of the things that are needed, for example those shown here, are not typically funded through research. Just noting, other agencies like ONC, CMS, and the VA typically use contracting mechanisms or have people on staff to stay involved. So, for example, when we talk about working within HL7, it means there are people who need to be attending calls every week, pushing the genomics agenda, whether it's internal staff within NIH/NHGRI who can do it, or contractors. That's not typically research-funded, but it's really important, because if you're not at the table, your agenda does not get pushed. Yeah, I think those are very good points, and again, as Teri has reminded me on any number of occasions in these meetings, the intent is not to come with our hands open to the institute and say, give us more money to do all these exciting things; it's really to identify where the key areas are and then determine what would be the best way to do it.
And I think, clearly, the point that Sandy made about having some type of clearinghouse for all of this, which we can all point to as we begin our projects, so that, again, we don't go out assuming, well, there's nothing out there because I don't know about it, and start to build our own thing, when there are items that not only could you begin to use to move more quickly but could also extend to make more useful overall. That would seem to be a very opportune action item, although, as was pointed out, whether, or how, that would be instantiated is a little bit unclear. So, two quick thoughts on this point, because this issue came up for the CDS Consortium, which had people from all across the country and five demonstration sites. Two things: one, to leverage the current standards infrastructure, we actually hijacked the CCD and made it into the data exchange package we needed for the inference in Boston, and that worked okay; it sort of bypassed some of the curly-brace-type problems by having conformant CCDs being sent in, and it worked. The second thing is around the governance, though. Coordination of all the activities is exceedingly important, not only for all the reasons being described here, but also for the implementation issues of what the local clinicians say or feel or have or react to, and having that medical authority from the sites where these tests were being done helped to assuage some of those kinds of things. Jeff? I think there's a behavioral science agenda that is not actually captured here, and what I'm thinking about is some things we talked about yesterday: trying to understand the patients' views of the information they might receive and how they might use it; provider behavioral aspects and how providers want to receive information; and then there are also health system administrators and how they see the value but also how they would use the information.
Yes, slide two of the potential projects does, I think, reflect some of that. Now, we did, I think, limit it perhaps to the patient and caregiver, but you're right, there are other stakeholders within the environment who are clearly influenced. We tend not to think of administrators or insurance executives as necessarily being the targets for clinical decision support, but clearly there are influences there in terms of what they're willing to support within the environment, and that business case, return on investment, would certainly be a place where that could be represented. But I think we are in agreement that a lot of what we've been talking about, the issue of cultural change and transformation, came up multiple times in the discussion, so part of the research agenda has to be focused on, whether you want to call it sociocultural or sociotechnical or whatever, that has to be inherent as part of this. We're on key question five. Regarding the first bullet, on newborn screening: NLM, in an HITSP activity, developed a standard for delivering newborn screening results using HL7 V2 and LOINC, and it's being adopted by some states. That's the grist for some decision support, and it's got every distinct test across all the states available, so you might want to mention it there. The other thing is, I don't know what the immunization model refers to; you might want to enrich that line. The immunization model is what we heard about, not in the course of the discussion but sort of informally, in discussions with the ONC folks, where they've basically taken the recommendations of ACIP, the Advisory Committee on Immunization Practices, which defines the guidelines for immunization. Those have been translated into XML and are going to be posted on AHRQ in a form that could be consumed by any certified electronic health record, so that's going to be the first instance of sort of a public health view of CDS. Are they decidable?
I mean, have they got elements, data elements you can find and choose on, or is it just sort of a loosely described thing? Ken, do you want to? So, I'm actually part of that project, so yes, it's in a decidable form; it's using the Health eDecisions format. And in the immunization space there's also the approach that HLN Consulting, a company, has built, which is in some commercial systems and going into the VA; it's open source, using these same standards. So immunization seems like a really good topic, because it's hard to do, a lot of people want help with it, and there's already a precedent for having a third party take care of that kind of stuff. Could you use a microphone, please? Put a URL or a reference in there so that we can appreciate how much is going on. Ken, do we have something like that that we could point to? I can forward information. Okay, that would be great, so we'll just make sure that Ken distributes that. I mean, again, the reason I put that on there is because essentially that's a road that's been plowed, where we've been invited to say, hey, give us your tired, your hungry, your poor pharmacogenomic CDS that you're working on, and we'll try and stand it up and see if it works. And one of the things we heard consistently is that we need some early wins that we can actually test, rather than waiting 7 to 10 years, and I think particularly among eMERGE-PGx sites, where we could easily select three things for which we've got good evidence and we have groups that have built the CDS and implemented it, we could really take those through this in a relatively rapid fashion, and that would be, I think, a pretty great learning opportunity and pretty exciting as well. Paul? Thanks. It occurs to me, similar to what eMERGE 3 does with having its own genotyping site and its own RFA for a genotyping site, where all the centers that get that award would deal with that site.
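As an aside for readers, here is a minimal sketch of what a "computable" immunization rule amounts to, in Python rather than the Health eDecisions XML actually used for the ACIP artifacts, and with a hypothetical, simplified series definition (real ACIP schedules also encode minimum dose intervals, age windows, and contraindications):

```python
from datetime import date

# Hypothetical, simplified series definition. This is NOT the Health
# eDecisions encoding; it only illustrates that the rule's data elements
# (age, doses received) are discrete and machine-evaluable.
HEP_B_SERIES = {"vaccine": "HepB", "doses_required": 3, "min_age_days": 0}

def due_for_dose(rule, birth_date, doses_given, as_of):
    """True if the patient is old enough and has not completed the series."""
    if (as_of - birth_date).days < rule["min_age_days"]:
        return False
    return len(doses_given) < rule["doses_required"]

# A two-month-old with one recorded dose is still due for the next one.
print(due_for_dose(HEP_B_SERIES, date(2024, 1, 1), [date(2024, 1, 2)], date(2024, 3, 1)))
```

The point of the "decidable" question above is exactly this: every input the rule needs exists as a discrete data element, so a third-party service can evaluate it, which is what makes the outsourced-immunization-CDS model workable.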
When you get to the end-to-end test, rather than having every CDS site that might participate in this national sort of genomic network, you might carve off and say: let's have an application for the variant knowledge management, a single variant knowledge management, a single CDS knowledge management, and then the sites that actually do the end CDS. That could build up some standards, and it would mean that those knowledge management systems are motivated to work with all the sites in a standardized fashion. Okay, so I'm not sure that maps to any of our existing bullets, so I might just add it as a concept: a funded, I'll just say funded, CDS center, and then, in parentheses, "similar to sequencing center," although hopefully without all the negative connotations we were talking about last night at dinner associated with that, but just as a concept to perhaps look forward to, so that you don't have a distributed model but something that could move things forward. I think Betsy first and then Brian. I just wanted to follow up briefly on what Clem was saying about newborn screening: there's the HL7 message, there are definitions that combine LOINC for the tests and SNOMED for the conditions, and it's being tested in various states. So it seems like, if we are moving that into genetic testing now, that would be a path to get this going.
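To make the newborn screening message concrete: a sketch of pulling the coded result out of one HL7 V2 OBX (observation) segment in Python. The segment text, the LOINC-style code, and the result value below are placeholders chosen for illustration, not taken from the actual newborn screening implementation guide.

```python
# A hypothetical OBX segment of the kind used to deliver a coded newborn
# screening result; "12345-6" is a placeholder, not a real LOINC code,
# and "LN" marks the coding system as LOINC.
segment = "OBX|1|CE|12345-6^Newborn screen result^LN||POS"

def parse_obx(seg):
    """Split an OBX segment into its coded-observation pieces."""
    fields = seg.split("|")                    # HL7 V2 fields are pipe-delimited
    code, name, system = fields[3].split("^")  # components within a field use '^'
    return {"code": code, "name": name, "system": system, "value": fields[5]}

obs = parse_obx(segment)
print(obs["system"], obs["code"], obs["value"])  # LN 12345-6 POS
```

Because the test is identified by a standard code rather than free text, a downstream CDS rule can trigger off the result directly, which is why the HL7 V2 plus LOINC/SNOMED combination is described here as "grist" for decision support.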
Yeah, and the thing I wanted to add is that the American College of Medical Genetics and Genomics, as many of you know, had developed newborn screening ACT sheets directed at clinicians, to say: what do you do if you have a positive screen, what are the next tests. Those are actually constructed as Level 2 artifacts; the College hasn't had the resources to convert them into actual coded CDS rules, but they're there and would be very amenable to that type of computability, so that would be one more piece that could be built onto this. Yeah, and the ACMG was right in the room when all of this other work was done with multiple federal agencies. Yeah, and I think that while this is again perhaps a bit of a stretch from the NHGRI portfolio, the reality is that there's a very large Newborn Screening Translational Research Network in place to basically study some of these types of issues, and so there may be the opportunity for synergy between the NBSTRN and some of the activities taking place through the institute. Brian? I just wanted to ask for clarification on the central CDS center idea, whether this was specifically about how it relates to the issue of interoperability. Is this something where there would be a way for people to not have to worry about interoperability with outside systems, or would it be a way that everyone would be forced to cooperate with each other? Since the idea was first proposed about 12 seconds ago, I think the answers to those questions are probably to be determined. I heard it proposed, and I wanted some clarification about what was being proposed. So, Paul, do you want to respond, since I think you were the one who brought it up? I think it is a way of demanding interoperability. It is a way of ensuring that... it's going to be vital to have standards between the knowledge management systems, which could change monthly, and the healthcare institutions that are rolling out the CDS.
There have to be centralized knowledge management systems, and this would be a way of ensuring motivated knowledge management systems, whether it's the CDS or the variant knowledge. Yes, I think the answer to both of your questions is yes. Just a quick thought: Brian, great question, and one of the things the CDS Consortium did was to have a common authoring environment, so everyone was actually using a common tool set, more or less, at the end. It wasn't quite perfected, but it was used to create the artifacts, and again, to Clem's point, this interoperability word applies at so many different levels. The goal was to have these artifacts be importable or interpretable, at that level of interoperability, across the disparity in our systems, but then the implementation within each system could be unique. So we actually had screenshots of how one vendor would do it and another vendor would do it, so people could compare, but it was up to them how to do it at the implementation layer. Let me do Jim first. So, a thought occurred to me, which is sort of an inverse utility to doing this. We've had conversations with CMS, who is interested in using the Genetic Testing Registry and ClinVar/ClinGen as a way of determining whether they want to reimburse for certain genetic tests. One of the issues with that, of course, is clinical utility, and if you're building CDS that works on structured variant data, then it would seem to me we could actually invert that and be able to say: this test in the Genetic Testing Registry tests for these variants, which are used by this decision support system to get to this end. That connection could be done computationally, CMS could take advantage of it, and there'd be an incentive, when people register tests in the Genetic Testing Registry, to actually provide the variant data for the tests. Right now, most of them just say, we test for this gene, or something like that.
They can provide the variant data, but there's no motivation for them to get to that level of specificity. So a sort of library of these, available, structured to the variants and structured to the resources, would be, I think, very powerful for supporting clinical utility. So I think on the previous slide we do have the library mentioned; well, I know we have it mentioned somewhere, but perhaps not on our summary slides. Yeah. But the point that I'm trying to make is that, since that's a use for the decision support system which is not decision support in the EHR, there's another advantage on the other side. And I think that's an important point; I want to capture it. I just want to make sure I can put it in the right slot. We'll not take any time searching, but we'll note it and try to insert it somewhere. Ken? Just a comment on the tooling issue. I think the case for alignment is quite strong, because for a lot of these standards we're talking about, there has been funding by groups such as ONC to build tooling. So, for example, there's an open-source editor for some of these standards that's been developed out of ASU with Davide Sottara and Bob Greenes. There's open-source tooling we've developed, called OpenCDS, that works on these, etc. There's ongoing work, and there are commercial vendors who can now actually output from their commercial tools into these standards. So I think if you align with a lot of the existing standards, it's not as if this group needs to come up with the millions of dollars that have already been invested in, for example, building tools. You can just leverage what's been built. Yeah, Jim? Go back to the first bullet. Were you intentional in picking abacavir? In many senses it's the most simple case for pharmacogenomics, but then you mentioned how it was in the label. Was that the thought process there?
Yeah, I mean, to my knowledge this is still the only one where the FDA says you must test for this before you use the medication. That is, if you set aside the companion diagnostics, which would be another potential case that could be brought forward: a companion diagnostic, a BRAF or KRAS test, or erlotinib, or something of that nature. But that was the reason: as we think about things to move forward, the challenge we've all had in our own institutions is, well, I believe the clopidogrel evidence, I don't; I believe the warfarin evidence, I don't; and so sometimes we move forward with implementations, sometimes we don't. But if we can say, hey, look, this is one that we have to do, then we can obviate the pushback from some of the content folks who, in some of the other areas, may raise objections, saying, well, I don't think you should be putting that one in a national repository. So there was intentionality in choosing that one. That makes sense, because I do think there are many different ways this has been implemented, so there's still a lot that could be learned.
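For readers outside the room, a sketch of the kind of pharmacogenomic check being discussed, using the abacavir case raised here (the FDA label calls for HLA-B*57:01 testing before use). The data shapes, function name, and alert wording are hypothetical illustrations, not drawn from any of the eMERGE-PGx site implementations:

```python
# Hypothetical rule table keyed by drug name; a real implementation would
# key on coded medication orders and structured genotype results from the
# EHR, not bare strings.
PGX_RULES = {
    "abacavir": {
        "marker": "HLA-B*57:01",
        "alert": "HLA-B*57:01 positive: abacavir contraindicated, select alternative.",
    },
}

def pgx_alert(drug, patient_markers):
    """Return an alert string if the patient carries the risk marker, else None."""
    rule = PGX_RULES.get(drug)
    if rule and rule["marker"] in patient_markers:
        return rule["alert"]
    return None

print(pgx_alert("abacavir", {"HLA-B*57:01"}))  # fires the alert
print(pgx_alert("abacavir", set()))            # no marker on file: no alert
```

The rule logic is trivial; as the discussion makes clear, the hard parts are everything around it: getting the genotype into the record in computable form, agreeing on the standards, and managing the knowledge as labels change.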
There are a couple of others. In some ways, the ones that are purely about avoidance of adverse events, where there's no associated efficacy question, are easier to swallow, and so Tegretol (carbamazepine) and HLA-B*15:02 is another one: it's not mandated, but you can think of certain parts of the country where there's high Asian ancestry admixture and that type of testing could be extremely important to do; probably not so important for us at Geisinger to implement as a high priority, given our admixture. So I think this is the final slide, Mark. We're looking at the abacavir label. Dan? Hello? Oh, I'm sorry, hi, Dan. We're looking at the abacavir label, and it doesn't say you absolutely have to; it says you should, and the language for the carbamazepine is actually similar. All of which is just to say it's a moving target, and I think the regulatory agencies are going to get more interested in this rather than less, I would think. Yeah, but again, you know, "should," in the language of weasel words, may be one of the highest-imperative weasel words we use, as opposed to "may" or "can." So the word is "should" in the carbamazepine, really? As near as I can tell, it just says... Okay, I can live with that. Mark? Good. Okay, so, since we were focused on the key questions, we thought we would try to do a mapping exercise. I don't think we really need to spend a lot of time on this; we've had really good discussion to this point. But this is just sort of our sense of where the potential research topics we talked about in the prior two slides would map in terms of the key questions. And so the first key question is: is clinical decision support an essential element in the successful implementation of genomic medicine? We talked about the idea that we have some use cases we could move forward with relatively quickly, and that could in fact
inform an answer to that key question. We spent a lot of time on the data issues, and a lot of the projects relate to some of that; the same with knowledge management and implementation. And then I think what we're going to do for the rest of the time is go back to those two slides that preceded this one and spend some time on each of those projects, to take the temperature of the room about your sense of prioritization and to talk a little bit about the logistics of how that might take place. So, what I'm going to propose: our break was actually scheduled from 10:15 to 10:45; I propose that we move our break up and take our 30-minute break from now until 10:15, then do all of our prioritization when we come back, and then basically end either when we think we've discussed everything sufficiently or at our hard stop at 12:30. So there's a certain incentive, perhaps, to be efficient and get our work done early. I don't know if it would be unprecedented in one of these meetings to actually get out a little bit early, but we could certainly target that as a possibility. Any objections? Is that going to be problematic for any of the logistical issues, the webcast folks, anything like that? Okay, hearing no objections, then we'll