We have time for about 20 minutes or so of discussion, questions, clarifications, et cetera, and I'll do a five-minute wrap-up at that point. So Mark, would you like to kick things off? Yeah. So I wanted to build on Dave Valley's comment earlier about, you know, the gift that keeps on giving. I think there's an opportunity in a network like eMERGE to think about a different form of decision support, which might be categorized as diagnostic decision support. So again, to pick up on Dan Roden's suggestion about chronic renal disease, we have instances where people, as they move through their trajectory of health, present with a problem. When we look at a sequence in a healthy population, we're going to have a very high bar in terms of returning results, because these are individuals with a very low prior probability of disease. But when somebody develops a disease, it's a very different metric, and in that case you might go back and look at things that you wouldn't otherwise have looked at from a population-based perspective. So you could imagine a scenario where a decision support algorithm or rule fired when chronic renal disease appeared on the problem list. If that patient had a sequence, you'd go back and say, show me the variants in all of the genes associated with chronic renal disease that might be relevant in this patient and could affect management. And maybe there's a GLA variant and this individual has Fabry disease, in which case there's a specific treatment. That gets away from some of the problems Mike was referring to earlier: what do we do when we dump all this information in there? Well, if you actually have a condition and a genetic finding that explains the condition, now you have a very cogent medical management issue. And I think that's a use case that could be explored in a context like eMERGE.
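The "fire a rule when the condition hits the problem list, then look back at the sequence" idea described above can be sketched roughly as follows. This is a minimal illustration, not real clinical content: the gene-condition mappings, variant records, and function names are all hypothetical.

```python
# Hypothetical sketch of "diagnostic decision support": when a condition
# appears on the problem list, look back at an existing sequence for
# variants in genes associated with that condition. Gene lists and
# variant records here are illustrative only.

CONDITION_GENES = {
    # problem-list entry -> genes whose variants may explain it
    "chronic renal disease": ["GLA", "PKD1", "PKD2", "COL4A5"],
}

def diagnostic_lookback(new_problem, patient_variants):
    """Return variants from an existing sequence that fall in genes
    associated with the newly added problem-list entry."""
    genes = set(CONDITION_GENES.get(new_problem, []))
    return [v for v in patient_variants if v["gene"] in genes]

# Example: a GLA variant surfaces only once renal disease is on the
# problem list, prompting consideration of Fabry disease and its
# specific treatment.
variants = [
    {"gene": "GLA", "variant": "c.644A>G"},
    {"gene": "BRCA2", "variant": "c.5946del"},
]
hits = diagnostic_lookback("chronic renal disease", variants)
```

The point of the sketch is the inversion of the usual reporting flow: the prior probability shifts once the diagnosis exists, so variants that would not clear a population-screening bar become worth surfacing.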
I would just say that that type of example is exactly the kind of thing the EHRI working group expressed the most excitement about: something where there's a focused clinical problem we can deal with, really making sure that we get the structured data we need, really robustly, and then measure the effects. And then, once that's done, look to generalize. It really adds a clinical context to the question, and I think we can get our heads around that a little bit. Josh Pearson. Yeah, so I had a comment, which is that, based on 15 or 20 years of experience with CDS, one of the things I've observed is that these broad, superficial systems based on simple rules have a really high failure rate, 90-plus percent; drug-drug interaction is a good example of that. And I wanted this group, when they're talking about CDS, to maybe think about the other end, which is deeply modeled clinical decisions that include genetic data and also other kinds of clinical information. That's the kind of thing, in our system at least, that I've noticed people are very amenable to interacting with. For example, our working calculator is very successful; it includes lots of different clinical pieces of data and gets 80-plus percent response rates where clinicians agree with the recommendation. On the other side of the coin, we have plenty of CDS that, you know, people ignore and blow through. So, you know, the challenge here is that since we're dealing with a lot of different genetic scenarios, we could easily fall into the trap of trying to do a little something for each one of them. I think we want to make sure we solve the problem very well for each one that is included in the next phase. So I completely agree with that comment, and I thought all the talks were excellent. Everything really resonated.
I guess just one question or comment along those lines: I think the technology is now coming along such that we can create alerts and reminders pretty well with some of the emerging technologies. I think the main issue is that we've really contaminated the field with alert fatigue, where we've put so much useless stuff into the system. And the way I interpret it is that because we can't really share, for the most part, we've each been building our own fairly low-quality, noisy stuff into our systems, and people have realized that this is, most of the time, not useful, so they ignore it, and we mostly have something like 90% override rates, right? So my concern would be that if you build things, even if they're super-duper good, in the context of lots of really meaningless noise, people are just going to ignore them. So I'm wondering, with that concern, whether it might be worth taking the app approach, where people will only use it if they decide to go to it, but you aren't contaminated by the legacy that we've put into our EHR systems. A couple of thoughts. I think the app approach is actually coming into its own with the emerging standards and the SMART on FHIR, SMART container type of work. And it can actually be a dialogue, of course; it can be interactive and can abide by more of the Ten Commandments, if you will, for CDS. That's very attractive. The other thing to think about with CDS in the future, though, is that not everything has to be an alert or a reminder. In a way, I'm encouraged by pharmacogenomics, given its specificity, at one level. But secondarily, might there not be surveillance algorithms? Some other conditions are amenable to this as well, you know, algorithms that are scanning and anticipating or predicting, and only popping up when absolutely necessary, if that makes sense.
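The app approach discussed above, a SMART on FHIR style tool the clinician opens on demand rather than an interruptive alert, ultimately reads clinical and genomic data as FHIR resources. The sketch below walks a hand-built FHIR `Bundle` (plain dicts) and pulls out genomic `Observation` results; a real app would fetch the Bundle from the EHR's FHIR endpoint after a SMART launch and token exchange, which is omitted here. The function name and the example values are assumptions for illustration.

```python
# Minimal sketch: extract genomic results from a FHIR Bundle, as a
# display-oriented SMART on FHIR app might. The Bundle is hand-built
# for illustration; resource field names follow FHIR's Bundle and
# Observation structures.

def genomic_observations(bundle):
    """Return (code, value) pairs from Observation resources in a Bundle."""
    results = []
    for entry in bundle.get("entry", []):
        res = entry.get("resource", {})
        if res.get("resourceType") != "Observation":
            continue  # skip Patients, Conditions, etc.
        code = res.get("code", {}).get("text", "")
        value = res.get("valueString", "")
        results.append((code, value))
    return results

bundle = {
    "resourceType": "Bundle",
    "entry": [
        {"resource": {"resourceType": "Observation",
                      "code": {"text": "CYP2C19 genotype"},
                      "valueString": "*2/*2 (poor metabolizer)"}},
        {"resource": {"resourceType": "Patient", "id": "example"}},
    ],
}
results = genomic_observations(bundle)
```

Because the app is pulled up only when the clinician wants it, the 90% override-rate problem described above does not apply: there is nothing to override.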
Certainly some of the cardiac decompensation risk algorithms have been very successful in this regard: it's not an alert until the patient is actually trending and you need to know. Eric? Yeah, Eric Larson from Seattle. I'm carrying on, I think, from the last two comments. In most systems, the demand for CDS by advocates exceeds the system's capacity to actually incorporate it. So I really liked Sandy's comment about evaluating something at the beginning, before you implement it, and being able to tell a story. So maybe you could comment, all four of you or three of you, on what would be a convincing story that we could develop in eMERGE 4, or that the field could develop, that would make a system like, say, all of Kaiser want to implement this kind of work. I can speak to what we're finding internally. Within Partners, we have this effort to start building SMART on FHIR apps, and we do find that the demand exceeds the supply. What always happens is that it starts with somebody with an idea for how machine learning could be injected into care in order to improve a decision. And where it inevitably goes is that the clinical process itself is so mixed up that you don't need machine learning; you just need to start pulling data together and enabling folks to contextualize it in order to make a difference. And then that can set you up for future machine-learning-based interventions. So I think one way to do this: one of the things we hear again and again is clinicians having to make a decision and having to spend 20 minutes going through the EHR gathering all of the data they need in order to make that decision. So just looking at the conditions where we might be able to provide a uniform display that enables assessment of that condition, maybe that's a place to start that folks would value, because it would save time. Just to follow on, that's a sobering comment and useful to hear.
I mean, having been at this for a long time myself as well, in the end it's all about: is it solving a problem I'm confronting that I'm not otherwise equipped to solve, or is it saving time for me or possibly the patient, or perhaps is it related to quality, or, God forbid, money? These are the hard metrics that practicing clinicians have in their minds when they're trying to use these systems. And we've been challenged with the usability of the systems themselves, not to mention the CDS. But point well taken. So, a few threads have come together. On sensitivity: I think in George's slides, he showed that you want PPV if you're doing GWAS and you want sensitivity if you're doing clinical decision support, but you also want specificity, because you don't want the alert fatigue. I think the app approach is interesting, but I worry that people just wouldn't pull it up. But I'm wondering: we've talked about how a success story, a really clear win, would be great, but how much of that to date has been low-hanging fruit, picking out those two specific drugs, versus asking the doctors, where are your pain points? What are the questions where you find yourself having a problem and wanting clinical decision support? Do you see the question in there? So I was just going to say, I think a lot of these groups spend a lot of time engaging the clinical populations and understanding what the problems are, but I think partnering with groups like P&T committees or tumor boards or what have you, to find out which challenges are difficult to address, might be one approach to getting at the problems where we can help. And since we spend a lot of time on pharmacogenomics: one challenge where I believe this could be helpful is when a patient is on multiple medications and figuring out which one is causing the issue.
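The PPV-versus-sensitivity-versus-specificity trade-off mentioned above comes straight out of the confusion matrix, and it also explains the earlier point about the "very high bar" for returning results in a healthy population: at low prevalence, even a test with good sensitivity and specificity has a low PPV. A small sketch with made-up counts:

```python
# Confusion-matrix metrics for a screening rule; the counts below are
# invented for illustration (a low-prevalence population of 1000 with
# 10 true cases, and a rule with 90% sensitivity and 90% specificity).

def screening_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # fraction of true cases alerted on
        "specificity": tn / (tn + fp),  # fraction of non-cases left alone
        "ppv": tp / (tp + fp),          # fraction of alerts that are right
    }

m = screening_metrics(tp=9, fp=99, fn=1, tn=891)
# sensitivity and specificity are both 0.9, yet PPV is only 9/108 (~8%):
# most alerts in the healthy population would be false positives.
```

Once the patient actually has the disease on the problem list, the prior (prevalence) jumps and the same variant evidence carries a much higher PPV, which is the diagnostic-decision-support argument made earlier.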
And so maybe pharmacogenomics could help to explain which one should be taken off the list. It's a good point that it's good to go to the source, the consumers, and ask them what they would prefer in such designs. We have attempted to do one of those studies, designing CDS for FH, where we did qualitative and quantitative interviews of physicians to see what they would like in such a tool. So I think it's important to take that into consideration as we design these. On the tumor board idea, the tricky part is that my impression, not being an oncologist or anything, is that tumor boards tend to deal with the zebras. They're like, wow, this is just a weird case. But we're looking for more of the 80-20: what happens 80% of the time that people are looking for answers for? One other, maybe related, quick thought: it's part and parcel of the knowledge engineering and knowledge management exercise too. So it may be, and what Partners did and hopefully still does is have these subject-matter expert panels that are devoted to a domain and responsible for a set of the knowledge assets that are in production use. They're very tuned to both the state of the evidence and the current problems in practice, and will refine, if you will, what you're targeting your systems on pretty well. But that's expensive to do. That goes to the other problem, though: not everyone can do that like Partners did. So therefore we have to have the sharing paradigm. I totally agree with this thread of thought, and the only other stakeholders I think would be worth bringing in are the financial folks, whether that's department administrators or division administrators, because I think you can create the best, most interoperable solution.
But then if you go to a healthcare system, and the researchers and the clinicians are really interested, and they say, well, we think it's going to cost $50,000 worth of effort or 100 hours of IT time, whatever, and then they ask, so why do we want to do this? If you don't have an answer to that, like, hey, we've done the analysis and it's going to save us $300,000, or whatever it is, then it won't spread. Whereas if you do have that, then those folks are going to learn about it at professional clinical conferences, et cetera, and go to their IT people and say, get this done. And it's very different when the IT and informatics folks go and say, we want to do this but we don't really have a good financial argument for it, versus the clinical and financial folks coming to IT and saying, get this done. On that note, CDS really sits at an interface with a lot of stakeholders: ultimately the patient, but also folks in the IT community, the clinical folks, economics, et cetera. What are the units of measure of success? Are they clicks? Are they time? Are they dollars? Are they adherence to pathways? Lots of different things. I don't know, maybe you could comment on that, starting with you, Casey. Units of success for CDS implementation. I guess the easiest one could be a measure of adoption: did they even look at the alert, or did they just ignore it? That's something that's also being explored in the outcomes group, in terms of what kinds of log data you can capture on these. For something like display-based CDS, it might be: in the context where it's relevant, do people choose to go and view that decision support? And I think it's really dependent on what kinds of use cases we choose to focus on. Yeah, I agree with that.
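The adoption measure described above, derived from CDS log data, reduces to counting what clinicians did with each alert. A minimal sketch, assuming a hypothetical log format of `(alert_id, action)` events:

```python
# Sketch: compute adoption metrics from CDS alert logs. The event
# format ('accepted' / 'overridden' / 'ignored') is a hypothetical
# simplification of what an EHR audit log might capture.

from collections import Counter

def alert_stats(events):
    """events: iterable of (alert_id, action) tuples."""
    counts = Counter(action for _, action in events)
    total = sum(counts.values())
    return {
        "total": total,
        "acceptance_rate": counts["accepted"] / total,
        # "override" here lumps explicit overrides with outright ignores
        "override_rate": (counts["overridden"] + counts["ignored"]) / total,
    }

events = [("ddi-1", "ignored"), ("ddi-1", "overridden"),
          ("pgx-7", "accepted"), ("ddi-1", "ignored")]
stats = alert_stats(events)
```

Broken out per alert rather than pooled, the same counts would separate the broad drug-drug interaction rules with ~90% override rates from the targeted alerts clinicians actually follow.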
I think the way we approach this is to say, okay, when we're going to build an app, what clinical and economic needles are we looking to move, and how will we measure them once we've introduced the app? So for the platelet app that I showed, the economic needle is that it costs us a certain amount of money to transfuse a bag of platelets, and if that bag is wasted on a patient who immediately rejects it, there's a cost. So how much money can be saved by transfusing fewer platelets inappropriately? And then the clinical needle is the flip side: how do you reduce the number of transfusions you have to give these patients in order to maintain the platelet level you want to achieve? So I do think a big part of this whole process is selecting which apps you're going to build based on how significant you believe those needle movements will be. Howard? I guess the only thing I would add is that you set up the question around the multiple stakeholders of CDS, and that certainly is true. There's a societal perspective, the public good of improved care; there's the patient perspective; the provider perspective. One thing I would add, though, from a CDS implementer's perspective: we had a metric, the number needed to remind. How many times did you have to ping somebody to get them to do the right thing? It was an interesting way to differentiate CDS alerts and reminders from one another. Just one more thought, a question in the back. Manoli? Hi, Manoli Pereira, Northwestern. I had a question, or maybe a statement or thought, about a bigger-picture part of CDS, in pharmacogenomics in particular but maybe in other areas too. One of the problems with CDS, and with physicians or clinicians using that information, is that there are other things that negate the importance of genomics as we implement it in patients.
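The "number needed to remind" metric described above is just the ratio of alert firings to resulting behavior changes, i.e. the reciprocal of the acceptance rate. A short sketch with made-up counts:

```python
# "Number needed to remind": how many alert firings it takes, on
# average, to produce one change in clinician behavior. Counts below
# are illustrative, roughly matching the override/acceptance rates
# quoted in the discussion.

def number_needed_to_remind(alerts_fired, behavior_changes):
    if behavior_changes == 0:
        return float("inf")  # the alert never changes anything
    return alerts_fired / behavior_changes

# A broad drug-drug interaction alert with a ~90% override rate:
nnr_ddi = number_needed_to_remind(alerts_fired=1000, behavior_changes=100)
# A targeted pharmacogenomic alert with an ~80% acceptance rate:
nnr_pgx = number_needed_to_remind(alerts_fired=1000, behavior_changes=800)
```

Ten pings per changed decision versus 1.25 makes the differentiation between alert types concrete, which is what makes it a useful implementer's metric.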
And I think a great tool would be one where you have the genomics predictive of, let's say, a pharmacogenomic response or phenotype of some sort, but then there are other patient-specific factors that may change the recommendation for that patient. And I don't just mean algorithmic sorts of ways, of which there are some, but other ways as well, in which disease, or other drugs the patient is on, may make that information either more or less important. So, since the time is winding down, we'll go ahead and maybe get a comment or two from each of you, and then I'll close with some summary. So I guess a quick comment and... Actually, let me let somebody else go first. Can I comment on that comment? I just wanted to second it: I think for physicians, you know, these alerts pop up and you're like, well, that's not really relevant; I have to give the drug for this reason or that reason. And I do worry also that alerts get set up and then the data change, right? Papers come out and say, well, that genotype is actually not that helpful in this patient population, or whatever. So you do have to avoid that. I mean, it would be nice, actually, to learn from it, if there were some way to not annoy people and ask, why are you prescribing this, and give people five choices or something. But I do think that alerts physicians feel are not relevant to their patients very quickly get tuned out. So I can make my comment now. When you were talking, I was thinking that maybe the decision support could be talking points for the physician when they speak with their patients, and how that would be made available might vary. But somehow getting at what the patient values, and taking that into consideration in whatever the ultimate decision is, is important. So whether it's a passive or an interruptive type of alert, just having the talking points to include the patient's opinion in there.
If I could make two points. First, I think it's important not to view clinical decision support as limited to event-based, alert-based decision support; I think these apps may be a better model for eMERGE. And to the last question, I think there is a real fork in the road in terms of the potential objectives for eMERGE phase 4. If we were to choose to build genetics-specific apps for managing genetic results in the EHR, I think we could make great progress in increasing the availability of updated genetic results, and we could energize the creation and adoption of FHIR-based standards that a lot of other things could be built on. But that type of support is likely to be less powerful in the context of specific clinical scenarios than support built for those clinical scenarios would be. If instead we choose to focus on a couple of clinical scenarios, then we do exactly what was suggested: we look at how we model all of the data required to truly make good decisions in that area, genetic or non-genetic, and we look at how to present that data in a way that saves people time, leads to it being adopted, and produces the outcomes we're looking for. And I guess my last comment would just be to re-emphasize the hierarchical nature of the knowledge that's relevant to this exercise, and the distributed, networked nature of that knowledge, which might need to be pulled together. But then, to emphasize the app idea: before there were SMART on FHIR apps, we actually did an experiment building a smart documentation form which, based on a knowledge base, guided the end user, for diabetes care or cardiovascular care, on what data to review, what to document, and what to order in a condition-specific order card that had all the convenience factors of handouts, patient education, referral, and all the rest of it. And in a randomized controlled trial, it was extremely well received and impactful on the quality of care.
So I think this app idea is really relevant, because frankly the current crop of EMRs, and I haven't used them all, may not have all the features and functions we need to actually do this decision support right. So externalizing it in an app that lives within the context of the EMR makes sense. Just in time; I'd better summarize, because I'm about to get kicked off. So, no more questions or comments? Sorry. Summarizing, this session has really highlighted the paths to impact for eMERGE 4. I think there's a lot that could be done that could really move the needle, as you're saying. I think stimulating shareable CDS is definitely achievable, but it's going to lead to a need to maintain these resources, and that's not what the NIH is good at. Somebody sure should be good at it, though. So that has to be really well thought through. Should we be doing any integration in the EMR? I think a lot of the comments were: no, we should be doing it on the EMR, not in the EMR. And so there's an opportunity to push through that. We already talked about measures of success. Reproducibility was highlighted as one of the initial points. I would argue that that's impossible, because reproducibility assumes you're doing the same thing twice, and no two EMRs are the same. So it's really more: can you replicate the basic points in more than one EMR? So whatever the word is for that, maybe it's replicability. But that part needs to be there in terms of setting the bar. The targets of CDS were also brought forward by Blackford and others. Is it the immediate clinician? Could we be stimulating CDS for other providers, even those outside our system? I mean, we look at it on a very short timeline: who is the CDS for right now? But we don't stimulate CDS for the family practice doctor back in Ohio, because the patient was sent to us for specialty care, or whatever. There's an opportunity there. And then, of course, the patients, as was mentioned.
And then lastly, how do we emphasize the value of clean clinical data? Some of this is not easy to control, but most of it is, and the whole system, not just eMERGE, would be better with that in place. So I think there's a lot that could be done. I appreciate your time, and obviously there's time for one more question at the break, but I've overstayed our time. So, on to the next session.