Thank you, Dan. We now have an hour for questions and discussion. Since it's a large group, we'll ask that people flag us for attention so we can keep things organized. Sometimes follow-up points will come out of order, and we'll manage that as best we can. But are there any specific questions or comments about the ideal state? Dan has put forward some ideas, and what we'd like to come away with is a synthesis from the group: points of agreement, points of disagreement, things we can reflect on as we tackle the individual aspects of data and knowledge representation and implementation. Sandy.

So I thought that was a great talk, Dan, and I had a question for you. Given all of the interdependencies involved in building a clinical decision support rule, especially when data moves across organizational boundaries, it seems like it's going to be a long time before we're nine sigma in that area. At the same time, for the rules that are already stood up, we're probably well above one sigma, so we're better than healthcare at large. It would be great to get your thoughts on risk here, and on how to think about the appropriate level of robustness needed before you decide to launch.

So Russ Altman wrote an interesting article about pharmacogenomics in which he argued the standard should not be perfection but non-inferiority, right? And I actually believe that's appropriate here as well. If the comparison is the behavior of autonomous individuals acting from memory and personal bias, then the question is whether a clinical decision support system can measurably and reliably improve on that baseline, not whether it would ever make a mistake, as though you wouldn't deploy it until you knew it was absolutely perfect. At the other end we have the experience of the FDA shutting down blood banking software because of a few mistakes, thereby unleashing the natural behavior of people without decision support assistance, and the mistakes dramatically increased. So I think that tension between a perfectly behaving system and the belief that we must achieve some near-perfect level of performance before the risks are acceptable is an unnecessary impediment to progress. That's a long-winded way of saying there's a sweet spot where you believe the benefits outweigh the risks, and you're willing to accept that sometimes the wrong advice will be given, and then you learn from it and get better as a result. Does that help?

Yeah, I think that's a really important concept. A lot of the discussions I've been party to about regulation of clinical decision support systems, and the policies around that, seem to assume that current practice is a gold standard and that we have to meet some impossible bar, when all of us know we're really not talking about any sort of standard in practice. So I think that concept of non-inferiority is a very important one. And I'd love to add a quick coda: one of the challenges we have on the CDS side is actually measuring performance, as Dan pointed out.
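To make the non-inferiority framing concrete, here is a minimal sketch. The function name, the 2% margin, and the one-sided normal approximation are all illustrative assumptions, not anything proposed in the discussion; the point is only that the deployment question becomes a measurable comparison against the unassisted baseline rather than a demand for perfection.

```python
from math import sqrt

def non_inferior(err_cds, n_cds, err_base, n_base, margin=0.02, z=1.645):
    """One-sided non-inferiority check on error rates.

    The CDS arm is non-inferior to unassisted practice if the upper
    confidence bound of (CDS error rate - baseline error rate) stays
    below a pre-specified margin. Normal approximation throughout.
    """
    p_cds, p_base = err_cds / n_cds, err_base / n_base
    se = sqrt(p_cds * (1 - p_cds) / n_cds + p_base * (1 - p_base) / n_base)
    upper_bound = (p_cds - p_base) + z * se
    return upper_bound < margin

# 40 errors in 10,000 assisted decisions vs. 60 in 10,000 unassisted:
# the CDS arm clears a 2% non-inferiority margin easily.
print(non_inferior(40, 10_000, 60, 10_000))  # True
```

Under this framing, a system that is merely no worse than autonomous practice, within a margin everyone agreed on in advance, is deployable, and its error rate can keep improving from the feedback loop.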
To get that closed-loop idea going, we need a way to differentiate the CDS that works from the CDS that doesn't. There are some interesting ideas around a new metric called the number needed to remind that might let us differentiate high-performing alerts and reminders from low-performing ones, and that's a key component. Mark.

That was a great talk, Dan. If we look at the headlines about the Ebola incident in Dallas, that was a failure of decision support and of having checklists available. My concern is that the reaction now will be for everybody to download PDFs, as if human-consumable decision support is the best we can do. I hope this is a "what affects one of us affects all of us" moment, because there's a feeling this could go any number of directions, but it's a really informative example of where we are versus where we need to be in the ideal state. As a friend of mine posted on Facebook yesterday: Ebola and Rick Perry, what could possibly go wrong? Jeff, I think you're next.

We're being recorded, and there are some people concerned about that; I'm sure I'll have problems related to that, but that's all right. So Dan, I also really enjoyed your remarks. In the ideal state, it's implied that the heuristic, learning nature of the system you proposed will establish a better clinical standard than currently exists in clinical guidelines. So how do you envision the reconciliation between what professional organizations say is the appropriate thing to do and an ideal state that's learning faster than professional organizations can ever react? How does that get reconciled in your future state?

Yeah, so I think the power is in the data itself, in its transparency and in its volume.
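The "number needed to remind" metric mentioned above can be sketched in a few lines. The function name and the zero-action convention are my own illustrative choices; the metric itself is simply the reciprocal of the alert acceptance rate.

```python
def number_needed_to_remind(alerts_fired: int, desired_actions: int) -> float:
    """Alerts that must fire to produce one desired clinician action.

    Low values indicate a high-performing alert; very high values flag
    an alert that mostly generates fatigue. Convention here: an alert
    that never produces the desired action gets an infinite NNR.
    """
    if desired_actions == 0:
        return float("inf")
    return alerts_fired / desired_actions

# An alert accepted 50 times out of 1,000 firings:
print(number_needed_to_remind(1_000, 50))  # 20.0
```

Computed over an alert portfolio, a metric like this would let an organization rank its alerts and retire or redesign the ones at the fatigue-generating end of the distribution.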
The reason you have to do guidelines with groups of people giving expert opinions is that, by and large, you're trying to reason in a space where you don't have access to all the data, extrapolating a small experience to a larger one. The closer you come to having the data of the whole experience across the entire industry, rather than statistical sub-setting and extrapolation, the more powerful the data itself becomes about the reality of whether something works or doesn't. So I actually believe that if you took the guideline committees of the professional societies and empowered them with these classes of data at a scale they've never seen before, they would work a lot faster and be much more effective, even doing what they have always done. So I don't see a tension, but rather an improvement of the historical process of guideline development.

And so maybe, in that commons you envision, those professional organizations would surround it and really use it as their primary data source. I think another issue with guidelines, which many of us who have tried to convert guidelines into decision support have found, and Bob and I have been working on this in the pharmacogenomics realm as part of the CPIC Informatics Group, is that we love to write guidelines with what my friend Alan Morris calls weasel words, meaning we say "you may consider," so they're not adequately explicit.
So it's not only the translation of the evidence; it's translating it in a way that gets away from this reliance on "consider this" or "may do that," to really say: this is what the evidence shows us, so this is what needs to be done, and that can then be translated into decision support. Of course, we don't have an established evidentiary standard for when that point is reached, and that's why it frequently devolves to individual organizations' thought leaders to decide what they are going to do, as opposed to what we as a specialty or we as a health care system are going to do. Brian.

So I have a question, about the very nice talk, Dan, concerning what these desiderata actually mean in relation to each other; it may help me define, for myself at least, what the scope of this meeting is. Desiderata one and nine both talk about separation of different types of knowledge: number one talks about separating the primary molecular observations from clinical interpretations, and nine talks about separating classification from clinical decision support knowledge.
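As an illustration of the gap between a "weasel-word" guideline and an explicit, computable one, here is a minimal sketch. The phenotype labels and recommendation strings are simplified stand-ins, not a faithful encoding of any CPIC guideline; the point is only that an executable rule forces "may consider" language into a definite mapping with an answer for every input.

```python
def clopidogrel_recommendation(cyp2c19_phenotype: str) -> str:
    """Toy mapping from a CYP2C19 phenotype to an explicit action.

    A prose guideline might say "an alternative agent may be
    considered"; a computable rule has to commit to an answer for
    every input, including unrecognized ones.
    """
    explicit_rules = {
        "ultrarapid metabolizer": "Use clopidogrel at standard dosing.",
        "normal metabolizer": "Use clopidogrel at standard dosing.",
        "intermediate metabolizer": "Use an alternative antiplatelet agent.",
        "poor metabolizer": "Use an alternative antiplatelet agent.",
    }
    return explicit_rules.get(
        cyp2c19_phenotype.strip().lower(),
        "Phenotype not recognized; defer to the written guideline.",
    )

print(clopidogrel_recommendation("Poor Metabolizer"))
# Use an alternative antiplatelet agent.
```

Writing the rule down this way also exposes exactly where the evidentiary standard is missing: every branch that the committee hedged in prose becomes a visible, contestable line of code.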
If we talk about standardization and being able to develop knowledge across different institutions, then we have observations on one level, classifications on another, and a knowledge base on a third. Observations can clearly be machine-readable and don't necessarily have to be human-readable; the classifications are somewhere in the middle, and I don't think there's any standardization on them at all; and then there's the knowledge base, which is the ultimate thing we'd like to develop. So the question I have is: is this meeting about that higher tier of clinical decision support knowledge, and does it rely on a foundation of the classification system being standardized? If there is truly separation, if there really are three tiers in these desiderata that need to be separated from each other, then which tier are we discussing at this meeting?

So I'll take a stab at that. To some degree, the purpose of the meeting is not so much to reconcile the fourteen elements or to define them, but to understand and prioritize. If, from your perspective, this is a key element that needs to be understood and somehow made functional for us to achieve the ideal state, then that's something we should spend a lot of time on, asking how we actually accomplish it. So I would look at it this way: as we go through these things, find the ones where we don't have agreement, where there is some disagreement or confusion, and then try to say, okay, what is really needed? I'll go to Jim, and then I want to let Ken and Brandon weigh in as well, since we really haven't given you the opportunity to talk about the genesis of your desiderata.
So one group that Dan didn't have in his table, who are autonomous and who don't use standard practice to get reproducible results, are researchers. That raises, in my view, the issue of the base of the pyramid, which is the genome itself: it is still fluid how you assemble reads and how you call variants. So I would suggest we also need a practice that maintains a stable, renewable base of that information, refreshed by best practices on a regular basis and accessible to every health care system and decision support system, not necessarily reproduced in every one.

Great. Yeah, so I'll just talk a little bit about how we started the desiderata. I was at Utah; I did my PhD under Ken, and the goal ultimately was to build a decision support system that could support the genome. Using the work Ken has done in OpenCDS, we applied it to the genome, and we were using Dr. Masys's desiderata as a framework for doing this. But as we started building it, we realized there were a lot of other implicit requirements that we knew about but that weren't articulated in the desiderata, even though they were essential. We essentially stopped development until we defined these desiderata, because we felt it was really important to have that guidance and structure to go forward. And, as in any good project, you get sidetracked by other ideas that come up, and that's exactly what happened; it turned out to be a good thing. It's really neat to see it come together, and I think we're headed in the right direction. It's not meant to force people to do things a certain way, but to provide guidelines and ideas, things to be aware of when you are building decision support.

I'll just note that with the desiderata, Brandon really ran with it, so it's appropriately named the Welch et al. desiderata. And we did consult with the community and sent out emails, so it was really kind of a
community consensus-building effort; it wasn't just that we came up with it ourselves, it was asking people what they thought.

I just wanted to add two comments on what was discussed. One was with regard to the classification issue, and I think it raises a good point: when we're thinking about genomic decision support, we should probably think about what we can learn from what we've done in decision support for 30 or 40 years outside genomics, because there's a clear analog. There are things like Cerner Multum and Medi-Span and First Databank and NDF-RT, commercial and some non-commercial hierarchies that have been developed. How do we use those? Why has it happened the way it has? What are the issues? I think we just have a lot to learn. Along those lines, I really liked Dan's comments about nuclear power, aviation, and health care, because they bring up the point that we probably have the current state because that's where all the forces are pushing us. So what is the root cause of our not having scalable decision support, of it being primarily institution-driven and sometimes vendor-centric? I would posit it's because there has not been an actual business case for implementing decision support to date. And I think that's changing; I'm certainly seeing at our institution that building decision support and standardizing care is now actually a business case, and that's a good thing. We should align with those larger-level concerns at the C-suite and executive level about what a health care system should do, and if we align with that, we can make a lot of progress fast.

I just wanted to comment, as a physician, that we've been suffering from decision-support alert fatigue because decision support hasn't been very specific. As a neurologist, I can't tell you how many times I've been told that drugs I'm prescribing as standard of care have a contraindication, but that's not based in fact, and I
think the interaction alerting is the problem. The classic example in my case would be using a selective MAO-B inhibitor with the standard dopaminergic drugs for Parkinson's disease, when the concern actually comes from the non-selective MAO inhibitors used in psychiatry. I think we run the same risk of doing things in genetics that will lead there again if we don't develop these systems in parallel; this is really a plea in support of Dan's comment that we have to have the feedback loop, and exercise it really quickly, because otherwise we're going to have a lot of differences. As an example of something we see right now: at UNC we have a newborn screening project as well as an adult exome sequencing project for medical genetics, and the genes we're going to report in the two populations are completely different. If we don't have context-specific information, and we don't have it fast, it's going to lead to a distrust of the system that will be rather destructive.

I think those are excellent points. In our presentations we tend to devolve to showing alerts, because that's pretty much what we have in our armamentarium, but I think we all recognize there are a lot of flaws, so one of the takeaways for an ideal state is whether there are different ways we could potentially do this. The other point that comes up with the drug interactions: there was a very interesting meeting with some of the vendors who produce these, where the position was, we could give you just the significant ones, but our lawyers say we don't want the liability, that should fall to the clinician. And so the systems send everything, and then we have the situation we all know. A case in point: the use of Plavix in neurology has different indications
depending on whether you do CYP2C19 genotyping or not; it's context-specific. The real reason is that in neurology the difference between using aspirin and Plavix doesn't make that much difference in patient outcomes: you have to treat a huge number of patients to get an effect with aspirin. It's kind of an irony in neurology that with a drug so ineffective at preventing disease, it's now common practice, when a patient has a recurrent stroke on aspirin, to call it an aspirin failure and change to Plavix, even though the effect on outcome is about a 1% effect. So if we started getting an alert that we should be testing the genotype before putting someone on Plavix, it would be sort of silly, because it's not a very effective treatment to begin with, and it's not clear genotyping would improve things enough to make it a good recommendation. And because the drugs themselves keep changing, you may not even need to do it anyway.

This is more about Jim Mostel's comment, but two quick thoughts. One: I think Ken's point about actually following the dollars, Willie Sutton's law, is going to be very important and relevant here. Prior work I was involved with looked at the value of health care information exchange and the value of ambulatory clinical decision support, and those are big numbers, $78 billion and $44 billion respectively, nationwide, from stochastic simulations; and in those analyses we did nothing about genomic decision support, which could probably be orders of magnitude higher. It would be useful to look at that carefully. The second thing: we talk a lot about guidelines as the primary source of knowledge, but I think we're in the midst of a world evolving from a guideline-based, evidence-based approach to practice-based evidence, and the work at SHRINE and Stanford and some of the work
looking at big data sets of EMR data, comparing patients in real time to the patient in front of you, is going to be extremely relevant here as well. So there's a big-data, data-science part of this, not just the guidelines.

So my comment is that in the traditional clinical decision support I learned, workflow is a big component, and in my interactions with clinicians, part of the problem in applying genomic medicine is figuring out in which part of the regular workflow they actually apply these things. So I'm just wondering what the sense of this group is: when we talk about clinical decision support in this arena, how do we factor in the workflow, and is that an important consideration?

You know, I mentioned that briefly in my opening remarks: the workflow in an EMR is completely non-standardized; physicians use EMRs in a wide variety of ways. I just saw a report, I can't remember where from, but I'll look it up, that found exactly that across an analysis of New York City EMRs, reported by Rainu Kaushal and colleagues. The idea of standardizing the workflow may be anathema to a clinician's practice, but in fact at Vanderbilt, for example, there's clinical practice redesign under way to define a standard operating model, and those kinds of ideas are going to be very important, just like the checklist in the operating room or the checklist in the 747 cockpit. But there is not yet a standard definition or taxonomy for workflow in an EMR.

The interesting point that raises is whether, rather than trying to force a standard, this would be another opportunity for an adaptive system, where there would be adaptivity not only in the knowledge and its presentation but also in where I need it in the workflow versus where you need it, which is something there hasn't been a lot of exploration on, I don't
think. Go ahead, Dan.

So there's an interesting outcome of Tenerife that relates to standardized workflow: every pilot of a transport-category aircraft is now required, at takeoff, to have the captain put his hands on the throttles and the first officer put his hands on the captain's hands, and they move them together, thereby giving both the authority to reject the takeoff. That's a workflow that didn't exist before, a simple mechanical thing that is now embedded in the industry. We have relatively few examples in healthcare of people deciding there's a simple mechanical workflow change that solves the problem. I think that's what we need to get to: receptivity to solutions that are not always high-tech, sometimes quite simple, but that are done every time in every clinical setting where they're appropriate.

It's interesting to think about how that might work in the operating room. Lee.

So I see a lot of effort, from both industry and academia, in building up the knowledge base for genomics. I'm wondering whether knowledge base development is part of the scope of this discussion, or whether the CDS discussion we're having here is more interested in the interface between CDS and the knowledge base.

Yeah, I think that's a really good question, and the way I would characterize it is that we're probably not primarily focused on developing the knowledge base; there are other initiatives, funded through genome programs and others, that are working on that. It's much more about accessing the knowledge base and then doing whatever is needed in interpretation and knowledge representation to drive the CDS. That's how I would define it. So I think Alex is next.

Well, my question was exactly along those lines. One big difference among the three cases you listed, nuclear power plants, aviation, and healthcare, is the amount of knowledge and
the kinds of knowledge involved. Aviation and nuclear power plants are both physical systems, and operating them does not involve rapid evolution of the underlying knowledge in physics or aerodynamics. There is one item on the list of fourteen, the linking of discovery science, that matters here: coming from the research side, I see that expanding rapidly, and importantly, discovery science creates an incentive for sharing knowledge, for improvement of care and for monitoring in the style of phase-four clinical trials, and that incentive is actually a key to adoption of knowledge sharing. So that was my comment about the research side: it creates that incentive, and healthcare is different from the other two systems in that regard.

So actually, at the conference in San Diego, it was our presumption as well that these are well-understood physical systems, and we discovered that's not the case. Nuclear power plants have thousands of sensors and hundreds of control mechanisms, and when things go off-nominal and change very quickly, the operators often face massive information overload and don't know what the problem is; so it's not as certain as you might imagine. Aviation, too, is being overtaken by rare co-occurrences of events and systems failures, too much information in one area and not enough in another. We thought healthcare was sitting in an entirely different space, and it turns out they suffer from the same issues of complexity, information overload, and having to make decisive decisions at a time when you're not certain you even know what the problem is. So it's a little closer than we thought.

If I can take the moderator's prerogative and follow up on that: you used the term "closed loop" in your talk in a different way than I do. I think you were talking about
closing the loop in the sense of making sure we understand the outcomes of acting on decisions. But there's also a form of decision support, closed-loop decision support, that acts autonomously without the intervention of clinicians, and obviously that requires a strong evidence base and high reliability. So my question, again in comparison to the nuclear industry and the aviation industry: how much reliance is there on closed- versus open-loop decision support?

Well, we have a pretty mature instance of that in healthcare in the FDA device guidelines for anything that removes a healthcare provider from a key decision loop, and the poster child is the insulin pump that autonomously regulates blood glucose. In that setting, the engineering specifications, the proof that systems cannot fail, and the safeguards that if they do fail, they fail safe, make up a fairly mature engineering environment. But in genomics I don't think we're going to be closed-loop in that sense, with a device making a decision on the basis of a SNP or a haplotype, for the foreseeable future. So our problem is complicated by the filtering of the information through professional decision-makers, or patients, or both.

To put this conference in one context: what's your best educated guess of the timeline to widespread personalized medicine and whole-genome sequencing, and relatedly, is it the IT aspects that are likely to be the bottleneck, or the genomics knowledge and sequencing?

It's a very interesting question, and, well, prediction is difficult, particularly when it involves the future, right? Who thinks Yogi Berra said that?
It wasn't; it was Niels Bohr, which I think is fascinating. But the interesting answer is that I think it will happen sooner than we predict; there's a lot of enthusiasm around this. I think knowledge should be the bottleneck, but likely won't be. In some ways, the fact that we're having a meeting about decision support, an IT solution, even when our knowledge is still in its infancy, is an indicator that we're looking for solutions to a problem we don't yet fully understand. So I always get concerned about pushing things out that are perhaps not ready for prime time, but that is our inherent nature. To the extent we can look at this and say we will clearly need these types of solutions to use genomics effectively, because without them it really would become the Wild West, this may be a non-inferiority type of scenario. Perhaps there's also the idea that we could use some of these tools to filter information, and maybe this gets at paternalism versus open access: in the context of healthcare delivery we always make decisions about which data we should or shouldn't use, and about how we build our systems to use the best information. The challenge, of course, is that right now these decisions are invariably local, and I think all of us envision a time when we need something more widespread than relying on each of our systems to somehow solve this.
A quick one, and this may sound odd coming from a person who works for an EHR vendor, namely Cerner, but I really don't think we can get there without some kind of common shared infrastructure. Right now every EMR out there, mine included, has its own native internal CDS tooling, and that's the status quo; but for where this group wants to go, toward something that can grow, that can scale, that can benefit the community and benefit research, it has to work with all of our disparate EMR-type systems, and I think something common has to stand up in the middle to enable that. I just don't see any other way around it. Call me crazy, but I don't. The alternative is death by PDF.

And it's good to follow on JD, because I think it's helpful to contrast, carrying on Ken's theme. I think another barrier is that we don't have a good framework for managing lack of consensus. With the alert fatigue issue, there's good consensus around the science, so to an experienced clinician those alerts feel obvious. In genomics there's not consensus, so there's a greater need for some sort of decision support, but at the same time the organizations in this room probably don't agree about how to handle warfarin dosing by genotype, an area where there's reasonably well-developed science but no consensus. And then, from the EMR-vendor perspective, from the FDA-oversight perspective, from the liability perspective, they'll be reluctant to provide that knowledge, especially given a lack of consensus. So I think it will be useful to think about what a framework for managing areas without consensus looks like. Maybe it's a clearinghouse where we are very descriptive about where consensus is lacking, and each organization can choose among the disagreeing protocols; at least they're choosing something and then standing behind that with their own
internal legal structures.

This may not be generalizable, because I work in a fairly peculiar clinical setting where nearly all the patients are phenotypic outliers, but one phenomenon I've noticed with trainees is that the more structure and guidance we provide, the less, at least anecdotally, they seem to think. I wondered: did the nuclear industry and the airlines observe that phenomenon? Did it reduce the operators' ability to recognize situations outside the decision support they were using, and if so, what did they do to train people to overcome that problem?

So they both trained for team-based decision making, both of them, and they have explicit guidelines that took down the old captain-of-the-ship model. When I got my pilot license in 1970, I was exposed early on to the standard joke in aviation that rule one is "the captain is always right" and rule two is "when in doubt, see rule one," and it was only said partially in jest. These extreme hierarchies, with the learned, experienced professional at the top of the pyramid, were in fact identified as a source of potentially colossal unreliability, so training moved to a joint decision-making model with multiple persons participating in key decisions. But it is still the case that in both nuclear power and aviation there is assignment of responsibility: the pilot in command, the PIC, is finally responsible; among all the decisions made in collaboration with others, one person takes responsibility for saying, okay, this is what we're going to do. So there are hybrid models in these intellectually challenging and cognitively intense domains, and in both of them the expectation for training, and the amount of training it takes, has actually gone up substantially. You might imagine that if operators got complacent because they had too much decision support, they would
actually need less training, because you'd be relying on the systems. But in fact they have raised the bar for how deeply individuals must understand the behavior of the systems they control. With glass-cockpit avionics, complexity has dramatically escalated over the old round gauges, and it's gotten a lot harder; a whole generation of pilots simply left because they couldn't do it. So the observation is that you haven't seen a devaluing of the human cognitive element; rather, the level of sophistication of the reasoning required of practitioners has been kicked up a notch, even in these highly automated environments.

You've been through two different training environments yourself, medical and aviation; I assume you're not trained to run a nuclear power plant. What's your thinking about the prospects for cultural change in medical training, and the challenge of that change, which I view as a really huge issue compared to the cultural change that had to happen in aviation training?

I'll just insert here that we could obviously spend an entire five days on that, because it is a bigger issue, so I don't want us to get sidetracked on something that's a key issue but probably not directly relevant to what we're about. But please do respond briefly, because I think it is important.

I think the power is in the data, and in systems that provide real value at the point of care. In places like Vanderbilt, which have highly automated, workstation-based care done by teams, the students learn from the data as well as from their elders as they're socialized into the profession. So the transition from the pedagogy of the profession being the learned autonomous professional to this systems model is achievable, but it depends on effective systems actually running in the environment; that becomes a very powerful educational tool
Otherwise, if it's just faculty opinion, you're right, things will not change. Okay, yeah, and I would just point out that in organizations that have made the transition, what we hear from providers is that because a lot of the things that would normally take up a lot of their time, the routine things, can be managed by systems-based care, they actually find it to be a much better environment: they're applying all of their cognitive assets to the tough nuts to crack rather than spending them on routine work. So I think we're beginning to see that emerge in healthcare systems, but the cultural transition to that point is really challenging, and clearly we are still training our practitioners in an apprentice-based model, a distributed apprentice-based model, but nonetheless a master-apprentice one, and that is not necessarily going to move this forward very quickly. Sure. Jamie from ONC, and this is just such an exciting conversation; we were just talking about this at ONC this week, this whole loop of CDS and especially the concept of the public library. I have a question, though, because, especially based on what JD was saying with Cerner, each of the EHRs has its own native CDS, and we'd love to get to this ideal state where EHRs are chosen based on the usability of their platform rather than the accuracy or value of their CDS. But Blackford, you said something about us moving from evidence-based medicine to practice-based evidence, and if we're moving in that direction, how does that affect the knowledge processing, and how does that affect creating this public library of CDS? Thank you, a great question, and thanks for coming. I think there's going to be a spectrum of evidence, from that which is expert-derived and consensus-oriented to that which is purely
numerically derived, if you will, from analysis. And it's a spectrum because the knowledge will go in each direction: from database evidence to consensus, and from guidelines informed by updated data, if we have a true learning health system. Ideally we would have an open knowledge repository of the classic guideline-based decision support artifacts, but we would also be able not only to rate them and assess them with a variety of measures of CDS performance, but to combine that with derived parameters, if you will, from database analyses going on around the country. The SHRINE experiments at Stanford, for example, were very interesting, reported in the New England Journal, and I don't know enough about that assessment to say whether or not generalizable knowledge artifacts would come out of it that could be put into the open knowledge repository, but that would be the idea. And it goes back to the virtuous learning cycle: you have to not only provide the decision support but then measure its impact contextually so you can feed back, and that could be both population-based rating of knowledge artifacts by users, if you will, and their performance in practice. Do you mind if I ask one more quick thing? There's a step I'm not hearing conversation about, and I don't know if this is the place to have it, but I'm going to throw it out: the step between knowledge processing and the public library of CDS, where we need some sort of service that turns that knowledge into computable logic. I'm just going to throw that out there. Yeah, I think we'll have a whole session on that, and I would place that within the knowledge representation sphere, so I think we will have a lot of opportunities in that area to really drill down on that.
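The virtuous learning cycle described here, an open repository that re-ranks knowledge artifacts from both user ratings and measured performance in practice, can be sketched in miniature. This is purely illustrative: the artifact names, fields, and score weights below are invented for the example and do not describe any actual system.

```python
# Illustrative sketch: an open knowledge repository that stores CDS
# artifacts and re-ranks them from two feedback streams discussed above:
# subjective user ratings and observed performance in practice.
from dataclasses import dataclass, field

@dataclass
class KnowledgeArtifact:
    artifact_id: str
    description: str
    ratings: list = field(default_factory=list)   # 1-5 stars from users
    outcomes: list = field(default_factory=list)  # 1 = advice accepted/helpful, 0 = not

    def score(self) -> float:
        """Blend subjective ratings with observed performance (weights arbitrary)."""
        avg_rating = sum(self.ratings) / len(self.ratings) / 5 if self.ratings else 0.5
        accept_rate = sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.5
        return 0.4 * avg_rating + 0.6 * accept_rate

class OpenRepository:
    def __init__(self):
        self.artifacts = {}

    def publish(self, artifact: KnowledgeArtifact):
        self.artifacts[artifact.artifact_id] = artifact

    def ranked(self):
        """Return artifacts best-first, closing the feedback loop."""
        return sorted(self.artifacts.values(), key=lambda a: a.score(), reverse=True)
```

The design point is simply that the repository holds both the artifact and its accumulating evidence, so consumers can choose among competing artifacts on measured merit.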
Jim. I'll put in a teaser though, just a teaser, because you're right on target, and Ken and I and a number of other folks have done experiments in exactly that: taking the knowledge artifact, whatever the resource or however it's created, and putting it into a cloud-based service. The SEBASTIAN experiments and the CDS Consortium experiments did exactly that, and that's the trick, to get to JD's point: how do we actually make this knowledge, this resource, actionable and usable in disparate EMRs? And then there's a host of issues around that. Brandon, I think I had you next. Okay, so I wanted to comment on just two things. One was the alert fatigue issue, because that's the main thing you hear about operationally: why are you bombarding me with all this useless stuff, or other words for stuff. And I think this relates to the intertwined nature of genomic CDS with other CDS. We could have the most perfect genomic CDS, 100% accurate, but if it's in a sea of things that people always just click the X to cancel out, it's going to be completely ignored. So we need to create this feedback loop, the closed-loop system, for everything, so we can turn off things completely unrelated to genomic CDS, so that the genomic CDS can pop up towards the top and people will actually look at it rather than just cancel it out, and ask, wait, that looked slightly different from the usual thing I click out of, what happened there? My other comment, on the notion of the ideal state, is that I think it would be really useful to come up with the clinical use case scenarios, the storyboards of what we're talking about: what is the experience of the clinician, what is the experience of the IT person, where it's, oh, I just go to this public thing and download it, and I just need to customize these things and now it starts working. I think having those targets, and defining them so it's not so amorphous, would be really
helpful to get everyone on the same page, and to get some agreement that this is exactly what we're talking about, what we want to see happen; then we can just concentrate on figuring out a way to get it done. I think getting that agreement is really important. That's a really important point, and we actually went back and forth and had some use cases that we were going to potentially use to tee this up, and we ultimately decided to focus more on the desiderata. And I apologize to those of you with whom we've had this discussion in other contexts, about the use of "use case," which is a term of art within informatics while everybody else kind of scratches their head and asks what we're talking about. Right, so this really is the bread and butter of informatics design: to say, what is the problem that we're really trying to solve, let's define the case, and then build out from that. So I think that's a really good point to make. So I want to make a comment on the implementation side of CDS. In our institute we have a pharmacogenetics expert board and a cancer tumor board to really implement those clinical decisions, and I think, at least in the foreseeable future, that's probably the model in our institute for getting things implemented. So whatever CDS we're talking about here, to some extent it's the local influence, how this system can really help those groups improve implementation, that matters in the short term. But I'm just wondering how we go from this model to the next-level model, pushing it into a more general setting; I think that will take some sort of process, so how do we do this? That's just one of the comments I have. Yeah, that's something that's coming up, and I'm definitely flagging it, related to the idea that right now everybody that is doing this is doing it
locally, using their own methodology; how do we make it more generalizable? So I had Heidi next, then Adam, then Brandon. In thinking about the airline versus healthcare environment, I'm thinking about the incentives. In the airline industry I see efficiency, customer service, and safety as top priorities, whereas in the healthcare industry, although those are factors, the much higher priority is reimbursement, which centers around fee-for-service, lots of visits, and lots of procedures, things that really run against efficiency, safety, and customer satisfaction. So I'm curious whether we see the move to accountable care environments as necessary before we can really make any movement in clinical decision support, because the healthcare industry is not willing to put dollars into clinical decision support until it actually sees true healthcare savings under a different reimbursement model, or whether we think the fledgling accountable care reimbursement models that are starting to evolve will actually push the need for clinical decision support, and which comes first. Yeah, I think that's a really good point, and it's interesting, as you talked about the healthcare system: none of us, I think, when booking tickets on our various carriers, looks and says, well, gosh, which one is most likely to get me to the right destination without crashing the airplane, and who's the pilot, and all the rest. We don't ask those questions because they've gotten to the point where that's just inherent; you don't think about it. The fact that we're even talking about the need to improve patient safety is in some ways an indictment of how we've approached medical care in the first place. But to some degree, on the point that you're making, we have sort of implicitly decided
that business as usual will not be sustainable in this country, and so we are in some ways assuming an ideal state of a healthcare system that is in fact focused on value, reliability, and safety. And while I think that may be illusory to some degree, for the purposes of this discussion let's just pretend that is in fact the motivation; when we actually go back, we are going to have to deal with these types of issues. So I had Adam next. I'm just going to go back a few comments and pick up on the knowledge base. First, Dan, thanks for your presentation, I thought it was great. I wanted to pick up on one of the items you noted, which was trying to maintain a constantly updating knowledge base, and I want to frame it in the context of how the current system works, and I'll maybe pick on JD for a second, because, notwithstanding your current comment, from what I understand of the system, the ideal is much more of a pull model, while right now it's more of a push. What happens is that the knowledge base is given to the EHR vendors, and the EHR vendor then has to go and update the system, however often they do that, to bring the new knowledge in. I think in order to get to the ideal knowledge base we want for genomics, we need to think in terms of how the current ecosystem works for data transfer and flow between different providers and vendors, because there's an entire set of third parties involved here that we haven't really been discussing. I've had this conversation with groups like First Databank, who would like to have that sort of cloud-supported push model, but that isn't generally the case today. So I think we do need to consider the ecosystem we're working in and figure out what kinds of cultural changes or business structures would need to change to accept that type of system. You know, it's a great point, and
with the ONC person in the room, one of the things being considered in MU3 is this idea of open APIs, or at least some standardization of an API construct that service providers could then use for a variety of different things. And the idea of the ecosystem is right on target too, because there may be PREDICT services offered from Vanderbilt, there may be a bilirubin tool offered from Stanford, there may be other things from the Mayo, and that ecosystem of services should be a competitive marketplace. It's starting with companies like IMO (Intelligent Medical Objects), for example, who provide all their content via a service from the cloud, so we're getting there, but a little regulatory pressure might help. Yeah, and with the emergence of the SMART on FHIR type of application stack standards, that's getting us to the point where we can truly have an ecosystem like this. And especially, Adam, back to your point: if you look at the ultimate deliverable out of a CDS action, it's an order, and that flows through a very traditional pipe through an EMR. But what happens before that? Look at what happens today, taking computers out of it: you would go get your buddy in the hallway and you would have a discussion, you'd converse and say, is this right, OK, great, and then I'm going to do this. It's enabling that type of conversation, where it's not an order (an order is going to come out the other side of the system) but a conversation, where you have to interact with services and an ecosystem to move the data around in a meaningful way. And it has to be trackable, because at the end of the day, as we talked about with risk earlier today, that whole thing is a medical decision process that at some point the FDA is going to put their arms around, as they should; it has to be tracked and time-stamped and version-controlled, that kind of stuff. But it is an ecosystem that we've got to enable to build.
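The kind of tracked, time-stamped, version-controlled CDS service described here can be sketched minimally. This is an illustration only, not a real API: the rule, the artifact name, the version string, and the card fields are all invented, and a production service would use an actual standard rather than ad hoc dictionaries.

```python
# Illustrative sketch: a cloud CDS "service" that returns a suggestion
# card (not an order) plus a timestamped, version-stamped audit record,
# so the decision process is trackable as discussed above.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    artifact_id: str
    artifact_version: str
    patient_ref: str
    fired_at: str  # ISO-8601 UTC timestamp

def cds_service(patient: dict, artifact_version: str = "1.2.0"):
    """Evaluate one hypothetical pharmacogenomic rule and log the call."""
    cards = []
    if "CYP2C19*2/*2" in patient.get("genotypes", []) and \
       "clopidogrel" in patient.get("orders", []):
        cards.append({"summary": "Poor metabolizer: consider alternative antiplatelet",
                      "indicator": "warning"})
    audit = AuditRecord(
        artifact_id="cyp2c19-clopidogrel",
        artifact_version=artifact_version,
        patient_ref=patient["id"],
        fired_at=datetime.now(timezone.utc).isoformat(),
    )
    return cards, audit
```

The point of the audit record is the one made in the discussion: every invocation carries the knowledge artifact's identity and version plus a timestamp, so the advice given can later be reconstructed exactly.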
Adam, also, the discussion is taking place in a context where we know that NHGRI and others are trying to fund some of these efforts, like ClinGen and ClinVar, and there are aspects of those projects looking specifically at integrating with EHRs. So I think we are cognizant of the fact that the ecosystem isn't there yet, and we need to be cognizant of that as well. It doesn't necessarily need to be the sole focus of what we're talking about, but as we begin to develop ideas we have to, to some degree, give them the test of how realistic they are. And I think this gets to what Blackford was pointing out: the ideal may be over here, but if the distance is enormous, then maybe that's not the one to start with; maybe we take one that's a little closer to ideal, where we might be able to achieve it, assuming there's not some sort of if-this-then-that dependency. Brandon, I had you next. Yeah, just one comment with regard to, I guess, the scope of this meeting. The way I look at it is that we don't have genomic decision support and a separate decision support; it's really decision support, and genomics is one aspect of it. A lot of the things that have come up, like the interface, the interoperability with the EHRs, the workflow integration, are part of that larger decision support issue that we're not the only ones working on; there are many other groups already out there working on it. I think our time would be spent more efficiently if we looked at the specific aspects related to genomic decision support and left the larger decision support issues to the people already working on them; let's focus more of our time on the genomic aspects so we can be more efficient.
Yeah, I think this is an important point, and we're about an hour and a half in. One of the sub-bullets under this key question was: is genomic clinical decision support exceptional compared to other clinical decision support? As I've been listening to the conversation, I've heard aspects of both positions presented, so I would not necessarily accept as axiomatic that genomic CDS is a subset of CDS and that solutions that come from the general case will necessarily be applicable. I think it's something we'll need to actually assess, so that at least within our group we can say: is it exceptional, and if so, how exceptional, and what are the exceptional aspects we need to address? It may end up coming down to the points you're bringing up, which is: let's focus on, perhaps, the knowledge aspects and less on the structure of CDS. But I wouldn't necessarily presume that at the outset. Jim, I realized I skipped you because I leaned over to this Jim instead, my fault, and then I'll get to you, Josh, sorry. I just want to make the point that I think all the discussion around alert fatigue is really important, and I think we have the opportunity to think about it prospectively. In some of the work we've done with CPIC in designing workflows, we've kept alert fatigue in mind, and I actually think genomics gives some opportunities to be very specific; it presents other challenges, but I also wanted to mention that we have to keep in mind that CDS is much more than the interruptive point-of-care alert. That's where our minds often go right away for CDS, but it's much bigger than that, and we just need to keep that in mind as we discuss.
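The closed-loop approach to alert fatigue raised earlier, turning off alert types that are routinely dismissed so that higher-value alerts can surface, could be sketched as follows. The class name, thresholds, and alert-type labels are invented for illustration and are not from any deployed system.

```python
# Illustrative sketch of closed-loop alert management: track how often
# each alert type is overridden, and suppress types whose override rate
# stays above a threshold once there is enough data.
from collections import defaultdict

class AlertGovernor:
    def __init__(self, override_threshold=0.95, min_firings=20):
        self.override_threshold = override_threshold
        self.min_firings = min_firings
        self.firings = defaultdict(int)
        self.overrides = defaultdict(int)

    def record(self, alert_type: str, overridden: bool):
        """Log one firing and whether the clinician clicked it away."""
        self.firings[alert_type] += 1
        if overridden:
            self.overrides[alert_type] += 1

    def should_fire(self, alert_type: str) -> bool:
        """Suppress alert types that are almost always dismissed."""
        n = self.firings[alert_type]
        if n < self.min_firings:
            return True  # not enough data yet; keep showing it
        return self.overrides[alert_type] / n < self.override_threshold
```

In practice such suppression decisions would need clinical governance rather than a bare threshold, but the sketch shows the feedback loop the discussion calls for: measurement of response feeding back into whether an alert is shown at all.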
Thanks. Josh. Yeah, I want to go back to Heidi's comment about what's going to motivate some of this genomic CDS, and one reflection from our own program is that patients are extremely enthusiastic about this area. So even though you may need, as a prerequisite, a system that can create or deliver CDS, a lot of this may be motivated by patients, and one of the interesting aspects, which I think we'll get to a little later, is what aspects of the knowledge we expose to patients and how that affects the physician communication piece. So I just wanted to make that comment, thanks. Thanks. Betsy. I was struck, as I always am, by Dan's comments, but on the issue of "if it happens to one of us, it happens to all of us," I have been thinking about where you could promote that most quickly, with greatest success, in a competitive hospital-type environment, and I think it might be an area that's been getting some attention lately: we should have a national database, NCBI should be involved in building it, and so on, around this issue of antibiotic resistance, along with the already successful work going on in tracking down foodborne illnesses. It seems to me that it's very hard for a hospital not to think that if it happens to one of us, next week somebody with that antibiotic-resistant strain could show up in my area, and then how would I control it? It's also an area where finding out whether you've got one of those cases, and what's happening with them, matters. So it seemed to me that that might be a place where, while we're worrying about payment and all the rest of it, which is a big mess as Heidi has said, we might get people to think, well yes, if it happens to one of us it happens to all of us, and maybe make some progress there while we're trying to solve the whole structure of U.S.
healthcare; we'll take that on in our next meeting. Just a comment to follow up on the question of whether genomic medicine CDS is significantly different from CDS in general, and how we should approach the specific goals of this meeting. We actually have 14 points here that we've used to quantify what we think is important for genomic CDS, although I actually think what was quantified is a mixture of genomic CDS and general CDS. If we were to go through the process of ranking these separately for general CDS and for genomic CDS, and they ended up with different rankings, maybe those differences are the areas we should target if we want to go after genomic CDS and not CDS in general, because as I look at some of these rankings, and I come purely from the genomic side, some are not the highest priority for genomic CDS even though they may be very important for CDS in general. Great, thanks. I'm sure that all of you would love to have yet another survey to fill out, but we may decide to do that anyway. So, Dan.
The business of whether genomic CDS is exceptional or not: there is one aspect where I think it treads into territory that other CDS doesn't, and that's the business of the family. For some of the pharmacogenetic variants we've been implementing, that's not a big issue, but when we start to get into risk prediction for serious, so-called monogenic diseases, cancer susceptibility, some of the cardiomyopathies for example, the CDS has to extend to the idea that we detected this variant in this patient, that patient doesn't have much of a phenotype, but it is entirely within the realm of possibility that members of that family who share that particular variant will have a phenotype. That muddies the waters considerably, but it is one aspect in which what you deliver to the clinician taking care of that particular patient may include advice on what to do about the kids, advice to adult doctors about the kids, or advice to pediatricians about what to do about the 80-year-olds, so it's worth thinking about. Last question, and then Ken, we'll give you the last word. Another cultural question: my impression is that aviation has done a phenomenal job of taking the adversarial, punitive components out of problem assessment, and my impression is that in medicine we are the exact opposite, that we have a highly arbitrary, highly adversarial, punitive way of dealing with errors, which leads to that rubbish mentioned earlier about including every possible conceivable risk in the package insert, which is mostly clinically useless, just to CYA. What do you think are the prospects of changing that in healthcare, and is it necessary to change it to adopt models like aviation's? So there is a thing called the Aviation Safety Reporting System, ASRS, and when you mess up, when you break one of the rules, as long as you don't bend any metal or commit a crime doing it, you actually
get a get-out-of-jail-free card from the FAA for reporting that you caused an error, or that you had a difficulty that caused you to deviate from reliable practice. It is a fault-free separation of understanding the cause of the problem from the assignment of blame, and we have those two so tightly integrated in healthcare. One could imagine an aviation-safety-reporting-like system for the things clinicians know cause them to make mistakes or fall short of best practice, but I haven't seen an initiative to do that. The other thing, relative to the motivation to improve rapidly as an industry, apropos of what Heidi said: David Gaba, who is an anesthesiologist at Stanford, was at a nuclear power conference and observed that there's a big difference between these industries, in that when they deviate from reliable standards, they lose the means of production. The airplane gets taken out of service, the power plant is taken offline; they lose their means of producing income. And he said, in healthcare, as an anesthesiologist, when we have an adverse outcome, maybe a patient dies, we just call for the next patient. If an OR were taken out of service when there was a deviation from reliable practice, there'd be a much stronger organizational incentive to get better quickly. Yeah, I think anybody that's ever attended a surgical morbidity and mortality conference would agree that we're really good at assigning blame. There are some examples where, when there's been a systemic failure or a sentinel event, more of a root-cause-analysis type of approach, without blame, is used to reach understanding; the vast majority of these are not individual errors but systematic ones, and some systems are doing this. The interesting thing, and I think Intermountain probably has the most data on this, is that they find, from a liability perspective, an
individual-practitioner-liability perspective, that the CDS, and the work that goes into vetting the knowledge and guidelines behind the CDS, actually provides a better defense, because the documentation of why decisions were made is much more robust in situations where CDS has been activated: they have in fact closed the loop and captured all of the relevant information about why each decision was made. So they've actually found that in those cases it's easier to defend an individual physician who might be flagged from a liability perspective. So Ken, and then we'll move to our break. Knowing I'm standing between you and the break, I'll be quick. I'm really interested in this notion of genetic exceptionalism, and my practical thought is that if we focus on genomics being different, it'll be harder: harder to get other people to help us, harder to say, oh, you're working on something so similar, let's work on it together. So my thought is it might make sense to ask what people are doing in general decision support to try to scale, with ONC and CMS coming up with standards, the health services platform work, SMART on FHIR, et cetera, and to ask instead how we could use that platform and what we would need to add to it to support our use case, because the bottom line is that resources are finite, and it makes a lot of sense to combine them. I'm so glad you're co-moderating the implementation session, because that seems to me to fall squarely within it, so I think that may be something to tee up as we move into that space this afternoon. With that, again, another thank-you to Dan for an outstanding talk. Much as I anticipated, I didn't figure we would have a crowd of shrinking violets from whom we would have a difficult time extracting information. Blackford and I, as well as our recorders, are trying to capture all this, and our job will be to synthesize these good ideas into something that makes sense. So we are now on break until
10:30; when we reconvene, we'll start talking about data issues.