take their seats. We'll get going on the afternoon. Boy, this is a compliant group. At some of these genomic medicine meetings we've had to essentially fire off rockets to get everybody's attention, so I'm glad we have them just in case. So, panel two, which is, I think, a nice transition from the data discussion, because a lot of what we talked about related to data was also about the knowledge associated with the data, and we're going to be focusing on that in our second session here. The co-moderators for this session are, to my left, Atul Butte, who did not introduce himself earlier because he missed Dan Masys's advice to the keynote speaker; he was giving a keynote at Georgetown earlier today. I'm sure it somehow went okay even without Dan's sage advice. So we'll let Atul introduce himself, and then Josh Peterson, and they'll lead this session. Hi, I'm Josh Peterson, and just to give you a little of my background, I've been working primarily in drug clinical decision support for 14 or 15 years, and more recently in drug genomics, so that's a little bit of my bias. Reflecting on what we've been discussing this morning, I think there's been a lot of agreement about the importance of encoding genomic knowledge and the fact that it needs to be maintained over time, preferably in some accessible central repository. But I think there are a lot of issues we've not touched on that make this one of the more difficult clinical informatics challenges of our time. So I wanted to bring up a couple of those and then turn it over to Atul to talk some more. One of my observations about clinical decision support in particular is that CDS has a pretty mixed pedigree. There was a lot of initial work at four or five academic medical centers, and this has transitioned, particularly for drug knowledge, to content that's now managed by a number of commercial providers.
These commercial providers offer subscriptions, and hospitals and practices either buy these individually or get them as part of an EMR contract. But the evidence that these large knowledge bases change care for the better is fairly weak, and there's also a great deal of frustration at the point of care with some aspects of this knowledge. Reflecting on why that is, I think some of it is the vehicle for how the knowledge is delivered, and some of it is the workflow and how it's integrated. But I also think there are aspects of the knowledge itself that we should talk about. It's already been brought up earlier that very frequently this knowledge is underspecified: there may be too few elements included in the rules, and there may not be sufficient exception handling. I want part of the discussion to be, essentially, how genomic clinical decision support might be able to avoid some of this fate. Now, there are some great examples where a central clinical decision support facility has worked very well. One of those, if you're not familiar with it, is in immunizations, where there's currently a central repository, actually managed privately, that is subscribed to by a number of public health facilities and hospitals. It provides the rules that are then applied to a patient's status to determine whether they need an immunization or not, and that's working quite successfully. So that's one issue. The second issue I think we can discuss is to deconstruct a little bit what we mean by knowledge. We discussed some of this earlier when we focused on how to specify genetic variants, but there are a lot of other clinical aspects of a rule that we've not touched on, including which diseases, drugs, or other patient characteristics should travel along with the genetic variant, and how much of that should travel with it. And with that I'm going to turn it over to Atul. Great. Well, thanks for having me here.
Just a quick introduction. I'm an associate professor at Stanford. I think I'm in the room because I got involved with one of the first clinical interpretations of a patient presenting with a whole genome; he happened to be a faculty member at Stanford. To do that we built one of these private knowledge bases by curating the genetics literature, and out of that I'm a founder of a company called Personalis, so I have a major disclosure there. I'm also an investigator on ClinGen, which is funded by NHGRI, though by no means a principal investigator. So I have a couple of thoughts. I'm the kind of annoying panelist who has missed the entire meeting, but I can kind of guess what's going on here, and I've chatted with a few people. I've got maybe six points I wanted to bring up, and if none of these are controversial enough to spur discussion, I should just get on a plane and go home. All right. The first one is genomic exceptionalism. I caught a little bit of the first panel, and I kept thinking: why do we make such a big deal about clinical decision support for genomics compared to other lab tests? I'm an endocrinologist. We measure tons of growth hormone levels. I don't think an abnormal growth hormone triggers anything in any system; we require a doctor to look at it and make the trigger in their head, and the same for an abnormal insulin level. We're here for genomics because we love NHGRI, and we do want to think of this as important to decide on. But it's worth putting this in context, in my view: how do we manage knowledge in non-genomic fields? If I had to think of a similarly high-density type of data, the field of radiology comes to mind. Do we have systems today that trigger on raw results, the radiology pictures themselves? Or do we have triggers on the interpretations of that data, on what's coded into the interpretations?
So just as a simple mental exercise, you might want to look at this agenda and do a mental search and replace: what would this agenda look like if we just replaced "genomic" with "radiology"? What are the data issues that impact radiology CDS? How do we manage knowledge for radiology clinical decision support? Probably the answer to some of these questions would be that we would just let the radiologists use their heads, for better or for worse. We wouldn't necessarily code any of this stuff, certainly not with the level of rigor people seem to be talking about this morning. So that's genomic exceptionalism: we're here, and we're deeming this important, but just think about other fields that measure a lot of data and don't have this kind of rigor. The second point I want to make is that the knowledge bases we're imagining here, like ClinGen, ClinVar, and others we're building, are extremely context-specific. A lot of this was brought up in the first panel, but the problem is we're still learning what the relevant context really is. Obviously it's age-specific: you could have the highest genetic prediction of a childhood disease, but if you're 90 years old you're probably not going to get it at that point. It's subpopulation-specific: we don't even have a standardized vocabulary for all the ethnicities in the world beyond the federal government's five or six, or whatever it is. It's environment-specific: my good friend Paul Weiss says that in a famine, nobody's obese no matter what your genes say. And it's sex-specific, and sex is back in the discussion in the last few weeks with the push for more reporting of it. I think the challenge, though, is that the genetic studies we've had and been successful with in the past decade have not really followed the same kind of clinically relevant rules in terms of how they were published or even run.
For example, it is not infrequent to run into GWASes where the controls aren't even the same age as the cases, because the thinking is, well, DNA is DNA, you're going to get the disease or not. But of course there are different pretest probabilities of diseases at different ages. So the genetic science is great from the genetics perspective, but it might not be up to par as clinical science, and yet we're distilling clinical things to do from it. We have to watch out for that. Odds ratios versus likelihood ratios: those statistical issues are going to be there too. So it's context-specific. The third point I want to bring up is the clinical actionability part. I think the phrase was mentioned: we don't want to think about all variants, we want to think about clinically actionable variants. But I'm still stuck on what that means. We have had super-rare N-of-1 variants where, because we figured it out, the parents of a child now screen their IVF embryos to make sure the next child doesn't have that variant. If that isn't clinically actionable, I don't know what is, because even if there's no automated trigger, people take actions on some of these things. Of course the findings have to be reliable, they have to be reproduced. But the triggers cannot just be "refer this patient to a geneticist." I'm wondering what the action part of the actionability is that we're going to have to build into these knowledge bases. There are not enough geneticists on the planet to handle all of these, as has been written about by many, whether for incidental findings or not. So the actions in this future knowledge base have to be actions for a primary care doc, or even simpler: here are the ten things you're going to have to deal with before the patient is even seen by a geneticist, probably in another state. The action side is going to have to be a lot more specific. And there are certainly more radiologists than there are geneticists; that's a reason for exceptionalism.
Number four for me is dynamic applied knowledge. I think that was already brought up this morning; I heard a hint that we're going to have to reinterpret patients. The patient has changed because they've gotten older. The call on the genome might have changed, because we have the raw reads and today's bioinformatics call on that variant is different. And the knowledge base has changed. All of this has to somehow magically happen even without a new encounter, and certainly without an encounter to bill for. So we have to do this, and we're not even clear on when we're supposed to be looking at all these variants again. Point number five is the challenge that even the best knowledge bases today don't capture whether we were right or not. A lot of people make a story out of the BRCA1 database, for example, and how it's annoying to some that one company holds so many of the known variants because they've been running the test longer than most. But at the same time, it also kind of hurts me that they don't actually know what happened to the patient. They got the variant, they got the genome; what the hell happened? When the surgery was done, did they even find a tumor? None of that flows back. None of it is even planned to flow back into these databases today. So did we even make the right call? A lot of times our outcomes are bioinformatics predictions of what these variants really mean, but we don't know what actually happened after that quote-unquote clinical action was taken. There was no workflow to get that back into a knowledge base. The sixth and final point I'll make, before turning it over to the whole crew here, is how much of this is going to be done in academia versus industry. The challenge you're going to have in this room, I guess, is that if you make this really useful, industry is going to run with it by the end of the day. Because I think there's still this struggle.
Is this useful? Is this the right time? And there's also this tension: if you can make this so useful, all of a sudden others are going to run with it outside of academia. There are going to be questions of open versus closed. Is one approach faster than the other? And what exactly is going to be academic here, the development of these CDS systems or the testing of them? And with that I'll turn off the microphone. Great. So I think we've teed up some issues. I'd look to the moderators and ask: is there a logical jumping-off point, based on what we heard in the first session and the points that both of you have raised, that we can use to get the discussion going? And then, once we push the rock off the top of the hill, the avalanche will start. Well, I think one of the things we can start with is the first question there: what are the necessary elements of knowledge management and representation? We spent some time on genomics, but how do we represent other aspects of clinical care, and how much of that should travel with a genomic result? How much of it lives in the CDS system? How much lives within the genomic result itself? Okay, I'll push you. What do we do with radiology today? Do results follow patients? Does action follow patients today? If I have a CT scan, or I have a nodule, do we make a big deal about it there? How long has the microphone been on me? The light's on. So clearly it doesn't, but I'm not sure that's a good thing. I think there's a clear literature showing we miss a lot of diagnoses because that information isn't portable. So again, we want to avoid the mistakes we've already established in other data classes by making our results more portable and more informative. The portability part can be very standards-oriented, but the conceptual part is what needs to travel with the result so that someone can use it and perhaps take action on it.
So this just goes back to the first question of the necessary elements of knowledge management. One thing we haven't really dealt with yet, and that I think is emerging, is the notion that there will be a lot of different sources of information coming from all over the place, and understanding the level of review and the provenance of that information, and how respected it is, is going to be an issue. To date there's been so little clinical decision support content, largely coming from professional societies and other well-vetted sources that most people accept as respectable, and there are few enough of them that we can just manage it. But with so much knowledge and so many sources now, it will be difficult to assume that this is obvious to everyone, and our community really needs to develop ways to express different levels of how good knowledge is. We've been struggling with this in ClinVar and have worked to develop a star system: practice guidelines get four stars, expert panels get three stars, and things that come from multiple concordant sources get two stars. We're on another iteration of this because it's really challenging to decide what counts as highly reviewed, deserving four stars, and what is lower touch. And that's just the pathogenicity assessment; it doesn't even get into actionability and all the other layers of clinical decision support that follow the first question of whether the variant is pathogenic for something. So trying to figure out how we manage that vetting of sources is something we have to tackle.
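The star system described above is essentially an ordinal ranking of review status. A minimal sketch of that idea, with status labels that are illustrative stand-ins rather than ClinVar's exact review-status vocabulary:

```python
# Sketch of a ClinVar-style vetting ranking: map a submission's level of
# review to a star count so downstream CDS can weigh sources. The status
# strings here are illustrative labels, not ClinVar's literal field values.
REVIEW_STARS = {
    "practice_guideline": 4,            # professional-society guideline
    "expert_panel": 3,                  # reviewed by an expert panel
    "multiple_submitters_no_conflict": 2,  # concordant independent sources
    "single_submitter": 1,
    "no_assertion_criteria": 0,
}

def vetting_rank(review_status: str) -> int:
    """Return the star rating for a review status (0 if unrecognized)."""
    return REVIEW_STARS.get(review_status, 0)

print(vetting_rank("expert_panel"))  # → 3, per the scheme described above
```

A rule engine could then, for instance, suppress alerts built on assertions below some star threshold.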
Just to second that point: it should be not only how do we manage the data, but also how do we manage conflicts or disagreements in the data, because when we think about the patient down the road, I think everybody expects the patient will be moving between many different systems. You're going to have different measurements of the same data type that don't agree for a single patient, and we need to work out how to deal with that. I think another aspect of knowledge management we need to pay some attention to is the concept that knowledge is also fit for purpose. We can all take the same data and derive knowledge from it, and depending on what we want to do, we may end up with different things. So capturing the knowledge, representing it, and tying it to the purpose of what we are trying to do is an important aspect; that's just another dimension of it. At least my belief is that the final state of ideal knowledge management will give a full interpretation, and I just want to bounce my assumption off of others. Right now there are pharmacogenomics committees and people acting as interpreters. Is the final ideal state self-sufficient? That, I guess, is the question. Well, since we only understand a small single-digit fraction of the genome, and perhaps only partially understand even that, in my mind the genome is an instruction set you carry with you that will have utility as rules look at it over time. That's one of the reasons you separate the data from the interpretation: not only to keep it away from PDFs, but because it sits there as a data resource to be queried over time by a series of inquiries about the nature of your personal molecular variation. It seems to me an appealing, enduring separation of what lives where: that you carry multiple copies of your genome.
It'll include somatic mutations, and maybe as you age you've got multiple WGS copies of your own genome, and it just waits to be asked a question, so to speak: the book of humanity that's your own version of your own story. It should persist over time in a way that is queryable and not bound up with either an interpretation or a particular access mechanism. Just to build on that, and back to Atul's radiology challenge: the genome is the index of all the molecular information of relevance for human health. If you believe in the central dogma, at least to some degree, with all the molecules in the cell being derived from DNA, and proteins and so on, that means we can use the genome as an index for all that molecular information; we cannot use a radiology image for that purpose. As true as that is, the genome doesn't tell me how much I ate at the Cheesecake Factory last night either, so there is this important part, the environment, which radiology does capture better than the genome. I would argue that there is no such thing as an index for the environment the way the genome is an index for the molecular state of the cell. Can I just riff on one of the points that at least three people just made? There seems to be a consensus in this room that we will all agree on one knowledge base. I was just looking up how we classify colon cancer, for example: we have the Dukes classification, the Astler-Coller classification, the TNM classification, and many reasonable universities and academic medical centers have their own knowledge bases keyed around these. Do we really think we're all going to agree on one way to interpret this? I just want to bring up a few axes for consideration. One is that the approach we take to knowledge representation probably needs to be standardized where we want to standardize, but the actual content may not be, because there may be disagreements between different groups.
I don't think we're ever going to fully agree on clinical knowledge, really, where everyone says this is definitely the truth, at least for a significant portion of things. I just wanted to note that it seems like the second question, about architecture, probably influences the first question, about the knowledge base, because your approach to integrating knowledge into systems, and how you architect those systems, will probably influence how the knowledge artifacts need to be implemented. To make that a little more specific, I currently see three main mainstream proposals for how we scale clinical decision support and knowledge sharing. The first is the notion that you develop clinical decision support artifacts using a standard approach, where you have the knowledge in a standard form. This correlates well with, for example, what's being worked on in HL7 and with ONC, CMS, et cetera: the notion of standard order sets, standard documentation templates, or standard rule definitions. The idea there is that you define the artifact in a standard way, you send it off to different institutions, and they interpret it their way. Another approach is the notion of services, where you have a standard interface saying: this is how I will provide the information needed for a decision, in a standard way, and this is how the standard outputs will come back. In that case it actually doesn't matter how the knowledge is represented internally; you just need to talk to the service in a specific way. And the third is something that's been described as SMART on FHIR. This is also really being worked on, with the notion of a health services platform that Mayo, Intermountain, Cerner, et cetera are working very hard on. This builds on the fact that a lot of vendors already support embedding a web-based application into their system: Cerner has this, Epic has this, McKesson has this.
A variety of systems have this notion of embedding a web application within the native user interface, and this approach says: standardize how we integrate those applications. If you think of it in those terms, then almost nothing else matters. You just need to work on that API, and the way you build interoperability for genomic medicine is simply a question of how you build applications into systems. So I think it might be useful to think a little about what target integration approach we're aiming for, and then, based on that, it will inform what the knowledge needs to look like. I sense a bit of a potential bifurcation in the discussion here, from starting with what knowledge is needed and how we represent it, to what I think Ken has put forward, which is to leave that to some degree undiscussed and think more about delivery of whatever knowledge representation there is to a decision support engine. That's a very crude way, I think, of saying what you're saying. Maybe not necessarily. I guess what I'm asking is: do you envision, for example, that the knowledge base will have a standard structure that everyone agrees to, and you will be sending these knowledge bases to different institutions? Or do you think it will be more like centralized services that take input and, using a standard interface, tell you what they think in terms of the interpretation, in which case it really doesn't matter how you represent the knowledge as long as everyone agrees on the interface? I think this is a central point, and what I'd argue for is that we try not to depart too much from standard practice in the rest of the technology industry. Certainly the target implementation can define the architecture, and that can then define the knowledge representation elements.
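The services approach described above, where the input and output shapes are the contract and the internal knowledge representation is invisible to the caller, can be sketched roughly as follows. The request/response field names and the single pharmacogenomic rule are hypothetical illustrations, not any real service's API:

```python
# Sketch of a "services" integration for genomic CDS: the EMR sends patient
# facts in an agreed-upon shape; the service returns advice. How knowledge
# is stored inside the service is the service's own business.
# All field names and the example rule are hypothetical.

def genomic_cds_service(request: dict) -> dict:
    """Toy decision service: the request/response shapes are the contract."""
    variants = set(request.get("variants", []))
    orders = set(request.get("medication_orders", []))
    advice = []
    # Illustrative rule: flag a drug order when a relevant variant is present.
    if "CYP2C19*2" in variants and "clopidogrel" in orders:
        advice.append({
            "severity": "warning",
            "summary": "Reduced clopidogrel activation predicted; "
                       "consider an alternative antiplatelet agent.",
        })
    return {"cards": advice}

response = genomic_cds_service({
    "variants": ["CYP2C19*2"],
    "medication_orders": ["clopidogrel"],
})
print(response["cards"][0]["summary"])
```

The point of the sketch is that two institutions with entirely different internal rule representations can interoperate as long as both speak this request/response shape.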
One of the points that came up earlier, Atul, maybe before you arrived, was the idea of this knowledge being specified in a hierarchical, multi-level fashion. That might imply a variety of sources for sequence data, variant data, interpretation or pathophysiologic state, and then clinical significance, et cetera. It reminds me of the NCBO and using services like that in conjunction with other CDS activities. One of the things we did in the CDS Consortium work, which helped a little with Health eDecisions, was to think about the KM schema, the knowledge management schema: what are all the facets you have to have to actually represent rules, alerts, templates, forms, guidelines, infobuttons, and order sets? It's broad, it's a big thing, and something like that might exist here to coordinate the representation going on in different instances, if you will, and facilitate a services approach to accessing that knowledge. So that's reflecting on a lesson learned from seven years of hard work on this: five years of paid work and then two years of unpaid work. From your perspective in this discussion area, knowledge representation, does the schema seem to be the central element? And if so, should that focus what we're talking about, in terms of what sort of schema should be developed for the different levels of genomic knowledge? I think that would be a very reasonable start, and I would ask Ken and the ONC folks: do you think the current work underway with Health eDecisions could be extended for this use case? I would say probably, although I doubt any knowledge vendor or academic project would natively store it that way; they'll probably use ontologies or databases for storage, and it would be the interchange format.
Basically this would be an XML schema, a common, standard way to represent, for example, "if this and this and this are true, then conclude this and do this." It's not really rocket science. Probably the biggest thing that's needed, underlying any of these approaches, is a common data model for the data you're talking about. To represent knowledge, you first need standard approaches to represent the data that the knowledge refers to, and my sense is that's not yet in place, so it should clearly be the first target. I'd like to build on and reiterate what Ken just said. I'm actually doing a research project right now that looks specifically at applying the Health eDecisions (HeD) schemas to pharmacogenomics knowledge, and what Ken just said is exactly correct: what's really missing, the sore point in applying that standard to this domain, is a common data model and common terminologies to support some of the concepts being represented. By the way, that doesn't mean each EMR has to adopt a common data model; it means there needs to be a translation step from the native EMR representation of those data to whatever the data model is for inference. That's absolutely correct; I'm talking about the exchange of the information as opposed to the storage. So, at the risk of perhaps moving in retrograde, although I don't know that that's necessarily a bad thing, it's interesting that in the data discussion the data model never really came up. Not that these are completely separable, but do we tend to think of the data model as a data issue, or is it more a knowledge representation issue? Clem? A couple of things. I was first going to ask Ken which of those approaches he thought was right, but you may have focused us on the data.
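The two layers described above — a translation step from each EMR's native representation into a common data model, plus a shareable "if conditions hold, then conclude" artifact evaluated against that model — can be sketched as below. In a real interchange format such as the HeD XML schema the conditions would themselves be data; here they are Python callables purely for brevity, and every field name and the sample rule are hypothetical:

```python
# Layer 1: translate a site's native record into a shared common data model.
# Layer 2: evaluate a shareable conjunctive rule against the shared model.
# All field names and the example pharmacogenomic rule are hypothetical.

def to_common_model(native: dict) -> dict:
    """Translation step: one site's native fields -> the common model."""
    return {
        "age_years": native["pt_age"],
        "variants": set(native["geno"].split(";")),
    }

# The rule artifact: a list of conditions plus a conclusion. (A real
# interchange standard would encode the conditions declaratively.)
RULE = {
    "conditions": [
        lambda pt: "TPMT*3A" in pt["variants"],  # illustrative variant check
        lambda pt: pt["age_years"] >= 18,
    ],
    "conclusion": "Reduce thiopurine starting dose; recheck counts weekly.",
}

def evaluate(rule: dict, patient: dict):
    """Fire the rule only if every condition holds; else return None."""
    if all(cond(patient) for cond in rule["conditions"]):
        return rule["conclusion"]
    return None

patient = to_common_model({"pt_age": 45, "geno": "TPMT*3A;CYP2D6*4"})
print(evaluate(RULE, patient))
```

Note that the rule only ever touches the common model's fields, which is exactly why the shared data model, not the rule syntax, is the hard prerequisite.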
Data models, in my experience, have been the worst tar babies on earth; no one ever gets out once you start grabbing at them in a collective manner. You can do it in your own institution, but maybe that's too controversial. I'd like to go back to Atul's radiology idea and Ken's point. We've all talked about a couple of different levels of genomic data, and when you get down really deep, into how they do it and how many repeats they run, I don't think we're close to being able to get that standardized. But higher up, certainly at what I'd call Jim's level, we've got to name all these buggers, each of them, and have a code for them, and have some standard. That probably isn't too hard, because they're so regular; well, I shouldn't say that, but a lot of them are a lot alike, except that they have different effects. And Atul's idea, that we don't worry about the bits in the x-ray, has a lot of similarity, certainly at some level. That tickled, I think, many of us, maybe not all of us. We may need to step up a level to get started quickly, with simple things like interpretations, which would get to your point. And Ken, as a younger man, may be worried about me as the older guy who's been working on this for 20 years; you still don't see enough going on, and you'd like to see something happen at least in your lifetime, right? So let's find the levels and find some areas where we can make some real hay. And I think it's absolutely true that if we don't have the data, what's the point of the rule? I want to just address one thing. We've all tried to develop analogies and metaphors for what we're doing, comparing it to things we have familiarity with. In Dan's talk earlier, one of the points on
one of the very first slides was that in genomes we do pay attention to an individual pixel: if you have the specific base change that causes sickle cell anemia, that's a pretty darned important pixel out of the entire image. So in some sense there are differences when comparing this to how we think about images. Not that we need to spend our time on whether the analogy is robust, but we clearly do want to identify things that fit within Blackford's model, closer to the ideal, and important, that we could actually put some effort behind and move forward. So I guess the question for the knowledge representation piece now is: can we identify a couple of things we would characterize as being in there? I would also point back to the slide where, as we went through the different desiderata, these were the ones we thought might relate most closely to knowledge representation. Alex, regarding the pixel: a radiology image is usually interpreted in the context of current knowledge, and a pixel may not be relevant and can be forgotten, whereas a pixel in the genome may be revisited ten years from now, when we have more knowledge about what it means, and may actually become important. I guess that speaks to the third element, knowledge management governance, and particularly whether we want that information to be portable, because a pixel gathered at one institution may be interpreted at another, and we don't necessarily want genome sequencing or other high-throughput experiments to be repeated. The notion from radiology or pathology of starting with the lowest-hanging fruit, the interpretation, makes sense, because right now we're often at the level where we know it was a mammogram or a colonoscopy, but the rest is free text. So before we talk about how we represent the pixels of
the slides in the path report, or the pixels in the radiology imaging, we would first ask: what are the interpretations we could potentially standardize? Maybe a really useful thing for this community would be to say, let's standardize the interpretations, the easiest things first, knowing that there's a lot of hard stuff after that. That would argue, to some degree, for an approach like the one the ACMG took, and you can argue with the specifics: these are the 56 genes, or Washington's 112 genes, that we think we actually know enough about to go from a variant to an interpretation to something we could actually convert into actionability, potentially through CDS. The appealing part about thinking in terms of interpretations is, of course, that this isn't all theoretical: there are genomes being done every day, and interpretations being made every day. And other clinical communities have made a lot of progress. The radiologists have something called RadLex with which they code their radiology findings, and the pathologists have had SNOMED for years, and it's grown into other things. One could debate whether either of those, perhaps SNOMED, is or isn't enough to cover these interpretations, but it could be the lowest-hanging fruit in terms of capturing some knowledge. So, just to pursue that a bit further, in terms of what Blackford was saying about schemas: would you then be looking, thinking about the analogy with SNOMED or something of that nature, at developing schemas around the interpretation of genomic elements? Is that where you're going, or do you have a different idea? I'm not sure exactly what you mean by schema in this context. But again, drawing from the other clinical analogies, certain radiological findings do cause triggers to
happen: certain reanalyses of interpretations, middle-of-the-night x-rays reviewed again with better eyes in the morning, that kind of thing. So there are simple and complicated things one can do; whether you roll that all into a schema or not, I think you could decide.

I think these are actually two different concepts that we shouldn't conflate. One is an architecture for capturing things; in the slots, or the rooms, of that architecture could be standardized classifications, a GeneLex or ClinVar or whatever the right thing is (not my field). But the schema itself is arbitrary; it has to be extensible, of course, but then it is a mechanism with which to define and share.

If we were to get really, really concrete about what could be developed, I would think of something like extending the FHIR profiles. FHIR is rapidly becoming very mainstream for health care data standards, primarily because people can understand it; it's a very understandable mechanism. For example, we could say: let's create the profiles for representing genetic test results at the interpretation level. There may be tie-ins with the CIMI initiative, which is developing these kinds of models. You say: these are the SNOMED CT codes you're going to use to represent these genetic findings, and these are the SNOMED CT codes you're going to use to represent the actual interpretations. If we think in terms of interpretations, there's a rich history of how this has been done, and the only thing you need to do is get the subject matter experts together to agree on the top-priority interpretations and top-priority genes to create some standards on. So that when I, at my institution, want to write a rule, there's a high likelihood the lab will have reported the result in that form. Instead of trying to figure out how they might have reported a SNP, it's: oh, it's probably going to be using this LOINC code, and for the
result, I just need to use this SNOMED CT code. Which means you can create a shareable artifact that hopefully won't require that much translation when it's used locally.

So as part of ClinGen, we have knowledge working groups that are actually doing that with some gene and variant annotation (not so much the variant, but certainly gene annotation) related to some of the actionability. And so, Alex, maybe I might go back to you. If we think about places where we might develop some synergy, extending work being done in an existing project into something more generalizable, could you talk a little about how ClinGen has been thinking about moving this forward and doing some of the representation Ken is talking about?

Well, ClinGen had the luxury of starting a year ago, as opposed to some of these projects that have seven years of experience now. What happened in the last seven years is a revolution in the web, driven by commercial dot-coms, and revolutions in semantic web technologies. Three months ago a new standard emerged on the web, the Linked Data Platform standard, which is maybe comparable in impact to the HTTP standard, at least in the opinion of Berners-Lee, the inventor of the web: he thinks that Linked Data will be as important as the HTTP standard which we all use to browse the web.
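The FHIR-profile idea raised a moment ago, a genetic test result reported at the interpretation level with a LOINC code for the question and a SNOMED CT code for the answer, can be sketched roughly as follows. This is a minimal illustration of the resource shape only, not an actual profile; the LOINC code shown is one commonly associated with variant clinical significance, and the SNOMED CT code is a hypothetical placeholder.

```python
import json

def genetic_interpretation_observation(loinc_code, loinc_display,
                                       sct_code, sct_display):
    """Sketch of a FHIR-style Observation carrying a coded interpretation.

    The 'code' element says what question is being answered (LOINC);
    'valueCodeableConcept' carries the lab's asserted answer (SNOMED CT).
    """
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {
            "coding": [{"system": "http://loinc.org",
                        "code": loinc_code, "display": loinc_display}]
        },
        "valueCodeableConcept": {
            "coding": [{"system": "http://snomed.info/sct",
                        "code": sct_code, "display": sct_display}]
        },
    }

obs = genetic_interpretation_observation(
    "53037-8", "Genetic variation clinical significance",  # illustrative
    "1234567", "Pathogenic")  # hypothetical placeholder code

print(json.dumps(obs, indent=2))
```

The point of such a shared shape is the one made above: a rule author can match on a known LOINC/SNOMED CT pair instead of guessing how each lab chose to report a SNP.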
So there are a lot of new developments, and in ClinGen we tried to look at the state of the art of these technologies as of a year or two ago and ask how we can recruit them to address these key problems. We're still experimenting; that's the short answer. The longer answer is that we're still very hopeful that, because we can use these latest Linked Data and semantic web technologies, we can do the data modeling in a new way that still translates to, and is integratable with, older legacy systems. We will be able to provide flexibility and a distributed framework, and this is the key: multiple parties can work semi-independently, and as long as they agree on the basic approach and basic technologies, they're still interoperable, both at the programmatic level through APIs, using Linked Data Platform 1.0, and at the semantic level, using RDF and RDFS, technologies used for example to represent ontologies. Using these two levels of interoperability, semantic web technologies for knowledge representation and APIs for programmatic access, I think the opportunities are really enormous in terms of sharing knowledge and having distributed, loosely coordinated efforts (and the key is loosely coordinated) where the community can agree on a basic outline and still work energetically. ClinGen is for us a practical test of these technologies; it's still early, and it will take a while for us to report results, but we are very hopeful.

I just had one point. There were a couple of mentions of defining your algorithm, using the knowledge you have to build your rules or your guidelines, and then in addition having some sort of limitation on the genes or genomic loci you would apply them to. I think that's an error. I think you build your algorithm, and then you can apply that algorithm, and only those genes for which there is sufficient knowledge would make it into the actionable
category. I think setting up a specific set of genes, which was suggested, is a bad idea.

I agree with Liz that you want a consistent approach to the process, and then you can enable various groups to define their lists or whatever. At the same time, I think there is utility here: as much criticism as the ACMG guideline drew, the number of people who have been delighted to have a list has been pretty impressive, as has the number of people who have simply taken that list and done all sorts of things with it. So I do think there is a balance between developing a robust approach and standard that we can apply lots of things to, and still defining some interpretations, approaches, and knowledge that we all agree on. And this notion of whether we can agree on an interpretation rests on a number of issues we have to agree on. One is the standards for interpretation: what terms do we use? Do we call them pathogenic, or deleterious, or actionable? We have been working to come up with standards just for how you label Mendelian disease pathogenicity, but we also need them for somatic variants and complex traits, and you can't fire decision support rules if everybody is calling things differently. Then the evidence to put them into those categories is, of course, another level of complete disarray in standards that we've also been trying to build. And then there's the question of which things actually meet those categories, and those are the list approaches: which genes, which variants in those genes, etc.
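The point that you can't fire decision support rules if everybody is calling things differently can be made concrete with a small sketch. The five classification tiers below are the ACMG/AMP terms for Mendelian variants; the synonym table and the rule itself are hypothetical examples of local normalization, not a published standard.

```python
# Five-tier ACMG/AMP Mendelian classification vocabulary.
ACMG_TIERS = ("pathogenic", "likely pathogenic", "uncertain significance",
              "likely benign", "benign")

# Hypothetical mapping from terms that different labs might report.
SYNONYMS = {
    "deleterious": "pathogenic",
    "disease-causing": "pathogenic",
    "probably damaging": "likely pathogenic",
    "vus": "uncertain significance",
    "unknown significance": "uncertain significance",
    "probably benign": "likely benign",
    "polymorphism": "benign",
}

def normalize(lab_term: str) -> str:
    """Map a lab's classification term onto the shared five-tier scale."""
    term = lab_term.strip().lower()
    if term in ACMG_TIERS:
        return term
    if term in SYNONYMS:
        return SYNONYMS[term]
    raise ValueError(f"unmapped classification: {lab_term!r}")

def rule_should_fire(lab_term: str) -> bool:
    """Illustrative CDS rule that alerts only on (likely) pathogenic calls."""
    return normalize(lab_term) in ("pathogenic", "likely pathogenic")

print(rule_should_fire("Deleterious"))  # fires: synonym normalizes to pathogenic
print(rule_should_fire("VUS"))          # does not fire: uncertain significance
```

Without the agreed tier vocabulary in the middle, every site would have to re-derive the synonym table for every lab, which is exactly the disarray being described.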
To that point, if you don't have the standards for how you name things and how you interpret them, none of it works, so I'd like to combine all three. I don't think what Kevin was saying would preclude what you were saying, and I think the items you described are what we've got to do first just so this could work. In some sense, the basic Mendelian mutations are representationally a lot alike; they just have different effects, and it shouldn't be hard to create that first level. But maybe be a little more ambitious, and not just say this is the interpretation: I think we're close enough, with Jim's help, to identifying the really important variants, not the ones scattered over 45 genes or something, and getting the three audiences together.

So, coming back to what Heidi just mentioned about the need for standard terminologies and interpretations: I absolutely agree. One of the things I've found, though, and I'm sure others have noticed this as well, is that even if there are standardized phenotypic terms for how we talk about these things and exchange the knowledge at the level of a national resource like the ClinGen and ClinVar projects, there still needs to be some freedom at the local level for how that knowledge is interpreted and applied in a clinic. So we can talk about standardizing how we exchange this information back and forth, but there still needs to be a way for people to localize it once it gets into their own site.

As we're using the term interpretation here, it seems to me we're maybe developing different levels. There's the variant interpretation, which is: is it deleterious, is it benign, or, in pharmacogenomics, is it ultra-rapid; and there may be different types of variant interpretations that would need to be developed for different contexts. But then I think there's the clinical interpretation, which overlays the
variant interpretation: even if we all agree that this is deleterious, what clinical knowledge do we want to attach to it that would enable a decision support rule? So in my view, at least, it seems important to separate those two to some degree, and to realize that there are a number of groups in the variant interpretation space (the college, both CAP and ACMG, the CPIC group, and others) trying to get their hands around whether we can all agree on what we're going to call things, but much less concerted effort on the clinical interpretation issues. That's not to say we should necessarily, as a group, stay out of the variant piece, because it is essential, but are there gaps in the clinical interpretation piece, not being attended to at present, that we could address?

So I had Liz next, and then Brian. Just a general informatics point: if you have rules with which you can exclude particular genes that shouldn't be considered, build the rules into your system; don't build the list of exclusions based on those rules into your system.

Just to follow up: yes, I agree with Heidi that creating discrete types of rules for variant interpretations is really key. But there's somewhat of a chicken-and-egg conundrum. Laboratories have little incentive to report their variants in discrete ways that can be translated, using ClinGen, to these rules if those rules don't exist and there's no way to use them downstream; otherwise laboratories are just doing extra effort to populate a discrete field that's going nowhere. And I think we have to move forward (we are moving forward, discussing moving forward) on multiple fronts to create the standards for a potentially ideal state. If we create structures that fire off of LOINC codes, for
example, it's to handle the level of genomic data that will be generated. So, to echo what Heidi is saying, those standards are important.

And I think the clinical interpretations will always be local. I have never yet seen clinical decision support that does not allow a clinician to take whatever the alert is, whatever information the clinical decision support provides, and weigh it in the clinical context where it arises. So I think it will be impossible to force a clinician not to be able to form their own clinical interpretation on top of whatever the variant interpretation is.

I wanted to expand on Rob's comments about the variation in local interpretation of those clinical decisions. One of the beautiful things about allowing people a bit more flexibility to interpret results is that it really encourages them to do research: people can use the CDS system to do additional research, given the flexibility they have to interpret the results.

I just want to follow up on that, and I think we should make a distinction about the interpretation of a variant. When I call a variant pathogenic, to me that says it's capable of causing disease; it doesn't mean it has in a patient, or will in a patient, but that it is capable. The patient in front of you is a piece of evidence that informs your collective assessment of a variant and what it's capable of doing, and therefore, as a community, we can have single standardized interpretations of variants in terms of what they can do. But that is entirely separate, speaking to your point, Brian, and others', from what a physician decides to do with that standard knowledge: whether they think it causes a phenotype in the patient who has shown up in clinic, whether it ever will, and whether they feel the need to act on it. With a terminal end-stage cancer
patient with a cardiomyopathy variant, versus a 13-year-old with a cardiomyopathy variant, entirely different decision making is going on. So I think we want to separate a stage of knowledge and interpretation, which we do have to agree on and get to in a standard way, from the practice of medicine, which is making decisions off that knowledge.

I think some of the discussion about how specific this interpretation hierarchy allows us to be (I'm not disagreeing with what you said) forgets that the purpose of the rules is that one can tailor them to be smarter. With most rules, and I've written a lot of rules in my life, you hope to maybe get 50% specificity, and the rest has to be the doc; you can't assume these are robot doctors, because it's so complicated. But you can tailor a rule of the kind you're talking about by all kinds of things; you just need some grist to start with, something to hang on to, like the things you talked about.

So, I'm Josh. We've talked about a couple of different avenues and directions here, and I think it's important to think about this problem in terms of layers: standardizing the interpretation, and the language around interpretation, of the raw-level genetic data; what is actually transported at a higher level that could be stored within an EHR, acted upon by different agents, and potentially shared across EHRs; and then the level at which the CDS itself would be standardized, in terms of what it does and whether that logic is shareable. So there are at least three or four layers of standardization there, and we might want to think about breaking them apart. Certainly, in terms of representation, an obvious thing that's been said a couple of times, and probably bears repeating, is getting past a PDF as the storage device and having structured data with some sort
of standard, which could leverage LOINC but would also incorporate rsIDs and other standardized nomenclatures, probably even redundantly with things like star-allele nomenclature, even though we know there are issues with that. We don't have to exclude existing standards; we can replicate and include multiple layers of standards on top of each other.

Okay, so we're about halfway through the discussion here, and I must admit, and this is an unusual admission for me, I'm not necessarily identifying a clear path forward at this point. That may be because there isn't one, but I would look to my session moderators and others to say: are there things we've heard so far that rise to the level of things we might focus on to move this particular piece forward?

Well, I can summarize what I've heard so far. Among a number of things, we've heard that we need to enhance existing standards for querying and representing genomic knowledge, particularly ones that are starting to be adopted more broadly. There are also significant gaps in the way clinical interpretations can be coded, although this will likely need a lot of flexibility to account for local concerns and local variability. And we need several layers of standardization: the variant interpretation, then not only clinical interpretations but also a certain level of actionability. One of the things we've not touched on is the third question, the governance issues that arise in knowledge management, because that is sometimes a sticky issue in this area. One of the concerns may be who's going to make not only the standards decisions but the decisions about where the data lives and how it crosses organizational boundaries.

I can see a lot of potential for aligning around architecture, probably
aligning around terminology. As I mentioned earlier, getting consensus on how to interpret variants is a much bigger prospect, but what feels within reach is a description of best practices for the P&T committees who are grappling with this. So if there's a checklist process (I really like the star evidence rankings that Heidi described), if we develop some documentation these P&T committees can use, then there's harmonization even in the process of evaluating what to incorporate locally, even if the content varies. I think that would still be a step forward.

To build on that: something that strikes me about governance is that the folks implementing pharmacogenomics CDS have gone into it saying, this is a new area, we need the governance right up front, and it's become fairly standard to have some connection to the P&T committee, like Mark is saying. For other types of CDS it's been: oh, the vendor provides it for us, they must be right; and then you get it and figure out, oh, we need to put some governance behind this. So I think this is actually a positive point, where we have some best practices starting to come together for genomic CDS, with pharmacogenomics maybe more so than others.

I was going to make a very similar point, which is that for different types of genetic tests the interpretation occurs in different places, and that may affect how we think about this, for example what is transmitted across organizations. A number of us are involved in an Institute of Medicine action collaborative looking at different use cases around establishing clinical decision support, and one of the things we found is that in tests where you're assessing rare variants for germline disease, labs will often interpret those findings, and then there's the potential for the provider to just leverage that interpretation, whereas with a pharmacogenomic test it
seems that very often those come back as just variants, uninterpreted, and therefore, in order to target them with clinical decision support, you have to apply an interpretation.

From the simple primary care practitioner's point of view, I would hope that as we think about this knowledge representation, and the knowledge management dimension of it, we think about how to arrive at the scales that will actually help drive actions in the appropriate way. We have clinical scales for lots of different things in healthcare, in physical exam and assessment and so on, but where I think it can be befuddling for the uninitiated is how to take the interpretation and determine the appropriate action. Could we have something as simple as a 1-through-4, 1-through-3, or 1-through-5 scale, a common gradation, if you will, across interpretations? Given the variation in penetrance and expressivity, I realize it's challenging.

But again, what I heard was that there are areas where the expert community agrees on the content but that agreement isn't expressed in standards. If people agree, just codify that agreement; that's the lowest-hanging fruit. The next part might be to look at where there are existing standards for representing knowledge, and, where you agree on the content, start seeing if you can put it into those forms, because there are efforts to develop these standards, like the work using the Health eDecisions (HeD) schema. And then probably after that, identify where there is actual semantic disagreement in the community, for example over what to call deleterious, and work on those. It seems like a sound guiding principle: start with the easiest things and then move down.

Your previous question, Mark, seemed to relate to specifics, and I am focused on what standards exist or are needed. I think I understood
that there is a lot of work in variant knowledge databases, but you are rather focused on interpretation and, presumably, the CDS knowledge database. If so: I heard just the other day that Arden Syntax may have some shortcomings; there may be other standards that are less than ideal, or maybe there is an ideal one out there, and I'd be interested in what that might be. Or, from the CDS architecture standpoint, which is one of your questions: is it a service-oriented architecture? So it's only a plea for specifics.

Maybe we've all agreed that we need one of these, but I'm still not clear which audience we're talking about. Are we talking about CDS to replace a geneticist, CDS to augment a geneticist, CDS to assist someone ordering a genetic test, or something else?

I don't think those are necessarily ors. Again, from a context perspective, certainly in some of the groups that are more mature in the pharmacogenetic realm, we don't see a role for geneticists or genetic counselors; in fact, we envision a time when we won't even see physicians there, when that will be the realm of the PharmD to manage, much as they manage kinetics with some of the medications we currently utilize. So that would be one audience for that type of CDS; in other cases there might be augmentation.

I want to mention this because it could really impact whether this is actually low-hanging fruit or not. We don't hope to teach every night-call emergency medicine doc every subtlety of interpreting every chest film; we teach them to interpret the most important things immediately. For some things you could say that's low-hanging fruit, but to teach everyone the subtlety of everything is much higher-hanging fruit.

So, Atul, the comment, sorry, was that there are multiple use cases, and not every use case is the same, and we need to acknowledge that pharmacogenetics is
different than germline, which is different than somatic, which is different than prenatal testing, and everything we say here won't necessarily apply to all of these equally at the same time.

So I wanted to get back to a point that both Liz and Atul raised, and I think others have: there may be differences of opinion about knowledge, and we may get to a point where we really can't develop a single opinion about everything. So how do we think about an environment where you can show the differences? One of the decisions we made with ClinVar: in the beginning everybody wanted a clinical-grade variant database which only contained the things we understood, and then I said, well, most of it we don't understand, and that doesn't mean it's no good. So we ended up stepping back from that and saying: let's just be transparent. Everybody can say whatever they want, but everybody can see what everybody says. And that's the way it is today: you can go into variants and see lots of conflicting interpretations, and while it's sometimes eye-opening to people that we actually disagree on stuff, at the end of the day it's transparent about the fact that we don't agree, and that's okay. But there's this review status we're trying: the lowest level of review is just to provide your method for how you arrived at your conclusion and attest that you did a comprehensive review. We're not individually reviewing all those variants and assertions, but we're trying to establish transparency and provenance, which gets at this governance issue a little bit. So I think if we create an environment that gives a clinician access to all sources of knowledge, different as they may be, we will get beyond where we are today, where I go to a physician, get one answer, and have no idea whether that represents the entire standard of the community or is one of 2,000 opinions, such that if I went to 2,000
other physicians I would hear a different thing. I think that would be incredibly useful, but I feel like a lot of the time in the EHR we hold to this standard, and Dan spoke to this well in the opening keynote: if we wait to implement until we all agree on everything and it's the perfect scenario, we will never implement almost anything.

I think part of the challenge in this space is the notion of trying to standardize the knowledge itself, like how you call a variant. As an analogy: when we create standards for quality measurement or decision support, we're not creating the standard for how you treat patients with diabetes; that's not really the informatics aspect of things. It's how you express something. To take an example, there's quality measurement work that hinges on identifying which females are sexually active, and it's not that we're creating a standard saying a female shall be considered sexually active if she is taking oral contraceptives or has certain diagnoses; that would be beside the point. What we're trying to do is say: how do we express the condition that somebody is taking oral contraceptives, or has had a pregnancy test without an imaging study after it, or an Accutane prescription, that kind of thing. So perhaps separating how you represent knowledge from standardizing the knowledge itself is something we can do.

This is an old problem, this question of perfection in knowledge versus trying to get consensus, and I've heard many proposals to address it over time. I personally think that a consensus gradation of levels of confidence is a way forward. I think we can all agree there is a large number of variants that nobody has a clue what they really mean; fine, they're put into a category called nobody-has-a-clue (that's a technical term). There are other variants, for Lesch-Nyhan syndrome or
other Mendelian diseases, where we're pretty clear what's going on; okay, they can go into high-confidence variants. You don't need to think about this very long before you could entertain a number of intermediary confidence levels; whether it's a 5-point scale or a 99-point scale, I don't presume to know. I think over time we can frankly register the levels of confidence, belief, or acceptance of particular associations, and as the data become more clear, a bit of knowledge can be promoted or demoted accordingly. To say that we'll never reach perfection, we'll never reach agreement, is a bit nihilistic, because that means we'll never have any confidence in our system or shared library. Consider (this goes back to Dan's talk as well) the state of the art this week: basically every health care system, including mine, Mayo Clinic, makes its own darn decisions about what it believes and what it doesn't believe. Some write it down, and as a consequence we're basically leaving it as an exercise to the reader. To completely forego any effort at formalization, even though it's improbable everything will end up in the I-believe-this-is-absolutely-true-in-every-circumstance category (that's another technical term), would be a mistake; I do think that we can assign levels of confidence prudently.

Can I just add something here? Jamie, that's a really interesting point, and one of the things we can look at is how to test it. When you brought that up, what I'm thinking about is: I don't know if you're going to have a follow-up to this meeting, but if you do, it would be great to invite AHRQ, who now has gotten a lot of money to look at how to disseminate knowledge. It's under the context of patient-centered outcomes research, but when you're talking about data infrastructure, it can be applied generally.

Just a layer of complexity over what Chris just mentioned about this
and what Heidi was talking about as far as a confidence system goes: harkening back to our discussion this morning about the context sensitivity of some of the interpretations, these confidence measurements would necessarily have to be context-dependent too. The strength, or the believability, if you will, of an association between a given variant and a particular drug interaction might be very high, but with another drug or another phenotype it might be mediocre at best, and we'd need a way of calling that out.

From NHGRI, to that point: for ClinGen, not only are we looking at clinical validity with the gene curation, there's also the actionability working group, led by Jim Evans and Katrina Goddard, and they're being very careful, when collecting evidence on actionability, to look specifically at a particular gene-phenotype pair, for that very reason. The other thing is that they're developing a semi-quantitative metric of actionability and collecting all the evidence that falls into each of its categories. They will come up with an overall score, but they're making all the evidence available, so every institution can look at it; with access to the underlying evidence, institutions can set their own thresholds that are more context-specific.

I think one of the other things to note, and what I'm beginning to hear emerging, perhaps as a nascent crystal from this discussion, is a standardized approach to that: standardized, reproducible, something other people could use. What I'm beginning to detect is that perhaps the most fruitful area here is to think about how to represent accumulating knowledge in a standardized way, along with things like the provenance and the reliability, and how we represent those sources, so that if somebody is reaching out and saying, well, I want to do decision
support around this, and if I point to knowledge base A or B or C, even though the answers may be somewhat different, it's transparent how each answer was derived. I don't know if that's something we can use as a point of departure, whether as a research agenda or an assessment of the current state, but it seems like there's a fair amount of agreement around the table that this may be the most fruitful area in this particular domain.

Well, I see us circling here, and yes, that's a good thing to do, but we're doing it already; Jim's doing it, you guys are doing it, there are activities there. This is a meeting to deal with decision support, and we haven't yet clarified where in the decision space we are. I don't think it's to help the super-expert; I think it's to help the clinician, but maybe that's because I'm biased: I'm a clinician. And then on the question of whether we have to standardize the decision rule: I don't think we do yet; we don't know enough. But we can't do anything if we don't get some pieces, some grist for the rules to decide on, to even start. So that's back to finding some subset and starting to really do things with it. And institutions will vary: they don't have the same resources, they can't all do the same things across the country, one will have this kind of capability and another that kind, so they'll be different, and they'll have different kinds of data from different labs, and they may have to be different. But I think we've slipped away from trying to find a way to get started with decision support, though maybe I'm wrong.

I just wanted to insert one little bit of nuance. I understand where you were coming from with respect to taking the context into account, but I've also heard that argument made slightly differently: that we don't want to confuse the clinical question being asked with the assertion of the
confidence of the association of the variant with the phenotype, so that you don't say this variant has low confidence when you ask this clinical question. Let's say the patient has cardiomyopathy: is it reasonable to report that variant as causing the cardiomyopathy in that patient — versus predicting cardiomyopathy in someone who doesn't have it, versus using it as a prenatal diagnostic? Those are three different clinical questions, and the confidence of the assertion of causality doesn't change, but it's the utility of that association to answer the clinical question that changes. Does that make sense?

I was intending to mean not changing the confidence of the question, but changing the confidence of the association itself with the phenotype. Pharmacogenomics has this: when we do CPIC guidelines for the CYP enzymes, which are notorious for metabolizing just about everything under the sun, there could be variation in a given SNP in an enzyme that has a significant effect with one substrate but not another.

I understand, but I'm saying that people have said you should in fact change the confidence assessment for some variants because you're asking different questions, and I agree with you that drug A versus drug B versus drug C with this CYP variant doesn't change the answer. The association of a variant with a phenotype doesn't change, because that's based on all of the underlying available current evidence, and you don't change your assessment of that depending on which clinical question you're asking. That's my point, so thank you.

I'll take a stab at another direction. It seems like one of the things we've not necessarily touched on quite as much is what we might use to reason on the genomic result, and a little bit about how much information we might need from the EHR to actually act on that information and generate a recommendation. So one of the things is: are we really too early in the course of generating standards and representing variants to really get to that point, or do we think
that there are subsets of genomic knowledge where we can get all the way to creating a central repository of that kind of knowledge?

I would interpret that, being cognizant of the point that Clem is making, as: can we find some things that we can actually do something about? If one takes the temperature of the room based on what people are actually doing, I think the area where there is some amount of agreement in the implementation space is around pharmacogenetics. In some ways that's not surprising, because it probably is the simplest example, and we have a fair amount more knowledge in a lot of cases. In fact, in one case — not to dispute Brian, but to at least put one level of agreement out there — the FDA says you have to test for HLA-B*5701 before you prescribe. So there is at least a regulatory agency that says you must do this. Inasmuch as we can all agree that if the FDA says it must be so, then that would be one where you could actually say, okay, let's use that as the use case, and perhaps take the subset of pharmacogenomics to define a set of use cases that could then be tested along the spectrum of knowledge representation we have been discussing. And then take it one step further, which is to say: can we in fact represent that as something that people who didn't actually participate in creating the rule could point to and actually utilize in their own system, even though they didn't have to construct it locally — and just take it really from one end to the other, to some degree. I'm sorry to be talking too much; I was hoping there was something behind me you were pointing to.

So there are two things. If you look at genetic lab tests, they ask for a bunch of stuff that differs by test, which corresponds to things you might find in the medical record, so I think clearly you've got to have some of that stuff, and you can even find some examples. Secondly, at Harvard — and you might have been involved in it — there was a neat paper
that described the distribution of genetic tests, two years ago I think, in general medicine. There weren't any genome-wide studies; these were more focused — cystic fibrosis, trisomy, Huntington, some of those kinds of things, which are sort of simpler. So there's a list — there are 15 or 20 that constitute 95% of the volume — which would be something to look at. I don't have the list memorized. Do you know what I'm talking about? — Actually, I don't. — You should have been acknowledged.

At the risk of — I may have just done a non sequitur of some type and completely shut things down — is that something that everybody, I mean, is that something we can agree could be a deliverable from this: that we actually just take examples and see if we can identify a way to move them forward? I had an interesting conversation with Jacob Reider when he was here — were you standing here with me? — Yes, I was. — Jacob's admonition to us was: come up with 5 good things to do. Don't try to do 20 or 100 or 10,000; let's just get 5. If there are 5 drugs or 5 conditions or 5 things which can focus the energy and actually allow us to do some demonstrations, he would be very happy — maybe you'd be very happy.

I would say focus on the different applications of the genome. One is pharmacogenomics; another is risk assessment for prevention; another is how to assist a geneticist versus how to assist a primary care provider, because each of those nuances — both the application but also the end user — is going to provide some different insight into how the genome and the genome information are used. So focusing on different stakeholders and applications would be useful.

So, in the spirit of this conference: not what would we do to optimize genetic testing as currently done — locus-at-a-time, purpose-tested stuff — but sort of leading the duck, skating to where the puck is going to be. It's about leading, because one of the implicit premises that makes this interesting is the falling
cost of whole exome and whole genome sequencing, and I think anchoring to the clinical aspects of that kind of use case seems much more appealing than trying to build out from the current model — how would you optimize HER2/neu or something else you're testing for in the locus-at-a-time model — because most of the transformations and the reusability of the value of the data really turn on our getting millions or hundreds of thousands, or at least thousands, of low-cost observations that can then be used at some point in the future. I don't know if that makes sense, but I think what it does say is that we should at least lead toward this low-cost genomic era.

I was just going to say we could take those use cases and then base them off of genome- or exome-wide data for real patients we currently have in the system, who in many cases have a Mendelian genetic aspect but also an incidental and a pharmacogenomic aspect that may help their care.

Well, this was along the same lines: if you want to take the whole genome or whole exome as a given, then it would seem to me you might want to pick the ACMG list of incidental findings, because that's going to be the agnostic stumbling block — they presumably ordered the whole genome or whole exome not for one of those, or maybe for one of them, but what happens if there's something in the other ones? What would you report to them that would enable them to respond?

I would argue against using that gene list. If we're going to do it, let's do it for the whole genome and see what rises to the top in terms of clinical actionability. — It's an interesting debate. — I would second that. — I second that too. I mean, I think that's kind of a monotonic use of examples — kind of the same thing that you use a genome for — and I like your example better of picking a heterogeneous set of examples that have very different clinical attributes, like prenatal and carrier screening and maybe one thing from that list, but doing very
different things — pharmacogenetics, very different things — with the genome, as use cases that could be broadly generalized and then expanded in the future, but still starting with WGS. — Absolutely.

So I'm sensing potential opportunity here, because there are a number of funded projects now that are specifically looking at sequencing and return of results. There's CSER, there are the newborn sequencing projects — I'm sorry, eMERGE PGx, and then eMERGE 3, which will move more into sequencing. So there are a number of different projects moving in this direction. But the sense I have — and certainly the people participating in them can disagree with me — is that there probably is still a fair amount of struggle, or at least individuality, in terms of the approaches. I know certainly with eMERGE PGx that while we have 10 centers implementing, we're all kind of doing it our own way, and we're trying to aggregate data to say, what are the different ways — and Josh and I have been working on some of the outcomes related to that. But if a group like this, or a subset of a group like this, were to say, okay, here's a standardized way to do it that could then be exported across this heterogeneous set of return-of-results projects, then we may be able to learn something collectively across a number of different projects — knowledge that could be more broadly applicable.

Well, there's enough strength in the room that wants to do the whole genome, and I certainly don't want to oppose progress, but if you just do that, there will be nothing anybody can use for five to seven years. So I think we ought to at least tackle some of the more prosaic problems, so that we can actually be sure — because it's down in the database; I mean, it's not going to be up at the interpretation level if you're dealing with the whole genome, trying to figure out what's real and what's not real. Not that it's wrong — maybe we could propose both. I just worry that we won't get anything actionable in practice
situations in the near term if we only do that.

What I heard from Dan, though, is that he's saying we can use the existing data that's not been generated using whole exome or genome sequences, but think about it from the perspective of: if we get this information out of exome or genome sequences, what could be different about that? So we can really look to the future while still acting on what we have — I think I heard you say that.

Just the idea — at the level of what I think I heard Ken saying — that at the level of these known genes, most people order tests and they're not surprised; they're looking for the mutations that are known to have effects. At that level you could do decision support tomorrow, with a little bit of settling on some of the issues. With the whole genome, the challenge is we don't know what it all means, so the discovery of what it means is this connection of deep databases and lots of details about the underlying data — which the other problem doesn't involve. So I'm just trying to preserve that we could do both. Maybe I misunderstood.

Your genome — it's as if you had run every test in GeneTests.org, so you basically have the results available for everything that's ever been viewed as clinically interesting, and at least you have the grist to do some testable hypothesis work about the marginal value added by getting it all at once. Certainly part of it would be pharmacogenetic, and some people think that's immediately actionable, but you'd also have a lot of other things that would be very interesting research to do. — I agree, it would be good research. I just fear that people will be down in database algorithms rather than things that could be decision support at the clinical level.

I think I had Jim Cimino next — you're right — okay, and then JD.

Just kind of following on what Clem said and what Don said: first off, in terms of what Clem said, we've already got people out there — the audience who's out there — that are
doing pharmacogenomics with rules and decision support today, using stuff that's a well-trodden path, etc. So that road is very well paved, as long as the data is codified the right way, and every EMR vendor can help the client upload that kind of content. Where all this genomic stuff makes the problem more challenging is the thought of taking, as you just said, the entire sequence and saying, okay, let's flow that into the EMR.

But to back up for just a step and look at it from the standpoint of the challenge we're about to face — because I see it in at least my neck of the woods with my clients, and I don't think we're any different from the other EHR vendors — you've got clients looking at getting whole genome sequencing platforms stood up for cost reductions in their molecular labs, because they're tired of running multiple assays, multiple panels. So they're getting these genomes — they want it because it's cheaper — but they're like, okay, great: one, I've got the ethical dilemmas to deal with — what's with all this other stuff that I may or may not have looked at, how do I handle that and all the familial notifications and such; two, where do I store it; and then three, how do I make all these meaningful attachment points? Which brings us back to the whole knowledge-base index of, okay, how do I measure it, how do I tell what it is, and then — to get to Clem's point — how do I flow that through the natural path into an EHR? So that's the trajectory we're on, and it's good that we're all talking about it. But I think what people are saying is that there's some stuff we're doing today that is working, and if we put a little bit more structure around it we can take a step to the next level — as opposed to only looking five or ten years down the road, when everybody's doing whole genome sequencing, because some of that's about to start happening now.

I mean, I think the point is one that I was trying to make as well: that there are
people implementing, for the most part, pharmacogenomics, but again, the knowledge representation is all one-offs, and we've really not looked at that systematically for the most part. eMERGE is trying to do a little bit, but we didn't get to the point of saying, can we all agree to try to do it the same way? We're basically just trying to capture experience and then see if that synthesis yields any knowledge that could go across. So I think, even though it is somewhat prosaic and it's being done, there are still some things that can be learned from it that could move others forward more quickly — particularly those that don't have the content experts who could manage the knowledge representation.

And that is where I think the largest audience for this is, because if you go with the logic that the number of locations in healthcare that will need to consume this type of data will be many orders of magnitude larger than the number of places in healthcare that will be generating this data, then you have a dilemma: that type of expertise cannot be replicated at every level, at every institution. So you've got to build the bridge; you've got to build the system smart enough. As Blackford — or I think it was someone at Vandy — was telling me: look at CDS not as replacing somebody's brain but as leveling the bar, leveling the stage where everybody works, so they can focus on what's organically net new. Same kind of thing: you've got to get it to where it works out in the clinic environment, because that's where the front line of healthcare is going to be delivered. — But JD, I'm not sure who said it first — it might have been Dan — in a way, CDS should be considered as power tools for the mind, and only that. — Yeah.

I want to just really strongly endorse Dan's comment earlier of why we want to do it from a genome,
and I think one of the very exciting opportunities we have here is to think about ways to generate clinically useful answers even though no one thought to ask the question. Right? That's the potential power of genomics, and why it is a bit disruptive to the current medical paradigm of using the clinical data to select a test to apply to the patient. You actually apply something that's getting close to all potential tests, and then use the data to generate hypotheses or questions, answer those, and provide them in practice — predictive, preventive medicine, whatever you want to call it — again, even though nobody thought to order the test for that reason in the first place. And I think piloting how that could look and how it might work within a healthcare system is the really exciting, disruptive opportunity we have.

So I want to come back to something Atul said earlier, which is really getting to the outcomes part. I do agree with Blackford's notion: let's pick some scenarios — not the whole world — and some have been mentioned, and I agree the ACMG list to me isn't clinical scenarios necessarily; you need to pick those scenarios. But we all should think about what the scenarios are where we could build the end-to-end knowledge cycle, so that we could actually figure out how useful it was to have used that information — and pick scenarios where we can get all the way to that last question. Because at the end of the day there's tons of knowledge in our genomes, but I bet there's a small number of use cases where it's actually useful to use that information in terms of outcomes, economics, and the things that will end up paying for the use of this knowledge. So in picking those scenarios, let's think about ways we can get all the way to the outcomes part of it.

I think the point I've been hearing over and over again is: what are we trying to achieve — what should happen 10 years from now, or 7 years from now — versus what should we be
doing now. It does seem, at least from a standards perspective, that it's always useful when people are already doing something but doing it slightly differently, and you're just trying to make sure the people doing it slightly differently are doing it the same way. And I really like the idea of focusing on those areas, because you know there are actually people who found it useful to implement in a clinical setting. Now you're just saying: how can we do it so that the next time, we don't have to redo the whole effort — and potentially need a grant to redo the whole thing — where instead it's, oh, you thought our idea was cool? Well, here's the approach where you can do it using operational funds; just ask your IT team for 8 hours of effort and you're done.

Yeah, this isn't actually different from what a couple of people just said, but I'll say it anyway. I was just struck by the comment that we already have a knowledge management system for pharmacogenomics — it already works — but it's a targeted test. So maybe it's really about Les's example, or Heidi's example: if I had done WGS not worrying about pharmacogenomics — I was looking for something else — how would we realize that it should be fed to the pharmacogenomics expert system? Could we recognize that situation and then recast the data so it actually would go into the existing pharmacogenomics system, as opposed to changing the knowledge system? Could we recognize from the genome that we should feed existing knowledge systems, and then talk to them?

And Paul, it sounds like you're beginning to define use cases that might be turned into pilot tests eventually. Given the relevance for a lifetime and beyond, portability seems like it needs to be baked in. If it gets to the point of RFAs and that sort of thing, I'd wonder if it might be advisable to insist in the RFAs themselves on multi-institutional pilot tests so that it's all standardized, and/or you could insist
on academic/vendor solutions working together in collaboration.

One feature of existing CDS is that there's logging of decisions and responses made, especially in the alert model. That's an underutilized resource for outcomes research. So, for example, you could do an outcomes project where you evaluate the outcomes of patients where the alerts were ignored versus those where they weren't. As we look at the architectural considerations, I think that closed-loop architecture also needs to include thoughts around what we log when a CDS-supported decision is made. For example, if we log the strength-of-evidence scoring, regardless of what that is, then we can go back, and our assumption would be that there'd be better outcomes for the patients where there's a higher level of evidence. So in thinking through that closed-loop dimension, that needs to be on the table as well.

Really quick, on the concept of a pilot: coming up in December — when is it, the December 8th meeting for the action collaborative — Sandy and I will be presenting the seed, if you will, of a pilot around pharmacogenomics, where we will itemize out minimum data elements. We've already got interested parties between Cerner, Intermountain, and Partners; AAREP is probably going to help out as well. So we've got the right people in place to start this as a pilot, and there may be something that can boil out of that down the road.

Just a final comment on the genome: I think it's important to realize that it's not fundamentally any more difficult to interpret based off of a genome than off of any other sort of molecular test, assuming coverage, etc. So if we could at least use some of the use cases and go off of the genome data, then it would be more like where we think we're all going to be in 10 years, when everybody has their genome and is carrying it around.

Going back to the issue of portability of genomic data: because genomic data follows the
patient through her lifetime, with potential utility for pharmacogenomics or other decision making based on variants whose interpretation is not yet out there, the issue is how to achieve it now. One model would be one central repository owned by the patient — patient registries and so on may mediate that. Another is really not portability in terms of different EHRs accessing the same genomic data repository, but them being interoperable. And because this is a new area, it actually presents an opportunity, because standards can be defined early enough that all these EHRs have the same genomic module they talk to, the data can actually move from one module to another, and EHRs can talk to each other's genomics modules, so to speak.

Ken can go first. — That brings to mind the consumer genomics we've already seen: that is, a certain number of people, out of their own pockets, would just love to buy their sequence and have somebody do something useful with it. We really don't have a mechanism for the admixture of different input sources of data, which would have to be vetted and have certain quality measures. But we're underestimating that power source of 21st-century genomics — people may not want to wait for their doctor to order it; they'll just buy it, because it's not that expensive. And having a pathway for the archival retention and availability of that for clinical decision making seems like a pretty smart thing to do.

I'm going to make one last retort on this, and I have trouble saying it: I think this is a great idea, but I don't think it is clinical decision support, and this is titled clinical decision support. There's a big research activity, which is great — I love research — but you've got the curse of dimensionality in this one. What is it, how many base pairs — 4 billion? Talk about a curse. If you've ever heard that expression — it creates lots of problems in interpreting stuff, and how are we going to actually do something for this round if we only do that? That's why I described both,
because it's not the same: you've got all those other ones we don't know what they mean yet — and plus we argue about them, but then you can't interpret them, so you don't do anything. Well, it's a different problem, that's all I'm saying. It's a different problem.

So I think that is a perfect concept to move us into the synthesis portion, which is where we're at at the present time. I'll turn it over to Josh and Atul to see what they've extracted — and I actually have a couple of thoughts on that as well, so we'll see if we're concordant or not.

All right, well, this has been a really interesting discussion. I'm actually going to summarize a little more of what I heard at the end, which I think is exciting: people are starting to think about deliverables. First was the concept that, given the existing funded efforts and pilots around pharmacogenomics, that should potentially be an example where the knowledge would be encoded in some of these existing and emerging standards, and there might be an opportunity to achieve interoperability that has escaped earlier efforts. The second potential deliverable was a pilot that would first imagine, and then potentially actually demonstrate, how genome sequence data could be represented in the EMR — and perhaps the second part of that would be how to extract specific variants out of it, and what kind of value you can achieve clinically with that kind of information. As was pointed out, the challenge there is the very high dimensionality of the data and the fact that we know, at this point, very little about most of it. I'm going to pause there and see if Atul has things to add.

I'm just going to riff on the beginning parts. We brought in a couple of analogies to other fields — I mentioned radiology; others brought in pathology — and I still think it's a useful exercise to compare to these other fields. We're not in this alone; there are these other fields that have done more or less, and have had years or decades of head
start — and maybe they're still far behind. We did talk a lot about the software side of things, like the software data model schema. In some areas we're ready to define standards here; in others, I think we're not even sure exactly what we're calling these variants. And in some ways I agree — we talked about the real-world use cases, end-to-end solutions — but I also think there are funded projects already addressing some of these. ClinGen has some of these, with cystic fibrosis and others, but we could probably use more. I'm not sure we really talked about governance much. We did mention a couple of colleges — obviously one college that's really interested, and there will probably be others. But if we're in this mode of trying to adapt national standards into our local interpretations — compared to chemo order sets, compared to all these other things we customize — I'm not sure there are enough experts at all of these sites to even customize things, even if that were desired. I'll turn it back to you, and then we'll have a discussion.

From a sociological perspective, I had exactly the same experience Josh had: I was really struggling to say what's going to come out of this, and within the last ten minutes there were three things that popped out that just seemed obvious — and Atul and Josh picked up on the same three, so at least if inter-rater agreement is any sort of a measure, we're on to something. The ones that I came up with: first, a study of implemented genetic and genomic information to develop a standardized way to represent knowledge. We heard about the pilot opportunity with the IOM; we have the eMERGE PGx work that's being done. But the other interesting piece of this, which Dan mentioned towards the end and would also fit, is that we have lots of sources of PGx data, including 23andMe and other things like that, so we could in fact potentially explore data source, provenance, and portability in that use case as well. The second one is the whole genome
sequencing and whole exome sequencing use cases: how do we feed information to CDS at the appropriate time across a heterogeneous set of clinical questions? We have several projects in that space already — the newborn sequencing projects and eMERGE 3, to some degree — so perhaps developing some use cases that could be distributed to those groups, saying, we'd like to see if we could test these across multiple different projects and see what works and what doesn't, could be an interesting idea. And then the third thing was the point Heidi mentioned — probably a little farther down the road, although we are trying to do a bit of this in eMERGE PGx — which is looking at the end-to-end outcomes: going from knowledge to decision support to capturing what actually happens to the patient based on that. That may be something more in the realm of the exploratory — how might we frame a project to begin to look at that? I know one of the challenges, even with 11,000 genotyped patients in eMERGE PGx, is actually accumulating sufficient numbers to really be able to look at outcomes. But if every project in the sequencing space said, we're all going to look at CYP2C19 and SLCO1B1, now maybe we have 50 to 100,000, and maybe we could actually start to generate some data about how to capture the outcomes and what we actually see. So those were the three that I gleaned as well. Jeff?

I think that was a great summary. — Mark? — On this last point, I just wanted to say I think we should also consider including the IGNITE Network projects, which have three PGx projects, two genetic risk testing projects, as well as a family history project, all using CDS and all measuring outcomes. — Thank you for reminding me of that, and you're absolutely right — shame on me for not calling out the project that actually arose from the presence of the Genomic Medicine Working Group, which is actually convening this as well. In the last couple of minutes before our break — Liz? — Super quick: it's a small
dataset, but it's clinically generated whole genomes. There's the UDN that's going to generate some clinical genomes — the UDN will be a specific use case — and probably ClinSeq too, for that matter, that we've been doing. So in some sense maybe what we're talking about is — and there probably is such a thing — an inventory of projects that are actually doing this. — But — I'm sorry, Dan — I guess it goes without saying that all of those, or most of those, are happening under the aegis of NHGRI, so that would be an obvious way of aggregating — the right word or the wrong word — all those data to the larger good. I guess everybody knew that; I had to say it. — Thank you.

So, any violent disagreements, or any omissions — errors of omission — that we've made in this session? Or does everybody think that we've perhaps grasped the three? Adam?

This is neither of those, but it's just something I've been trying to push with the action collaborative: to think flexibly about the system a bit, so that we're addressing what we need today but are also able to accommodate what might come up tomorrow. One thing we've just started discussing on the roundtable is whether there's some type of component that could be added into the CDS for education, for those who might be interested — not actually having it embedded, but maybe a link that's in there so that someone could actually get that. Just something to consider while we're thinking about pilots or use cases: how that might be integrated into the EHR.

And thanks for bringing that up, because we had a little side discussion about how we talk about clinical decision support as if it's a single entity, and the reality is that there are different flavors of decision support. What we've mostly been talking about is what some of us would call active clinical decision support — lurking under the system, identifying clinical contexts, and firing rules. But the reality is that there's what some people call, perhaps pejoratively, passive
clinical decision support, which is the hallway conversation, the tome on your desk, or something linked through an infobutton or some other methodology to a clinical decision support rule — to say, wait a second, I got this alert; how can we deliver that point-of-care, just-in-time education? We haven't been explicit about that, but I don't think that was meant for the purposes of exclusion, so it's good to bring that up. Ken?

Just to follow up on your point about infobuttons: infobuttons are a way to get a little "i" button, for example next to your problem list or lab results, where you can get context-relevant educational materials. That's part of the meaningful use standards now. There's an open-source implementation called OpenInfobutton that was funded initially by the VA, fairly widely used at this point, freely available, led by my colleague Guilherme Del Fiol at the University of Utah, and he's actually been using it for pharmacogenomics information that's available operationally in some locations. So I think that's one where, for example, nobody clicks on it unless they want to, and if they do, they can find the right information related to genomics. — Yeah, I think that's something Casey is leading a study effort on in eMERGE PGx: the use of infobuttons in the implementation there.

I'm glad you mentioned Guilherme — it's interesting how implementation happens sometimes. I stumbled into Guilherme at Intermountain many, many years ago and talked about this idea of creating gene information sheets that could be tagged into the EHR, and how we could link to resources like GeneReviews and Genetics Home Reference. He said, well, I'm in charge of that; why don't we just do it, it sounds really interesting. So we did — we stood it up. I don't think anybody knew about it, but we could watch the numbers go up, and eventually it got a presentation. So again, it's dangerous sometimes when you meet people — and no one's paying attention to what any of us are doing.
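[Editor's illustration] Ken's description of context-relevant infobuttons can be made concrete with a small sketch. The HL7 context-aware knowledge retrieval (Infobutton) standard's URL-based form passes the clinical context as query parameters on a knowledge-resource URL; the endpoint below is a hypothetical placeholder, the LOINC-style code is illustrative only, and the helper function is an assumption of this sketch rather than part of OpenInfobutton itself.

```python
from urllib.parse import urlencode

def build_infobutton_url(base_url, code, code_system, display,
                         task="MLREV", recipient="PROV"):
    """Assemble a context-aware knowledge request in the style of the
    HL7 Infobutton URL-based implementation guide (a sketch, not a
    verified conformant request)."""
    params = {
        # the concept the clinician is viewing (e.g., a PGx lab result)
        "mainSearchCriteria.v.c": code,
        "mainSearchCriteria.v.cs": code_system,   # code system OID
        "mainSearchCriteria.v.dv": display,       # human-readable name
        # workflow context: MLREV = laboratory results review
        "taskContext.c.c": task,
        # intended reader of the material: provider vs. patient
        "informationRecipient": recipient,
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint and placeholder code, for illustration only.
url = build_infobutton_url(
    "https://openinfobutton.example.org/infoRequest",
    code="12345-6",                        # placeholder, not a real LOINC code
    code_system="2.16.840.1.113883.6.1",   # LOINC code system OID
    display="CYP2C19 genotype interpretation",
)
```

The point of the sketch is the one Ken makes: the EHR supplies the context (concept, task, audience) at the moment of care, and the knowledge resource decides what education to return, so nothing fires unless the clinician asks.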
With that, we're right on time, so thank you, everybody, for your contributions. We will take our full half hour and reconvene at a quarter to four for our third session.