Okay, so these are the questions this morning that I said we would be talking about today. The first one is: what's necessary to integrate data sets and evidence into the EHR for clinical use? The answers we came up with were: A, we need to pay attention to both scalability and access, because I think those were both important issues. We need to share decision support logic, if not algorithms, and make a publicly available library of these that people would be able to add to and take algorithms out of. We need the ability to draw from multiple sources, wherever they are, and integrate them. So whether it's Ensembl or ClinVar or whatever data set, we're then going to have to have the data standards required to actually integrate across those sites, and we'll see a recommendation about that in a few minutes. We heard very clearly, and I think this is really an important point, that we are not doing a very good job with the better-validated tests that we already know about. We heard a lot of discussion about the BRCA tests and a variety of others. So we should start with those. Those are ones that are pretty high on the list already, and if we can't get those in and do those in a useful way, we're really not doing our job. And then we wanted to ask the question, and I don't know, Jim, maybe you can just hit it, since it may be just a yes answer: is ClinVar an honest broker for variant information, and is that a role for ClinVar? Good, we've answered one of the questions. From that session, were there any things that were missed that people can think of? I just want to ask, we haven't really talked much about somatic mutations and the role there. Are we kind of taking that for granted? I mean, that's really where some of these processes will actually be driven first, maybe, from oncologists, and do we need a little more emphasis on that? That's a great question. So we should add, what is the role of, can somebody, Erin or Terry, can you just make a note of that?
Anything else from the first section? Sorry, yes, what a good idea. Yeah, I would add that from this clinical decision support perspective, trying to keep recommendations above the line in terms of a standard format for how to describe clinical decision support, versus the technical implementation of that. So across Cerner, Epic, GE, we each have different ways of implementing decision support. None of us are very likely to make deep changes underneath the hood. So I like the examples that Howard gave in his table that seemed pretty nicely defined for dosing recommendations. So if efforts are focused on very clear presentation, so that any organization using any EHR system can then read that guideline, that would be my strong recommendation for the approach. So I assume the idea is that this library would be sort of a library of decision support logic that could then be implemented in whatever system. That's right, that's my recommendation. Focus more on the logic rather than the implementation of the logic. Let each system focus on how to implement; focus on a very clear, precise description of the logic. I'm not sure you mean honest broker when you say honest broker down there, because that's got a very specific meaning for ethically controlled data, and there's nothing in ClinVar itself which is in that category. So the concept was to be the place where, if there were two conflicting pieces of data, I think the plan was that both of those data sets would appear. I'm just saying that the phrase honest broker has a very specific meaning. Okay, great. Alan. I was going to say, I think in terms of evaluating the evidence base to decide what's implementable, whoever the committee or the body is needs to make sure that they're calibrated with what actual clinicians believe is the required evidence base, because in the end that's where the rubber hits the road.
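As a rough illustration of "focus on the logic, not the implementation": a dosing recommendation could be expressed as declarative data that any vendor maps onto its own alerting engine. This is only a sketch; the rule structure, and the specific gene/drug/recommendation content, are invented for illustration rather than drawn from any actual guideline format.

```python
# Sketch: decision support logic expressed as data, so each EHR vendor
# (Cerner, Epic, GE, ...) supplies its own presentation engine.
# The rule content below is illustrative, not a proposed standard.

DOSING_RULES = [
    {
        "gene": "CYP2C19",
        "phenotype": "poor metabolizer",
        "drug": "clopidogrel",
        "recommendation": "Consider an alternative antiplatelet agent.",
    },
]

def match_rules(gene, phenotype, drug, rules=DOSING_RULES):
    """Return every recommendation that applies to this patient/drug pair."""
    return [
        r["recommendation"]
        for r in rules
        if r["gene"] == gene
        and r["phenotype"] == phenotype
        and r["drug"] == drug
    ]

# A vendor-specific engine would call match_rules() and decide *how* to
# surface the text: pop-up alert, order-set annotation, inbox message.
print(match_rules("CYP2C19", "poor metabolizer", "clopidogrel"))
```

The point of the split is that the shared library only ever carries the rule table and matching semantics; everything about rendering stays inside each institution's system.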
Yeah, I think that would go under one of the ones actually from yesterday's discussion, I think, but yes. Grant Wood with Intermountain Healthcare. I also work with the HL7 Clinical Genomics work group. We've developed data modeling standards and data transmission standards around sending genetic test results from the lab to the EHR. I'd like to make the suggestion that we add a bullet that this group help further pilot and test that data standard. That could be really helpful. That's all. Yeah, and I think one of the things that came up at the workshop, I'm completely forgetting when it was, but we had a workshop on electronic health records and genomics that NHGRI sponsored in April or something, or October, or anyway. I think one of the important things for me that came out of that was that I don't think the genomics community is talking to the medical informatics community, so we need to bring those two groups together. I think that's a really important point. Related to that, the comment was made that billions of dollars right now, billions, 50, 60 billion dollars, is being spent in this country to build a new information infrastructure for healthcare. It's being spent, it's out there. There are all these different initiatives, and genetics could be part of the intellectual driver to make people realize that we have to build a different system than we have now. What we're doing now is building an interstate highway system that just connects the repair shops. We're building something that just connects hospitals and doctors, okay? That is not going to work. We need a network so you can go anywhere you want to go and get the information you need. So I think you made a recommendation earlier. If the agency can go out there and say, look, I know this is probably politically difficult, but you're spending all these billions of dollars.
Can you give us just a little bit so we can try to coordinate this in a way that makes sense? Because if we go down this path of just connecting hospitals and doctors, we're not going to have what we need going forward. And there's a recommendation that addresses that very directly coming up. Yeah, yeah, yeah. Let's do back here first. Hi, Lucia Hindorff, NHGRI. So there have been a few comments made about identifying and interpreting genetic variants in the context of specific populations, such as those representative of the population at large. So I know there are a few of us that are familiar with the creation of GWAS data; for example, I work on the NHGRI GWAS Catalog. And where those populations are captured in GWAS, we definitely have those. But I think it's important to note that the follow-up literature, where there are a lot of additional studies looking further at whether these variants generalize, is not captured by us, and I think not by the scientific community at large. So we may want to note that that's a gap of existing databases. So I think probably what we should do on this slide is add a bullet for capture of broader population coverage. One of the things that I thought didn't really come out very clearly, to me at least, is in this situation where essentially there's an area of considerable debate and uncertainty that gets resolved into guidelines, it would actually be a really good idea if there were mechanisms in place for longer-term follow-up to see what the clinical outcomes for patients actually were. For example, in those found to have certain rare variants which were considered at that time not to be of clinical significance. So for example, Professor Rahman has outlined how in her framework she's able to achieve that. But I think that's something that would be quite important, and I'm not at all clear how you would achieve that in your system. Have you had any thoughts about that? I think that comes under the looping discussion.
So maybe we can revisit that in just a second. Maybe like next. Anything else on this? Any other comments on this particular one? Let's go ahead and move forward. Okay, so the question is: how do we create a dynamic loop that recognizes the anticipated rapid increase in available evidence and upgrades clinical actionability and validity recommendations? So I think this is that sort of loop that you were just referring to. So the recommendations here were to establish, if for nothing else than to give it a name, a ClinAction curation function to build upon Ensembl, ClinVar, and other relevant databases. So one of the functions of that might be to make sure that we're capturing, as we talked about yesterday, the one-off examples, and to make sure that things that are put into BIN3 or BIN2A have a way of getting upgraded to the other bins. Does that capture what you were asking? And I think that it really relates to the last bullet that you haven't gotten to, where we really talk about how we design studies and design data collection to basically define and obtain data on outcomes as things move into the clinical space. I mean, that's what we need to capture well in that last bullet. We need to maximize interactions between epidemiologists and informatics and genomics investigators to facilitate obtaining needed information on clinical validity. Again, that's probably also a related thing. A key part of that is probably to establish a training program that crosses these disciplines. That's one way to make sure that we bring people together. There was concern about data loss and privacy threats and how that hinders research. So we need to make sure that it's clear that there are strong guidelines, I mean, there actually are strong guidelines, but that they're reinforced, because that concern gets in the way of our ability to actually do the kind of work that we want to do.
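One minimal way to picture the ClinAction curation loop described above is a periodic re-evaluation pass that upgrades variants out of the lower bins whenever newly curated evidence crosses a threshold. This is only a toy sketch; the bin labels, the idea of a single numeric evidence score, and the thresholds are all placeholders, not proposed criteria.

```python
# Toy sketch of the dynamic curation loop: re-examine variants parked in
# lower bins as new evidence arrives, and upgrade any that now meet a
# higher bin's evidence threshold. Scores and thresholds are placeholders.

THRESHOLDS = [("BIN1", 8), ("BIN2A", 4)]  # minimum evidence score per bin

def upgrade(variant):
    """Assign the highest bin whose evidence threshold the variant meets."""
    for bin_name, minimum in THRESHOLDS:
        if variant["evidence_score"] >= minimum:
            variant["bin"] = bin_name
            return variant
    variant["bin"] = "BIN3"  # insufficient evidence: stays in the bottom bin
    return variant

def curation_pass(variants):
    """One cycle of the loop over the whole variant catalog."""
    return [upgrade(v) for v in variants]

catalog = [{"id": "var-001", "bin": "BIN3", "evidence_score": 9},
           {"id": "var-002", "bin": "BIN3", "evidence_score": 5},
           {"id": "var-003", "bin": "BIN3", "evidence_score": 1}]
print([v["bin"] for v in curation_pass(catalog)])  # ['BIN1', 'BIN2A', 'BIN3']
```

The real work, of course, is in how the evidence score gets produced and curated; the loop itself is the easy part, which is why the recommendations focus on the curation function rather than the mechanics.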
Patient portals seem to be an important part of what we have going forward. We need patients to be actively engaged, and we need them to actually argue for data access for research. That is one way of sort of getting past some of the research-clinical interface issues, and it will also feed back to a prior recommendation that we need to encourage NIH, NHGRI perhaps in specific, to actually stimulate these discussions with OHRP and other appropriate sites. ClinVar should incorporate what bin a variant is currently in, and I think we heard that ClinVar was planning to do that earlier. And then we need to collaborate with larger data warehouses, for example what we heard from Medco today, to conduct large-scale studies to better evaluate outcomes. And Medco is just one example. We need to, as much as we can, find out where these other sites are and engage them fully as well. And this is a great opportunity for an international collaboration. So this is on the loop. So questions related to this, Tim? So on that thing about data loss, I mean, there's a kind of opposite to that: if you don't get trapped in it, there's a huge opportunity to use these data sets and create virtual cohorts, if you can do the informatics and if you're not blocked by those kinds of things. That would just enable, make it easier to deliver, this sort of ClinAction curation. So are you saying that by having the appropriate security models actually in place, we can go to a federated approach? Finding ways to lower the transaction cost for doing a study, and then doing a virtual selection, analyzing the data to ask a specific question. If you've got to go through all these hoops, then people just don't do it, and that data's wasted. So I agree with all of those. I was just wondering, they all sound a little passive, and I don't mean that in a critical way, in the sense of being able to set up these infrastructures so that you can have the information.
But I would like to see, and I don't know whether it's for the reading group, something proactive: thinking about how we're actually going to assign bins, or decide on the in silico programs that we need, and thinking about how we're going to get the people together to do that, how we're going to fund that. I'm not sure whether that's on the next slide. So I think some of those, maybe we revisit this when we get to the, there are a couple of slides that are sort of more action-item oriented at the end. So I've got two things that I think are more than just wordsmithing. On the concern about data loss: it hinders both research and clinical care, and I think that's an important concept. And the other is, under collaborating with larger data warehouses, again, I want to emphasize broadening the concept. Really what we need is what already exists in some of Europe: national disease registries, where all the data is out there, and it doesn't matter where the patient goes, it's trackable, and better research can be done. And there can be continuity of clinical care. You don't want to add a national electronic health record to that? Physicians have unique identifiers now. We're required in the United States to have a number, a name or just a number in a database. And I've forgotten who it was that mentioned that there's a health system where every patient has a unique identifier. Scotland. So, but that changes with your, I mean, right. Rob has his own number that he's created, and I think we should just operationalize that. Could I just say, in relation to registries, we don't talk that way anymore; we talk about queries. You just don't need to create a registry to store particular data. You just run the query every hour or every day or every week. At the same time, we do have specialist registries. Yeah, no, I agree, but we were getting away from those very quickly.
But we don't have that big overarching data warehouse that you have. So, in, we can't. They have, you know, what was it, 50 million of them? And Blue Cross Blue Shield has 99 million of them. And they have data gaps, too. I mean, you know, Medco doesn't have all the data on all the patients. And so you have to recognize what data is available or not. I mean, it's a great idea; I would love to have that. I was thinking maybe the thought would be to do something like the FDA-funded Sentinel program. I don't know if you're familiar with that. But what they did there was to say, you know, fund a center, and this gets into Harvard, where Richard Platt has this. And when they're looking at rare safety events related to drugs, they send it to the Coordinating Center, who sends it out to the approved Medco-like entities, who then do the analyses based on a protocol everyone's agreed to, so it's the same data. Then they come back and they pool it, so they're able to look at questions. Something like that in genomics that would be international, with some funded entity or Coordinating Center or something that had people who agreed to be part of it, might be an interesting thought. So what we might be able to do to operationalize that is to explore existing data aggregation models that are, you know, not purposed specifically for genomics, to say which of those could potentially be adopted or adapted for use in the space that we're interested in. Helen? So, yeah. I mean, the thing that's critical to that, given that you have different providers with different IT systems, is the common data model. That's the first step that has to happen, because essentially what they're doing is just sending out generic code to run on pretty well standardized data models and then just essentially pooling the estimates from the models, rather than actually having to capture any individual-level data. And that's, you know, pretty well worked out. I mean, we do that already when we do meta-analyses.
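The Sentinel-style pattern being described, send the same code to every site, run it against a common data model, and return only aggregates to a coordinating center, can be sketched in a few lines. Everything here is illustrative: the record fields, the variant identifier, and the aggregate keys are invented, not drawn from any real common data model.

```python
# Sketch of a distributed query: the coordinating center sends one
# aggregation function to every site; sites run it locally against a
# shared data model and return only counts, never patient-level records.

def site_query(records, variant_id):
    """Runs locally at each participating site; returns aggregates only."""
    carriers = [r for r in records if variant_id in r["variants"]]
    return {
        "n_total": len(records),
        "n_carriers": len(carriers),
        "n_carriers_with_outcome": sum(1 for r in carriers if r["outcome"]),
    }

def pool(site_results):
    """The coordinating center pools the per-site aggregates."""
    keys = ["n_total", "n_carriers", "n_carriers_with_outcome"]
    return {k: sum(s[k] for s in site_results) for k in keys}

# Two toy sites holding their own (never-shared) patient-level data.
site_a = [{"variants": {"rs0001"}, "outcome": True},
          {"variants": set(), "outcome": False}]
site_b = [{"variants": {"rs0001"}, "outcome": False}]

print(pool([site_query(site_a, "rs0001"), site_query(site_b, "rs0001")]))
```

The design choice that makes this work is the one Helen names: the common data model has to come first, because the same `site_query` code must run unmodified at every site.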
We write just generic scripts, but we get everybody to agree on the data format in advance. So there's quite a lot you can do. But the first step is to have interoperability and agreement on the data models. Because it's all doable stuff. Right. And NHGRI in some ways has already taken its first step in this with eMERGE, because one of the things that eMERGE is doing is to have an individual center develop some type of algorithm to be able to extract a specific piece of clinical data, but the requirement is that it has to run in all the other centers. And so we actually have a little test laboratory that could potentially be scaled to try and answer some of these bigger questions. Other questions on the loop function, or suggestions? The next question was: what decision support and physician education will be needed in the clinics? I think the discussion on this said that we want a system, this is sort of Gail's suggestion from this morning, a system that you can send sequences to that will guide the provider to focus on relevant variants. I think we've talked a lot about the infrastructure that might lie underneath that, but actually that's a nice goal to work towards. We need to further explore provider education; I think that probably ends up being a topic for a whole other workshop instead of discussions. The clinical decision support systems need to be scalable rather than institution-specific. And we need to explore open models, both for decision support and, more broadly, patient-controlled information. Yeah, I think, you know, as we were discussing at lunch, the second bullet is a pretty wimpy bullet. I think what we really need to do, as I look at this now, is to say we need to explore innovative education models, and we need to measure their effectiveness.
So we need to basically try out new things and see what works and what doesn't, and really move away from always defaulting back to the usual suspects, which we know don't work. Other comments? All right, so David has one. You know, work with a practice for a period of time and provide feedback on their early experience with trying these new technologies. So basically, I think what you're saying is: explore the opportunities for pilot programs in the real world, to use those as learning laboratories. We have done 15 traditional CME meetings related to pharmacogenomic training, and they're very helpful to get started. But the problem is that, you know, they have an impact for a month or two and then there's a gradual attrition of focus. What I think we're trying to do now, and I think it's a good strategy, is to build in an ongoing interaction and to build it around the actual practice that the clinicians are involved with. And I think again NHGRI does have a sort of laboratory in which some of that's being done, through both eMERGE and the interactions between eMERGE and the PGRN, which are becoming increasingly robust. So that's one place where some of that could be piloted. Just to flesh out this idea of strengthening the education bullet: I think that's an area where the debate that Nazneen, if I pronounce it right, and I had is certainly testable. We can find out what the outcomes are, for a variant where you're not sure what it means, of in some cases returning an answer saying treat it as negative, versus in other scenarios coming back and saying treat it as intermediate, we're not sure what to do. So exploring different methods of reporting, exploring the education and instruction and wording of the report as it comes back, including management recommendations, and bringing that back into the research environment and looking at what happens as an outcome, I think are concrete steps, and maybe this gets into the next question as well.
Those are concrete steps that can be taken to shed more light on these debates. We're going to put your name next to that one, and you'll have ownership of that particular context. We've talked a bit over the last few days about how, as geneticists, we created a sort of preciousness about genetic information, and how that's coming back to bite us a little. And I wonder, in terms of the education, whether trying to reverse that a bit would be very helpful. Certainly when I'm talking to clinicians about using genetic information, getting it into the mainstream, some of the things I say are: well, you're doing it all the time; every time you diagnose a Marfan or a Down syndrome or whatever, it was a clinical manifestation of a genetic variant, and you don't have any problem with it. And I wonder whether some of those things we can try to readdress so that people are sort of more comfortable about it. You have a whole genetic exceptionalism problem. Okay, so now as to the list: there are two slides here that focus on what NIH, the Wellcome Trust, et cetera could think about doing. Some of these will sound familiar because they are sort of reiterations from this morning. First is to serve as a convener, in conjunction with other NIH ICs and professional standards organizations, to foster discussions on clinical validity and actionability; to ensure that variants placed in BIN2 have an identified pathway to move out of BIN2 into either BIN1 or BIN3; to create and support a resource for clinical annotation that extends Ensembl and ClinVar, and captures variants of unknown significance and one-off variants and the conditions they're associated with. So this would be the ClinAction resource we talked about earlier. Ensure that discovery of disease-gene and gene-drug associations continues through funding initiatives; that means just making sure we're continuing to do more fundamental work that gets more of those associations out there.
To target discovery research to determine clinical validity and actionability; catalyze discussion with OHRP regarding IRB guidance, that was from this morning; coordinate with AHRQ, the Office of the National Coordinator, VA, DOD, and where possible commercial EHR vendors on this EHR integration topic. Organize a workshop on data structures and data standards for clinical use of variant data, and maximize ongoing interactions among existing, and we should probably say new, databases as they come along. Consider a training program integrating genomics and informatics, and maybe even epidemiology and medical informatics. And then a policy analysis to determine, and then develop, policies needed for implementation of variants in clinical care. So then maybe I'll start with you: does that better address our passive nature, or passiveness, passivity? I think it gets at, I can't remember who said it, this sort of central issue about deciding what is clinically valid, and the complications of having sequence variation which is increasing on an exponential scale, and how we actually go in to decide what bin they're in. And all of the in silico tools were set up in a sort of general way; they were set up to not really look at whether a variant is clinically having an effect in humans; they're looking at different things. I think we're trying to use a lot of things that already exist, and existing paradigms, and I'm not clear that, if you were going to look at it fresh, those are the things that we would actually use. But none of them, I think, are going to give us what we really need. So I don't have the answers, but I really think it's an important focus for us to try and address.
Steve. I would say that the fourth bullet point to me is exactly that point: that's what I think the workshop needs to get to, the data model that is going to be exchanged or tested, and the methods and data sufficiency to actually answer that question, more than just interoperating and exchanging among databases. We do that already. I know there are words missing here, but the words are important. In the third bullet, it's not EHR integration, it's data interoperability, data liquidity. We'll never integrate EHRs, but if we can get the data. Integration of data into EHRs. Let's just, can we just say data integration or data liquidity? Because interoperability, interoperability, but please, the EHRs are only one source of information. And if we just integrate EHRs, we won't be able to do the things you want to do. If I could just comment on that specific point. So data is not enough, because it's pretty easy with current EHR vendors to get data out. It's just very, very well proven that if you require clinicians to use a separate system that uses that data, they will not use it and you won't have impact. So we have to integrate with the actual user interface for clinicians who are using EHRs. So maybe the missing words would be EHR and other health information system integration, or along those lines. But I think it's important to realize that even though the US used to not use many EHRs, because the federal government pumped billions of dollars into it, we are getting much more adoption. And it's just a reality that we're going to need to deal with. And just, this will come up again, but we don't have to get the final wording on these today. These will be massaged and sent around for people's feedback, so you'll have more opportunity.
I sort of feel we've kind of danced around this issue that Elaine has raised, and that also Professor Rahman has raised, and that somebody in the French horn section, as he's referred to, raised. Oh, there you are. You may have already beaten me to the punch. I think the trombone may have entered before. What I sort of feel that we haven't articulated, and I don't know what the answer to this is because it's not quite my area, is: what information is needed in order to create a greater sense of security about the decisions about these variants that are of uncertain significance? What are the data? What are the designs? What are the methods of inference from the data that need to be in place to improve those decisions? And I feel that's at the nub of the matter, and I'm not sure we've really discussed it, have we? Well, I think we've touched on it, but we certainly haven't articulated it into a captured bullet point. And the two things that I would say about that are: one, yes, we need to define the set of necessary elements. In other words, what are the questions that we need to have answers for to be able to think about using something, and to develop those, we're going to have to have interactions with various stakeholders to know what those questions are. The second thing, though, that we haven't represented, as I look at the list again, is also a discussion of the methodologies for how we develop evidence, and the fact that, you know, we can't just default to the same old, same old, at least in my view; we have to look at other opportunities.
And Jeff showed this slide, I thought, with different things that CMS is looking at in terms of evidence, which clearly says there are all sorts of different kinds of evidence. In particular, we need to develop real-world methodologies that can answer the questions that are going to be identified as we do this work, and that's something that we should take away. And so, does that kind of capture what it is? Yeah, I guess what I was talking about was something that's a specific subcomponent of that, which is actually defining what the ideal set of information would be, and whether it's ever knowable, and what you need to do to get some path along that line of knowability. So we're into epistemology now. Yeah, I just want to follow up on that, and also on Steve Sherry's comment, which is that what I see is ClinVar sitting there establishing a system, and essentially a bucket that's going to have well-organized data. The problem I still see is, how do you get the data in there? What's the mechanism, the policy, by which you're going to get phenotypic information from the place where most variant data is being generated, which is in clinical laboratories now? So yeah, or you can at least comment on it. So we started out talking with Donna and Jim about what ClinVar can do, could do, and they are talking to the clinical laboratories. And so Heidi has actually put in a grant now, but we've talked with them, and we've come up with many of these data elements. What are the evidences that you want to put in? And there's a big long list. And so the question now is, how do we make this manageable so that we can submit? We've talked about how it has to have clinical curation, has to have levels, so that we know this was put in from research, it was only in one family, as opposed to this is what the clinical community is calling it at this point in time.
And whenever we do that, we need to have the evidence behind why we are calling it that. And so there will be, you know, at least what I envisioned from the discussions, different levels that we could do with it. Heidi has put in a grant that hopefully will be funded, to get some money to start putting together these circles. And we're doing the muta circle model, if people have heard about that. So different laboratories who have expertise in different genes are going to put together circles of those of us that have experts in them. So for example, at ARUP we have some very good biochemical geneticists. We perform biochemical testing, we can get biochemical enzyme data, and we can take it into a research lab on occasion and try to get a functional study done. And that's kind of an interest of several of our medical directors at ARUP. We've reached out to other laboratories, Mayo Clinic, you know, Emory, other places that do the same genes, and said, would you be willing to work with us and have a conference call and, you know, come up with a consensus and discuss what these are? And so that's the expert curation. So these discussions are going on. It's not to say that we've come up with the solutions, but we have at least some ideas. Now what I'm hoping with this grant, if it can get funded: what I need is to take my clinical database, in which we do have some clinical information. You know, it's not like in-depth clinical notes, but we do know the indication for testing, we know if other tests have been done, we collect family histories, and we collect known familial mutations. So we have a fair amount of information that can be shared. But I can't just dump it from one database into another database, because this is a clinical one.
I need to, first of all, make sure that what I put in is accurate, because sometimes we may have made a mistake in the database and it doesn't really correspond with what I've reported out. So we need to go through that. We need to make sure that it's stripped of anything that could be considered PHI. And then I need to build something to take it from our database into the Excel spreadsheet, because I can't do that manually. So we've got to build some type of script so we can do that, and put a process in place that's going to be reasonable. So we're getting pretty detailed here. So is there a concept that we can extract from what you've been talking about that either should be represented in one of the existing bullets or needs to be articulated as a new bullet, some big, overall, broad concept? Well, I mean, the discussions of the data elements, of the evidence that we need, are being done. The fact that there has to be clinical curation, and the fact that there has to be a way for clinical labs to be able to input their data. I guess those are the three main points. The point is to fund Heidi's proposal. Yeah, fund Heidi's proposal. Yeah. So I think the bigger-picture part of this is that there are major obstacles to getting the phenotypic information necessary into the database. And I'm a co-PI on Heidi's grant application, so, trying to put that aside, most clinical laboratories get little or no information about the phenotype. You may order a test not because you think the patient has this disorder, but because you're trying to rule it out, in which case the phenotype of that person is going to be quite different from a person where you have a high clinical suspicion. So I think this issue of how to get the phenotypic data is a huge obstacle, and we don't have a good way to do it.
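The "build some type of script" step being described, pull records out of the lab's clinical database, strip anything that could be PHI, and keep only the shareable evidence fields for submission, might look something like this. All field names and the sample record are invented for illustration; a real submission would follow the receiving database's own template.

```python
import csv
import io

# Sketch of a lab-side export script: drop PHI, keep only the evidence
# fields a ClinVar-style submission spreadsheet needs. Field names are
# hypothetical, not an actual submission schema.

PHI_FIELDS = {"patient_name", "mrn", "date_of_birth"}
SUBMISSION_FIELDS = ["variant", "classification", "indication_for_testing",
                     "family_history_positive"]

def export_rows(db_rows):
    """Keep only the whitelisted, non-PHI fields from each database row."""
    return [{k: row[k] for k in SUBMISSION_FIELDS} for row in db_rows]

def to_csv(rows):
    """Serialize the stripped rows into a spreadsheet-ready CSV string."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=SUBMISSION_FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

db = [{"patient_name": "REDACTED", "mrn": "000", "date_of_birth": "1970-01-01",
       "variant": "chr1:g.12345A>G", "classification": "pathogenic",
       "indication_for_testing": "rule-out", "family_history_positive": True}]
print(to_csv(export_rows(db)))
```

A whitelist of permitted fields, rather than a blacklist of PHI fields, is the safer design here: anything not explicitly approved for sharing never leaves the database.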
Well, I would argue that NHGRI has made an important step forward with the eMERGE Network, because that's precisely what they've been doing: how do you get phenotypic information out of an electronic health record using a portable system that can cross multiple EHRs. So that's, I think, an important step that we need to... Yeah, but I think what we need to do is represent that as a bullet of things to do: we have to explore other avenues to get the phenotypic information. I think that's a good takeaway that we don't have represented there. So could it be something like DECIPHER for genomic variants, right? The example's already there. Yeah, ISCA is another example. So again, look at existing models. Audrey, did you want to say something? You've been very patient. Yeah, thank you. I just want to say something that I thought might be missing, something we've heard over a couple of days: the need just for more data, particularly on populations, in order to do this well. Yeah, there are a couple of bullets I think that do address that. Gail? Yeah, so I heard talk about biochemical testing, and one area where there is a national collaborative now collecting outcomes data and phenotyping is the newborn screening network. It's not going to answer a lot of the adult common variants, but as a prototype, since people are already contributing outcomes data and natural history to a central site across all the regions, that might be an area one could start with. Again, as a prototype, to start building some sense of what kind of clinical information will be out there and how to link databases where someone's already collecting some phenotype data. So basically, again, identifying existing models and seeing how we can extend and advance those. Matthew?
So one opportunity that's not really been discussed, although it's been touched on tangentially, is functional studies: the value of functional studies of individual variants in genes that are already known to be clinically valid and clinically useful. I think there's an opportunity for potentially scaled functional studies of specific genes of known utility. For example, wouldn't it be great if we knew the consequence of every missense variant you could possibly have in BRCA1? Now, there are some technologies just in their gestation that could potentially do this kind of analysis. And it could be very much focused on those genes that are already known to be clinically valid and useful. And it would be a path out of bin 2 that we haven't currently discussed. So, as the room explodes. So to extend that into a broader concept, I think what we're really saying is: explore new ways to assess novel variants to determine their impact. Functional studies would be one; there may be other things. But that's the overarching idea. Well, a lot of that gets to the fundamental biology, where there are, you know, hundreds of other grants across things like NIH which are doing precisely that on a gene-by-gene basis. Yeah, but this was very much focused on getting things out of bin 2 that we can get out of bin 2, because those studies don't have the same clinical focus. But I think it's worthwhile just having functional studies noted, because otherwise we're talking very much about informatic solutions and not experimental solutions. Howard. Another bullet to add, and Helen gets credited for reminding me of this one. You asked the question, how will we ever know that we're done? And the answer, of course, is we're never done, because another great invention is always around the corner.
And one of the things that's been discussed a few times is the need to annotate not just the data, but the tools and algorithms and methods and procedures. The things that were used to generate what we did need to be annotated as well, so that looking back, someone can rationally figure out why something that now looks stupid looked logical at the time. So all the metadata associated with the data? Tim. Part of that is around evaluation of algorithms. I mean, I don't know if people know there is something called CAGI, which is actually running a competition to see how well different informatics algorithms can do. That's happening in California in December, next week. Other things? Elaine. One of the things I would like is to have more communication between the clinical labs and the research labs that could actually take it into the functional data. When I try to reach out to them, there's really not much interest. But if we could get a clinical-research collaboration going, so that what we find in the clinical lab can go into the research lab and then back to the clinical lab, that would help. I'm talking about the clinical lab, not necessarily the clinicians. Yeah, maybe I could just make a quick comment on that. It's an important point. One of the interesting things, I think, is that we found there have been people, you know, sort of squirreled away working on their favorite gene for decades. And, you know, nobody's ever been interested in it. And when you actually find those folks, and you can't just go to any research lab, you have to sort of dig in the literature and find the person who's been working on, you know, the FOXE1 gene, which turns out to then be related to hypothyroidism. And they go nuts. It's like, oh my goodness, you know, somebody's actually interested in our gene.
But that probably will take a little more linking, because you have to figure out which lab and which investigator is really interested in that gene. So it's something to think about. Yeah, I think this is a really important function. What we're really talking about here is brokering: can NHGRI or some other entity actually provide that brokering function, where a clinical laboratory could say, gosh, we'd like somebody to do functional studies on this, and you maintain a network or some type of way to bring people together. And I think a lot of the things that we're representing as bullets could benefit from this type of brokering, because there are people out there doing it; the hard part is always getting them connected. There are a lot of PhD candidates who would love to have a list of potential projects they could use for their PhD theses; that would be a great target for this. I would echo that, but I would like to think it might be useful to have a more formalized approach. Because I think one of the problems is that even as a research lab, we've worked on a number of genes which we're no longer working on and which I no longer have funding for. But people contact me about a gene I found, you know, eight years ago, and I have to say, well, I'm sorry, I can't work on that now, because it's quite difficult to get ongoing small funding of that kind. I also think this issue of how you follow up the bin 2 variants to decide whether to reassign them is quite difficult in the clinical setting, because then your clinician has to go back to all these different people, has to counsel them, has to do all these things for something that's probably not going to be pathogenic. Whereas it's much easier to do that as research; it's probably more appropriate and potentially quicker, and it's more likely to happen.
So I wonder whether that is something that needs an infrastructure, an actual pathway. There's a variant that's been found; it's probably going to be innocent, but we'd like to do more work on it. Can we put that into some kind of formalized research track, or get quick funding for it, so that we can actually get that done? So this sounds remarkably like a valley-of-death issue. So, Eric, I might put you on the spot and just ask if you think this is something where the new Center for Advancing Translational Sciences could potentially play a role. I would be very cautious in making any promises about that new Center, because everybody sees it in a different way, and it's not that big, and it has its own agenda. If you want to see progress on this more quickly, well, first of all, the Center doesn't even exist yet, so I wouldn't count on that if you think this is high priority. I think you should be telling us this and have other institutes figure out how to make it happen. Yeah, so one bullet, maybe, could be to clarify what we mean by binning. You know, I sort of proposed one definition of it; people have used it in different ways here. From my perspective, binning isn't about whether we understand what the variant does or not. It's not a question of whether it's a known pathogenic variant or a VUS. It's a matter of: is it clinically actionable, does it have clinical utility, or does it have clinical validity but no defined clinical interventions? That's my personal definition. Obviously, everybody can take that idea and run with it and call it other things. But I think the issue of whether you move things from bin 2 to bin 1 has to do with: can we demonstrate that there's clinical outcome improvement by doing something with that information? That's a research question.
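As a minimal sketch of that working definition (the speaker's own, explicitly not a standard), the bin assignment could be expressed as:

```python
def assign_bin(clinically_valid, actionable):
    """Bin by utility, per the speaker's working definition: bin 1 =
    clinically actionable; bin 2 = clinically valid but no defined
    intervention; bin 3 = no established clinical validity. Whether the
    variant is pathogenic or a VUS is deliberately a separate question."""
    if not clinically_valid:
        return 3
    return 1 if actionable else 2
```

Under this definition, moving a variant from bin 2 to bin 1 means demonstrating an intervention that improves outcomes, a research question, not a reclassification of pathogenicity.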
The question of whether we can change a VUS into a redefined variant, a known pathogenic or known benign variant, that's to me a different question. Both important, obviously. So in thinking about that, I'm thinking about ClinVar and the information that would get aggregated, and I think there are maybe two roles it might serve. One is aggregation of the evidence data that needs to go into the adjudication process to evaluate clinical utility. The other is transmitting the messages out, which need to be defined and structured: if there is clinical utility, what is it? What's the evidence? How would it go into a decision support system? And I don't know if a workshop could cover both of those topics, but there's not a lot of structure in the messaging world right now for the clinical information. And I think we could put a fine point on some of these questions when we actually get down to the level of saying, here's the specification of how the data should be exchanged, or here's the specification of the data we want to judge, the algorithms we use, and how we're going to record it. You know, a great deal is going to follow from that. Five minutes to go. Other comments? If you have a wish, we're all ears. So I would like some help getting a Bayesian-type analysis, so I can know that if I reach a 99 percent probability I can call a variant pathogenic, versus a 95 percent. Right now it sounds like all of us are just kind of picking those numbers out of a hat. I know people have worked on statistical methods; I don't know how to get those translated into the clinical arena. The first time I met you we were talking about this, and I'm still struggling. Yeah, well, there are no obvious solutions to this. I'm reminded of Lewis Carroll, whose character Humpty Dumpty probably said it best: pathogenicity means exactly what I say it means, neither more nor less.
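A minimal sketch of the kind of Bayesian-type calculation being asked for: convert a prior probability of pathogenicity to odds, multiply in each piece of evidence as a likelihood ratio (assuming independence), and convert back. The 0.99 and 0.95 cutoffs are just the figures quoted in the discussion, not an established standard:

```python
def posterior_pathogenic(prior, likelihood_ratios):
    """Posterior probability of pathogenicity: prior -> odds, multiply in
    each (assumed independent) evidence likelihood ratio, odds -> probability."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

def classify(posterior, pathogenic_cutoff=0.99, likely_cutoff=0.95):
    # Cutoffs are the numbers quoted in the discussion, not a standard.
    if posterior >= pathogenic_cutoff:
        return "pathogenic"
    if posterior >= likely_cutoff:
        return "likely pathogenic"
    return "uncertain significance"

# e.g. a 10% prior plus two pieces of evidence (LR 20 and LR 10)
p = posterior_pathogenic(0.10, [20, 10])   # ~0.957: "likely pathogenic"
```

The practical difficulty the speaker raises is precisely that the priors and likelihood ratios feeding such a calculation are hard to estimate and standardize across laboratories.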
And so in some ways, I think we do the best we can, but we have to be able to do better. And I think one of the things you demonstrated in your talk, which relates directly to what Howard was talking about earlier, is how do we help clinicians visualize and understand what it is exactly that we mean when we say that this is unknown or uncertain, or that this is definitely yes or definitely no. Those are rich areas that haven't been well explored, and we need to do better. John. All right, well, then we can officially declare victory. I think the first skirmish has been won, maybe, but the war still remains. So the next steps for this are: we'll write up these recommendations and share them around with this group in the next few weeks. We'll probably ask for a fairly quick turnaround time on these, so I'm sure you'll all be watching your email inboxes with bated breath waiting for this set of recommendations to arrive. Everybody should know that the video from this is going to be posted online together with the slides, so if you have slide issues that you need to fix, you should make sure you get those fixed; those will go onto genome.gov at some point in the next couple of weeks. And then the ultimate goal is to develop a manuscript from these discussions that will be shared around as well. Most importantly, thank all of you for all your time over the last two days. This has been an extraordinarily intellectually stimulating meeting, and I think it's been productive. I know from some hallway conversations that, if nothing else comes out of this, we've already gotten some people together to do some fun things who might not otherwise have gotten together, so that in and of itself could be a metric of success. I want to again thank Erin and Karina for all their hard work in organizing this, and safe travels to your homes.