Great. Thank you very much. So the second one is also related to all of the things we've been talking about, and I imagine you'll see some areas of synergy and some areas of overlap, and we'd love for you to warn us of the latter and encourage the former. So this is an effort to basically unify the many efforts currently ongoing to identify potentially actionable genetic variants. I'll explain what we mean by that. We recognize that genomic studies with a variety of technologies are increasingly identifying variants that have potential implications for clinical care; whether they have real implications or not is what the previous discussion was about. These really seem to be unavoidable in genomic-scale research, certainly in genome-wide association studies. Chromosomal anomalies sort of hit you in the face. You can't miss them, and we need to come up with a way of dealing with those, which I think we've done initially, but much more is needed. Direct-to-consumer testing: patients bring this stuff in, and clinicians are then challenged. They ignore it completely, or they find ways to potentially use it. And sequencing for clinical care is, as Rick and others have pointed out, going to be generating even more variants that people don't quite know what to do with. It's also fairly clear that as we try to move toward implementation evaluation and then effective implementation, we do need some paradigm-setting examples. So when you raise this with clinicians, a lot of times their first response is, this is three billion base pairs; I can't deal with, you know, three medicines, let alone a data set of that size. And really, if we can sort of pull out a couple of examples and set some paradigms, that will help drive some of the ethical deliberations. Is CYP2C19 and clopidogrel really a good example?
Can we debate the pros and cons of that and really take some of those perhaps more straightforward examples, and nothing is straightforward once you get into it, and also develop some of the infrastructure for actually doing this in the clinic, as Jill had mentioned. There have been efforts to try to do this, or at least to recognize the need to do this sort of thing, for many years. The first call I've seen in print actually comes from my colleague, Ebony Bookman, a conference report from an NHLBI working group from, let's see, 2004, so now nearly eight years ago, which noted that when genetic results are under consideration for reporting, there should be some standard criteria and guidelines developed and followed. To our knowledge, this has largely not been done, although that report proposed some. And a list of genetic tests that meet these criteria should be reviewed to identify those appropriate to consider for reporting. NHLBI updated this report just recently with another working group that Amy was a second author on, with Rich Fabsitz. Its third recommendation is that an independent national central advisory committee be established to review evidence for genetic risk factors and offer guidance to investigators, institutions, and IRBs regarding when a result is well enough understood to justify an obligation to return results. Obligation is another one of those difficult words, and perhaps we may not be able to go quite so far as identifying those in which one is obliged, because those are all situations that depend so much on local settings, clinical settings, and other things. You heard Eric describe a number of related workshops that have led to this. The Genetics and Health Information Technology workshop that was organized by our colleague Greg Feero in April identified a need for this sort of thing. The colloquium called for it. The IOM workshop on integrating large-scale genomic information called for it as well.
NHLBI had a follow-up workshop in August of 2011 looking at integration and display of genetic results in EHRs, and it came up with some very good recommendations for how one might go about doing that. Obviously, the second genomic medicine meeting in December. And we also held a December meeting that Howard was kind enough to come to in his new home in Bethesda to address the processes, databases, and other resources that might be needed to identify clinically relevant variants, decide whether they are actionable and what that action should be, and provide them for consideration for clinical use. So not necessarily that they be used, but at least that we narrow the universe a bit of those that should be considered. There are many awkward words in this. One of them is this term actionable. Many of you have heard of the terms clinical utility and clinical validity, and a lot of times we get the question, isn't this actionability really clinical utility? And why not just stick with that term? This reminds me of the tastes great, less filling debates of the 1970s. In some ways, it may just be terminology, but it may be a little bit more than that. So the best definition I could find of clinical utility, from the gentleman who invented it, Muin Khoury, is in the EGAPP methods paper in 2009: that it's evidence of improved measurable clinical outcomes, and usefulness of a genetic test and added value to patient management decision making. So typically what most are expecting from a variant that has clinical utility is that there is a net and real benefit to a patient from its use over not using it. In general, these kinds of things then must meet a very high evidentiary bar, and in some specialties that's expected to be a full randomized clinical trial, which may be neither practical nor necessary for all variants and all decisions to be made. Perspectives differ widely on the importance of the clinical nature of the outcomes.
So in some of the sequencing programs, particularly those that Harvard is running and the Medical College of Wisconsin, and also in the undiagnosed diseases program, there's been a tremendous benefit to patients in ending the diagnostic odyssey. Even if you can't do anything about it, at least knowing that there is something that is wrong and having a name for it is something that helps tremendously. Making reproductive decisions, other things: some clinicians may not consider that to be a clinical outcome. Patients may well. And also, I should note, actionability varies by context. So some of these variants may be very important at advanced ages, in people at particular risks, with particular exposures, or with a given family history. They may be very unimportant, or it may be unclear how important they are, in a child, for instance, or in someone without those risk factors. Thresholds may need some tailoring to the cost, burden, and risk of the proposed intervention. If one considers the information just to be another piece of information a clinician may use in making difficult decisions about a patient, one can argue, and we've heard it argued, isn't more information better? It isn't always. On the other hand, if you're talking about something that's not a dramatically invasive intervention, such as: when you're on a long plane flight, be sure you get up and walk around, even if the marshals want to put you back in your seat. That may be a useful thing and something that may not require quite as much evidence as something else. Recognizing that addressing nuances like this requires very careful, considered, and really local deliberation. These are local issues. Gail Jarvik likes to quote Tip O'Neill that all politics is local; similarly, all genetics is local, and making these decisions needs to be done at the level of the institution. But that can be informed by expert consensus.
So what we mean by actionable, and we'd welcome your input on this, is evidence that's not sufficient for unequivocal clinical utility, but is sufficient to determine how some already available information could be used in a clinical context. Someone else may need to decide whether it should be used, and ultimately that will be a decision at the level of an institution or a clinician or a patient. But it may be an intermediate stage between sort of clinical validity and clinical utility. Sort of asking the question, if you had this information, would you use it? And how would you use it? It may allow considerations of the ethics, law, and policy in the whole return of results arena to really sort of be shifted a bit to the appropriate expertise. It's not clear that clinicians and institutions have the expertise to be able to make these decisions. There need to be those with expertise in that area making those decisions. If there's a way of at least separating them out a little bit, perhaps work can proceed in parallel. And just like the children's book, The Big Jump, you know, if you go in little steps you may do a little bit better than trying to tackle it all at once. And it does allow for more flexibility for clinicians and institutions to tailor the use of variant information to a given patient, a given clinical setting, and local standards of practice. So we see this as sort of a complex matrix of making decisions. The first issue that one needs to address is what is related to clinical outcomes. And we see that as genotype-phenotype data resources, which CSER is certainly going to generate, but there are others: NCBI's ClinVar, the EGAPP process, our GWAS catalog, the ISCA process, PharmGKB, a whole variety of discovery and association databases that this resource would definitely not duplicate. There's also the question of what should be done.
The return of results consortium and efforts like it would sort of land on this side of the equation: they and similar programs are doing empirical, behavioral, and social sciences research and normative research, determining what we really believe is the right thing to do here. The CSER sub-projects are addressing ethical and psychosocial issues. And so we really are trying to address here the question, what could be done? What are the options that are available and that should be evaluated? That would be more where ClinAction would sit. But all of these would need to provide information to clinicians, institutions, IRBs, payers, and, don't forget, the patients, in order to actually decide what will be done with given information. It's also very important, as we've heard previously, to ensure coordination with related efforts. There are a variety of these. Probably the oldest and best, which isn't very old, is the PGRN's Clinical Pharmacogenetics Implementation Consortium, or CPIC, which is doing this basically drug-gene pair by drug-gene pair for pharmacogenetic variants. And we would hope that we could learn from that process, model on it where appropriate, and absorb the knowledge that has been gained there into ClinAction. There's also the clinical sequencing exploratory research program, and eMERGE; the eMERGE program has already started to identify actionable variants because we're faced with them in genotyping studies. We would expect EGAPP would also have important information to provide, as would the FDA. The FDA has a website with over 100 variants listed that are related to response to various drugs. That's another place. Plus, there are other groups that are doing this, such as the Coriell Personalized Medicine Consortium, which actually has probably been doing this since before the PGRN. That group has an entire group set aside that is just looking at actionable variants. The Vanderbilt group is doing this as well.
And in fact, as we heard in talking with the various genomic medicine centers in the colloquium, every one of them is doing this kind of thing. For the most part, with the same handful of variants, looking at the same evidence, coming largely to the same conclusions about the evidence that's available, but deciding differently in terms of how and when to implement. Wouldn't it be wonderful if there were a way to actually bring those together? And recognizing that they all still exist and need to exist, can we then build on them and feed back some of the knowledge that we've gained from those efforts so that they basically don't have to duplicate what each of them is doing? So the proposal here is to support identification and dissemination of consensus information on potentially actionable genetic variants in clinical care. The goals being to identify variants with implications for clinical care, collect the evidence, and disseminate it so that people could make decisions on their own with the evidence at hand, rather than having to send a graduate student, as is currently done, to pull all the evidence that's available; develop clinical decision support systems for incorporating these variants into clinical care; build upon existing programs; and unify and hopefully reduce duplicative efforts across numerous research and clinical organizations. The scope we would anticipate would be a single awardee to collect and evaluate the clinical relevance of variants associated with clinically important traits, and obviously all of those terms would need to be defined; this is just a concept at this level. We would anticipate a multi-component approach which would include synthesis and curation of the data, consensus development and integration with ongoing efforts, and dissemination. Probably the toughest of these is going to be the second, the consensus development and integration with ongoing efforts.
It'd be important to recognize that this group would likely not be providing screening recommendations, and I don't think we would be in a position at NHGRI to support that, but rather providing the evidence on which those recommendations could be made. And NHGRI is in a relatively good position from an NIH standpoint because we are not disease-specific. So we can address these across a variety of diseases and a variety of institutes, many of whom, I can tell you, are thrilled that NHGRI is willing to try to tackle this problem and are eager to collaborate. The question also is not whether clinicians should be advised to order a particular assay, but again, what should or could be considered if a patient's results were already available? Recognizing that this is happening, or soon will happen, in every clinic; it certainly is coming and is something that we need to be prepared for. So the consensus development and integration process could work by inviting existing groups that are already doing this kind of work to join, or at least to interact with, the ClinAction resource, and developing a framework for review and evaluation. So gathering from all of these different groups that are doing this: how do you go about doing it? What evidence do you collect? How do you go about collecting it? And really kind of codifying that, not in a concretizing way, but in a way that keeps others from having to basically reinvent the wheel. Perhaps defining domains to group the variants for evaluation. So wouldn't it be really cool if we could divide some of these up? Not everybody has to look at CYP2C19; some could look at variants in one area and some in another. Then applying that review framework and reaching consensus, if you can, on variants and actions to be recommended.
And obviously, you know, choosing people to be part of this who are consensus builders rather than perhaps the opposite, but recognizing that sometimes deliberations can't be brought to a consensus. Hopefully inability to do so would be rare, but it would be important to address that and to make it clear when there was not a consensus and why not. Then obtaining input from a variety of stakeholders, payers, professional organizations, clinicians, and patients on the draft recommendations, and ensuring consistency of domain-specific recommendations with the framework. So if we set out a framework, the criteria and guidance that NHLBI called for many years ago, making sure that at least to some degree the deliberations are consistent with that and consistent across domains. One could also consider these different groups as perhaps taking a given area, so it might make perfect sense for the PGRN to continue to focus on pharmacogenetic variants. Maybe there'd be some group that just wanted to take the GI variants, for example, because they had a GI clinic that was really interested in that, because they had a champion in that area; or maybe the ophthalmologists or the rheumatologists or the orthopedists. Maybe one group would just want to take a single disease rather than an entire subspecialty, or they might want to cut the pie in different ways: just look at those that are important in Asian and Pacific Islander populations, or just those that are important in Hispanic and Latino populations, or just those that are important in the military, whatever it might be. There could conceivably be a way of dividing up this work rather than duplicating it. The dissemination and clinical decision support would seem much more straightforward after reaching the consensus, but this is a challenging area as well.
It would be important to provide supporting evidence and documentation of the consensus process, and then to develop and distribute clinical decision support rules, which would be kind of a description of what one would do in CDS without being specific software programs, as well as the tools that would actually be the specific software for adoption in the EMRs and other clinical systems. It would be great if we could find some user-friendly tools for clinicians without access to such systems, and distribute them to other health systems, especially non-U.S. systems. You may recall that Eric noted that the workshop we held on this was co-sponsored by the Wellcome Trust. They're very interested in this area as well. Their medical records are quite different from ours, as is their medical system. Anticipated funding would be $2 million in fiscal '13 as sort of a startup, and then we would anticipate $4 million per year for the following three years. We would hope that two to three domains, plus defining the framework structure, could be tackled in the first year, and then perhaps five to eight domains annually after that. We would consider a continuation if there was effective development and if the resource was increasingly used; we would reconsider that at three to five years. This would use a cooperative agreement, and again we would seek support and enthusiasm from other ICs. And again, thanks to those who participated in the workshop and the workshop planning group, particularly Rex and Mark who co-chaired it, and especially Erin Ramos, who has done much of the work on this and who would be presenting this were it not for an initiative of her own that she's been busy with, and who happens to be in the back. Yes, holding her initiative. So you have to bring Logan around and let everybody see him. So I think with that I'll be happy to take any questions.
This may be a little off topic, but as you get towards consensus around specific variants, I mean, are malpractice lawyers going to be drooling at this? I mean, if it's clinical care, you get sued for not doing it; if it's research, you get sued for doing it. Yeah. And my fear is we already, what? They'll be frothing. Frothing, okay, drooling, what else? Something will be coming out of their mouths. Yeah, that's right. Excellent point. Yeah, they do all of that. And one of the big barriers is that clinicians are saying, I don't want to have these results in the medical record, because then I have to act on them. So one of the things that's been discussed, and again, Howard and Rex and Jim, those of you who are doing this kind of work, is that we will only pull out a few of those variants. So we'll keep the results in some intermediate database, and the clinical decision support will only pull out those things that we all agree are relevant and appropriate. So the clinician may never see the 500,000 other results, but they will see the variant that everyone at the institution and elsewhere has agreed is one that's worth acting on. Would that address that concern? I mean, you also went through how this is local. You can say everything is local. I mean, we all know if you have a sodium of 150 you should do something. Right. But if you have this variant? Right. What does that filter out? No, it's a good point. And I think one of the challenges here is going to be institutions figuring out where they have flexibility to implement something, or where the evidence is so strong, and Howard can say this better than I, that they don't have a choice and they need to pursue it. I'm not sure I can say it better than you, but we already have that situation with the FDA package insert changes. And many of us are getting calls from litigation attorneys saying, hey, we have Asian patients who got carbamazepine and got Stevens-Johnson syndrome. You know, the package insert was changed three years ago.
Let's go. So that sort of thing is already out there in a way, with pharmacogenetics being, I wouldn't say a driver of litigation, but a component of it. So in some ways this might bring some clarity to that. I hope that it would remain visible to the people who wanted to see it and invisible to others, but you know what reality is like. Yes, Jim. Yeah, I think, Pearl, your point is actually a very important driver of this. I think it's a very important initiative because it cuts across the entire range of communities that are concerned with whole exome and whole genome sequencing, whether you're doing research, whether you're doing clinical work, whether you're concerned with the legal aspects. There has to be some kind of central guidance about the things that should be considered, and I think you're right to bring up the term obliged, like you did on one of your slides. It's not that this, I'm sure, would be promulgated as here's the final end-all and be-all list that everybody's obligated to follow, but it will inform the responsibilities that everybody doing this has, and it'll make their lives easier if it's done right. Because all the local groups can use that as a starting point. Great, Rex and then Mike. Yeah, and I think you might just think of this in the same way you think of how clinical standards recommendations get made all along.
They're typically made by domain experts. The domain experts publish them, and then what happens is at each local site there's a clinical implementation or quality improvement or P&T group that reviews them and decides whether or not they actually want to include them in their clinical practice guidelines at that local site. So ultimately it's been through a variety of levels of expert clinical decision making, and I think the key for us to think about is that this is simply, and I'm understating what Terry's laid out for you, sort of a database in which that kind of information can be held and disseminated, and then local places can filter it however they want to. Exactly. Mike. So I have two very naive questions, because this is sort of far from what I do. One is, how many actionable variants do we think there are right now? And two, will whatever group this is be viewed in the field as having sufficient standing that all these different groups who are doing this will actually pay attention to what this group says? Yeah, good points. On the first, there are debates, and I've heard this debated, and Howard, you probably have a larger number than I do. I mean, somewhere less than a dozen is what I have heard, but would you care to opine? Well, for the genetic stuff, about a dozen for the things that you would act on right now. At the public health level we have a little bit larger number, because there you're choosing amongst a menu of available drugs for a population, so it's a little bit different question. But then you would include some disease aspects, and so the numbers can really go up quite dramatically if you include some of the inherited metabolic diseases and other things like that where there's significant data. Yeah, I mean, I can give you a precise number.
So in going through OMIM and looking at every gene, our group came up with 161 candidates where there seemed to be reasonable evidence that knowledge of this would trigger specific recommendations. Now, 161 genes. Okay, now you're right. And this is gene-based, and that's an important point. The reason to start with genes is that genes are finite whereas variants are infinite, right? Now, many of those collapse into the same condition. So for example, there are many of those that predispose to, say, aneurysms, where there are clear recommendations that you should get echoes, et cetera. So by coming up with a number I'm not by any means trying to say, oh, you know, end of story, so you don't need this, right? I do that a bit tongue in cheek. The point is that it's a manageable number, and then you can debate. So we had a long conference call with the University of Washington people, because I'm on their committee for deciding these kinds of things. And there was a lot of consensus, a lot of agreement for most of those, and then we farmed out to various people on that call: well, let's look more closely at something like MODY, maturity-onset diabetes of the young. Is that really something that you'd be kind of obligated to report or not? And we'll have to sort through those things. But it's a tractable number of genes. I wonder, I think this is a super important area, but given the number of groups that are currently working on this in various contexts, I wonder whether this should be an RFA for a new initiative, or whether it should be conceptualized more as a coordination kind of effort: get the people in the room who are already working on this to kind of hash it out and say, well, we've done this and these are our methods and this is what we found; and well, we've done this and these were our methods; and then have a conversation about it. Because otherwise it seems like you're just going to have one more of many. Oh, absolutely, absolutely.
Yeah, so that's why the goal is really to gather those efforts together. I mean, that's the anticipation, that this group would basically be the convener and bring them together. And that's not going to be an easy thing to do, and Mike's point is a very good one: why would this group be viewed as being the body that decides on these? And I think what we would want to do, and again, we may be getting a little bit into the details of what applicants would propose in their applications, so I don't want to go too far down that path, but it may make sense to contact the organizations that are already doing this and ask what it would take for them to be happy that this would be a consensus, or a group that they could follow, or that sort of thing. But I'm afraid I can't give you specifics on how any particular applicant would choose to address that. I agree completely, though, that what this is is a convening function. It's not just another one of these. But it just seems like an odd fit. Yes, this is really, really important. I'd love to see this happen. I just don't see how it works as an RFA. I'm just repeating what the last two questioners brought up. This is a very important initiative, but is it something where you need to ask for research applications rather than somehow facilitating it? It's pretty obvious what needs to happen: we need to get people together to make decisions on the 161 or whatever the number is. This would be a resource application, and we fund a number of resources. So I think from that point of view, the research aspects are lesser than they would be in a regular research grant. On the other hand, without this kind of glue, and this is not a huge amount of glue that we're providing, our efforts are really stymied. And one of the things that we'd really like to do in genomic medicine is find what the obstacles are and deal with them.
And this seems to be a way of doing that. Yes? I guess I don't want to just repeat the concerns, but I have the same one. It's a gnarly set of issues that we all think are really important. My concern is that if you call for a single-grantee proposal on a really gnarly set of issues, no one group will actually be able to achieve the consensus that you want, because the unfunded groups will still think they have a better answer than the funded group. And I'm not sure, for this set of issues, that a single awardee in a rapidly changing area, where there's a diversity of opinions and the issues are viewed by everybody as incredibly important, is going to achieve the goals that you want to achieve. Yeah. No, and I agree, that's an important consideration. We had a similar sort of situation when we were trying to define phenotypic measures for genome-wide association studies and other genetic studies. And the same kinds of issues came up in terms of, well, who are we to tell people what the phenotypes should be? And the PhenX project, which you've heard about, has actually been remarkably successful in identifying those who are most committed to a given domain, a given area, who have phenotypic measures that need to be considered, and really bringing them into the conversation. And I think we would expect an applicant to do the same sort of thing here. Whether this is exactly the model, again, this is a concept, and we would rely on applicants to propose their approach. And they may choose a different approach from the one that I've described or that you've described. I think if we don't get into this area, we really are going to have difficulty moving any of it forward. And if you have suggestions, again at the concept level rather than at the application level, as to what else might be considered, that would be very helpful to have. Yes. I think people are trying to make suggestions at the concept level and not the application level.
And they're suggesting that this concept will draw the wrong kind of application. I guess, I mean, I think these points are really good. But to me, the strength of this is that it is couched very much in terms of trying to bring in the various efforts that are now working on this, get their input, and make them part of the process. Without this, what I worry about is that you've got the University of Washington, the University of North Carolina, you've got CSER, you've got these other things that are all going to come up with similar overlapping lists using similar but not quite the same criteria, and the field will be left with insufficient guidance. Whereas if this is done right, one could imagine: all right, here's a reasonable template that took into account many different approaches, and here's a list; now go to it and apply it locally, et cetera. I guess that would be my defense of this kind of idea, although your points are, I think, really well taken. Yes, Howard. So there's a need for this. What I can't think of at the moment is who else would do it. Even though there are a bunch of warts on this thing, or whatever, or it's gnarly; I always use gnarly in the context of surfing, but I guess it does have other meanings. Warts, huh? Right, so it does have those, no doubt, but somehow we need to come up with some approach to going forward. So in the absence of another, better body to do this, we need to do it, I think, is the way I look at it. If we had a Ministry of Health in this country, then they would have done it. We don't have one; we're stuck. So if the NIH can't do it as a whole, then NHGRI can do it on behalf of the NIH and go forward. So I guess I look at it as: there is a need, and if no one else is going to step up, then why not NHGRI? Pearl? Just for information, are any other NIH institutes doing this for their own diseases?
No, and NHLBI is very interested in seeing this be done, and as I said, they've had a couple of workshops in this area. NCI is also very interested, and NIGMS is obviously doing it in the PGRN; they actually are paying for a fair amount of the effort that's producing this evidence. But I don't think we want five or six different groups doing this, yeah. Well, just to clarify, I'm not saying that NHGRI shouldn't do it, and I think we all agree it needs to be done. Some of us are having trouble seeing how funding one group to do this is going to accomplish the goals, and as we've just heard, it probably isn't, you know. So I would suggest the way to think about it is that this is the recorder. This is not necessarily the decider. They're going to take the evidence from wherever the evidence can come from, and I think to the extent that this RFA is a place where the evidence gets recorded, whatever is available, it will be most likely to be successful, so that anybody else can go to that resource and see what variants are out there, what genes have been identified as important in producing disease, or in producing a successful therapeutic outcome from a drug, or an adverse event for that matter. To the extent that this is simply a repository of that information, and people can see what the data are. Well, yeah, that pushes some buttons, but I think it should be thought of that way, as a repository rather than a standards organization, and that may help you. But yeah, sure, that would be really, really great, and it brings together many efforts that are ongoing. I was hearing, though, that whoever gets funded is going to come up with the list of actionable variants, and I just can't see how that latter thing is going to happen, but it really needs to happen. You have to push it; how should it happen otherwise? I mean, I don't know, like a conference or something.
It's glue, right? You need glue money. And funding one group out of five or six who want it is not glue; it's anointing one provisionally. And I don't know — a conference where you can't leave until you've decided. I mean, I strongly support the idea of a database with supporting evidence for variants that's malleable over time, that changes over time, that somebody keeps up. I think that's a really, really good concept for this. I agree about the standards — I mean, I don't actually think it's appropriate for NIH to set clinical standards. I think it's the role of professional societies to put forth what the clinical standards ought to be. And so as to coming up with a definitive list, I would echo Pearl's concerns: that sets a standard of care, and that has legal implications. From a concept perspective, I would shy away from that, more toward a database of what we think of over time as clinically actionable variants and the evidence to support that, as it builds or doesn't build over time. So would you both be more comfortable with the idea that this really is a database, a data resource, rather than clinical guidelines, clinical recommendations? This is the universe of what you might do, but it's not Jim's list of here's what we're going to implement at UNC. Is that fair? Yeah. To me, one thing that sounds trivial but is, I think, a really important point: it's critical not to get buried in the issue of variants first, right? Good point. What you first have to address is: what are the genes in which, if you have a deleterious mutation — forgetting for a moment how you define deleterious — there's a general consensus that something is advisable that you do, right? That has to be the first question, and I would not conflate this with the issue of variants, because that is a truly intractable problem at present, whereas coming up with gene lists is tractable.
And I like the idea of this being a recording kind of thing, where this effort could say: here are the various lists that have been arrived at, using these kinds of criteria, et cetera, and here's the overlap, right? Yeah, or the processes by which you decide on variants, right? So even the genes part, Jim — it seems like there's some room for subjectivity, or people choosing their favorites. Yeah. And so what do you really mean by actionable? Right. So what I've noticed, just informally, anecdotally, when I talk about this on a whole variety of genes, is that there's tremendous agreement. I've never yet run into somebody who doesn't feel that a Lynch syndrome associated gene should be on this list, okay? On the other hand, there are always a handful of rather predictable ones that there's some debate about. You're always going to have this debate, and to me a similar effort — but one that has traction, that has actually been applied — is newborn screening, right? Not everybody agrees on whether Krabbe disease should be a candidate for newborn screening, but for most disorders, most people can agree on a core set. And then states are free to say, well, we think we should do Krabbe, whatever. I think this is a very similar kind of thing. And I'm very cognizant of the problems with this, right? But I feel, kind of like Howard said, somebody's got to do it, right? Somebody's got to step up and say: this is the list that there seems to be some agreement on, and here are the ones that seem to be close calls. Or even: here is the framework. We don't even have a framework. The framework's critical, right, right. So I agree with you. It reminds me a little of the druggable targets that Pharma has been talking about for 20 years, and that's limiting, I think, or probably has been limiting.
So one of the questions is — I think if you do anything here that works, it's good. It doesn't have to cover every single actionable gene. But how do you know that a loss of function, or a down-regulation, or something of a gene matters? Usually you know that from a mouse model or something, but I guess you also know it already from human beings. And again, this gets into the variants. Actually, I think that's fairly straightforward, because what one has to remember is that we're talking here about really incidental results. We're talking about results that bubble up when there's a low a priori risk that that individual actually has the disease. And therefore you set a very high bar for what kind of variant you're going to call, because what you have to avoid at all costs in a situation like this is an infinite number of false positives, right? You need to minimize false positives. So you set a real high bar, and that's appropriate, because the a priori probability that somebody has Lynch syndrome is low — you haven't selected this person for family history or anything — therefore you say: only frameshift mutations, or mutations that have been reported and confirmed to be deleterious. Is that really only for rare disease, or is it not? No, no, this is actually for... On the list that you've got to decide — and these are ones that people argue about a little — would be something like hemochromatosis, factor V Leiden, right? You've got to make these calls, and then, you know, maybe in this RFA you could propose that people try to generate evidence that would address some of the more contentious ones. Well, this may not be big enough to be able to do that. And being a resource, it's likely that that group wouldn't do that sort of thing. I think what would be tremendously helpful would be for them to say: we really need more evidence on X, Y, and Z. I mean, the gap is...
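The argument for a high bar on incidental findings is essentially Bayes' rule: when the prior probability of disease is low, even a quite specific variant call yields mostly false positives. A minimal sketch — the sensitivity, specificity, and prior numbers below are illustrative assumptions, not figures from the discussion:

```python
def ppv(sensitivity, specificity, prior):
    """Positive predictive value of a variant call, via Bayes' rule."""
    true_pos = sensitivity * prior
    false_pos = (1 - specificity) * (1 - prior)
    return true_pos / (true_pos + false_pos)

# Unselected patient (no family history): low prior for, say, Lynch syndrome.
prior = 0.001

# Even a seemingly good pipeline drowns in false positives at this prior...
loose = ppv(0.99, 0.99, prior)     # roughly 0.09: ~10 false calls per true one

# ...so you raise the bar (frameshift or confirmed-deleterious only),
# trading sensitivity for much higher specificity.
strict = ppv(0.80, 0.9999, prior)  # roughly 0.89
```

The point the speaker makes falls out directly: the acceptable reporting threshold depends on the prior, which is what makes incidental findings different from testing a patient selected for family history.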
Here's the subset that's contentious; the community should go at it and try to figure out what's best. And we can help facilitate that if we're at the table. Pearl? Pearl. One thing, as I was listening: we've talked a lot about going to the right side of the NHGRI diagram. What we're finding is clinicians, IRBs — they want to know, what do I do? So while I think a repository of the data is very helpful to many of the people sitting around this table, I think the hue and cry is more: yeah, that's nice, but how do I read those 58 articles to come out to what do I do? So whether that is a separate activity or not, I fear that just having a repository is going to push us further to the left. And actually, if you'll notice, in the objectives it's also to identify the actions that could be taken. So it's not just here are the variants, but what are the actions that could be taken? But the should-be-taken part probably can't reside in a data resource; it probably needs to be decided at the level of an institution. But I think the could-be-taken part is a step away from this repository. Yes, well, and that's — I think we are hoping that they'll do the could-be stuff. So, yeah. David, last word. I think that interface issue is a little bit of my discomfort with calling for this at this time, because really, if you think about the slides that you presented, it's a combination of pulling together information on what is actionable. But, you know, there was also a description of trying to come up with the interface that is going to present that information to user groups. And I just think this is a very fluid, fast-moving area, and I'm still not convinced that a single group, at a rapidly moving time, is going to be the best mechanism to put together one database and the interface that presents that information to the very disparate group of clinical applications and users that will probably take advantage of this kind of information over the next five or ten years.
And while I agree that something is usually better than nothing, I've also seen, for many organism databases and other things, that there's a tendency for whatever gets set up to become the place where other things get added on later, and decisions made at an early stage get propagated. And you might not make the same decision if there were a wider base of options presented at the stage where the initial structures, interfaces, and mechanisms of handling it were being considered. So would you be more comfortable, then — it sounds like you would — with something that awarded to three or four awardees that would work collaboratively? Would that address your concern? Yes, it would. I realize there are scale issues, and I also realize what you're trying to achieve is consensus, but I just think that in this area a single-group approach, in a rapidly moving field, is unlikely to be successful. Great, that's very, very helpful, because we're struggling with this as well: how do you anoint one group that's going to be the lead and take this over? So we could recast this in terms of a relatively small number of awards. We'd probably need to increase the budget some. I don't know; we can look at whether we could do it within this budget. If not, we might need to bring it back to you. Does that mean we're in a position to take a vote? Well, I think what we would propose, then, would be a small number of awards, three to five, to do the same work. But there would be a change in the budget. Well, that's — I guess that's a question. Would you be supportive? But we need to look at what we can afford, basically. So, Eric, I don't know how you'd like to proceed. Well, the option is to wait and not do anything. So I'm hearing conflicting things too: that this is an imperative, that we need to get going, even if imperfect, with one group; but if we wait to figure out the perfect approach, then... But the reality is it is being done, right?
So I don't think waiting to try to hone this would undermine the fact that it's being done. The places that are doing whole exome or genome sequencing are coming up with their lists. They have to. But the people that are in those places — I mean, I hear just lots of confusion. You've got to help on this. I've heard this from lots of people: you've got to help on this, because it's the Wild Wild West out there. And it's something that NHGRI can do in a leadership position, to try to focus attention and build some consensus. This was our attempt to do that. And we can bicker about whether it's realistic to be one group or whether you need two or three, but I'm a little worried about complete inaction. But that's where the challenge is: the people at this table have a backup plan — I mean, we're doing it at our institutions — but it's not the people around the table that are calling you. All I was going to say is that it seems that part of the award has to go toward developing a set of standards — toward figuring out what are going to be the entrance criteria for the database. Similar to the GWAS database: you had to reach a certain genome-wide p-value; it had to have been replicated across populations. And in many ways we're talking about effect sizes that are bigger than what's in the GWAS catalogs, right? So it's a question of agreeing on a set of standards that are going to be the community standards for what goes into the database. As with every database, it's not going to be perfect, but at least if the criteria are transparent, then people know what goes into it. I do agree with the view, though, that you shouldn't make a single award. I don't think a single award is the way to go. I think you're going to end up having a lot of both companies and universities vie for that, with a lot of potentially good ideas, and funding a couple of those to work collaboratively might be the best approach.
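The GWAS-catalog analogy amounts to publishing transparent, mechanical entry criteria and admitting only records that clear them. A sketch of that idea — the 5e-8 cutoff is the conventional genome-wide significance threshold the speaker alludes to, but the record fields, the replication rule, and the example gene are hypothetical illustrations, not a proposed standard:

```python
GENOME_WIDE_P = 5e-8  # conventional genome-wide significance threshold

def meets_entry_criteria(association):
    """Admit a record only if it clears the published, transparent rules."""
    return (
        association["p_value"] < GENOME_WIDE_P
        # "replicated across populations": require at least two cohorts
        and association["replicated_populations"] >= 2
    )

candidate = {"gene": "GENE_X", "p_value": 3e-9, "replicated_populations": 3}
weak = {"gene": "GENE_Y", "p_value": 1e-4, "replicated_populations": 3}
print(meets_entry_criteria(candidate))  # True: passes both rules
print(meets_entry_criteria(weak))       # False: not genome-wide significant
```

The design point is the one made in the discussion: the specific thresholds will never satisfy everyone, but if the filter itself is public, anyone can see exactly why an entry is or isn't in the resource.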
Perhaps, if you're thinking it's so important to move forward now — could you get near the microphone? — have one group move forward as the recorder, getting everything together, and then wait until the next Council meeting to figure out how we go about the decision-making part of it. Just as a point of information — because, Carlos, I think the GWAS database is a good example; I've got it open right now, I use it all the time. It's a repository of information with a lot of results coming in on almost a daily or weekly basis. What was the process at NHGRI that led to the establishment of what currently exists on the web as the GWAS database? And my guess is it was not two to four million dollars a year to one group over four or five years to come up with a mechanism to pull the data together. The GWAS catalog is actually an internal product of NHGRI. It's led by my group, and it started as a table in a publication in the JCI. We basically said, gee, it would be nice to expand this. We are not in a position to be able to do that for this area. Plus, that really wasn't controversial; that didn't really need a decision — what we were doing was gathering information. People reported p-values and we stuck them in a table. That's essentially how it worked. Now, the way it became the lead is that we stuck with it, and others look to it now and say, gee, this is really valuable; you keep it updated. I have two staff members, Lucie and Heather, who spend nearly all their time focusing entirely on this. Okay, so hats off to that. But also, I think the fact that it came from NHGRI had something to do with the uptake, right? Which is one of the reasons we really want to move on this: if we're doing it, we will get uptake — if we're funding it and we're coordinating it and we're overseeing it. But it seems like the problem is that — I mean, I agree with David about how stuff gets propagated.
Once you put it in — even if you have lots of caveats that this is dynamic, it's going to change; we're going to learn that we shouldn't have put this one on, and we're certainly going to learn ones to add, right? So whatever you do, I think you have to scream that from the treetops, because people need it. I just want to put my two cents in. I mean, I think that obviously people really want to do this, and I think it's vital. I don't think that delaying for, what is it, four or five months is going to make that big of a difference, and more thought should be put into this. I mean, this is extremely contentious. And even if it's funded by NHGRI, a single RFA really is going to be identified with the institution that creates it. And no matter how good they are, I'm not sure that they're going to be able to draw on the diversity of opinions and contexts that they're going to need in order to create something that is really going to be a useful product. And if delaying four or five months then pays off in a much better product, I think it's a risk worth taking. I was going to ask you a question about the call to arms and what people are looking for, because it seems to me that as long as we're not couching this as creating a definitive list that substitutes for clinical judgment about what to look for, or scientific judgment about what to look for, but instead sort of prioritizes things to look for and then provides the clinical decision support — which I think is the key; that's what I've heard people say we really need — that piece seems a little bit less contentious to me, in terms of whether one group does it or multiple groups do it, or whether we do it, you know what I mean? So is it clinical decision support, or is it the definitive list that you're hearing people ask for? Both.
So people who — I mean, my understanding of it, and Jim may be able to answer this better than me — but my understanding is that people who don't have the necessary expertise say: we found this; now what do I do with it, and how do I go about thinking about it? A decision tree. To me, the utility of a list is that it gives guidance to everybody who's generating genomic data about — and I'll use the term that Terry used, with some hesitation — what is it that I'm obligated to look for? We just did whole-genome sequencing on these people; what are the genes that I need to look in and report something on if it meets a certain bar? That is the single question that I think people are asking, and why they're clamoring. Yeah, just in terms of clinical decision support: that's a way, usually through the electronic medical record, to feed information to a clinician when they need it, and not before. So you wouldn't have to know, for instance, about a CYP2C19 variant — just to go back to that one — unless you're going to prescribe a drug that that gene is involved in metabolizing and whose effect it alters. So that's providing a rule, sorry. And I do think we need to wind down, or we're going to get... So there are NIH consensus documents that come out on a periodic basis — maybe not as dynamic as Terry had in mind, but certainly, you know, NCI puts them out on a regular basis, mainly for screening. So would you — I mean, another option would be for you to have an RFA for groups that would help feed the engine, but have it actually come out from NHGRI. And that may come back to David's point, which was that it was the fact that it was on an NHGRI website that gave it extra credibility, not the fact that you funded it.
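The clinical-decision-support pattern described here — surface a pharmacogenomic result only at the moment the relevant drug is ordered, never before — can be sketched as a simple order-time rule check. The clopidogrel/CYP2C19 pairing comes from the discussion itself; the data shapes, rule table, and alert text are hypothetical:

```python
# Map each drug to the gene whose variants should trigger an order-time alert.
# (Illustrative entry only; a real system would draw on curated guidelines.)
PGX_RULES = {
    "clopidogrel": ("CYP2C19",
                    "Reduced-function CYP2C19 variant on file; "
                    "review antiplatelet choice."),
}

def alerts_for_order(drug, patient_variant_genes):
    """Fire an alert only when the ordered drug has a rule AND the patient
    carries a flagged variant in the relevant gene -- never before."""
    rule = PGX_RULES.get(drug)
    if rule and rule[0] in patient_variant_genes:
        return [rule[1]]
    return []  # no rule, or no relevant variant: stay silent

# The CYP2C19 result stays invisible until clopidogrel is actually prescribed.
print(alerts_for_order("aspirin", {"CYP2C19"}))      # []
print(alerts_for_order("clopidogrel", {"CYP2C19"}))  # one alert
```

This is the sense in which the list and the decision support are separable: the shared resource supplies the gene-drug entries, while each institution decides how and when its EHR fires the rule.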
Carlos: is there a position from the American College of Medical Genetics and Genomics on these issues, and if not, could NHGRI and that body — and maybe ASHG — put together a position paper? Once that position paper has been defined, that would set the guidelines for this database. So Mike Watson was here this morning, the executive director of ACMG. There's an effort right now that some of us are involved with, at the board of ACMG, to begin to come up with this. I really like the idea, which I think Howard alluded to and you're alluding to, Carlos, that having the imprimatur of NHGRI or ACMG or ASHG would be a very useful thing — again, not in a binding way that says here's the absolute list, but here's a list that's been formed by some type of consensus. And that would be great, because it would get away from this admittedly significant problem of having one institution that's now the go-to place — people aren't going to like that. Aaron, did you want to make a comment? Oh, no. I was just going to say, on the comments about the interface: if we put this list together — and thinking back to how the GWAS catalog started as a table — you could envision the interface this group develops as that expanded table. But we've talked with groups about ensuring, like Carlos said, standard formatting, so that the more sophisticated institutions can take it through web services and their own clinical decision support and develop what works best for their institution. So we could package the information in a way that works for a broad number of people, while at the same time allowing a clean, simple interface for those that don't have a sophisticated EHR system or the informatics groups to build systems — they could at least come here and use the information as it's displayed. Okay, let's see if we can build a consensus here. Is there a consensus that it would be a mistake to go forward with funding one group?
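The "standard formatting" point implies one machine-readable record per list entry: sophisticated sites pull it through web services into their own decision support, while simpler sites just read the rendered table. A hypothetical sketch of what one such record might look like — every field name here is invented for illustration, and the Lynch syndrome gene is just the example named repeatedly in the discussion:

```python
import json

# One hypothetical entry in a shared actionable-gene table.
entry = {
    "gene": "MLH1",                    # a Lynch syndrome gene
    "condition": "Lynch syndrome",
    "variant_classes_reported": [      # the "high bar" for incidental results
        "frameshift",
        "confirmed deleterious",
    ],
    "actions_that_could_be_taken": [   # "could", not "should": local decision
        "confirmatory testing",
        "referral for cancer surveillance",
    ],
    "consensus_status": "broad agreement",  # vs. "close call"
}

# Serialized form a web service would return; a browser view renders the same data.
print(json.dumps(entry, indent=2))
```

Keeping the could-be-taken actions and the consensus status as explicit fields is what lets the resource stay a repository rather than a standard of care: each institution consumes the record and makes the should-be decision locally.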
Is there a consensus that you want to see multiple groups — multiple awards — come out of this, or at least the possibility of multiple awards? Yeah, but isn't that tied in with the concept of either multiple awards or some body like ACMG or NHGRI, right, Seth? Well, I'm trying to decide whether we're going to vote — no, I'm trying to decide whether we're just going to take an up-or-down vote on the document that's before you, or whether you're going to make some set of recommendations and then vote on that change to the document. Is that clear? Okay, so show of hands: all those in favor of having multiple awards associated with this. Hands up — one, two, three, four, five, six. Those opposed? Mm, a lot of abstainers, a lot of abstainers. Okay, now, the abstainers: would you just vote this down no matter what form or flavor we bring to you? So is there a motion here to defer this concept to May Council? Show of hands, those in favor? Those opposed? All right, Howard, chocolate for you. Okay, are we okay with that? All right, so we'll defer this. Go ahead, Lisa. Lisa, we made it. A question about how the multiple awards would work: you're talking about having these multiple groups come up with their own standards or criteria. I doubt you're talking about funding multiple groups to come up with their own criteria and then having those groups work it out among themselves — or are you talking about having different groups for different disease domains? I think that's going to get presented at May Council. Okay, but perhaps talk with Council members. Sure, it was informative. It's a hard one, you're right. So let's move along to Anastasia.