And we'll move into the general discussion, and open the floor to questions for the other two speakers. I would refer you to the organizing questions that are on the agenda. Is it under moderated discussion? Kind of in the middle. Ah, got it. So these are the organizing questions, which we can stick to, or we can go off the rails, and I'll just stand up here to refer to those as need be. So the floor is open. A comment: that last question asks, how do we capture professional society guidelines in this process? Do you mean, how does this process incorporate outside clinical guidelines? Is that the intent? I think the way I would interpret it is that we probably made that question too narrow. One of the things I've been taking away from our discussions today is that there are all sorts of groups that are already doing this, and other groups that want to do it. So in some sense, while professional societies are the groups that have traditionally gotten involved in developing clinical guidelines, the reality is that the evidence around variants in genes is going to be coming from a lot of sources. The question is really more: how do we get our arms around all of the data that is potentially useful for this type of resource, and consume it and reposition it in such a way that it all looks the same, the standardization piece. And I do wonder, from my perspective, having traditionally done regular non-genomic decision support, where you generally start with the clinical practice guideline that everyone uses and agrees to, whether it would be best if this process could directly influence those guidelines. So not that there is a professional society guideline that says, don't use the test, and then you have to take that and incorporate it into your thinking.
So for example, with warfarin, I think there was a CHEST guideline that said, don't use CYP2C9 and VKORC1 testing. And at Duke, when I started developing these kinds of capabilities, when that came out, the clinicians basically said, well, why are we going to use this? There's a professional guideline saying, don't use this test. I wonder from that perspective whether it might make sense for this kind of resource to feed directly into those practice guideline processes, so that the end result is a well-accepted, general clinical practice guideline that incorporates these recommendations. Thanks. I can't believe there's no other discussion. Okay, Muin. Okay, quick questions for you and Robert here. Just listening to the whole discussion today, sometimes you think that reasonable people cannot agree on anything, because we have such divergence of opinions. And I've watched a little bit how the EGAPP working group comes together. They come from different walks of life; there were geneticists, there were evidence-based people. Once you put people in a room and try to develop a game plan... Now Ned's shaking his head, I think maybe agreeing with me, because you've sat on so many of these groups, like the Newborn Screening Advisory Committee and the US Preventive Services Task Force. People start with different points of view, but if you decide to come up with the rules of the game, set the rules of the game in motion, get stakeholder involvement and buy-in on the rules up front, then you can apply them and you can refine them. And I haven't seen anything throughout the discussion today that leads me to believe that we shouldn't be doing the binning process that Jonathan proposed earlier, or answering Bob Nussbaum's call for a focus on clinical validity, because I think those two things will have to come together.
Clinical validity, really, assuming that the genome technology and the analytic piece get taken care of, although that's a big assumption right now, and I'm not a laboratorian. I think the clinical validity piece can be organized in a relatively straightforward fashion. It's basically genotype-phenotype correlations. It should come from large-scale studies, representative populations, et cetera. Clinical utility we can debate, the pros and the cons. I like the two axes you put forward. There are other axes that I've heard; David Veenstra wrote a paper on the two dimensions of the balance of benefits and harms, because the outcome for one person could be benefit, but the outcome for ten other people could be harms. Case in point, maybe, prostate cancer screening. Then think about the prices, the economics: the price of the testing is going down to zero or almost zero, but the cascade effect may be going to infinity. On the one side, you're reducing the cost of the upfront testing; on the other side, the more return of results you get, the more costs you will incur. So trying to capture these things in a matrix like this would be useful, although we'll be chasing that matrix the whole time, because it will change maybe day by day or week by week. So I'm a believer that putting the stakeholders together like this is a great place to start. Rules of the game can be agreed upon.
Not everyone will agree on everything, but a process can be put in place. And on the evidentiary side, to try to answer those questions more specifically, I think you need the two-tier approach on clinical validity and clinical utility, with those definitions set up a priori, differentiating between the purposes of the testing like Jonathan talked about: usually a diagnostic setting, but then what do you do with the rest of the information, which becomes more like a screening exercise, because these people are not sick with these diseases, they are asymptomatic. So you shift back and forth between a diagnostic paradigm and a screening paradigm in the same patient, with sensitivity and specificity issues, downstream effects. I just wanted to kick off the conversation, because I think it's doable. We may disagree on some things, but there could be broad agreement, and it should be done across the Atlantic and involve other groups, because I think the UK specifically, and some other countries like Canada, have a lot to teach us about the principles of evidence-based medicine as well. So I'll let Robert take the first shot at that, and then I recognize we have a speaker in the back too. Well, I'll be quick: I agree. Because it's hard, we need to recognize that there's diverse opinion, but we should do it, and that's why I keep talking about low-hanging fruit. We can do this incrementally; we can start with a limited set that we can agree on and then reevaluate and add to it. With regard to across the Atlantic: if I understood the large report that just came out of Britain, what they have decided for the moment is that they will only recommend that next-gen sequencing be used for candidate-type testing, and will reject the notion that anything incidental will be found or communicated. Is that your understanding of it as well? Well, we have some... Oh, sorry. The report that the PHG...
The PHG Foundation report doesn't represent the official position of the UK. That was just a position paper for them? Okay. A policy analysis. I see. All right, it was pretty darn thick. Yeah. Policy analyses usually are. All right. So you have up here... Could you identify yourself? Oh, Roger Klein. I'm actually an EGAPP working group member as well, and I actually live in Wisconsin right now, so... Well, there's a lot of 'Sconnies around here, so we're trying to close. Anyway, so you have here "developing consensus for binning variants for clinical use." And as Jonathan pointed out, he was talking about binning genes. So with binning variants versus binning genes, the questions are different. Binning a variant a priori assumes that the gene should or should not be reported. Looking at individual variants within particular genes is, I guess, more of a nuts-and-bolts type process in certain respects: trying to figure out the particular meaning of a variant that you find when you're using massively parallel sequencing. So I'm wondering what we're actually talking about here. So, Terry? Yeah, I think I might disagree mildly with that. We know that there are many variants that are tested, even on genome-wide chips, some of which are associated with traits and some of which are not, within the same gene. So I would disagree that there's so much of a focus on genes, and really suggest we look at the variants. I guess, coming at it from my perspective, I'm looking at the introduction of massively parallel sequencing in order to look essentially at mostly private mutations. Admittedly, there are SNPs (you've posted lists of SNPs that have low odds ratios), and there are defined mutations, but in inherited disease testing, much of what we see are private mutations, or variants of unknown significance, for example, that are going to expand exponentially with the number of genes that are tested.
And so I do think the question is whether or not to report. For example, let's say the cost of doing whole exome sequencing gets to the point where it's the way to go for any genetic test, even if you want to look at one gene. Then the question is, what genes should you look at? So first you say, what genes should you look at, and once you've binned those genes, you can look at the individual variants and try to make sense of them. And as a person who, from the laboratory side, constructs reports, what I need is to be able to understand the meaning of an individual variant in a particular patient who's tested for a specific disorder. As this information becomes quickly overwhelming because of its volume, I think the need is to develop mechanisms and processes so that we have access to the best information at the time, and a means of updating. Now it's Nazneen and then Jonathan. Yes, with respect to that specific first question, and in relation to the things that are being said: one of the areas that I'm a little bit concerned about is how dependent we are on using data that's already available, and whether that is appropriate, or whether there's a potential for that to introduce bias in terms of how we're classifying our variants. For example, I would agree that in terms of going after the low-hanging fruit, there are certain genes where we know mutations are pathogenic, and we know we're going to get a lot more information about them. There'll be a lot more VUSes. How are we going to define those? I think they're low-hanging fruit in the sense that if we can't sort that out, I'm not quite sure what hope we have for anything else. But also, just by their very nature, a lot of those genes are the ones that we are using clinically, and they may have severe potential effects, both for benefit and for harm.
My concern there, for example: if you now have a gene where you know truncating mutations are pathogenic, what are you now doing about your missense variants? A lot of the time we're assuming that data that's already available, say from 1000 Genomes or this, that and the other, will be able to inform on that. And it will, to a certain extent, but I think there is a systematic bias in terms of how the data that's come out of those types of analyses has been curated and validated. A lot of them actually are not reporting things that are a bit dodgy, which are often the frameshifts, or things that have only been seen once. They're actually explicitly not giving us rare things. Whereas when we're looking at our clinical cases, we're often going very, very much into detail to try and find out whether a variant's present. Therefore, if you're then doing those comparisons, you may see a different spectrum. You may see a greater proportion of rare missense variants in your cases compared to your controls, and you would anticipate that. But I think there's quite a potential, therefore, for bias in making clinical comparisons when in fact your cases and your controls, in terms of how you're defining them, have been treated completely differently. And I wonder whether, in trying to get that evidence, and particularly that genetic type of evidence, one of the things we have to keep our eye on is whether we have to actually do systematic, specific sequencing of both the population and the cases, to ensure that we've got an appropriate comparative analysis. Yeah, I think that's a good point. I had one brief comment on that, and then we'll go to Jonathan's comment. I think this gets back to the point that Elaine was making earlier, that there's still a role for traditional genetics in this.
I mean, if we think about cytogenetics, which we've been doing for 60 years, every new technology in cytogenetics has been associated with the same problem: we find new things, and we have to decide, is this something that's causative or not? And we have approaches by which we can work on that, sometimes by aggregating data, sometimes by looking at controls, but more frequently by going back to families and trying to assess within a family whether or not we're seeing something. Elaine alluded to that with some of the highly penetrant mutations in some of the genes that ARUP tests for. So I think another thing we need is a way to facilitate those more traditional genetic approaches to understanding, at least for variants within a gene where we know that if there's something wrong with the gene, there's going to be a clinical phenotype. How we can easily move between the clinical and research realms there, how we can aggregate that data more quickly, and, more importantly, how we can engage people who may not necessarily want to be engaged in terms of getting information, are some of the challenges there. And I just wanted to respond to the comment about our definition of meaning: it did start from a gene-centric approach, because we're geneticists and we think about things in terms of what a mutation in this gene causes. But I think you could do a very similar thing on a variant basis, right? There is a specific variant called Factor V Leiden that does very different things than a truncating mutation in Factor V. Same gene, different mutations. And so one could develop a binning system for variants that we know about, that have names, that we see in the common population, and for what their usefulness is in the clinic.
And then a separate system, like the one I've detailed a little bit more, for those private mutations and all of the other rare variants that you pick up in genes, which may or may not have clinical significance. Yeah, I wanted to pick up on another point that was implicit in the comments that Robert made, and that's the cost issue: we are going to reach a point, and for some multi-gene panels we already have, where whole genome sequencing or whole exome sequencing is actually cheaper than doing gene-by-gene testing. This has been an issue that, in my role working in a healthcare system, is very frightening, because of the concerns about now having all this other data. The imperfect metaphor, or analogy, that I use for this: those of us who are old enough remember when we went from ordering test by test to ordering the Chem 20. When you ordered the chemistry panel, the thing that drove it was the economics in the laboratory. It was cheaper to run the 20 tests than it was to run the one or two tests; if you got over one or two tests, it was cheaper to run the 20. Well, the original rollout of that was that you gave them all 20 results. And if your normal ranges are based on 95% reference intervals and you do 20 tests, you're going to have at least one test out of range in just about everybody. Now, if you're a good clinician, you look at that and you say, hey, the sodium is one point out of range, I'm not going to chase it. If you're not such a good clinician, or you're worried about liability, then you start to do other stuff, and you incur costs for no relevant value. And so in a lot of systems now, what happens is that if you order your electrolytes and a calcium, you get your electrolytes and calcium, even though it's being run on the chem analyzer.
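The reference-range arithmetic behind that Chem 20 observation can be sketched in a few lines. This is a simplification that assumes the 20 analytes are statistically independent, which real chemistry panels are not, so the true figure is somewhat lower:

```python
# If each test's "normal" range covers 95% of healthy patients, the
# chance that a healthy patient shows at least one out-of-range result
# grows quickly with the number of tests on the panel.
def p_any_abnormal(n_tests: int, in_range: float = 0.95) -> float:
    """Probability of at least one out-of-range result, assuming independence."""
    return 1.0 - in_range ** n_tests

for n in (1, 6, 20):
    print(n, round(p_any_abnormal(n), 2))
# With 20 tests, roughly 64% of healthy patients flag at least one "abnormal".
```

The same arithmetic is what makes a "Chem 6 billion" so alarming: with millions of observed variants per genome, incidental out-of-the-ordinary findings are a statistical certainty.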
And the only time you get called from the laboratory about something else is if there's a panic value: the AST is 15,000, is your patient yellow, that sort of thing. And as I've thought about this, we're talking about a Chem 6 billion here. So we have the potential problem that if we dump all the data out there, we're going to incur a lot of unnecessary costs. In some sense, I think this gets to what Jonathan's saying: what did we order the test for? What's the clinical indication, which should be the primary definer of what we return? And then can we define what we might call genetic panic values, which might be the incidental findings? So those are the sorts of things that I'm beginning to hear at least a little bit of agreement about at a high conceptual level. How we actually do it is going to be challenging, but conceptually, those are some things that I'm beginning to hear. The other thing that I'm beginning to hear, from a number of independent voices, is that the place to start is really around the clinical validity piece, and that we will be able to reject a lot based on the fact that we don't have anything there: a "no there there," Oakland situation, if you will. So, Howard? So an earlier comment was made about concerns about the implications of what bin something gets put into, and in particular angst that something going into bin two needs to be very, very carefully researched. My thought here is that I would encourage us to separate the concept of binning to help guide what to do or not to do from the concept of, does it really mean what we thought it meant? I would argue that research is always needed. And I would suggest we all think about something totally far afield from the concept of looking at a genetic variant, but I think very instructive, and that's the story of thalidomide. When it first came out, it was bin one. This is a great drug.
We should give it to every pregnant woman who's nauseous. And then it became bin three: oh my God, this is terrible, it's awful, we have to stop doing a lot of harm. And then, with still more research, it turned out that contextually it's bin two, or maybe even back to bin one: certain people really need thalidomide. It's a great drug for certain uses, as long as you're not in certain dangerous situations. So it's been pointed out several times that a genetic variant could go from bin three today to bin one tomorrow. Let's not forget that that's a two-way street, and it's constantly moving in different directions depending on your context and today's research. So I had a thought: Jonathan surreptitiously introduced a kind of Bayesian way of thinking, with prior and posterior probabilities. And we're not gonna forgive him for that. No, I think he needs to be commended for it. And I think the definition of a panic value fits into that: it's essentially when your evidence overwhelms a very low prior. And one can think about codifying that. So that's what I'm talking about. Then, coming back to Jonathan, first a very specific point and then a more general one. The first: you talked about binning genes and then looking at variants. What about the genetic model? Where does that come in? Because that's potentially automatable. One can annotate genes with their known genetic model and filter down to smaller subsets to manually curate on the basis of that. So that's the very specific question. And then the more general one is about the more nuanced reporting you talked about. I'm speaking from the perspective of someone who's going through exactly the kind of process you're describing, but perhaps a more nuanced one, in a subset of patients, children, which introduces a different setting.
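The "panic value as an overwhelmed prior" idea can be codified as a simple Bayesian odds update. The numbers below (a 1-in-10,000 prior, a likelihood ratio of 5,000 for a strongly pathogenic finding, and a 10% reporting cutoff) are illustrative assumptions, not values from the discussion:

```python
# Bayesian sketch of a "genetic panic value": report an incidental
# finding only when the evidence (expressed as a likelihood ratio)
# overwhelms a very low prior probability of disease.
def posterior_prob(prior: float, likelihood_ratio: float) -> float:
    """Update a prior probability by a likelihood ratio, via odds form."""
    prior_odds = prior / (1.0 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

PANIC_THRESHOLD = 0.10  # hypothetical reporting cutoff

prior = 1e-4  # assumed 1-in-10,000 background risk
for lr in (10.0, 5000.0):
    post = posterior_prob(prior, lr)
    print(f"LR={lr}: posterior={post:.3f}, report={post >= PANIC_THRESHOLD}")
```

A weakly associated variant (LR 10) leaves the posterior near the prior and stays unreported, while a strongly pathogenic finding (LR 5,000) pushes the posterior to roughly a third and crosses the threshold.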
But the thought of reclassifying all of those thousands of OMIM genes for every single nuanced disease setting is something that fills me with dread, and I wonder whether you've thought through that process. Okay, so the second question first. This idea of the binning rules that we use to decide which variants in those genes to report: when I said more nuanced, I meant that as we learn more about the types of mutations that cause disease, what I described was very crude, like taking an ax to something and trying to create a sculpture. There are frameshift mutations that are clearly deleterious in some diseases; frameshift mutations probably don't cause any problems in other conditions. So we'd be building a more gene-specific set of rules that would say, in this particular gene, it happens to be mutations primarily at these residues that cause problems, and the other mutations we don't think cause much of a problem. And getting down to that level of complexity in terms of which variants are the important ones that you would report if you found them: that's sort of what I was getting at there. And there are other nuances like you suggested: for children, you may apply different criteria to which bins you might report. I've had arguments with my molecular lab colleagues about it, and I think about the types of things in Robert's research, although they're maybe not completely generalizable in terms of what other clinicians would think. What would you do with a BRCA mutation in a child if you found it incidentally? It may not be something that we would have intentionally looked for in that child, but if that child has it, probably one of the parents has it. And so you could argue that it actually should be reported, because it has this cascading effect on the family, and potentially prevents that child's mother from dying at a young age. So I think there's lots of room for figuring out how to actually apply this clinically.
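A minimal sketch of the gene-specific rule sets described above. The gene names and rules here are hypothetical placeholders; real criteria would come from curated evidence and could carry additional context flags, such as pediatric reporting:

```python
# Hypothetical gene-specific reporting rules: the same variant class
# (e.g. a frameshift) can be reportable in one gene and ignorable in
# another, as discussed above. Names and rules are illustrative only.
REPORTABLE_CLASSES = {
    "GENE_A": {"frameshift", "nonsense", "canonical_splice"},  # truncations pathogenic
    "GENE_B": {"hotspot_missense"},  # truncating variants tolerated in this gene
}

def reportable(gene: str, variant_class: str) -> bool:
    """Return True if this variant class is reportable for this gene."""
    return variant_class in REPORTABLE_CLASSES.get(gene, set())

print(reportable("GENE_A", "frameshift"))  # True
print(reportable("GENE_B", "frameshift"))  # False: same class, different gene
```

The table structure is the point: the unit of curation is the (gene, variant class) pair rather than the gene alone, which is what makes the "thousands of OMIM genes times many disease settings" workload so daunting.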
In terms of the first question, you're gonna have to remind me, because I've lost it. I'm just checking all the questions. Yeah. So you're thinking along the lines of complex diseases and how multiple variants could play in, or just... I'm not sure I can give a good answer. If it was a single mutation in a gene that causes a recessive disease, it would get through our binning and then go into the carrier bin. That's where it would end up being. Am I not understanding the question right? No. I think this discussion is fascinating, and I don't want to add to the complexity, but I've been thinking about it from a little different perspective, and that's outside these doors. Look at what this society has done with two screening modalities in recent history, mammography and PSA. We now know that we are hurting more women with frequent mammography screening than we are helping. We know that PSA has no clinical validity. It's been endorsed heavily by professional societies, as has mammography. We have a public perception problem that we really have to start looking at. We have women convinced that mammography saves lives and that they have to get it. We have men getting free PSA screenings every Saturday at university hospitals across this country. I don't know how we can develop better methodology, but unless we change the culture in this society to one that recognizes that there is harm in getting some information, or in doing something too frequently or to the wrong people... We're really wrestling with this, and we want to do it, but it's very controversial: we want to tell people who are low risk that you don't need to be tested, that you should avoid testing, that you should actively stay out of the system. That's something that this country has never, ever contemplated. We have always encouraged tests when they're available.
So I wrestle with that, and I think this methodological discussion is wonderful, because we need it, but when we walk out this door, we've got pink ribbons; we've got a huge societal culture that needs to be changed. Yeah, and from a scope perspective, those are the types of issues that keep me up at night: we recognize that we can self-define what we think is important, but ultimately it resides within a context that is frequently skewed for a variety of different reasons, some of it related to advocacy, some of it related to perverse incentives. There are times you just throw up your hands and say, are we ever really going to get our arms around the problem in this country? That being said, I think we stand a better chance if we try to be proactive in this arena, as opposed to just letting the market rule the day, recognizing that our chances of success may be only incrementally improved. The only thing I can say to that, at least from an anecdotal perspective, is that at Intermountain Healthcare, our oncology group looked at the impact of Oncotype DX testing on behavior, because one of the concerns that's been raised is that women are going to want the chemotherapy anyway; as I think Ned pointed out, the testing doesn't reduce your risk to zero, it just says you're at lower risk, so how will this really impact behavior? What was found, in a relatively large group of women who received the results compared to those who did not have the testing, was that they actually dramatically changed their behavior with respect to chemotherapy exposure. So at least in that one scenario, when the information was presented according to what the best evidence says, there was a clear impact on patient behavior, in the direction that we would have hoped it would go.
So I think there's at least a chance that we can get this right, but it does require that we focus on the harms, on the fact that doing more is not without consequence, and that we highlight those types of examples. I mean, the worst thing, from my perspective, in whole genome sequencing that's occurred in the last few years was the very high-profile individual who did his personal genomics, found a prostate cancer SNP, went in and had a biopsy, they found cancer: "I'm cured because they found it, because it was a genetic test." And I'm going, oh my God, because the likelihood that that was going to occur was minuscule, and if that anecdote drives the behavior, that could be very difficult. The other challenge, of course, is the liability issue: if you have information and you don't give it, is that going to come back at some point? Because it will be said, well, you had this, you should have disclosed it, even though it was a very low-frequency finding. So we have to be thinking about those types of policy implications as well. So, Ned, I know you want to respond to that. Well, first I'll say, when you started down that topic, I was trying to decide whether I was going to have to hide under the table or not, so I was really glad you went the direction you did. But I wanted to step back, as folks are talking about clinical utility and validity and processes and rule sets. First of all, I want to point out that I was really glad Robert was here to present what he presented, because it's a reminder that when you get into the area of clinical utility, there's not a bright line. There's not a yes or no. I would like to think that in the clinical validity arena we could decide on a bright line, or at least get kind of comfortable with one. But then recognize that utilities and disutilities, harms, often come in different metrics. And so mammography is a great example.
There's a judgment involved with the Task Force's binning of mammography in women 40 to 50 into the C category. And it was a value judgment owned by the individuals sitting around the table. There's not a bright line. There's not a: how many deaths averted or months of life saved are 100 unnecessary breast biopsies worth? I can't tell you. I'll just tell you that as we looked at it, our judgment came to a level of comfort that said that's where we ought to bin it. We looked at other activities, we tried to remove ourselves from advocacy, and we said, we'll just treat this like any screening test: where would we bin it? But recognize there's no cutoff. So there's no inherent evidence-basedness to that. So I think, stepping back to Robert's point, recognize that there are utilities and disutilities associated with everything, and they're often in different metrics, and really good people, like the people sitting around this table, can look at exactly the same evidence, not argue about the evidence itself, but bring their personal values to the table and come up with a different bin, especially around the issue of clinical utility. So that's, I think, what we have to wrestle with. If we can agree on clinical validity, and then recognize that there will be variation around the judgments associated with clinical utility, that'll move us a long way down the path, and it will allow different, or even a variety of different, decisions to be made. The issue that'd be fascinating, and I'd love to hear from the UK on this, is: let's say we boldly said we are not gonna let healthcare rise above 15% of the gross national product in the United States. That's a lot, by the way; 15% is a lot. That would bring a level of discipline to our deciding about the value of those utilities and disutilities, to a level that might actually be in our patients' best interests.
And I often wonder, about places where the government is either the single payer or runs the healthcare system, whether that discipline really translates into real health benefits. But keep in mind the utility side: my disutility is that I know that if the cost of care goes up, the number of people who are uninsured in Colorado goes up, and it's a really direct metric. So when I step back I say, yeah, that's great information to have, but if the cost is so much that now this person can't get a childhood immunization or a blood pressure medication, then my disutility metric goes way over the top. If we can keep those in mind moving forward, looking at these questions, I at least believe that'll be really helpful and respectful. So, William, before I get to you, let me give an opportunity for our UK colleagues to take up the gauntlet that was thrown down by Ned: what if we had discipline like you guys have? You guys are disciplined, obviously. So what would it look like? We have rationing; that's the reality of it. But I don't know what the clinical people would say; I'm just a bioinformatician. What do you say about the realities of that? As a government employee, I can make no comment. Well, at least that's the same on both sides of the Atlantic. As a public health doctor, I think I can comment, and that is that one of the things that happens is that you have smaller disparities in healthcare outcomes within society. That is one of the obvious things that results. But it does mean that you have rationing, and it does mean that there are things that may often be judged here as having clinical utility that don't get done. The net effect is that we still have a lot of disparity in ill health and a lot of disparity in health outcomes, but possibly less than you have here. It hasn't saved our economy from going down the tubes, though, you might have noticed.
That may be too much to ask of any intervention. Actually, one more; we'll go to Nazneen for her perspective. So I was just going to add on to the UK point, and as a clinician I also feel able to comment. I think one of the things we have to bear in mind, both with respect to the UK-US difference and with respect to this particular thing that we're trying to achieve here, is that there will need to be some flexibility in terms of that interpretation, certainly across the Atlantic. I think it is true that the perception and the behavior of patients in the UK is different than in the US, for reasons that are probably quite complex. I think overall patients do adhere to the recommendations that their doctors give them with respect to whether screening is useful, whether they're going to have tests. We had very little testing by 23andMe et cetera, even though it was available and everyone knew about it. Overall, if their doctor said it's actually not going to help very much, they generally wanted to go with that. There may be some really quite deep things about British society in terms of why that is. But we need to bear that in mind: we'll have to set overall principles, but how they're actually used will require flexibility so they can fit the different populations that are potentially using them. Yeah, I really like that idea, because it again comes back to the fact that we can probably agree on methodologies to approach these problems and generate the right evidence, but the decisions that are going to be made on that evidence will vary depending on the perspective, and I think that is something we need to be cognizant of. Thank you for being patient. I think that what will decide it in the UK will be the NICE-type criteria of health economics, and that's been the kind of driving evaluation for the question of rolling out in this area. Thanks. I'm sorry.
By the way, I would agree with that. It's NICE-type considerations, but using real-world phenotypic data, not data from clinical trials, unless it is randomization at the point of care, which is one of the ways we are now starting to go in the UK: rather than let the observational data stack up with all its biases, we engender a philosophy of getting doctors in everyday clinical care to randomize, and potentially we could do it reminding ourselves of what Zelen did years ago. We can randomize the whole population now. We don't have to wait till the event comes up or we make a decision. We can randomize the population, and then we actually have randomized real-world data and the biases are gone. Okay. I'm a government employee and I don't mind speaking up. I do have an opinion, and that opinion gets me in trouble more often than not. So here we go. I just want to follow on Bill and then Ned and this whole discussion here. Utilities and disutilities are very important considerations. A couple of years ago, Kari Stefansson and Jeff Gulcher from deCODE debated me and David Ransohoff in the pages of one journal on whether genetic information can cause harm. We were asked by the editor to take that position, whereas Kari Stefansson and Jeff Gulcher from deCODE took the opposite position, and we argued back and forth. We all made good points, but the point of our commentary was: let's get data. Data on utilities and disutilities cannot be obtained in the absence of studies like the REVEAL study, and I wish you would analyze the REVEAL study in even more ways than you have, because some of the findings showed some of the patients who tested E4-positive cascading, seeking and paying for things that were not helpful. So there are always disutilities there, and maybe you can explore that data. We need data, and I think this is a good time for that research to be done while next-generation sequencing gets analytically more accurate, cheaper, et cetera.
And I would maintain that in the absence of data on utility or disutility, all we have, or will have for a long time, is information on clinical validity. So the sooner we get into the utility business, either from efficacy or effectiveness or both, getting outcomes at the patient level, the system level, the interaction level, the family level, the better, because you will see those metrics, they will play against each other, and then they will be brought to some group like the USPSTF or EGAPP or NICE, and there will be some judgment on the metrics around that. But in the absence of data, we're all just pontificating, and we need to get to that data ASAP and embed it into the systems, do the clinical trials first, small pieces here and there. In the absence of that, people will interpret; well-meaning people like Howard McLeod, who was speaking this morning about the PGRN and what they're doing, they're all believers that pharmacogenomics will work. So they're bringing that a priori bias to the table, and their clinical actionability thresholds will be definitely lower than the thresholds of another group that is looking after the healthcare system and how expensive it is. And we don't have rationing here, but we have a broken healthcare system, so many millions of uninsured, opportunity costs, et cetera. So we cannot postpone what Bill was talking about; we have to do those kinds of studies while the technology improves, because we cannot do them sequentially. The implementation research or science that has to go along with whole-genome or next-generation sequencing means doing the right kind of research that will get us those utilities and disutilities ASAP.
So something that's implicit in what you said, and was explicit in what John said and what Ned said earlier, and I think was also in Gurvaneet's talk, is the idea that we can't look at this as being linear. A lot of the diagrams that we see show this push from basic science out into the clinic, but the theme I hear reemerging through the discussions is that we also have to have robust methodologies in the clinical world that can return data in a rapid fashion, to say whether what we're doing is really adding value or not, and at what cost, and then we have to be willing, as Ned said, to say: this looked promising, but we need to get rid of it; this is just not working. We've had some success, particularly I think with some of the supplement and vitamin issues, maybe because we have a natural antipathy towards vitamins to begin with, so when we find data that they don't work we're happy to get rid of them, as opposed to our nice medicines. But the reality is we have some examples where we've actually taken things out of practice that are clearly not beneficial, and in some cases harmful. So how do we build those types of systems? And again, to refocus what has become a relatively broad discussion: what role could NHGRI, with partners, potentially play in doing that? I think I had Gurvaneet first and then Robert. So, just following up on the conversation, and this is not a government position, but what I've observed is that it's important to have the outcomes information available. I don't have such a pessimistic view that the society here is broken and everything's always going to increase.
Two examples I can give: one was the bitter controversy over lung volume reduction surgery in emphysema. Once there was a trial and once people agreed that it should be covered, the expectation was that utilization was going to go through the roof, but when the informed discussions were had, more patients actually refused the procedure than went through it, because of the aggressive nature of the surgery and the minimal benefits. So that's one example where you had a discussion and utilization actually went down. The other example, on a diagnostic, is whole-body CT scanning, which is in some ways similar to the whole-genome discussion. There was this concern that everyone was going to be using it and what are we going to do with the findings, and it again has gone away, for different reasons I think, but it's not always the case that everything we do always goes up in society. Robert Green and Robert Nussbaum. I was just struck by your example of the vitamins, because I think it's also a good example in the other direction. We've known for quite a while about the lack of validity of so much in the nutritional world, and particularly in an unregulated environment it's a multi-billion-dollar business that frankly contaminates scientific medicine. In my view it's one of the scandals of our current society. But following up on Bob's three P's from earlier, or the three F's, I've got three I's. I hear us saying maybe that we want to try to come to some consensus that's incremental, iterative and informed by evidence, and that maybe we want to have an alliance with, what is it, the Agency for Healthcare Research and Quality, and PCORI, to try to use those resources and partner with them in order to move genetics forward in an evidence-based way.
Because I think a lot of us are concerned that the rest of medicine isn't doing very well at this, and I'm of two minds: on the one hand I want genetics to be evidence-based, but on the other hand, are we really holding this to a standard that we don't hold the rest of medicine to? Are we trying to fix the healthcare system through genetics? It's odd in a way. So if we combine the three I's with the three F's... so. I sort of feel a little bit like the French horn who comes in on the wrong note. If you have an SMA-6 and a creatinine of four, I don't want the laboratory to tell me what the clinical utility of knowing that four is. It could have been an eight last week, and it could have been a one two days ago and the patient could have been in ATN, or it could be a stable four. What I want them to tell me is: is the four abnormal or not? And that's what I would like from NHGRI if they're going to get involved with this. Elaine? So, along with what I'm hearing, I don't know what the data is supposed to look like when you say evidence and things like that. When I think about a variant that I'm trying to classify and figure out, I'm talking very specifically: is there enzymatic evidence to go along with this? Is there immunohistochemistry evidence? Is there any functional study? I don't know how to visualize what you mean when you talk about randomized trials or collecting outcomes studies, because for me, with this one patient with this one mutation, the chances of my ever seeing that mutation again are pretty small. So when I hear "outcomes data" and "let's look at the data," I don't know how to visualize that in the realm that I'm working in. So, if I can restate what you're saying, this is the issue where we're really dealing with rare, ultra-rare, perhaps unique variants, where there will never be those types of data that could be utilized to help with an interpretation.
And so what I'm hearing you say is that, again, we have to recognize that some of the things we're going to find are not going to be amenable to broader evidentiary approaches, and that we're going to have to revert to more traditional ways of assessing pathogenicity. Is that fair? Yes, but it's not even the rare or the ultra-rare. In terms of genetics, what other people consider rare I consider semi-common in the little inherited-disease world that I live in. So when you're talking about a SNP chip and you're reporting out a hemochromatosis variant or a cystic fibrosis variant, those types of things, there are a number of studies for me to work with and go on, even though they may not be perfect. But next-generation sequencing, at least initially, is going to be used by the clinicians that don't really know where else to go, or they've done other tests and it's a diagnostic odyssey. So one of our clinicians approached me and he said: I want a CLIA-certified exome test. I want you to do it. He's in cardiovascular genetics, so he said: I want to give you the genes that I know are associated with this, and I want you to analyze it for that list of genes and give me a report back. And then if there's nothing there that explains it, I want to give you a second set of genes to analyze and come back with, and if that still doesn't explain it, would you just give me the data and let me play with it. These are the types of scenarios that we are facing right now, with a ton of data that's going to be coming out. Who was it that talked about this, the tsunami that's coming? But those are not going to be amenable to randomized trials or outcomes data. I mean, the outcome is: did we find something that could explain the child's symptoms? Correct.
I think we need to maybe bring it back to what NHGRI can do. An important thing not to lose track of is the biology that underlies this. Even these rare things that happen are telling us something about biology; they're telling us about biological pathways that are important in some process. And so I think that emphasizes even more the need for us to make sure that we capture these one-off situations in some resource where they're annotated and can be used, not even necessarily clinically, but biologically. We don't want to lose track of the biological utility of all of this data. So I think an important part of this resource, and an important role that NHGRI can play, is to continue to link this back to the underlying biology. And that's clearly within NHGRI's mission; it was clearly within the goals of the strategic plan. Even though we're the dots that are out at the bleeding edge, there are a lot of dots sort of in the red spot of the NHGRI strategic plan where the data from all these one-offs could actually be very helpful in informing the biology. So I would argue that that's one place we can help. So that's another example, again, of the backwards flow of information to inform or generalize knowledge. Right, and I'd just like to make a second point with respect to the evidence. It's actually more of a question for people to think about: I'm worried that we're being a little schizophrenic here. Everyone's scared about the magnitude of the tsunami of data that's coming. But I wonder if actually one of our problems is that we don't have enough data yet. Is part of the problem that we don't have a large enough collected set of variants, with enough phenotypic information attached to those variants, to have enough examples to actually say what the clinical validity is, forget the utility?
So I just want to get us thinking about this almost-schizophrenia: we're all worried about the amount of data that's coming, but I think actually the problem is we don't have enough yet. Yeah, I would agree with that. And in terms of helping you, one thing that will come out of the research side is being able to say: these variants have been seen, they are common in the population of people who aren't sick, and you can ignore them. And that data set will get better and better, and more stratified, as the population data grows. 1000 Genomes certainly isn't enough for this; we'll need much bigger studies to get that at a fine scale, but it will get collected. And so that's a positive feedback. Certainly good news for the sequencing crew. Robert? So I guess I'm coming at this from a similar perspective to Elaine's. In today's world we have clinical tools that allow us to rapidly genotype large numbers of genes, and there are requests now to do so. For example, in a cardiomyopathy panel, a battery of genes that somebody may request, perhaps the first two genes account for 70% of the mutations, the next five genes account for a small but significant percentage, and then as you add more genes, the likelihood of finding a mutation in any individual gene becomes very, very small. However, the likelihood of finding a variant goes up. And if you find a variant, what do you do? You find a variant of unknown significance in these genes; many of them are missense mutations. These patients' outcomes can be improved, or at least it's believed that they can be improved, for example by monitoring and intervention. So do you start monitoring patients, and their family members, based upon the addition of multiple tiers of genes?
I think one of the things that needs to be done, and it probably involves epidemiology to some extent and statistics, is to figure out ways to determine how many genes we should add, based upon the likelihood of finding a variant of unknown significance versus the likelihood of finding a pathogenic mutation. Because we don't have any way of doing that; there aren't guidelines, and I really don't know how to make guidelines. It's very challenging. The ability to genotype has outstripped our ability to deal with the data that comes out in a clinical way, not research but clinical. Mark? Yeah, I'm Mark Yandell, University of Utah. I'd just like to finish up with one comment today. We talk a lot about things like filtering and binning, but really all the successes in genomics to date have come from the application of rigorous statistical methodologies to the problems. We don't decide if one gene is homologous to another based on filtering or opinion. We don't decide where genes are in a genome based on those kinds of things. We apply rigorous science and statistics. One interesting thing about the data that's coming out of whole-genome sequencing is that it's very amenable to these kinds of techniques, and so, with respect to Rex's comment about what the NHGRI can do now, I think one thing they can do is fund work towards developing more rigorous statistical methodologies, so that, just as you get back on a diagnostic report a probability that the finding indicates a particular disease, you actually have a rigorous probability that a given new allele, or common allele, or what have you, is actually likely to have an impact. And I think those things are ultimately determinable with these data, and again the irony right now is not that there's too much data but, as Tim pointed out, that there's still really very little.
We still don't have enough data yet to really know what the frequencies of rare alleles and disease-causing alleles are in the larger population, but I think it's coming, and we need tools that can operate on it in a rigorous fashion. So I think there's a twin-track approach there. The first track is the collation of single observations and the sharing of that information with associated phenotypic data, and that already occurs for copy number variants: the DECIPHER database has been operating for seven years. ISCA has been operating for how long? A couple of years? Three years. And they're collating this kind of information, and it's useful. The DECIPHER database is used by 230-odd clinical centres around the world; only 23 are in the UK, so that means 90% are outside the UK. And we need to be collating sequence variants in the same way, because the one thing we can be sure of is that every base in the human genome is being mutated multiple times in the current generation that's alive today. So the humans are out there; we need to collate the information. But equally, that doesn't mean we ought not to be developing the probabilistic approaches as well, to provide us with evidence for things that we have still only seen once. There are ways in which we can start to use the population variation data to ask the question: how likely is it that I would observe a variant of this type, even if I haven't observed that variant itself? But it does place, as Naz mentioned, a very high burden on how we collate the population variation data and how we disseminate it to enable those probabilistic algorithms to work, and the current data that we have is not sufficient for the task.
So if I'm hearing what you and Mark have said, and to expand a bit on your two tracks: essentially we're coming at it from both ends. We've got the population issues that we've heard about, but we also have the laboratory's dilemma of finding the individual things, and somehow we have to get our hands on both of those data sets and bring them into some sort of convergence, so that we can do probabilistic statistical analysis on the things that are amenable to it and use other approaches on the things that may not be, and if we can get all the data from those various sources and have some sort of strategy for analyzing it, we might have a better chance of filling in the current gaps in our knowledge. Is that a fair restatement of what you both said? Yeah, I certainly didn't want it to get boiled down to an either/or; I think both. Right. Both NHGRI and other international groups can participate and lead. So, Tim, I'll give you the last shot. Just one thing: everybody's saying statistics, but I'm a predictor at heart. Ultimately, we're going to have to deal with the things that have never been seen before, the ultra-rare things, and there won't be statistics associated with them. So I've always thought that ultimately we have to understand the mechanism and the consequences well enough that we can actually build real predictive models. A bit of this is already happening with SIFT, PolyPhen, some of those algorithms; they assess a variant on the likelihood of its disrupting a structure, for example. That's not good enough, but we will have to go down that road if we're going to deal with all the variants. So we're at five o'clock, which is when we said we would end, so we will end, and the working group is going to have a working dinner to try to pull together the things that we heard today.
Rex has the unenviable task of trying to present that to you first thing tomorrow. The start time tomorrow is again eight o'clock, with refreshments available; refreshments at 8:00, and the sessions will begin at 8:30. Anything else, Jackie? Quick note: anyone who needs to go to the Shady Grove Metro station, there is a shuttle scheduled to be out front of the lobby at 5:15 and 5:30 this afternoon. And in the morning, if you are taking the Metro again, when you depart you would need to go to the west exit towards the Kiss & Ride; the shuttle will be there at 7:15 and 7:30 in the morning to bring you here. Thank you.