Thank you very much. We have now heard about the time to personalize medicine, and from Gail Jarvik representing the eMERGE Network. One of our broad guiding questions for this discussion is what we can learn from the existing approaches of five different networks or agencies dealing with this problem. As we heard already from Chris O'Donnell, there were a variety of different levels of evidence and definitions around evidence, and also different categories and definitions around clinical utility or actionability. So maybe I'll go back to the speakers from the five different groups and ask: can you see commonalities, and is there any hope for convergence around developing the evaluation of evidence, or the criteria for clinical utility or actionability, across the five groups? Anybody want to volunteer? Gail?

I think the commonality that I see is that there are too many variants that we need answers about for us to have time to do them all independently. We may not have time even to divide them up and do them. It makes sense to work in a more coordinated approach and not reinvent the wheel.

I can add to that that there are a lot of people who feel the burden right now and want to be involved in these projects, and I think there's a lot of person power that could be put towards these efforts if they're coordinated. There also has to be an iterative stage where people who say they want to be involved, but don't get things back quickly, are booted out. That is an important part of it. The speed with which one does quality review is the critical piece, and there are so many people who want to be involved that we can afford to boot some people out. So I think that kind of echoes what Gail was saying: we need to get more people involved, and not just within the U.S. It's almost the exact same problem every place, maybe in a different context, HMO versus NHS, but a lot of these issues are so similar that we should work on them together.
But even though we have different levels of evidence and different definitions of clinical utility, is there not a way to think about a consensus set of categories of evidence and categories of potential clinical utility? Then individuals with different perspectives can choose, given those common quantitative scales or categorizations, how conservative or practical they want to be: do I use this today, do I allow individual patients access to this information when they want it, do we reimburse it, how much is it going to cost a healthcare system or hospital, et cetera. Rex, you want to come up?

Yeah, I guess what I'd like to throw out for either the speakers or the group, in listening to this, is that it's not clear there are any striking examples of things that everybody would agree are, sorry Howard, ready for prime time. A, I'd be interested in whether that's a perception that anybody else has. And second, I'd like to dig into that a little deeper and get people's thoughts: is that simply because we have not yet had enough time to work through enough examples, the person power issue, or is it because, again to Howard's point, we all believe in this and we want it to work, and so we're just trying to find a way to see it as working? I'd be really interested in people's reactions to that, because it's really important to get beyond whether we're all doing this because we believe it's there, whether it really is there or not, or whether there's something underneath that. Sorry to be provocative.

So I agree very much with what David just proposed. When I listened to the five talks, I heard the first two as groups that had the luxury of being a research protocol, able simply to say yes or no to certain things, call it research, and for the purposes of research stop there. What I heard from Ned was that it's good to be systematic.
What I heard from Howard was yeah, but patients come into clinic and need an answer now. And then again, ARC is trying to shed some light on a process. What I see as the theme through all of it is that there needs to be a process, and there's way too much data for any one group to do it all.

If I can add in the perspective, again, of the NHLBI workshop that was held a few months ago, we touched on this concept as well, and one of the consensus points that came out, and those of you who were in the room, correct me if I've got this wrong, was that rather than answering gene by gene, variant by variant, disease by disease, we should identify the process of how to make a call. That allows the idea that Dave suggested: depending on your criteria, whether it's research, population health, or a patient in clinic before me today, you have a system. If we can build systems and give tools to people, everyone can make their own decision, and they can fight about it later depending on their situation.

Yeah, I think there seems to be some consensus around the idea that there's way too much work to do and the only way we're going to solve it is with a distributed approach, and if that's the case then we have to agree on methodologies, at least a suite of methodologies that we all accept as reasonable and that can be applied depending on the circumstances. Ultimately in this country it will come down to the individual stakeholder, however that is defined, as to whether or not it's implemented, because that's just the way it works in this country; we have very few things that are done by fiat at the national level when it comes to health care delivery. But we would have to be able to agree on methodology, and then I think we also have to agree on representing the results of that in a standardized fashion, in a way that's accessible.
I mean, those are the two takeaways that I see. The thing I'm less sanguine about is that I don't think we're going to identify generalities that can be applied across a lot of variants, because of the tight linkage of the variants to the clinical context. So I don't think we're going to find generalizable principles that we can apply across the entire genome and say, okay, we've now taken care of these 50,000 because they all look the same. I'm a little bit less optimistic that we're going to be able to arrive at that. It comes down to the "when you've seen one variant, you've seen one variant" problem.

So thanks, Mark, I think that was really well said, and I think focusing on agreeing to the process, and how we talk about the outcomes, is just vital. I would say it would be nice to even start by getting to the point where we all agree on the process of establishing clinical validity, and saying that's the standard we're going to use, before we start launching into use even in the research setting. I think one of the lessons learned from the Duke experience is that there is a risk in high-throughput activity, and the risk, at least for them, was pretty high. But if we can agree on the set of variants or sequences or genes that we say have potential and meet some standard for clinical validity, then that gets us all on the same page for that part of the process. The next part of the process is exactly what you said: trying to figure out how we approach them in these different settings. And I would just put on the table: it is about the risk of being wrong. We talk about certainty in the evidence-based world; certainty is just the opposite of the risk of being wrong.
And if the risk of being wrong, for the individual's health, the community's health, or the population's health, is low but the upside is good, then you can have a trade-off and a process that says let's grab those first. If the risk of being wrong is high and the outcomes are potentially devastating at the individual, community, or population level, then you might say, boy, that's a really good place to pull out the full randomized controlled trial. And finally there's this issue of real-time prospective clinical data analyses: if the methodologists in the room can figure out how to develop robust methods that we feel confident are getting non-biased answers, then I think we have a complete package on which we can build a high-throughput, responsive system.

Robert Green. I would like to respond to Rex's challenge about whether there are areas of potential agreement, and as I'll show in my talk in a few minutes, I think there is some low-hanging fruit, both with respect to specific variants that we can agree on and with respect to categories. I'd also like to push back a bit on that, Mark, and suggest that we might be able to create categories of variants that allow us to generalize within those categories, so that we can do randomized trials, for example, in one area, or use trials or effectiveness trials for a particular thing, what we called, back at Muellen's conference years ago, a sentinel study of some sort, and then generalize that to other variants of that type. An example might be that what you learn right now about variants that increase your risk of Alzheimer's disease might also be relevant to similar variants that increase your risk for Parkinson's, ALS, and so forth. There are similar types of issues involved in terms of treatability, fear, and that sort of thing.
The second point I wanted to make is that I think there's going to be this tension we're describing between the systematic and centralized and the chaotic and decentralized, and it's going to play out, as others have pointed out, in the messy practice of medicine, which, as you were saying, has always been able to turn up its nose at evidence when it finds it convenient to do so. So the tension gets played out in consensus, evidence-based recommendations and priorities, which generally will orient around do no harm. But there's another axis, which we haven't mentioned much: what will society pay for. You could imagine a two-tiered system where there is the freedom to explore all sorts of genetic information, but there is payment or reimbursement only for things meeting certain evidence-based criteria. I just wanted to throw that into the mix of the conversation as well.

Yeah, I want to speak a little bit to the prime time issue. It's a good provocation, an important provocation. I personally am drawn to Howard's definition of what people often mean when they ask whether something is ready for prime time or not. But depending on the group that you're getting consensus from, you're going to have a different evaluation of whether it's ready for prime time. If you look at the PGRN, the Pharmacogenomics Research Network, who have been committed to this for a long time, 12 years, and you look at the slide that Howard showed, you see not 10, not 20, but 30 different drug-gene associations that they feel are approaching actionability, to use a word that obviously has some emotional valence to it. So I think that's going to be part of our discussion. First, what is the evidence, and we clearly have different views about what evidence is required; but also, what is the body making the decision? Who are these people ordained to declare prime time or not? And I think we don't agree on that particular group.
So I'll be very interested in how this evolves. Naomi, did you just have a... Yes. So I'm thinking about what Ned and Muin have called Tier 2, which I understand to be actionable, but not necessarily, the jury is still out, useful: it's got validity, but not necessarily clinical utility, which can be quite difficult to establish, though not as difficult as some might believe. I also heard coverage with evidence development raised. I think more knowledge of what has actually transpired with coverage with evidence development under Medicare will show you that it's produced a lot of coverage, but very little evidence of utility. So I'd say that if you truly believe in this route of actionable and Tier 2, then when placing something in Tier 2 it really is incumbent on you to do more than just say go try it and we'll see if clinical research generates something. Look at diagnostic imaging: that's what's been going on for years, and you can see where that has helped us. That's the frailty of our whole healthcare system as we face the enormous challenge of trying to bring approximately 49 million people under the umbrella of coverage over the next few years. So I would suggest that if things are going into Tier 2, and you're thinking they should be actionable and you want to know whether they're useful, then it's incumbent to lay out the structure of an investigation that will actually demonstrate this utility. And if it were to be tied to coverage, and there are many good reasons that payers and plans are very reluctant to get involved in this, then it needs to be something that can be done with observational data. For a variety of contractual and structural reasons, coverage with evidence development doesn't lend itself to trials.
If this is a question that truly requires a trial, typically because of the complexity of the variables and issues involved, then you shouldn't put it in Tier 2. If you put it in Tier 2, you should have a plan for translating actionability into utility within a clear horizon and framework. And as Ned said, then either up or out.

The audience back here is too quiet. Anybody awake back there? Steve. Just as a follow-up to the comment about ready for prime time, and maybe we'll talk about this more tomorrow: the central repository networks, and I take the point that there should probably be more than one, integrated, work well when they have this idea of submitters and assertions. So there doesn't have to be one group making a conclusion about promoting something from Tier 2 to Tier 1; there can be many different groups, and they can disagree, even at that curated level. Then, as long as we have coordinated access to all the information, you can see the differing views, and those can be the inputs to the next decision-making process, which each organization can do differently.

Okay, so I'm going to jump in here, because we're hearing this too much without really defining it: what is actionability? In inherited diseases, most of the time we're talking about the patient. The thing about genetic and inherited diseases is that it's no longer just the patient, it is the family. So even if we cannot treat a specific genetic disease, does that mean it's not actionable? Well, no. What about family planning? We've identified a child with developmental delay because of fragile X. Is that useful? Well, it's useful because you've stopped the diagnostic odyssey. You have an accurate diagnosis. They can do family planning. They can make decisions with that. But I'm not really sure how good the treatments are for fragile X. I've heard both ways, that they're improving, but they're not there, and we're certainly not going to cure fragile X.
You know, breast cancer testing. Is it going to make a huge difference in the treatment of the woman who has breast cancer? Maybe some. But my goodness, it's going to make a lot of difference to her younger sister to have the testing done. So actionability needs to be pretty broadly defined to be able to include these things. Or just use utility instead of actionability. Kathy.

I would be interested in a laboratory perspective on this, because one of the things that I've found is that sometimes we act as if we are still the gatekeepers of this information and of the testing being offered, and we're not. A lot of times what drives whether tests are offered is what is actually on a requisition form, from a primary care provider's perspective, and what tests might be offered in a certain panel, say an Ashkenazi Jewish disease panel. A lot of times when you ask, and we did some research on this, when you ask a primary care provider or OB-GYN why they offer a certain number of conditions to their patients, it's not so much because they've looked back at guidelines and taken advice from them, but simply because of the laboratory they use and the number of conditions it offers. And we're seeing a lot more of this now with some of the newer companies coming out. So I'm just wondering, from a lab perspective. I think it's so important for us to have these conversations with industry about how those decisions get made as to what gets on these forms, because I think that drives a lot of practice too.

A couple of things. First of all, I second David's suggestion of not using the word actionability, because in the last three minutes I've heard it used for two completely different things: one person uses it to mean clinical validity and another person uses it to mean clinical utility. Clinical validity and clinical utility are terms we all know and understand; I don't know why we need another word.
So I would just get rid of it. Let's get back to using terms we all agree on and understand. The other point I wanted to make is that this afternoon's discussion, although very, very interesting, in some sense has stepped back from what I feel should be the main focus of what we're dealing with here, and perhaps of what NHGRI should be focusing on. And that is clinical validity: trying to understand what the impact of a variant is on a person's health, and how to gather that kind of information. Because if you don't have clinical validity, then there's no point in talking about clinical utility. And I think right now, today, we have a situation where most people who are doing clinical genetics while running labs have their fingers in dikes, and what they see across the dike is a tsunami. Talking about clinical utility, they're probably worrying about whether it's a good idea or a bad idea to be five miles behind you when this tsunami hits. We're going to be inundated by a large amount of sequence data. We don't know what it means. And we need a scientific approach to collect that information and pull it together, using all the different ways, family, function, frequency, that we use for trying to understand whether a variant has any clinical validity. So I'd like to ensure that we focus on that and talk about it. In some sense, SNPs have some clinical validity, because we've done association studies and we know what the observations are; but for me, to a large extent, that's a hammer we can put down and step away from. It's not going to be that much. We are instead faced with huge numbers of variants in genes that we know can really interfere with the function of that gene, with a major impact on that person's health. And we don't know how to deal with it. Great comment.
Unfortunately, we're going to have to cut off the discussion now to take our coffee break. When we come back, the next group is going to tell us how to develop consensus for binning and classifying variants for clinical use. So we'll reconvene at 2:50, or ten of three.