passed the gavel to Howard to lead the panel discussion as moderator. Thank you, Dan. Brandy, did you receive the slides? I did, and I'm loading them right now. Thank you very much.

So I was asked to be summarizer, but really it's not so much summarizing as it is stimulating some discussion. And I was given one slide. And so I think that the quote from the famed American philosopher, Mary V. Relling, is really important: can you implement if you don't know how it works? And part of this is that I'm not sure we're discovering in the right context. By looking at pharmacogenomic implementation the way it's been done traditionally, we're really looking at endpoints that often aren't the exact ones needed for implementation. And we're not necessarily involving the right people in that effort. And so I think there's an opportunity in eMERGE to really overcome some of this, because the discovery will be happening in the context of routine practice, not within the unusual situation of a clinical trial. And especially in the oncology area, we know that the patients who are able to go on trials are the rare, unusual patients, not the normal patients. I think that the randomized trial versus the organically randomized data from the clinic is certainly an important element, and eMERGE has an opportunity there. But I really worry that... well, we can get the necessary sample sizes, as Dan Roden alluded. But I'm not sure that we have the right methods for really optimizing data from the electronic health record. And I don't mean the methods for extracting the data or deriving phenotypes; from a study-design standpoint, this is a pretty nascent area and it needs some focus. Last thing: there's not a lot of evidence that we can do iterative interventions into the electronic health record. Now, Vanderbilt's done a little bit of that.
But there's still a lot to be learned about how we take the eMERGE-like setting and turn it into an implementation science setting, in its fullness. And then lastly, something from Dan Masys: guidance for best evidence-based therapy selection is the sweet spot for pharmacogenomics. It's something that can be done with infrastructure such as we have for eMERGE. But there's still a lot of work to be done to not only discover, but to really push it forward into practice. So I'll stop at this point and see if we can get some discussion started. The floor is open for questions or comments here.

And I'll lead with, I guess, a provocative question, because it seems to me that the dichotomy between discovery and implementation makes it sound like implementation is just a done deal, that operational health care is something you don't change. But in fact, the other major theme afoot in the national agenda is learning health care systems, and they continuously change and they continuously learn. So I'll ask Mary first: what's the intersection of her set of assertions and the idea of learning health care systems?

Well, I mean, a learning health care system... I guess you could say that there are aspects that drive that learning that are based on implementation projects, but I still think that those are largely coming from research. We just had some examples of that with these system-based, quote, randomizations. But that really is a research project to see whether outcomes are indeed better if genomic medicine is used versus not. If you know that outcomes are better, if you know that a patient has a completely inactivating variant in their TPMT status, it would be 100% unethical to give them the normal dose of that drug, and to test yet again, 30 years after we know the answer, whether there's more or less toxicity or better or worse outcomes in patients who have their dose altered based on TPMT or not.
So that could be part of a learning health care system if the health care system is practicing really bad medicine. And I acknowledge that there are many aspects of health care systems that are practicing horrible medicine. But that doesn't make it right for us to try to capitalize on that lousy health care in some kind of vulturistic way just to generate more data for something that's not ethical to study.

I don't disagree with you, Mary, but I think the question is where is this boundary between research and implementation? The whole elegance of a learning health care system is that you really merge quality improvement and research, in a sense, because they're the same methodology, even though they're done by different people, and you apply the results not only to the literature but to the practice. So we haven't discovered everything; I think we can safely agree upon that. And in the context of, I guess, expanding the boundaries of our discovery, it's a more systematic integration of research, in a sense, into practice, so that we can harvest the experience of routine clinical practice, which for the most part goes fallow in today's biomedical world. So I actually see it more positively.

I see it positively. I just see it as clinical research, not implementation.

This is mysterious. Part of my reason for including the "are the right people involved in the effort" question was around what Chris just raised: a lot of the folks involved in quality improvement aren't involved in eMERGE, or aren't involved in a lot of these efforts. And certainly at my previous institution and my current institution, I've been able to cause change to happen much faster by working with that group than by sticking with the group that's more comfortable with an endless number of clinical trials.

Yeah, and this is Mark Williams.
And I would also add to the discussion, having worked in integrated health care delivery systems, and again using the methods of quality improvement coupled with research, I think there is a real sweet spot there. But I think perhaps the takeaway from the questions that Howard has teed up is: if we think about the next phase of eMERGE, would a component of that be, what are the appropriate trial methodologies? Or would a proposal for an RFA have to include something that says, you know, what is your pragmatic trial methodology to be able to study this? Because that's where the implementation research is really going. And as an example, PCORI is highly emphasizing pragmatic clinical trials, which I think eMERGE is very well positioned to be able to leverage. So is that the question that we're really asking about inclusion here, which is different trial methodologies?

So this is Julie Johnson. As I listened to this conversation, I think the question was put forward as a dichotomy, really: discovery versus implementation. And in reality, there are three steps. There's, you know, the original discovery of the genetic association. And unfortunately, most cases in pharmacogenetics and otherwise aren't at the stage that TPMT is; I think we still do need evidence for whether the genetic association has clinical meaning. And so that's not discovery and that's not implementation; I agree, it's some sort of clinical research. And so I think the question is, how do you do that best? And then there's the clinical implementation. And so, you know, I think part of the problem is that the latter two things have been lumped, in some ways, under, quote, implementation. So implementation is stuff that's truly ready, where you're testing uptake and, you know, attitudes about uptake and that kind of thing. And then we need to come up with some term for that middle space. So, testing the relevance of implementation.
Testing, you know, or testing the value, the clinical value, of utilizing genetic information to guide care decisions. And so, you know, it does seem like eMERGE is really obviously positioned for, if we say there are three things, the first and the last. For the middle, it depends on the trial design, and I think, like Mark said, if there's a pragmatic design, then perhaps eMERGE is really perfectly situated. And I mean, I would tend to agree that that might be the better approach. But if it's a more traditional clinical trial design, then I don't know that that makes sense. So I would argue that the question isn't maybe posed quite right, because I think there are really three phases, and we're talking about the first and the last when we say implementation and discovery. And that middle piece, which is maybe where we need the most help, is kind of missing.

Yeah, this is Ergen from Mount Sinai. I think I agree entirely with what Julie said, and certainly, reflecting on the conversation, I think a number of points were raised that clearly need to be considered very seriously. One of the major points from Howard on his slide is: do we have the right people for implementation? And I would say that within the construct of eMERGE, we probably don't have the right level of expertise for implementation, because what's missing in eMERGE, to a great degree, is the constituency of providers and experts in clinical care and workflows. So if implementation is something going forward, then I think there needs to be a clear expression that these stakeholders will need to be brought into the fold: people who understand the clinical workflows that are essential to clinicians. And currently we don't have those at the table for true implementation.
I also think, in agreement with Mary and Julie, that we are well positioned in this sweet spot in between, and with some intelligent approaches, such as what Mark pointed out, pragmatic trial design and others, we have some unique opportunities. One was raised and mentioned recently in conversations that we had with our external advisors: the opportunity we have to recall by genotype. We should think about what a fantastic opportunity that is for us. We have phenotypes across the electronic health record, we have genotypes for tens of thousands of individuals, and if there are burning questions or scenarios for which evidence needs to be generated in a specific case, to move something over the border from research, to generate the evidence that would allow us to formulate an implementation strategy, I think that's what we can do with this kind of approach. And so perhaps that's one way of thinking about that sweet spot.

So this is Justin. I want to re-emphasize what was said. One of my favorite quotes is from Paul Clayton, who said that you implement every system three times. You implement it once to find out if it can be built at all. You implement it the second time to figure out how you should build it. And then you actually build the one you use. When we talk about implementation science, a lot of that focuses on the uptake of the intervention by the target party. But in fact, in eMERGE 2, we're actually at that first step of implementation: can you even build a genomic decision support system that will bolt onto an EHR? Most of the questions of implementation science are the ones we will address when we build the systems the second time, which is to figure out how we should build them and integrate them into the workflow. So I think we need to think about, as they were saying, implementation as multiple things, and a lot of what we're doing right now is just, can we make the technology jump through this hoop at all?
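The recall-by-genotype design described in the discussion can be sketched in a few lines. This is a hypothetical illustration, not eMERGE code: the record layout, the variant name, and the comparator rule are all assumptions made for the sketch.

```python
# Hypothetical sketch of recall-by-genotype: given a genotyped, EHR-linked
# cohort, pull all carriers of a variant of interest plus a capped number of
# non-carrier comparators as the invitation list for targeted follow-up.

def recall_by_genotype(cohort, variant, controls_per_carrier=1):
    carriers = [p for p in cohort if p["genotype"].get(variant, 0) > 0]
    noncarriers = [p for p in cohort if p["genotype"].get(variant, 0) == 0]
    # Invite every carrier; cap the comparator arm at a fixed ratio.
    return carriers, noncarriers[: controls_per_carrier * len(carriers)]

cohort = [
    {"id": "P001", "genotype": {"CYP2C9*6": 1}},
    {"id": "P002", "genotype": {"CYP2C9*6": 0}},
    {"id": "P003", "genotype": {"CYP2C9*6": 2}},
    {"id": "P004", "genotype": {"CYP2C9*6": 0}},
]
carriers, controls = recall_by_genotype(cohort, "CYP2C9*6")
print([p["id"] for p in carriers])  # carriers invited for follow-up
print([p["id"] for p in controls])  # non-carrier comparators
```

In practice the comparator arm would be matched on demographics and ancestry; the simple cap here is just a placeholder for that step.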
And I think an appropriate target for eMERGE 3 is: okay, if we can make the technology jump, how should we make it jump to optimize clinical practice?

This is Zach. Can you hear me? Yes, we can. Great. It's very unusual for me not to be able to jump right in, so I'll do my best now. Dan Roden made some comments which triggered in my mind some, I think, important scientific questions for eMERGE 3. Dan mentioned that some rare variants were at a higher frequency in the African population. And then on the chat part of this meeting software, he also responded to everybody that there seem to be about 50,000, perhaps, African Americans in the cohort. And I think that's important, so let me put it as a three-part question. A: does Dan think those rare variants that are more common in Africans are actually the causal variants, and therefore actually cause a pharmacological change in those individuals if they're subject to the same drugs? In the context of, B: recent very nice papers showing, for a number of heart diseases such as hypertrophic cardiomyopathy, that rare variants that were supposedly causal, and first ascertained in European populations, have prevalences of 30% in Africans, where it's clearly not the case that 30% of that population has HCM. Which leads me to, C, the scientific agenda for eMERGE 3: I really think we can start addressing, and we should address, because very few others have, and we're in a unique position to do so because of our electronic health record systems and health-center-derived populations, the degree to which some of these variants are genuinely causal variants or are incidental findings that could actually result in overtreatment or mistreatment of individuals in underrepresented minorities. And with that I'll leave Dan Roden and others to respond.

So I'll take the opportunity to answer that, because he asked me specifically.
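Zach's plausibility argument, that a supposedly causal variant cannot be far more common than the disease it is claimed to cause, can be made quantitative. The sketch below is a simplified version of that allele-frequency sanity check; every number in it is an illustrative assumption, not data from the discussion.

```python
def max_credible_allele_freq(prevalence, max_allelic_contribution, penetrance):
    """Rough upper bound on the population allele frequency a truly causal
    dominant variant could have: disease prevalence, scaled by the largest
    share of cases any single variant explains, divided by penetrance.
    For a dominant disease, carrier frequency is about 2x allele frequency."""
    return prevalence * max_allelic_contribution / (2 * penetrance)

# Hypothetical check for an HCM-like disease (assumed prevalence ~1 in 500):
threshold = max_credible_allele_freq(prevalence=0.002,
                                     max_allelic_contribution=0.1,
                                     penetrance=0.5)
observed = 0.30  # allele frequency of an implausibly common "causal" variant
print(f"max credible frequency: {threshold:g}, observed: {observed:g}")
if observed > threshold:
    print("flag: frequency is incompatible with a causal role")
```

A variant at 30% frequency fails the check by orders of magnitude, which is exactly the kind of incidental finding the discussion warns could drive overtreatment.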
The answer to whether CYP2C9*6 is causal or not: I suspect it is, because we know something about its function, and it's a reduced-function allele, and therefore it makes biological sense as well as statistical sense. But I think that the more generic question of variants of uncertain significance, especially in the rare variant space, is one that anyone who takes care of patients with hypertrophic cardiomyopathy, or in my case, you know, channelopathy, struggles with every time you see a patient. And I think that we're going to have to come to an understanding that there are diseases that are caused by rare variants, there are phenotypes that are likely to be modulated by rare variants, and then there are rare variants whose role in pathophysiology has been dramatically overstated by initial studies. And I agree with Zach. If we're ever going to make headway: for a variant that is one in a thousand, how are you going to figure out what it does to a phenotype? You can either do an in vitro evaluation of function, and sometimes even those are misleading, or you can ask the question, does it associate with some kind of phenotype? And to do that you have to have very large numbers, and eMERGE is one of the places that can do this. I see that people are talking about UK Biobank, about Kaiser, about the VA, and I think that those are large resources with which eMERGE ought to consider collaborating. It's easy to say that, and it's actually operationally hard to do. Each one of them has its own access model, and each one of them has its own data sets that are bigger in some ways and smaller in other ways compared to eMERGE. I know a little bit about UK Biobank, and they are having trouble getting the kind of detailed electronic health records that we are used to within eMERGE.
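Dan's point, that for a rare variant the question becomes "does it associate with some kind of phenotype, given very large numbers," can be illustrated with a minimal carrier-versus-non-carrier comparison. The counts below are invented for illustration; the test is a one-sided Fisher exact test built from the hypergeometric distribution, standard library only.

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact p-value for the 2x2 table
    [[a, b], [c, d]] = [[carrier & phenotype, carrier without],
                        [non-carrier & phenotype, non-carrier without]]:
    probability of seeing >= a phenotype-positive carriers given the margins."""
    n_carrier = a + b            # row margin: all carriers
    n_pheno = a + c              # column margin: everyone with the phenotype
    total = a + b + c + d
    p = 0.0
    for k in range(a, min(n_carrier, n_pheno) + 1):
        p += comb(n_pheno, k) * comb(total - n_pheno, n_carrier - k) \
             / comb(total, n_carrier)
    return p

# Hypothetical EHR cohort of 100,000 with a 1-in-1000 allele: only ~100
# carriers (here 40 of 100 have the phenotype, vs. ~10% of non-carriers).
p = fisher_one_sided(40, 60, 9990, 89910)
print(f"one-sided p = {p:.3g}")  # strong enrichment in carriers
```

The point the arithmetic makes is the speaker's: with a 1-in-1000 variant, a 100,000-person cohort yields only about 100 carriers, so network-scale or cross-biobank sample sizes are what make this test informative at all.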
So they may have 500,000 samples, but they have that drawback; they have the great advantage that they have very, very detailed phenotypes for some particular diseases. So I think each of them brings something to the table, and eMERGE, as large as it is, ought to be a player in that space. And I think I've said enough.

This is Dick Weinshilboum, and I couldn't agree with you more, which will shock you that I'm saying that. But as a matter of fact, what you said, and I think what eMERGE has shown, has profound implications for clinical trial design. I think the eMERGE program is beginning to tell us that the way we've been doing the studies, with 2,000 patients in one arm on standard therapy and 2,000 in the other arm on standard therapy plus another drug, may not be the way to go forward. Do you have any comments with regard to the implications of your own comments for clinical trial design?

No. I think I'll let other people talk; I don't want to monopolize your time. But I will just make a generic comment that implementation in trial design has to happen after we do discovery, or coincident with it. Dan Masys has made the point about a learning health care system, and that was sort of what I was trying to do on the last slide. I think that the implementation side and the discovery side go hand in hand: as one data set grows, the other data set by its nature gets richer.

So this is Mike. I have just two quick comments about the VA's Million Veteran Program. We do have approaching about 50,000 African Americans. So I do agree, Dan, that there's plenty of reason to figure out how best to collaborate, and we are working behind the scenes on figuring out how to create data sets that can be accessed from outside and also de-identified, to overcome some of our collaborative barriers. But the second is that we've got a big initiative on implementing trials within the system.
I really think that we do need to focus some attention on implementation of trials that are more broad in terms of their enrollment, but that use the EHR backbone, rather than creating a whole new electronic backbone to support the trial activity every time we do a trial. And we've got a trial in that point-of-care mode that's under review next month that will be randomizing people at the time they pick up their drugs. It happens to be a hypertension trial, chlorthalidone compared to hydrochlorothiazide, and we've got some regulatory issues to get around. But I think that, you know, eMERGE, with its expertise in implementation, could play a big role in figuring out how to do very large trials at a very low cost utilizing the electronic health record, and making trials go from $100 million trials to $5 or $10 million trials.

This is John Harley in Cincinnati. Maybe one of the themes of eMERGE 3 would be to assess penetrance. We would have the electronic medical record to go to, and we'd have this enormous database. And so if the issue in clinical application is penetrance, then we would be in a really powerful position to actually reach some kind of resolution, or make progress, about what judgments to make about penetrance.

Yes, and I think we can lengthen that to penetrance and which variants are and are not pathogenic. A huge amount of the work in genomic medicine now is understanding which variants are pathogenic and which are not. We have a lot of data that we could put to that question, and really understanding that in eMERGE would be really exciting.

Howard, I want to really echo that point, because both penetrance and heritability can be derived from this data set with current technologies and can really add some rich context to almost every phenotype, which is just totally missing right now. I mean, we're chasing things that really we shouldn't, and vice versa.
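The penetrance estimate the discussion points at, the fraction of variant carriers in the EHR-linked cohort who show the phenotype, is simple to compute, but with the small carrier counts typical of rare variants the uncertainty matters as much as the point estimate. A minimal sketch with a Wilson score interval (the counts are hypothetical):

```python
from math import sqrt

def penetrance_with_ci(carriers_affected, carriers_total, z=1.96):
    """Point estimate of penetrance among carriers, plus a Wilson score
    interval, which behaves better than the normal approximation when
    the number of carriers is small."""
    n = carriers_total
    p = carriers_affected / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return p, center - half, center + half

# e.g. 7 of 9 known carriers have the phenotype by a given age:
est, lo, hi = penetrance_with_ci(7, 9)
print(f"penetrance ~ {est:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A real analysis would also condition on age at observation and on how carriers were ascertained, which this sketch ignores.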
And that would be a huge service to the community, in addition to all the cool methods and findings that would come from it. I think this is the sweet spot.

There was a comment earlier that for a rare genotype frequency of 10% or less, we'd really need to enroll a thousand people with that genotype. We are not going to be able to do that for the number of potential genetic variants. We have to at some point go ahead with implementation and then use different study designs to study the truly rare genotypes. We cannot randomize every one of them, and we can't get prospective studies of every one. Certainly some will need that, but by and large we can't hold back implementation until we have every SNP sorted out.

Yes, so I mean the idea is not to do a giant trial of every variant. The idea is that you need very few people with the same variant to say, well, if they all have breast cancer by, you know, age 89, then it's probably a pathogenic variant, and if none of them do, if three people don't have breast cancer by 89, then probably it's not pathogenic. We're talking about genes where we know what the gene does. The most important genes are the ones that we know cause disease. We want to know, for each variant in that gene, does it cause disease or not? We actually don't have to throw a lot of people at each single variant to get that information.

With pleiotropy, I think there can be a lot of different phenotypes associated with a variant. Sure, I think that's a separate interesting question, but the idea is that for individual rare-ish variants you don't need a ton of data.

Okay, so we're just a little behind schedule, but it's been a very rich discussion, and it hits squarely in the sights of what eMERGE should be doing, taking advantage of opportunities that only this network has. So we'll... Should we just ask if there are any burning things? Okay, is anything burning out there?
Okay, so I think it started... Neil Risch, you know, has not had a chance to say anything on the phone, but has been active in the chat part of the box, and for those of you who haven't looked at it, you should. He makes the point that between Kaiser, the VA MVP, and UK Biobank, let alone eMERGE, there will be, you know, 800,000 or a million GWAS subjects soon, and so is there a role for further GWAS in this space? I will say that I don't think UK Biobank has GWAS data yet; they're going to have the UK Biobank chip, and that's going to be late 2015. But be that as it may, these resources are getting very, very large. So, is Neil on the phone?

Yeah, can you hear me? I was muted before. Yes, we can hear you. Yeah, so as some of you know, at Kaiser we had a Grand Opportunity award and we did GWAS. We have about 104,000 individuals with GWAS data, a multi-ethnic cohort; it's an adult cohort, and of course we have very extensive electronic health records connected to all of these people, longitudinal over 20-plus years. Mike Gaziano is on the phone; he can talk about what the VA Million Veteran Program is doing, but of course they have extensive electronic health record data also. And the UK Biobank, I think they've already genotyped over 100,000 individuals, and they are going to be genotyping 500,000 total over the next, I think, about two years. Currently they cannot link to electronic health data in the British health care system, but there are certain things they can link to. They're also doing a lot of their own phenotyping: they are doing imaging on 100,000 individuals, they are doing lab tests on everybody, and I think they have extensive plans for doing more phenotyping.
So I guess my point is that in terms of the balance, which was the topic here, the balance between discovery versus implementation, it seems to me going forward, if I had to characterize what I would see as the big strength of eMERGE, it is that we have a lot of different health delivery systems linked to EHRs in this network. And the question is how you can address, which others cannot, how this is going to be rolled out in different settings, how genomic medicine is going to be rolled out in these different settings, because then the size of the system doesn't matter. It's really the variation across the systems that matters. So actually, this is sort of the way the discussion has been going anyway; a lot of this latter discussion has been about implementation, which to me seems appropriate.

Okay, good. Excellent. So we're a little behind, but again, an excellent discussion. What we'll do is attempt to do a 10-minute break, even though that's never been reported in the history of science, so please leave your desktop open and your phones muted. Don't go offline so that you have to log back in, and we will plan to begin the presentations again right... well, we'll just do it right at 30 minutes past the hour, whatever the hour might be in your time zone. We'll talk to you then.