Next slide, please. Thank you. So John raised an interesting point, which I think will be a recurrent theme, and actually is a recurrent discussion area within eMERGE. That's why I didn't contribute to the discussion before, because I have my 10 minutes in the sun now. As I thought about what I wanted to say, it is a balance between this idea of discovery, which can be across the eMERGE network or in other kinds of cohorts (we're not going to consider those here), and then implementation in eMERGE. And implementation, as everybody on this call who's part of eMERGE knows, is not simply a matter of dumping a bunch of genotypes into the EMR and letting magic happen. There's a very complex process that we are just beginning to understand. So next slide, Randy. Good. Next. Click again. Click. OK.

I just wanted to make the point, which Rex had made before, that discovery science in eMERGE involves not just discovering new genotype-phenotype associations, but discovering how to do research across electronic medical records, deploying algorithms that work across multiple eMERGE sites. This is a screenshot of a table from the hypothyroidism paper. Click again, Randy. And then, of course, using existing data, that's the hypothyroidism GWAS. Next. And then we've shown that you can use eMERGE for phenome-wide association, PheWAS, which, as Rex said and I will re-emphasize, is an experiment that is hard to imagine implementing in other kinds of data sets, the principle being that the phenotypes have to be broadly defined and of medical relevance. So if you have a diabetes study, it's very difficult to think of how to do this. In community cohorts it's conceivable this could be done in some fashion, but it just depends on what the phenotype definition is. And we have interesting phenotypes in eMERGE. Next. This is my version of a slide that Rex showed.
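The phenome-wide association idea described above can be sketched in a few lines. This is a toy illustration with simulated data, not the eMERGE pipeline: a real PheWAS uses EHR-derived phenotype codes and regression with covariates, while here each phenotype is a simple case/control flag tested against carrier status with a 2x2 odds ratio. All names and numbers are hypothetical.

```python
import math
import random

# Simulate a cohort: one genotype flag, several EHR-style phenotype flags.
# "hypothyroidism" is built with a true association; "type_2_diabetes" is null.
random.seed(0)
n = 2000
carrier = [random.random() < 0.2 for _ in range(n)]
phenotypes = {
    "hypothyroidism": [random.random() < (0.15 if c else 0.05) for c in carrier],
    "type_2_diabetes": [random.random() < 0.10 for _ in range(n)],
}

def log_or_z(genotype, cases):
    """Wald z statistic for the log odds ratio of a 2x2 genotype-by-phenotype table."""
    a = sum(1 for g, y in zip(genotype, cases) if g and y)        # carrier, case
    b = sum(1 for g, y in zip(genotype, cases) if g and not y)    # carrier, control
    c = sum(1 for g, y in zip(genotype, cases) if not g and y)    # non-carrier, case
    d = sum(1 for g, y in zip(genotype, cases) if not g and not y)
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return log_or / se

# Scan every phenotype against the same variant: that loop *is* the PheWAS.
for name, cases in phenotypes.items():
    print(name, round(log_or_z(carrier, cases), 1))
```

The point of the design is the inversion: a GWAS fixes one phenotype and scans variants, while a PheWAS fixes one variant and scans the whole billing-code-derived phenome, which is only possible when phenotypes are broadly defined across an EMR.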
I updated it after Irwin said that he thought the numbers were a little bit wrong, and I'm sure that everybody will tell me that their numbers are higher than they were. But the current number that I see is 362,000, and the imputed GWAS data set is 50,000, and that may, as we've heard just now, go as high as a couple of tens of thousands more, so 70,000, 80,000, 90,000. So click again. No, go back. Thank you.

This is a personal comment. I think that if we're going to individualize medicine, if we're going to treat patients differently, the only rational way to develop a data set to do that is some approach like this. If you have 1,000 patients and 10 of them are likely to respond differently in some way, those 10 will never provide the evidence base that will allow you to treat them differently with a straight face; they won't pass a sniff test. But if you have 100,000 people and 1,000 of them respond differently, then you can start to make a case that this group should be treated differently. So it speaks to what Gail was talking about with the rare variants. We have to have evidence around rare variants. When she says rare variants, she might mean 1 in 10,000; when I say rare variants, I might mean 1 in 1,000. But those are the kinds of subsets that we can now begin to identify with a data set that is this large. And again, I emphasize, and I think the point has been made before, that there's a lot of GWAS data in this set; there's less rare variant or sequence data in this set. Next slide.

So as I thought about what to say about the discovery and implementation missions, and I guess I telegraphed my bias at the top, I think we ought to think about both in some way of balancing those, and I'll come back to the idea that they interact with each other at the end of this little chat. So I thought, well, what can eMERGE contribute uniquely to discovery? And what can eMERGE contribute uniquely to implementation?
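The arithmetic behind the 10-in-1,000 versus 1,000-in-100,000 point can be made concrete with a simple two-proportion z-test. The response rates below (20% in carriers versus 10% in everyone else) are hypothetical, chosen only to show that the same subgroup fraction is statistically invisible at one scale and unmistakable at the other.

```python
import math

def two_proportion_z(k1, n1, k2, n2):
    """Z statistic for the difference of two proportions, pooled standard error."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)                     # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Small cohort: 10 variant carriers out of 1,000 patients.
# 2 of 10 carriers (20%) respond differently, versus 99 of 990 (10%) non-carriers.
z_small = two_proportion_z(2, 10, 99, 990)

# Large cohort: same fractions, 1,000 carriers out of 100,000 patients.
z_large = two_proportion_z(200, 1000, 9900, 99000)

print(round(z_small, 2), round(z_large, 2))
```

With identical effect sizes, the small cohort's z statistic sits well below the conventional 1.96 threshold while the large cohort's is far above it, which is the sense in which only a data set of this size can build an evidence base around rare subsets.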
There are lots of people doing GWASs, for example on lipid traits, or on acute myocardial infarction traits. And we can participate in those studies, because we have large data sets that can contribute to very large meta-analyses and that sort of thing. But what eMERGE ought to focus on, I think, is what we can do that other people would have a harder time doing. That's what I'm going to talk about. Next slide.

The easiest examples, I think, are in drug responses and cancer susceptibility. And you can say, well, you know, we understand the pharmacogenomics of clopidogrel and warfarin or simvastatin, and therefore we don't need to study them anymore; all we need to do is implement them. And the same thing goes for some of the common cancer susceptibility alleles; we know them, or so we say. But the real question is, do we really know all there is to know about variable responses? That's a rhetorical question. I'm going to make the case over the next three or four slides, with an old drug, that there's lots and lots that we don't know. And part of the reason we don't know it is that we have been limited in the size of the data sets we've been able to study so far. Next.

So I'm going to show you four or five slides of warfarin data. This is a slide that I'm very fond of showing because it shows what happens when you examine people who are on very large doses of warfarin to achieve therapeutic anticoagulation. And without going into the details, the major message of this slide is that the main reason people need very large doses to achieve therapeutic anticoagulation is that they're non-compliant: they don't actually take their warfarin, or they don't absorb it for some reason. But if you take people who actually take their warfarin and who have very high dosage requirements, it turns out that there's a group of people who have rare variants in VKORC1.
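The interplay between common-variant dosing algorithms and a rare large-effect resistance variant can be sketched as a toy dose model. To be clear, the coefficients and the function below are illustrative inventions, not the published IWPC or any clinical algorithm; the only point is the shape of the logic: common alleles nudge the predicted dose, while a rare target-gene variant can swamp them.

```python
# Hypothetical warfarin maintenance-dose sketch (illustrative numbers only).
# Common CYP2C9 loss-of-function alleles and VKORC1 promoter alleles lower
# the predicted dose multiplicatively; a rare VKORC1 coding variant in the
# drug target raises it sharply.
def weekly_dose_mg(cyp2c9_lof_alleles, vkorc1_promoter_alleles, rare_resistance_variant):
    dose = 35.0                                  # illustrative baseline, mg/week
    dose *= 0.7 ** cyp2c9_lof_alleles            # each reduced-function allele lowers dose
    dose *= 0.75 ** vkorc1_promoter_alleles      # each low-expression allele lowers dose
    if rare_resistance_variant:
        dose *= 2.5                              # rare target variant: warfarin resistance
    return round(dose, 1)

print(weekly_dose_mg(0, 0, False))   # typical patient
print(weekly_dose_mg(2, 2, False))   # sensitive genotype: much lower dose
print(weekly_dose_mg(0, 0, True))    # resistance-variant carrier: much higher dose
```

An algorithm fit only to common variants would never predict the carrier's requirement, which is exactly why rare variants with large effect sizes matter for dosing.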
So this is the gene that encodes the warfarin target. Hit the advance button, Randy. And there's a rare variant called D36Y. D36Y is rare, but if you run an anticoagulation clinic in Israel, you have to take it into account, because it's a 5-percenter in the Ashkenazi population, and those people require very large doses of warfarin. And I'm sure that there are other rare variants that have very large effect sizes when it comes to determining warfarin dose. Now, if we ever use warfarin again. Next slide.

So these are data from a study that we did in BioVU, probably around the time we were joining eMERGE. So these are older data, but I think they really make an interesting point. We looked at the predictors of warfarin steady-state dose in our own cohort. And you can see the CYP2C9 *2 and *3 alleles, the marquee variants in CYP2C9; they have the minor allele frequencies that are shown, and strong associations with ultimate steady-state dose. There are three VKORC1 SNPs, all in LD in this particular group, with very, very strong associations with warfarin dose. Go on.

OK, now that population is actually both the Caucasian and African-American population combined. When we broke it down into European Americans and African Americans, the statistical significance is much less in the African Americans because the data set is much smaller. But we actually lose the CYP2C9 *2 signal entirely. And the genomic architecture of VKORC1 is such that the three SNPs that were in LD in the Caucasians are no longer in LD in the African Americans, and the SNP that counts is actually the bottom one, rs9923231. So hit the button again, Randy. So this is warfarin. Warfarin is more complicated; there are more than two genes. I'll just show you where the other genes are on this slide. Hit the button again. So we actually looked at common variants across many, many genes in this slide.
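The point about LD structure differing between ancestry groups can be sketched with the standard r-squared measure of linkage disequilibrium. The haplotype counts below are hypothetical, constructed only to show how two SNPs can be perfect proxies for each other in one population and nearly independent in another, so a tag SNP chosen in the first population misses the causal signal in the second.

```python
def r_squared(haplotypes):
    """r^2 linkage disequilibrium between two biallelic SNPs.

    `haplotypes` is a list of (allele1, allele2) pairs coded 0/1,
    one pair per chromosome.
    """
    n = len(haplotypes)
    pa = sum(h[0] for h in haplotypes) / n            # allele freq at SNP A
    pb = sum(h[1] for h in haplotypes) / n            # allele freq at SNP B
    pab = sum(1 for h in haplotypes if h[0] and h[1]) / n  # joint haplotype freq
    d = pab - pa * pb                                 # LD coefficient D
    return d * d / (pa * (1 - pa) * pb * (1 - pb))

# Hypothetical population 1: the two SNPs always travel together (complete LD).
pop1 = [(1, 1)] * 30 + [(0, 0)] * 70
# Hypothetical population 2: same allele frequencies, but the SNPs have been
# decoupled by deeper population history, so they are nearly uncorrelated.
pop2 = [(1, 1)] * 10 + [(1, 0)] * 20 + [(0, 1)] * 20 + [(0, 0)] * 50

print(round(r_squared(pop1), 2), round(r_squared(pop2), 2))
```

In the first population, genotyping either SNP captures the association; in the second, only the truly functional SNP does, which mirrors why rs9923231 is the SNP that counts in African Americans.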
And the point is, if you look down at the bottom, there are CYP variants in African Americans whose minor allele frequencies are appreciable; for example, CYP2C9 *8 is a 7-percenter in African Americans, with an effect on ultimate dose. But nobody ever thinks about that when they consider creating algorithms. So I think we really have a long way to go before we understand even common variation in commonly used drugs. That's what these slides are supposed to show. Next slide.

So you heard from Rex that there are efforts to implement some really interesting genetic variants in the EMR: Factor V Leiden, HFE, and APOL1. And these are the poster children for the idea that there are common variants that may have large effect sizes in defined populations. And my question to you and everyone else on the call is, are these really the only common variants that have large effect sizes, or isn't there a place for discovering more of these? The APOL1 story has really only emerged in the last two or three years. Irwin can talk much more about that than I can, but it's a fascinating story. And of course it's only in African Americans. Dan, this is Terry; you have about one minute, please. And I have about one minute, you're right. Next. Next.

Okay, and then as you heard from Iftikhar, go back, you heard from Iftikhar, we are also deploying complex combinations, and the questions are how to deploy them, how to validate them, and how to measure impact and outcome, and I'll just show you in the next slide. So, discovery science: the 362,000 DNA samples coupled to the EMR can enable PheWAS, I've already said that, and complex outcomes. Randy, hit the slide. And the complex outcomes are not only longitudinal over time, drug responses, and disease subtypes, but gene-by-all-of-those interactions. Next. I think we have to think about ancestry and develop ways of generating larger African-American cohorts.
We probably have 50,000 African Americans across eMERGE right now, and then there are issues around privacy that we still need to address. Next. The implementation science: I'll let you read what it says there, but basically, if you're going to implement, you have to have evidence, and the evidence comes from the discovery side; then there's how you do it, in whom, education issues, decision support issues, and then tracking outcomes, which I think is something that somebody has to do, and if it's not us, who else? Next slide.

And then I just want to make this point that as we implement, we learn, and as we learn, we generate larger data sets. So if you implement HFE in a very large cohort, you'll generate data on HFE that will then feed back into the discovery side. So I don't think it's discovery or implementation; they each feed on the other, and I would argue that we have to retain both, but we have to think about what kind of discovery we can do that we're uniquely positioned to do. That's the end of my 10 minutes.

Okay, thank you for that, and we're going to try and quickly pass the baton to Mary Relling for her reaction. Can you hear me? Yes, we can.