Okay, so for the rest of the afternoon, we have a number of reports and presentations. First up, we've invited Rex Chisholm to be with us. Periodically, we like to give council updates on some of our research programs. It's been a while since you've heard anything about the Electronic Medical Records and Genomics, or eMERGE, network. Rex is a member; he's one of the PIs on eMERGE. He's also a former council member, so he knew his way down to this room and he knows the drill. It'll be interesting to see him on the other side of the podium for a change. eMERGE, I believe, has 11 PIs, if I'm right. 12 or 13 now. Well, I'm going to get to them in a second. Dan, okay, it's a point of debate apparently, but there are at least 11 and Dan is one of them. There's also a coordinating center and two centralized sequencing and genotyping centers. So we've given Rex the near impossible task of speaking on behalf of all of these investigators. And he's going to do that for us now as he gives us an update about eMERGE. As he's getting to the podium, I just want to say that Rex also has been the chair of the steering committee since the very, very beginning. So it hasn't been fun yet. Well, that may come soon. It's great to be here, and it is interesting to be on the other side. I think I realized it was six years since I cycled off council, so it's been quite some time. So it's my pleasure to be able to talk to you a little bit about eMERGE. And there are several people sitting around the table here and in the room who could equally well have given this presentation today. So they'll correct me, I'm sure, if I screw up too badly. So today, you can see eMERGE consists of nine clinical sites, two sequencing centers, and a coordinating center. And most of the work of eMERGE gets done by workgroups. You can see them listed here: clinical annotation, genomics, pharmacogenomics, return of results and ELSI, EHR integration, outcomes, and phenotyping. 
And one of the things that, in my view, has led to the real success of eMERGE, although it's hard on all of our investigators, is that we actually meet together as a steering committee three times a year, which is unusual; I think most consortia like this don't meet that frequently. And we make sure we devote a large percentage of the time at these meetings to the workgroups, so that people can actually have face-to-face time together rather than just the anonymous phone conferences that are also part of this process. There are, as you can see, six subgroups that eMERGE focuses on. Familial implications of return of results, because that's an important thing; once you know something about yourself, then obviously there are big implications for your family. Think about HLA. Return of results, legal considerations, Infobutton, and I'll come back to this in a moment, participant survey, and phenotype variables. And then we've been lucky recently in receiving some additional support in the form of supplements: one for geocoding, and this has turned out to be a really powerful approach that eMERGE has been able to apply; another for phenotyping, applying the OMOP model to the eMERGE sites, which is very new, so we're just beginning that; and then a healthcare provider survey to help understand what it is that we're doing to the healthcare providers as we start to think about rolling out genomic medicine. So for those of you who aren't familiar with eMERGE, it's currently in its third funding cycle; eMERGE 1 started in 2007. And I think it's fair to say that in 2007 there was a fair amount of skepticism about whether one could actually use data from electronic health records for research purposes. In fact, I was often told that I was crazy to even think about it. And it always offended me that I couldn't use it for research, and yet it was used for something I think actually even way more important, which is to take care of my health. 
So I think one of the good points of this is that during the course of eMERGE 1, we were actually able to demonstrate through a variety of projects that, in fact, you could data mine electronic health records, create a cohort, and then do genomic discovery on that. And in some cases where we were replicating known variants, you could actually identify genetic variants that had been observed previously in completely purpose-built cohorts. One of the other things we learned in eMERGE 1 that was really important is that this could be a really efficient process, because once you developed the cohort and did GWAS genotyping on them, you could go back to the depth of the electronic health record and completely reuse that genotype data over and over again. And I think as you'll see in a minute, we've used that now probably close to 40 times to look at different phenotypes. In eMERGE 2, which started in 2011, we began to ask the question, well, can genomic findings be applied to clinical care, and how? And so we did a few clinical implementation pilots, and it was actually during eMERGE 2 that we also began to focus on pharmacogenomics, and I think that was one of the most exciting outcomes of eMERGE 2. Then in eMERGE 3, we've moved on to think about sequencing technologies: how do we use them both to improve discovery and then to do implementation? So the aims of eMERGE 3 today are to sequence and assess clinically relevant genes, presumed to affect gene function, in 25,000 individuals; we're about halfway through that process now, as I'll show you in a minute. Assess the phenotypic implications of the variants that we observe, then go the next step of integrating those genetic variants back into the electronic health record, use them for clinical care, and ask, A, what are the process issues related to that? For example, how much burden do we put on the providers who have to deal with this? 
And then ultimately, of course, can we demonstrate any improved health care outcomes as a consequence of that? And then, obviously, one of the things that we're always very interested in is, can we produce community resources that can be shared broadly in the community and help broaden the area of impact of eMERGE? So to think from the standpoint of impact, one of the things that we have today is a 110,000-participant genomic data set. And what's special about this genomic data set is that all 110,000 of these individuals are people for whom we have access, in a longitudinal way, to their electronic health records. That means that we can look not only at temporal changes that happen, but we also have the ability to look at repeat measures, to look at trends, and to look across these 110,000 people in a fairly detailed way, reusing that information over and over again as we redo analyses based on phenotypes from the electronic health record. Underlying all of that is some technology, which is the eMERGE record counter. There's a public-facing version of the eMERGE record counter, which anybody can go to and ask, in the eMERGE collection, how many type 2 diabetics are there that have a BMI over 40, and get a number. We have a thing called Roden's Rule that says whatever number you get, it's only going to be half that number at the end of the day, but nonetheless it gives you a sense of what is possible in that 110K data set. And then we have SPHINX, and I'll talk more about SPHINX in a minute, but SPHINX is about how you actually do the integration of phenotypes and deploy those more broadly. So one deliverable, then, that comes out of this collection of participants, deeply phenotyped, if you will, based on their electronic health records, is to actually look at about 83,000 of these that we have comparable GWAS-level data on. And what you can see here is a nice example of, I don't know if you can, yeah, you can see that. 
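A record-counter query of the kind just described boils down to a filter over de-identified records. The sketch below is a minimal illustration, not the actual eMERGE implementation; the patient data, field names, and the `rodens_rule` helper are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    """Minimal stand-in for one de-identified record-counter entry."""
    icd_codes: frozenset  # diagnosis codes observed in the record
    bmi: float            # most recent body-mass index

# Hypothetical miniature cohort; real queries run over ~110,000 records.
COHORT = [
    Patient(frozenset({"E11.9"}), bmi=42.5),   # type 2 diabetes, BMI > 40
    Patient(frozenset({"E11.9"}), bmi=31.0),   # type 2 diabetes, BMI < 40
    Patient(frozenset({"I10"}), bmi=44.0),     # hypertension only
]

def count_matching(cohort, required_code, min_bmi):
    """Count patients carrying a diagnosis code with BMI above a threshold."""
    return sum(1 for p in cohort
               if required_code in p.icd_codes and p.bmi > min_bmi)

def rodens_rule(raw_count):
    """Roden's Rule, as quoted above: expect only about half the raw
    count to survive detailed phenotyping at the end of the day."""
    return raw_count // 2

raw = count_matching(COHORT, "E11.9", 40)
print(raw, rodens_rule(raw))  # 1 0
```

The public-facing counter presumably answers the same kind of question against the full 110K collection; Roden's Rule then discounts whatever number comes back.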
This is just a principal components analysis that shows the ancestry of that 83,000-person cohort. And what you can see is that it's actually quite a diverse collection of individuals, and that's, I think, one of the values of having this 83,000-person cohort and the ability to use it. So right now this is the basis of a lot of the work that's ongoing in eMERGE 3: as we generate new phenotype algorithms, we can apply them to this data set and actually ask questions about new discovery of associations that we have not previously seen. The next deliverable is the development of the eMERGEseq platform. I said one of our goals was to identify 25,000 people, actually sequence a panel of genes, and then think about how we use this in returning results to the individuals, think about their families, think about the providers, and then obviously think about long-term outcomes in terms of health. So we went through a process where we took the ACMG list, then 56 genes, now 59, but actually we were covered because it turned out we had all of them on the panel anyway. Each site then was able to propose its top six. Those were dependent on either a particular implementation strategy they were interested in, or a particular discovery project, or a mix of them. And we ended up with a total of 109 genes on the eMERGEseq panel. After careful analysis by our clinical actionability workgroup, we came to a consensus that 68 of those were actually clinically actionable in one way or another. And we could go through a whole series of SNVs, but at the end of the day, where we are now with this 109-gene panel is that we have 68 genes and 14 SNVs that we believe are clinically actionable and will be returned by all sites. I should, as a brief aside, say that some of the things that might not be clinically actionable actually are actionable in some interesting kinds of ways. 
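The ancestry picture described at the top of this slide comes from a standard genotype principal components analysis. Here is a rough sketch of the idea, with simulated data standing in for the real 83,000-person genotype matrix; the allele frequencies and two-group structure below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy genotype matrix: rows = individuals, columns = SNPs, entries = 0/1/2
# allele counts. Two hypothetical ancestry groups with shifted allele
# frequencies stand in for the real, much more diverse cohort.
n, m = 50, 200
freqs_a = rng.uniform(0.1, 0.5, m)
freqs_b = np.clip(freqs_a + 0.3, 0, 1)
geno = np.vstack([
    rng.binomial(2, freqs_a, (n, m)),
    rng.binomial(2, freqs_b, (n, m)),
]).astype(float)

# Standard PCA recipe: center each SNP column, then take the top
# principal components from the SVD of the centered matrix.
centered = geno - geno.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
pcs = u[:, :2] * s[:2]  # coordinates of each person on PC1/PC2

# The two simulated groups separate along PC1, which is the kind of
# ancestry structure a PC1-vs-PC2 scatter plot makes visible.
print(abs(pcs[:n, 0].mean() - pcs[n:, 0].mean()) > 1.0)
```

Plotting `pcs[:, 0]` against `pcs[:, 1]` gives the familiar ancestry scatter; in the real cohort, clusters correspond to continental ancestry groups rather than simulated labels.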
So I'll just use an example: some of the early sequence from our site has identified a few individuals who have variants in genes that have been previously associated with obesity. And when we return these data to those participants, it's interesting: there's an incredible sense of relief, because there's some explanation for their condition and it's not just a lack of willpower. And it also gives them the ability to think about actionability for their children, where there might actually be some ability to intervene earlier and have an impact. So I think one of the things we're learning is that actionability may be a much broader concept that we actually need to be exploring in more detail. To give you a sense of how this is looking, we're about halfway through the sequencing of our 25,000; about half of them are being sequenced at the Broad and interpreted in the Laboratory for Molecular Medicine at Partners. And our results are broken up into two categories. There are indication-based results, for people who were sequenced because there was an indication; for example, colon cancer is one of the indications that was used for sequencing. And then there are other sites where there was no indication for sequencing; it was just sort of a random probability that somebody would actually have a condition. And it's interesting to see the results; they're pretty consistent. In the indication-based return, we're seeing about one and a half or so percent that are positive. In the non-indication-based return of results, we're seeing, in the Broad's case, about 6% positive. And in the case of Baylor, we see pretty similar results: the positives are about 3.2% for indication-based and about 3.1% for non-indication-based. And I don't have time to go into some of the variations here, but they turn out to be interesting. And then what kinds of conditions are we seeing? 
Well, you can see here, for example, we have about 40 cases in the Baylor cohort that represent some cancer condition. We have about 20 that represent some cardiomyopathy. We have about 10 that represent long QT. So there's a fairly heavy cardiovascular implication here. And then a few others in terms of malignant hyperthermia and hemochromatosis. Another area where it's obviously very valuable to be able to take information from an electronic health record is to actually develop a process for how you do electronic phenotyping on data extracted from a health record. And so we've spent a lot of time in eMERGE developing multiple electronic health record-based phenotypes, and in understanding the process that's valuable in terms of making those phenotypes as robust as possible. And then we deposit all of those into a phenotype knowledge base called PheKB, which currently has about 37 finalized and publicly available phenotypes. So if you're interested in understanding how we did those 37 phenotypes, you can actually see the algorithms on PheKB. To give you a sense of the workflow, the way it typically works is we begin with one site who has an idea about how to create a phenotype algorithm. And then after they've done it, they pass that algorithm off to a second site, who usually is able to improve the algorithm significantly, because they either view things in a different way, they have a different collection of data at their site, or there's some usage of clinical values that's slightly different between sites. So that validation step turns out to be really important. And then once it's been validated, and this usually involves some iteration back and forth, we go to a sharing phase, where all of the sites in eMERGE then have the opportunity to run that algorithm and contribute to a synthetic cohort that can then be used for discovery purposes or eventually for implementation purposes. 
And then obviously the ultimate goal of that is to publish. And this has led us to develop quite a few tools in the area of phenotyping: using KNIME as a way to actually package these up and share between sites a sort of computable package that can be deployed at a new site; eleMAP, which helps us provide specific mappings between different phenotype elements; and I've already mentioned the eMERGE record counter and PheKB. All of these are important tools that are available publicly on the eMERGE website, and they're also very valuable in terms of doing this kind of phenotype development. So in phase one of eMERGE, we did a total of about 14 phenotypes. Each of those then went on to be the source of a GWAS study. In phase two, we did another 15 phenotypes, which got us up to about 30. And then we added 27 phenotypes in phase three, which will get us ultimately, by the end of next summer, to about 70 phenotypes that we will be able to deploy across the network, across either an 83,000- or, in some cases, even a 130,000-person cohort, to actually begin to look at discovery. So another important impact that came out of eMERGE, and one that I think has been very widely used to date, and I have to credit Vanderbilt with taking the lead on this, is that they developed this tool. Hopefully everybody's familiar with GWAS at this point; well, it's sort of a GWAS flipped on its head. So instead of taking a condition and asking what genetic variants are associated with that condition across the genome, it takes a genetic variant and asks what conditions are associated with that genetic variant across all of what we call PheWAS space. So it's a process by which basically ICD codes have been clustered into groups and used as a basis to do a PheWAS study. Just as an example of that, one of the things that Josh Denny led was a PheWAS of the entire NHGRI GWAS catalog of SNPs. 
And it started with a discovery arm that asked how many SNPs we could apply across this PheWAS space and actually discover new associations. And that was paired with a replication arm, which asked: across all of GWAS catalog space, how well do we replicate what's already been identified? And the answer was that there was remarkable concurrence across the entire GWAS catalog. So this is a really powerful approach to think about how to look at genotypes and their associations with multiple phenotypes. And I have to mention one that was a lot of fun that the eMERGE Network participated in, which was to actually look at variants from the Neanderthal population that exist in our collection and ask what conditions were associated with the presence or absence of those Neanderthal variants. So it turned out to be quite interesting to be able to look at today's health records and think about some of the evolutionary aspects of the variants that differ between Neanderthals and the current population. So the final area I wanted to talk about was eMERGE pharmacogenomics. We used a panel that was actually developed by the PGRN: an 84-gene panel, and we sequenced 82 of those genes across 9,000 participants. And then, where there were clinical implementation guidelines for pharmacogenomics, we put those results back in electronic health records and used them predictively when a person was going to be prescribed a particular medication. So we have this 9,000-person cohort. We have variants for which there are clinical implementation guidelines, and most of the sites now have clinical decision support that supports that at the time of prescription of a medication. But then, as you can imagine, in SPHINX we also have a lot of additional variants in those genes that have never been seen before. And so it creates yet another opportunity for additional discovery. 
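The PheWAS flip described a moment ago, one variant queried against many conditions, boils down to building a carrier-by-case 2x2 table for every phecode group. Here is a minimal sketch with invented carrier and diagnosis data; a real analysis would fit a regression or chi-square test per code rather than just tabulating counts.

```python
# Hypothetical inputs: carrier status of one variant per person (1/0),
# and the set of phecode groups (ICD codes clustered into clinical
# conditions) observed in each person's record.
carriers = {"p1": 1, "p2": 1, "p3": 0, "p4": 0, "p5": 1, "p6": 0}
phecodes = {
    "p1": {"250.2"}, "p2": {"250.2", "401.1"}, "p3": {"401.1"},
    "p4": set(), "p5": {"250.2"}, "p6": set(),
}

def phewas(carriers, phecodes):
    """For one variant, build a 2x2 table per phecode:
    (carrier & case, carrier & control, non-carrier & case,
    non-carrier & control). A real PheWAS applies a statistical
    test to each table; this sketch just reports the counts."""
    codes = set().union(*phecodes.values())
    tables = {}
    for code in sorted(codes):
        a = sum(1 for p, c in carriers.items() if c and code in phecodes[p])
        b = sum(1 for p, c in carriers.items() if c and code not in phecodes[p])
        c_ = sum(1 for p, c in carriers.items() if not c and code in phecodes[p])
        d = sum(1 for p, c in carriers.items() if not c and code not in phecodes[p])
        tables[code] = (a, b, c_, d)
    return tables

print(phewas(carriers, phecodes))
```

Iterating that over every variant in the GWAS catalog is essentially what the catalog-wide PheWAS did: each variant gets scanned across the whole phecode space, flagging the conditions where the table departs from chance.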
And we continue to collect utilization and outcomes data. And just to give you a sense of what this looks like, we've done multi-sample calling across the entire cohort. And this, again, shows you the principal components analysis. Not quite as diverse as the 83,000, of course, just because it's smaller, but still pretty good ancestry diversity in this group as well. And then, of course, we've focused on how we put this information back into electronic health records in a way that can be used by clinicians, either as they prescribe a medication or as they talk about risks in families when people have information that says their family might be at risk for something. And this is very heavily dependent on a project called the Infobutton project, which uses reusable tools that are available to most electronic health record systems. And we've also put together a clinical decision support knowledge base that provides a basis for doing further clinical decision support for people who might be interested in doing that. And then we're in the midst of experimenting right now with the use of a commercial tool called DNAnexus, where all of the eMERGE data, from the eMERGEseq platform and from the 83,000-person GWAS set, is available in the DNAnexus platform, where we can build common pipelines and common tools that anyone in the eMERGE network is able to use. And we think this is a particularly powerful approach to think about how we improve the consistency and reproducibility between the different sites. To give you a sense of the impact of eMERGE based on publications, what you can see is that we've got a pretty strong record of publications. We've been a little slow in the last couple of years because we're just in the process of sequencing all of these genes. But there are 119 manuscript concept sheets that are waiting and in the works right now to use that data as it becomes available. 
And we're very proud of the citation history for the eMERGE network: cumulatively, between eMERGE 1 and March of this year, it has accounted for about 17,000 citations. You can see about half of them are in the area of genomics, but they also range across phenotyping, return of results, and electronic health record implementation. And then, finally, another way of thinking about use of this data is that all of it is, of course, deposited into dbGaP. And you can see that the data is getting increasingly utilized, as approved authorized downloads from dbGaP, so that people can reuse that information as well. I've mentioned most of these tools already, so I won't belabor them. But if you go to the eMERGE website, and the link is on one of these slides, I can't remember it off the top of my head, you can get access to all of these tool sets for use. And then where we are right now is future deliverables. So, dbGaP submissions for eMERGE 3, this 25,000-person, 109-gene panel: we're going to do about half of that as an interim submission that will be happening momentarily, and then the full submission will follow later. We've already submitted the GWAS E1-through-E3 imputed set, or are about to. We have developed IT support at each of the sites to put clinical implementations back in the electronic health record. And while I can say that in one or two sentences, it's non-trivial, and it requires a lot of cooperation between the clinical side and the research side. I could talk for hours about that, but I won't bore you with it. We're focused on outcomes, both in terms of the implications for families and participants, and also for the healthcare provider group. And then, as I've already alluded to, creating and deploying an additional 27 phenotypes and then using those phenotypes for discovery. 
So I know it was a whirlwind tour, and I already went over my time a little bit, but I'm happy to answer any questions any of you might have and talk with you further about what eMERGE is up to. And I don't know if any of my eMERGE colleagues want to say anything at this point as well. Questions for Rex? Right. No, it's just a fantastic resource. Thanks for everything. Thanks for letting us do it. I have a really mundane question about DNAnexus. How does that interact with dbGaP? Do you presumably first have to get dbGaP approval so you can use DNAnexus, or is there some special status there, because you're not actually downloading the data but logging into their system? Well, right now, DNAnexus is available only to eMERGE investigators who have appropriate data use agreements in place. So it's just for eMERGE investigators at this point in time, to see how they use it. So in that case, are there plans to open that to a broader community? It's a great question. It's not cheap. So I think the question will be how we manage the costs of that. But, and here I'm giving just my own opinion, if we can demonstrate the utility of it for the eMERGE network, I don't know why we wouldn't want to think about a resource like that being broadly available. I have a question related to that, because DNAnexus is a company, right? Yes, it is. So what is the line of thinking around choosing to go with a commercial solution like this, which actually gets built and paid for and developed as part of a research project like eMERGE, but later on basically locks the community into using that specific product? Is that a concern? For sequencing we do it all the time without, I think, much concern, but for software, I don't think that has generally been the solution. I think this is an experiment. It's to actually ask the question: how well does it work for a consortium like this? 
And the answer is just unknown at this point. It may work very well, in which case people may find it compelling, even though it is supporting a company. But I also appreciate there's a strong desire to have an open source community that can support these things as well. So I want to raise two issues related to that. One, I think, is on people's minds, which is financial: as I said, it locks you into a particular solution. But the other has to do with open source. I think often when people think about open source, they think, well, it's free and I can do whatever I want with it. That's one aspect of it. But for methods people, the reason they're so gung-ho about open source is that then you actually know what methods are being applied to your data. If the source is not open, you actually don't know. And that becomes increasingly a concern these days. And I think even when a commercial partner in this area is introduced, since they can still sell their product in certain ways, there should be a lot of consideration of that aspect, which goes beyond the fact that they built a data platform or not. I agree completely. It's probably worth saying, though, that one of the features of DNAnexus is that anybody can actually build a tool and deposit it into DNAnexus. So it's on top of their platform, but there is the ability to do methods development for analysis that sits on top of it. But your point's well taken. First, it's good to move the conversation back to eMERGE, and it's very impressive. Could you talk a little bit about the success of putting genomic data into the electronic medical record? Not the technical aspects; in particular, it seems the value of that is downstream. How often, and it's probably too new, but how often is that used by the healthcare entity going forward? Yeah, so that's a great question. 
So I'll talk about what I know best, and then I don't know if Dan might want to say anything about their experience at Vanderbilt. We, for example, for the Pharmacogenomics Project, have built clinical decision support that fires at the time a provider writes or attempts to write a prescription for a medication. And if there is a relevant variant in the record that would affect metabolism of the drug they're about to prescribe, clinical decision support fires and says, this individual is a poor metabolizer; you might consider an alternative medication if there is one. And that's all built into the system. Today at Northwestern, that is firing only for the people who were recruited into the eMERGE Pharmacogenomics Project, but because of the way we've built it at Northwestern, it's a flip of a switch to actually expose that to the entire health system, which now consists of about 7 million people. So we're actually in discussions with our healthcare system about when you might turn that on. Now, in the case of PGx, there's another prerequisite, which is that you have to have the sequence of those genes in order to pre-populate the electronic health record. There are some costs associated with that, so one needs to think about that. But the IT infrastructure is already in place and literally could be turned on by the flip of a switch. I think as we start to think about risk scores for complex conditions, it gets a little more complicated, and the economic benefit of that, I think, is probably too far in the future for us to talk about. But things like this Infobutton project give resources so that a physician can at least ask a question and get to underlying literature, and get to additional sites that might provide information relevant to the particular condition they're interested in. And it's great to hear that there's an economic analysis involved, and I assume that as part of eMERGE someone's looking at the cost-effectiveness of this. 
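A decision-support rule of the kind just described, firing at prescription time, can be sketched as a lookup from the patient's stored pharmacogenomic result to drug-specific guidance. This is a toy illustration, not the Northwestern implementation; the patient ID, data structures, and alert text are invented, and CYP2C19/clopidogrel is used only as a well-known example of a gene-drug interaction.

```python
# Hypothetical PGx results pre-populated in the record, and a lookup
# from (gene, metabolizer phenotype, drug) to an alert message.
PGX_RESULTS = {"patient-42": {"CYP2C19": "poor metabolizer"}}

GUIDANCE = {
    ("CYP2C19", "poor metabolizer", "clopidogrel"):
        "Poor metabolizer: consider an alternative antiplatelet agent.",
}

# Which gene is relevant to which drug (illustrative, not exhaustive).
DRUG_GENES = {"clopidogrel": "CYP2C19"}

def cds_check(patient_id, drug):
    """Fire a clinical-decision-support alert if the patient carries a
    variant relevant to the drug being prescribed; otherwise stay silent."""
    gene = DRUG_GENES.get(drug)
    if gene is None:
        return None          # no PGx rule for this drug
    phenotype = PGX_RESULTS.get(patient_id, {}).get(gene)
    if phenotype is None:
        return None          # no stored result for this gene
    return GUIDANCE.get((gene, phenotype, drug))

print(cds_check("patient-42", "clopidogrel"))
print(cds_check("patient-42", "lisinopril"))  # no PGx rule: no alert
```

The "flip of a switch" point maps onto which patient IDs appear in the results table: the rule logic is the same whether one research cohort or the whole health system is pre-populated.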
So that's one of the outcome measures that the outcomes workgroup is focused on. You know, that's great; my compliments. Carol? Yeah, this is awfully interesting, and it was a whirlwind tour, and I found myself wanting to go, but wait, and also, is there a publication on that? And so if we can have your slides; but also there are hundreds of publications, I'm sure. I'm most interested in the return of results and how you actually have made that happen. But my question is whether, I mean, this is such an impressive body of work. It's gone on for a long time, and you've really refined it in a number of institutions now, moving toward all of the things one could think about as being maybe possible, hopefully possible, within the All of Us study. So it just really strikes me, you don't wanna reinvent the wheel a hundred times. Are there efforts to kind of harmonize what you're doing with the kinds of projects that are going on with All of Us? The answer to that is yes. There's actually quite a bit of cross-fertilization between people who are sites for eMERGE and people who are sites for the All of Us project. Josh Denny, who I've already mentioned, from Vanderbilt, is a PI on All of Us, at least for the consent project. And then there are several other people involved, either in the genomics group at All of Us, or in a new group about privacy and security that also includes several eMERGE participants. In a lot of ways, and it may be overstating it a little bit out of pride, but I think eMERGE has been fairly influential in informing how All of Us moves forward. But All of Us, and I'm a very active participant in the All of Us project from our site and through committees, is a huge operation; it's an aircraft carrier. 
And eMERGE is much less of an aircraft carrier; eMERGE is actually a much more nimble organization, where we can test out some of these things before trying to deploy them on the aircraft carrier, and think about what works, what doesn't work, and really go for the best strategy as we think about transferring what we've learned from eMERGE to the All of Us project. So Rex, can I just jump in before we go to Val? I was gonna ask a similar question, and I guess what I'd like to hear is your impression, or maybe Dan could weigh in, because I know there are a couple of points of joint involvement. What you just described makes total sense, but is that coming from the people who are involved in both, or are you getting traction within the broader All of Us consortium to embrace eMERGE as that nimble small boat out in front, or whatever metaphor you wanna use? I think the answer to that is that it's early days. I mean, early on, as the All of Us project was being conceptualized, eMERGE got looked to quite regularly: what can we learn from what eMERGE did? I think it's fair to say, and now this is my own opinion as an active participant in the All of Us project, that All of Us is so focused on building the platform for participant recruitment that we've not spent as much time as I, for example, would have liked to see us spend thinking about downstream uses of the All of Us platform. It's sort of, if you build it, they will come, and that's the stage that we're in now. And I think when we get to the stage of actually doing all of these things, then they're gonna need to look back to things like eMERGE and CSER and some of the other activities NHGRI has funded as a way to learn how to do this better. Dan, do you share that view, or do you wanna add anything? I had my hand up because I was gonna say something, but it's all been said already, so I think I'll just try to shut up. Val, I think you're up next. Rex, very nice, thanks. 
My question goes to this anecdote, or maybe it wasn't anecdotal; that's my question. You talked about how someone was made to feel less guilty because they had some obesity-predisposing variants. Was that based on anecdotal data, or do you have a study? No, it's anecdotal. I mean, as you saw, we're doing 3,000 people at Northwestern, at our site; each of the sites is doing 3,000 people. And out of that, it's like four people. So I call it anecdote, not a study, but I think it does create the opportunity for us to understand the impact on them and on their families in a storytelling kind of way, and that's where it starts. And then once we've heard that, I think we can talk about how in the future you might think about a study to do it, but it's absolutely anecdotal; no statistics, the numbers are too small to even think about statistics. Yeah, so the flip side of that is someone might just plain give up, right? Because they say, oh, it's in my genes. And so genetic determinism is something that needs to be balanced out. But the other question was with regard to returning information to the patients, with regard to clinical actionability: where is it going with regard to heterozygotes for things, recurrence risks? Yeah, so that's clearly something that we need to be thinking about as a next step. But like I said, we're about halfway through the sequencing, so that's like five to 6,000, no, it's about 11,000, I guess, at this point. We don't yet have much experience with returning those results. We're seeing, call it 3%, of people that have a returnable result. And I would say at most sites, well, we were lucky; we were one of the first sites to get our sequence back, and we've only returned to probably a dozen people at this point. So it's just too early to tell, but that's exactly the kind of question I think we're well positioned to answer. Thank you, that was very helpful. So I have two questions, one more general and one specific. 
So the specific question: it looked like, based on patients' phenotype, the proportions were no different. If I understood, based on indication, it seemed like it was still about 3%, which is what people have reported across the board for incidental findings. I was curious if you meant that patients with cancer, for example, had no more than people without. That was the more general thing, since this is part of the open session. The answer to that one is yes. I mean, if you were selected for colon cancer, then obviously we knew about your colon cancer, but the other findings were no different. Oh, I see, okay, so I misunderstood that. But since I think we're still in open session: you spent a lot of time talking about all the work on phenotypes, but I think for a lot of people it may not be clear what you mean. So you just talked about obesity. So this whole long list of steps is what it takes to really look across electronic health records and agree on how to measure obesity? Is that what that series of steps is about? Precisely. So let me use the example of type 2 diabetes. That was actually one of the first algorithms we did in eMERGE 1. And it's surprising how complicated it is to produce an algorithm that actually has a high positive predictive value and a low false-negative rate for type 2 diabetes. It involves several steps. It involves several if-then, yes-no branches. It involves not only codes, but medications and a variety of other aspects. So even something as relatively simple, notice the air quotes, as type 2 diabetes took a fairly complex algorithm. And they just get more and more complicated. And one of the things we're finding now in eMERGE 3 is that almost every single one of the algorithms we're doing is requiring some significant natural language processing to be able to look at free-text fields in the electronic health record. 
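A rule-based phenotype algorithm of the kind just described, combining codes, medications, and labs through if-then branches, can be sketched like this. The codes, medication names, thresholds, and two-of-three evidence rule below are all invented for illustration and are not the actual eMERGE type 2 diabetes algorithm.

```python
# Toy evidence sets, standing in for the much longer real lists.
T2D_CODES = {"E11.9", "250.00"}     # type 2 diabetes diagnosis codes
T1D_CODES = {"E10.9"}               # type 1 codes trigger an exclusion
T2D_MEDS = {"metformin", "glipizide"}

def is_t2d_case(icd_codes, meds, max_glucose_mgdl):
    """Classify a record as a type 2 diabetes case only when several
    independent lines of evidence agree, which is how these algorithms
    keep positive predictive value high."""
    if icd_codes & T1D_CODES:          # exclusion branch: type 1 diagnosis
        return False
    has_code = bool(icd_codes & T2D_CODES)
    has_med = bool(meds & T2D_MEDS)
    has_lab = max_glucose_mgdl >= 200  # hypothetical lab threshold
    # Require at least two of the three evidence types.
    return sum([has_code, has_med, has_lab]) >= 2

print(is_t2d_case({"E11.9"}, {"metformin"}, 150))  # True
print(is_t2d_case({"E11.9"}, set(), 150))          # False: one line of evidence
```

The multi-site validation step described earlier in the talk is essentially other sites running a function like this against their own data and discovering which branches, code lists, or thresholds need to change.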
It's exceptionally powerful when you do that, but it requires some infrastructure. I do want to amplify one thing that Rex said, sort of in response to Val's question as well. So if there are four out of 3,000 people with a funny genotype at Northwestern, and you then go back and look at their phenotypes, that means there'll be 35 or 40 once the data set is finished. And that's a number that I think starts to get interesting. If it's one out of 40,000, then it's all about families and traditional genetics, but if it's 40 out of 25,000, then you can start to look at the electronic record itself and say, how penetrant is this variant, and what are the phenotypes, and what are the responses? So I think that's one of the things that we're on the edge of learning, and of course, once you get to a million, then all bets are off, or not, but I think that's one of the real appeals of this going forward. Any last comments? No? Thank you very much. Thank you, Rex. Extremely helpful.