Thanks, Gail. One clarification of what Gail said related to ClinGen: I'm on the Actionability Committee, and while we love to have clinical guidelines or GeneReviews, that's not a requirement. We have, in fact, reviewed things that don't have those; they just get a lower evidence score.

So this is on behalf of my co-chairs, Hakon Hakonarson and Josh Peterson, who are both here. The mission statement of the Outcomes Working Group is to develop cross-site outcomes to track the implementation and impact of eMERGE 3 sequencing, focusing on answering the overarching question of whether returned eMERGE 3-generated genomic results impact healthcare utilization and outcomes of importance to patients and families.

Now, I want to take a little bit of time here, since maybe not everybody is familiar with the rubric that we're using. I should have put them in red since I said rubric, but we'll ignore that color challenge here. The example I'm going to use is an MLH1 variant associated with Lynch syndrome.

When we think about outcomes, we think about them in three different ways. First, there are process outcomes: a potential change in healthcare utilization related to returning genetic information. So, for example, if we return a pathogenic variant in MLH1, we might order a colonoscopy. That would be a process outcome: was a colonoscopy ordered based on the recommendation that accompanied the result? Most everything that's done in medicine and quality improvement focuses on process outcomes, because they're easy to measure, or at least easier to measure. But as Einstein once said, not everything that counts can be counted, and not everything that can be counted counts. The issue is that a process outcome is quite a ways away from the more proximate clinical outcomes that we'll get to. Second, there are intermediate or surrogate outcomes, which would be a biomarker or something else indicating that benefit or harm is more likely.
So, again, using the colorectal cancer example: if a fecal occult blood test was performed and it was positive, that would be an intermediate outcome. That would indicate that there is, in fact, a higher probability that this individual may have incident disease related to this variant. Adherence to the recommendation would also be an intermediate outcome: we ordered the colonoscopy, but did the patient actually get it? And if they did get it, that moves us closer to the clinical outcomes, which are the actual measurement of benefits or harms to a patient who receives an intervention. So if the colonoscopy was ordered, and the colonoscopy was performed, and we saw that adenomatous polyps were removed as a consequence, that is a clinical outcome, because we know these are pre-cancerous lesions, and so this is a positive clinical outcome.

Now, what I've just presented is what we also call a chain of evidence: if we choose to use a process outcome or an intermediate outcome, how confident are we that measuring that outcome actually relates to the health outcome of interest? For colonoscopy, an intermediate outcome, and colorectal cancer, there's a very strong chain of evidence indicating that performing the colonoscopy and removing any polyps that are found has a very dramatic impact on reducing the incidence of colorectal cancer. We also have a very good chain of evidence that an LDL cholesterol under 100 milligrams per deciliter, which is an intermediate biomarker outcome, relates to the risk for coronary artery disease. There are some with intermediate levels of evidence, such as prescribing a beta blocker, which is a process outcome, and sudden cardiac death in individuals who have one of the inherited arrhythmia disorders. But you can really only count that as an intermediate outcome if you measure adherence.
It's all well and good for me to order the beta blocker, but if the patient never fills the prescription, or doesn't take the medication, then we don't have evidence that this would improve the health outcome. And then there are outcomes with very weak evidence, for instance CA-125, which would be an intermediate outcome, and ovarian cancer, or total-body MRI, again an intermediate outcome, and Li-Fraumeni-associated cancer mortality. So as we think about which outcomes we're going to measure, we have to understand what type of outcome it is and what the evidence is supporting its relationship to a health outcome of interest, which is ultimately what patients and family members care about.

So what we've been doing, in anticipation of the return of results, is developing standardized outcome measurements that include process, intermediate, and health outcomes across all the different conditions that will be returned as part of eMERGE 3. There is a general intake form that will be filled out, for all participants who will be getting results, by the site that is returning those results. And then we have a whole series of these, and you can see that they vary from very simple ones (for ornithine transcarbamylase deficiency, for example, there are only six outcome measures that need to be captured) to ones like aortopathy, where there are 67 outcomes that need to be captured. All of these are placed into a REDCap database, so there's standardized collection across all of the different sites. We will, in fact, be able to collect the data, correlate the data, and analyze the data collectively.

Challenges. As Gail pointed out, the return of results is occurring late in the process, so we are heavily reliant on process and intermediate outcomes because of the length of the project. This means that the evidence we have in terms of association with outcomes is much weaker. We have a single time point of assessment for outcomes.
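As a loose illustration of the standardized, cross-site capture just described, one could model each outcome measure with its outcome type and chain-of-evidence strength. This is only a sketch; the class and field names here are hypothetical, not the actual eMERGE REDCap instruments:

```python
from dataclasses import dataclass
from enum import Enum

class OutcomeType(Enum):
    PROCESS = "process"            # e.g., colonoscopy ordered
    INTERMEDIATE = "intermediate"  # e.g., positive fecal occult blood test
    CLINICAL = "clinical"          # e.g., adenomatous polyps removed

class ChainOfEvidence(Enum):
    STRONG = 3    # e.g., colonoscopy and colorectal cancer incidence
    MODERATE = 2  # e.g., beta blocker prescribed and sudden cardiac death
    WEAK = 1      # e.g., CA-125 and ovarian cancer mortality

@dataclass
class OutcomeMeasure:
    condition: str          # e.g., "Lynch syndrome (MLH1)"
    description: str
    outcome_type: OutcomeType
    evidence: ChainOfEvidence

def needs_adherence_measure(m: OutcomeMeasure) -> bool:
    # A process outcome only supports inference about health impact
    # when adherence is also measured (was the ordered test done?).
    return m.outcome_type is OutcomeType.PROCESS

m = OutcomeMeasure("Lynch syndrome (MLH1)", "colonoscopy ordered",
                   OutcomeType.PROCESS, ChainOfEvidence.STRONG)
print(needs_adherence_measure(m))  # True
```

The point of structuring it this way is the one made above: a measure's type and its chain of evidence travel together, so any cross-site analysis can weight outcomes accordingly.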
Again, pragmatically, we just had to say we have to measure outcomes once; we don't have adequate time to measure them longitudinally, which is what we would prefer. So we will measure them once, at six months post return of result. That interval wasn't picked randomly. There's some evidence suggesting that when you return a result, it takes a certain amount of time for the participant to process that result and act on it. So rather than measuring at a one-month or three-month time frame, we thought six months would be a bit more robust. The timing of sequencing and reporting has also been commented on.

Another thing that is very difficult is: how do you attribute an outcome to the returned result? In other words, you return that pathogenic variant in MLH1 and the patient has a colonoscopy. It would be great to say, well, we returned the result, a colonoscopy was done, that's a win. But if that patient had already been scheduled for a colonoscopy prior to the return of the result, then we really didn't improve that outcome. So what we're trying to do here is get a little bit of increased confidence about whether or not the returned result actually led to the outcome of interest. And we're relying on assertion by a site; in other words, I look at it and say, yes, I think this was related to the return of results, or no, I don't think it was. It's not the most robust way to go about doing it.

If we think about opportunities for measuring health outcomes: if we follow the model that has been followed from eMERGE 1 through 3, perhaps there is the potential to follow some of these participants into eMERGE 4. But this is much less straightforward than the phenotyping algorithms and the GWAS efforts that have been carried across these three funding cycles. We could identify conditions or genomic results.
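Returning to the attribution problem above: a crude heuristic, much weaker than site assertion plus review but easy to automate, is to ask whether the procedure was ordered after the result was returned and performed within the six-month assessment window. This is a hypothetical sketch, not the working group's method; the function name and window length are assumptions for illustration:

```python
from datetime import date, timedelta

FOLLOW_UP_WINDOW = timedelta(days=183)  # roughly six months post-return

def plausibly_attributable(result_returned: date,
                           procedure_ordered: date,
                           procedure_done: date) -> bool:
    """Heuristic: an outcome is plausibly attributable to the returned
    result only if the order postdates the return and the procedure
    falls inside the six-month assessment window."""
    if procedure_ordered < result_returned:
        return False  # already scheduled before the result came back
    return procedure_done <= result_returned + FOLLOW_UP_WINDOW

# Colonoscopy ordered a week after return, done two months later.
print(plausibly_attributable(date(2018, 1, 1),
                             date(2018, 1, 8),
                             date(2018, 3, 1)))   # True
# Colonoscopy already ordered before the result was returned.
print(plausibly_attributable(date(2018, 1, 1),
                             date(2017, 12, 1),
                             date(2018, 2, 1)))   # False
```

Even this simple check captures the MLH1 example in the text: a colonoscopy scheduled before the result came back should not be counted as a win for return of results.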
This gets to the prioritization piece that was brought up earlier: conditions where health outcomes are more likely to accrue within that four-year time frame, or for which there's a very strong chain of evidence that a process outcome is related to a health outcome of interest. Things I had thought about were pharmacogenomics for common drugs. We actually did a little bit of that in eMERGE 2, and that's an area where, at least for some of the pharmacogenomic returns, you could see something within a four-year time frame.

I think a huge issue is unrecognized genetic disorders. There's been a lot of criticism, and perhaps this is my personal bias as a clinical geneticist, along the lines of: you're only focusing on Mendelian disorders, they're rare, why don't you do something important like cure diabetes? Well, good luck with that. The reality is that Mendelian disorders are, in aggregate, common; in our data, it's 3.5 to 4% of the population. That's not an insignificant number. We can identify these people with high confidence, and we know what to do with them. So the impact is actually quite high, even though the number is relatively low.

And what we've been seeing, and I think we're also beginning to see in eMERGE, is that there are people out there who have genetic disorders and have no clue that they have a genetic disorder. Looking at our data on CFTR variants, we find individuals who have two pathogenic variants in CFTR. They have diagnoses in their chart like bronchiectasis or chronic obstructive pulmonary disease; they don't have cystic fibrosis as their diagnosis. There are metabolic disorders, like ornithine transcarbamylase deficiency, that can have adult-onset, milder forms but can still lead to morbidity and mortality. And Dan will note that I added renal disease and dialysis patients. He emailed about that earlier today, so I figured I'd just throw that in there to throw him a bone.
So just to note that somebody's paying attention, Dan. And FH is a huge opportunity. This is one of the Tier 1 genetic conditions defined by the CDC. LDL cholesterol is a very good surrogate marker, and we have very good treatments. This is, I think, very important.

Thanks, Sharon. Just about done. Getting sequencing results faster to allow for longer follow-up has been mentioned, as has developing and testing methods to attribute outcomes. Challenges: outcome collection approaches are still site-specific, in contrast to phenotypes, where we use the same approach, and there are manual processes required, which is very inefficient.

I referenced this earlier, so I won't go into great detail, but I think if we take an implementation science approach, we can learn from this. I referenced this R01 on dissemination and implementation of Lynch syndrome screening, not to be self-serving, but to note that this is the first genomic project funded through the NIH's dissemination and implementation science program, and I think there are lessons to be learned that could be applied to eMERGE 4. Same thing here: collaboration with the pragmatic trials in IGNITE 2 around certain conditions. Now, we don't know what those trials are going to be yet, but we will know, and we should certainly look at how we can do that, because then we'll have two different ways to accumulate evidence: a pragmatic trial methodology and more of an observational cohort methodology. Those two could be synergized, but we have to have standard outcome measures to be able to do that.

And then the public health impact of cascade testing makes this a point of emphasis: to develop and test methods, which could include a legal and policy emphasis, to inform novel approaches to contacting at-risk relatives, rather than the typical approach of contacting the patients themselves. Economic outcomes have been referenced.
Again, there is an NIH-funded, NHGRI-funded R01 from Vanderbilt University, Washington, and Geisinger that is looking to develop and test models to understand which outcomes would drive cost-effectiveness. This model can identify which outcomes are most important to capture, and we can design eMERGE 4 to prioritize those outcomes. And with that, I'll stop. Thank you.

So, Eric Boerwinkle is going to give the response.