Okay, well, you've had a few minutes to look at the questions that we were tasked with. The first is really about utility, validity, cost-effectiveness, quality of life, the more outcomes-y question. The second is what other information we can integrate into our testing and analyses. I'm going to be mostly in the first space, and Mark's going to be mostly in the second, but as usual, we'll step on each other a little.

So you've seen this slide, but I did want to point out a couple of things. We have 109 genes sequenced on this panel, of which we've called 68 clinically actionable. We have a lot of variants, including a lot of pharmacogenetic variants. Those are not counted among the actionable, so keep that in mind; that's a different kind of actionable that we weren't counting. We also called 14 variants across about 10 genes clinically actionable, to be returned, and a lot of those are homozygous for recessive disorders, hemochromatosis being a good example. And then we say returned by all sites, but I'm going to show you that's not exactly true.

What was really useful about this exercise was coming to agreement across all the sites, because that way we could have more common data return. And of course, you'll notice this is more than the 59 genes that ACMG endorses, which are fundamentally based on the ClinGen categories. The differences there are twofold. One is that we allowed a little less penetrance than they tend to prioritize. The other is that for the ClinGen group to call something clinically actionable, there has to be a published practice guideline, or at the very least a GeneReviews entry, that addresses what clinical change you would make. And for genetics, that's often not the case. So for us, we're able to say, well, as clinicians who are practicing in this space, we believe this is actionable.
And hopefully we'll be able to provide the level of published evidence that ClinGen needs to go ahead and make a broader recommendation.

All right, so this is what we're actually returning across sites. There's a group of sites that are returning the consensus list identically, and this doesn't really include pharmacogenetics, as I said. There's a group that has added to the consensus list some genes specific to the traits they're looking at. One of the pediatric sites is not returning some of the adult variants, for, I think, obvious reasons, and Geisinger has its own model that we could spend a lot of time on, but we won't. It really is helpful, though, because we'll be able to get the same kind of follow-up evidence on most of these genes. And sites were pretty flexible. There were definitely some edge cases in here that we decided to go ahead and include as actionable, because we wanted to learn what happens when we return them, because this is research after all.

I think this is a really important thing when you think about outcomes. This is the sequencing and reporting timeline, and as you can see, it goes across three years of the program. The model that was used is that every site had their data split in half, so if you were first in, you are last out with the second half of your data, and if you were last in with your first half, you are the first to get your second half. But that means we're getting data way out here, and for outcomes measurement, that's problematic, right? Because you cannot follow these people very long toward the end of the program.

All right, how we actually return the data. The one loop that varies is when the primary care provider hears the results: sometimes they hear after the patient, sometimes they hear before.
It's notable, though, that everyone is really using medical genetics professionals, either medical geneticists or genetic counselors, to return these data. And that's something we need to think about as far as throughput in the future.

OK. So this is just an example of the incidental findings. It happens to be from our Seattle site. One of the points I want to make with this slide is that there are a lot more cardiomyopathy results than there ought to be, right? Cardiomyopathy should be, like, one in 300, one in 500 at most, and this is out of 1,163 people. That's just too many. Two things are going on here. One is that we're returning likely pathogenic variants, which is not the ACMG guidance; the ACMG guidance is known pathogenic and expected pathogenic variants, not likely pathogenic. And I think what we're going to find is that a lot of these likely pathogenic calls actually don't meet that 90% threshold for probably being pathogenic, and are wrong. Or a bunch of them will be low penetrance. And that's one of the things we're really going to be able to get at, because out of the four likely pathogenic variants for this particular gene, which is a common hypertrophic cardiomyopathy gene, three of them are the identical variant. It's a variant with a fair amount of literature behind it, but it's kind of frequent. So if we're getting three out of roughly 1,000 here, then across the network we may get enough to really look at penetrance for that specific variant. That's a real opportunity for us. Many of the SNPs that were added to the platform were actually added for that reason: they are called pathogenic or likely pathogenic, but they look too common, and we want to understand the penetrance.

As far as added data, we've had this model in eMERGE that we have not gone back to the patients.
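Looping back to the cardiomyopathy counts for a moment, the excess-carriers argument is simple arithmetic. This is a minimal sketch using the illustrative numbers from the talk (prevalence of at most about 1 in 500, and 1,163 sequenced participants at the example site); it is not an analysis of any real cohort.

```python
# Back-of-the-envelope check of the excess-carriers argument.
# Numbers are illustrative, taken from the talk: hypertrophic
# cardiomyopathy prevalence of at most ~1 in 500, and 1,163
# sequenced participants at the example site.

prevalence = 1 / 500     # upper-end population prevalence estimate
cohort_size = 1163       # participants at the example site

expected_affected = prevalence * cohort_size
print(f"expected affected in cohort: {expected_affected:.1f}")

# If the number of reported pathogenic / likely pathogenic carriers is
# well above this (~2.3), then either some calls fall short of the ~90%
# pathogenicity threshold, or the variants are low penetrance.
```

If the reported carrier count runs several-fold above that expectation, that is the signal that some combination of miscalled variants and low penetrance is inflating the results.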
This is really the first time we're returning genetic results and even doing some re-phenotyping, but we haven't collected extra data from participants regarding our phenotypes; we've relied on electronic health record-derived phenotypes. So a newer model that was brought in just recently with a supplement is the geocoding. As you can see, you can get a lot of socioeconomic, food accessibility, traffic, and environmental data; there's a lot built into this geocoding. So this is another way, without going back to the patients, that we can use existing data to get a lot more information for our analyses, to look for gene-by-environment interactions, to stratify our analyses, et cetera. We're looking forward to having those data available.

This little thing says "Made of 100% recycled genetic material." It's a baby t-shirt, for those of you who are not familiar. Mark wanted one, but we pointed out the size problem.

So the family history data is incredibly useful. These are genetic diseases, and in particular we have a big focus on things that are really Mendelian at the moment. The family history is useful in stratifying the data and looking for penetrance. It's also very useful for looking at co-segregation, so that when you find someone with a variant, you can not only look for the phenotype in them, but look for the phenotype in their family members. It's a very useful way to get at penetrance as well. However, family history itself is not captured well in electronic health records. There's not a standardized form, it's very difficult to pull back out at most sites, and those who do have it electronically have it in different formats. So a standardized format for family history across all of eMERGE, where we're going back to participants, would be very useful. And there are other things you could collect as well; I think going back to patients with some of the apps for data collection, or even some of the wearable technologies, would be useful.
We do have to keep in mind these are biorepository-based samples. Many of these people are lost to follow-up or are dead at this point, especially in the older cohorts. But for the people we're bringing in and seeing, there may be ways to capture data that we haven't tried before in eMERGE.

So cascade testing is very important for penetrance. One of the findings of a nice recent paper by David Veenstra was that one of the major drivers of cost-effectiveness for genomic testing (this was done in Lynch syndrome) was the number of family members who got tested after the initial diagnosis was made. Because, of course, the first sequencing panels are a couple of thousand dollars, but those follow-up tests are just a couple hundred dollars, because you know exactly what you're looking for. So the ability to think more about the family, and cascade testing through families, I think, is an opportunity for eMERGE. And in particular, we have found that family communication just isn't great around these things. So how can we make the family communication easier and more successful, so that we get the biggest bang for our buck?

So, challenges and opportunities. This is still, for me as a clinical geneticist, the heart of it. We have to know what genes are associated with what diseases, which variants are pathogenic, and what their penetrance is. And I'm particularly concerned about pathogenicity and penetrance. One opportunity we have is to standardize what is allowable to return across sites, because site variation keeps us from getting the same data into one larger database. Re-phenotyping: when you find a variant and the person's not known to have the disorder, you can look at their electronic health record, or you could just do a physical exam when appropriate, or sometimes follow-up testing, such as an EKG or an echo, might be really useful to understand whether or not the variant is penetrant in that person.
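The cost arithmetic behind the cascade-testing point can be sketched quickly. The dollar figures below are just the rough ones quoted here, and the 50% carrier prior for first-degree relatives of a proband with a dominant condition is my illustrative assumption, not something from the talk.

```python
# Sketch: why cascade testing drives cost-effectiveness.
# Dollar figures are the rough ones from the talk, not real prices;
# the 50% carrier prior (autosomal dominant, first-degree relatives)
# is an illustrative assumption.

PROBAND_PANEL_COST = 2000   # initial sequencing panel, ~$2,000
RELATIVE_TEST_COST = 200    # targeted single-variant test, ~$200

def cost_per_carrier_found(relatives_tested, carrier_fraction=0.5):
    """Average cost per carrier identified, counting the proband."""
    carriers_found = 1 + relatives_tested * carrier_fraction
    total_cost = PROBAND_PANEL_COST + relatives_tested * RELATIVE_TEST_COST
    return total_cost / carriers_found

for n in (0, 2, 6):
    print(f"{n} relatives tested: ${cost_per_carrier_found(n):.0f} per carrier found")
```

The more relatives each proband reaches, the cheaper each identified carrier becomes, which is why family communication tools matter so much here.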
Similarly, family cascade testing, not just for co-segregation but also for penetrance. In the data that's being returned now, we are absolutely finding patients who have a pathogenic variant for cardiomyopathy; they themselves don't have it, but they have a family history that's compatible, and we're just working through testing those family members to see. Pooling of data across sites, especially for these not-novel but low-frequency variants: if they're one in a thousand, we'll get about 25 of them across the network, and we'll be able to really look at penetrance for some of these.

Reanalysis I didn't really talk about, but one of the real advantages of getting the data early would be the ability to reanalyze it to see how much changes in a couple of years. As the knowledge base about variants increases, variants, especially the VUSes and likelies, well, honestly, they all change categories, and changes are made in every direction. It would be really nice to understand the rate at which those change; it'd be really useful information to share with patients. And then, methods to share variant reclassifications with the patients and participants themselves. How do we get back that information? If we've seen you once, how do we follow through if the variant changes?

Adding family history to analyses, again with standardized tools across sites; and again, you could expand that to wearables, apps, and other things, and we'll hear more about that later today. Adding demographics: I think geocoding is gonna be a big opportunity. We do have some basic demographics in the electronic health record, and we can get ancestry from our principal components. It is cost-effective when family members get tested, so family communication tools and psychosocial data, cascade testing, and more efficient ways of returning the results. Maybe we don't need a genetics professional in every case; maybe at least some of the counseling can be done in a web-based format.
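To make the pooling point concrete, here is a minimal sketch of a pooled penetrance estimate. The network size of roughly 25,000 is inferred from "1 in a thousand gives about 25 carriers," the affected count of 5 is purely illustrative, and the normal-approximation interval is admittedly crude at counts this small.

```python
import math

# Sketch: pooled penetrance estimate for a low-frequency variant.
# Network size (~25,000) is inferred from "1 in 1,000 gives ~25 carriers";
# the affected count (5) is purely illustrative. The normal-approximation
# confidence interval is crude for counts this small.

def penetrance_ci(affected, carriers, z=1.96):
    """Point estimate and approximate 95% CI for penetrance."""
    p = affected / carriers
    se = math.sqrt(p * (1 - p) / carriers)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

carrier_freq = 1 / 1000
network_size = 25_000
expected_carriers = carrier_freq * network_size   # about 25

p, low, high = penetrance_ci(affected=5, carriers=25)
print(f"~{expected_carriers:.0f} carriers; penetrance {p:.2f} "
      f"(approx. 95% CI {low:.2f}-{high:.2f})")
```

Even with the wide interval, 25 pooled carriers is enough to distinguish a highly penetrant variant from a mostly benign one, which a single site's three or four carriers cannot do.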
This, I feel like, is a real missed opportunity in the current cycle. As you saw, about 97% of the patients have a negative panel; there's no positive finding. For the vast majority of sites, with just one exception, there is no negative report generated and returned to the electronic health record. Those are valuable in medical care, and that's something we're not really addressing. I think there was a cost component to not getting the negative reports, but we've already done all the analysis, so generating those reports and implementing them, I think, would be very worthwhile.

And then, I think this is something on the outcomes side. We really need to think about the current structure, and this is no criticism of the sequencing centers, because this is the structure of the consortium: the sequencing was spread across three years, and even that was, I think, shrunk back from four years so that we got the data sooner. But if we could front-load the sequencing budget and use an existing platform instead of developing a new platform like a medical exome or whatever, I think this would give us a lot more time to focus on outcomes, and in the end, be a much more efficient mechanism. So with that, I'm gonna turn it over to Mark.