Well, welcome, everyone, to session two, Genomic Screening Technologies. My name is Erin Ramos. I'm the Deputy Director of the Division of Genomic Medicine at NHGRI, and I'm co-moderating the session with Jeff Brosco. Jeff introduced himself briefly, but he's a pediatrician and Director of the Division of Services for Children with Special Health Needs at HRSA. So I will introduce our fantastic speakers, try to keep us on time, and then Jeff will mostly moderate the discussion. I moved over here so I can see folks on the opposite side of the room. We'll see how it goes. OK, so Christine Eng is our first speaker. Christine is joining us from the Department of Molecular and Human Genetics at Baylor College of Medicine. She's also the Chief Medical Officer and Chief Quality Officer at Baylor Genetics. So Christine is perfectly positioned to talk to us about technical approaches and logistical considerations for population-based genomic screening. Thanks, Christine. Thank you, Erin. And thank you very much for inviting me to share some perspective on the current state of genomic sequencing from a technical and logistical framework. And I should say that the running title of this talk originally was Nuts and Bolts of Genetic Screening, which may be a more descriptive idea of what I'm going to be talking about today. So I just wanted to start with a summary of things that we've learned, lessons learned from optimizing high-throughput genomic screening. First of all, the input is very important. So there should be a consistent DNA source and quality. We should also optimize and simplify the workflow with automation as much as possible. And this is especially important for library preparation, which has been somewhat refractory to automation, but there have been a lot of very good advances in that area now. Of course, it has to be cost-efficient for the laboratories. And then also, there should be a focus on continuous improvement.
So there should be a continual upgrade process, not only with the testing for laboratory protocols, but also for the analysis. And this can present some challenges for the laboratory, because with every new major change, or even not so major, testing and validation are necessary. So you want to be very prudent in terms of deciding when and which advances you want to introduce to your workflow, and they must be durable. You also have to define your metrics. These are pretty well understood for next-generation sequencing, but metrics are very important to monitor throughout the process. And also, you want to monitor them long-term. And I'll be speaking about ways of doing this by looking at the results and the data that are being generated. Analysis, I think we touched upon this a little bit earlier. I'm sure we'll be talking about it more. It must be automated, but there's always going to be a manual component to this as well. And keeping your database updated is of utmost importance, but so is communicating how often that database is updated. And then finally, we talked about the input, but what about the output? There must be clear communication of reporting practices so that providers and patients and screenees understand exactly the result that they're getting. Now, I am the CLIA lab director for both Baylor Genetics and the Human Genome Center Clinical Laboratory at Baylor College of Medicine. So I wanted to talk a bit about requirements to launch a clinical test. And I'm going to focus mainly, or almost exclusively, on LDTs, but as we all know, there is more discussion about oversight of laboratory-developed tests. First of all, it needs to be performed in a CLIA environment, a CLIA-certified laboratory. CAP accreditation, of course, is preferred as well.
The indication for testing needs to be clearly defined, as well as elements on the requisition, such as consent for additional testing or research that we touched upon earlier today. Specimen requirements: the specimens that are acceptable for the test and have been validated for the test. SOPs have to be generated, and there has to be a process for validating your reagents, your instruments, and your vendors on a regular basis. In terms of the test design and the validation, the technical limitations of the test must be determined and disclosed with the rest of the description of the test. Assay interpretation also needs to be fully disclosed. Your rationale for validation, and this can take many different forms, but especially with the different variant types that you're going to be reporting on. So SNVs, and copy number variants if you intend to report on those. Your clinical reporting criteria, and again, the limitations of testing and reporting. And then importantly, the post-launch evaluation. So these tests, once they're validated and launched, need to be continually monitored for performance, as well as for the results that are given. So if you are expecting a certain population frequency for a variant or gene, you must continually monitor your results to make sure that there are no surprises. And that will address some of the false negative, false positive reporting that we've been touching on. So, clinical test validation, and this was discussed in the first talk. There is analytical validation, which is, of course, your accuracy, your precision, your sensitivity, your specificity, reproducibility, limits of detection, and the choice of your validation samples. So for a large panel, you must have positive controls and validation samples that are going to be a good representation of the genes that you're going to be assessing, especially the more common ones. And then, of course, the clinical validation. So what is the purpose of your test?
Is it newborn screening? Is it family planning? Is it wellness? The evidence for including those genes and the actionability that is associated. So in general, these are the stages of validation of a clinical NGS test: the test design, the development, and the optimization. And I touched on some of this earlier; the test scope is very important. So what is the endpoint of your test? What is the information that you're hoping to provide to the patients? The sample types and the turnaround time. So this should be very clearly communicated to participants so that they are not waiting for their results and they can expect their results within a certain timeframe. The wet lab workflow and any automation, or whether you can automate the whole process, and the QA metrics and performance need to be determined. And then, of course, your pipeline needs to be established. You may need to develop additional modules for some challenging regions, both on the analytical side and on the technical side. And I'll talk about this a bit later. Your variant confirmation approach: are you going to confirm positives by an orthogonal method, or are you going to rely mainly on the metrics of your NGS analysis? The test validation, which I talked about, needs to be very thorough. And we should also remember that with very high throughput testing, any weakness in your pipeline, both wet lab and analytical, is going to be exposed by high throughput. So you have to make sure that these processes have been tested and stress tested as much as possible with volume, with difficult regions, and at the limit of detection. And then the test performance needs to be monitored, as I've mentioned, with QA metrics established and tracked. And also importantly, sample identity and contamination, to ensure that you're not having sample mix-ups. And I'll talk about this a bit later as well. So it may sound simple, but a very important logistical concern:
sample collection, and what type of sample is going to be accepted. These are some of the considerations in deciding which type of sample is going to be used for the test. Participant convenience is, of course, of utmost importance. The laboratory should provide kitting. So whatever materials are needed to gather the test sample should be provided in a kit that's going to be provided to patients, as well as detailed instructions for self-collection. And I'm talking mainly about where the patient is going to be collecting their own sample, not where the sample is collected, let's say, in the laboratory setting of a medical center. Very importantly, there should be bulletproof labeling, as well as downstream matching to the patient's contact information. This sounds fairly simplistic, but this is one of the major sources of error in a clinical laboratory: that labeling is not performed accurately, especially if you're testing partners or multiple family members at the same time. Samples come unlabeled; samples come with labels of the other person who is being tested at the same time. The cost, obviously, is very important. The ability to automate DNA extraction from whatever sample you choose, though most of these considerations have already been solved. The sample stability during shipment and the time to processing: samples should be able to withstand mailing through the postal service and some delay from the time that the sample is obtained to the time the sample is processed. The failure rate of the chosen method should be assessed as well. Of course, whole blood is the gold standard with, I think, still the lowest failure rate. Saliva is a little bit higher, maybe about five to six percent, though this can vary based on the quantity of saliva obtained. And then also the ability to store the DNA long-term. So, its long-term stability, if a biorepository is to be part of the testing process.
So these are sample collection options, from noninvasive on the left to invasive on the right. I just put in some visuals in terms of the types of instructions that are provided to patients. I think dried blood spots are one of the ones where you have the card and you write your name on the card, so I think there's less possibility of sample mixup there and misidentification. But I was struck by the differences between the noninvasive and the whole blood method. So if you look at the whole blood, there is a healthcare provider that needs to collect the blood, and there are multiple types of medical devices. So there's the needle to pierce the vein. There are the tubes. But then most importantly also, there are the biohazards that are produced, so needle sharps and others, that all have to be taken into consideration as resources needed for whole blood. This is taken from a recent paper by O'Brien and colleagues, and it's from the Oregon project for population screening of inherited cancer and familial hypercholesterolemia. I know it's difficult to see this, but this is basically their workflow, from the patient consenting to the patient asking for kits. About 25% of patients who asked for kits did not return them. Then the sample is processed in the laboratory, and about 5% of those samples, and this was mouthwash, failed the DNA extraction step. And then taking the sample through NGS reporting in this particular example, they did choose to retest positives. So saliva kits were sent out to presumptive positive patients and another sample was obtained for orthogonal confirmation, and this was done by Sanger. But just an example of a workflow that was in place for this project. Choice of platform for high throughput testing. So just a couple of examples here. Genotyping arrays, I think, were popular maybe a little bit in the past. The UK Biobank project had an array of about 800,000 markers.
And this is perhaps better suited for biobanks, genome centers, and other core laboratories. All of Us is using a GDA array as part of their testing process. So they're reporting on ancestry from this array as well as doing concordance. Arrays are cost-effective, high throughput, low failure rate. But of course, there's less flexibility after the design. Targeted NGS panels can be used as well. Examples are universal carrier screening panels and hereditary cancer panels. There is less data produced, which will allow you to have higher coverage, and this can improve your ability to accurately call CNVs. But of course, there's less flexibility after design, and a lot of work and curation has to go into designing these panels. WES has less data and less cost than WGS. Of course, you have the ability to reanalyze, but you may be missing some regions that are important for PGx and potentially also PRS. WGS: hypothesis-free, with the ability to reanalyze, but the highest cost and the highest amount of data. And then there are some hybrid designs that we heard about last week at ASHG. This is a low-pass WGS, which can be combined with a relatively low-pass, or less than clinical grade, WES. Here is an example of an NGS workflow. The blue is the technical production; again, the goal is to automate this as much as possible. The green is the identification of DNA variants. This can be automated. And the yellow is your tertiary analysis. And there are a lot of tools. So this is your annotation, filtering, prioritization. And then for more diagnostic tests, your variant prioritization. To some extent, this is becoming more automated, but there is a manual element as well. (Just to interrupt, one minute left.) OK. We do have the ability to automate some of these processes. And for population screening, we rely heavily on the existing databases. Quality metrics then can be applied.
Of course, there are quality metrics that can be applied to the initial nucleic acid samples and after library preparation; primarily, that's looking at insert size. Post-sequencing, there are pass-fail metrics, including the sample identity, and then the post-sequencing monitoring, as I mentioned before, including what I think is important, the periodic review of your positivity rates to make sure that you're capturing your positives as expected, as well as not over-reporting. I just wanted to give a brief example of the carrier screening panel that we do in our laboratory. This would be a tier four as designated by ACMG. So one point that I wanted to bring up here is that typically these panels are not unidirectional, so it's not just NGS that you're doing. There are difficult genes, but ones that have high clinical utility, such as fragile X (FMR1), FXN, and CYP21A2 for congenital adrenal hyperplasia. These cannot be accurately assessed just by unidirectional NGS. You have to have, as we do in our lab, another workflow. So for CYP21A2, we do a long-range PCR, and then we spike it into the NGS. So you have to have separate assays, and then you need to join your assays together. You also have to make sure you're taking that one sample through all the different workflows, and you're not having any sample swaps there. So sample identity becomes very important. And then we do orthogonal confirmations for specific genes, especially the more challenging ones, such as SMN1. So, return of results: we're going to have a lot of discussion about this. But of course, it's critical to clearly communicate the result parameters to providers and participants. And then the recontact, reanalysis, and possible subscription model have to be defined in advance as well. So just in summary, I shared an overview of the current state of clinical methods for population genomic screening. There are clear distinctions between the reporting for diagnostic versus screening genomic tests.
Typically, pathogenic and likely pathogenic for screening; VUSs, of course, for diagnostic. But I wanted to make the distinction that the laboratory quality measures are not distinct. They must be exactly the same for diagnostic testing as for screening. And that can put some challenges on a laboratory because of the high throughput nature of this type of testing. Thank you. Thank you so much, Christine. You were asked to touch on a lot in a short period of time. So thanks. So our next speaker, Bob Currier, spent more than 20 years as the chief statistician of the genetic disease screening program at the California Department of Public Health. Although the focus of this workshop, we heard earlier, is on adult individuals, the lessons learned from newborn screening I think can and should really be factored into our discussion. So we're grateful to have you here today, Bob. Thanks. Thanks for inviting me. And I have no financial interest to disclose. The opinions presented are my own. And some of them are pretty strongly held. I should have put up the disclosure of a paper that I published a little while ago whose title is Newborn Screening Is on a Collision Course with Public Health Ethics, which appeared in the International Journal of Neonatal Screening. Anyway, why focus on newborn screening? The main reason is it's already by far the largest genetic screening program in the country. And in addition, the newborn period is perhaps a unique opportunity to intervene in genetic disease before symptoms develop. So one might say, well, why don't we just sequence everybody, all of the newborns, and identify all the treatable disorders and get going? The goal of my part of the talk is just to say what a bad idea that would be. But for people who are less aware, newborn screening starts with the collection of the blood card at about one day of age. And usually, the parents have little or no involvement in that.
So newborn screening as a medical test is unique in not being consented. I'll avoid all of my rants about that in this talk. But it then goes to the laboratory, where there's a lot of biochemical analysis, and then the results are reported relatively quickly. They need to be reported quickly because the target disorders are serious and urgent. I will say on the way by that this first step of doing the biochemical analysis radically changes the prior probability; that is, the posterior probability of disease among the positives is much higher than in the general population, continuing the Bayes' theorem theme. So this is a state program. And so in the context of public health screening, generally the state has the usual public health ethics considerations. I want to underline at least three of them. Newborn screening really is universal. Essentially every baby in the country is screened. Every baby, four million a year. And as a universal program, it also applies to everybody regardless of race, ethnicity, socioeconomic status, insurance coverage, anything. It always happens. And a part of that is that the state program needs to be a trusted partner in the process. And when we come to genomic information, this becomes its own kind of challenge. For newborn screening, the choice of disorders is really important. Because the parents haven't been involved, the disorders need to be certainly serious. The state justifies it by saying, this is in the best interest of the child, and we have to move forward. The baby could have a metabolic crisis at two or three days of age if the MCAD deficiency isn't diagnosed. Of course, the conditions screened have to be treatable. But along the way, because the state is screening everybody, the goal is detection of all of the disease. But the screening test needs to have a low false positive rate and consequently a high positive predictive value. On top of all of this, this is a hugely high throughput operation.
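The Bayes' theorem point, that a biochemical first tier radically raises the posterior probability of disease among positives, can be made concrete with a small sketch. The specific prevalence and test characteristics below are illustrative assumptions, not figures from the talk:

```python
# Illustrative Bayes update: how a first-tier biochemical screen raises the
# posterior probability of disease among screen positives.
# All numerical values here are hypothetical, chosen only for illustration.

def posterior_given_positive(prevalence, sensitivity, false_positive_rate):
    """P(disease | positive screen) via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# A hypothetical 1-in-10,000 disorder screened with 99% sensitivity and a
# 0.1% false positive rate: the prior of 0.01% becomes a posterior near 9%,
# roughly a 900-fold enrichment among the positives.
prior = 1 / 10_000
post = posterior_given_positive(prior, sensitivity=0.99, false_positive_rate=0.001)
print(f"prior: {prior:.2%}, posterior among positives: {post:.1%}")  # prior: 0.01%, posterior among positives: 9.0%
```

This is why downstream sequencing of biochemical positives operates in a much friendlier prior-probability regime than primary population sequencing would.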
The state of California tests 1,500 babies a day. And that all has to happen promptly. I don't think genomic testing is in that ballpark quite yet. But single gene, or small numbers of genes, sequencing is used in newborn screening a lot after an initial positive biochemical test. And it has two functions. One is to reduce the false positives from the biochemical test, while the other is to aid in the subsequent differential diagnosis of the positive result. Well, let's take a look at a couple of examples. This is a schematic of cystic fibrosis screening in California. The first box there is the biochemical testing. And only about a percent and a half go on to further testing. So it's already weeded out a lot of people. The first genetic tier is actually a mutation panel. At the time of this paper, it was only 40 mutations in California. It's now up to about 100. But on the way by, I want to point out that in addition to the usual SNPs and small indels, there are a couple of whole-exon deletions that are sufficiently common to be important. And there are some deep intronic variants. And all of this comes from knowing what's going on with many, many patient samples from CF. If two mutations are identified from the panel, it goes on to the diagnostic testing. And there is a diagnostic test, a sweat test, for CF. If there's only one mutation, it goes on to sequencing of the whole gene. And in that, I want to point to the rule that if two or more variants are found, including any VUS or anything, and one of them is known pathogenic, it goes for diagnostic testing. And I will point out that if you compare the panel results, the number of CF cases there (that's the n equals 138), with the results from the sequencing, almost the same number of CF cases were identified at those two stages. Turning to adrenoleukodystrophy: in this case, the screening test is considered positive if it passes two tiers of biochemical assays. At that point, it gets referred for diagnostic follow-up.
But at the same time, the relevant gene, ABCD1, is sequenced to help, among other things, distinguish between ALD and other peroxisomal disorders. But now let's consider what happens if we started to do primary population screening. That is, just go directly to genetic sequencing. There are a lot of disorders that look like candidates for newborn screening, except there's no biochemical test. They're relatively early onset, they have clinical follow-up, but there's no test. And the group at UNC has identified over 400 gene-disease pairs that would be potentially suitable for genetic screening. Their definition of suitable and mine of early onset are a little different, but OK, fine. But this, I think, is, to me, one of the really important things to think about in genetic screening: when there is a diagnostic test, you can afford to consider all kinds of variants, including VUSs. That's what the cystic fibrosis model did: you have one known pathogenic variant plus something else, so let's check it out. But when there is no diagnostic test, then you're really relying on the genetic result itself, not just to predict the genotype, but really the goal is to predict the eventual phenotype. And in that case, you have to just stick to what you know. I keep thinking about autosomal recessive disease, because essentially all of newborn screening is that. So you really have to just refer, say, homozygous cases of a known pathogenic variant. But given what's known about the pathogenic variants, that starts to impact your equity. And so this is a really difficult thing. Every single gene is its own screening test. And so you have to know what you're looking for. Some diseases, like Krabbe disease, are commonly caused by a large deletion. So depending on the disease, you have to know, and you have to be able to find, copy number variants. And enough about VUSs.
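The cystic fibrosis triage just described, biochemical first tier, then a mutation panel, then full-gene sequencing, can be sketched roughly as follows. The function name and the simplified branch logic are illustrative, not the actual California protocol:

```python
# Rough sketch of the CF screening triage described above, applied to a
# biochemically screen-positive sample. The simplified rules are illustrative.

def cf_triage(panel_hits: int, sequencing_variants: int = 0,
              has_known_pathogenic: bool = False) -> str:
    """Route a biochemically positive sample through the genetic tiers.

    sequencing_variants is the total variant count after full-gene
    sequencing (including the panel hit); VUSs are allowed to count
    because a diagnostic sweat test exists downstream.
    """
    if panel_hits >= 2:
        # Two panel mutations: straight to diagnostic sweat testing.
        return "diagnostic sweat test"
    if panel_hits == 1:
        # One panel mutation: sequence the whole CFTR gene. Two or more
        # variants with at least one known pathogenic triggers referral.
        if sequencing_variants >= 2 and has_known_pathogenic:
            return "diagnostic sweat test"
        return "carrier / no referral"
    return "screen negative"

print(cf_triage(panel_hits=2))
print(cf_triage(panel_hits=1, sequencing_variants=2, has_known_pathogenic=True))
print(cf_triage(panel_hits=0))
```

The point the sketch tries to capture is the one made in the talk: because a definitive diagnostic test exists for CF, the sequencing tier can afford to count VUSs; without such a test, only well-established pathogenic variants can safely trigger action.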
One of the things that goes with this, because the knowledge base isn't uniform across ethnicities, is that there can be racial and ethnic distinctions in screening. And we'll take a look at one of those. The thing that struck me about this paper was that the number of variants that were found did not seem to vary according to race or ethnicity. But when it came to interpretation and identification of pathogenic variants, the differences were significant. And I think this is probably not a surprise to anyone. I keep focusing on autosomal recessive disorders because it strikes me that the notion of pathogenicity doesn't belong to a single variant in this case. It's the combination. So these are PAH variants; that's the causative gene for PKU. So in each group, we have two homozygous variants and the compound heterozygote between them, and then we're looking at the frequency of various levels of disease. And so for the two groups, the left group and the middle group, there's one bad variant and one moderately-OK variant. The resulting compound heterozygote is in one case not so bad, and in the other case almost as bad as the worst variant. The group on the right, I think, is the most interesting. It's not troubling, and it shouldn't be surprising. But in this case, the compound heterozygote is worse than either of the homozygotes. And it just reminds us that enzyme structure and function isn't a linear combination of the variants that went into it. So it almost goes without saying, but compared to current newborn screening methods, genomic sequencing is much more expensive per patient. It has reduced sensitivity and specificity, which is exacerbated by racial and ethnic differences. The secondary sequencing is really valuable, but that's not really what we're here for. And there's this promise of newborn screening for all of these other disorders if we could use sequencing. So here's my list of things that really need to be worked on. Many of them are in progress.
There are people here that know more about this than I do. I really would love to have screening tests in place that distinguish between soon onset and down the road. (Bob, sorry to interrupt, about 30 seconds left.) Holy moly, this is the last slide. So let's just say this. Diagnostic testing would allow the inclusion of more VUSs. Sharing variant data not in individual laboratory silos, but across the world, really. The interpretation of compound heterozygotes: can we automate that? I don't know. And there needs to be guidance, real guidance, on pre-symptomatic management of genetic disease. My impression is that we don't have a good sense of what to do with cases that are identified with a genotype that don't yet have a phenotype. And thanks. Thank you so much. So our last speaker of this session before the discussion is Jonathan Berg. I know Jonathan quite well. We've worked together on ClinGen for the past 10 years. Jonathan wears many, many hats at UNC, one of which is directing the Program for Precision Medicine in Health Care, which has implemented a clinical service offering screening of the CDC Tier 1 genes. So Jonathan brings lots of experience to the discussion. Take it away. Thanks for inviting me. And I think I'm going to pick up some threads from some of the other talks. I'll try to give another way of thinking about this. And the way that I've been kind of cogitating around this is really thinking about the number needed in genomic screening. So there are a couple of well-established terms that are used in medicine: number needed to treat being, among individuals who already have identified risk factors, how many of those you have to treat with something to prevent that poor health outcome, versus number needed to screen, which would be the number of people you have to screen for those risk factors to find the one that you're going to essentially help by preventing that adverse event.
So all of this, as noted previously, we're talking about the risk for poor health outcomes, right? Monogenic disease risk for poor health outcomes, polygenic risk for poor health outcomes, however we want to formulate it, that's the concept. And can we use that similar logic to examine genomic screening for monogenic diseases that convey high risk? All right. So the first part of this is going to talk about test performance and population prevalence. I probably don't have to go into too much detail about the way we classify variants, but just to put a really fine point on it, this scale from benign to pathogenic is about probabilities. And there's a really important zone there in that orange to red where we're talking about high-level VUSs and likely pathogenic variants that are particularly useful in a clinical context when you have a diagnostic workup, as Les pointed out, but also are potential false positives, especially when you start thinking about that low prevalence population. I'm going to show you my version of math based on images here in the next couple of slides. So if we're going to start with a population of blue people where there are a few orange people in there with a monogenic disease, the prevalence of that is fixed. It is what it is in our population. This example is of approximately one in 100. That's going to be about the best in terms of the prevalence of the diseases we're talking about. So this is an optimal situation. And of course we have the test performance, which we've already talked about as being tunable, right? Which types of variants are we going to include as positive screens? That's going to yield us a population of patients, some of whom are the people with disease and some of whom don't have that disease but have been picked up because of the way that we set that tuning. And so then we can calculate things like the true positive rate, the false positive rate, and the sensitivity of the test.
And I've given this one an 80% sensitivity. And we can also calculate the false positives, and from them the specificity, which in this one I've given as 99%. And you can kind of see how those numbers relate to each other. Of course this is just determining whether an individual has or likely has that disease, with all of the caveats about their prior probability to have it, not whether they're going to develop the symptoms of the disease. We're going to get there in just a minute. All right, so let's start with those groups. Now we can do our math, right? So you've got your four true positives divided by all of the positives. That's your positive predictive value. So in this particular example, we've got a 67% positive predictive value. That's going to be very good performance compared to the prevalence, as we'll see for other conditions. And the negative predictive value is going to be very high almost no matter what sensitivity we choose to use as our threshold, because there are so many people in the population and because these conditions are so rare. So what then is the number needed in this case? I'm going to use a number needed to diagnose, and I use the term diagnose to mean a molecular diagnosis, if you will: the probability of a molecular diagnosis, that one true positive person, and that's going to depend on the sensitivity. And so this is an example table of just basic calculations from the prevalence of the condition, setting out some basic clinical sensitivity and specificity values that relate to each other, in terms of the fact that if you're going for 100% clinical sensitivity, your clinical specificity is going to get pretty low. And if you go for the maximum clinical specificity, your sensitivity is going to suffer, but those are the trade-offs that we're going to have to make.
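The arithmetic behind these slides is the standard screening math; a minimal sketch follows. Note that the slide's dot counts are stylized: plugging exactly 1-in-100 prevalence, 80% sensitivity, and 99% specificity into the formulas gives a PPV closer to 45% than 67%, while the NPV is extremely high either way:

```python
# Standard screening-test math: positive and negative predictive value
# from prevalence, sensitivity, and specificity.

def ppv(prevalence, sensitivity, specificity):
    """P(disease | positive) = TP / (TP + FP)."""
    tp = sensitivity * prevalence
    fp = (1.0 - specificity) * (1.0 - prevalence)
    return tp / (tp + fp)

def npv(prevalence, sensitivity, specificity):
    """P(no disease | negative) = TN / (TN + FN)."""
    tn = specificity * (1.0 - prevalence)
    fn = (1.0 - sensitivity) * prevalence
    return tn / (tn + fn)

# The talk's optimistic case: ~1-in-100 prevalence, 80% sensitivity, 99% specificity.
prev, sens, spec = 0.01, 0.80, 0.99
print(f"PPV: {ppv(prev, sens, spec):.1%}")   # ~44.7% with these exact parameters
print(f"NPV: {npv(prev, sens, spec):.2%}")   # ~99.80%: high almost regardless of sensitivity
```

The NPV result illustrates the point in the talk: with rare conditions, almost everyone is a true negative, so the negative predictive value is high no matter where the threshold is set.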
And so you can see with a one in 250 condition, like perhaps HBOC, something in that realm, if we're going to be going for a midline clinical sensitivity of 90% with a specificity of up to perhaps 99%, we're still going to end up with about three false positives for every true positive. There's just going to be some leakage of those people into our population. And it gets much, much worse the rarer the conditions get. You almost can't do screening for a one in a million condition with anything other than the absolutely well-known pathogenic variants, or you'll just wind up with lots and lots of false positives. Okay, so what are some critical values? And this gets to the research questions. We have to get to the prevalence of these monogenic conditions, so we know where to start from in terms of those priors. Most estimates we have are pretty much hand-waving guesses, right? This is about a one in 50,000 condition, or whatever. But there is nice population ascertainment now from biobanks where we can look for the path and likely path variants. We saw some examples of that earlier. We could use those numbers to get a baseline ballpark, but of course there are going to be some questions about whether those people actually truly have that disease or not based on that ascertainment. But that could give us at least a lower bound estimate. All right, here's another way to think about thresholding the clinical performance of genomic tests. So I'm thinking about this not now as a single variant that's coming back, but as the population of variants that might come back from a given test. And if you had a distribution something like this, where it's almost a frequency histogram of the number of variants reported, if these were the characteristics of your test and you had a couple of the higher-level likely path and path variants that gave you a pretty high percentage of the cases overall, you could get a reasonable sensitivity at very high specificity.
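The "three false positives for every true positive" figure follows directly from the same quantities; a sketch reproducing it, and the collapse at very low prevalence:

```python
# False positives per true positive at a given screening operating point:
# FP/TP = (1 - specificity) * (1 - prevalence) / (sensitivity * prevalence)

def fp_per_tp(prevalence, sensitivity, specificity):
    fp = (1.0 - specificity) * (1.0 - prevalence)
    tp = sensitivity * prevalence
    return fp / tp

# A 1-in-250 condition (roughly HBOC) at 90% sensitivity, 99% specificity:
print(f"{fp_per_tp(1/250, 0.90, 0.99):.1f} FP per TP")   # ~2.8, i.e. about three

# A 1-in-a-million condition at the same operating point:
print(f"{fp_per_tp(1e-6, 0.90, 0.99):.0f} FP per TP")    # on the order of ten thousand
```

This is the quantitative version of the warning above: for very rare conditions, anything short of near-perfect specificity buries the true positives under false ones.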
But then you'd have to ask yourself, is it worth gaining a little bit of additional sensitivity down into that orange realm at the cost of some reduced specificity? So how much value do you get out of pushing your threshold down? Obviously the better we understand pathogenic and benign variants, the better catalogs we have. Shout out to ClinGen. If we can eventually get the MAVE data to really help us classify these variants and separate them, so that essentially we're pushing things to path or pushing things to benign, then we will do better with our predictive ability. Oh, and I would just point out, I think this is an area where we could ask our clinical labs to help us. For people who clearly have a very high prior probability of a given monogenic disease, what kinds of distributions of these variants do you see in that population? That would be an interesting way to look at it. Okay, so the clinical performance is gonna be something we need to learn and understand in order to calculate the number needed. If you have a variant that's responsible for all cases, then you can really go with that variant. That's gonna give you a lot of information. But obviously most diseases have a much more complex mixture of variants, and that's gonna cause us some problems in terms of really estimating what our sensitivity and specificity are. And I would also add that the combinatorial complexity of the diplotype in recessive conditions, which two variants an individual has, complicates that even more. Okay, so part two is about penetrance, actionability and preventing poor health outcomes. This is why we're doing the screening. So under normal risk management, that whole general population that is true negative is getting their appropriate routine management, individualized by family history, et cetera. They're getting average population outcomes. That's great.
The one individual that we missed on our test, the false negative, is likely getting inappropriate routine management. We might not be able to help that, and they do have the opportunity for clinical diagnosis. So there is still a safety net for that person, in the sense that they're probably getting at least some amount of medical evaluation and follow-up for things generally. Under high-risk management, we're gonna have our two false positives who are getting inappropriate high-risk management. That gets to the harms that are likely going to exceed the benefits for these individuals, and they'll have below average outcomes, probably. If we look at our four individuals who are positive, now you have the advantage of cascade testing. I'm not gonna talk too much more about that. But within those, we have what we've previously talked about, disease penetrance, right? This is a fixed value for the condition, and some of the people will be non-penetrant. That's the one person that kind of comes up to the top there. I refer to this as overdiagnosis. They truly do have that monogenic disease. We're just gonna be treating them more than they need to be. Whereas the three people in the bottom circle there are gonna potentially benefit from the high-risk management. So these are gonna be the will-be-penetrant people. And our goal is for them not to become penetrant. We want them to not develop that disease, either because we've treated them, prevented it, picked it up early, et cetera. We wanna convert those into light blue dots. We're not gonna be successful all the time, right? So in this example, we've sort of prevented that poor health outcome in about two thirds of this population. Likely gonna benefit those individuals more than we harm them, and hopefully have above average outcomes. Now this non-penetrant, overdiagnosed person goes in with the false positives. This is someone who we are not helping by all of that high-risk management.
They were never gonna develop disease anyway, so all we can do is harm them by whatever it is that we do. So this is a way to calculate the number needed to treat. In this case, there are six people being treated. Only two of them benefited. We get an NNT of three, right? And so these numbers could actually be calculated through some of our population research to figure out what that actually looks like. Timeline is also important. If we're screening prior to symptom onset, then we have a greater opportunity to mitigate those harms. We're gonna potentially have more people that we can prevent that disease in. But if we're starting the screening coincident with the onset of disease, well, some of those people have already had that disease. It's not gonna change their outcomes at all. And we saw some of that example in the biobank data. If we're starting the screening after symptom onset, then the best thing we can do is identify people that should have been diagnosed anyway and potentially improve their family health through cascade testing. So what do we need to know? We have to know the penetrance of monogenic disease for each of the diseases we're screening for. We obviously have ascertainment bias from the affected populations that we've studied so far. We're gonna probably have a much lower penetrance for population-screened patients. That's gonna increase the number needed to treat, since the proportion of individuals that would benefit is gonna start to go down. I think we also need to better characterize the age-based natural history. When do symptoms start? When do people start to develop the poor health outcomes? And when can we time the intervention so that we pick up those people before that happens? Okay, so a couple of strategies. One is to follow up the tests, right? On all positives, like we have a gold standard test we do. Well, that's gonna cost money, and we need to calculate that into our cost of screening.
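The number-needed-to-treat arithmetic described above (six treated, two who benefit) can be laid out as a short sketch; the counts mirror the toy example from the slides, with the two-thirds prevention rate as stated:

```python
# Toy cohort from the example: 4 true positives and 2 false positives all
# end up under high-risk management ("treated").
true_positives, false_positives = 4, 2
treated = true_positives + false_positives    # 6 people treated

# Of the true positives, 1 is non-penetrant (overdiagnosed); of the 3 who
# would become penetrant, the intervention prevents disease in about 2/3.
will_be_penetrant = true_positives - 1        # 3 people
benefited = round(will_be_penetrant * 2 / 3)  # 2 people actually helped

nnt = treated / benefited
print(nnt)   # 3.0 -- treat three people for each one who benefits
```

Note that the false positives and the overdiagnosed person all count in the denominator of harm exposure but never in the numerator of benefit, which is why lower population penetrance pushes the NNT up.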
But we could potentially resolve some of those people as false positives, they are actually negative, and then they go on to get appropriate management. There's the issue of how to deal with this incomplete penetrance, right? Well, one way to do that is through what I refer to as proximal surveillance. Some low-burden way of following people, maybe generating some additional data, looking at their family history more closely, finding out a little bit more about their risk and characterizing it, so that some of those people are gonna go on and get low-burden care for their lifespan and not get something definitive done, whereas others might trigger a definitive intervention because of a phenotype that develops, right? There's a slight widening of the aorta, somebody's following that, it hits a point where they then need the surgery, that kind of thing. On the other hand, you could really work on the refined risk assessment and really try to triage people into those who really don't need a whole lot of additional management and those who do need whatever that definitive management is. And this might be the sort of thing that you would bring into the decision making about something like prophylactic surgery, right? How are we gonna decide which people to remove an organ from based on that risk? And so that's gonna require some detailed medical workup and decision making, and I think that's gonna be costly, and I'm not sure we're capturing that effectively in our current estimates of cost. So essentially we need these strategies, and they need to be defined for each condition that we're gonna be doing screening for, and we need to build this into the cost of a screening program, so that we're not just thinking about how cheap it is to do the DNA sequencing, but we're thinking about all of the costs of the management. Okay, so critical value to know. This is another way of saying clinical utility. I'm taking actionability and making it quantitative.
What is the quantitative actionability of each monogenic disease? Sorry, Jonathan, one minute left. Got it, I have one slide. So how much reduction in morbidity and mortality will we expect to get, right? Can we really put our hands on that? How effective are those strategies to reduce the false positives and mitigate the overdiagnosis? And in the absence of any controlled trials or 20-year follow-ups, how are we gonna estimate what that number needed to treat really is to achieve those reductions in poor health outcomes? So going back to my toy example, again, this is sort of a best case scenario, because it's a fairly high prevalence condition and I've given it pretty good test performance characteristics. Estimating the number needed to screen, we've got about 125 people to find one diagnosis. The number needed to treat is about three. So we would need to screen 375 people to find one person who we're gonna help. And in doing that, we're going to identify people who are false positives and overdiagnosed, and we're gonna do them harm. And so we have to make sure that we're balancing that as we think about these conditions. So the conclusions, then: we need this key evidence. We need monogenic disease prevalence for the things that we're considering screening for. We really need to understand the clinical test performance and the spectrum of variants that we're getting out of each of these diseases. We really need to understand the natural history, the age of onset and the penetrance for these conditions in the population, not just in our ascertained affected cohorts. And we need these quantitative actionability estimates to know how effective our interventions are gonna be. And then based on that, I think we're gonna do what Les was suggesting, which is tune the thresholds for what variants get disclosed based on what the condition is and what we're gonna be recommending for the people that have it, or that at least screen positive for it.
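The back-of-the-envelope screening arithmetic just quoted (125, 3, 375) can be sketched directly; the 1-in-100 prevalence is my assumption, chosen because it reproduces the quoted 125 at 80% sensitivity:

```python
prevalence = 1 / 100    # assumed; a "fairly high prevalence condition"
sensitivity = 0.80      # the toy example's clinical sensitivity

# Number needed to screen to find one molecular diagnosis:
nns = round(1 / (prevalence * sensitivity))
print(nns)           # 125

# Combined with a number needed to treat of about 3:
nnt = 3
print(nns * nnt)     # 375 screened per person actually helped
```

The product NNS x NNT is the headline figure for a screening program: everyone in that denominator is exposed to the false-positive and overdiagnosis harms discussed above.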
And think about incorporating some of this into the cost-effectiveness models to really give us a better sense of where in that process are the key features that we have to really tune to get the best performance out of screening. So I'll end there. And I think we have. Thank you, Jonathan. Thank you, Christine, Bob and Jonathan. It was a really neat follow-up to the first talks this morning. Christine leading us through what happens, what has to happen, in the lab. And Bob taught us what happens at the state lab and some of the concerns about that. And then Jonathan gave us a nice model for how to do it at a population level. I guess one of the things that was missing a little bit, taking off on my previous questions about the health system and how well it functions, is how well screening at a population level gets followed up. And so at HRSA, we not only fund the advisory committee on newborn screening, which of course Ned chairs, and how things get onto the RUSP, but we also fund states for doing newborn screening follow-up. And so that whole system, we didn't really say much about. So to start off, Bob, I wonder if you could say a little bit about your experience in California, about what it takes when you do population-level screening to make sure that you follow up on all those kids. That's a key part of any large screening program. Yeah, thanks. So all of the newborns that are identified with a positive screening result are referred to the appropriate specialist for diagnostic evaluation. And that, I think, happens essentially immediately and is really quite universal. I mean, again, it's part of the newborn screening program. All of that communication takes a lot of keeping track and monitoring. After the diagnosis, things start to fall apart, depending on how intensive the clinical management becomes and where the family lives. I mean, in California, there are families that live four hours from the nearest geneticist.
And so they don't get the kind of care that people in other parts of the state get. And that's just geography. There's also language and, well, just general socioeconomic status. And then insurance coverage starts kicking in. So the gaps in long-term follow-up are real. I think I'll stop there. Thanks, Bob. I see Ned and Caitlin and then Terry and then Heidi, but just before I do. So I think this raises an important issue for thinking about the research agenda, that we don't stop our process diagrams at "we have a diagnosis and the clinician knows." There's a whole lot of things that happen downstream. And what we've learned from newborn screening is that if we don't calculate that into the number needed to treat, we're missing out on something important in our calculations. Ned? Yeah, I appreciate that. And Bob, I appreciated your discussion and I agree with you. I did want to come back to something Les said, that all those false positives, we just diagnosed them. It's way more complex than that. And we're doing research into the harms associated with reporting a false positive result back to a parent and the odyssey that they have to go on to show that, no, you don't have a disease that's gonna kill your child in the first year of life and you don't need a bone marrow transplant. I think the other thing you talked about is the immediacy of treatment. And so Krabbe is difficult, because the bone marrow transplant has to happen so rapidly to be effective that there's worry about the chance of making a mistake in that diagnosis. So we're not supposed to talk about newborn screening. Let's think about how these apply to adults, because it's not that much different. If you have a false positive result, I would argue that wrong information is always a harm, and treating a false positive or an overdiagnosis is also always a harm. And so it's always that balance. So, Jonathan, the only thing I would add to your wonderful talk is calculating the number needed to harm.
All the therapies we're talking about, all the interventions to follow up on a positive genetic test, also carry harms. None of our therapies or approaches are harm-free. And so always thinking about, yes, I only have to screen three to benefit somebody. How many people do I screen before I end up with a harm? So I just always like to come back to harms. Can I just quickly respond? So, yeah, I agree with you, right? The number of false positives will be there, you know. I think we just trust the math, right? These results are not 100% specific. If we're talking about likely pathogenic variants, then you could get astronomical numbers of false positives, and that would just become overwhelming. And so I think that's part of the issue, is tuning the types of results you would get back to keep it so that you're at least at a manageable number of false positives, and then trying to get strategies to reclassify those, and sort of that, yeah. I think your name is Caitlin, I can't read some of these. So I think my question is pretty directed at Christine, but of course others chime in. I'm curious about the sample collection element here, and whether you have thoughts or recommendations for criteria that organizations who are trying to implement population screening might use to help make decisions about what sample collection method to implement. Thinking about it from a practical perspective, you know, we at MUSC have started with saliva sample collection and are doing that in clinical settings as well as at home. We have not gone into blood yet, because of the logistical barriers, but I would just love to hear your thoughts around the decision tree or decision making there. Yeah, thank you. You know, I think this needs a lot of consideration, you know, first about what your sample volume is going to be and then what your outreach is going to be.
So if they're going to be seen at a medical center and it's going to be somehow combined with a visit, then your options are, you know, much greater than if this is going to be mainly by, you know, outreach through social media or other types of approaches. I think from studies that have taken place so far, it's been mainly non-invasive testing, and from the slide I showed, again, I was surprised at the level of complexity of the blood draw versus these other non-invasive methods. So, you know, you really have to think about the resources that are needed and whether the incremental improvement you get in, you know, sample viability and lower failure rates and long-term storage, you know, is that meaningful? Also, mislabeling. There is no foolproof method for preventing this yet, and that has to be developed. As I mentioned, the Guthrie card, the dried blood spot card where you actually put your name on the card and then collect your sample, I think is maybe the closest, but anything where you have to put a label on a tube is prone to error. And then marrying up the patient information downstream to the sample, that's critical as well. But, you know, as far as I know, we don't have, you know, guidelines, and I think that is definitely needed, depending on the scenario. And just one follow-up. I think it's been, maybe it's post-COVID, maybe it's just getting used to FIT tests and at-home sample collection, but it's just been really amazing to see how people are responsive to the at-home collection. And so I think that's been a lesson learned and really a surprise, seeing high uptake of people wanting to do at-home and then high return of those. So. Thank you. I forgot to mention that during my talk, but yes, I think the at-home COVID tests have really sort of made people more knowledgeable and not so anxious about handling their own samples. Yeah. I wonder, while we're on the technical stuff, Christine.
If you could say a word about reanalyzing for variants later on, the practical aspects of that. Have you started thinking about those kinds of things? Yes, so reanalysis, even today in a diagnostic setting, is a challenge. So, you know, making sure that patients are aware that, you know, whatever result they've gotten is in a particular window of time. And then making sure that your reanalysis protocol is very well understood and communicated, and that should be and is in our, like, exome and genome consent forms, how often reanalysis is performed. But in the laboratory, it's also, you know, a challenge. You have to, you know, have teams that are dedicated to, you know, your database, and making sure that, you know, whatever changes you make to your previous interpretations are well vetted, because, you know, you're gonna have to reach out to those patients again. So it's definitely not an easy process, but you have to have the resources dedicated in your laboratory, the protocols, and follow them. Heidi, you probably have some thoughts on this as well. Well, could I just make a quick comment about dried blood spots? That's what newborn screening is. Those spots, if stored at minus 40 with desiccant, can be analyzed after 20 years and give good DNA for an exome or genome. Yeah, I wonder if there are any differences between the dried blood spots that are collected by a healthcare professional, so newborn screening, I think, is usually, you know, in the hospital, versus those that are, you know, collected at home. So I was not able to find any data about that. But do you know what the failure rate is, typically, for newborn screening programs statewide? Less than 2%. I have Terry and then Heidi and then Josh. We could talk about reanalysis all day long, but I think we'll hold that. I have a technical question, but you go ahead.
Great, yeah. I wondered, Jonathan, if you could expand a little bit on your comment on, you know, how do we get at prevalence of monogenic diseases? And presumably, yes, monogenic, but, you know, breast cancer isn't monogenic, and yet we're looking for a particular gene variant. So both, you know, how do you expand maybe to complex diseases, as well as, you know, what would be the approach to assessing that prevalence? Yeah, so, I mean, I've seen sort of the extrapolated numbers based on, you know, if you say that of breast cancer, some percentage of cases is related to this gene or that gene, and then you sort of extrapolate from the population prevalence of breast cancer to get the prevalence of that monogenic form, right? So that's one way to go about it. I think that, my guess is that polygenic risk is probably gonna be more directly applicable, straight into the kind of risk that a given individual has at the level of polygenic score that they have. As opposed to, with our penetrance issues, somebody might have a BRCA variant, but then we know the risk for them is some percent by some age. Is that modified by their other polygenic factors? Is it modified by environmental factors? How does all of that work together? Which I think is a lot harder. Whereas I think with at least the polygenic risk, you've sort of evened out across all of the other factors that might be involved in the risk, you know? So I think you go directly from the projected risk based on the polygenic score, perhaps combined with other environmental things if you're doing a genomic sort of risk predictor, and that might be more direct to the risk of the individual. So this one's a little more, oh, Heidi Rehm, Mass General Hospital and Broad Institute. So this is a little more probably directed at Christine's talk.
Today we have, and you gave the great example of, carrier screening, sort of tier four, lots of genes. But you also have to supplement that with certain assays because of more difficult-to-detect variation in certain genes, fragile X, SMA, et cetera. In the secondary findings world, we accept that those tests aren't comprehensive. In fact, certain genes have been left off the list knowing that they're technically challenging. And so we sort of view it as opportunistic. If you happen to come across it, report it, but we're not assuming it's comprehensive. And I'm wondering how we all think about the middle of the road here, where we're all starting to think about a run-a-genome-at-birth-and-use-it-throughout-the-lifetime kind of cost-effectiveness approach. But that means that we'll be taking tests like carrier screening and sticking them on a genome, where they're not optimized for the comprehensiveness we get when we design a test for a specific task. How do we think about labeling those tests that are not perfect, when the cost to make them perfect would make them less useful at a population level? And how do we make it clear what this test is compared to the gold standard ordered and intended for carrier screening, for example, versus we're trying to do this, but it's not perfect? Do you have thoughts about how to label or offer that kind of scenario? Yeah, very challenging question. I mean, I think we do run up against this in the diagnostic world already, where we have our disclaimers, and many labs do this gene by gene and the list becomes overwhelming. Exon 5 in this gene, exon 52 in that gene are not well covered. And so understanding that, and then being able to take that information and translate it to the patient, to tell them exactly how good this test is for you, is very difficult.
I think for, let's say, our carrier screening test, where patients are voluntarily, or to some extent voluntarily, taking this test, you want to give them as good a test as possible. And so that's why these orthogonal methods are being introduced, in order to get the sensitivity to where it should be, where patients expect it to be. But I think looking at a population level, obviously it's going to be different, and you're gonna have to balance cost and time with patient expectations. Thank you. I think Josh is next. Yeah, hi, Josh Peterson at Vanderbilt. This is a question directed at Jonathan. So it seems like the sensitivity-specificity framework works pretty well for identifying a genetic risk, and false positives and true positives and that sort of thing. But I worry a little bit about using that framework for identifying, essentially, the connection between the risk and the disease, because of time. There's a distribution of disease incidence over time, and when you go to apply, let's say, the idea of penetrance to an individual person that you're trying to counsel or treat, then you need to, of course, account for how old they are, but also, essentially, when they got that information. So if we're gonna be screening 18-year-olds, how do you counsel them, or what do we need to know? What's the right metric to communicate, essentially, that risk that connects the genetic risk to the actual time-related risk of disease? Yeah, I mean, it's a great question. I mean, that's the big problem, right? The goal is to find the people before they have symptoms of disease, so that we can do something to prevent them from having disease, so they never become penetrant, right? And so I think as we sort of roll out screening in the context of the interventions that we want to do with people, it's gonna be really hard to tell which of these people who have that pathogenic variant are gonna benefit from whatever intervention we're offering them, right?
And so that trick of sort of communicating the population benefit, that may not actually be an individual benefit, is gonna be something that differs from when we sort of think about individualized medicine, right? We're doing this for individual benefit. We're actually not; we're doing it for population benefit. And that's part of the communication, I think, of what those results mean for that person. Maybe this is part of the research agenda. I was just struck by some recent articles that showed that you could show stick figures in a diagram and you get twice the sort of actionability, based on what patients like to do, compared to, let's say, a single probability number. So, I mean, it's not only the metric itself but the way that you communicate it, and it seems like we really need to know more about that in the context of genetics. And maybe to add to that, it's also the consent process upfront, right, when someone's signing up. We probably see this in clinical medicine all the time right now. This happens to me as a primary care person, where one of the specialists has ordered a test because someone had an eye finding, and then they find a bunch of things they don't know what to do with, and they get sent to me. And I don't know how much counseling was done upfront. So it's both once we have a result, but even before. I think understanding that better probably makes sense too. I have Erin and then Carol and Mark. Thanks. So we heard someone mention All of Us, and we know other databases like gnomAD are increasing their representation of individuals from diverse genetic ancestries. But are we at the point, will those be sufficient? You know, we heard you, Jonathan, saying sort of, we have to tune our decisions to maximize true positives and minimize false positives. Can we do that yet? Do we have the data from diverse ancestries to make sure those choices are gonna benefit all?
I mean, the answer is probably not yet, and certainly not for more of the rare diseases. Again, I think it's, yes, if the well-established pathogenic variants are concentrated within people of European ancestry, because we've seen them the most and studied them the most, then that's gonna be a problem, and we'll have to figure out how to address that and make sure that the catalog of well-established pathogenic variants is diverse, so that we can benefit most people. And maybe a quick follow-up to that, Jonathan. How do we, and maybe Christine can help with this as well, and Bob, how do we get these new data streams? So as we get more data, how do we make sure it goes someplace like ClinVar, so we can use it going forward? Have you guys thought about that much? Sorry, I was not paying attention. Oh, oh, yes. So, I mean, for instance, we're right now working on the submission from the All of Us research program, which obviously has a larger diversity. We're also working through the Global Alliance, globally, to try to get every country to submit and support their submissions, because you can't submit variants you don't test from patients. So I think it's just a widespread model to support data sharing at various levels, including interpretation and ClinVar submission, but also the raw data and how we all use that. We just launched gnomAD v4 last week, and despite the fact that we dumped huge amounts of European individuals into this database, there's a very small increase in the number of variants from Europeans that are above a certain frequency that lets you exclude them. The contribution of the smaller number, 130K of non-Europeans, was massive. And it just really demonstrates just how important diversity of data is, not only for interpreting the population it's in, but for interpreting other populations. Thank you.
So my question was kind of similar, in that, Jonathan, you mentioned MAVEs, the multiplexed assays of variant effect, and how data from that sort of project could feed into understanding the effect of all of these diverse variants that are reported. I mean, what's your vision for how to integrate those types of data into these assessments? So I would see the need for a good level of communication between the ACMG committee that's starting to say, these are the things that we think are worth doing population screening for; ClinGen and the expert panels, to get really high-quality specifications for classifying those variants; and the MAVE groups, to tackle those genes. And between all of that, you should be able to have a really good catalog of clearly pathogenic variants, and hopefully not as many that are likely path or kind of in that wishy-washy range, so that those things could be rolled out with confidence. That's the goal. That'd be a really good project. Mark Williams, Geisinger. So this is a comment rather than a question, but I'm happy to have others comment on my comment. And this comes back to the idea that when you do screening, the balance of sensitivity and specificity results in residual risk, which also needs to be communicated, and false reassurance. And we have plenty of examples from newborn screening, and also from even direct-to-consumer testing like 23andMe, where somebody has an obvious family history of hereditary breast and ovarian cancer and they say, well, I don't need testing because I had 23andMe, or a child that presents with chronic rhinosinusitis and pneumonitis and poor growth and floaty stools, and the response is, well, I had newborn screening for cystic fibrosis, so we don't need to do a sweat test. So I think one of the other aspects of this that we need to consider is, downstream, how do we communicate the idea that this is screening?
We are intentionally going to be missing people, some for technical reasons, that these are just genes or regions that we just can't get at, and others for rare diseases that don't meet the thresholds that we would define for screening. And how do we facilitate recognition and testing? So at ASHG last week, there was some discussion in some of the sessions about whether genetic counseling is actually needed for negative results. We're all very familiar with genetic counseling in these types of programs for positive results, but what about the negatives? And is that actually just as important, or perhaps even more important, so that participants aren't left with the impression that they no longer have to be concerned about this particular issue? Yeah, to add on to our discussion from the first session, if we are arguing, as some of us are, perhaps all of us are, that we don't have sufficient genetic counseling resources for the positives, then I think we can fairly well assume that we definitely don't have enough resources for the negatives. So then it raises the question of how do we develop resources that can achieve some, I won't say equivalence, but at least some acceptable level of communication for those particular issues, since we know we won't have the human resources to be able to do it, nor can we afford the cost associated with them. So I'm gonna say that part of what Les said earlier was kind of these automated and computational ways of doing this. And I think that the genomic learning healthcare system is going to need to know somebody's gotten screening, what their results were, but also all of the other phenotypic stuff that you just mentioned, and the probability that that represents a disease, right?
So that you can calculate the actual Bayesian relationship between the fact that they had a negative screening test, with its performance, versus all of the phenotypic stuff, and whether another test needs to be done, and how we can rely on the EHR to pull that information together and flag it for somebody to act on, which I think will be a really interesting challenge. Yeah, it's well outside the realm of this, but I think some of the work that's being done at Vanderbilt and other places says we've got data in the EHR that can make it so that we're not reliant on what's between the ears, which we know is not going to be successful, to really flag folks and say, this is an individual that definitely needs to be tested for CF or whatever, based on a very high-confidence phenotype that raises that prior probability. Sounds an awful lot like a genome-informed risk assessment from eMERGE. I'm gonna call a quick time out, because I think you guys are purposely trying to confuse me. If your card is up and you don't have a question, can you put it back down so I can figure out who's next? All right, excellent. I think Dan is actually next, and then Caitlin and then Terry. So three random comments. One is, I appreciate the shout-out for the phenotype risk score, Mark, but we actually looked at CF, and Lisa Bastarache really couldn't find any undiagnosed CF cases using the phenotype risk score. So that's an example of the phenotype risk score not finding extra cases, but I appreciate the shout-out anyway. The business of counseling negatives, I think that comes back to what Les was saying. If somebody has a very clear phenotype, and I'm an arrhythmia guy, so if somebody has a QT interval of 550 milliseconds and their genetic testing is negative, they still need to be followed by somebody who knows something about QT intervals of 550 milliseconds.
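The Bayesian residual-risk idea raised in this exchange can be made concrete: the post-test probability of disease after a negative screen depends on the prior probability (population prevalence, or a phenotype-driven prior) and the test's sensitivity and specificity. A minimal sketch, with purely illustrative numbers that are not from any real screening program:

```python
def residual_risk(prior, sensitivity, specificity):
    """Post-test probability of disease given a NEGATIVE screen, via Bayes' rule."""
    p_neg_given_disease = 1 - sensitivity          # false-negative rate
    p_neg_given_healthy = specificity              # true-negative rate
    # Total probability of a negative result
    p_neg = prior * p_neg_given_disease + (1 - prior) * p_neg_given_healthy
    return prior * p_neg_given_disease / p_neg

# Hypothetical example: a 1-in-500 condition screened with a
# 90%-sensitive, 99%-specific test. A negative result lowers,
# but does not eliminate, the risk.
print(residual_risk(prior=0.002, sensitivity=0.90, specificity=0.99))
```

With these assumed numbers the residual risk is roughly a tenth of the prior, which is exactly the point being made: a negative screen shifts the probability, and a sufficiently strong phenotype (a very high prior) can still justify diagnostic testing such as a sweat test.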
On the other hand, if it's population screening and they were screened because they don't have an indication, I can't see that we need to counsel those people. And then I just have a comment about the MAVEs. I would say we've put several toes in the water around MAVEs. And one of the things I think I'm learning is that what MAVEs do is assign pathogenicity or not, depending on the particular protein function that's being interrogated. So you can never be sure, if you have a particular variant that looks benign on one assay, whether it's gonna be benign on all the other assays that you might want. And obviously the field is moving very, very quickly. And MAVEs have nothing to do with penetrance, at least as near as I can tell. So there's still this penetrance problem we're gonna be left with. And the other problem that somebody's gonna have to solve with MAVEs is that there are a couple of hundred KCNQ1 variants, for example, perhaps even fewer than that, that have been annotated by ClinVar and ClinGen. And when we do a KCNQ1 MAVE map, there are 13,500 variants. And so there's a problem of scale, and how we're gonna be able to accommodate levels of evidence and present data in a chewable fashion to the wider community. A separate discussion, I think, but I had to say something about MAVEs. Any of our folks wanna respond to those comments? All right. Caitlin, you put yours down, but you know. Oh, okay, go ahead. I thought I was allowed to. So I was going to follow on to Mark's comments. I think there are, as far as a research agenda and research questions, a lot of opportunities to think about, from a behavioral perspective, what's happening with people who have negative results. How are they interpreting them? What are they doing with regular screening behaviors? Are they stopping screening because of the way that they've interpreted their results?
So I would just maybe emphasize that as a potential research direction. And then I think, too, highlighting some of the work that Kim Kaphingst and Guilherme Del Fiol from Utah have done in the BRIDGE trial and in their ITCR work with returning negative results using a chatbot for genes associated with HBOC and Lynch syndrome. And they've seen non-inferiority in returning those negative results with a chatbot compared to standard care. And I think that, you know, it's not population screening, but it is potentially a really good example of a mechanism for returning results and educating folks about negative results. So a quick comment on your first point, which I think is a really good one. If I see a teenager who's obese or overweight and I check the cholesterol and it's normal, does that mean that they say, oh, now I can do anything I want? Right, and if someone does a whole genome sequence and nothing shows up, does that mean I can drink and smoke and do whatever I want because I don't have any risk factors? So I think research in that area is probably critical, as you're pointing out. Comments from our panel? I mean, you guys are quiet. All right, I think Terry, and then at the end of the table, I can't see your name, sorry. Kate. Yeah, so I just was curious, Caitlin, what is ITCR? That wasn't my question, but... You asked me too fast. It's a funding mechanism through the NCI. Oh. And so it's focused on developing algorithms for helping to identify cancer risk, and then tools and resources. Great. My question actually was for Bob Currier. You said that when a diagnosis is made in a child, and again, we're not talking about newborn screening, but we can get lessons from it, they're referred to an appropriate specialist, which leaves, you know, lots of arrows along the way to that that you commented on.
But I wonder, you would think that that would be sort of the upper limit of who would respond, who would actually follow up, et cetera, and that in adults, who have freedom of choice, et cetera, it would be much lower. And I wondered, is there any estimate of what proportion of screen positives actually get into care and get appropriate care? Well, we do know that over 95% of positives get to a diagnosis. Well, yeah. After that, I really don't have, unfortunately, the way the newborn screening system in California is set up, after a diagnosis is made, the positive case management is handed off, mostly to CCS. CCS? It's the care of children with special needs. Oh, really? It's another part of the state health system. And newborn screening actually doesn't get data back about long-term care and long-term follow-up. So it's very hard to really have a sense of that. So I'll say, at HRSA, we are now funding both Propel grants and co-Propel grants for states to be able to do longer-term follow-up. And you're right, Bob, it doesn't generally happen. I will give you just one example that we know of, of what may happen: in Ohio, where they're screening for Krabbe and there's not a secondary test for psychosine, they have a high number of false positives. And more than half of those kids get completely lost to follow-up. So identified by newborn screening as being positive, but no follow-up. And it's really concerning what can happen. Yeah, and just to follow on that, I think there's this connection for each state to have someone who's gonna follow up these results, or the state lab is making sure that those are getting acted on by someone. Will there be something similar in an adult population screening program? Is that the responsibility of a health system to identify those pathways? Is it the state that does it? I mean, how are we gonna figure that out? It's patchwork; it'll be a problem.
And I think we're gonna have to rely on our primary care providers and educate them on what to do when they get a positive, because that's gonna be the first person that often sees these people. What to do next? And I think that's an important distinction, right? In newborn screening, since it is pretty much a state-mandated test, you could argue that if the state is doing this, there's some obligation to make sure there's follow-up. What we're talking about so far in adults, though, is not state-mandated. So it does probably fall on the healthcare system and likely the PCP. I think Kate was down at the end there. I wanted to ask a question. Okay, online, okay. So after Kate, online, and then. All right, just a quick note: the ITCR is actually a general technology mechanism for the NCI; I've been on that study section several times now. So all sorts of technologies; screening is just one of the many. The other thing I was just gonna comment on, about the negative results, is that there's clearly a trend. I was just on an ESAP for clinical programs to start returning negative results through the portal and through low-risk BOS, and more and more institutions, especially because of the workload of genetic counselors, have really started moving towards returning those results that way. I realize it might be slightly different for population screening, but I think you're gonna see a real clinical trend towards results being returned that way. And obviously people have a mechanism to ask questions through the portal if they have a question, but my impression is that this is really increasingly being adopted in many, many clinical settings, just to put that out there. I agree, it's not possible to know how people will interpret negative results. I mean, I think people always say, you need to continue your screening, you need to continue managing your risk.
I do this all the time for melanoma genetic testing in particular, an area where you wanna make sure people realize that genetic testing does not change their need to have dermatological screening, and I'm very upfront about that. But I think this is a cat-is-out-of-the-bag phenomenon: negative results are really gonna start to be generally returned clinically via portal. Would any of our panelists like to respond? Okay, there's a card up about halfway down; I can't see who that is, I apologize. Introduce yourself, please. Sure, Kelly East, HudsonAlpha. I'm gonna talk more about education and training in a little bit, so I won't steal my own thunder, but one thing I wanted to mention, talking about these negative results and the people who might get false reassurance or overinterpret that negative: you're talking about newborn screening, which is state-mandated, so you have very broad participation. But I think, at least in our experience, and I think for others here today, a lot of our population screening programs are opt-in, and there's an inherent ascertainment bias in those populations toward people who perceive a benefit from the program, so you're gonna have higher rates of people with that personal and family history in there. That risk is even kind of exacerbated, and I think we just need to acknowledge that and be prepared for it. And when we think about the calculations of those risks, should we really be using population prevalence, or should we assume a kind of increased ascertainment bias there? Actually, my comment was somewhat related to your comment. As these population-based tests, let's say for familial hypercholesterolemia or other conditions, become more available, we have to make sure that they're used in the population setting.
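The ascertainment-bias point above is quantitative: if an opt-in program enriches for people with a personal or family history, the effective prior is higher than population prevalence, which changes both the positive predictive value and the residual risk after a negative result. An illustrative sketch, with all numbers hypothetical:

```python
def post_test_probs(prior, sensitivity, specificity):
    """Return (PPV, residual risk after a negative) for a screen, via Bayes' rule."""
    tp = prior * sensitivity                 # true positives
    fp = (1 - prior) * (1 - specificity)     # false positives
    fn = prior * (1 - sensitivity)           # false negatives
    tn = (1 - prior) * specificity           # true negatives
    return tp / (tp + fp), fn / (fn + tn)

# Same hypothetical test, two priors: population prevalence versus an
# opt-in cohort enriched roughly fivefold for family history.
for label, prior in [("population", 0.002), ("opt-in cohort", 0.010)]:
    ppv, residual = post_test_probs(prior, sensitivity=0.90, specificity=0.99)
    print(f"{label}: PPV={ppv:.2f}, residual risk after negative={residual:.5f}")
```

Under these assumptions the enriched cohort has a higher PPV but also a higher residual risk after a negative result, which is why using population prevalence for an opt-in cohort would understate the risk of false reassurance.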
There are some examples from carrier screening where providers are actually using it as a diagnostic test for a suspected condition, let's say a child with CAH, and it's just not appropriate because of the reporting structure and so forth. We have to make sure that providers understand that even though these tests may be more available, less expensive, collected at home, et cetera, they should not be used for indications other than what they were intended for. We are at the end of our time. Erin, do we have a chance for one more question online, or do you want to go to our summary? Okay, is someone going to read it? Yeah, Carol Horowitz, go ahead. Thank you so much. As a primary care doc, half of our job is using screening as teachable moments and saying, you know, just because you passed the screen as a non-smoker, just because you don't have diabetes, doesn't mean you don't need help here. So I'm struggling to understand why; it almost feels like the concern here is more elevated than in some of these other settings where we screen and get negative tests. And I also am a little bit concerned that the idea coming out is that we want genetic counselors to do everything, and we just don't have enough money to do that. It might actually be that bringing these things into primary care and handling them like we do other things would be viewed positively, and we shouldn't see that as a loss. So as I turn it back over to Erin, I want to echo that and say that PCPs give this kind of counseling all the time. And if there was some tool that let us know what the data showed, we could probably do it. The research agenda, though, really should tell us more about whether negatives change behavior in a negative way. Erin. So I'll just do a 30-second wrap-up. I'm not going to give a robust readout of all the good points that were raised.
So Christine covered important concepts regarding validating and stress-testing the pipeline, especially when thinking about high-throughput screening. It's critical to have robust systems in place to monitor performance, both of the test and of the interpretation, and to be really thoughtful when deciding when new advances will be introduced into the clinical sequencing workflow. We didn't talk about that much during the discussion, but the same goes for sample collection and choice of platform. We heard from Bob that the newborn screening considerations around serious, urgent, and treatable disorders are applicable in the adult context. We really need to figure out how to hand off positive test results to clinical providers in the context of the US healthcare system. Disparities exist; we need to do better to include ancestries from underrepresented populations. We're seeing the value of that, particularly as Heidi described with gnomAD and its contributions to increasing the number of variants that we can classify as benign. It's imperative to have better estimates of the prevalence of conditions, the natural history of disease, and age-based penetrance. We need to calculate what is required for following up on positive findings and better understand the harms of reporting false positives, and I'll stop there. Thanks. Thank you both very much. I would note the lunch is up in the far corner there. The hotel people may come out and pull the table out so you can come down both sides, but if they don't do that, maybe a couple of you could do that without spilling anything. And we'll be back at 1:20, please, 1:20 to start the next session. Thank you all.