So, thank you very much. I think I'm going to retitle our talk as the nightmare of the virtuous cycle, because really what we're talking about is that we're going to keep changing everything. So it's dynamic. And so our panel, Katrina, Robert, Sharon, and myself, really have tried to capture this framework in our discussion points for panel three. This is work that was done by everybody through a string of emails over the last several days, so thank you, everybody. The challenges are the dynamic nature of genomic medicine data. Looks like I have this out of order here. So next-generation data is going to continue to evolve, with major changes expected in the near term. These include increased read length, decreased cost, and increasing use of whole genomes for clinical purposes. And so I think you can just start to imagine what that is going to mean as we go forward. Dynamic analytics and laboratory reporting are nascent and will change rapidly as the knowledge base increases. On correction of errors in the literature and early reports, I think we're seeing some activity around ClinGen, but there's a whole host of things we have to be thinking about, including the basic research on which we're basing a lot of this; we don't have the same level of clinical acumen, if you will, around that. The conversion of variants of uncertain significance and genes of uncertain significance to knowledge, Heidi and the previous group covered that well. Genomic data's impact on treatment will remain dynamic as the knowledge base increases in all aspects of reporting. So there are the primary results, there are secondary, and there are incidental, and there's going to be some fluidity between these as we go forward. Then there are, of course, sources of error inside clinical reporting. There are the limited genotype-phenotype correlations; this was also discussed previously.
Both correct and incorrect correlations exist in the literature, and this will also continue to change. And the range of phenotypic expression is uncertain or unknown in many cases. Sometimes the rare disease may give us an outlier, if you will, of what that gene really does. So we need to be thinking about that as we go down the road. Over- and under-reporting of results based on varying clinical platforms and guidelines: by this I mean, or we mean, that some of the different sub-specialties may look at this slightly differently as we go down the road in terms of guidelines. Errors in sequence data: these include incomplete sequence information, error rates depending on the type of variant, errors in the sequencing analysis pipelines, errors in sample tracking, other standard clinical laboratory errors, and then errors in combining sequence data with other clinical data at the level of the treating physician. So remember, the report that's coming out is one piece of the data that the physician is using to make the diagnosis. It is not, in and of itself, the diagnostic. So the changes in treatment and care are also going to be impacted by this. Clinical implementation is going to change dramatically with the emerging definitions of what is actionable, and we've had a little bit of discussion around this, and of what is clinical utility. The different medical specialties could have different definitions of what is actionable. I know our clinical geneticists view this a little bit differently than our cardiologists. Clinical utility is going to have a different meaning to the patient, to the physician, and to the payer. This comes back to looking at the personal utility that Jeff mentioned in the first panel, and how these different definitions will be managed in the context of the dynamic nature of the genomic data.
So keep in mind that this virtuous cycle that we're talking about has an impact that trickles down through each one of these different decision lines and the different areas that we're talking about today. Increasing numbers of drugs with companion molecular diagnostics, or theragnostics, are tested and approved. Indications for existing drugs may be paired with molecular tests as knowledge is gained; that's one example, and there are others coming out on the market. For the companion diagnostics, there's something important to think about here. You'd think it would be just really easy: okay, the genome's been sequenced, hey, let's just look at that DNA and use that as the information. But the companion diagnostics are tests that are submitted along with the new drug applications that the FDA targets for precision medicine, and what's important about this is that a companion diagnostic relies on a specific methodology. It is not just "go look at a random sequence." So that's a big issue around this. It generally focuses on limited numbers of mutations, so it's not in the same realm as what we're talking about for genomic sequencing, and it can have the strongest indication for efficacy. An alternative analysis of a specific gene may be approved; BRCA is an example of that. But once the genome exists, the data can be used for companion diagnostics. Is there a means to link it to the drug? Now let's keep in mind, with that notion, the question of whether the FDA is going to accept your genomic sequence as being part of that; that's one of the issues that will be mentioned in a moment. So for the changes to treatment: pharmacogenomics, obviously, is likely to become more common across the clinical continuum. Current next-generation tests often have minimal pharmacogenetics testing involved. So will the data be regulated differently? Are we talking about the DNA diagnostics having one level of regulation and the pharmacogenomics having another level of regulation?
I mean, there are some significant issues that need to be sorted out around this. And how will this data, related to the patient's genome, be stored in the EMR for future use and for when new medications are prescribed? And what does this mean for existing medications that a patient is on? So the reanalysis of the genomic data, this is the virtuous cycle that I'm thinking about. Where is the variant data going to be stored long term? It's one thing to have the knowledge base, but what about for that individual patient? Who's going to do the reanalysis? How will the rate of reanalysis be set? How will the reanalysis be paid for? Will variants be reanalyzed in the primary, the secondary, or the incidental context? And those issues are going to play a significant role in each of the events above. In what category will the variant be for pharmacogenetics? It could be primary for new drugs in the future. It could be secondary for a drug that a patient is currently taking when the patient is sequenced for a different diagnosis. We need to be prepared for reanalysis not only to uncover new and actionable findings, but also to result in some prior findings becoming irrelevant or incorrect. And that's the downside of this virtuous cycle. We will for sure, we are already, finding that there are changes that need to be made in this dynamic data set. Then we have the issue of duty to inform. What do we do with this information? Are changes needed in the laws and regulations regarding what constitutes duty to inform? What changes to the dynamic knowledge of the patient's genome mandate re-contact and re-reporting? So if there was, and I'm not saying there is, but if there was a patient portal, is it sufficient to simply deposit that data in the patient portal? Should there be a separate clinical visit? Who will pay for this, and how? And should all data types be updated, or only the primary?
In other words, if you find a new secondary or a new incidental finding, what are the decisions that are going to be made around that? How can the physician and the patients be updated without alert fatigue? We're all aware of this as a challenge. You know, Heidi's team and ClinGen are finding more and more data; I can imagine in the not-so-distant future that every day there are multiple things being updated. How do we do this in a way that helps us improve the practice of medicine? Will there be different rules applied to different specialties? I think everything we're talking about falls into some of these categories. Are there different rules for the different variant types? I'm going to say there probably are going to be; we need to decide what they're going to be. So the guidelines for reporting results from the different types of testing are still evolving. ACMG has recently announced new recommendations for germline testing, but they don't include pharmacogenomics or common alleles. Efforts are underway to define somatic mutations, but things like methylation status, how are these going to be managed inside these guidelines? There are no agreed guidelines on other areas that are being pushed out there: RNA expression, circulating DNA, and single-cell analysis, all things that are right on the horizon of appearing inside clinical laboratories. How, and by whom, should these guidelines be defined and evolved? And how will the changes in these data types be updated? It is likely that the different tests will be performed by different laboratories. How will these data be integrated, interpreted, and conveyed to the patient and the physician? Again, it's an amalgamation of data that's accumulating in multiple sites.
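The reanalysis and re-contact questions above come down, in part, to a bookkeeping problem: each reported variant carries a classification that can change over time, and every prior report needs to be checked against the current call. A minimal sketch of that bookkeeping in Python; all names here (`Variant`, `Report`, `needs_recontact`) are hypothetical illustrations, not any real laboratory system:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch: keep a classification history per variant so that
# reanalysis can flag prior patient reports that are now out of date.

@dataclass
class Variant:
    variant_id: str                              # illustrative identifier
    history: list = field(default_factory=list)  # (date, classification) pairs

    def classify(self, on: date, call: str):
        self.history.append((on, call))

    def current(self) -> str:
        # Most recent classification wins; tuples sort by date first.
        return max(self.history)[1] if self.history else "unclassified"

@dataclass
class Report:
    patient_id: str
    variant: Variant
    reported_on: date
    reported_call: str
    category: str            # "primary" | "secondary" | "incidental"

def needs_recontact(report: Report) -> bool:
    """A prior report is stale if the variant's classification has changed."""
    return report.variant.current() != report.reported_call

# Example mirroring the panel's scenario: a variant reported as likely
# pathogenic, then downgraded after reanalysis.
v = Variant("GENE:c.123A>G")
v.classify(date(2013, 1, 5), "likely pathogenic")
r = Report("patient-001", v, date(2013, 2, 1), "likely pathogenic", "secondary")
v.classify(date(2015, 6, 9), "likely benign")
print(needs_recontact(r))   # True
```

The open policy questions (who runs this check, how often, and who pays) are exactly what the panel is raising; the data structure is the easy part.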
So, solutions for these challenges, and I'm going to use the word "solutions" loosely, because for much of what I've been talking about we have no idea yet how we're going to do this: you have to bring geneticists, pathology, and the specialty groups together with the payers. I think we need to have those discussions with them together to set guidelines, and I think it also requires having patient advocates involved as well. We need to fund and develop clinical trials, and I use that term in quotes, studies to look at data return and duty to inform within these three areas of rare disease, cancer, and healthy patients. And I'm just wondering whether, with some of the programs that are already underway in this portfolio, there couldn't be some other pieces tied on top of this. Is this a separate study? I think you could do this for a really limited amount of dollars, to look, as we're reporting early on, at what the outcomes are toward the end of these studies. I think these are going to be some important issues to look at. What should be done to define clinical utility and what is actionable? Obviously, this has been mentioned before. Double-blind clinical trials are not the correct study design, but how should this be done? That's been mentioned by the other two groups as well. So, the main points; I'm going to summarize them here. We're doing a little bit of a hybrid of the other two panels: I'm a talking head, and then we're going to have Robert come up here and talk about the way CSER is taking a look at some of these activities. But I'm just going to finish the main points here while Robert comes up, just because I couldn't get Robert's slides in without screwing up the formatting. This is a technical error on my part, which is why we're doing it this way. So: the dynamic nature of genomic medicine data creates a series of problems in returning results to patients and physicians.
Physicians outside genetics have little understanding of this dynamic nature and the limitations of the different methods. Returning genomic data to different ethnic groups, age groups, and levels of education and wealth makes it more complicated. New tests can be rapidly integrated, like NIPT, and disrupt current technologies. And can the FDA diagnostic process keep up with the rapidly evolving genomics data? Genomic data reanalysis and retesting will increase as utility increases. Genomic sequence is only the first of the omics types in this dynamic data that will be incorporated into healthcare. And the challenges across multiple disciplines, governance, and legal requirements make finding solutions problematic. Okay, Robert? We need to switch the slides. Can we switch to, can we move to Robert's slides? Thanks. So under the theme of changes in evidence, we thought we might expand the discussion a little bit by looking ahead to some of the challenging areas and thinking about those. So, you know, to some degree, let's see, is it this one? I'm now the presenter, okay? Spacebar, okay, thanks. To some degree, Howard has touched on this, really focusing on diagnostic exome and genome sequencing. And so what I'm going to talk a little bit about are some of the ideas we've already touched on with regard to secondary findings, medical actionability, penetrance, intermediate and scalable phenotypes, and population screening. And part of the reason is because, I mean, there are two analogies that come to mind: the elephant where we're each touching a part of it, and also the plane that we're, what is it, assembling as we're flying it. I love that one. And part of it is that we have to somehow put scalability and structure into place for the medical care system, which is a blunt instrument.
While at the same time being nimble and being able to iteratively and experimentally do creative clinical research that helps us see the path in front of us. I'm only going to use a few examples from CSER, not nearly encyclopedic, just because it's what I know best. Remember, there are nine different sites. And in terms of the scalability that people like to talk about, you know, hundreds of thousands, millions of participants and subjects, this is pretty small potatoes. Here are some of the sample sizes of the people who are enrolled in CSER. So take this with a grain of salt, but it also allows us to really drill down and explore different kinds of clinical trial models. Pretty much the focus in CSER, as it is in some of these other groups as well, is: what are the medical, behavioral, and economic outcomes that we are focusing on? And what can we find out to thus inform some of the broader, scalable initiatives that we're talking about? Now, CSER's been great at moving beyond the ACMG 56, looking at larger numbers of actionable findings and also finding them at the various sites. You can see that there's a huge range across sites; if you look at the third column on the right, there's a huge range in which different sites are returning different groups of supposedly actionable findings. This is one of the strengths, I think, that allows us to look at things in a granular fashion within a small-scale clinical research exercise. And looking at it a different way, if you take the nine CSER sites and a couple of other sites across the spectrum from NHGRI, you can see that some experimental studies are really returning a single gene or a small number of genes, while some are returning a very large number of genes, up to almost 5,000 as we are in MedSeq. And so we're going to be able to look across these studies and, I think, say something about them without having had to make an a priori decision about how to proceed.
I will also say that several of the sites in CSER, and I know some of the other consortia as well, are looking really rigorously at health economics. David Veenstra's work suggests that with some of these panels, there is an acceptable cost per unit of quality gained. And that's really exciting for secondary findings. Whether this can be transferred into the greater population realm is, of course, a very salient question. I think it's on everybody's mind, and you may already have internalized this, but it's really important to make a very clear distinction, at least in my mind, between the opportunistic screening that we've all been talking about with the secondary or incidental findings and true population-based screening. Opportunistic meaning that there's an infrastructure in place; it's ordered by people who ostensibly know what they're doing; it's relatively cost-neutral because you've already ordered the sequencing; there are some recommendations in place that we've been discussing; and it's all fitting within the medical model, where there's a certain number of secondary findings that are allowed to be cycled back into the clinical encounter. Population-based screening is what we do with current newborn screening. We have nothing like the infrastructure in place; it would obviously add cost; there are no recommendations yet; and it's very much the public health model, with all its attendant, unforeseen downstream clinical consequences. So you might be surprised to hear me say this, but I think we really ought to be gathering very rigorous evidence before we endorse this. Now the problem is that outside this room, there is an extraordinary momentum moving sequencing in this direction. And so we've got to somehow accommodate that enthusiasm and that energy while at the same time collecting the data that's important. I do think randomized clinical trials have their place in this, such as our MedSeq project that we're doing with healthy people.
I do think there's a lot to find there. If you look at 4,600 well-established genes, we're finding that 21% of people are carrying a risk variant of some sort or other for a dominant or semi-dominant disease, and 92% for recessive carrier traits. And we're looking, as all the other CSER sites are, at exactly what doctors are doing with this information. What is their real-life downstream use of such information? Others, such as Jonathan, are looking at clever ways to try to make actionability less of an abstract term and more of a rigorous term. Jonathan's created, for example, a semi-quantitative metric to define actionability. Katrina's group, for example, is not only talking about medically actionable conditions, but also conditions that are unpredictable, mild, adult onset, or that shorten lifespan. These are things that really speak to what systems want and what people want, beyond what we assign as medically actionable. And finally, I'll just show you some exciting results when you try to take this into a large-scale project where people are followed for many years. We've looked at the pathogenic variants in 462 people who were exome sequenced in the Framingham Heart Study and were followed for at least two decades, so that their phenotypes had a chance to emerge. And we're finding that suggestive clinical features are present in a high percentage of them, whereas in those without the pathogenic variants, the percentage is not nearly so high. And this is a remarkably significant difference despite the small sample size; we're seeking to replicate this now. So for the last slide, I would just say I think this idea that we've been hearing about, scalable phenotyping and intermediate phenotyping, is really exciting.
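A semi-quantitative actionability metric of the kind mentioned above is, in spirit, a handful of domains each scored on a small ordinal scale and then summed. The sketch below illustrates the idea only; the domain names and the 0-3 scale are assumptions for illustration, not the actual published metric:

```python
# Illustrative domains for scoring how "actionable" a gene-condition pair is.
# These names and the 0-3 range are assumptions, not the real instrument.
DOMAINS = ("severity", "likelihood", "efficacy", "intervention_burden")

def actionability_score(scores: dict) -> int:
    """Sum per-domain scores (each 0-3); a higher total = more actionable."""
    for d in DOMAINS:
        if not 0 <= scores[d] <= 3:
            raise ValueError(f"{d} must be scored 0-3, got {scores[d]}")
    return sum(scores[d] for d in DOMAINS)

# Example: severe outcome, moderate penetrance, effective and low-burden
# intervention -- scores high on this hypothetical 12-point scale.
example = {"severity": 3, "likelihood": 2, "efficacy": 3, "intervention_burden": 3}
print(actionability_score(example))  # 11
```

The value of this kind of structure is less the number itself than that it forces explicit, comparable judgments across conditions, which is what "making actionability a rigorous term" amounts to.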
One can imagine it sort of segueing and aligning with the new wearables phenomenon, where people are getting biochemical or chemical evidence along with other types of phenotype evidence, and where people are enrolled en masse to be part of our research infrastructure. And we think, the physicians think, it's going to be the norm. I think the physicians will adopt it when they get information that's useful to them, just as they did with NIPT. And I would encourage us, in many situations, before we scale it up, where we could be throwing a lot of arrows but not very accurately, to try these things out in small-scale clinical research studies. We are nimble in that regard, and we find our true aim in that way. Thank you. So Howard, do you want to moderate from there or from back here? I'll do it from right here. But I want to just see if Katrina or Sharon want to add any additional points quickly. Yeah, thanks, Howard. I just wanted to speak to a few of the things that both you and Robert mentioned, based on the experience from our CSER study. So one point I wanted to make is that the value and meaning of the information may be context-specific for the participants. For instance, in our CSER study, we're focusing on returning carrier status, and this is in couples who are interested in becoming pregnant soon, so they have a real near-term interest in learning this information. But some of the other CSER studies are also returning carrier findings, and I think it's not as meaningful in the context of the reason why the sequencing was ordered. And so there may be a very different reaction by people to that same information, depending on their life stage. Even though the information may not change over time, it may just be that they're simply not ready yet to learn it. I also wanted to give an example from our study where we've kind of changed our minds in the other direction, from a pathogenic variant to more of a benign variant.
And that was the example that Gail talked about earlier, where the first time we saw that variant, we were more inclined to call it likely pathogenic. But then, by the time we had seen it in 50 people, we were thinking maybe this is not a pathogenic variant. But we had already reported it out to the first person. And so what do we do? We go back and tell that person that we've changed our minds, in a different direction than I think most of us are thinking about when we talk about how results may change. In our study, we also ask people what they're interested in learning about. People come to their genome information with a preconceived notion of what might be found. And even though we may not think something is particularly actionable for that person, it may be. For instance, if they have a strong worry about Alzheimer's disease risk, it may be a great relief to their minds to get that information back, even though it's not something we would normally think about testing for. And so I think we do need to take that patient perspective into account. And finally, I just wanted to mention the work that we're doing in ClinGen in terms of defining actionability. We have started off in a very specific context, where we're looking at clinical actionability in the context of returning secondary findings in adults. But I think we realize this definition is going to change when we're talking about children, or talking about somatic variants, or other kinds of contexts. And so I think this work is just beginning to define what actionability means. Sharon? Yeah, I just wanted to make two points. One, to feed off of what Katrina just said: we're in, like, the opposite situation. We're returning a very similar number of recessive carrier status findings in our project, but it's in children newly diagnosed with cancer. And my favorite quote, from an interview with one of the mothers, when she was asked about things like carrier status, she goes, yeah, that's fine.
I'll worry about it when I'm a soccer mom again. So, in the context of another illness, they're happy to have the information, but it is of very little import, which is very different from adults considering having children. I just wanted to highlight again, which we already did a bit, the limitations of companion diagnostics, because sometimes I hear that, well, the way we should do this is through that process. And the current companion diagnostic process very much freezes the technology: whatever technology was used for that test is what's approved. And so laboratories often have to import different assays to do that specific mutation, even when they also have a perfectly good other assay for it. But more importantly, I really think that genomics very much outpaces the regulation. Once a drug is approved for one mutation, there's little incentive; even if you find three additional mutations that predict response, there's very little motivation to change the companion diagnostic, particularly for drugs whose patents have already expired, so they're no longer patentable. There's really no financial incentive for that type of research, and I think it really needs to be supported by NHGRI and NCI and the disease-specific institutes. And conversely, there is actually a counter-incentive to do research on mutations that might confer resistance to treatment, and there's really no regulatory process very well designed for that right now. So for example, the EGFR T790M mutation is very well described in the literature to confer resistance to particular inhibitors, and that really doesn't fall within any of our regulatory frameworks right now. So I think we really do have to think about how to support research on mutations that will confer response as well as resistance, to both new drugs and existing medications.
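Katrina's reclassification example earlier, a variant first called likely pathogenic and then downgraded after it turned up in 50 people, reflects a frequency argument that can be sketched: if a variant is observed more often than the disease's prevalence and penetrance can credibly allow, pathogenicity becomes implausible (the reasoning behind ACMG-style frequency criteria such as BS1/BA1). All numbers below are illustrative assumptions, not figures from the study:

```python
# Hedged sketch of a maximum-credible-allele-frequency check for a dominant
# condition. Every input value here is an illustrative assumption.

def max_credible_allele_freq(prevalence: float,
                             allelic_contribution: float,
                             penetrance: float) -> float:
    """Upper bound on the allele frequency consistent with causing disease:
    disease prevalence, scaled by the fraction of cases attributable to this
    allele, divided by 2 (heterozygous carriers) and by penetrance."""
    return prevalence * allelic_contribution / (2 * penetrance)

# Observed: the variant turned up in 50 of 5,000 sequenced individuals.
observed_freq = 50 / (2 * 5000)                   # allele frequency 0.005

# Hypothetical disease: 1-in-10,000 prevalence, this allele explains at most
# 10% of cases, 50% penetrance.
threshold = max_credible_allele_freq(prevalence=1 / 10_000,
                                     allelic_contribution=0.1,
                                     penetrance=0.5)

print(observed_freq > threshold)  # True: too common to be pathogenic
```

This is why accumulating observations across many sequenced patients, the virtuous cycle the panel describes, is precisely what forces classifications to move in both directions.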