First is Ned Calonge from the Colorado Trust. I'm really, really pleased and honored to be here. I get into rooms like this, and I'm always reminded that I'm not a geneticist, and Muin Khoury can't make me one. I'm just a simple country clinical epidemiologist, kind of turned philanthropist. But I've been working with EGAPP since its inception and wanted to talk a little bit about the other end of the spectrum.

Listening to the discussions this morning, I was reminded of talks I give to medical students and others about evidence-based medicine. I have a slide that I worked on with David Eddy that says something like: promising, innovative, new, cutting edge; none of those are synonyms for effective. And now I'm going to add actionable to that list. I looked up the definition of actionable, and it says information on which you can make a decision. It doesn't say the decision is right or that it works. It just says you can make a decision.

So I want to move down the spectrum a little bit to unanswered questions. In 2004, Muin Khoury and others were asking these questions about genetics. How valid and reliable is the information? What are the benefits and harms associated with actually using these tests in clinical practice? What action should we take on the results? And what should the response be from the medical community, public health, policymakers, et cetera? And here's a slide that Muin used, I think, just a week or two ago that raises the same issues, although there are a lot more articles now, and people saying things like we need to quit pushing genetics into medicine (this is Jim Evans) and talking about the evidence dilemma and waiting for the revolution. So I think we're still facing a lot of the same issues.

So I want to talk about EGAPP: Evaluation of Genomic Applications in Practice and Prevention. This is a CDC project with a steering committee made up of other national partners. It's non-regulatory. I always like to say it's an independent, non-federal, multidisciplinary working group. Not that anyone ever notices the word independent, because the USPSTF, the Preventive Services Task Force, is independent and yet is also known as the Secret Government Death Panel. So we're not a Secret Government Death Panel. We're just a bunch of people trying to make some sense out of these issues. We integrate existing processes for education and appraisal. We minimize conflicts of interest. And we want to be evidence-based, transparent, and publicly accountable. You can find the work we've managed to do at egappreviews.org.

So how did we put this together? We wanted to integrate knowledge and experience. We took the genetic test assessment framework from the ACCE framework and then developed strategies to look at the quality of studies. That was a little hard for the analytic validity area, but we took it on and borrowed information from the USPSTF. We also borrowed systematic evidence-review strategies from AHRQ's Evidence-based Practice Centers. So we incorporated things that people already did. I wouldn't say that EGAPP pioneered new modeling strategies, but we helped develop and used them to address different evidence gaps, specifically in the area of GWAS and the unique problem of small relative risks. And then we developed clinical recommendations with, hopefully, clear linkage to the evidence. There's a process that we put together, and this is it. And we try to be transparent with it.
So we select a topic, and we define the clinical scenario. This is important, because this meeting, I think, is vital: the clinical scenario that we framed our work in has changed. Our clinical scenario has always been: should you order the test and use it to guide therapy to gain an important health outcome? And now the question is: the test has already been done; what are you gonna do with the results? So we're trying to adapt to the changes, and we'll talk a little bit about that at the very end. Then we evaluate the quality of the evidence, synthesize the literature, determine the balance of benefits and harms, and make a recommendation.

I can't do this talk without showing an analytic framework. I have to apologize, but it's how we think about the work. It starts on the far end, saying, well, here we have adults with non-psychotic depression, and you're thinking about therapy with SSRIs. Number one would be the overarching question: basically, is there a randomized controlled trial showing this actually improves health outcomes? Given that we rarely get that, we put together this chain of evidence and ask questions. Number two is: does the test actually measure what it's supposed to measure? Number three is: is what it's measuring actually clinically valid? Is it related to something that might be important? And then numbers four and five get you to the issue of whether it shows clinical utility. Or, in the vernacular of where the money is today, is there a patient-centered outcome somewhere down the line?

Those are the questions in the framework I talked about. Analytic validity: is it a good test? Clinical validity: does what it measures translate to something? That includes all of the issues of clinical positive and negative predictive values, sensitivity, specificity, penetrance; all of those are wrapped into clinical validity. And then finally, clinical utility: does this translate to some important health outcome somewhere down the line, and do the benefits outweigh the harms?

I have to add this last issue because I've drunk the CER Kool-Aid, the comparative effectiveness research Kool-Aid. This is the next question you should ask, and as I was thinking about the CVD risk factors, this is where the money is, right? Does the availability and use of this information improve health outcomes, in terms of net benefit, when compared to usual care? Somebody already said it: we're actually pretty good at this. Not great, but pretty good. So what's the marginal benefit? And once I've identified that (let's just assume there is marginal benefit to all the things we've been talking about), what is that marginal benefit at the expense of? Now, as the cost of whole genome sequencing comes down, it's not the monetary cost; it's really the cost of the workups, the diagnoses, the treatments that may or may not be needed, and then the harms associated with those.

So we have completed some recommendations. I loved the pretest; whoever put it together included at least three items that we've already made recommendations on. One is the SSRI issue, where the recommendation is that the evidence is insufficient for or against CYP450 testing to inform SSRI selection, and use is discouraged. So we actually said the evidence is insufficient, and we're not certain you should do it until you actually know the answer. And this was interesting, because the issue of clinical validity, which is what we've been talking a lot about here, was where the hole in the evidence was. So you can identify where the gaps in the evidence are.
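To make the clinical validity arithmetic concrete: sensitivity, specificity, and prevalence combine into predictive values through Bayes' rule. Here is a minimal sketch with invented numbers; nothing below comes from an EGAPP review.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Hypothetical test: 95% sensitive, 95% specific, for a condition
# with 1% prevalence.
ppv, npv = predictive_values(0.95, 0.95, 0.01)
print(f"PPV: {ppv:.1%}, NPV: {npv:.1%}")  # PPV: 16.1%, NPV: 99.9%
```

The same test characteristics give a PPV of about 16% at 1% prevalence but about 68% at 10% prevalence, which is why clinical validity can't be judged apart from the population being tested.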
Here's another one on the list: the evidence is insufficient to recommend for or against UGT1A1 testing (I don't have all the letters up there) in colorectal cancer patients to be treated with irinotecan. This was an interesting one, because the clinical validity is there, but the clinical utility is almost set on its head. It was almost like, yeah, you should run a higher risk of severe adverse reactions, because you actually get a better response rate. And because we couldn't develop a clinical scenario that let us make a confident recommendation, we left it in the insufficient area. But that doesn't mean there isn't promise coming.

And then finally, the evidence is adequate to recommend against routine testing for Factor V Leiden in adults with idiopathic VTE. In fact, the evidence showed that long-term anticoagulation benefited people almost equally, whether or not you had the variant.

And then, back to CVD: the evidence is insufficient to recommend testing for the 9p21 genetic variant, or 57 other variants in 28 genes, to assess risk for cardiovascular disease in the general population. And it's this issue that you just can't improve the receiver operating characteristic curve, the prediction of outcomes, very much with the very small relative risks associated with these SNPs. So those are the issues you saw if you actually answered the pre-test, the pre-work questions.
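That ROC point lends itself to a quick back-of-the-envelope check. The sketch below computes the AUC of a risk classifier built on a single binary genetic marker; the carrier frequency, relative risk, and baseline risk are invented for illustration, not taken from the 9p21 literature.

```python
def single_marker_auc(carrier_freq, relative_risk, baseline_risk):
    """AUC of a risk classifier that uses one binary genetic marker.

    Carriers are assumed to have baseline_risk * relative_risk
    probability of disease; non-carriers have baseline_risk.
    """
    p_disease = (carrier_freq * baseline_risk * relative_risk
                 + (1 - carrier_freq) * baseline_risk)
    # Carrier frequency among cases and among controls (Bayes' rule)
    p_case = carrier_freq * baseline_risk * relative_risk / p_disease
    p_ctrl = carrier_freq * (1 - baseline_risk * relative_risk) / (1 - p_disease)
    # AUC = P(case scores above control) + 0.5 * P(tie)
    return (p_case * (1 - p_ctrl)
            + 0.5 * (p_case * p_ctrl + (1 - p_case) * (1 - p_ctrl)))

# Invented numbers: a common variant (50% carrier frequency) with a
# relative risk of 1.3 against a 5% baseline risk.
print(round(single_marker_auc(0.5, 1.3, 0.05), 3))  # ~0.535
```

A relative risk of 1.3 lifts the AUC only to about 0.53, barely above a coin flip, which is the point about why panels of small-effect SNPs add so little to risk prediction.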
So let me end with where this translates, because it feels a little as if we've been standing on the sideline watching your research, and the horses have left the barn. We've been at this since 2005. I think we put out our first recommendation in 2009. And if you summarize it, we're doing about one gene test a year. We will never catch up. Okay. So you're a little bit ahead of us there.

But is there some relevance? I'm gonna tell you that, in my heart of hearts, we continue to have relevance, from the standpoint that when we finally put these into clinical practice, and we use them and we recommend them, we should have good evidence. That would be tier one, according to Dr. Khoury. Tier two would be: well, we have clinical validity, so it's promising. It's actually associated with disease; it's actionable. So we ought to think about how to handle those differently. And then tier three: discourage use.

The other way to put it (this is Jonathan, who's gonna speak this afternoon, and Jim Evans and Muin; they're all on this project) is to talk about bins: bin one, two, and three. And our methods actually translate directly to that. So if there's poor evidence for analytic validity: you guys, the laboratories in the room, just fix that, would ya? Just get that next-generation sequencing working so I don't have to worry about analytic validity, because I don't like to talk about it. If you don't have good evidence for clinical utility... I forgot to go back; they should never hand me this tool. Let me go this way. Poor evidence for clinical validity: this is bin three, or, Muin, it's your tier three, right? Let's put them off for now; we need more research; don't use them clinically. If we move on down: we have clinical validity, but we don't know if it provides clinical utility. This is bin two, or tier two, okay? We need more research, but it doesn't mean don't do it. There's promise here; this is the promise of personalized medicine. And then if you have evidence for clinical utility: if it's positive, let's figure out how to translate it into practice, and if it's negative, let's figure out how not to do it.

So the practicality in binning is that clinical utility studies are expensive and time-consuming. We do have to worry about clinical validity, and you heard a little about that earlier: because we're looking at numerators most of the time and we don't have good cohort control groups, there are biases that can be introduced. But on the other hand, looking at clinical validity is relatively easy. So we should be able to say: boy, if you don't have at least clinical validity, let's put you into bin three, let's say no early, and we'll let the researchers in the room figure out whether or not those are useful.
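That walkthrough is, at heart, a small decision rule over three evidence questions. Here is a hypothetical sketch of the logic; the bin labels echo the talk, but the function, the argument encoding, and the output strings are invented for illustration and are not an EGAPP instrument.

```python
def assign_bin(analytic_validity, clinical_validity, clinical_utility):
    """Hypothetical triage of a genomic test into the talk's bins.

    Each argument is True (adequate positive evidence), False
    (adequate negative evidence), or None (insufficient evidence).
    """
    if analytic_validity is not True:
        # Gating question for the laboratories: no bin until the
        # assay measures what it is supposed to measure.
        return "establish analytic validity first"
    if clinical_validity is not True:
        # Bin 3 / tier 3: no demonstrated association with disease.
        return "bin 3: discourage clinical use; needs research"
    if clinical_utility is None:
        # Bin 2 / tier 2: valid association, unknown utility; the
        # 'promise' zone (conditional use, coverage with evidence
        # development).
        return "bin 2: promising; more research, conditional use"
    if clinical_utility:
        return "bin 1: translate into practice"
    return "bin 1 (negative): recommend against routine use"

# Example: the Factor V Leiden scenario above, where the association
# is valid but the utility evidence came back negative.
print(assign_bin(True, True, False))
```

The ordering mirrors the talk: analytic validity is a gating question for the laboratory, clinical validity separates bin three from the rest, and clinical utility evidence separates bin one from bin two.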
So, I'll end there and see if I can answer some comments or questions.

For any of those categories where you've decided it's not actionable now, does any of that change when you've got the sequence already? Is any of that a cost consideration, that the cost of the test doesn't justify a small effect, but maybe if you've got the sequence already, there's enough effect to justify using it?

Yeah, I think this is the crux of the conference, right? We know there's an association, and as you think about it, that does make it actionable. And so then the question is: are there additional costs or harms associated with that action? One of our recommendations, which we take a little bit of grief for (okay, a lot of grief) is Oncotype DX in breast cancer. There's evidence for good clinical validity, and when you get to the utility issue, it actually looks good as well. But remember, you use it to not use a therapy, right? So think about that a little bit. Is there a risk to not using the therapy? There are certainly benefits: you don't expose women to chemotherapy agents that have adverse effects. But the clinical benefits of the therapy, while small, are not zero. So there are some harms, albeit small, associated with the use of that test. I always want to bring up that actionable actions have two sides, right? And we have to be disciplined to make sure we're considering not just the economic costs, which I'm confident now are going to be minimized, but the downstream cost effects of the test itself.

One of the things that comes up repeatedly in our conversations is the definition of clinical utility, how that changes based on who's talking about it, and the piece of personal utility. I think that's going to be a big issue to wrestle with and come to consensus around: what does actionable really mean in a clinical context, and how do we incorporate personal utility into that definition?

I think it's a great question. Thanks for asking it. The issue to me is that when we were all in medical school (and I know you all heard this, I'm almost certain) someone told you: don't order a test unless the results are going to change what you do. Well, I will tell you that bar has moved, and some of it has moved in the area of personal utility, from the issue of wouldn't it be nice to know, right? Or, yeah, I'd kind of like to know that. And there's not an in-depth understanding of what that phrase translates to. And I will tell you, it translates differently in Canada, for example, than it does in the US. They actually asked parents in Quebec (this is fascinating): if you couldn't do anything about it, would you want the results? In the United States, people say, well, of course I would want the results. And the people in Quebec said, nah. So I think wrestling with this issue in our culture and our discipline of medicine is gonna be really important, and it's not gonna have an easy answer. I mean, once you have the genome, right, with the 3 billion base pairs, or the 3,000 tests we can identify, how much do you want to know? If I only learned about one variant a day, it'd take me a few years to get through my genome. Uh-oh, sorry.

Can I ask you: the EGAPP methodology is very rigorous; it's like a typical U.S. Preventive Services Task Force or similar methodology, where you're really looking for very strong evidence. There's discussion about whether that's the right metric for genetics and genomics. Can you comment on your thoughts on that topic?

Right. So I try to think about what aspects of genetic testing should make it an exception. One is that, boy, it's happening so fast. For me, that's not a good reason to use a different metric. So then I say, well, do I believe that the potential downsides, the harms, are somehow less? I think that's an askable and answerable question, for which there's not a rich database. There are some times when we've looked at the negative side of information and didn't find significant harms. We didn't find no harm at all; there was anxiety, labeling, problems, but we found less than expected. And if we can actually think about the relative risks of premature adoption versus late adoption based on evidence, and look at those trade-offs and take them on head-on, then I think we can better answer that question. If it's screening, right, and it's gonna cause me to do something that could be harmful, then I can't think of a reason why we would apply a different metric. And screening is in asymptomatic, otherwise well people, which is what we all wanna be. That's what I wanna be: an AWOP, an otherwise well, asymptomatic person.

Yeah, I think this gets back to the point you were making. Much as we were talking about how you apply screens to variants to determine whether you can even begin to look at them from the perspective of biological relevance: given all of the information we have, we have to develop some rapid screens to be able to say we don't have to look at this whole bunch of them, because we know, based on very simple criteria we can apply, that they just don't have enough evidence. It makes no sense to apply a huge, expensive evidence-review process to something we know going in is not gonna have the evidence we want. Now, there's gonna have to be some nuance there, because for rare or ultra-rare diseases the evidence is different than for big, common conditions. And I think this is one of the areas where I know EGAPP has certainly thought about it, but hasn't necessarily commented on how it might apply that type of sorting process to really get at the ones we need to focus our energy on.

Well, I think that's really great, Mark, and we're looking for new members, so I hope you have your application in. If EGAPP is gonna be relevant going into the future, I think this is what we need to do. We need to embrace the issue that things have changed. The clinical scenario is different.
Can we still provide some value in a way that will actually keep us funded? Can we provide some value moving forward? I think it's a really important issue. You also, as you were talking, reminded me of something else; it was a great thought, but now it's gone. Sorry; as my mom would have said, it must not have been that important.

That was really, really an excellent overview. One thing that has struck me in hearing the various talks is that I've heard as many different definitions of evidence as there are speakers today. But I will say that the EGAPP standards are much like, for example, the updated guidelines NHLBI is putting together on prevention of hypertension, obesity, et cetera. Not genetics, but those are the kinds of standards EGAPP is using. So it represents one extreme, but I think one needs to consider that that's what's being used in clinical guidelines at the current time.

So now you've reminded me, and I'm really thankful for that. Let me take two things. One is to quote Jim Evans, and hopefully I'm not stealing any of Jonathan's thunder: what's in bin three today could be in bin one tomorrow, right? We just don't know yet. There are a lot of variants of unknown significance. But to your issue, we've been talking basically around bin two today: clinically actionable, with or without strong evidence of clinical utility. I think if we could be smart and disciplined enough (I was trying to bring this forward; I just rotated off the advisory committee for newborn screening), if we could be disciplined enough to put it in the field and say we're going to do, yeah, coverage. I was trying to think of who pioneered it; it was the folks in California. Coverage with evidence development, and be disciplined enough to say, when we find out moving forward that it doesn't work, we're not going to do it anymore. That's the problem. If we could be disciplined, then I think bin two is the area that's asking for that. Let's talk about conditional use, use in research settings, coverage with evidence development, with the discipline to stop somewhere down the line and say, you know what, this doesn't work; let's not do it.

I mean, that's a good place for me to stop.