So this spring, the National Academies published a report titled An Evidence Framework for Genetic Testing. The Academies were charged by the Department of Defense to "examine the relevant medical and scientific literature to determine the evidence base for different types of genetic tests for patient management." I think that's a quote. At the May Council meeting, there was a request from Council to have a report on this report, and lucky for us, Wendy Chung ended up being on that committee, and she graciously agreed to present today. Thanks, Rudy. Okay. So, just to be very clear, this was obviously a committee's work, and I want to give credit: Jonathan Berg and David Valle actually put together the slide deck I'm using, so credit to them. These are the members of the committee, and as you can see, it's an illustrious group. For those of you who don't recognize all of the names on the list, just to break it down by specialty, it was quite a multidisciplinary group, as was necessary, with folks ranging from genetic epidemiology, basic science, and gene discovery all the way through bioethics, health economics, and everything in between. I can say I personally learned a lot from interacting with such a varied group. This is the statement of task exactly as we were given it by the Department of Defense, and in some ways it gave us a relatively narrow framework in which to work. It was, as Rudy was saying, to look at the scientific literature in this particular case, to provide recommendations to advance the development of an evidence framework, and then also to think about a decision-making framework. So I'm going to go through those three different aspects of our job description.
The first thing we needed to do was identify the definitions, and I'll go through a couple of those, and think about the clinical applications and clinical utility. Then we thought about the process by which evidence is generated, how you can actually aggregate that evidence, and then a decision-making process for using that evidence. In some cases, as you'll see, there's a lack of evidence, which is not surprising, and that makes it difficult to make decisions when they're not informed by good data. It won't come as any surprise to this group what the definition of genetic testing is; it was a pretty standard definition. Whatever you're looking at, whether it's DNA, chromosomes, or RNA, the test is being used for hereditary or genetic purposes. Very specifically, and especially following on this TCGA presentation, this did not include somatic mutations. It was out of scope to think about cancer or somatic mutations; this was focused solely on heritable conditions. It also was not ancestry testing or anything like that; it was testing for specific medical applications and medical purposes. There were three use cases, which again won't be surprising to this group. First, diagnostic testing, once a patient is already symptomatic. Second, predictive testing, for someone who might be at future risk, whether identified, for instance, through family history or through population-based screening like newborn screening. And finally, reproductive testing, done for the purposes of family planning.
The one thing that was somewhat different here, when I think about our Department of Defense task as opposed to thinking about this more broadly, is that the DOD has very specific use cases and perhaps very different value systems than a healthcare system might have, and that very much constrained and defined what we were doing. Just as an example, when it comes to reproductive issues, the DOD's value system might lead to a different outlook on reproductive decisions than you might have for the population at large. Now, in thinking about an evidence framework, there are certain challenges. In many cases, as I said, there isn't direct evidence, in part because I think we're still figuring this out. We're still learning about some of the genes for these conditions, about penetrance, and about the implications of interventions. So there are many limitations, and I'm going to go through specific limitations in each of three areas. By analytical validity, I mean: if I go into the laboratory and take a bit of DNA, how accurate is the assay? How analytically sensitive, specific, accurate, and reproducible is it, especially from laboratory to laboratory? This is very much a technical aspect. Clinical validity is thinking about, for instance, if I take all the women with breast cancer, what percentage of those women are going to be positive for BRCA1 or BRCA2? So clinical validity is how well established the connection is between the gene and the disease, or the clinical use case.
Clinical utility is thinking about how I take the information I'm getting back from the genetic testing and actually use it in terms of care, implementation, improved outcomes, and clinical management for that particular genetic result. There are some challenges here. When we think first about analytical validity, and I can speak from my own experience running a clinical laboratory, one issue is simply having gold-standard samples to test your assay against: validated samples to make sure your assay can detect what might be, for whatever reason, a technically very difficult mutation to detect. Some of these are available, for instance, through Coriell or other biorepositories, but we don't have nearly as many as we would like, nor do we have unbiased data sets to analyze. From a technical point of view, there can also be issues when you're changing over from one assay to another. Some of the evidence here comes from what we do all the time in the laboratory, for instance CAP proficiency testing, where we exchange samples and can test in a blinded way. That's some of the best data we get, but one of the gaps is that we often don't have this aggregated into single data sources that we can go back and query. In terms of clinical validity, again, this is sometimes difficult because we're still in the process of discovering the genes for diseases and learning how to interpret the large number of variants being identified.
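To make the analytical-validity metrics above concrete, here is a minimal sketch of how a laboratory might score an assay against a blinded validation panel. The sample IDs and calls are hypothetical, purely for illustration; this is not a method from the report.

```python
# Sketch: scoring an assay against a blinded validation panel.
# Sample IDs and truth/call values below are hypothetical.

def analytical_metrics(truth, calls):
    """truth/calls: dicts of sample ID -> True (variant present) / False."""
    tp = sum(1 for s in truth if truth[s] and calls[s])
    tn = sum(1 for s in truth if not truth[s] and not calls[s])
    fp = sum(1 for s in truth if not truth[s] and calls[s])
    fn = sum(1 for s in truth if truth[s] and not calls[s])
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else None,  # fraction of true variants detected
        "specificity": tn / (tn + fp) if tn + fp else None,  # fraction of negatives called negative
    }

def reproducibility(run1, run2):
    """Fraction of samples where two runs of the same assay agree."""
    return sum(1 for s in run1 if run1[s] == run2[s]) / len(run1)

truth = {"NA001": True, "NA002": False, "NA003": True, "NA004": False}
calls = {"NA001": True, "NA002": False, "NA003": False, "NA004": False}
print(analytical_metrics(truth, calls))  # sensitivity 0.5, specificity 1.0
print(reproducibility(calls, calls))     # 1.0
```

The point of the speaker's gold-standard-sample concern is visible here: without validated `truth` labels, none of these metrics can be computed at all.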
And so, as a scientific community, we will continue to build the evidence base on things like penetrance and variable expressivity, and we've already heard about efforts like eMERGE that will start to answer those questions, but this is still very much a work in progress. Again, how you aggregate the data, and whether there are databases that can effectively consolidate this information and make it usable in a user-friendly way, were some of the challenges we thought could be dealt with. Finally, clinical utility is perhaps in some ways the most difficult to aggregate, because there's a lot going on in clinical care right now that is impacting care, but we're simply not collecting the information on it. Carrier testing and cancer testing, just as two examples, are being used a lot in clinical care, but we're not aggregating that information, and it's difficult to do so across healthcare systems, so we can't really understand how much testing is actually changing clinical care or what the clinical utility is. Because of that, we oftentimes have a chain of indirect evidence based on siloed data and, in particular, anecdotal cases, but we don't have as much as we would like. The gold standard would be randomized clinical trials, but those are difficult and costly, and the question, in part, is who funds them. For a drug, it's the pharmaceutical company that wants to bring the drug to market that pays for those studies. What about genetic testing laboratories as an alternative?
They have historically not funded such RCTs; in fact, that hasn't been part of the culture or financially viable. So that's been a difficulty in having those data, and oftentimes the data are simply lacking. Finally, I'll just say that one thing the committee thought was important is what some of us have called personal utility: ways the information is used that don't necessarily directly impact clinical care. As an example, for many people with undiagnosed disorders, being able to end the diagnostic odyssey can be very valuable. We do, as a value statement, believe that has value and utility, but it's not something, in eMERGE for instance, that we can easily pull out as a measurable outcome. Even though that diagnosis might obviate the need for further diagnostic testing, that's not something that's routinely collected. So, in looking at this and trying to make specific recommendations, again, the context was very specifically what the Department of Defense can do, and just to put some information around that: the Department of Defense, when you think of their purview, includes Veterans Affairs, so all of the veterans, and then the family members. Many of them are getting their care literally from the VA hospital system, but some are also getting their care throughout the entire United States through TRICARE. So they could be coming to me, for instance, even though I'm not employed by the Department of Defense; they could be going outside the official DOD healthcare system, insured through TRICARE, and getting their care that way.
But again, thinking specifically about what the committee should recommend to the DOD: our recommendation was for them to use whatever opportunities they had to take the information they already have within their healthcare system, aggregate it, and essentially do a self-study to understand what information could be used to improve clinical practice and clinical outcomes. Within this, the high-quality evidence we would like to see is RCTs, but we realize that's difficult and not always practical. As we're doing this, we wanted to collect some of the data streams that aren't usually collected, such as clinical utility and personal utility, and also support discovery efforts that would solidify the relationships between variants and phenotypes. One of the biggest things, though, is establishing the infrastructure to track and collect those data, and I think that's in many ways the most difficult thing to do. Like I said, it's not simply within the VA system, although the VA system is the easiest place to start collecting those data; for the DOD, care extends very widely throughout many other healthcare settings, and collecting all of that into a searchable database would definitely be one of the ideal things to do.
So we tried to think about what else you could do. Besides querying the VA healthcare system, are there other data coming in, such as claims data? Are there partnerships with routinely used clinical diagnostic laboratories, where, if you're using Lab X, you could take all the members serviced by Lab X and pull out all of your data from that laboratory in one fell swoop? Those were some ideas for accessing and aggregating data for the DOD, and with that, documenting the ways that information was used in clinical practice and measuring patient outcomes, at least for those patients served by the DOD. Within this, we also realized there were many gaps and opportunities, and that this would require thinking forward. We wanted to make sure the DOD was at least using the infrastructure the rest of the genomics community had already built, for instance, making sure that variants were deposited in ClinVar. One could, for instance, mandate that any laboratory that was going to be paid by the DOD would have to deposit its data in ClinVar, so that we were at least getting data aggregated in the larger sense for the community, simply by using the power of the purse. In a similar way, if other databases were developed, that would again be a requirement for any laboratory used by DOD participants. But we realized that none of this was something Veterans Affairs had to deal with alone; this is part of a larger system involving other funding agencies, NHGRI being one of them, as well as PCORI, AHRQ, and other funders who could support this kind of work.
The last of our tasks was, in a very practical sense: if you were at the DOD and a new test was being suggested, if someone came in and requested coverage of a test for a particular clinical use case, how would you decide, yea or nay, cover or not cover? As we thought about that, we realized obviously that many others have thought about this before, both in the United States and abroad, so we went through a systematic review of the other systems that have been put in place to evaluate these questions. I'll go through this step by step, but very briefly: you would have to think about the specific use case under which the test was being done. You would need to think practically about how much of a burden the test was. Was it going to be done once, for one person, ever? If so, you might spend a different level of intellectual energy on that use case than if it represented a significant financial burden, a significant portion of your budget; in that case you should really take the time to do it very thoughtfully and carefully. And if others have already evaluated the test, you don't need to reinvent the wheel; you can simply go back to those prior evaluations. Similarly, if your group took the time to make a decision, that information should be transparent to the rest of the community, so that others don't have to reinvent the wheel when the group at the DOD has already made a thoughtful decision. So within this, and I'll go through it step by step: you define the problem, then you do a triage. Do you actually need to do the review?
If you do need to do the review, you go through it, make a decision, put that decision in a repository, and make it very transparent, so that if others wanted to understand the thought process, it would be readily available, both for patients covered by the DOD and for the rest of the community. Importantly, decisions would sometimes need to be revisited and revised; we're certainly not at steady state in terms of the information or the clinical utility. There may be circumstances in which you make a triage decision of no, but that decision might change over time as things evolve. And as you do this, as I said, you make the information accessible to the community. The first step, which is important and which I don't think people necessarily think about carefully enough, is that when you're evaluating a genetic test, it's not just the test itself, but also the clinical circumstance in which that test should be applied. As an example, with hereditary cancer testing there's one use case for people who have already been diagnosed with cancer, in a diagnostic sense, and another, predictive use case for individuals who don't yet have cancer, and those two scenarios might be very different. Or, for reproductive testing with noninvasive prenatal screening, there's one case if you're talking about women who are over 35, which might have different parameters and different utility than testing all women regardless of age.
So you define the specific test being performed, all of the analytical factors that go into that specific test with the specific technology used, and, as I said, the clinical scenario. Defining that is important because one test applied to different clinical populations might have different outcomes. The next step is a very quick triage: is it worthwhile? If, in these particular circumstances, it very clearly doesn't pass that sniff test, you can triage it immediately to a no. If it does appear to be worthwhile, is there already evidence for it? Again, if that evidence exists, don't reinvent the wheel. Then think about the burden in terms of the total cost of doing the review, which is an important decision, especially in a cost-constrained system like the DOD's. If there is insufficient evidence already, the idea is to triage for a rapid review, and if the answer is very clear up front, make the decision there. In some cases, as I said, it may not be worth the time and effort to do a full review; if it's one case and the utility is obvious, it might make sense to have a triage individual who can say, yes, we'll cover it in this one particular case. If there ends up being greater demand in the future, it might require a more formal review later, but for this one case, we would go ahead and approve it, as an example.
If the review does come to a full study and analysis, a formal review, you think about the outcome of the test and whether there is any alternative besides genetic testing, and then you approach something like an EGAPP-style process to work through it. As this is done, you also think about the potential harms if the test is not approved, so you're weighing cost and benefit, and then you set values in terms of the healthcare system as part of making that decision. Then, as I said, you make sure the decision is publicly available in a repository, so that others can use it, so that you ensure consistency across the decisions made, and so that the process is iterative and continues to learn as information changes over time. With this, I think one of the things we realized is that there's oftentimes a data gap in informing some of these decisions, and that continues to be one of the challenges. It's difficult, especially in the time frames within which certain decisions need to be made, to have that evidence in place, but supporting systems for gathering those data is perhaps one of the most important recommendations that came from the group. This is simply a summary of what I've just stated, but we hope it would provide at least an evidence framework, a rigorous, robust way of thinking through this, assuming the data are available to inform it.
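The decision flow just described (define the test plus clinical scenario, triage, reuse prior evaluations, formally review high-burden requests, and log every decision transparently) can be sketched as a small program. All names, thresholds, and decision strings below are hypothetical illustrations, not part of the committee's framework.

```python
# Sketch of the coverage-decision flow described in the talk:
# define -> triage -> (reuse prior evaluation | informal approval | formal review),
# with every decision recorded in a transparent repository.
# All names and thresholds are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestScenario:
    test_name: str                        # the specific assay and technology
    clinical_context: str                 # diagnostic, predictive, or reproductive
    expected_volume: int                  # how many members would likely use it
    prior_decision: Optional[str] = None  # an existing evaluation, if one exists

repository: dict = {}  # transparent decision log, keyed by (test, context)

def decide(s: TestScenario) -> str:
    key = (s.test_name, s.clinical_context)
    if s.prior_decision:                  # don't reinvent the wheel
        repository[key] = s.prior_decision
    elif s.expected_volume <= 1:          # one-off request: triage individual may approve
        repository[key] = "approve (single case, informal triage)"
    else:                                 # significant burden: send to formal evidence review
        repository[key] = "formal review required"
    return repository[key]

decide(TestScenario("BRCA1/2 panel", "predictive", expected_volume=5000))
decide(TestScenario("rare metabolic gene", "diagnostic", expected_volume=1))
print(repository)
```

Note how the same `test_name` under a different `clinical_context` gets its own repository entry, mirroring the speaker's point that one test applied to different clinical populations is effectively a different decision.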
I think the challenge for many of us is how you build the plane as you're flying it: how do you collect data when things are changing so rapidly and you need to make decisions on the fly? That's the challenge, but setting up data-gathering systems that require the least amount of effort, that happen automatically and aggregate data essentially passively, was one of the most practical things to come out of this. So I will stop there, and I'll be glad to take questions. Yeah, yeah. I think it was you. Oh, it was me. So wow, that's a big report. I downloaded it, and I'm glad to have your CliffsNotes. But I'm also concerned about a couple of things, and I want your opinion. First of all, this seems to indicate that there's still a lot that is not known; well, I guess that's no surprise to this group. It's also very hard to get funding for systematic reviews, really excellent systematic reviews; you can't get funding. And it's hard to publish them when they don't find much, as I've learned myself. So given that, I guess the bottom line here is that, on the quick and dirty, you would passively make sure that you're able to collect information as it exists now, right? And just keep track of it all. But that's not very systematic, and I'm not quite sure. I guess the population you're talking about is the military population. Right, the military population. And of course, that's a huge population, but it's also got some biases. So I guess I wanted comments. Am I right in thinking, and my general reaction is kind of, whoa, this is really tough. Yeah, no, I mean, it is very tough.
I mean, this is the difficult part of this end of the implementation pipeline: to me, it's having systems in place that are automatically gathering the data you need to inform the evidence review you have to do. Quite frankly, I agree it's difficult to get funding for the review of the evidence; I think you're going to have to have committees that are actually mandated, funded, and supported to do that. The greatest challenge is having the evidence in place to do those systematic reviews and look at efficacy, and I frankly don't think we're going to be doing many, if any, randomized clinical trials that go head to head, you do genetic testing versus you don't, that type of scenario. Just as an example, I don't think we have as much evidence as we need for some of the practices that are already in clinical use; they're done on sort of a hunch, without as much rigor and evidence as we would like to have. Ultimately, payers are going to push back, and that gets us back to the point where we need to show me the money, right? Show me the clinical utility, the outcomes. The challenge is who funds that; no one's jumping up to fund it at the scale I think is needed. So the question is where do you start. What are the pain points? Tackle some of those, and use that as a mechanism for triage and prioritization, perhaps. And just one extra thing: I think the most difficult case is when you're dealing with healthy populations, generally. There are certainly data, whether perfect or not, but most of the data are collected from families that already have these conditions.
But- Well, exactly as you say, for healthy populations the problem is you're waiting a long time for a bad outcome to happen, and it oftentimes takes a long time to get to that endpoint. Right, it's the penetrance issue. Like, let's just follow people till they're 100, right? Right, right. Thanks, Wendy, that was a great overview. So I did notice in there the internal problem of complaining about lack of evidence of clinical utility, and I quickly saw that, as an incentive to generate some of that evidence, there was provisional coverage, what I might call coverage with evidence development. Right. You'll give provisional coverage. But as Gail mentioned, for healthy populations there's such a long lag time that doing prospective studies is difficult. So can you say a little bit about the idea of using modeling? Was there acceptance, or more discussion, of lowering the evidence threshold and accepting the use of modeling? You know, interestingly enough, we didn't even talk about that. We were going very specifically on actual data to inform those decisions, not projections, not modeling. It's a great suggestion; we didn't even tackle it. When a new law is passed, often, not always, the first paragraph of the law exempts the federal government from the law. Do we know if military DOD employees are protected by GINA? Did the committee cover that protection? It's a big issue clinically that individuals in the military are not protected by GINA. We can argue about how strong GINA is, and there is some similar language, but still, it's not GINA. This is a big issue for teenagers undergoing genetic testing who are considering careers in the military. So my question is, did the committee address the use of genetic testing to discriminate against military employees for employment and promotion?
That would have been out of scope for us, in the sense that we were focused specifically on clinical utility, not on personal concerns about privacy or discrimination. So, medical geneticists deal with this daily, weekly, on an individual basis when they order tests for a patient, and they often have to defend to the third-party payer why they ordered a test. One reason that third-party payers don't readily accept is that the test will be useful for other members of the family. Was that addressed as a clinical utility issue? Sure. Yes, we did address that as a clinical utility issue. Just as an example, to give it a little more color: one of the issues the DOD had been worried about is that a member of the military could go and request a test at one center, and someone there would review it and say, no, I'm denying it in this particular case, while someone else in Arizona could go in with a similar situation and their local person could approve it. So, number one, there wasn't consistency. I happen to know some of the genetic counselors involved in the VA system, and one of the problems they often have is that the informative person in the family might even be a male, as an example for hereditary breast or ovarian cancer, because they're closest to it, and the counselors try to justify that that person should get tested in service of another family member. It's been very consistent that that's not been covered. So you're right that, at least the way practice is going now, that's not covered, but we viewed clinical utility as really thinking beyond just the one member to the family. So yes, that is part of the clinical utility that goes into this.
There's always this tension, though, between whoever the payer is and whether only their particular member is being served, or whether you can take a broader view. So. Thanks. Okay, thank you, Wendy. Rudy.