Great. Thanks, Teri. Our next session, session six, we'll be discussing obstacles to screening, working through multiple perspectives from across the landscape. And our first speaker is Michael Hultner, who will be talking about why payers are reluctant to cover genetic screening tests. Thank you, Jillian. Thanks for the opportunity to speak today. I've really enjoyed the Genomic Medicine series over the years and really appreciate the opportunity to give back. I've been asked to talk about perspectives from payers related to reimbursement and support for genetic screening, based on my experience in the payer community and also my current experience helping labs get their tests reimbursed and supported. But I have to give you a disclaimer that the opinions I'm going to share with you are not representative or official views of current or past employers. So when I was thinking about current obstacles to genetic screening, or screening programs in general, from the payer side, I was putting together my own list, but I figured I would also ask ChatGPT for the top 10 obstacles, or reasons why payers are reluctant, and this is the list I got. And I think it's pretty remarkable that we've been talking about all of these things in the last two days. So what I thought I would do is go through and give you some observations I have in each of these areas, and hopefully that's additive to the conversation. The first one is cost effectiveness. Of course, this is very important to payers and health systems; I would give it four out of five stars in terms of relevance to making a decision about whether or not to support a screening program. This idea of value that Mark talked about, the equation of outcomes and cost, is very relevant to this decision-making process. But what I've witnessed is that it happens in terms of a per-member-per-month type of calculus, and not necessarily QALYs.
So I think one thing that we might want to talk about is how to translate the language of expert communities into the language of payer communities when it comes to quantifying value in health care. So that's one point. Health outcomes are very important to the payer community, and I've witnessed this being a major factor in considering support for screening programs. As we discussed on the first day, the performance of tests in producing false positives, and that number needed to screen or number needed to test, is very important to the decision-making process. The value of the outcomes in the positive screened patients has to be overwhelmingly greater than the cost of supporting the population needed to screen in order to make an impact on the decision-making process. So those outcomes have to be highly impactful, and the risk of achieving those outcomes is very important to payers. In considering a screening program, there's a bigger consideration about how you run that program in order to achieve those outcomes. One thing that comes up pretty frequently is short-term focus, and I've seen this to be very relevant in the decision-making process. On the Medicare side, screening really isn't supported by statute except under exception. On the commercial side, how long members stay in a plan is very important, and on average that's around two years. So when you do a value equation looking at per-member-per-month or per-year value, you have to gauge that against how long you'll have that member in the plan to realize the outcome. Taking the impact of outcomes in the value equation and reducing it to a two-year window has a big impact on payer decisions. Budget constraints, of course, are always a factor.
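The tenure-limited value calculus described here can be sketched numerically. This is a hypothetical illustration, not an actual payer model; the function name, parameters, and dollar figures are all invented assumptions.

```python
# Hypothetical sketch of the payer value calculus: projected savings are
# spread per member per month (PMPM) and truncated by average plan tenure.
# All figures are invented for illustration, not real actuarial data.

def pmpm_net_value(annual_savings_per_positive: float,
                   positives_per_10k_screened: float,
                   test_cost: float,
                   members: int,
                   tenure_months: float = 24.0) -> float:
    """Net value per member per month over the expected tenure window."""
    positives = members * positives_per_10k_screened / 10_000
    # Outcomes can only be realized while members remain in the plan.
    realized_savings = positives * annual_savings_per_positive * (tenure_months / 12)
    total_cost = members * test_cost  # assume the whole population is screened once
    return (realized_savings - total_cost) / (members * tenure_months)

# A $100 test finding 20 true positives per 10k screened, each saving
# $5,000/year, loses about $3.33 PMPM over a two-year window; the same
# program turns positive only as the test cost approaches zero.
value_at_100 = pmpm_net_value(5_000, 20, 100, members=100_000)
value_at_zero = pmpm_net_value(5_000, 20, 0, members=100_000)
```

This toy calculation illustrates the "test cost near zero" pressure the speaker mentions later: the truncated tenure window caps the realizable savings, so test cost dominates the equation.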
Payers are very interested in achieving high-impact outcomes from screening programs or any type of health care program, but they have many good programs to choose from. So when you look at where to put resources, if they can achieve the same aim with a lower-risk but high-reward type of program, that's going to win in the decision-making process when it comes to supporting a screening program or not. So looking at screening low-risk populations to identify high-risk patients: if that can be done with a non-genetic screen, that's going to take priority over genetic screening programs. Evidence is, in my experience, probably one of the strongest factors in considering a screening program. Evidence behind the power of the test, and evidence behind the outcome and its likelihood to create value, is very relevant in the decision-making process. And unfortunately, there's often a disconnect between those promoting a test for screening and the payer's understanding of the evidence behind it. I've heard several discussions about evidence being good enough; I think there's also perhaps a research topic in understanding how to communicate evidence from expert groups to payer groups that might be productive. Logistical challenges were a surprising factor that I witnessed on the payer side, in that in order to run a screening program, have it be effective, and reduce the risk of failing to achieve health outcomes, quite a bit of logistics has to come into play. Reach to patients and providers is absolutely necessary, as is activating patients to actually follow through on the screening recommendations, and provider awareness of guidelines for screening is actually quite low. And when you look at pilot programs that have been conducted, the adoption rate is often in the single digits when you reach out to patients for screening programs.
Selection effects: I'm sensitized to this issue now, listening to you all. The potential of selecting patients into a high-risk category due to a false positive, or following up with more severe types of interventions, is quite a concern. I haven't witnessed that directly, but it's something that I think has been covered in this conference. Competing priorities: I think we've already discussed this too. Expensive, complex screening programs without a high degree of evidence are going to suffer in the decision-making process compared to simpler, say, non-genetic types of screening programs or patient activation programs. And the impact of guidelines, I think, is also quite strong. Having strong guidelines helps in the implementation process by making providers aware of the evidence frameworks behind the recommendations. Payers really rely on consensus from expert groups, and USPSTF recommendations weigh strongly, of course. If a test is not on the path to an A or B grade, it's very difficult to get support for screening programs. So for the ChatGPT-generated obstacle of changing guidelines, I would say it's more that having firm guidelines backed by strong evidence is necessary to get support for a screening program from the payer community, but not always sufficient. And public health impact: I think the payer communities I've been exposed to largely see this as a responsibility of the public health agencies, but they would be more than happy to help implement a public health program once it's funded and enacted. So how could this work?
So given that there's a high bar for evidence, a different value equation, and logistical burdens, I think the things to focus on for promoting a screening program that could be covered and supported by payers are, of course, strong evidence of health outcomes that sit in those boxes that Mark talked about, where payers will invest in outcomes that increase costs given the right evidence support; a test with very good performance parameters that is simple to implement and low cost, and ideally recommended by USPSTF; high patient and provider adoption; and, most importantly, low upfront risk. Often what I've seen in the business cases that come forward for these programs is that when you factor in all of these things, there's strong pressure for the test cost to be near zero in order to be able to say you can support such a program. So I think one other potential is where there is a quality measure or a compliance requirement; that's an opportunity to get a test that's fully supported. So that's all I have for you. Thank you for your time. Thanks, Michael. Thank you. And our next speaker is Bob Freimuth, who will be talking about why sharing genomic data is so challenging. Thanks very much. My name is Bob Freimuth. I am an investigator at the Mayo Clinic. I have a research program in genomic medicine, and I work with our Center for Individualized Medicine on the implementation of genomic medicine initiatives and our enterprise infrastructure. Today I'm going to talk about a few things that we have learned through that process. I'm going to first summarize the current state of genomic data sharing from the perspective of standards. I'm then going to talk about a few examples of challenges that we have faced, trying to focus specifically on those things that probably have more relevance to screening data. And then I'm going to review selected research opportunities.
What I'm not going to do today is focus exclusively on standards, which is my usual thing, or discuss technical infrastructure, which has its own challenges. Now, there are many challenges related to sharing genomic data. I'm going to focus on a few examples related to the what, being the observed data, that is, the sequences and the variations; laboratory interpretations of those data; and then what I've called here derived interpretations, those things that occur downstream of the lab's interpretation. I'm also going to try to weave in a little bit of the how, but what I'm going to leave for others who are much more qualified than I am are questions related to when, why, and who, which tend to go more into the clinical and policy spaces. So the goal of sharing data isn't simply to send information to someone; it's also to enable the use of that information, and in the context of this meeting, that means that a patient's genetic data moves with them as they receive care from different providers, who can then use those data to inform treatment decisions. To accomplish this, the recipient needs to know not only how to unpack the syntactic structure of the message they receive, but also what that content means. All three of these big, broad categories on the left apply to both screening scenarios and diagnostic testing, and the challenges related to the first two are fairly well known. Significant gaps remain in both, but in the interest of time I'm going to focus on the third chevron here, which again I've called derived interpretation. It applies to things like carrier status and risk scores, and I've tried to select a few interesting challenges that I thought might be relevant here, where the roadmap forward might be a little less clear.
Before I get into those examples, however, I want to acknowledge that there are contexts that wrap around these data elements and provide rich semantic meaning, without which it can be difficult to interpret those data. We need to be able to capture and represent those contexts, with all their nuances, accurately for the data to be meaningful. While this is sometimes straightforward for humans to do naturally using language, representing all of those nuanced contexts and relationships in computable form can be much trickier. I don't have time to dive into any of these, but I include them here as examples for reference. Now, in order to achieve interoperability between two systems, they need to be able to communicate, and when there's a gap in that communication, we have at least three different ways of approaching it. We can change one system to fit the other; this is possible if you control at least one of those systems, but it's hard in an environment where vended solutions predominate. We can implement an adapter, which plugs the gap between the two of them; this is often used when we don't control either system and it's not possible or practical to change the systems themselves. And the middle example here is to adopt a common standard; that is when the systems are controlled by two different parties, but those parties are willing to work together to meet in the middle. This is actually the most scalable approach because it avoids point-to-point solutions that don't generalize. So this slide is just to remind me that there are several types of standards. I'm not going to get into the different purposes for which each of these is developed, but the key here is that when standards are maintained by many different organizations, it can be hard to harmonize them and use them together for a specific use case.
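The adapter approach described here, translating between two systems that neither party controls, might look like this in miniature. The record shapes and field names are invented for illustration; they do not correspond to any real system's schema.

```python
# Minimal "adapter" sketch: neither system can be changed, so a thin
# translation layer converts system A's record shape into the shape
# system B expects. All field names and formats are invented.

def adapt_result(system_a_record: dict) -> dict:
    """Translate system A's lab-result shape into system B's expected shape."""
    chrom, pos = system_a_record["locus"].split(":")  # e.g. "chr7:117559590"
    return {
        "chromosome": chrom.removeprefix("chr"),   # B omits the "chr" prefix
        "position": int(pos),                      # B wants an integer position
        "interpretation": system_a_record["call"].upper(),  # B uses uppercase calls
    }

adapted = adapt_result({"locus": "chr7:117559590", "call": "pathogenic"})
```

The drawback the speaker notes is visible even at this scale: every pair of systems needs its own adapter, which is why the common-standard approach scales better.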
In the context of genomic medicine, we of course have different communities, and those communities have a tendency to use different standards. This is true in the clinical space and in the research space, where there are at times significant differences in choices at very fundamental levels, including the numbering systems we use and whether we left-shift or right-shift normalize genetic variants. This can be overcome, but it is even more of a challenge when those distinctions are not apparent or not explicitly stated, and data is exchanged and perhaps used incorrectly because of this. So to that end, I just want to give a quick plug to two standards groups that I work in, HL7 and the Global Alliance for Genomics and Health, and a shout-out to those who are working with me to try to bridge this divide between clinic and research. We're trying to leverage the strengths of each of these standards, and the vision we hope to achieve is that the aligned standards would allow us to take data that was used for clinical reporting and use it seamlessly with data from the research side, whether that be from the growing number of public knowledge bases that now support some of these standards or from the bioinformatic tooling that has historically been the genesis of those file formats themselves. Now, everything I just mentioned applies universally to genomic medicine. I'm going to shift gears and get into a few examples where I think the gaps will be felt more acutely in population screening scenarios. These are all lessons learned from what we've done at Mayo Clinic, which has involved a cast of thousands to implement our genomic medicine program, and of course I can take very little credit for any of it. I'm going to start with a pharmacogenomics example.
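On the left-shift versus right-shift normalization point above: the same indel can be written at different positions in a repeat region, and two systems must agree on a canonical form before their variants can be compared. Here is a minimal sketch of VCF-style left-normalization, assuming a 0-based position and a variant that does not sit at the very start of the sequence.

```python
# Sketch of left-normalization for an indel (VCF-style): repeatedly trim a
# shared trailing base and, whenever an allele empties, extend both alleles
# to the left with the preceding reference base. Assumes 0-based `pos` and
# that the variant can shift without running off the start of `seq`.

def left_normalize(seq: str, pos: int, ref: str, alt: str):
    while len(ref) > 1 or len(alt) > 1:
        if ref[-1] == alt[-1]:
            ref, alt = ref[:-1], alt[:-1]
            if not ref or not alt:
                pos -= 1
                ref, alt = seq[pos] + ref, seq[pos] + alt
        else:
            break
    # Trim a shared leading base while both alleles stay non-empty.
    while len(ref) > 1 and len(alt) > 1 and ref[0] == alt[0]:
        ref, alt = ref[1:], alt[1:]
        pos += 1
    return pos, ref, alt

# A "CA" deletion in the repeat GGCACACACT, written at position 5,
# left-aligns to position 1:
shifted = left_normalize("GGCACACACT", 5, "ACA", "A")  # -> (1, "GCA", "G")
```

A right-shifting system would park the same deletion at the other end of the repeat, which is exactly why exchanged variants can fail to match when the normalization convention isn't stated.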
We implemented our pharmacogenomics CDS program in 2013, and over the following four years we implemented more than 20 gene-drug rules to fire decision support at the point of care. Now, the CDS design was tightly coupled to a single genetic test: the triggers for that CDS were dependent on the format of the specific genetic results that were brought back to our system. This is a fragile design, because any change in the formatting of those results required a corresponding change in our CDS logic, but that was the best we could do at the time. Somewhere around the 20th gene-drug rule, at the bottom of the list on the left, we switched to a new EHR system, and we had to rebuild everything from the ground up. So we implemented the Epic EHR, and at some point after sharing with them ideas about how we could improve our PGx implementation, they released a layer of abstraction between the genetic test and the CDS logic. They did this by creating a translation engine and this concept of genomic indicators, which are similar to a problem on a patient's problem list. Using this system, we've now been able to add more than 400,000 genomic indicators to patients' charts. The use of genomic indicators did several good things for us. It provided a layer of abstraction that helped insulate our CDS logic from the format of any particular lab result, and it created a new shareable data element related to pharmacogenomic testing. But we also learned that the design of our genomic indicators was influenced, at least in part, by the downstream CDS logic that we wanted to implement, and the dependencies between the CDS, the genomic indicators, and the component results meant that a change in any one of those things had the potential to ripple through our configuration files and affect the others.
Now the fact that our internal design of the CDS logic had the potential to impact a shareable data element meant that sharing those genomic indicators is not as straightforward as we might want. I'm going to illustrate this using two examples and a hypothetical scenario. We have intentionally changed our genomic indicators to better support more specific CDS logic. The first example, on the top here in green, shows our initial genomic indicator for DPYD on the left, which we designed to capture a categorical level of risk. We later changed that genomic indicator by splitting it, and we now support within it not only a series of metabolizer statuses but also an activity score. The second example, on the bottom, is similar; in the interest of time, I'm not going to go into it. But remember, we've got more than 400,000 of these genomic indicators on our patients' charts, and every time we make a change like this, we have to scan through those charts and update every one that might have been affected by that change. Remember also that while this example uses genomic indicators to represent pharmacogenomic information, they can also be used to represent carrier status or risk scores, and so they apply to population screening. Now let's think through what the potential impact of this might be, and this is just a toy example. Let's say we have a lab report with a result that's converted into genomic indicator A, and the ordering clinician writes a clinical note referencing the result of that test and the fact that there is now genomic indicator A on that patient's chart. At some point in the future, that genomic indicator is retired and B takes its place: we have a revised translation using clinical knowledge that has been updated, and a new note is written to reference B. But what was referenced in the original note is now broken.
It no longer exists. Moving forward through time, B is deprecated, C arrives, a new clinical note is written, and links and back references from the others are now broken. Now imagine what would happen if genomic indicator A was shared with another health system when it was first applied. The second system would not be aware at all of the internal changes that we made from A to B to C. And even if they got an updated copy of this patient's chart in the current state with indicator C, what would they do with that? Is that a new result? Do they keep both? How do they know how to make that decision? Taking it one step further, what if that second health care institution had their own CDS program, their own genomic indicators, their own build? And now, because the patient came back to the first health system, they got a copy of that chart, too, and they've got more genomic indicators coming back and landing in this space. Reconciliation becomes a big problem. These examples illustrate how local implementation decisions can negatively impact the value of sharing genetic data if appropriate provenance is not available. And as we gain knowledge and find new uses for genomic data, we can anticipate that these genomic indicators will continue to change. We need a stable model that adapts without breaking, and we must support local implementation decisions while encouraging harmonization. Now this slide shows the happy path for genomic testing. In this case, we have a clinical lab performing a test and sending back both a human-readable PDF report and computer-readable discrete results to the ordering EHR system. Of course, reality is not always this simple, and this is an example based on a microcosm that we have at Mayo Clinic. Testing labs can return results in a variety of formats that vary by the test.
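The A-to-B-to-C versioning scenario above becomes more tractable if each shared genomic indicator carries explicit provenance, such as a "supersedes" link a receiving system can walk back to the original. This is a hypothetical sketch; the record structure, field names, and example indicators are invented, not Epic's or Mayo's actual model.

```python
# Hypothetical provenance model: each genomic indicator records which
# retired indicator it replaces, so a receiving system can recognize that
# C is a revision of the A it already holds rather than a new result.

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class GenomicIndicator:
    indicator_id: str
    summary: str
    supersedes: Optional[str] = None  # id of the retired indicator it replaces

def root_of(indicator: GenomicIndicator, registry: dict) -> str:
    """Walk the supersedes chain back to the original indicator id."""
    current = indicator
    while current.supersedes is not None:
        current = registry[current.supersedes]
    return current.indicator_id

a = GenomicIndicator("A", "DPYD risk: high")
b = GenomicIndicator("B", "DPYD intermediate metabolizer", supersedes="A")
c = GenomicIndicator("C", "DPYD activity score 1.0", supersedes="B")
registry = {i.indicator_id: i for i in (a, b, c)}

# A system that received A years ago can now tell that incoming C revises it:
same_lineage = root_of(c, registry) == "A"
```

This only works, of course, if the supersedes links are shared along with the indicators, which is the speaker's larger point about provenance traveling with the data.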
Ordering EHR systems are part of a larger clinical electronic environment that includes ancillary systems and data warehouses containing information from former EHR implementations. Now this schematic is just a snapshot in time. If we were to watch this evolve over a time series, we would see lab tests get replaced, electronic systems get upgraded, IDs get minted and then deprecated, and file formats evolve with the tools that were used to develop them. And the dependencies that aren't even shown on this diagram are insidious: component results depend on the test build, discrete results depend on a variety of things, and warehouses slurping in table structures from other databases can lose key relationships if those table structures aren't adequately defined. In this type of environment, results from genetic tests can be stored in a variety of different places, and it can be hard to know a priori where to look for a particular result. This gets even more complex if we add data produced by research studies, and I'll skip through the implications for how we need to know how to use those data appropriately. Taking one step further, and I'm almost done: we need to maintain robust linkages between data so that we can use those data appropriately, and those data must be FAIR: findable, accessible, interoperable, and reusable. Mayo Clinic has been doing genetic testing since the early 90s. Many of those reports are not in our EHR; they're in our clinical data warehouse, and they're in there using identifiers that were deprecated long ago. To maximize the value of genetic results, including that of screening data, we need to develop systems that manage those results over many years. In fact, those data have to be more enduring than the clinical systems that host them. I'll just leave this as some of the identified research opportunities that I think exist in this space. All three of these have ongoing work.
I think there remains great opportunity to make tremendous progress in these. And I want to acknowledge that while I'm talking a lot specifically about data representation here, there are many facets of genomic medicine that will ripple out from this. With that, I'd like to thank everybody for the time and your attention. Thank you. Okay, I think we'll move on. I'm Dan Rader. Our next speaker is my colleague from Penn, Kate Nathanson, who's gonna talk about implementation of genomic data into the EHR. Kate. Thank you very much. So thank you for inviting me. I'm really delighted to talk about this. Bob and I actually coordinated our talks very intentionally, so that Bob talked about a lot of the overview, the problems, and the issues that one has to solve, and I'm going to talk specifically about our experience doing EHR integration at Penn and what we've learned. I think it will also bridge to the next talk, which will focus on implementation science and how we implement using genetic data. So again, thank you for listening, and we will get started. The first thing that's really important to talk about: we have named this the PennChart Genomics Initiative, and I am here on behalf of the many, many individuals we have engaged in integrating genomic medicine into the EHR. We have geneticists from all sorts of specialties: neuro, cardiac, cancer, reproductive. We work with our pathology and molecular genetics team. We've been highly engaged with our legal and privacy team, which is particularly important to be able to do this; we were talking earlier about how laws differ between states, so that's really important to consider. And obviously our information systems team has been very engaged as well in this process.
And so I will say it takes a village, but it takes a really well-supported village, and we really have had buy-in from leadership at all levels; that has also been particularly important for this support. I'm taking this from a review that was written a few years ago looking at barriers to implementation. I'm not going to spend a lot of time talking about why we should integrate into the EHR; I think that was covered by a lot of prior talks, and in particular we had a talk yesterday from Family Medicine describing EHR integration as particularly important, and if we are doing population screening, this is something we need to think about. But what we currently have in many institutions is barriers at every step: identifying the correct patient for genetic testing; ordering the genetic tests through the EHR, and making sure our physicians are ordering the right test; getting the test information back into the EHR once the test is performed; and then where the results are being delivered, and whether they are being delivered in a way that can be clinically actionable. I'm obviously not going to talk about billing and reimbursement. The vision, and I think all of our vision, is that we have this integrated value chain, or learning health system, that enables us to use genomics. I'm not going to talk about patient identification, though I'm happy to discuss it; we, and many others, are developing e-phenotyping algorithms to identify patients. On test selection, I'll talk about how you pre-program test selection and how you set up ordering in a way that facilitates or supports ordering, particularly for non-geneticist physicians, as well as resulting, and then how you can use that to improve clinical decision support. Again, I will be talking specifically about our learning processes.
So how do you initially start to have a genomics-friendly EHR? I think it's really important to note that the process we embarked on really started almost 10 years ago, when the genetics programs at the institution standardized all our naming conventions. That's actually really important if you're going to sort EHR data and bring it together: you need standards for naming and labeling that work for everybody, and everybody needs to agree on them. We created a precision medicine tab within PennChart, which is our version of Epic. I want to say that we intentionally called it a precision medicine tab because we don't want this to be limited solely to genetics; there is other precision medicine data, and we have a particular interest in immune health, for example, that would also go in that tab. We have a genetics-specific document type, and when you scan results into that document type within Epic, they go into that tab. Interestingly, it also allows isolation of genetic data, and we have had a lot of discussions about doing that in order to prevent its upload into health information exchanges, which I think is important. Because of our long-standing naming conventions, once we set that up, we were able to move our legacy data over. Again, this isn't discrete data; this tends to be PDFs. But because it was all named very similarly, we were able to do that. And the precision medicine tab is very highly utilized: just in the last three months, over 35,000 views by almost 7,000 providers. So this is something that's now well-recognized and well-used in our EHR. So, oops, I'm going to go back. This is the PennChart Genomics Initiative timeline. I just want to note that we started this process in 2019. We have this thing at Penn about CAR-T being the overnight success story that took 20 years; this is the overnight success story that took at least five years.
So we have been working on this process of setting up genomic medicine indicators and doing integration. Each time we do integration, it's two-stage: you integrate PDFs, and that's followed by discrete data. And then you keep integrating and reintegrating, adding new genomic indicators as you go along, and adding patient-friendly information. And there's actually a new piece, which Bob touched on, about how you send genetic data out: something called Happy Together Genomics, which we just added in. So again, this is an iterative process; we are continually upgrading and adding new components. First of all, I want to show that this has been widely used, and as we started implementing, you can see increasing usage over time. These are the labs that are integrated, as you can see. I just want to say that we do not have 278 genetic providers, which I'm sure is shocking to hear about Penn. This really shows that this is a process, and genetic testing, that's used by non-geneticist providers. And we know this because we can now pull the data on who's ordering genetic testing. That's actually really important as we talk about implementing things like minimal standards, or when we want to know what the patterns of genetic testing are at our institution. We have a much better handle on it because we can figure out who within the institution has ordered genetic testing, at least using this process; if they're not using this process, it's obviously much more difficult, but we know that there are a lot of non-geneticist providers using it. So the point of this is: if you build it, they will come. The other thing that we did, and published, is a time study, done with our genetic counselors, demonstrating that this saves time compared to the standard portals: on average, they saved at least 10 minutes per test.
So if you can imagine people ordering and resulting six tests, that's a time savings of an hour, and you can easily see that this allows our genetic counselors to work at top of scope. This is an overview of the flow of genetic test results, both Mendelian and PGx. Bob focused on PGx; I'm really going to focus on Mendelian diseases. Just to give you a sense, we have testing that comes in from the internal genetic testing laboratory. It can come in as both PDFs and structured results. It can either be scanned into the document type, in which case it comes into the Penn Medicine precision medicine tab and the variant can be manually entered, or it can come in through our HL7 interface with the laboratories, in which case it goes into the genomics module along with the variant components. From there it's driven to the genomic translation engine. For Mendelian diseases, that keys on a pathogenic or likely pathogenic call and non-mosaicism; this has been a big issue for us, trying to keep CHIP, for example, out of the data sets that drive our clinical decision support. And then we have CPIC translation tables that are used for our pharmacogenetics. Then we have genomic indicators. This is just what it looks like to see a genomic indicator; it's on the patient snapshot page, so right at the front. And that drives clinical decision support with patient-facing information, disease information, and links. I'm not going to show that, but I will talk a little more in detail about provider-facing clinical decision support. And I would say the other thing is, if you build it, they want improvements. So again, this is an iterative process. For example, we wanted to be able to do one-click testing so the patients get the kits sent to them at home, and we now have a way that people can send the clinical note directly to the clinical lab so that it supports billing. And we've done a lot of improvements.
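The gating rule described here, where only non-mosaic pathogenic or likely pathogenic calls create a Mendelian genomic indicator, can be sketched as a simple predicate. The field names and result structure are illustrative assumptions, not Penn's actual schema.

```python
# Sketch of the translation-engine gate: only non-mosaic pathogenic or
# likely pathogenic calls generate a Mendelian genomic indicator, which
# keeps findings like CHIP out of downstream CDS. Field names are invented.

ACTIONABLE_CALLS = {"pathogenic", "likely pathogenic"}

def creates_indicator(result: dict) -> bool:
    """Return True if this lab result should generate a genomic indicator."""
    return (result.get("classification", "").lower() in ACTIONABLE_CALLS
            and not result.get("mosaic", False))
```

Under this rule, a mosaic pathogenic call (a possible CHIP finding) or a variant of uncertain significance would be stored but would not drive decision support.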
I'm gonna talk about that in a little more detail in a few slides, about our latest improvements. So, the impact of these indicators: we have, again, focused much more on Mendelian disease genomic indicators. A nice parallel, but slightly different, to the Mayo Clinic: we have 138 unique genomic indicators. Many of ours are pharmacogenetic, but those are less used than our Mendelian ones. And we have over 4,000 patients with an indicator, most of which are also shared to PennChart. As you can see, the highest numbers of genomic indicators for us are for patients with BRCA1, BRCA2, and Lynch syndrome. That is because we've developed clinical decision support for each of those syndromes. This is just quickly showing what clinical decision support looks like for the provider. You add the genomic indicator, it adds the problem, the Lynch syndrome, to health maintenance, and then it pushes out and drives GI genetics appointments, upper endoscopy as needed; this is what it's done. And then this is just showing a genomic indicator on the right for breast cancer; this is the snapshot. And you can add a completion, for example, if things were done outside the institution. This is showing what clinical decision support looks like in the patient view. On the left, for those of you in Epic, there's your flu shot, and then if you have this indicator, you have your Lynch syndrome screening. It's required a lot of review and iteration. For example, we had a lot of discussion that we had to stop normal colon cancer screening for patients with Lynch syndrome; we had to go through governance to do that. We really had to think hard about the surgeries and the procedures. Remember, you don't wanna have someone with a BRCA1 mutation get a mammogram when they've had a mastectomy.
And that required review and iteration to get the clinical decision support to fire correctly. We did a lot of pilots back and forth to make sure it's doing this accurately. The other thing that I think is really important to emphasize is that this allows your EHR to really become a database. We're able now to track, with our clinical decision support tools, which patients are behind and have not gotten their screening appropriately, and we can bulk-send reminders to patients. I think that's been quite effective and something that we've really taken advantage of. It's also important to note that we now know who in our records has a mutation, and we can identify our BRCA1 and BRCA2 mutation carriers much more efficiently than we could before, for example, by using the problem list, which was not as accurate as this. So I think it's really been effective that way. This is just to show our clinical decision support for pharmacogenetics. This effort was really led by Sony Tutasia, who is a pharmacist. This is the snapshot, the results in the precision medicine tab for pharmacogenetics, which can drive two types of BPAs in terms of clinical decision support: an interruptive BPA for serious adverse events and a passive BPA for non-serious adverse events. I'm sure they have that similarly set up at the Mayo Clinic, but you obviously want to alert people when they have interactions that are a problem. Oh my gosh, all right, I'm gonna keep going. So what are the challenges of EHR integration? The project scope, the technical build, the language barriers, the vendor relationships, and privacy: all of these have varied and were concerns we anticipated. I think some of these other ones were less anticipated.
First of all: the cadence, the stakeholder needs, changes to important clinical workflows and how difficult those are, and knowledge dissemination. I'm gonna go quickly, because I apparently only have a minute. We have an Advancing Genomic Medicine grant, which is a six-arm pragmatic cluster randomized trial doing e-phenotyping, identifying patients who need genetic testing to influence medical management. It's quite a large pragmatic study of genetic testing using nudges; please ask me if you'd like more information about this. We actually did a discrete choice experiment to identify behavioral-economics-informed language for patients, sorry, for providers. We also did it for patients, but this is about providers, to understand what messages make providers want to order genetic testing. And this is just showing that messages addressing status quo bias were the most positively received, and we're moving forward with those. So we've developed what I'm gonna call one-click genetic testing for non-specialty providers. This is a passive BPA with nudge language linking to genetic testing. We have genetic testing SmartSets already built with insurance-sensitive testing. I'm particularly proud of this aspect: we worked with Epic so that if your insurer is capitated to a certain lab, the test automatically goes to that lab. There are pre-populated panels, it also pre-populates the notes, and there's an after-visit summary with information for the providers, just shown here. And then the results come back with a static link to a provider website with information about referral and genetic testing. And last but not least, if you are interested in this, please let me know. We've built a website to address the interest, and we have all the mechanisms and micro-learnings set up on how you do this.
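The insurance-sensitive routing described here, where an order automatically goes to the lab the patient's insurer is capitated to, boils down to a simple lookup with a fallback. A minimal sketch, with entirely made-up insurer and lab names, not the actual Epic configuration:

```python
# Hypothetical routing rule: send the order to the lab the patient's
# insurer is capitated to; otherwise use a default lab.
CAPITATED_LABS = {
    "Insurer A": "Lab X",
    "Insurer B": "Lab Y",
}

def route_order(insurer: str, default_lab: str = "Lab Z") -> str:
    """Pick the destination lab for a genetic test order."""
    return CAPITATED_LABS.get(insurer, default_lab)

print(route_order("Insurer A"))  # Lab X
print(route_order("Insurer C"))  # Lab Z (no capitation on file)
```

The design point is that the ordering provider never has to know the capitation arrangements; the system resolves them at order entry.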
And we've also developed a provider information website for our providers. So hopefully the important takeaway is that you can change the EHR to meet your genomic medicine needs. You really need to be multidisciplinary, and you can do innovative and interesting things to move this forward; it just is complicated and requires some thought. Acknowledging all the people. Thank you, Kate. And our last speaker in this session before we move to discussion is Alana Rahm. Alana? Yes, hi, so thank you. Alana Kolchak Rahm, I'm here from Geisinger. We changed all of our titles to these questions, so I get to talk about: why is it so hard to see health benefits from genomic screening? And I think we've answered a lot of that; it's what we've been talking about for the last two days. So I'm really gonna keep this at a level of hopefully connecting the dots so we can continue these great discussions and see the way forward. So why is it so hard? Everything we've been talking about, right? Because it's complicated. Access isn't enough. Increasing testing, getting people tested, is not enough. Increased participation in research, all of that. It requires engagement with the result, engagement with the information, and doing those health behaviors, by clinicians and health systems and patients. And the barriers and facilitators are around everything: communication, billing, access. We need ongoing engagement to develop the right things, right? So this is very multi-level; many people have said this, right? It's this multi-level diagram. We also have this issue of voltage drop, which I think we've been hinting around, but I'm gonna name it directly: the idea of diminished effect when you take something from the highly controlled environment where you developed the evidence and put it into the real world.
So, an example using genomic screening. Even if we could get it 100% effective: we have a defined population, whatever it is; we can get everybody in that population tested; and in that 100% tested population we find everybody with a variant that we wanna work with to manage their health. Health benefit still depends on these other factors too: whether there are alerts and our systems adopt the practices that help connect clinicians and patients to what needs to be done; whether clinicians work with the individuals to change management; and whether the patients themselves, because they have their own agency too, follow the guidelines and change their health behaviors. So for overall impact, if you gave a very generous 50% threshold at each one of those levels, your health benefit at the end, at the population level, is going to be 12.5%, and that's for something where you got everybody 100% tested and found everybody you could find in that population. So, what do we do with that? This is what we've been talking about all day; we've been skirting around it, and people have mentioned it, so I'm gonna name it directly: we use implementation science for this. This is what gets us out of the thinking of, well, okay, we identified the barrier, we identified the issue, we identified who got tested and who didn't. Implementation science helps us move from just "what does it look like" to the broader real-world thinking of what works for whom, when, under what conditions, and in what context. And I would also argue that in this day and age, we need to move beyond thinking of implementation as a last-mile issue, and I think that's been noted by a lot of people here. You bring implementation science thinking to the whole process. The other thing implementation science helps us do is identify the core components of the programs, of the interventions.
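The voltage-drop arithmetic above is just multiplication of per-level rates; the 50%-per-level example can be checked in a few lines (the level names are my paraphrase of the three downstream steps in the talk):

```python
# Voltage drop: even with 100% of a defined population tested and all
# variant carriers identified, benefit attenuates at each downstream level.
levels = {
    "system adopts alerts/practices": 0.5,
    "clinician changes management": 0.5,
    "patient follows guidelines": 0.5,
}

impact = 1.0  # starting from everyone tested and identified
for rate in levels.values():
    impact *= rate

print(f"{impact:.1%}")  # 12.5%
```

Raising any single level's rate helps only multiplicatively, which is why the talk argues for working on all the levels at once rather than testing volume alone.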
Again, other people have mentioned this over the past two days. It helps us identify the menu of strategies, and the strategies that work at Penn and Mayo and Geisinger and Kaiser are gonna be different than the strategies that work in an FQHC, right? It helps us identify what strategies work where. It helps us, when we're working with our populations, our patients, our stakeholders, figure out what those core components are that will work and what needs to be adapted to work with those stakeholders. And as I've said, engagement itself is a core component of what we need to do here, and we've talked at length about why engagement is important for co-creation across, and I will just point out again, the whole spectrum of implementation, from planning through adaptation, everything. And the big point in genomics is to mitigate a lot of the bias that we may be inadvertently creating. One of the other things that I think, again, we've skirted around, and I just wanna point out directly, relates to some work led by Jen Wagner and Dan Davis at Geisinger over many years: different people want to engage differently. Now, this was work done specifically with our patients, but they helped us develop this rubric of how people engage differently, and it takes all types of engagement methods. So some people are your participants; they may be active or passive participants, passive participants being the "don't talk to me, but yeah, use all my data that you want" group. Other people may want to be your co-creators, your co-investigators on your projects. Others want to be ongoing parts of your advisory committees, or your overseers who help you develop policies in your system.
And it's not a question of which of those is the better engagement strategy; all of those are your engagement strategies, and they may work differently in different parts of the project, or for what you're trying to do, or for your system overall. It's not a matter of which is better; they work differently in different situations, and you need to consider all of them. And so again, we've been pointing this out, but naming it specifically: none of our systems are the same and none of our patients are the same. Someone else just said it: you've seen one system, you've seen one system; you've seen one patient, you've seen one patient. Equitable benefit requires equitable implementation, but that does not mean there is a one-size-fits-all implementation. There are structural, societal, and other multi-level factors impacting implementation, and if we're not working with those constantly throughout the process, we run the risk of exacerbating inequities by not identifying them and addressing them through, again, more strategies and more engagement. And it's not just engagement; there are a number of things implementation science helps us with, very well pointed out in five different areas recently by Caitlin Allen, who was here yesterday and I think is still listening as she travels, as well as by Rachel Shelton a couple of years ago: how to add an anti-racism or equity lens to implementation science and how that helps us with these projects and our programs. So stakeholder engagement, again, is a core component; it helps us develop and select our adaptations and our evidence-based practices, and how to adapt those practices. Our models and frameworks and theories help us bring all of this thinking and this lens to what we're doing over the whole spectrum of implementation.
They help us guide our evaluation approaches as well as the implementation strategies we're using in different contexts. As an example of that, which again Rachel Shelton does really well in her article, take the RE-AIM framework, an acronym hopefully folks have heard enough: Reach, Effectiveness, Adoption, Implementation, and Maintenance. It's one of our core frameworks in implementation science. So with the definition of reach, instead of just using your typical table one, which is who participated when we built it, how many doctors ordered it, how many patients got a test ordered, and describing the differences, you're actually thinking through: was everyone reached equitably, and who was not reached? And following up on that: why were they not reached, and what do you need to do about it? Same with effectiveness: was the impact equitably experienced across the people who participated and did not participate? Did certain groups experience a higher burden or negative effect? Again, bringing that thinking into our evaluation processes, our reporting, and our ongoing adapting, redeveloping, and reevaluating of our processes. And the same thing at the system level and the clinician level for adoption and implementation, as well as thinking about maintenance going forward. Again, bringing that lens to this. I'm gonna give a few examples, and these are just snippets; I think everybody else in the room here has given way more detailed examples, but they show how, across the spectrum and in different ways, this can make a difference. So at Geisinger, we have one of our population screening pilots, and we did an analysis led by Lainey Jones using the RE-AIM framework. These are just the numbers of the doctors in primary care who ordered the test.
That was made available to them, ostensibly to be ordered for everyone in a primary care visit, and this is the breakdown of who ordered it. We have our high adopters and our medium adopters, and then we had a bunch who just ordered one test, and I don't have that number up there. But when we did the engagement afterwards to find out why, what was going on with these numbers, we not only got things like "they forgot about it" or "they needed more information"; we found out they were self-selecting who they offered it to. Things like: "oh, I thought it was a scarce resource, so I wanted to save it for those that I thought would need it." Or: "I didn't want to give it to this person because they were too young or too old, so I didn't think we'd change medical management," again, to save it for other people who might need it more. Or: "I had a reason for wanting to test this person, so I only wanted to order it for people I thought I'd find something on." So again, this gives us ideas of how we now need to go back and revise and adapt this program, to help it do what we were intending it to do and have the benefit we wanted it to have. Another example came from a PCORI-funded project, one of the engagement grants, to help us with the next step in strategic planning for the Lynch syndrome screening network.
And while the patients and the clinicians and researchers we engaged with helped us define a research agenda, which is those last two boxes, with the things they were most concerned about, one of the other things that came out of the lived experience of the people with Lynch syndrome, who participated through AliveAndKickn's annual program, was insurance. In order to maintain their screening, they now have to come in for annual colonoscopies, but their health systems or their doctors were coding it as a diagnostic colonoscopy, because they have Lynch syndrome, rather than a screening colonoscopy, which has all sorts of implications for how much money they have to pay and whether they can afford these screenings. And if you think about this in a broader population sense: oh my gosh, if we're telling people they have Lynch syndrome, what if they don't have resources that will let them get these downstream screenings? So again, this highlights how much of an issue this is, one we weren't even thinking about. The other example is one where we used traceback interventions across three organizations, Geisinger, Kaiser Mid-Atlantic, and Kaiser Washington, to identify individuals with ovarian cancer who are still living and had never had genetic testing, and to try to get them tested so we can both help them and identify their family members and get the family members testing, so sort of a cascade testing process. We used engagement and human-centered design to design the specific programs, again with core elements and specific adaptations for each site, and then we ran a couple of iterations of the implementation within each site. So you can see it was implemented slightly differently at each site, and these are our overall numbers. What is not in here is that we're also doing some work right now to show where we see the drop-off at each of the different sites.
I can tell you for Kaiser Mid-Atlantic, this is their final rate, 37%, but the rate is actually much higher, and the group more diverse, if you look at the people who actually agreed to have testing and received a test kit; the drop-off is in returning the test kit. So again, those are just some examples of how we use implementation science thinking across the spectrum of where programs are in their implementation process as we research them. Just to show this in the way we normally talk about implementation science: the "thing" is whatever the program or intervention is; effectiveness is "does the thing work"; our strategies are "what do we do to help people and places do the thing"; and our outcomes are "how well do people and places do the thing." Adding the equity and engagement thinking to this is, again, co-creation: Are the benefits experienced differently, and why? How do we adapt, with our participants and stakeholders, for those different settings? And again, keeping in mind that if all settings can't do the thing the same way, what are the core components? And as Kate alluded to, by the way, it's never gonna work right the first time. You do not develop the perfect solution; you have to keep iterating, you have to keep testing it, which is why engagement and implementation science across the spectrum are very important. So hopefully I've given you that connecting of the dots for the opportunities and conclusions here: how do we achieve the promise of this? Implementation science is one of our tools in the toolkit that helps us understand how we're going to achieve the goal and provide equitable benefit for all, and evaluating that, plus ongoing engagement across the spectrum of implementation, will help facilitate that and improve the impact of implementing screening programs.
And this is a journey. It is not a one-and-done; it is not a last mile. This is the journey to get where we need to go, because we need to answer all of these research questions that we talked about, and these are the tools we use to get there. Thank you. Thank you, Alana. I'm just gonna take a minute to summarize some of what I heard, then invite Jillian to comment, and then we'll open things up to questions and discussion. I thought those four talks complemented each other really nicely. I'm gonna comment on three themes that came through for me. One is the issue of sharing. Obviously, Bob and Kate in particular talked about it: not just sharing of data, but sharing of experience, of processes, of things like CDS across institutions, and the challenges that involves. So sharing is clearly one big one. The second, and Mike really hit on this, is the lack of evidence and how much of a huge barrier that is to actually implementing what we're talking about at this meeting. So what can we really do about that lack of evidence, or, as Alana termed it, "does the thing work"? How can we come together as a community to design the rigorous, ideally randomized, trials that will produce the evidence that will convince the payers in particular, but also the practitioners and the patients? And it does make me wonder whether the payers have been engaged sufficiently in the design of the RCTs that we're doing, in part, to convince the payers to pay for this. So maybe that's something we can discuss. And the third is this issue of implementation science and, broadly, implementation. Alana, I thought, did a great job of putting that in perspective. In order to do implementation science, as Alana said, we need to know that the thing works, so we have a lot of implementation science to be done on things we've ultimately shown to work.
So I think it's obviously very much linked, but there are lots of issues that come into implementation science, and ultimately implementation, and barriers, that Alana really nicely demonstrated. So those are just three of many potential topics, but Jillian, did you wanna add anything to that? I think you summarized it really well. The only other observation I would add is that we're clearly at a point now where it's not like, "oh, if the payers would just pay for it," or "oh, if we could just get genetic test results into the EHR." We've shown now that we can do those things, and we've identified huge numbers of other little problems along the way to solve. And so the challenge is really how to start solving these problems at scale, and I think all of the speakers touched on that in different ways. So with that, we can open up to questions. I'll manage the queue on the side of the room that I can see, and Dan will manage the queue on this side; we'll go back and forth. I also have the Zoom in front of me, so for folks calling in by Zoom, I'll be monitoring the chat as well. Heidi. So both Bob and Kate spoke about the use of genomic indicators in Epic. My understanding is that one of the challenges is that it's very variant-centric, not necessarily encompassing what the indication for testing was. Was this a positive report or a negative report? Maybe the variant doesn't explain the indication. As you've thought about the clinical decision support, do you just focus on the variant and its association? Or do you also think about the context of the indication and whether that question was answered? Or are those two separate things? Just curious. Yeah, so, Heidi, I think that's a great question, and Dan can tell you we actually just had a conversation about this specific issue at our clinical meeting.
What we were struggling with is how you deal with people who have a clinical diagnosis but, slightly differently, don't have a genetic test diagnosis, like people with NF1 who have a clinical diagnosis of NF1 but no genetic test diagnosis. Should they have a genomic indicator, and how do you deal with that in clinical decision support? Our decision, and I'm not gonna say it's the absolute right or wrong decision, was that we would limit genomic indicators to people who have a specific genetic test report. There are other places, like the problem list, to put a clinical diagnosis to indicate where people have that. I think that begs a related question: what we also decided is that you can add genomic indicators to people's charts without having a discrete test result, and we've done that many times. We will add genomic indicators, in instances where it's gonna drive clinical decision support, without having discrete data in the chart, because we feel it's more important that the patient have the clinical decision support than that we have the discrete data. And so we have a lot of people, for example, with BRCA1 and BRCA2 mutations where a lot of that is legacy data, right? We've added those without the discrete data. So there are these two slightly parallel questions, Heidi; hopefully I've addressed the way we've handled it, but it's a really important point that we've had a lot of discussions about. And who's making the decision to add that to the system without the data being there? Is that you or a team? I wish. Is it me? I tend to be the one that sets the standards, I have to admit, but it's our genetic counseling team. And again, this is only possible because we have so many engaged individuals.
And we actually had a quarterly genetics meeting yesterday in which we discuss essentially all of that. And one of the things that's really important, at least at our institution: there are a few things I hold near and dear. One is that we have to do things that everybody agrees to across the genetics community, that our reproductive genetics people need to be on the same page as our cardiogenetics people, so that everybody is using the EHR in the same way. I also feel extremely strongly about sustainability, which is why we've really built an EHR-based method and gotten buy-in from all the different providers. So I and that group will make the rules, and then our genetic counselors will implement them. The bigger challenge to me is how you get outside the genetics community, so that people outside it are doing things the same way the genetics community is. That's something I currently see as a challenge that I'm taking on. I'd like to add a little bit to that. That's a great question, Heidi. One of the things that may have come out in my talk was the different types of information that we're putting into genomic indicators. Some of them are metabolizer status; some, as you indicated, are specific variants. There's allele state, there's haplotype, there's activity score, there's positive or negative. There are all sorts of different types of information we could assemble into these things. And one of the things we learned is that depending on how you want to use that information, that is, which of those data elements you want to use to trigger and inform the actual CDS logic, that's going to dictate how you design your genomic indicator. So what you're asking about, or at least the way I interpreted it, is similar to that. What happens if we want to have indication for testing?
What happens if we want to have categorical variation or other types of information that could potentially be used to trigger or to refine CDS? That's an open question. Right now, the genomic indicator model is fairly flat: it's a single term, and we pre-coordinate whatever we want into that one term. Something like SNOMED is much more complex; we can assemble terms from throughout the SNOMED terminology into a pre-coordinated string that has a lot of structure around it. One of the research projects I'm doing is looking at how we can better define a model for the next generation of genomic indicators. I've got some work on this; we're not quite ready to publish yet, as we're still working through some of the details and evaluating it against the lessons we've learned from our prior implementations. But I think it's starting to get at some of the things you're inquiring about. If I could take just a moment and comment on one of the things Kate said as well: there's an issue around governance here. The problem list can be managed by many different providers; things can get added, things can get removed. The model for genomic indicators is similar: different providers can add genomic indicators, and providers can also remove them. So in addition to the content, we need the governance model to go along with it, so that indicators we add to patients' charts, which we intend to stay there for their entire lifespan so we can continue to fire CDS off of them, don't disappear at some point or change along the way. Just to add to that, I think one of the gaps here is a workflow, or a space within the EHR, for physicians to do the diagnostic thinking, right? That brings in the pre-test probabilities, the results of the test, and their conclusion about what it means for the patient. It was a high-risk patient who probably has this genetic disease, but the test was negative: that's one thing, right?
Or this was a population screen, negative, with no other reason to think they're at risk: that's another thing. Or it's a patient with ovarian cancer who tested positive for BRCA1: that's another thing. And that diagnostic thinking part of what we do has no workflow. I don't think genomic indicators necessarily support it very well, because you have to apply the genomic indicator after the thought process. So that could be an interesting place to think about developing some kind of computational support that could essentially solve the problem you're talking about. I didn't pay attention to what order all the cards went up, so I just have it in the order Josh, then Jessica, then Pat, then Dan. So I was just thinking that there's a huge investment in making the genetic data discrete and then incorporating it into the EHR, and I'm wondering if there's a research question here about the value you get out of that. When we were modeling population screening, we didn't model the potentially millions of dollars that Penn has spent doing this. So my question is: is it possible now, or would it take an institution that doesn't have this, to figure out what kind of changes in risk management, or changes in attention to the genetic results, you get when you go from a non-discrete, PDF-only kind of situation to a more integrated situation? I guess that's a question for Catherine. Yeah, I think it is. Again, I don't think it's been millions of dollars, or at least I hope it hasn't. I think it's more about getting the buy-in and the leadership team around it to say, okay, this is something we support doing. It's been really important that we've had very designated clinical leadership and that our counselors have really bought into it.
And so I think it's really hard to define the time, because there's the time that the IS folks put in, but there's also the time that I spend doing this, which is really part of what I do, and the time our counselors spend investing in it. And I think that for us, we just felt it was imperative to improve patient care, and so there are lots of ineffable sorts of costs in being able to do that. I think it's going to be extremely difficult to figure out what the actual cost is. One of the things that we're very interested in is that now, by putting it on a website and sending everybody there, we hope that no one else needs that same, how do you say, energy, I'm trying to think of the right word, not implementation energy, but initiation energy, because there's more consolidated information about how to maximize it. So hopefully they have less of a hump to get through, or less of a thing to have to do. So we are assisting people in trying to move it forward, but yes. And let me just follow up and say, do you feel like putting, let's say, BRCA results in discrete form to have decision support has increased the engagement of patients and providers and increased appropriate risk management? Yes, so that's actually a great question. I would say yes, but it's actually really hard to measure. And what we found is that just having it in the health maintenance, even though we didn't do any alerts based on it, what we found is that having it in health maintenance, maybe people were like, oh shit, excuse my language, you know, I have to do that. And so just seeing it there did increase it, yes.
And just having that visualization there. It's very difficult to discretely measure those things, though. We had a long discussion about whether to roll it out in a stepped progression so that we could specifically measure that, and we decided not to. But our impression has been yes, that it has really made a difference. You can see it in your health maintenance tab when you log in to Epic, and just seeing it there has made a difference and increased engagement, so yes. Kate, that expletive, that was more with Lynch than BRCA, right? Thank you, thank you for clarifying that, Mike. Yeah. I'd like to. Can I, oh, go ahead. I was gonna say, if I can add a little bit to that too: I really want to agree with Kate that there are a lot of intangibles in this. Some work that I didn't present has to do with when we actually looked across multiple health systems at how they could implement Lynch syndrome screening as a use case for implementing any of this kind of stuff. And what Kate is talking about are some of the same things that we found. All of the things that determined whether you could have implementation at all, or whether you could get beyond, again using tumor screening for Lynch syndrome as a use case, what we call a non-optimized program, with some people on board but not a lot, to an optimized program, were a lot of these intangibles: could you get a positive inner setting, basically everybody on the same page agreeing that this is an important problem to address, and then figuring out strategies on how to address it, and then being able to move into a fully optimized program. You had to have not just implementation champions but maintenance champions, as well as ongoing review of the data, ongoing versions of these things that Kate was just talking about.
But those are all intangibles that require strategies and people to do things, and they're harder to capture than, again, those hard outcomes: what were the actual numbers, what did it actually do? I think that's a really good point, and I'll say this and Dan can comment. I do wonder, if I hadn't been at Penn for like 15 years, and hadn't known all these people for a really long time, and worked with the genetic counselors for a really long time, like if I had come in and tried to do this without having that really large basis of, I'm gonna say, hopefully not overstating it, trust in what I was promoting, whether it would have worked so well. I say that, Mike, because I really do wonder if that made a big difference: where we were, and how we got there, if that makes sense. I'd like to comment, just two things that came to mind, Josh, when you were asking that question. One of them is, first of all, I want to acknowledge that there's a bit of a catch-22 here. In order to generate demand, there needs to be a capability. In order to build the capability, we need to have demand, right? So there's obviously that. But the two things that I wanted to pick up on here are related to the idea of cost, the idea of the investment. First is that there's obviously a learning curve here: early adopters of genomic medicine programs, as you know, spent a lot of time trying to figure out that first one, right? The second one was a little bit easier. The third one a little bit easier still. By the time you get up to 20, it's, okay, now we're in operation phase. The way that we help new sites get to that point, or one way, is through dissemination of those lessons learned. The sites that have already blazed that trail, if we can disseminate what we've learned, like Kate has done with her website, help to lower that bar. Now, that still doesn't bring us down to zero, right? It's not cost-free, even after learning how to do 100 of these things.
There's a base cost to that. The way that I think we can start to lower that amount is through technology. So how can we now develop more tooling, more standards, more capabilities that help us do this at scale and automatically? I think through both of those things, we can help drive that cost down. The second point that I want to make is that there is secondary use for this data. And I know you know this, but the use of genomic data in its discrete form enables research that could not be done as easily otherwise. And if we're going to try to calculate a cost or a value for going through everything that we're talking about here, we need to somehow factor in the benefits that would arise from all the downstream discoveries and good things that would come from that research. For the record, I fully agree. And I also wanted to add, so Bob's been talking a lot about those implementation costs, and I want to highlight Jing and some of her work. We've also looked at this from the standpoint of what the health system thinks about as the cost to not just implement the program, but the value, as Mark was talking about. And actually we've been trying, or I should say Jing, because she's the one who understands all of this and builds all the models, to come up with models that help them in that decision-making process. So it's not just, what is it costing you to do this? But also, if you can get more patients identified, or depending on who you are using in the process of delivering your program, the doctors versus the genetic counselors or whatever, you can play with those costs and figure that out. And that's another way, at least in some of our work, of thinking about what people want to know: what are the costs they're considering in trying to figure out how, or why, to implement?
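The kind of staffing cost comparison Heather describes, playing with the costs of physicians versus genetic counselors in a screening program, can be sketched in a very simple form like the following. Every number and parameter name here is a hypothetical placeholder for illustration, not a figure from the panel or from Jing's actual models, which are far more detailed.

```python
# Hypothetical sketch of a screening-program cost comparison of the
# kind discussed above: same testing cost, different disclosure
# strategies for positive results. All figures are invented.

def program_cost(n_screened, positive_rate, cost_per_test,
                 disclosure_cost_per_positive):
    """Total cost = testing the population + disclosing positives."""
    n_positive = n_screened * positive_rate
    return (n_screened * cost_per_test
            + n_positive * disclosure_cost_per_positive)

# Two hypothetical strategies for returning positive results.
physician_led = program_cost(
    n_screened=10_000, positive_rate=0.01,
    cost_per_test=250, disclosure_cost_per_positive=400)

counselor_led = program_cost(
    n_screened=10_000, positive_rate=0.01,
    cost_per_test=250, disclosure_cost_per_positive=150)

print(f"Physician-led disclosure: ${physician_led:,.0f}")
print(f"Counselor-led disclosure: ${counselor_led:,.0f}")
```

Even a toy model like this makes the decision-relevant levers explicit: a health system can vary the positive rate, per-test cost, or disclosure staffing and see how the totals move, which is the "play with those costs" exercise described in the discussion.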
Okay, I want to make sure we get through some of the questions; we've got a bunch of folks here. Jessica. I just wanted to note something that keeps jumping out at me: we're developing programs that are siloed within healthcare systems. But I think we need to remember that patients don't stay in those healthcare systems, and ask how we make sure that the benefits and the information are portable from healthcare system to healthcare system. I don't know that there's a good solution to that, but it's something to definitely keep in mind. Just to say that that's actually what I was talking about at the end, which is that they've developed that, and it is now at least portable within Epic systems. And just to add on to that, the only constant actor in the United States is the patient. Everything else is variable. So any solution that doesn't involve the patient at the center is going to fail, and we've seen any number of examples of that. And so I think that's an expansion of what we're talking about with engagement, which is not only engaging in terms of designing and leading the research, but also engaging from the patient perspective about how this could potentially work, so that this information could in fact travel with the patient as they meander through our healthcare system as it is. Okay, I guess this question is generally to all members of the panel. I was really struck by Michael's presentation about what payers are looking for, and how right now much of the testing that we've talked about in the screening context doesn't meet any of those criteria, right? Nothing is simple. But you guys are working very, very hard and making good progress, and we've heard many of these obstacles for many years about genomic medicine. Do you feel that because it is screening, there are specific and additional hurdles that we need to research?
So my framework for this is that I've worked recently a lot in multi-cancer early detection, another genomic test, and the hurdles in front of that technology being adopted are huge, because it is considered screening and not diagnostic testing. And in that space, there are industry champions that have the resources to develop the test, get the legislation passed for Medicare, and do the long-term studies of what happens in terms of downstream compliance with other types of screening tests. But what do you think are the unique challenges because this is screening? To get a US Preventive Services Task Force recommendation takes a decade, and who here is going to be leading that evidence development effort and steering it so that we can get there? Because if it doesn't get an A or B rating, it's not going to be used, other than through research-funded initiatives or state-funded initiatives that essentially put the money behind it, and that's not sustainable. It needs to get paid for in our healthcare system. I can say screening, especially genetic screening, does get extra scrutiny, for all the reasons that you mentioned and for the reasons that have been covered here in terms of the numbers needed to test. And actually a lot of the screening gets aimed at lower-prevalence conditions, and a lot of people don't understand the impact of that and the risk associated; yeah, David does. The CDC Tier 1 conditions, though, are held up in the payer community that I'm aware of as the example to follow for implementation, because the evidence is respected, but it comes down to the implementation of CDC Tier 1. And I've seen some analyses where it looks like it's tractable to support and invest in, but it comes down to the implementation. And also the cost of testing has a huge impact. So when you wrap all of that around a respected screening test that has a good body of evidence, I think that's the place to focus, and that's been discussed here.
And the implementation science around it, I think, is where the challenges lie. Anybody else? I would agree; I think I'm actually looking to you to answer that from the payer side. I think that's a really good question. And I guess I would also just flip it, in that, again, we've been in this space forever, right? Building the plane while we're flying it. And so as long as we are generating the evidence while we're also looking at our implementation outcomes and understanding why we're getting the outcomes that we're getting, I guess I'm naively hoping that in doing that, we're also generating the evidence that can be acceptable to the payers, and we'll find those industry partners that will want to help us do those longer-term studies. I mean, yeah, the money has to come from somewhere, and we have to keep doing this work to help solve the problems, and maybe that's a naive hope that eventually we'll get there. But it's not a reason to not do the work. Oh, no, absolutely, which I don't think you were saying at all, but yes. I mean, I think the other perspective you could take is that within claims data sets, the highest-volume and highest-spend tests are all screening tests today. It's NIPT in commercial, and Medicare is different, but NIPT, carrier testing, and Cologuard, right? That was probably 60, I don't know, huge volumes there within the testing, and those are all screening tests. So by volume it's massive; the number of policies is tiny, right? Yeah, I do think population-based screening is different, and we're not supposed to be using the CDC Tier 1 to do that. So I'm asking, from a research agenda, what additional evidence do we need to make sure that it fits within this population-based screening paradigm?
And I would also just add in, I think some of what Jillian is saying is that there's also a communication issue, a numeracy communication issue: this perception that you put the word genomic on it and all of a sudden it's thought that this is very rare, whether you're using the word screening or anything. It's like, oh, this affects just such a small number of people. So again, how do we continue to communicate and show that this is not small, that the population impact is there? Just one quick comment. I think the All of Us study is a nice opportunity for generating evidence that the US Preventive Services Task Force might be interested in. We just heard they're returning results. So you can do some carefully designed studies on observational data there. I think that will be very valuable. And these other implementation studies we've heard of, they're going to generate evidence. Aaron? And they have the advantage, so-called, that only half the people who want results actually pick them up. There are lots of reasons for that, but you could sort of call that a control group of some kind. Aaron? Yeah, I have a follow-up question for Bob, because you made a good point about the value of the secondary research that can ultimately be accomplished if we implement population-based screening. I mean, that's sort of what we're aiming for with the Genomic Learning Healthcare System. But I had this maybe naive question: the standards from the HL7 FHIR Clinical Genomics Working Group and the genomic indicators, are those aligned with, or being aligned with, the GA4GH standards, like the variant reporting standard, kind of to facilitate the interoperability between the research and clinical settings? Yes, thank you for asking that question. I could talk for a long time on that, and have, as some of you can attest.
Yeah, so this is actually a pretty big ongoing effort that we have right now, and I think you're getting directly to the heart of one of the slides that I had up there, which is the fact that we have different standards used by different communities, and there is no one right answer. We have to figure out how to coexist with all of these things, but how do we lower the barrier of interoperability between them? And so the goal of what we're trying to do here is to at least harmonize on the touchpoints, so that we can create a scenario where the core data from one standard can move into the data structure provided by another losslessly, without losing any semantic meaning, and be able to move across the fence, if you will, into that other community. Your specific question about genomic indicators is a little bit different. We don't have that concept in GA4GH yet. We could potentially start moving in that direction in some of the less mature work that that workgroup is doing, but we do have similar types of knowledge structures for that information on the FHIR side. So we can represent the idea of a genomic indicator within FHIR. Hopefully we'll be able to improve that knowledge model a little bit; the extent to which that then passes over into the GKS space, I think, remains to be seen. I also want to make a quick comment that I think it's really important that each of us take personal responsibility to ensure that standards, such as the ones Bob is working on, are implemented. We have been incredibly clear with all of the laboratories and the groups that want to integrate with our EHR that we will not do it unless they are meeting those standards. We also, and Heidi can tell you, brought Epic in to talk to the people forming the standards, to make sure that Epic was also ensuring that those standards were implemented.
And so I know that there are companies that have changed because I told them, Penn will not integrate with you because you are not adhering to the commonly accepted standards for reporting. And those companies have gone back and redone what they do. So I just want to say that each of us needs to take personal responsibility at our own institutions. And when we go out, we have to say: there are standards, you have to adhere to those standards and you have to follow them, or we will not work with you. And we've been extremely clear about that. So I think I'm gonna ask a question that relates a little bit back to Josh's earlier question, and maybe a little bit to this, but it starts with Michael: is there a future, potentially, where payer reimbursement could be dependent on integration of tests into the EMR, with the argument that tests may be more valuable if they're integrated into the EMR and their value diminishes if they're not? Or, thinking a couple of steps further ahead, when you start to reimburse based on something like turnaround time, right? If you had a five-minute turnaround time, because all you need to do to get that pharmacogenetic test is hit an API somewhere that pulls the result back in, could you make a policy or some sort of incentive to say, I will pay for this test in that instance, but if it has a two-day turnaround time or greater, it's lost its value and I won't pay for it? And would that be a mechanism, potentially, to pay for some of the IT infrastructure that Josh referenced earlier? I will say that some of the payers are not reimbursing unless the lab is on ClinGen's lab list documenting data sharing in ClinVar. So if we've been able to do that, then I think the same concept would be feasible for EHR integration as well. Some of it, though, is getting the payers to pay attention to this and figure out who to talk to, and then they turn over the people whom you've educated about this.
And so it hasn't been a perfect adoption, but the payers definitely do pay attention when we actually get communication with them. I think it's a great idea. And to go back to the personal responsibility aspects that Kate was noting, we do have relationships with payers. At Geisinger we have a payer that's associated with us, and so we had discussions with them. In addition to not using laboratories that aren't sharing data, we also don't use laboratories that won't share the full sequence information with us if we request it, which gets at the idea that there may be other things we would like to do with this. And I can guarantee you that each of your institutions knows who their top five payers are, and they have very good relations and connections with those top five payers. And so if you can get on the agenda, you can bring this up. This is the sort of grassroots approach that can really move the needle. And it's not directly answering the question, Jillian, from a payment policy perspective, where you would have a policy that states, this is how we do it, but it does at least deal with some of the contractual issues, which actually drive a lot of the reimbursement landscape. Yeah, and just to add to my question before turning to Michael: is this a research question? Is it worth asking whether there's more value in one instance over another? That seems to be the research question there. And then if you had that, could it be used to drive policy? Well, what I like about what you just said, Jillian, and I was just saying this to Kate, is that I think some of these pragmatic studies, like what Kate is doing, and the more studies we do that include that, again, as we do them pragmatically, using good implementation science methods and things like that, I think will develop that data.
And so I really like that suggestion and the question that you just asked, because that would provide data we could use, and then we could make that argument. It's such a complex question; I could take an hour to answer it. To talk about the value of genetic data coming into payers: right now there is a current problem that payers don't have that information and can't even see it, can't use that information to determine outcomes related to a test. That is one problem. That problem is compounded by the fact that the coding of test data is so poor that they can't even know which test is being ordered, which test that data is associated with, because CPT coding is so coarse. So using that data for healthcare operations research, which is what we want to use it for, is still a difficult problem. Add to that the fact that the data coming across from labs isn't standardized, isn't formatted. It usually comes across in HL7 messages as a text blob stuffed into some field, and you have to figure out how to parse it. To get into the question of value: is that data valuable? Yes. It's valuable for healthcare operations research; it's valuable to get insights into outcomes. Can you assign a dollar amount to that which you could use to subsidize IT infrastructure? I think if you are aiming at generating evidence from data that's already there and already flowing, there's value there, and investment should be happening, and is happening, to bring those data together in a way that lets you do outcomes-based research. The question of secondary use of that data for life sciences research and discovery, and the value associated with that, is a super complex topic; I don't know how you connect all those dots to get money to flow between payers and researchers. I don't know if that was helpful. We're almost out of time. Are there any other questions or comments?
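Michael's description of lab results arriving as a text blob stuffed into an HL7 field can be made concrete with a minimal parsing sketch. The sample message below is invented for illustration, and real HL7 v2 processing, with escape sequences, field repetitions, and continuation segments, is considerably more involved; this only shows why a free-text OBX-5 value defeats structured querying.

```python
# Minimal sketch of pulling a narrative result out of an HL7 v2 OBX
# segment, the "text blob stuffed into some field" scenario described
# above. The sample segment is invented for illustration.

SAMPLE_OBX = (
    "OBX|1|TX|GENETIC^Genetic Test Report||"
    "BRCA1 c.68_69delAG (p.Glu23ValfsTer17) detected. Pathogenic "
    "variant associated with hereditary breast and ovarian cancer."
    "||||||F"
)

def parse_obx(segment):
    """Split an OBX segment on the pipe delimiter and return the
    value type (OBX-2), observation identifier (OBX-3), and the
    observation value (OBX-5), which here is unstructured text."""
    fields = segment.split("|")
    if fields[0] != "OBX":
        raise ValueError("not an OBX segment")
    return {
        "value_type": fields[2],        # TX = text data
        "observation_id": fields[3],    # coded test identifier
        "value": fields[5],             # the free-text report blob
    }

result = parse_obx(SAMPLE_OBX)
print(result["value_type"])    # the value type code, e.g. TX
print(result["value"])
```

The point of the sketch is that everything clinically interesting, the gene, the variant, the classification, lives inside one undifferentiated string, so anyone wanting discrete data has to layer fragile text parsing on top, which is exactly the interoperability gap the standards discussion in this session is trying to close.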
Just listening to this, it really has me wondering, on this issue of generating data that's going to be compelling to payers and to the US Preventive Services Task Force: could we do better as a community at really thoughtfully designing observational studies and randomized trials that are explicitly designed to generate the kind of compelling data that we need for these purposes? And I'm just posing that as a rhetorical question. Are there ways we could bring others to the table and collaborate more around design to really generate the data we need? Jillian, any final comments? No, just that I agree with that. And I really appreciate the approaches of many folks on this panel who have gone into incredible detail to try to understand what are the things we really need to understand, and what are the questions we need to ask, to operationalize things or to allow payers to make policies on them. So I completely agree. Great, all right. Nicely on time, thank you very much. So the lunches are in the back there. Please bring them back to your seat, so don't wander, and we'll let you munch while we go into the next session at 12:45.