So, if I can have Alana, and, Ingrid, if you would stay up here, and then have Alana and Jessica come up, we'll open this up for discussion. I wanted to riff off this last point here, which I think is a very interesting one. You know, when we do these types of studies, a lot of times we're really caught between, you know, internal validity, where we're all using the same approach, the same standards, the same tools and everything, and then this opportunity, which is to really test real-world or external validity. And so Alana, I wanted to put you on the spot just to talk a little bit about how some of the tools of implementation science that we've been talking about in eMERGE can be applied to really get the maximum amount of information out of this heterogeneity and diversity while still using rigorous scientific approaches.

So I think the biggest take-home from that would be: it's all a variable in the equation. When we think about a study with really good internal validity, we know everybody used the same approach, asked the same question, did it all the same way, so we can look for associations. The differences between the sites, the differences in the natural experiment, the way things were done, the heterogeneity, that's a variable in the equation, and we can look across it. How do sites that delivered results with a genetic counselor differ from sites where results were delivered by a clinician? Do the different clinicians in Ingrid's survey think about it differently? You can look at that as a variable in the equation, and that's really how you use a lot of the tools of implementation science. It's just thinking about it as that sort of variable, so you can still look for your associations and use it that way. That would be my biggest suggestion.

Great. Thank you, Alana. Yeah, Rex, do you want to go to the microphone, please?

I just wanted to follow up on that. I guess I think about it from a statistical perspective. How do you think about it not just being sort of a random collection of things, and how do you tease patterns out of that?

So while I'm not a biostatistician, I usually focus more on the qualitative and mixed-methods side, there are some statistical ways to do that. There are some folks that are developing statistical processes to look at that. We can also use some of the frameworks from implementation science so that we evaluate, maybe at least use the evaluation framework on the back end, to say this is the context that we're putting it in. So we're evaluating across different providers, we're evaluating across different time periods, and then it looks the same as any other sort of table that you would create if you had everything done at exactly the same time. You're just sort of switching around where the variables go, would be the high-level way I would describe it.

I mean, in some ways, and you can correct me if I'm off base here, but in some ways, the more heterogeneity there is, probably the more reliance there is on descriptive approaches, mixed methods, qualitative approaches, as opposed to statistical approaches, whereas if it's a little bit more homogeneous, then it lends itself more to a statistical approach. So some of the art of the science is actually trying to sort out when to apply those different approaches. Is that a fair statement?

Yeah. And there are probably others who can answer that even better than I can, with the statistical backgrounds and advanced methods.
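To make the "variable in the equation" idea concrete, here is a minimal sketch of treating between-site heterogeneity as a model covariate rather than as noise. The file name, column names, and model are hypothetical illustrations, not the consortium's actual data or analysis plan:

```python
# Minimal sketch: heterogeneity as a variable in the equation.
# Hypothetical per-participant data; none of these names come from
# the actual eMERGE or CSER datasets.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("outcomes.csv")
# Assumed columns:
#   understood_result  0/1, did the participant understand the result
#   delivered_by       "genetic_counselor" or "clinician"
#   site               which site returned the result
#   age                participant age

# Delivery mode and site enter the model as explanatory variables, so
# "who delivered the result, and where" is tested, not averaged out.
model = smf.logit(
    "understood_result ~ C(delivered_by) + C(site) + age", data=df
).fit()
print(model.summary())
```

The same pattern extends to any of the between-site differences mentioned above: code the difference as a covariate and it becomes something the association analysis can speak to.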
Just a quick riff on that, and Mark, not to pick on Geisinger, but just as you think about the different healthcare systems, and it's not really picking on, you know, you guys have your set of very unique programs, other research that you're interested in or engaged in, and it would probably be pretty difficult to measure that quantitatively, but do you guys ever sit and think about how your different environments may be weighing in? Or, we're involved in CSER and, you know, who does that? No, I'm kidding.

Yeah, you chose poorly. So, and I'll let others weigh in on this as well, but to me, the context is absolutely critical. And so while we've talked a lot about provider response and patient-reported outcomes, we are in fact also collecting institutional stakeholder perspectives as part of this as well, because part of the lessons learned that we want to be able to assess are barriers and facilitators at the system level that support implementation of genomic medicine. And so while we didn't present any of that today, that's definitely something that we're very interested in, and I know that there are the implementation...

You might just mention that the project that you're leading on Lynch syndrome through the dissemination and implementation RFA, it's not specifically related, but has some relationship to the work that we're doing in eMERGE, specifically looking at those types of contextual factors. Do you want to talk just a bit about that?

Yeah, so I can say there are ways to look at those contextual factors across sites. In another R01 that we have on implementation of universal Lynch syndrome screening, we're actually doing a coincidence analysis, which is a type of qualitative comparative analysis, to understand the impact of those contextual variables, those contextual differences, on implementation of genomic medicine. But what we've already seen even within eMERGE, I believe it was Iftikhar's IRB paper, is that some of the ways our projects are set up were dictated by the organizations and how their IRBs actually let them set up the study. So we couldn't create them all the same way across all the sites. And again, that's a variable. We put that into our contextual information about how this happens.

And just, do you want to talk about, because I know that this is something you've also looked at, at least early on, in CSER, do you want to make any comments about how you're looking at site contextual factors beyond some of the demographic kind of information that you presented?

Maybe Frank can answer this question a little bit better than I can, but we're giving the Organizational Readiness to Change measure across all of the sites. Anything else? I'm sorry. Frank, could I have you answer? And maybe individual sites are doing kind of more stakeholder assessments that aren't necessarily harmonized across all the sites.

No, I was trying to think of what the variables would be while I walked over to the mic, and I was hoping that something would come to mind. Churning through. And I'm still kind of thinking about it. You know, one of the primary differences across our sites has been the populations, of course. We've got both children and adolescent populations and adult populations. And we've done our best to adapt the harmonized measures as such.
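As a rough illustration of the configurational logic behind qualitative comparative analysis (coincidence analysis builds on related ideas), here is a toy truth-table sketch. The sites, conditions, and outcome are invented for the example and do not come from the Lynch syndrome study:

```python
# Toy truth-table step from crisp-set qualitative comparative analysis.
# All sites, conditions, and outcomes below are invented.
import pandas as pd

sites = pd.DataFrame({
    "site":        ["A", "B", "C", "D", "E"],
    "shared_ehr":  [1, 1, 0, 1, 0],  # binary contextual conditions
    "gc_on_staff": [1, 0, 0, 1, 1],
    "implemented": [1, 0, 0, 1, 1],  # outcome: screening program adopted
})

# Group sites by their configuration of conditions and check how
# consistently each configuration goes with the outcome.
truth_table = (
    sites.groupby(["shared_ehr", "gc_on_staff"])["implemented"]
         .agg(n="size", consistency="mean")
         .reset_index()
)
print(truth_table)
# Configurations with consistency 1.0 are candidate explanations for
# why implementation succeeded in some contexts and not others.
```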
But, you know, of course, when we analyze the data, we're going to have to have some way of marking which site the data came from, so that we will be able to keep that contextual information in mind when we do the analysis. And of course we're doing the ORCA, the Organizational Readiness to Change Assessment.

Yeah, but I think it has more to do with the first part, which is the stakeholder engagement, right? So we are strongly encouraged to engage our stakeholders, who generally want the surveys to be quite specific to the local population and not so standardized. And the other point that Jessica already made is that a lot of the surveys are actually not at appropriate reading levels. And so, you know, I think the committee tried pretty hard so we could at least harmonize changes to them. But it's kind of an interesting thing when you're told to use validated measures, but they're validated measures that aren't appropriate to most of the population. So I think those are some of the issues that came up that I don't think are going to come out of organizational readiness. There were also some very interesting cultural issues, country-of-origin issues, and different parts of the country being more or less comfortable asking those questions.

Yeah, I think it's tough to get the reading level down to where administrative stakeholders can actually understand. But, you know, that's a cross we have to bear. Great. Josh.

Yeah, I think this question is kind of in the same band with some of the others. I was thinking about the different sources of heterogeneity, and of course one of them is the fact that the sequencing results fall into lots of different categories, lots of different domains. And so I wonder if there were any signals, when you think about a general measure like a provider attitude, on how much that might vary based on what is actually being returned. Is it a cardiomyopathy risk? Is it a cancer risk? And is there enough power, essentially, to sort of tease those kinds of differences out?

Yeah, so I'll address that, since we're kind of looking at that. The answer is that, really, and we were talking about power and statistics, it's going to be difficult because of the differences in the results. And so a lot of this is using this mixed-methods kind of way of looking at things. But it's very true. And I think that we may be able to kind of group variants into different categories in terms of impact on the patient. But I think that's a very good question. I mean, again, it just adds to the complexity in trying to address this. And I do think that we'll obviously get a lot of information in the surveys, but I think qualitative interviews are going to help us maybe kind of put it together and contextualize it. So for us, the real goal is just to make kind of some recommendations. But you're right. It is going to vary. This is one more source of variability.

Yeah, anecdotally, and I'm sure you've experienced this in Europe, I know from information you presented even around pharmacogenomics that there's a lot of variability in terms of, well, I believe this one, but this one I'm not so sure about. And certainly, in our institution, you have ones like familial hypercholesterolemia, where you can actually measure that cholesterol level and show it to them and they say, okay, yes, I believe. And others where it may be very difficult to actually confirm that there's any type of an incipient phenotype, when the initial phenotype you see is sudden death.
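On the power question, a rough back-of-the-envelope calculation shows how quickly per-category comparisons run out of power once results are split across many domains. The effect size and sample sizes below are made-up illustrations, not figures from either consortium:

```python
# Rough power check for comparing provider responses between two
# result categories. Effect size and sample sizes are invented.
from statsmodels.stats.power import NormalIndPower

analysis = NormalIndPower()
# Modest difference (effect size ~0.3) between two result categories,
# with 40 returned results per category, two-sided alpha of 0.05.
power = analysis.power(effect_size=0.3, nobs1=40, alpha=0.05, ratio=1.0)
print(f"power = {power:.2f}")  # roughly 0.27, far below the usual 0.8
```

Numbers like these are one reason the discussion keeps returning to mixed methods: qualitative interviews can speak to category-specific differences that the quantitative comparisons are too thin to detect.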
It's not a great initial phenotype to have, and we'd like to avoid that. But the trust, then, is different. And so I think that's, again, something that's going to be a very interesting study. Again, we're going to run up against the numbers issue, which is: can we really get a large enough cohort across enough sites to really understand, you know, at a statistical level, what the differences are? But I think there's going to be some really rich qualitative information that comes out of the stories and the experiences that people tell about, you know, actually returning these results.

Can I just add that part of our surveys are also not just about management, but more about how it feels to get something back that you didn't order, that you may not understand, and what the impact is on your workflow. So I think some of that may not be as specific to the actual variant or disorder that's being returned, which is more kind of the management side, but kind of thinking about, you know, do providers talk to the patients about it? Do they know what to say? Those kinds of things. So there are some things that may not be as related to the specific finding. Janet, did you want to say something?

Yeah, I was just going to comment that one of the things that we thought would be really interesting was whether or not a primary care provider who has had other patients who've had BRCA testing might feel very comfortable when a BRCA result comes back, or a medical oncologist who gets that result may not think twice about getting that kind of a result, whereas the cardiologist who gets a BRCA result may be floundering, with a quick referral to genetics or their oncology colleagues.

Yeah, so we're probably kind of looking for patterns, maybe more than statistics, with all of these kinds of variabilities.

Well, one thing I wanted to add, kind of going off of this, is about using consistent terminology. We've been having the conversation about what we mean by positive and negative, and discovering that there are different groups of positive, you know, that perspectives are going to be very different for something that's diagnostic versus something that's secondary or incidental, or whatever we landed on yesterday, versus a carrier result. And so we're trying to be consistent in how we define those categories and what that actually means. And I do want to make an editorial comment to commend CSER for choosing the Organizational Readiness to Change Assessment. You know, any of us that are doing this recognize that what we're really dealing with here is not a discussion about evidence or evidence-based medicine. It's really a discussion of a cultural change. And we have organizations that are in very different places. So I think trying to quantify that using that instrument is really cool. I'm looking forward to seeing what you find.

Yes, thank you very much.

Sarah Knight, CSER II, SouthSeq, and the University of Utah. I wanted to mention the qualitative work that we're doing across CSER II, which is not harmonized so much. We have the ORCA to capture some internal context, but we also have a number of qualitative interviews across all stakeholder groups: parents, patients, health system executives, clinicians, and community leaders that may have an interest. A number of these interviews address barriers and facilitators, and others are more in-depth interviews about participant experience in genome sequencing as well as in the studies. And I think this adds a real level of richness to what we're doing with the ORCA.
We haven't harmonized these interviews or groups or advisory panels across CSER II. However, we have open discussions about data integration using, hopefully, some strategies that are sound, and we have discussions going on right now about what strategies to suggest to the group. It sounds like that's something you're doing in eMERGE as well. And that may be a very interesting conversation to have: how do we integrate both the quantitative information coming from the ORCA and these rich qualitative studies that are going on as well? Have you had thoughts about that?

I can just comment about the healthcare provider survey, that specific one. That has both qualitative and quantitative aspects, a survey and interviews, and we're doing the same interview for that particular one across the four groups at all of the sites. But I think there may be other kinds of issues within eMERGE around integrating that.

I'm not sure we've done such a good job of integrating. I would say across the eMERGE sites, there are some that are doing a lot more rich qualitative work. Northwestern, you guys are doing lots of qualitative work. Some of the other groups are doing a ton of qualitative work, Geisinger, we are. Others are not doing as much. But we haven't gotten to the point of actually trying to, I wouldn't call it quite harmonize, but compare what we're hearing across sites. And I would bet that there are certain themes that are common across all of the sites as we do patient experiences, all of those different organizational stakeholder experiences, healthcare provider experiences. But within eMERGE, other than the healthcare provider survey, we haven't harmonized interview guides or anything across sites. I think there's still some comparison with that data that can be done, though. And then, short of harmonizing an interview guide and doing the same interview with all stakeholders, we do have another project, a supplement for eMERGE that Georgia is leading, to do just that. We're looking at integration of a family history tool, and we're doing the same interview with stakeholders across different sites. So that is another way to do that qualitatively and see those differences and similarities. And I don't know if, Jessica, you want to talk about the qualitative and quantitative integration of the ORCA or something within CSER?

Thank you very much. So I'm trying to think of some takeaways as we're going to be moving into breakout sessions. And it seems to me that one opportunity that would exist is, again, while there's a lot of heterogeneity, I'm hearing some commonalities between CSER and eMERGE about approaches: using surveys, using qualitative interviews, interview guides, and so on. And it would seem like one reasonable thing that could be done would be to say, can we identify the tools that are being used, whether they be the survey instruments or the interview guides, and look for commonalities there. And the goal would not be, I would hope, to say, okay, let's fix everything so we're all on the same page. I don't think anybody wants to go through that. But I do think we could look for opportunities where these things really are mapping together, and so there are opportunities where we might be able to pool data and get a broader sense of experience.
And so that might be something to really consider in the breakout session: how feasible might that be? The other thing I wanted to come back to, Jessica, is that you presented this group in one of your last slides that's just going to be starting to convene, and I'm trying to remember now exactly what the content was. Was it about a survey instrument, or do you know what I'm talking about?

Yeah, the validation of a subset of the instruments?

Yes. And so I was wondering whether there could be the opportunity for perhaps the eMERGE representatives to participate in that exercise as well, to really see if we can get both groups moving a little bit farther forward. It seemed like, if that would be permissible, that would be a relatively easy takeaway that could be done. Okay, I see Frank nodding his head. Yes. Okay. Great. Frank says it's okay, so we're good. That's up to Frank. I can tell that Frank is in charge of almost everything here, so that's great. Frank is responsible if anything goes wrong, because there's lots and lots of things. Yeah. Yes. Thank you.

Just on that note, I think it's very important. A lot of what we're trying to do here today is to kind of identify commonalities, and certainly, in thinking in terms of the methodological approaches, that's incredibly appropriate. However, I do want to just remind us all that the way that people come into the experience of getting a genetic result in the context of eMERGE is quite different from what we're doing in CSER. And even while we are looking for ways to sort of harmonize some of our methodologies, we cannot let go of the fact that most people who are getting a result in eMERGE are there as a consequence of participation in a biobank and a research exercise, a research exercise constituted in a healthcare system, but it's very different from the people who are coming to CSER as patients with defined clinical problems. And I think that means that doctors think about the information differently. For sure, the patients think about that information differently. And I think there could be some really interesting and generative opportunities for a sort of compare and contrast of the results between the two groups that I don't want us to lose sight of, even while we're talking about harmonization efforts from the point of view of the methodology.

Yeah, I would just reiterate that. I think that's a great point. And I also should say that, you know, there's a stakeholder breakout session after this, so we can talk about how to do some of this stuff, just making a plug for that. But I think that's a really great point. That's another one of these variables where we need to think about what the differences are. And certainly it was not my intent to imply that we would be trying to be more homogeneous.

Yes, absolutely not. But I just wanted to jump in at that point, as we're talking about it, so we don't lose sight of that. That's a key variable to study, if we can use relatively similar instruments to gather the data.

Exactly. That's a really great opportunity. Yeah. And even within eMERGE, there are some sites that are actually using more of a disease-focused entree into the program, as opposed to a general population biobank. So even within eMERGE, we have some of that heterogeneity in the current phase.
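One lightweight way to act on the "find where the instruments map together" suggestion would be an explicit item crosswalk between comparable survey questions, with the consortium kept as a variable so the eMERGE-versus-CSER entry difference stays visible rather than being averaged away. The item names, file names, and scales below are hypothetical, not the actual harmonized instruments:

```python
# Sketch of pooling across consortia via a hand-curated item crosswalk.
# File names, item names, and scales are all hypothetical.
import pandas as pd

# Items judged comparable by a harmonization group: eMERGE -> CSER.
crosswalk = {
    "provider_confidence_q3": "prov_conf_item_b",
    "perceived_utility_q7":   "utility_item_2",
}

emerge = pd.read_csv("emerge_provider_survey.csv")
cser = pd.read_csv("cser_provider_survey.csv")

pooled = pd.concat(
    [
        emerge[list(crosswalk)].rename(columns=crosswalk).assign(consortium="eMERGE"),
        cser[list(crosswalk.values())].assign(consortium="CSER"),
    ],
    ignore_index=True,
)

# Consortium stays in the pooled table as a variable, per the point
# above, so the different routes into testing can be compared.
print(pooled.groupby("consortium").mean(numeric_only=True))
```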
So one of the lessons we learned in having created both outcomes forms and also abstracted data, and we now have data for about 60 individuals that have had the six-month, you know, final results time point, is building on Alana's suggestion: you create an algorithm and then you deploy it to another site to see how it works. I think it would really be helpful to do that in a staged process, probably iteratively, and then even have an SOP for creating an outcomes form. So one of the lessons we've learned is to, you know, study these charts ourselves and then iteratively develop our own outcomes form. And then I think the next stage would be to send it to another site and see how they get familiar with it, how easy it is, and then eventually deploy it across the network, or across several networks, eMERGE and CSER. So I think having an SOP for that process itself would be very helpful.

Great. Thank you. Yes.

Picking up on the previous comment comparing the two consortia and thinking about how patients enter, the other thing we'll have to pay close attention to, it seems to me, is the age differences, because clearly eMERGE is overwhelmingly adult with a small percentage of pediatric, and we're the opposite in CSER. And so just the fact that one is dealing with the person themselves and the other with the parents in most cases, et cetera, is going to have a large impact on the perception. And perhaps the physicians involved, pediatricians versus internists, for instance, are not necessarily the same in their familiarity and acceptance and so on of genetic information. So there's variability that originates there that we'll probably have to pay attention to as well.

Yes. Thank you. Great. Any other questions? Excellent. So we're just a tiny bit ahead of schedule, which is never a terrible thing in these meetings. So once again, I want to thank our three presenters. I think this was an outstanding session. So thank you very much. And Rex will launch us on to our next activity.

So as we do go into the breakout sessions, I just want to put the leaders of those breakout sessions on notice that for the discussion this afternoon, where we're going to think about collaborative projects that might go forward, I'm going to ask each of the leaders to come out of their breakout sessions with one or two suggestions for what such projects might look like. Now you can have your break, unless Mark wants to hum a few bars. No. Yes, if they're on the sheet here.