All right, we are going to resume here. So Bob Freimuth wrote and asked if I was gonna go into the hurry-up offense, and I think it's gonna be more akin to watching Patrick Mahomes trying to escape from the Tampa Bay Buccaneers front four. So in any case, what's going to happen for this next hour is that we're gonna go through some slides that Ken and I put together with the takeaways. Yeah, Tampa Bay didn't take it easy on Mahomes, why should I? So Ken is going to be editing slides with the content as we go through. We do have a lot to cover, so I'm gonna be fairly draconian about moving us forward, and would ask that if there are some minor comments or edits, pop those into the chat or the Q and A; we will definitely pick those up. But if there are some major themes that we either got wrong or missed, that's where we really wanna make sure we don't leave anything behind. So Ken, are you ready to share your slides? Ken was feverishly preparing the slides, so he will probably be joining shortly. I'm here. There we go, okay, great, thank you Ken. So just to remind people, the goal of this meeting was to develop a research strategy on the use of genomic-based clinical informatics resources to improve the detection, treatment and reporting of genetic disorders in clinical settings. So we wanna make sure and focus our comments around research strategy and research questions, as opposed to doing what we all too commonly do, which is to get hung up in some of the issues relating to infrastructure and standards and so on. Next slide, please, Ken. As I listened over the course of the two days, I pulled out what I would consider to be overarching themes that came up several times and were not specific to a session. And so these are the overarching themes that Ken and I heard. Research should include an implementation science framework.
We need to work on understanding the value proposition for all stakeholders, which would include patients, research participants, providers, payers, the C-suite, researchers, et cetera. And to that end, the NHGRI should support, and potentially convene, multi-stakeholder collaborations to explore this value proposition to make sure that the research is relevant. We need to have an ongoing engagement with patients as partners to develop applications that are under patient control to promote genomic medicine. And in particular, to identify ways to lower barriers related to regulatory processes to promote research in this area. One of the ongoing challenges with consumer-focused research is that the private sector can do this very rapidly. But whenever we try and do this under a research umbrella, we frequently get bogged down in a lot of regulatory issues with IRBs and consents and third parties, et cetera. So are there ways that we can accomplish what needs to be accomplished with this patient engagement and still do it in a timely fashion? One of the things that I think was most striking right from the get-go is the pervasive bias across all aspects. And I think we initially started with the idea that there's bias in our genomic databases, but I think we heard many speakers talk about bias in data, in our information systems, in access, in definitions of value and knowledge, and probably a number of other things. So I think we have to recognize the pervasive nature of the bias and make sure that we explore that upfront. And then the last here is the need to account for relevant workflows; everything that we talk about impacts somebody's workflow. We need to understand that and we need to research it from the perspective of the workflow. Pat Deverka added that we need to develop and utilize core outcome measures to demonstrate value. I think that's an important point. So we'll be sure to capture that.
Jeff had indicated our goals are aligned with the private sector, so why not include them in some way? I think that's what we're trying to get at with the multi-stakeholder collaborations, and we will make sure and expand on that. Any other comments on this particular slide? Yeah, Mark. Go ahead. Yeah, just on implementation science, one thing I've observed is two very different interpretations of what that means. When I talk with some of my friends in behavioral and psychology research, they take that to mean evaluating the impact of a behavioral intervention. I think in the EHR world, we see implementation science as how systems are implemented. So as we get behind that, we'll wanna be really clear on what model of implementation science we're referring to. Yeah, and the NIH actually has a dissemination and implementation group that works across the NIH on this, and they have some definitions. And so I was sort of implicitly assuming that we would use definitions from the NIH around this, but that's a good point. And a good implementation science framework should specify all of the different answers to those questions. So that's a good point to consider. Any other questions on this particular slide? Mark, I just put into the chat a comment about the last point on workflow. We have heard a number of different times how genomic data is disrupting the practice in some ways because it is providing opportunities for clinicians and other providers to think in new ways about how they practice and what data points might be relevant in a particular situation. So existing workflows may not be readily geared towards building in the consideration of genomic data.
That might present an opportunity for us to help develop new workflows around these data types, specifically rethinking the situations in which it might be most advantageous to start considering genomic data for a particular patient, and to think about how the introduction of those data in those new workflows might be a goal that we can work towards in the future, helping shift clinical practice into those new workflows as opposed to trying to shoehorn an extra step into an existing workflow. Okay, there are a couple more comments that are coming up in the chat, but to make sure that we get through everything, we will capture those. I think they're all pretty self-explanatory, so I don't think we would need clarification or discussion. So Ken, let's go ahead and move to the next slide. I'm not gonna really have any discussion on this slide. This is just a summary from the survey that I presented yesterday. The key needs from the survey: research into methods for integrating analytical interpretations derived by computational models of genomic data into clinical settings, and then studying ways to ensure that CDS has the ability to incorporate and support multiple genes of clinical information. These are a little bit more generic than some of the deeper discussions that we've had. And then a couple of the other things here really support some of the overarching themes that we've noted on the prior slide. So let's move to session one. In session one, we heard loud and clear that we have to study the inherent biases in data, algorithms, information systems, and in implementation. These are multi-disciplinary and multi-dimensional. And this is certainly not an exhaustive list, but ones that were explicitly discussed were race and ethnicity, social determinants of health, urban versus rural, and academic versus non-academic centers. I think education could fit in here as well.
I thought there was a really interesting comment that was made about implementation equity. I think Kevin Johnson mentioned that. That was a term that was new to me, but it really hit home. So that's one that I took away there. The second bullet, and I have a second slide on session one here, is the need to explore the value proposition, which is imbalanced between researchers and participants, and this is particularly noticeable in underserved populations. Ken, let's go to the next slide. We'll go through the bullets and then we'll come back for discussion. We need engagement in research across a broader range of organizations. We have a Casablanca problem: round up the usual suspects. NIH puts out an RFA and we see all the same faces, most of which are also on this call. And again, this concept of implementation equity. We need innovative research questions and methods to address these inherent biases in a rigorous and systematic way. This was a really interesting point that Dr. Jeff brought up, and we really don't have methods right now that we can roll out to implement this. So this is gonna require some research into actual methods. And then outcomes have to capture both benefits and harms. This was specifically mentioned in conjunction with genomic clinical decision support to inform some mitigation approaches. So across those bullets, any additional comments? Jeff had mentioned literacy and numeracy; I'm gonna say that's another part of the list, and we have to make sure that we're covering that. Rex mentioned the Desiderata and survey; yes, we will include more detail from those in the final report, but I didn't wanna spend too much time in the discussion on that at present unless there's a specific point that you wanted to bring up. Okay, any other comments or reaction to the takeaways from session one? Okay, Rex can't unmute. Can somebody unmute Rex? You should be able to talk now. Can you hear me now? Yeah. Excellent.
So there were just a couple of things that I thought actually resonated with some of the other things we heard in later sessions. So one of the comments was we needed to do a better job capturing conditions across multiple genes. And I think given the broader interest in polygenic risk scores, that's something that probably should get added. Again, I'm just not sure at what level you wanna deal with these things. And then in that session, we talked a little bit about the apparent inconsistency between people wanting, on the one hand, to be able to track the actual details of the molecular test slash analysis, but on the other hand wanting to make sure there wasn't too much detail in the EHR. So I think those need to show up at some point, but great. Okay, and then could somebody unmute Suzanne Bakken, please? Sorry, I don't need to be unmuted. My chat is just disabled. I wanna be able to chat. Oh, okay. So, and if you wanna plop it into the Q and A, we should be able to capture it there as well. But yeah, if we can, at this point, essentially enable chat for everyone, that would be great. Gerald, I don't know if you can do that or... Yes, that has been enabled. So we can do that now. Everybody can chat. Okay. I wanted to add something if that's okay. Yes. Just a little hand raised, oops. So I was just wanting to add the learning health system, and enabling the learning health system from genomics, because in the example with eMERGE, it enables discovery in clinical care, but that loop was not necessarily closed completely. And so that's something that could be added as a bullet. Great, thank you. That's really important because we did see that figure from the strategic plan several times during different presentations.
And we don't have to do it now, Ken, but I might actually move that to an overarching theme, the whole learning health care system, because I think that figure would frame that discussion very well. Other reaction to the takeaways from session one? Mark, this is Erin. I would just add, for the implementation equity, that I think this is gonna take some outreach on our behalf. For these certain organizations, when you feel inferior over a long period of time, it's hard to be courageous and step up to the plate and try to go after these interesting opportunities and so forth. So I think we as a group need to do outreach to those types of organizations that maybe are kind of missing from this group. Very good point. Hearing no other comments and not seeing anything else appearing in the chat, let's move on to session two. Again, two slides on the takeaways here. Use of implementation science research methods to improve implementation equity, not just research equity. There have been a number of comments that have been coming in around implementation science frameworks, so we're getting some good engagement around that, and I think we'll be able to address some of the definitional issues that have come up as we prepare the final report on this. Develop a patient-centered research agenda. There were a number of potential research questions that came up in that session. Authentication, privacy, and security of genomic data that's accessible or under patient control. Re-contact absent clinician oversight. The knowledge requirements for different stakeholders, you know, sort of what is that minimum knowledge set to be able to use the information. Innovative enabling platforms for obtaining and returning genetic results. And the ability of a patient-centered focus to potentially reduce bias. And the next slide on session two, Ken.
Need an updated business model of research to attract a broad range of stakeholders to participate, and to understand more about the incentives to implement genomic-based clinical informatics resources and tools. And research into ways to represent genomic information as structured data while minimizing manual processes. That also comes up a bit later in the discussion today. So we're now open for discussion on session two: comments, questions, additional points to make. I raised my hand, but I don't think you saw it. No, I'm not seeing the hand-raise function, unfortunately. Yeah, I can see it, so I can call on it, sorry. Okay, Ken, why don't you take care of that then? Yeah. I just put it down, but I do still have a comment. Sorry. Regarding this session, I think something that I didn't emphasize enough in the 10 minutes I was given, but that I think is very important, is that we need to figure out methods to use to close that loop and do the learning healthcare system that was mentioned earlier. Reusing genetic data from test results presents some methodological challenges, and you need to modify the strategy compared to when you use research genomes. In some ways, it's a problem parallel to starting to do phenotyping in the EHR as opposed to having all your subjects come in for an exam or something like that. So I think that figuring out methods for this reuse model would be really helpful in this case, because some of the limitations of using genetic data in the clinic come from the fact that we don't know the full phenotypic implications of some of these variants. I think it's really gonna be a very powerful place for us to learn about that. Yeah, and I think that ties in with what we heard in today's session from Marilyn about the use of this data over the lifespan. So that reuse question is a critically important one, possibly even rising to the level of an overarching theme.
That these genomic data are different from most other types of clinical laboratory data that we use on a daily basis. Go ahead, Robert. Yeah, thanks, Mark. I'd like to pick up on that point on the previous slide looking at the patient-centered research agenda. One of the things that came up a couple of different times, and I'm not sure where you'd wanna capture this, is the concept of data portability. I think it might fit into this topic as well, but again, it was mentioned a few different places. So being able to access the data, breaking, I think, in some ways, a kind of legacy mindset of the data living in only one place or under the auspices of a healthcare system as opposed to potentially with the patient, and some research as to how we can improve the portability and access to genomic data that has relevance across a patient's lifespan. Yeah, I think that's a really excellent research question. It was something that I didn't explicate under the patient-centeredness, but as I said in my comments yesterday, right now, at least in the United States, the patient is the only common actor across the healthcare delivery system. And so if we have data that is relevant to the patient, that data has to move with the patient. And so that's a really key point that I think could lead to some very interesting research questions. So Ken, as you're taking notes there, if you could just add, making sure the data moves with the patient. Mark, I think- Go ahead. Mark, I think another thing that came up in session two, and I'm not really sure how to frame it in a research context, is scalability of some of the examples that were given; that was brought up several times. And I think that it would be important to capture that in the summary. Thanks, Carol. Yeah, scalability, sustainability. These are ongoing issues for a lot of what we do. We can be successful over the course of a four-year grant, but then it dies there.
And we need to make sure that we build in some aspects of this scalability, generalizability, sustainability. So thank you for that. Other comments? And I know we're going through this very rapidly. One thing I've learned in my decades of doing this is that not everybody processes information at the same speed, and this is really trying to react while we're inundating you with information. So these slides will be available, and I would encourage those of you who would like more time to process this to feel free to take that time and respond to Ken and myself asynchronously. We'll make sure we incorporate that as we pull everything together, 'cause we're certainly not going to have the report on this meeting ready by next week. So Mark, a comment on the scalability and sustainability piece. If I'm not mistaken, in eMERGE, at least in the early phases, algorithms were developed in one system and then tested in an independent system to be validated. And I'm just wondering if that same paradigm should be repeated here as part of the research agenda: if we're developing systems that are just unique to a particular place, it's not going to go very far, but if we can show generalizability and import them to other systems and demonstrate similar results, that would be very powerful. Yes. And of course that was an inherent feature of the network, where it was a requirement to be part of the network that you had to be able to do this. And so in some ways that was baked into the RFA for eMERGE. Now, that's not typically something we require; we do require dissemination plans and data sharing plans as part of investigator-initiated research, but how we might be able to take some of the imperative to have generalizability, and get away from just sort of a generic "well, we're going to publish some papers and put it in GitHub," so that we can actually have substantive examples of generalizability, I think would be a very interesting thing for NHGRI to consider.
Okay, let's move on to session three then. Takeaways from session three, and again, I think I've got two slides here. Informatics research for genomic evidence computing and genomic knowledge base construction to enable scalable, shareable, computable inferences of genomic knowledge and harmonization of practice guidelines. There's a lot to unpack there, but I think it emphasizes some of the things that we were just discussing. Research into novel workflows that diminish burden for primary care providers, tap into other healthcare workers, engage patients, and don't default to alerts and reminders. So essentially, innovation in how we actually present this information in a clinical workflow. Studies to ensure that new technologies don't exacerbate health disparities, and ideally actually reduce health disparities, which might be a more positive way to frame that. An educational and policy research agenda to reduce barriers and improve knowledge for patients and providers. So in other words, research into educational strategies and policy implications that may raise barriers for implementation could be a potential research agenda for NHGRI in this space. Next slide. I stole this from Chunhua, because it was so beautiful. The blue text is my modest addition to the slide that she presented in her talk. I just love this idea of the socio-technical strategies for success. Informatics research for genomics: that looks very similar to what I just read, so I'm not gonna read it again. Harmonize the interests of multiple stakeholders to facilitate team science and implementation science. Incentivize collaborations to foster research on a learning health system for genomics: what is new evidence, who is affected, and what needs to be done? And clinician- and patient-centered design of workflow. So thank you, Chunhua, for providing that. So, comments about the takeaways from session three?
So Bob noted in the chat, as Bob always notes in every setting, that this depends on data representation and standards, which is absolutely true. The question then is, is there a research agenda on data representation standards? And we do get into this a little bit in sessions four and five, about ways that we can actually do research around data representation standards, as opposed to saying what we always say, which is that we need standards around this area. And then there certainly is a research agenda around knowledge delivery of complex data for decision-making. Mark, just on that first point, what I was trying to get at is the first point on the previous slide mentioned research into genomic knowledge bases and specifically the construction of them. My point there was that constructing new knowledge bases ad hoc with their own localized specialized models will actually hinder interoperability. And so the new knowledge bases that we are looking for here really should be preharmonized and standards compliant whenever possible. Great, thank you, Bob. Also, two people have mentioned, on the last sub-bullet here, more human factors research. And then Teri, you had your hand up. Sure, I just wanted to make the point, on your previous slide, you talked about making sure that new technologies don't exacerbate health disparities, and we hear this a lot and it's a very important point to make, but it seems to me that clinical informatics could be key in reducing health disparities that we currently have in genomics in medicine. So why not try to be a little more proactive about that? Yeah, and when I read through that, I tried to fix it on the fly, which is to say, change that from trying not to exacerbate to actually reducing health disparities. And I think the other point you made in the chat was that these disparities exist across many more areas.
I captured this one specific to session three, but in the overarching goals, I tried to be a little bit more global about the idea that we really need to be looking at these types of disparities and biases in all areas that fall under this topic area. Pat Deverka mentions economic benefits. Again, I think we're trying to capture that in the overarching goals around value, and by having different stakeholders, we can also accomplish that. But certainly, the NHGRI is already funding some economic evaluation research under the auspices of the LC program. So that would be important. And then FAIR principles were mentioned, which is something that should be incorporated. And then I see, Guilherme, you have your hand up. So go ahead. Yeah, on the health disparities issue, one point to consider is the technologies that we expect patients to use in certain interventions. It's important to keep in mind, for example, that I think there's Pew research showing that among the underserved populations in the US, 25% of the people do not have access to the internet or a smartphone. They only have text and voice. Oftentimes you come up with cool things like chatbots and patient portals, and there's a large percentage of that population that won't have access. Yep. Yeah, I didn't explicitly include that in a sub-bullet, but certainly that digital divide, technological divide, is an important thing to consider in health equity and disparity. Yeah, another point is insurance coverage. We had some barriers in trying to implement some of our interventions with the underserved because, for example, if they test positive for BRCA, insurance won't be able to cover costs of aggressive cancer screening like breast MRIs.
Yeah, and that's where we get into the real, I guess I'll use the term even though I really don't like it, slippery slope in terms of where you can actually have impact in a research program, as opposed to acknowledging the idea that we need to identify how the things that we think are going to improve care could actually unintentionally exacerbate some of these disparities. And so the need to capture exactly that type of information, to say, hey, we facilitated all this stuff relating to identifying people at risk, but in our healthcare system, a lot of people don't have access to the care that they need to actually act on the information that we're providing, in which case, have we really done them a service? Other comments regarding session three? Some really good discussion here. So Jay, go ahead. Just one point: more research involving other modalities for decision support. One of the things that Jeremy brought up was chatbots, for example, and so there are a lot of other modalities that could be explored in novel ways. Yep, okay. And Chunhua, I just noticed you had your hand up, so go ahead. Yeah, so I also think we need a policy in place regarding who should be responsible for delivering and acting on these highly relevant or clinically meaningful genomic results, because I feel there's still a lot of confusion there. Yes, and you've done a beautiful job of allowing me to segue into session four, because that's in fact one of the takeaways that we captured from session four. So why don't we go ahead and move on there, given the time. So session four, we had some takeaways: research on what constitutes a minimum data set for clinical care and research. I love the phrase, I think it was Suba that said, learn from less data; that's sort of a paraphrase of the perfect is the enemy of good enough. Oh, what happened? There we go, thank you Ken.
Then research at the interface of human cognition and artificial intelligence: how can we take the best of both? Include research into explainable AI to promote clinical implementation, and then research into the development and implementation of a common semantic framework to reduce reliance on manual curation. And do we have another takeaway slide there on four, Ken? Yes, research into data interoperability between clinical systems that is focused on the implementation of genomic medicine. I thought a very powerful theme that came out of session four was research based on specific use cases to support genomic medicine implementation through informatics, but engaging with the diverse stakeholders who were referenced earlier to prioritize which use cases really have the most resonance with them, so that we can make sure we're doing research on the most important questions and not end up with something that's cool but not relevant. So Mark, Mark, I made a mistake, let me switch something, that's actually- I was gonna say, that seemed different to me, but I was just going with it. Okay, here we go. I appreciate you going with the flow, but yeah, this is actually the remainder of session four. Yeah, okay. And then the next slide would be session five. So this is where the remainder of the session- Okay, great. So this is where the point that Chunhua had made comes in. Research into the development of a responsibility model across the EHR for patient access to needed information, and also, to capture what Chunhua said, understanding who within the healthcare system has responsibility. Now, I did have a question. It came up in the chat, raising the idea of using 80-20 use cases in genomic informatics, and we didn't have a chance to discuss that during the session. And Ken, remind me who had posted that first? Maybe I'm wrong, but Lisa, was it you who posted that?
If somebody wants to claim responsibility for that, I'd love to understand a little bit more about how you're thinking 80-20 use cases could be useful. I think I posted that, Mark. That was in the context of HL7 standards development, comparing HL7 version 3 with FHIR. Yeah, the idea is to identify the cases that cover most of the practical, readily available clinical genomics, things that we could do today, and that could be covered by 20% of the standards. Gotcha. Yeah, so this is the thing that we frequently end up with: we develop a strategy to address a problem and we can take care of 80% of the problem, but we delay ourselves because we spend a lot of time consumed by the edge cases that don't necessarily fit, whereas we could potentially move forward on the 80%, get that to work, and then go back and try and pick up the edge cases as best we can. Is that a fair restatement? Yeah. Okay, I like that. Thanks. Okay, other comments or questions about session four? Mark, this is Nephi. I just wanted to make sure that somewhere, and I'm not sure of the best way to address this, we better develop regulatory frameworks to open up the use of genetic data. And I'm not sure exactly how that fits into the research world, but that's sort of a need that needs to be addressed, I think. Yeah, I think that we mentioned that in the prior session, but it can be refined here: it is within the purview of NHGRI, and they have certainly done this before, to research regulatory frameworks and how they either facilitate or, in more cases, impair implementation. And so that type of research would certainly be within the realm of consideration for NHGRI. I just, for balance's sake, Ken, in your comments, just say how they facilitate and impair research. Because one might assume that there are both; GINA could arguably be said to maybe facilitate, although there would certainly be people who would argue with me on that point. Yeah, this is Teri.
I think it's not only how they may facilitate or impair research, it's how they would affect clinical care, and maybe "affect" is a more benign way of saying this. But certainly, I think we've seen with pharmacogenetics that the reports that we get now, that basically give you genotypes and nothing else, are meaningless to patients and clinicians. Right, so that's a really good point. So, affect research and clinical care, yeah. And Chunhua just made a point in the chat that when we use AI, we may be referring to it more as augmented intelligence rather than artificial intelligence. That would certainly be an important distinction to make as we throw around that term. Other comments or questions about session four? Do you wanna say something about AI and its potential to exacerbate disparities? Yeah, I think we can definitely fold that into the point that we made previously about technology and just make sure that when we think about technology, we also include things like AI, machine learning, et cetera. Again, this gets at whatever the difference is between those two words. Well, yeah, it depends on what you're trying to get funded, I think, for the most part. But it also relates to the point that was made in session one about inherent biases in algorithms, that they don't in fact represent unbiased or naive approaches to data. And Casey made a point about smart contracts, which Gil had raised in his talk, that have both policy and technical implications, and that that's one way to address a responsibility model. And Mark, I have my hand raised if you... Okay, I didn't know if it was raised from before or if you had another comment. Somebody had to tell me because I can't figure out how to do it. So, you had made an interesting point about synergizing human and artificial, or augmented, intelligence, and there could be an interesting research agenda around how to do that in genomic informatics. I don't know what it would be, but there might be.
And I'm not sure if it's this session or the next session that it belongs in. Yeah, I know I captured something; it may not have been stated exactly as you said it. So go to the prior slide, slide 10, Ken, I think. Oh, the interface of cognition and artificial intelligence. Yes, yeah, that was the one, right. And again, I missed it. Well, geez, I don't know, we've been going through it at the speed of a FedEx guy reading the side effects in a medication ad. Right. All right. It might be a little bit clearer as "interface of human cognition and artificial intelligence," but anyway. Yeah, okay. So we'll make sure to flag that and clarify it some. Chenhua, I'm going to get to you in just a second. Jeff Ginsburg also mentioned telegenomics; we certainly didn't mention all of the different technologies that are potentially available. I would certainly think that telehealth and telegenetics would be within the realm of technology augmentation for the research agenda, but we'll make sure that's included. Chenhua. Yeah, so I just want to add a little more about that responsibility model, because I feel that in genomic medicine, a lot of delayed diagnoses or misdiagnoses are actually due to poor coordination between primary care and geneticists as specialists. In clinical informatics, Leah has been researching care coordination for a long time. I'm thinking that for genomic medicine, we need better coordination among all these different types of care providers, between specialists and primary care. I think that's actually critical, and if we can have better support between primary care and specialists, geneticists or genetic counselors, we can potentially shorten the time to diagnosis and have better patient management. I just wanted to point that out.
Yeah, that's a very important point and certainly an area of critical importance to informatics research, and as we think about our rare disease patients in particular, this is a major issue. I think it was the Undiagnosed Diseases Network that noted that a review of medical records alone resulted in a diagnosis in somewhere between five and ten percent of the patients who were referred to the UDN. In other words, the information for the diagnosis was there, but it had just never been communicated to the primary care physician or to the patient and their family. So it is a very critical point to include in a research agenda: as we talk about data and interoperability and all the technical things, there are these human factors related to handoffs, coordination, and communication that also have to be considered. Okay, let's move on to session five. The takeaways here: research the data interoperability between clinical systems focused on the implementation of genomic medicine; develop research on specific use cases to support genomic medicine implementation through informatics, and prioritize those use cases based on diverse stakeholder input; coordinate and synergize research findings with the broader health IT community, in other words, make sure that research findings and research standards that are developed are shared with groups like US Core, HL7, GA4GH, et cetera; and then facilitate the last mile of clinical implementation, identify what's ready, and support implementation research around it. Other takeaways from session five? Hey, Mark, this is Jeff. This may not rise to the level of the kinds of things that are on the slide, but I did want to reemphasize Sandy's call to action to convene the leaders of systems, the C-suite people, to be the end users of our research and to help us make sure that our research agenda is moving toward their needs.
Yes, I explicitly listed the C-suite people on the overarching theme slide, so. Oh, okay, sorry. You know, it's okay. Again, we're going through this quickly, so it's easy to miss stuff, and I didn't capitalize the C in C-suite, so easy to miss. Either we did a phenomenal job of capturing it or we have completely exhausted the group. Oh, I can always count on Bob, so go ahead, Bob. Mark, could you say something a little bit more about point three there, coordinate and synergize research findings? So this was the point that came out of our discussion around implementation guides. I think Ken was talking about how we generate certain things in the course of projects that we do, but we don't necessarily pass off the knowledge around some of the standards to the standards organizations. And so I raised the question about whether it would be appropriate for the NHGRI to say, hey, by the way, if you're going to be doing this type of research, you should also be creating implementation guides as part of a dissemination plan. And you and Ken thought that was not a great idea, because the standards world is a pretty arcane group of specialists, but I think there still is the need to make sure that new things that come out of research that have implications for standards somehow get passed to that specialized community so that they can take advantage of the work that's been done. So that's what I was trying to capture there. Okay, so thank you for clarifying that. I certainly agree with what you just said. I didn't derive all of that from the bullet point, but I like what you're saying, and I would support adding some semantics around that, something that basically helps to close the loop between the research use cases, the development of standards and the support of standards for those use cases, and then the downstream adoption and utilization of those standards for the next set of research.
Yeah, and as you said that, Bob, it really emphasized to me that this is a two-way street. In other words, the research group that's working on a specific use case needs to make sure that they are aware of and use any extant standards rather than creating new standards that aren't needed. So that's the front-end piece, but on the back end, they may find some gaps in the existing standards that they can fill, and then that needs to be handed back. So there's a little virtuous cycle in the standards world that should be reflected. That's right. And in fact, I have a slide on exactly that: standards development is not linear, it is iterative and circular, following the pattern you just described. Great, so if you wanted to fire that off to Ken and me, who knows, it may just end up in the summary somehow. Marilyn, I saw your hand up. Yeah, thank you, Mark. One point that came up earlier, but came up in this session as well: in bullet two, in talking about the use cases, I wonder if we should be explicit that some of those use cases involve the reuse of genomic information, where, for example, an exome ordered for a Mendelian condition gets reused for pharmacogenomics, or a pharmacogenomic panel gets reused for a PRS or a Mendelian gene. Just so that we don't end up siloed, where we have implementation use cases in each of those spaces but haven't actually done the use case of reuse or integration across those different types of genomic medicine content. Great, that's a good point that we can definitely add. And then Jeff added in the chat that we did see that learning healthcare system diagram; the left-hand side of that diagram is the basic science.
And so even though this workshop was focused on clinical implementation of genomic medicine through informatics, we don't want to forget, as a sort of superset of what we were just talking about in the standards world, that there's going to be additional knowledge generated by the implementation of genomics, and that will generate new questions that are relevant to the basic science community. So how can we create that virtuous cycle to make sure that that information gets back into the hands of basic science researchers to facilitate their work? It wasn't something that we really had specific time set aside to discuss, but I think if we're going to use that diagram as one of the anchors for our report, we definitely need to say this is an important piece that should also be addressed. Other comments or questions? Okay, we are on time. This was fantastic. Obviously now your job is mostly done, and the organizers will have a lot of things to sift through to pull this together. As for next steps, all of the presentations, the video and everything will be posted to the genomic medicine meeting site, so that will all be available to anyone who is interested. So if there are people you think are interested who weren't able to attend, please point them in that direction. We will be looking to generate a final report for the group that we will circulate for additional feedback. That will be used for planning purposes within the NHGRI. However, we're also looking forward to developing this as a manuscript for publication. The challenging thing with these manuscripts is that most journals are not looking specifically for a meeting-proceedings type of publication, so we'll need to reframe the information we pulled away into more of a research-type article. Those of us who have been involved in these meetings have done that more than a few times.
We generally offer the opportunity for authorship to the group that was involved in organizing the meeting and also to the presenters. So we'll circulate any manuscript around to the group. The expectation would be that if you want to be an author, you would need to make a substantial contribution, which at this point would include a critical review, comment, and approval of the final manuscript before submission. And there are a couple of you on the invite list, and I don't know if you're still on the call or not, who actually serve on editorial boards. So if any of you are thinking, gosh, this would be a great article for my journal, please let us know, because we would love to submit this to a journal that would have a specific interest. We're thinking the primary journal we'd like to go after would be a high-profile informatics journal, since that was the primary focus of this meeting. But we're open to other alternatives that people might suggest. That is it from my perspective. Ken, any other final actions or closeouts before we call it a day? Other than, I just want to say thank you to the Corning Center, Pamela Williams, Teji and the NHGRI AV people for pulling this all together for us. Thank you so much. This would not have been successful without your input and hard work. And our analysts, Madison, Ana and Lori, were also helping, you know, with some of the logistics. So thank you so much. Yeah, thank you, Ken, for mentioning that. As big and complicated a meeting as this was, it was about as seamless from a technological perspective as I can recall. After doing a year of this, that's really delightful. And again, I want to thank all of you who were co-moderators, presenters and, most importantly, participants for all your input. That made the meeting really worthwhile. And special thanks to Ken; he's been involved in planning for this meeting on and off for, what, about two years, Ken?
Yeah, and some of the co-moderators and speakers were also with me on this. This has been almost a two-year effort to get this going. So thank you so much for putting up with me to help move this forward. Very good. Enjoy the rest of your week, and we look forward to hearing from you down the road. And at some point, we'll actually be able to see each other. One more thing, Pamela, how do you... In three dimensions. How do you want people to access the slides? Is there a process you want them to use? Yes, actually, I was just getting ready to chat that out to everyone. You can email me. You all have my email address, I know, because I hounded you for like a month and a half. Yeah, they're probably going to call the cops on me. But you can just email me and I will share that particular folder with you, or invite you to share that folder, and that should give you everything you need. So, Pamela, I was wondering, instead of that, would it just be easier to send it out to the audience? Because you have the email list of everybody, could we just... I mean, I could try sending it to them all, unless they're asking for just one, but they're very large files and I'm not sure that Duke will allow me to do that. I know I've gotten a couple of slide decks from people that they've had to send via Dropbox or something because they were too large to email. So I definitely can try that, Ken, that's a very good idea. I defer to your judgment. That's fine. No, I wouldn't do that if I were you, but okay. Well, that's it. We can also... This is Teji. I will also say that, as in the past, the NHGRI website for this meeting historically hosts the PowerPoints and the presentations; they link them to that website, so we can also direct people there. Yeah, that might take a couple of weeks, though. Okay, yeah, and I do have to get with Macaulay anyway, and I can discuss that with him, so maybe he can give me a timeline. But yeah, thank you, Teji.
We'll definitely figure it out. All right, we'll see you all. Thanks, everyone. Thank you. Have a good weekend, everyone. Thanks for doing this. Yep, thanks. Bye.