So, we have 135 now, so we can get started. Thank you, everyone. We'll start our session five, which is on genomics in a fragmented healthcare environment. Our co-moderators for this session are Dr. Freimuth and Dr. Geoffrey Ginsburg. Dr. Freimuth is an assistant professor of biomedical informatics at the Mayo Clinic Center for Individualized Medicine. He is also the co-chair of several work groups, including HL7's Clinical Genomics Work Group and the GA4GH Genomic Knowledge Standards Workstream. Dr. Freimuth's research program focuses on developing scalable methods, tools, and infrastructure that facilitate the translation of genomic data to clinical practice, with an emphasis on knowledge management and delivery as well as the development of data and terminology standards. Dr. Ginsburg is the founding director of the Center for Applied Genomics and Precision Medicine at Duke University Medical Center and of MEDx, a partnership between the schools of medicine and engineering to spark and translate innovation. His research addresses the challenges of translating genomics and digital health information into medical practice and integrating precision medicine into healthcare. He was a member of the advisory council to the director of the NIH, co-chaired the National Academies Roundtable on Genomics and Precision Health, and is also a founder and president of the Global Genomic Medicine Collaborative, a nonprofit organization aimed at creating international partnerships to advance the implementation of precision medicine. Dr. Freimuth, Dr. Ginsburg, I'll turn everything over to you for the introduction of the panelists. Thank you very much, Ken. As Ken just described, this session will explore some of the challenges and opportunities related to the integration into and use of genomic data within a fragmented healthcare environment.
This figure is from the NHGRI Strategic Vision paper that was published in Nature last October, and I think it's been shown at least twice already during this meeting. It illustrates the connection between basic genomics research, which generates large data sets from which new knowledge is derived, and the translation of that knowledge into a nascent genomic learning healthcare system, which can build on foundational understanding at the molecular level to enable a deeper understanding of the impact of genomics on a patient's health and the use of those data for clinical care. For this virtuous cycle to be possible, research is needed to address critical gaps and limitations within clinical electronic systems. Next slide, please. We have an august panel of speakers for this session. Marylyn Ritchie will discuss some strategic opportunities and challenges to using genomic data in a clinical environment. Sandy Aronson will outline a complementary set of barriers to information exchange, and Ken Kawamoto will identify challenges and provide recommendations for scaling solutions through interoperable standards. We will have time for only a short question or two after each presentation, but we will have a more in-depth panel discussion at the end of the session. Our first speaker, if you could bring up her slides please, is Marylyn Ritchie, the director of the Center for Translational Bioinformatics and associate director for bioinformatics in the Institute for Biomedical Informatics at the University of Pennsylvania School of Medicine, and she is the associate director for the Penn Center for Precision Medicine. Dr. Ritchie's research focuses on developing novel approaches for understanding the relationship between our genome and human phenotypes. She has an extensive bibliography of more than 350 publications, and she has received several awards and honors, including fellow of the American College of Medical Informatics. Marylyn, the floor is yours.
Thank you so much for the invitation to participate in this wonderful meeting. Before I get into the meat of what I want to talk about, I should start by acknowledging two biases that are part of my personality and my view on this topic. One is that I am an optimist. While there are a lot of ways in which using our genome throughout our lives is challenging and could be problematic, I'm talking about this and thinking about this from an optimistic view: I think this is something we should be doing research toward, and obviously the rest of us think so too, or we wouldn't be participating in this meeting, but I'm going to talk about the highlights of why we should be doing this. The other is that some of the points I'll make do not take into consideration the regulatory elements around these viewpoints, or some of the ethics; there really are nefarious things one could imagine being done if certain things happened and we built this infrastructure for our genomic data to be available throughout our lifetime. I'm putting those things aside, because what I want to talk about is the research agenda and the research we could be doing in this space so that we could move this forward and use our genomic sequence data throughout our lifetime. I recognize fully that there are ethics and regulatory challenges, but that's not what I want to focus on today; what I'm going to focus on is the research we could do to enable this to happen. Next slide, please. So, once upon a time, we would use genomic information and genetics in the clinic in the context of someone presenting to a physician with some sort of illness or disorder; genetic testing was ordered so that the physician could use that genetic information for a diagnosis or to develop a treatment plan. Next slide, please.
Where we're moving now is toward a place where our patients have genetic information when they show up in the clinic, whether they obtained it through a direct-to-consumer vendor and quite literally walk in with it, you know, a printed report or the data on a thumb drive, or the data are being aggregated because testing was ordered for something somewhere else and the data simply exist. So we have information on patients, and I think we're moving to a future where we could have data on every patient stored somewhere in the cloud. And instead of using it only at the time of diagnosis, perhaps we could be using it to develop prevention plans for our patients, or screening paradigms, and if they then present with an illness, we can use it for diagnosis and treatment. When we have this information accessible to the healthcare system, we have the opportunity to identify patients who are at risk of developing certain conditions. We could do deeper evaluations or deeper phenotyping of those patients based on their genetic risks. We certainly could imagine a time when we could identify patients to participate in genotype-guided clinical trials. I think we're seeing a lot of evidence that, for particular genotypes, there are medications that work better or more effectively than they do in other patient populations. So we could do genotype-guided trials if we had access to the genetic information. And we are starting to see that we're returning information to patients about their risks of either developing disease or having adverse drug reactions. In some ways, and at some healthcare systems, this future is now, but this is the future we want to get to. Next slide. There are a number of ways we can imagine using our genetic information for healthcare throughout our lifetime. Certainly the Mendelian disease risk genes.
Many of those are used in children as part of newborn screening. Many are also used later in life, as we start to develop the later-onset conditions that happen in mid life and beyond. That information would be useful to healthcare providers. When a patient presents at mid life with a condition, being able to couple that phenotype with their genotype to actually make a diagnosis would be incredibly useful. This point was, in some ways, brought up in the last session, in the context of Heidi Rehm's question: what if someone had exome sequencing to see whether they had a particular variant in, let's say, a cancer gene? Perhaps they didn't have that variant, but perhaps they do have a variant that is important for epilepsy, or a cardiomyopathy, or an arrhythmia, or Lynch syndrome. Or perhaps the testing was looking for breast cancer genes, but it identified something for a different type of cancer. That information isn't necessarily returned to the patient or the provider at the time, but years later, as the patient presents with a phenotype, and I think Lisa made this point in the chat, we want to be able to marry that phenotype with the genetic information we already have. We need to have those data stored in the EHR and accessible in a way that a patient and a provider can understand. Pharmacogenomics is the example most near and dear to my heart: that genetic information is useful when a patient needs to have a drug prescribed, but we would need to have the information in the electronic health record, or otherwise available to the provider and/or the patient, in order to use it. And where we're heading in the future is polygenic risk scores. I think there are a few traits for which these are being used in the clinic currently, but this is an area that will continue to expand.
And if we had the genetic information accessible across all of our care providers, they could use it to estimate what we are at risk for, especially as we accumulate additional medications, comorbidities, and environmental insults throughout our lifetime. Those risk scores will get more and more predictive, and be more useful, if we have access to them. Next slide. So I think there are a number of opportunities for clinical informatics to conduct research in this space so that this can be a reality. One that is ongoing is that EHR, or electronic health record, systems are starting to build capacity to store genomic information as structured data. I'm not the first person at this meeting to say this, it has come up before, but this really is key. The variant data have to be structured such that they're machine readable, not just the PDFs, you know, that we've produced for many years. One example of this is the Epic genomics module. I should disclose, I guess, that I have no interest in Epic, I hold no Epic stock; that just happens to be the EHR vendor the University of Pennsylvania is using. Next slide, please. We've been working very closely with Epic to use the genomics module they make commercially available in the context of our system, which is called PennChart, by building out this precision medicine tab. The screenshot you're seeing I'm using with approval from Epic, so this is a view of the EHR with the different tabs in our system. The precision medicine tab is one we built over the last couple of years that captures the genomic variant files, which enable you to have structured content. We've actually built this out with at least one molecular lab, and I think we're working on a couple of others right now; with the Ambry testing lab, we have it built out so that they can return results as discrete elements, using HL7, directly into our Epic chart.
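To make the "structured, machine-readable" point above concrete, here is a minimal sketch of how a single variant result might be represented as discrete elements, loosely following the HL7 FHIR Genomics Observation pattern. The field names, gene, and codes are illustrative assumptions, not Epic's or Ambry's actual payload:

```python
# A hypothetical structured variant result, sketched as a FHIR-style
# Observation with named components (illustrative values only).
variant_observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"text": "Genetic variant assessment"},
    "component": [
        {"code": {"text": "Gene studied"}, "valueString": "CYP2C19"},
        {"code": {"text": "DNA change (HGVS)"}, "valueString": "c.681G>A"},
        {"code": {"text": "Clinical significance"}, "valueString": "Pathogenic"},
    ],
}

def get_component(obs: dict, name: str) -> str:
    """Pull a named component value out of a structured observation."""
    for comp in obs["component"]:
        if comp["code"]["text"] == name:
            return comp["valueString"]
    raise KeyError(name)

# Unlike a PDF report, downstream software can query this directly.
print(get_component(variant_observation, "Gene studied"))           # CYP2C19
print(get_component(variant_observation, "Clinical significance"))  # Pathogenic
```

The point of the sketch is simply that once the result is discrete, a decision support rule or a reanalysis pipeline can read individual fields programmatically instead of a human re-reading a PDF.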
And genomic indicators are the knowledge: the interpretation of that variant in the context of the clinical data. The genomic indicator is what we use to build the clinical decision support. We just published a paper on this in December in Genetics in Medicine, and I'm happy to put the URL in the chat when we're done. Next slide, please. So, the other opportunities, before I talk about the challenges: with clinical decision support, we can disseminate the relevant information to providers so that they can use it. I think there's enormous potential in clinical decision support if we do it well, specifically for using it outside of medical genetics clinics. You know, the medical geneticists are trained to think about this; they're the one group of providers who actually do get this content in medical school. Many other providers don't, and so we need to train them. One thought is leveraging the CDS systems, and leveraging Epic if we do it well, to teach providers what content is relevant to them. It becomes very important not to have too many alerts, because we all hear about alert fatigue, so we can't alert them to every relevant pathogenic variant in a patient's chart, only the ones that are important in the context of the care that provider is providing. Health information exchanges are another opportunity, where relevant genetic information can be shared across health systems. That is one mechanism by which information could follow patients wherever they go. I think Sandy is going to talk a little more about this in the context of genomics; I will say there are limitations and challenges to that, which I'll talk about in just a minute. And as we gain more knowledge, we need to be able to push that information forward. Next slide. So, the challenges.
There are many. For clinical decision support, we need to build the genomic indicators such that they can be updated regularly and such that providers have them in a language they understand. For health information exchanges: currently, genetic information is not uniformly shared, and not all systems, especially depending on the vendor, can access that information. But we do need a way to leverage these EHRs and have the content passed from system to system, because, as was mentioned earlier, some patients get specialty care at one clinic and primary care at another. And then the genome annotations: as we learn more, how do we make sure that information is pushed to the people who need it? The last challenges were actually mentioned in the survey, so many of you raised these specific points. There are compute limitations around the costs of storage and egress from the cloud. Those may only exacerbate health disparities if we don't develop approaches and mechanisms that allow us to use these data uniformly, whether you're at a community health system or elsewhere. And then there are the ontology standards around the phenotypes and the genomic annotations. You know, we have lots of standards, but we need to do the research that allows us to determine which ones are most adoptable across the different systems, and we have to get away from only the academic medical centers: these have to be adoptable at our rural clinics and at our community health centers. Next slide, which is my last slide. Just to summarize, I do think there is a lot of positive research we can do by leveraging the vendors and the tools they're building, but doing the research to figure out: how do we get this to the providers in a way they can understand? How can we get the annotations updated regularly? Informatics can do this. We just need to develop the right paradigm to make it happen in a timely way and make it accessible to all.
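The alert-fatigue concern raised here, alerting only on indicators that matter in the context of the care a given provider is delivering, can be sketched as a simple context filter. The indicator names and specialty mappings below are invented for illustration; a real CDS rule base would be far richer and regularly updated:

```python
# Hypothetical genomic indicators, each tagged with the specialties
# for which an alert is actually actionable (illustrative only).
GENOMIC_INDICATORS = [
    {"indicator": "CYP2C19 poor metabolizer",
     "relevant_specialties": {"cardiology", "primary care"}},
    {"indicator": "BRCA1 pathogenic variant",
     "relevant_specialties": {"oncology", "medical genetics"}},
    {"indicator": "MYH7 cardiomyopathy variant",
     "relevant_specialties": {"cardiology", "medical genetics"}},
]

def alerts_for(specialty: str) -> list:
    """Return only the indicators relevant to this provider's context,
    rather than firing on every pathogenic variant in the chart."""
    return [
        ind["indicator"]
        for ind in GENOMIC_INDICATORS
        if specialty in ind["relevant_specialties"]
    ]

# A cardiologist sees the pharmacogenomic and cardiomyopathy alerts,
# but is not interrupted by the BRCA1 finding.
print(alerts_for("cardiology"))
```

The design choice is the key point: filtering happens on the decision-support side, so the full structured record stays intact while each provider sees only what is actionable for them.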
Those health disparity issues will only get worse if we don't find mechanisms and approaches to do this that are low cost and accessible to all. And with that, I will stop, and I'm happy to take questions in the discussion later. Thank you. Thank you very much, Marylyn. Appreciate the talk. We have been taking some notes here. We've got a couple of questions that have come in through the chat, and Jeff and I are teeing some things up for the panel discussion at the end. We do have just a couple of minutes here for a couple of questions before we move on, and there are a couple of things from the chat I'd like to highlight. Could you comment a little further on the University of Pennsylvania's implementation? Does the precision medicine tab you're building out there work for clinical trials, and to what extent does it support reanalysis? That's a great question. So I think the short answer is yes, it could be used for clinical trials. Currently, it's mostly being used in the context of clinical genetic testing. Within that tab, a physician is able to order a test, and currently they are ordering mostly from medical genetics clinics, but we have some ongoing research to figure out how we get non-medical-genetics specialties to do those orders. So we are starting to think more about specialty clinics and primary care. And then for the second part of your question, the reanalysis: because the data are structured, they can be used for reanalysis. We are doing a lot right now to analyze not only the variants that are in there, but also the clicks that are happening through that tab, to try to learn how well it's being adopted. So we're looking at the results and how they are using those results. And based on that, actually, one of the things we've just built, which was just released this month, is that once a provider orders a test and the result comes back.
Instead of only notifying the provider, it notifies the provider group, and someone from that group goes into a kind of holding place to look at the result and the interpretation and then make a decision: push it into the patient view, so it also goes into myPennMedicine; defer it, so they can have a conversation, you know, between the genetic counselors and the medical geneticists; or not push the result forward at all. So perhaps there wasn't anything conclusive, and rather than push it into the chart and confuse the patient, they keep it and do additional testing before it gets pushed to the chart. We're starting to monitor that so we can do research on it and learn what's working and what's not. Right now, as I said, it's mostly being used by medical genetics, although we have seen a few orders from neurology and the hearing loss clinic. It's mostly been medical genetics, but we want to learn how to get it out of the medical genetics clinic; they know what they're doing, and we need to get this information to the other providers who need it as well. That's great. Thank you, Marylyn. Given the time we have, I think what I would like to do is reserve the rest of the questions that have been coming in for the panel discussion, so we will come back to as many of those as we can. As I mentioned, Jeff and I are carefully trying to record all of these, so please keep the questions coming, and we will get back to as many of them as we can. The next speaker for today is Sandy Aronson, who is the executive director of Mass General Brigham Personalized Medicine information technology. His teams are focused both on supporting clinical genetic testing and on using technology to improve clinical processes, partnering with clinical leaders to redesign clinical workflows to reduce cost and improve outcomes.
He is the founder or co-founder of several companies, including GeneInsight, which provides him with a dual perspective on challenges and solutions for genomic medicine. Sandy. Thanks, Bob. Can you hear me? Yep, you're just fine. Thank you. Great. I'm really enjoying this conference, and I hope to add, in a small way, to some of the concepts that have been brought up, while talking about why the heck we do not yet have a broad clinical genetic information exchange, and more importantly, what we have to do to make sure that when we do have one, it will help rather than harm health equity. Just a quick disclosure: my employer does get royalties from sales of GeneInsight, which does include some health information exchange infrastructure. This is a theme that many of us have been focused on, I'd say, since about 2005. The dreams surrounding genomic information exchanges go something like this: what if all of the clinical laboratories of the world could send structured genetic reports into an information exchange that would route those reports to any of the provider organizations that may order their tests, to deliver them in structured form into the patient record, where they would be combined with other structured clinical data? And not only that, but the world's knowledge repositories would also be integrated into this exchange, so that whenever there is a change in variant knowledge or a change in gene knowledge, it would be sent through the exchange, delivered to the provider, and applied appropriately to the patient record. Those things would come together, and we would develop clinical decision support that delivers the power of genetic knowledge right to the point of care, improving care, but also giving us the ability to constantly learn from this combination of genetic data and clinical data.
So that as we improve our genetic knowledge, that knowledge would flow back to the genomic repositories and be distributed through the genetic knowledge exchange throughout the world, thus sparking the continuously learning, genetics-aware health information system and delivering the greatest improvements to human health that we've ever seen. That's the dream. So what's the reality? The reality is that, to the best of my knowledge, every one of these exchanges that exists now and lives in the clinical world is small. And why is it small? It's small because every time an institution, a stakeholder, wants to connect to one of these exchanges, there is a cost: a cost to set up the connection and a cost to maintain the connection. Vendor infrastructure, standards, and best practices can all help a great deal, but there is still always going to be a cost. And the reality is that the number of institutions, if you include physician practices, is in the hundreds of thousands. The only way we can ever fund this is if those institutions, en masse, start to decide to invest their own dollars in connecting to these exchanges. So what do they need to believe in order to do that? Well, first of all, they need to believe that it's going to be technically feasible for them to do so, and I think eMERGE III provided a great deal of evidence that yes, this is technically feasible. They need to believe that this is going to deliver clinical benefit to specific patients: that if they invest in this IT infrastructure, it's actually going to change the decisions they make, and that's going to improve care for patients. Lots of work has been done on that. But, as others have mentioned, I think the biggest hurdle here is financial viability: if they make and maintain this investment, there has to be some other source of dollars that's going to come in.
That's going to pay for it. And once they've gotten over all of those hurdles, they have a potentially viable project, but that project needs to be prioritized against every other potentially viable project that is a priority for them. So what does the value proposition for that prioritization look like? I think if you look at it for clinical laboratories, there's a strong argument; it's always hard to free up dollars, but there's a strong argument for the laboratories that it's going to make their tests more powerful for their customers and differentiate them relative to any other laboratory that is not on the exchange. As for the knowledge repositories, I'd argue that their fundamental purpose is disseminating knowledge, so I think there's a strong value proposition for them too. The challenge really lies, or I think the largest challenge lies, on the provider side, in the value proposition for the providers. Perhaps the most important point I'd like to make is this: a lot of us are from institutions that may have very narrow margins, but that do have margins, and are larger. We can invest in cutting-edge things that don't have an immediate payback. But think about institutions that are operating on no margin, or negative margin, and having to decide whether to invest in this kind of infrastructure. They can only do it if there is an immediate source of dollars that is going to be generated as a result of that investment. So if we can't deliver that value proposition for them, we're not only creating a situation where the patients of that institution won't be served.
We know that the clinical use of genetics often drives a great deal of the research use of genetics, so we're also excluding that patient population from certain forms of research and increasing the risk of that research being less equitable. So the cost, the hard economic math of these value propositions, I think actually has a significant impact on equity, as many others have mentioned as well. What does the value proposition look like for providers? Well, there's cost, and there's value. Fundamentally, the cost comes down to this: you have to pay for establishing the IT infrastructure, you have to pay for maintaining the IT infrastructure, and you need to pay for establishing and maintaining the clinical processes, the changes to the clinical processes that are going to use that IT infrastructure to actually deliver the value you're seeking, the new clinical services or capabilities you're seeking to deliver. What does the value side look like? What I find in my role is that the people who actually control the spend of clinical dollars all really want to do what's best for patients. They really want to consider things like what will reduce the total cost of care for that patient over time, but the actual numbers they have to make pencil in order to make their budgets work often come down to things like patient acquisition: will providing this service lead to more patients coming to the institution? And what will the effect be on either fee-for-service revenue or value-based care metrics, which get complex to play against each other? Within our institutions, grant revenue and things related to thought leadership will often come into play as well. But I think we need to keep in mind that for many institutions, those aren't drivers of dollars, and therefore this can bias us toward infrastructure that's more accessible for us than for other institutions.
So, what suggestions can I put forth for research? There are three. What I'd suggest is that it would be really useful to convene forums of the provider economic decision makers, to ask them what types of genetic services, what types of genetic use cases, would excite them enough that they would love to invest money in them so they could provide those services, and what value-relevant levers those services would have to move in order to generate that excitement. The people we would really benefit from talking to are people like CEOs, CFOs, chief medical officers, and chief marketing officers, both from large institutions with reasonable margins and also from small institutions, ideally with some representation across different margin levels. I think that would help us identify the types of functionality and the types of genetic services we could focus on that are most likely to be adopted, and then we can think about what types of research are most likely to give rise to the development of services and the killer apps that would actually drive people to spend money to implement them. That would then open up the possibility of really exploring: what are the clinical workflow and economic barriers to the IT support that folks most want in order to expand their clinical services and offerings? And this is a place where I think standards can really come into play, where we take the broad standards that are intended to be generally applicable and make sure they are extremely robust for the specific kinds of apps that we hope to make as cheap as possible for folks to implement. These types of things would be enormously difficult to execute on, but I do think they could help create a path toward a much broader and more equitable genetic information exchange infrastructure.
Thanks a lot. Thank you so much, Sandy. Great talk. Thank you for all those ideas. There have been a couple of comments here in the chat, and one question from Jeff. Jeff, would you like to chime in on this? Okay, great. That was a wonderful talk, and you answered my comment in the chat to some degree: it really is about defining the short- and long-term value propositions, as well as how this integrates with value-based care. But instead of asking that as a question, I wanted to really endorse your idea of bringing together the leadership that has to make the financial decisions, and also to include both leaders and Luddites. So we have the ones who have actually taken the risk and may have shown some ROI on the investment, and the ones who don't know where to start or why they should even be paying attention to this, and to have them among the stakeholders who are going to really receive the value of the research agenda. I think it's critically important, so I'm really happy you raised it, and I hope Mark and Ken will take it forward as a potential action item, to actually hold the convening you mentioned. Yeah, thanks, Jeff. I do think the fact that the value propositions will be different for different institutions has been a theme of this conference, and it's an important thing for us to consider, making sure we have that diversity in doing this. Thanks, Sandy. I was struck by your final points regarding the economics; the way I interpreted it was the need for carrots, in addition to sticks, to motivate that institutional investment. There is a parallel between those challenges and the ones our domain faced regarding the cost of sequencing prior to the development of next-gen sequencing technologies.
Do we need a similar revolutionary advance in clinical informatics methods and technologies, rather than incremental ones, to break through those challenges and get to the point where clinical genomics infrastructure and tools can be widely and cheaply deployed and integrated into clinical environments? My take on that is that clearly that would be very helpful, and it would be awesome if it occurred. But I think it's important for us not to focus just on that, because the other side of it is: what are the services that are going to, you know, sort of fit into the world as it is, as opposed to the way we would like it to be, in terms of generating enough value for institutions to offer them? And I think that comes down to understanding what kinds of genetic services we think we could enable institutions to offer that they could actually get reimbursed for; then driving down the cost of those services becomes critically important. We need to understand that there's a revenue model, in addition to understanding how we control the cost. Great, thank you. I think there's one more question that came in through the chat. I'd like to propose it to you, Sandy, and if it's something you'd prefer to defer to the panel discussion at the end, we can do that. She asks: how does the movement of patients into and out of health systems, depending on things like their job and insurance, affect the value proposition and the potential solutions? This is critically important, because I think we often feel that if we can reduce the total cost of caring for a patient over their lifetime, that should be attractive to providers, and providers want to deliver that value. But that value does not always accrue to the provider; it doesn't even necessarily accrue to the payer, or the dynamics are complicated relative to the payer.
So we need to find, and that's where I think we need help understanding, what services will generate a return in the current environment and will also deliver that kind of value to society. Thank you so much, Sandy. Our final speaker for this session is Ken Kawamoto, who is Associate Chief Medical Information Officer of University of Utah Health and Vice Chair of Clinical Informatics in the University of Utah Department of Biomedical Informatics. Dr. Kawamoto leads the ReImagine EHR initiative, a multi-stakeholder effort to enable standards-based, interoperable applications and software services to improve health and healthcare. He has developed some very impressive clinical applications and has earned recognition for his work. Dr. Kawamoto is also a member of the US Health IT Advisory Committee. Ken, looking forward to your talk; take it away, please. Awesome, thanks. Can folks see my screen? Yep, looks good. Excellent. Well, it's really great to be here. I think we've all been Zoomed out, but this has actually been one of the most interesting meetings I've attended in the past year. Thanks to everyone for organizing this; I think it's been great, and it's nice to see so many folks we haven't seen for a long time. It's kind of crazy, and Jeff was one of my key mentors when I was at Duke during my NHGRI K award, so it's great to be here. I'm going to talk a little bit about the standards around FHIR that I think are important and relevant here, then talk about some of our experience using these standards in individualized medicine, and touch on opportunities and challenges. I'm probably going to go fairly quickly so there's more time for discussion, but if anything's unclear, please just holler and I'll stop.
Just as disclosures: in the past year I have had honoraria, consulting, sponsored research, licensing, or co-development relationships with Hitachi, Pfizer, RTI, UC San Francisco, MDaware, and the US Office of the National Coordinator for Health IT via Security Risk Solutions. I was also, until recently, an unpaid board member of HL7, which develops a lot of the standards I'm going to talk about. I'm an unpaid member of the US Health IT Advisory Committee, and I've helped develop a number of tools which we may commercialize. Okay, so I'm going to start here: why are we even talking about standards? I think it's kind of obvious, but these are, at least to me, some of the main reasons. For discovery, there's a lot of value in having normalized data sets to be able to do science on, so I think that's really important. And for clinical care, we need to figure out scalable ways to optimize care at the patient level and the population level, and the desirable feature of standards is the opportunity to say: if we do it this way, it won't work just at our health system or just on the Epic EHR; it can work on the Cerner EHR, it can work on eClinicalWorks, it can work in federally qualified health centers. That's the idea and the hope of using standards. What's also important to note is that there are a lot of synergies to be achieved by using the same standards in the research and discovery realm and in the clinical application realm, because, as Sandy was mentioning, the data is being generated for clinical care, and if we use different standards they won't talk with each other. So I'm going to briefly talk through some of the relevant standards. For a lot of folks this is probably going to be too basic, but bear with me; I just want to make sure we're all on the same level. So, FHIR.
FHIR, Fast Healthcare Interoperability Resources, is an HL7 standard. It allows us to exchange health information, and the key point is that it's gaining rapid industry adoption. We've had standards forever; the key difference, I think, is that this one is actually being adopted by health systems and EHR vendors. It's also endorsed by NIH, which is always good for grant purposes, because NIH specifically endorses it. There are several versions, but really not too much to worry about there. And there are things known as IGs, or implementation guides. Folks often don't realize this, but you can use FHIR in a standards-compliant way to say: my car is a Toyota, it has six cylinders, my wheels are 19 inches, and I have the luxury leather package. That is a FHIR message you can create; it's that loose. What allows it to actually be interoperable are the IGs. And what's most important is that, while we proliferate a lot of IGs, the only one that is currently universally adopted, and actually enshrined in regulation as something that must be adopted, is the US Core IG. It's required as part of the US Core Data for Interoperability, so I would keep in mind that a key goal should be to get whatever's important to this community into the USCDI if possible. As I was chatting earlier, there's really nothing genomics-related in there, even at the Comment level. We can create all the IGs we want, but if we are the only ones implementing them, they'll have limited impact. SMART on FHIR is related to FHIR; it's basically a way to embed third-party apps into the EHR. This enables the kind of killer apps Sandy was discussing.
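To make that "it's that loose" point concrete, here is a minimal sketch of what a FHIR resource looks like as plain JSON. The Observation shape follows FHIR R4; the specific LOINC codes shown are illustrative assumptions, not something any particular IG mandates, and real systems should conform to an IG such as US Core or the HL7 Genomics Reporting IG rather than hand-rolling resources like this.

```python
import json

# A minimal FHIR R4 Observation for a genomic finding, built as plain JSON.
# Illustrative sketch only: the LOINC codes below are assumptions
# (69548-6 "Genetic variant assessment", answer code LA9633-4 "Present").
observation = {
    "resourceType": "Observation",
    "status": "final",
    "subject": {"reference": "Patient/example"},
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "69548-6",  # assumed: genetic variant assessment
        }]
    },
    "valueCodeableConcept": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "LA9633-4",  # assumed: "Present"
        }]
    },
}

def loinc_code(resource):
    """Return the first LOINC code in a resource's `code` element, or None."""
    for coding in resource.get("code", {}).get("coding", []):
        if coding.get("system") == "http://loinc.org":
            return coding.get("code")
    return None

# Serialize the resource as it would travel over a FHIR REST API.
payload = json.dumps(observation)
```

The job of an IG like US Core is precisely to constrain which of these elements must be present and which code systems are allowed; without one, almost any well-formed JSON of this general shape counts as "valid FHIR."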
And again, the really interesting thing is that this has widespread EHR vendor support, and that's key; there have been a lot of ways to do similar things, but the key thing is vendors are actually adopting this one. It supports patient-facing apps; Apple Health, for example, uses OAuth to delegate a user's data rights to an app, so you can read and sometimes write data. I'll talk a little bit about some other standards that can be used within these to encapsulate your logic. And in some cases, like on the Epic EHR, you can augment the EHR's FHIR server to add needed data or to filter out unnecessary data. In our institution, for example, we have about eight people on our team certified to build new interfaces in Epic, so basically if you can see it, we can pull it; but that's not always going to be the case in other EHR systems. CDS Hooks is another FHIR-related standard. It's been a standard since 2019, and there's increasing adoption; Epic has adopted it pretty well. You can use it within your rules engine, and it's a companion standard to SMART on FHIR that's meant more for suggesting things. For example, if you place an order, you can invoke a CDS service through the Best Practice Advisory framework in Epic, retrieve data, and provide guidance back: hey, given you're ordering this medication, order this genetic test, et cetera. There's also something known as Clinical Quality Language, CQL, which has been an HL7 standard since 2015 and is starting to get widespread use, primarily because CMS is requiring it for their electronic clinical quality measures. It allows you to express things like computable phenotypes, decision support logic, and clinical quality measures; for example, for the eMERGE work on computable phenotypes, I would think they should probably be expressed in CQL. The last standard I'll briefly talk about is called FHIR Bulk Data Access, or "flat FHIR."
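As a sketch of the CDS Hooks pattern just described: the service's whole job is to take the hook context (for example, the draft orders) and return a list of "cards" for the EHR to display. The card fields below (summary, indicator, source) follow the CDS Hooks response shape; the RxNorm code set and the assumption that codes have already been extracted from the request's draftOrders bundle are hypothetical simplifications.

```python
# A toy CDS Hooks-style service handler. Real services parse the FHIR
# draftOrders bundle in the hook request; here we assume the RxNorm codes
# have already been extracted (a hypothetical simplification).
PGX_SENSITIVE_RXNORM = {"2002", "42463"}  # hypothetical PGx-relevant drug codes

def pgx_cards(rxnorm_codes):
    """Return a CDS Hooks "cards" response for any PGx-sensitive orders."""
    cards = []
    for code in rxnorm_codes:
        if code in PGX_SENSITIVE_RXNORM:
            cards.append({
                "summary": "Pharmacogenomic guidance may apply to this order",
                "indicator": "warning",  # per spec: info | warning | critical
                "source": {"label": "Example PGx CDS Service"},
                "detail": f"Consider genotype-guided dosing for RxNorm {code}.",
            })
    return {"cards": cards}
```

An empty `cards` list is the standard way to say "nothing to suggest," which is what most invocations return, so the clinician only sees a card when the logic actually fires.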
This addresses the issue that most FHIR use cases to date have involved per-patient access, which is just way too slow for population-level data retrieval, for example for research purposes. Flat FHIR is intended to address this, and it could allow you, for example, to get normalized large data sets using the standard. The main issue is that it's still in development; it's still early on. I think this could have a breakthrough impact on this field, but it's just fairly early. Okay. With that, I'll transition to talking a little bit about how we're using these standards and share some of our experiences at the University of Utah. We started a multi-stakeholder initiative about five years ago to leverage these and other related standards to try to improve care value, with institutional investment of about a million dollars a year. With this, we've been an early leader spanning research and operations, and I think what's been kind of unique here is that we have a really agile, collaborative, innovative environment; some would call it the Wild West, but it's really easy to do things here that, I've learned, would take much longer elsewhere. With this, and with our steering committee co-chaired by our CEO and CIO, to date we've deployed over ten solutions into clinical practice using these standards, and right now we're at a little over $35 million in grant funding. I'm just going to briefly run through a few examples which I think illustrate how these things can work; these are all systems in clinical use now at our health system. This one uses a SMART on FHIR dashboard. What we tried to address is that in diabetes, after you give metformin, it's really not clear what to give next in most patients; it's kind of a coin flip.
So we created predictive models based on our data and the state of Indiana's data, got pretty good predictive models, and now you can access information within the system like: for this patient, for an A1c goal of seven in six months, if you just continue metformin (the blue bar on the left-hand side), there's about a one-third chance they'll reach goal. If they lost 5% of body weight, it actually doubles. If you add a sulfonylurea, it's about 50%; a GLP-1, about 80%. But then GLP-1s cost a lot more, and we provide insurance-based information as well: hey, this patient is on Blue Cross Blue Shield, and this is what it looks like for them. Here's another one, for lung cancer screening, based on an AHRQ R18. The issue here is that low-dose chest CT screening for lung cancer could save more lives than breast cancer screening, but current screening rates are pretty abysmal, and shared decision making is really important because individual risk is very different; it's actually required by CMS for payment. So what we do is pull in all the patient's relevant information and provide patient-specific risk assessments that can be actionable. It looks likely that this is going to end up in the Epic Foundation System, so it will then be available to all Epic customers as a natural course. This final one is on a population basis: FHIR-based population health management to identify and manage individuals at elevated risk of breast or colorectal cancer. This is an NCI U24 where my colleague Guilherme Del Fiol is the contact PI; he is also co-PI on, and has led, another NCI Moonshot initiative grant. The issue here is that over 10% of us are at elevated risk for early-onset breast or colorectal cancer, and usually we're unaware of it.
And it's really a tragedy when something happens and we realize we had all sorts of data in our systems that could have identified the risk. So what we're doing here is using a bulk-FHIR-type approach to gather data on our entire population, running it against CDS services, and identifying risk. It turns out there are thousands and thousands of patients at increased risk about whom we're currently doing nothing. We're using techniques like chatbots to automate the genetic counseling and genetic testing processes, and it's quite doable. I think it makes sense to act when we have the data, rather than sticking with our current, largely hit-or-miss approach. Okay, that's a brief summary of some of the things we've worked on. I'll transition quickly to challenges and some thoughts. What are some of the key challenges you run into? As with everything, a lot of it is about the data; data normalization is key. That doesn't magically disappear when you start using FHIR. It does in some areas, but in most areas, if the data isn't clean or isn't mapped, it stays not clean and not mapped, so that's a key one. Another one that a lot of folks typically underestimate is execution performance. Some of these standards, and their implementations by EHR vendors, are pretty slow. For example, with some APIs in Epic, if you want to pull the patient's past medications, even just asking whether a prescription got canceled today, it can take something like ten seconds. Those are obviously huge issues when you're looking at clinical use.
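The bulk-FHIR ("flat FHIR") retrieval pattern used in that population screening example can be sketched roughly as follows. Per the Bulk Data Access specification, the kick-off is an asynchronous GET on a `$export` endpoint, and results come back as newline-delimited JSON files; the server URL and group ID here are hypothetical, and no network call is actually made in this sketch.

```python
import json
import urllib.parse
import urllib.request

FHIR_BASE = "https://fhir.example.org/R4"  # hypothetical FHIR server

def bulk_export_request(group_id, resource_types):
    """Build the kick-off request for a FHIR Bulk Data export of one Group.

    Per the Bulk Data Access IG, the server answers 202 Accepted with a
    Content-Location header that the client polls until the export
    completes and NDJSON file URLs are returned.
    """
    query = urllib.parse.urlencode({"_type": ",".join(resource_types)})
    url = f"{FHIR_BASE}/Group/{group_id}/$export?{query}"
    return urllib.request.Request(url, headers={
        "Accept": "application/fhir+json",
        "Prefer": "respond-async",  # marks the kick-off as asynchronous
    })

def parse_ndjson(text):
    """Parse one newline-delimited JSON export file into resources."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]
```

The point of the asynchronous design is exactly the performance problem mentioned above: per-patient APIs are far too slow for whole-population retrieval, so the server prepares the files offline and the client fetches them when ready.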
EHR vendor support and standardization are also currently mostly just the basics. It's great if something is supported, but if it's not, it doesn't really help you. This includes detailed clinical models, by which I mean things like: it's great that you can get the office BP, but can you tell how it was taken? Was it an AOBP? And you don't automatically get the home BP or the 24-hour BP, those kinds of things. There's basically zero genomics support in the current state, and at least the way standardization is going, I think it's a long way off, and probably something this community can accelerate. There's also very little support for population-based approaches in a standards-based manner. As soon as you move beyond "a patient is in clinic and I want to do something with that patient" to, say, "I want to survey the population for risk and recommend, for example, that they get this genetic test," it starts breaking down. And then, of course, there's the last mile of translating into practice, especially outside a clinical trial context, because all the things Sandy brought up come into play: who's going to pay for it, and why would they do this rather than buy another MRI machine, run a bunch of scans, and get their money back in a year? Those kinds of things. So, recommendations. I'd recommend coordinating and synergizing with the broader health IT community. It's such a common pattern in the standards realm in particular: we tend to build our own standards and criticize the other standards for not thinking about us enough.
But we'll end up with beautiful standards that nobody implements. So I think the key here in particular is: how can you get the things this community needs into US Core as a FHIR implementation guide? That should be a central thought in this community if you want genomics to be widely adopted. We also need to recognize that the hard problems largely persist. FHIR has gained wide adoption by promising it's super simple, and it is super simple to create a test application, but if you want to do something real, it's actually still hard. So I think that's a message: it can't be "oh, we have FHIR, so now it's a totally easy IT issue," because it totally is not. Then we need to work together to accelerate key development. For example, for the US Core Data for Interoperability, some of the key criteria are that a data element is implemented in multiple health systems across multiple EHR platforms; that elevates it in the promotion process, where it then comes to committees like the one I'm on that evaluate whether to make it a national requirement. So there's a lot that can be done to basically force the hand of regulators: at this point, this is proven; it needs to be done. And of course, we need to facilitate the full translation life cycle, including the last mile of clinical implementation, because that tends to go by the wayside in a lot of scientific institutes. Obviously the whole life cycle is important, but in the end, what we're developing needs to be used. Okay, that's it, and I think I'm actually a little bit over time, so thank you very much. That's great, Ken, thank you. I do have one question that came in via the chat that I'd like to toss your way before we turn it over to the panel discussion.
Mark writes: implementation guides are very interesting; regarding genomic applications, could the creation and deposition of an IG be required, or strongly suggested, as part of the dissemination, implementation, and data sharing plan? I'd say probably not. First, the tooling is kind of painful, and there are only a few people who really know how to do it, so it could be really, really frustrating for people who don't. I believe that, at least until recently, it was Excel-spreadsheet-based for building the websites; just imagine that for a second. So I would say no, not a requirement, but maybe just asking people, if a project is getting to a later stage, how are you going to use standards, and how does it synergize with available standards, is probably a useful thing to ask. The challenge with standards is that it's really complicated and really deep, and you need really specialized folks who attend weekly calls, follow all these things, and review ballots. I know Bob is one of those folks, but there has to be a way to scale it, because it's oftentimes not possible for a regular researcher in this field to spend sufficient time to be truly expert and engaged in that area. Thank you very much, Ken. I completely concur with your assessment; it is hyper-specialized and very tricky. I would like now to turn the platform over to Dr. Jeff Ginsburg, who will moderate the panel discussion. Jeff? Thanks. That was great, and I can't resist quoting Marc Williams, who has been known to say that the problem with standards is there are so many to choose from, or something like that. Anyway, we're going to have about 20 minutes for panel discussion, and there's already a question from Richard Gibbs.
For the panel: with the various health information exchanges that exist, is this a logical place where genomic standards should enter the ecosystem, and how could this happen? I'll ask as many panelists as are able and willing to respond. Well, it makes sense. I don't know if health information exchanges currently do exchange genomic information, but yeah, it seems like a reasonable place to start. Yeah, my feeling is it really has to start with the provider-side use cases, to define what the genomic information is going to be used for, and then I think that genomic health information exchanges can play a really important role in delivering that data. Yeah, I would agree with that. For our implementation at Penn, we don't currently push genomic information through the health information exchange. As I mentioned earlier, we're an Epic shop, and Epic's Care Everywhere is the HIE we're using. We could have turned on genetics, but we didn't, and at least to start with, that was by design, because we're still trying to make sure the providers who are local know what to do with the information. We don't want to push information out to providers who don't know it's coming, don't know what to do with it, and don't have the interpretation and the content there for them to use it. Part of it, I guess, is also liability: we don't want to put something in front of a provider that they can't use, but then we've also punted liability to them, because now they have this information they can't use. So that's something we're trying to work through: how do we make sure that not only the content is there, but also the interpretation, so that a provider who doesn't know anything about genetics knows what to do with it?
I want to mention that I'm going to take the moderator's prerogative and ask that, as you answer questions, you do so with your hats on as advisors to NHGRI, and to Mark and Ken, about what the research agenda coming out of this meeting should be. I'd like to turn back to some of the challenges that Marilyn highlighted in the first talk. We don't have them all captured here, but you listed the dynamic nature of genomic information and annotation, the standards for content, the standards for interpretation, and the subsequent use in clinical decision support. So first of all, and I'm speaking to all of you, maybe you could help us prioritize the research agenda around those features, and also say whether they should be decoupled from each other or whether they need to be done as one process. If you could begin to unpack that, that would be great. I'm happy to go first. Can you say one more time the unpacking of which components? You know, the fact that we have dynamically changing genome annotation and interpretation that then has to be put into some standardized format for access by the provider community and for CDS to use. Are those three or four separate research agendas, or is it really just one? And how do we begin to approach it? I think it's at least two research agendas; at least that's my knee-jerk reaction. One is the content, which needs to be updated, so that annotation information. In some ways that is what ClinVar is intended to do. And one of the things that I haven't seen, and this is my naivete, so someone here may know better than me, is how ClinVar compares with something like CPIC, where there are very specific guidelines that go with the annotation and the interpretation.
I feel like CPIC has done a really good job of saying, yes, here are these annotations, and they do evolve over time, but here is also the interpretation, and here's what you do about it; what is the action that you take? I know ACMG has a list of genes we should focus on, but I'm not sure they have that same level. You know, in PGx it's a gene-drug pair with guidance. Could we have something similar? ClinVar has this rich database of annotation; could we have the annotation, the disease, and the guidance standardized and put into guidelines? And maybe it has been since I last paid attention; it's been a while since I've really dug into the ClinVar data, because I'm so focused on PGx right now. But I do think there's that piece, and then there's the CDS that goes with it. Those are separate; people build out the CDS, certainly, and this is something Carol and I were just talking about in the chat: how you build it in a way that providers can read it. We need to figure out the language they speak, because right now we're almost experiencing a language barrier. The informaticians with expertise in genomics speak one language, and we're being driven by the standards we're trying to use; that's not the language the providers use. So we need to write the decision support in a way that they can understand. I think that's a little different from the annotation itself: we can have a standard to annotate it, but then we have to build that interpretation engine into the English that the clinician speaks. I really agree with Marilyn, and perhaps I'm saying the same thing while coming at it from a slightly different angle. I recognize that this is a huge generalization that isn't always true, but I do feel that often, within the funded research agenda, we're building infrastructure with the intention of enabling a thousand flowers to bloom.
And I think there's a need for focusing more on specific flowers, and driving back into the infrastructure from the flower. So, for example, a specific flower, and I'm not a clinician, so this may not be the right example: suppose there was a service we wanted to make more broadly available whereby, if a cancer risk variant is found, we provide the IT support required to help ensure that the patient, over their lifetime, gets the necessary screening, and that it's updated as our understanding of the implications of that variant is updated. If we were to focus on something like that, then we can drive back into: what do the standards need to be to support that use case, and what does the knowledge infrastructure need to be to support that use case? And really try to be more vertically focused than horizontally focused, which, as an IT person, you know, I love the infrastructure stuff, but I think those end-to-end use cases are just really important to make it accessible. Sorry. Yeah, it's me, Ken. So, I was wondering: we've talked a lot about the use of FHIR and USCDI over these two days, but one thing we're still missing is an understanding of how FHIR can be used for research. Based on what we already know about FHIR, where would be a good place for us to start to build evidence that FHIR could be used for research? Because I think that's how you move forward to actually address some of these things. I'd like feedback from this group: if you had to come up with a research agenda here, where should it be when it comes to working with FHIR and USCDI?
Yeah, maybe I'll give it a thought. Obviously there's a whole bunch of research. If it's clinical research on how you actually use these in practice, then probably the first step is to develop the evidence that this is the right way to care for patients, regardless of whether you're doing it with an EHR or not. Then figure out all the different ways you can do it, and then it's: how do you scale it, how do you make it economically viable? On the discovery side, I think the benefit of using something like FHIR is that there's a lot of mapping activity occurring within institutions to make the data available in FHIR. For example, there are efforts in probably all of our institutions to map, through the Beaker lab module, the lab components to LOINC codes, so they're mapped appropriately. Well, if you use the FHIR interface to pull that data, the mapping is already done. So some of those data-mapping issues, for getting data for research, running computable phenotypes, and so on, would potentially go away. So I think it's use-case driven. You probably just need to get folks together who are trying to solve problems, and, kind of like Sandy's notion, first identify what you're trying to do. Don't start from the technology and the infrastructure; start from the problem you're trying to solve, then get the relevant folks together, ask how we can do this, and then solve it. You know, one interesting opportunity that may have some good overlap between the clinical and the research realms is cohort selection. I think you might have mentioned this, Ken, along with some others, over the course of the day.
On the clinical side, I think there's more work being done now to scan electronic health records for patients who are being suboptimally treated, and to develop those cohorts to act on. That's the same kind of work that's often needed on the research side to recruit patients, so that may be an area to focus on. Yeah, I like both of your suggestions, Ken and Sandy, but the one thing that I think is tricky for researchers, and I think somebody said something in the chat, Kevin, your comment just sparked this for me, is that there are a lot of challenges in writing a grant on infrastructure to somewhere like NLM. There's innovation: we're going to build this new widget and it's going to do all these cool things. Writing the grant on "I'm going to use FHIR, which is this cool technology somebody else already built, and now I'm going to do these little tiny one-off tests" is not innovative. Part of what we need to do is take the technology and innovation that's been built and test it in use cases. Putting on both my grant-writer and grant-reviewer hats, those applications are going to get dinged on innovation, because people typically think of innovation as the cool new thing, and this is not a cool new thing. So just something for us to think through: how do we ask these questions in a way that allows us to get at the crux of the issue, recognizing that part of the innovation was the technology that's been built? There's a lot of chat on the use case topic coming up. So, Mark, I think you may have started this, and I'm going to turn it over to you. But I like that it's possible to marry the use case idea and agenda to Sandy's emphasis on the economic value proposition, because if we can get a research agenda that actually tells us what to measure, and how it will convince the ultimate stakeholders who have to support this,
I think it would be a winner. So, Mark, I'm going to ask you to elaborate a little bit, and then see what the panel thinks; can we drill down a little deeper on the use case idea? Sure. I'll just comment a bit on Marilyn's last point first. The point you just made is one that NIH is well aware of, and I know that in a lot of study sections now we are being told that application of a technology to a new area should be considered innovative. It's taking a little while for the study sections to pick up on that, but I think the problem is recognized and hopefully will continue to improve. So, the use case idea I think is really interesting. The thing I like about it is that in the clinic and in clinical informatics, everything is based around use cases, and use cases can be evaluated and prioritized in terms of where we invest our resources. The question, then, is: could we base a research agenda on specific use cases that could be prioritized, in this case by NHGRI? And then, to get at the point that Jeff was raising, what would be the criteria by which you could prioritize given use cases? You can imagine that this is where you could engage with a variety of different stakeholders to get their perspective on what would drive their interest in a given use case, whether it's closing a clinical care gap, an economic argument, a patient-centered focus, whatever. But if you could actually prioritize use cases and articulate them explicitly, then you might get research that would have a higher likelihood of crossing from the research realm into the implementation realm.
And if I could just add quickly to that, going back to the discussion we had earlier about convening health system leadership and payers to be part of that conversation, to learn what's burning a hole in their wallet that genomics and informatics could solve: that would be more of a needs-driven approach. From my own vantage point, we're often developing technologies that are looking for a problem to solve, and maybe we need to flip that paradigm a little more than we have as a community. So, Ken or Marilyn or Sandy, do you want to respond to that? Maybe. I've thought about this kind of stuff a lot, and I was engaged in building the cost accounting system in our health system, which was very interesting; you figure out how all the money flows. I think if you get the CEOs, CFOs, et cetera together, you also need, as somebody said earlier, I forget who, the people who are really knowledgeable in this field to present the case studies that show this can be done. Because if you just get the executives together, they're going to say, "tell me how I'm going to increase revenue or reduce costs." They're not going to say, "maybe we can do this." So you would bring in the case studies and say, "look what we've done at this academic medical center, and this is how we think it scales; what do you think?" I think it would need to be that combination, or it would be a very quick conversation. Yeah, I agree with all of this. I think it would be incredibly exciting to base a research agenda, or at least part of one, on use cases. And I think Ken's right: you have to show them. The decision makers aren't going to know what is possible, but they could react to the potential economics of what is possible.
I would also highlight that I've been exposed to these clinical ROI analyses much more recently. They are intense and complex and take a lot of time to do, and they look very similar to a lot of things we attempt to do on the research side. It may be that there is a research agenda that includes investing in generating those analyses, because it's not as though we can just ask a CEO and have them articulate whether this will or will not generate a return. They could provide guidance, and then we can investigate whether it would create a return. Yeah, the one thing I would add is that having the people from the C-suite at the table also helps broaden the perspective on where you want to see the return. One of the things we've experienced is that if you have only the one clinic, and the department chair from that clinic, at the table, they are sometimes focused very much on their own budget and what ROI this would have for it. Whereas if you get the CEO and the CIO and the CMIO, they're thinking about the whole system, and so there are bits of information we're talking about here that might not help in cardiology but are absolutely going to help in pathology or somewhere else. That's definitely what we've seen: having those folks at the table has helped. The other thing is having researchers, providers, and the IT folks all be part of the conversation. Part of the way the precision medicine effort at Penn happened is that we have a regular meeting that includes not only the C-suite folks, the clinical providers, department chairs, and the researchers, but also legal and bioethics. We meet on a regular basis and talk about these things. And to Ken's point, they all have to be part of that conversation to be able to drive those agendas and infrastructures forward.
Yeah, I would note, though, that it's not rocket science to know what moves the needle. The things that will get CEOs and CFOs to really pay attention: it increases revenue, because in the outpatient setting you provide the service and get paid more than it costs you. If you could just show that for genetic testing, that you get paid more than it costs, that would really drive genetic testing. Or, in the inpatient setting, it reduces your costs, because you generally get paid by MS-DRGs, which are fixed based on the diagnosis, and the sooner you can get the patient out, or the less you do for them, the more you make. It could even be things like testing you would otherwise have done anyway: just do it the day after the patient gets discharged, as an outpatient, which could make a several-thousand-dollar difference versus doing it inpatient. The basics of hospital financing aren't that complex: for inpatient care we generally get paid a fixed amount, so reduce costs; in the outpatient and ED settings we generally get paid more the more we do, unless you're in a value-based contract. So figure out ways to increase revenue. That could be, for example, you do a genetic test and find that somebody has a really complex disorder that requires a lot of treatment, and maybe you just show that doing this generates a lot more revenue, and that's enough. Yeah, I think there are many different types of use cases. There are functional use cases, where you're providing a certain kind of functionality to everyone, and there are use cases intended to enable a new kind of service that you're going to offer. When you're enabling a new kind of service, that's where the assumptions get complicated.
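[Editor's illustration] The inpatient-versus-outpatient payment arithmetic described above can be sketched in a few lines. All dollar figures here are hypothetical, chosen only to illustrate the mechanism, not actual reimbursement rates:

```python
# Under a fixed MS-DRG inpatient payment, every dollar of cost removed
# from the stay becomes margin, while the same service billed as an
# outpatient encounter adds its own fee-for-service revenue.

DRG_PAYMENT = 12_000               # fixed payment for the admission (hypothetical)
INPATIENT_COSTS = 10_500           # total cost of the stay, including a $2,000 test
TEST_COST = 2_000                  # cost of running the test
OUTPATIENT_REIMBURSEMENT = 2_600   # what the test pays if billed after discharge

# Scenario A: test performed during the inpatient stay (absorbed by the DRG).
margin_inpatient = DRG_PAYMENT - INPATIENT_COSTS

# Scenario B: test deferred to the day after discharge and billed outpatient.
margin_outpatient = (DRG_PAYMENT - (INPATIENT_COSTS - TEST_COST)) \
    + (OUTPATIENT_REIMBURSEMENT - TEST_COST)

print(margin_inpatient)   # 1500
print(margin_outpatient)  # 4100
```

With these made-up numbers, shifting one test across the discharge boundary changes the margin by $2,600, which is the "several-thousand-dollar difference" the speaker describes.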
What additional fee-for-service revenue or impact on value metrics might be associated with this, and what are the trade-offs? Where it really gets complex is that many institutions now have both fee-for-service revenue and value-based revenue, and something that helps one can hurt the other. You have to play out what this actually means for the institution in that kind of environment. So different use cases may have different complexity, but in general I think the field will get the most value, from a research perspective, from focusing on use cases that involve enabling new kinds of services. Thank you for that. I did ask Mark if I could ask one quick, well, hopefully quick, question, though I know it's a big one. It's orthogonal to the discussion we're having, and maybe it will come up later, but Eric Green emphasized the need for capacity building of the workforce and for diversity, and I think the field of informatics and this group need to weigh in on how we actually do that. We don't have the pipeline we need to really enable what we've talked about over the last couple of days, and we certainly don't have a very diverse one. So I'm curious whether each of you could give a lightning round, 15 or 30 seconds of thoughts, before we close this session. I believe that in addition to hiring for diversity, one of the problems we have is that we don't know how to design for equity, relative to supporting the workflows that will actually make these services available to underserved and minority communities. By focusing on that, there is the potential to develop relationships with those communities that in turn help us with diversity and hiring. That's a really interesting idea. Thanks, Sandy. Marilyn or Ken, do you have any additional thoughts? Thank you.
Oh, I think it's always the case that we have to start early, but this becomes more of a societal thing. In the end, we have to start with early childhood education and childcare support and all of that. That being said, it is obviously hard; all of us are trying to address this, and it is super hard. Maybe one of the things a group like NHGRI could do is look at what's happening in every single department and every single program across the country in this area and figure out ways to do it better together, because it is happening everywhere. This is a key issue everyone is trying to address, and I'm sure we're wasting an enormous number of cycles rethinking the same thoughts somebody else already had. What I keep coming back to in my mind is how to use IT and informatics to reach people where they are. Certainly on the provider side, can we push content to them that they can understand? On, perhaps, the undergraduate side, can we learn from other educators how to take the content we have, write it in a language they can understand, and then deliver it on the platforms they use? Perhaps we should be making more short educational videos; that's what the kids are watching these days. Can you take these concepts and meet them where they are, in the language they're speaking? Wonderful. It was worth going over by three minutes and 35 seconds to hear those important thoughts. I want to thank Ken and Marilyn and Sandy for some outstanding contributions, and also my co-chair for helping us move the session forward. Now I'm going to turn it back over to Mark. Thank you for a wonderful panel discussion.
So this concludes session five. We now have an hour's break, which gives Mark and me some time to pull all this information together for the next session. We will reconvene at 3:50. Thank you again. Mark, is there anything you want to add? Nope, but please come back ready to discuss. We have a lot of interesting themes and not a lot of time, but we'll try to make sure we've captured all the important discussions. Thanks very much.