Great, we're back. It's 12:55. I hope you had a good break, and many thanks to Rex and Renee for doing such a fantastic panel, and to all the panelists and participants; that was terrific. We're now going into Session Two on IT infrastructure. Our moderators are Carol Bult and Christopher Chute. Please take it away. Thank you very much. I'm Chris Chute from Johns Hopkins, and Carol and I are going to co-moderate this discussion. I'm going to moderate the initial speakers, and Carol has the hard part: she's going to wrap up at the end and help us with the summary. Our first speaker today is Ken Wiley, program director in the Division of Genomic Medicine at NHGRI. Ken, take it away, please. Thank you. Let me share my screen. Let's get started. Okay, can everybody see my screen? Can everybody hear me? Yes, yes, you're good, Ken. Thank you. Okay, so I'm here to give an overview of the last genomic medicine meeting, Genomic Medicine XIII: Developing a Clinical Genomic Informatics Research Agenda. This was a virtual meeting held last year, on February 9 and 10. The goal of the meeting was to develop a research strategy on using genomic-based clinical informatics tools and resources to improve the detection, treatment, and reporting of genetic disorders in clinical settings. The meeting had three objectives: first, to define the current status of genomic-based clinical informatics and related knowledge gaps; second, to determine the facilitators and barriers that affect the development and deployment of genomic-based clinical informatics tools and resources, and what researchers need in order to address them; and finally, to identify the resources needed to improve how these tools and resources are developed and their impact on patient and clinical decision-making processes.
For the meeting itself, we started out with Dr. Eric Green, who kicked things off with a presentation on the published strategic vision for improving human health at the forefront of genomics, which highlighted the future research priorities and opportunities in human genomics relevant to NHGRI's mission. After that we went into the six sessions described here, which covered the objectives and goals of the meeting. These sessions covered a broad range of topics related to clinical genomic informatics tools: advanced technologies; making the case for genomic informatics tools; identifying needs from the stakeholders' perspective, including developers and patients; defining a research agenda that addresses the process of developing these tools; and understanding how these tools could be developed, and research conducted, using genomics in a fragmented healthcare environment. The sixth session was an overall assessment, and I would like to thank Dr. Marc Williams, who worked with me to distill from the five sessions the best ideas for developing a genomic-based informatics research strategy. The highlights from Session One, making the case for a clinical genomic informatics research strategy, included identifying the elements from the survey, a technical desiderata survey that was actually given twice, and I'll describe that later, where significant progress has been made, and which elements still require additional research support. We also highlighted the need to ensure that the development and implementation of these tools and resources is done in a manner that includes equitable representation from diverse and underserved populations.
We also want to make sure that the outcomes reported from these tools and resources capture data regarding both benefits and harms of clinical decision support use, to improve mitigation approaches. In regards to the survey: it listed elements that came from the technical desiderata for the integration of genomic data with clinical decision support, a paper published in 2014, and elements from the technical desiderata for the integration of genomic data into electronic health records, published in 2012; those elements are listed on the far right here. This survey was given twice: at the Genomic Medicine VII meeting in 2014, and again at the Genomic Medicine XIII meeting. Different groups filled out the surveys, and they weren't designed with the intent that we would come back to them years later and repeat them; it just happened that we had both surveys, so Marc and I thought, let's combine the two and see whether things actually changed between the 2014 meeting and the 2021 meeting. What I want you to take from this is that some areas still had the same level of ranking in both, highlighted in the cells that are in white: for example, CDS knowledge must have the potential to incorporate multiple genes and clinical information; clinical decision support knowledge must have the capacity to support multiple EHR platforms with various data representations with minimal modifications; and access to and transmission of only the genomic data needed for clinical decision support.
There are also other elements where we've actually made significant progress, at least based on the survey as filled out by the members of the Genomic Medicine XIII group, and those are highlighted in the yellow cells. So what it shows is that in some areas we have made progress, but there are still a lot of opportunities related to the elements we still need to focus on. When we look at Session Two, the need for research to advance technologies for genomic medicine, the highlights that came from the discussion and presentations were: invest in research that advances a patient-centered approach in the development and implementation of artificial intelligence technologies; and consider how research conducted by the genomics community should complement efforts in the private sector. We don't want this to be done in isolation or in silos; we understood that there's a need for us to work with our colleagues in the private sector to help address this effort, and that these shouldn't be done in separate environments. And finally, support research that generates outcomes that can be used to inform the business model of artificial intelligence, that promotes open-source development, and that attracts a broad range of stakeholders. What we realized is that when you're thinking about developing these different tools and resources, you also have to think about the business model: how do you encourage private-sector investment and support for collaborations to address this? So you have to think about the business model when you're putting something like this together, and not just for the private sector, but for patients and clinicians as well.
The highlights from Session Three were really about the researcher and stakeholder perspectives, focused on the enablers and barriers that affect the integration of genomic-based clinical informatics resources into the healthcare system. These include research efforts that should incentivize collaborations in development, and really a focus on developing a learning healthcare system for genomics. In addition, these tools and resources should incorporate an educational and policy research component focused on reducing barriers and improving knowledge for patients and providers. Rather than focusing on just the clinician side, you really have to bring in the other half of the equation, which is the patients, and have them also be engaged in developing these tools and in understanding how they can use this information for their own care. Research should also focus on the development, implementation, and maintenance of genomic-based workflows that diminish the burden on primary care providers, tap into other healthcare workers, and engage patients, with innovation that goes beyond the typical clinical decision support reliance on alerts and reminders. So you can see again that there's a running theme: you have to be more encompassing in the groups you reach out to in order to encourage the development and implementation of genomic-based clinical informatics tools and resources. This really can't be focused on just one particular group; we have to have broader engagement. From Session Four, defining a research agenda that addresses the process of developing genomic-based clinical informatics tools and resources: we understand standards are important.
We wanted to make clear that standards are another key part that has to be addressed, and not just standards, but standards that can be broadly implemented and broadly utilized. We want research that helps us better understand how to improve the interfaces between EHRs, Health Level Seven standards such as the Fast Healthcare Interoperability Resources (FHIR), and laboratory information systems, and really understand more about how we can have those standards work with LIS systems. We also want to lower the regulatory barriers for the development and implementation of these tools and resources, without compromising patient safety. We all understand the complexity of trying to get through the regulatory process for some of these tools, but in the research arena we actually have an opportunity to build the preliminary data that can help guide those regulatory processes. In addition, we want to develop and implement common semantic frameworks and data models that reduce the reliance on manual curation. One thing we realized is that where we can automate these tools and resources, we really want to, but we also understand there's a lot of effort that has to go into defining what automation means in this case, and how it should be done in a way that is consistent across a heterogeneous healthcare system. For Session Five, genomics in the fragmented healthcare environment, the highlights were really the need to invest in developing specific use cases that support genomic medicine implementation through informatics, while also demonstrating value and scalability for genomic interoperability.
In addition: invest in research focused on establishing a genomic-based health information exchange in a manner that synergizes with the broader health IT community's efforts in this space, while also supporting efforts that carry these tools and resources across the last mile of clinical implementation in healthcare systems, by identifying what has been developed and supporting implementation science research. For Session Six, which Dr. Marc Williams led, this was really an overall summary of the feedback we had seen from the previous sessions and what we could focus on in developing a research agenda for these tools and resources. The highlights include: incorporate an implementation component within the overall scientific framework; engage multiple levels of stakeholders for a balanced value proposition and broad support; identify ways to reduce regulatory process barriers to stimulate growth in this field; develop research methods to identify and mitigate the inherent and pervasive biases in data, information systems, accessed knowledge, clinical algorithms, and care delivery that interfere with the meaningful use of genomics in clinical care; and address the importance of implementation equity in low-resource settings to ensure that broad genomic implementation does not exacerbate health disparities. Again, you hear the running theme in all of this: we cannot do this by ourselves; we really need to build support among groups to help us with this effort. Also: study human factors and user interfaces in workflows to enable scalable, shareable computable interfaces for genomic knowledge and harmonization, and shareable clinical decision support that incorporates information from multiple genes and clinical data.
Also: develop models that support sustainability; create education policies that bring groups together to understand how to use their genomic information; advance the interface between human cognition and information technology; and develop methods to reuse genetic data as patients move through health systems and through life. This relates to what Dr. Williams mentioned before, that you're going to need that genomic information more than just one time, and you should have someplace, and some way, to access it. As for outcomes from this meeting, I was very excited that we were able to publish the findings in the Journal of the American Medical Informatics Association, shown on the left. I'll also highlight that there is a notice of special interest, published earlier this year, that indicates interest in addressing critical gaps in genomic medicine regarding the availability of clinical informatics tools to facilitate patients' understanding of genomic information in a way that assists their ability to navigate and inform their healthcare decisions. Finally, there is the Genomic Data Science Analysis, Visualization, and Informatics Lab-space (AnVIL), to which we've actually added another RFA to promote adding clinical components, with the end goal of allowing clinical research to live within the AnVIL ecosystem and providing a suite of interoperable clinical components to assist the basic and clinical science research community. There are a lot of people to thank for this; again, keeping with the theme here, we didn't do this in a silo.
We really worked with a lot of groups to make this possible. The speakers highlighted here were a tremendous help, as were the co-moderators listed here. I also want to make a very specific acknowledgement of the names in blue: this didn't start just with Genomic Medicine XIII; it was actually planned years before with the groups and names highlighted here in blue. We came together to try to understand how to use clinical informatics, and what a research strategy for clinical informatics would look like, and we were able to work with the Genomic Medicine Working Group members to make this possible. Marc was a great asset in helping us identify those gaps and understand how we could make this workshop possible, and he did a great job as my colleague on this. The names in orange, along with the rest of the working group members shown here, also helped significantly to make this workshop possible. I also want to thank the Duke Center for Applied Genomics and Precision Medicine, as well as our program analysts who made this possible, along with the NHGRI Office of Communications, who helped this virtual meeting run successfully, and of course the Genomic Medicine XIII planning teams. And that's it. Thank you. I don't see hands or clarifying questions. If that's the case, then I think we can move on. Our next speaker is Travis Osterman, who's director of cancer clinical informatics at the Vanderbilt-Ingram Cancer Center. Travis? Thanks, Christopher. Let me share my screen here. I'm going to start by thanking the meeting organizers for the opportunity to come and talk. My name is Travis Osterman; I'm a medical oncologist here at Vanderbilt, and I lead our clinical genomics workstream.
Largely I'm going to focus on integrating clinical genomic results into the electronic health record. This will be largely a pragmatic discussion; this is actual work that's really happening. Of my listed disclosures, the only one that's probably important here is that I receive no financial compensation from Epic, which happens to be our electronic health record; I'll talk about them throughout this talk because we do use them as our EHR vendor. When I talk about precision medicine with our executives, internally and externally, I like to focus on this quote from the National Academies report from 2011. I think we're all unified in this direction, which is to consider a world where clinical features, including molecular features, support precise diagnosis and individualized treatment. For this meeting, though, I want to go back to the 2015 report, which Ken already did a fantastic job of summarizing, and hone in on two specific quotes from recommendations coming out of that workshop. And Chris, I didn't realize that one of them was going to come from you; I didn't know you were going to be moderating this when I created the slides. The first is establishing data standards and common ways of representing outcomes that would facilitate the scalability of efforts and the translation of genomic information into clinical care. The second is integrating genomic data into the clinic through clinical decision support, so that the guidance is scalable and interoperable. So I want to give an update on where I see the current state of the field on that last piece: the translational, actually operational piece that provides patient care. The kudos here really go to the HL7 Clinical Genomics Working Group. For those of you who aren't familiar, and Bob, I think this was one of the questions you dropped in the Q&A, this has been the evolving data standard for transmitting clinical genomics information.
The Standard for Trial Use 1, or STU1, was initially published by this group in November of 2018. The first implementation, meaning end to end, with both an electronic health record implementation and a reference lab implementation making that connection and utilizing this as a data transmission standard, was, I think, described by Rush at the Epic user group meeting in August of 2019. So at that point, to my knowledge, there was only one organization leveraging this in the entire country. But look how far we've come: STU2, the Standard for Trial Use 2, was officially released in April of last year, and currently, still up to date as of this month, August, there are 29 healthcare systems utilizing this data transmission standard to connect to reference laboratories and receive structured information. This, I think, largely goes without saying, but I include this slide in almost all my talks to our executive leaders, because it's easy for us to get pigeonholed and think about genomics or genetics in just one space; the program I'm going to talk about spans germline testing, pharmacogenomic testing, and somatic testing, and I'll use that to structure the rest of my talk. Our tagline at Vanderbilt is "making healthcare personal," and part of our re-examination of our efforts in precision medicine led to this clinical genomics workstream, which, as I said, I help to lead. Many of these efforts were already ongoing, and this was about bringing them together so we could leverage scale across each of our, as we jokingly call them, silos of excellence, and make that work across our enterprise. We're thinking about clinical decision support, which has been a big topic here in the genomics space. Largely, this is, I think, what most of us think of: we're either recommending genomic testing or we're personalizing care for an individual patient. So we've got a small table here, and I would say that's probably true for what we did from 2010 through at least 2020.
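To give a rough sense of what "structured" means in this standard, a called variant is conveyed as an observation with coded components rather than as PDF text. The sketch below hand-builds such a structure as a plain Python dict in the general shape of the HL7 Genomics Reporting style (LOINC-coded components for gene, protein change, and significance); it is an illustrative fragment only, not a complete or validated FHIR resource, and the field values are abbreviated.

```python
# Minimal, illustrative sketch of a genomics-style variant Observation,
# assembled as a plain Python dict. Not a complete or validated resource;
# it only shows the idea of coded components instead of free text.
import json

def variant_observation(gene: str, p_hgvs: str, significance: str) -> dict:
    """Build a bare-bones variant observation for one called variant."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": "69548-6",
                             "display": "Genetic variant assessment"}]},
        "valueCodeableConcept": {"text": "Present"},
        "component": [
            {"code": {"coding": [{"system": "http://loinc.org",
                                  "code": "48018-6",
                                  "display": "Gene studied"}]},
             "valueCodeableConcept": {"text": gene}},
            {"code": {"coding": [{"system": "http://loinc.org",
                                  "code": "48005-3",
                                  "display": "Amino acid change (pHGVS)"}]},
             "valueCodeableConcept": {"text": p_hgvs}},
            {"code": {"coding": [{"system": "http://loinc.org",
                                  "code": "53037-8",
                                  "display": "Clinical significance"}]},
             "valueCodeableConcept": {"text": significance}},
        ],
    }

obs = variant_observation("KRAS", "p.Gly12Cys", "Pathogenic")
print(json.dumps(obs, indent=2)[:120])
```

Because each fact (gene, protein change, significance) lives in its own coded slot, downstream systems can query it directly, which is the payoff Travis describes next.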
Like most programs, we have a robust pharmacogenomic program; it's PREDICT here at Vanderbilt, and it's been discussed at length. We started that program in 2010 and currently have almost 20 drug-gene interactions that we track, and so we're both recommending testing in our electronic health record and then able to integrate those results to help make sure that the right patients are getting the right drugs at the right time. Again, this is, I think, not necessarily a new phenomenon, but we're trying to expand those same principles into other spaces. So for instance, these are Adam Wright and Kevin Ess, a couple of my colleagues; Adam is in the Department of Biomedical Informatics and Kevin is a pediatric neurologist. We were talking with Kevin, and he shared that about 40% of pediatric seizure disorders are due to genetic factors. The ones that aren't potentially have an anatomic cause, can therefore go to surgery, and can potentially have a really good outcome. From his standpoint, there were specific phrases that could be listed in pediatric EEGs that made him think a case was less likely an anatomic cause of epilepsy and more likely a genomic or genetic cause. So we worked with him to read those EEGs in real time as they come off the system, and then alert the providers who are reading those results that specific patients may actually need genetic testing to identify a genetic cause of their seizure disorder. This is live in our system and is, I think, one of the ways we're trying to push forward in spaces outside of pharmacogenomics as well. We've had several other examples, but needless to say, routinely for the last many years we've been doing both: recommending genetic testing and personalizing care for individual patients. But I think we need to do more.
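At its core, the EEG-triggered alerting described above is a matter of scanning incoming report text for the phrases the neurologist flagged and routing an advisory to the reading provider. Here is a toy sketch of that pattern; the trigger phrases, record layout, and advisory text are all invented for illustration, and the production logic inside the EHR is certainly richer than substring matching.

```python
# Toy sketch of phrase-triggered CDS on incoming EEG reports: if a report
# contains language suggesting a non-anatomic (possibly genetic) epilepsy,
# suggest genetic testing. Phrases and wording are hypothetical examples.
TRIGGER_PHRASES = [
    "generalized spike-and-wave",   # hypothetical trigger phrase
    "no focal abnormality",         # hypothetical trigger phrase
]

def suggest_genetic_testing(report_text: str) -> bool:
    """Return True if any trigger phrase appears in the report text."""
    text = report_text.lower()
    return any(phrase in text for phrase in TRIGGER_PHRASES)

report = "EEG shows generalized spike-and-wave discharges; no focal abnormality."
if suggest_genetic_testing(report):
    print("Advisory: consider genetic testing for seizure disorder")
```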
We need to think not just about individual patients; we need to think about how we do this for patient populations, and we've already talked a little bit about population health, and I think we'll hear more about that in the next session. For me, this just adds another column to our table: we need to think about how we recommend genetic testing for patient populations, and then how we deliver care to the entire population to personalize their care. And I would argue that this is the direction we need to go; the easiest box to fix is probably the upper left, and we need to work our way down to the lower right box. To do this, we need to get the data out of the PDFs. I know that's a controversial statement, but this is our genomic data strategy at Vanderbilt; it looks very simple, but it took quite a bit of time to get us all to agree on one strategy, largely because when I went to talk to our molecular pathologists and I said "genomic data," they heard BAM, SAM, and VCF files, while when I talked to my medical oncology colleagues, all they wanted were the final called variants on the report. And so this became our strategy: we want to receive all of those discrete final results just like any other internal lab; when we order a CBC or a CMP, we want to order and receive it just like that. We want it to flow downstream into the electronic health record and then ultimately both provide patient care and support our research enterprise. On the right-hand side, we don't want to give up all of those raw and processed files, whether that's from our internal genomics lab or from an external partner. We want to receive those; they go into operational data storage and may still support patient care, because again we have an internal lab that may leverage those upstream files, and they also go into a research repository. But at least it gave us a place to level-set.
So we went live on what Epic calls the Genomics module, which is the way for us to receive those structured results from third-party reference laboratories, in July of 2021, so we've been live just over a year now. It works exactly like you'd think it would: I put an order in the system, just like any other order; it's transmitted electronically to a reference laboratory, and the results come back to me in the electronic health record just like a CBC. The area of medical oncology that I treat is thoracic malignancies, so I order a lot of these somatic tests. We were not first; as I said, Rush was the first to describe this, and there are currently 29 groups. But I put this list of logos up largely to show that what we're seeing is not uptake across just academic institutions; we're seeing broad uptake across healthcare systems that are interested in getting these data out of PDFs and structured in their systems. But the real value isn't me receiving those genetic results in my In Basket in our electronic health record. The real value is how this benefits all of our downstream processes. So the order goes out, and the variant data come back into Epic's database, which is called Chronicles. Because it's in our electronic health record, it's immediately available to patients, and we've already talked about putting patients at the center of this. So that means if I order a test on one of my patients, that patient can then take the result and show it to another provider, regardless of what electronic health record that provider is using, because they'll have those data on their phone. One step down from that, this is a piece that many of my colleagues really enjoy: we're able to leverage some of the tools within our electronic health record to do queries that we would otherwise need to do with one of our business intelligence tools. So here, I'm using a tool that's built into Epic just to do a quick search for KRAS alterations.
And because I'm doing this within our electronic health record, I can find not only the patients with those alterations, but also who their oncologist is and when they're going to come back and see us. Similarly, our electronic health record can do some rudimentary data visualizations through a tool called SlicerDicer, and because these are structured data, those work right out of the box, and all of my colleagues can access those data as well. And then finally, at the bottom, because these are structured data flowing through all of our database systems, they also flow downstream to our research systems. We have something called the Research Derivative, which is a copy of our electronic health record used for research, and a de-identified version called the Synthetic Derivative, which is linked to our biobank, BioVU. So by putting these data in structured form, not only are we supporting patient care, but these data flow all the way downstream. But how do we get to that lower right box? Ben Park, seen here in the lower right, is our cancer center director at Vanderbilt. We have a project called PROMPT; it's a clever acronym, and I can never remember what it stands for. Before we implemented these structured data, we basically had a team of data scientists, and when a new first-in-class drug became approved for cancer treatment, those data scientists would query multiple vendor systems, take that query, look against our electronic health record system to see which patients were still living and who their oncologists were, and then give us a report on which patients would potentially benefit from these new treatments after their FDA approval. This process took about one to two weeks, but we thought it was high value, and so we absolutely supported it. And for those of you who aren't in the somatic space as much:
again, I treat primarily lung cancers, and we have 22 approved drugs with targeted variants, and that number continues to grow every year. Once we moved to receiving structured data, this process became much, much shorter. When we know there's a new first-in-class drug approved, the clinical team can do these queries directly within Epic, and I'll just take you through that. One to two hours is probably even generous. I'm not going to go through the screenshots, but needless to say, it takes about five clicks to get to the report, and then the report looks like this: I want to find a gene, the gene is KRAS; I want to find that it's present and that the significance is pathogenic; and I want to make sure the patient is alive. And then, importantly for this example, which is the sotorasib example, this is a drug that specifically only works for the amino acid change G12C, so I can also call out that I need that specific amino acid change, and then I just click run. I've redacted this entire report, which makes it not super exciting on a slide, but what we get is 57 patients. This simple report, which again I can run from clinic, shows me those 57 patients, but it also shows me their care team, which means their oncology team. And we can send that out the same day that an FDA approval comes through. We're looking forward to leveraging these kinds of reports for other population management tools as well. So I said that we went live in 2021, but we, like others, have been doing testing for a very long time. So we went back and took all of the old XML and JSON and any other semi-structured format that we could find from any vendor, and we did a back-load. We took all of those data and did a tedious mapping from the vendor proprietary formats into the Genomics module standard, which is largely based on the HL7 clinical genomics standard.
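The point-and-click report described above boils down to a conjunction of filters over structured variant rows: gene, presence, significance, vital status, and the specific protein change. A toy sketch of that same logic follows; the record layout, sample data, and function name are invented for illustration, not the actual Epic report definition.

```python
# Toy version of the population query described above: gene is KRAS, variant
# present, significance pathogenic, patient alive, amino acid change p.G12C.
# Record layout and sample data are invented for illustration.
from dataclasses import dataclass

@dataclass
class VariantRecord:
    patient_id: str
    alive: bool
    gene: str
    present: bool
    significance: str
    protein_change: str   # short pHGVS form, e.g. "p.G12C"
    care_team: str

def g12c_candidates(records):
    """Patients who could be flagged when a KRAS G12C-targeted drug is approved."""
    return [r for r in records
            if r.alive
            and r.gene == "KRAS"
            and r.present
            and r.significance == "Pathogenic"
            and r.protein_change == "p.G12C"]

records = [
    VariantRecord("pt-001", True,  "KRAS", True, "Pathogenic", "p.G12C",  "Team A"),
    VariantRecord("pt-002", True,  "KRAS", True, "Pathogenic", "p.G12D",  "Team B"),
    VariantRecord("pt-003", False, "KRAS", True, "Pathogenic", "p.G12C",  "Team A"),
    VariantRecord("pt-004", True,  "EGFR", True, "Pathogenic", "p.L858R", "Team C"),
]

hits = g12c_candidates(records)
print([(r.patient_id, r.care_team) for r in hits])   # [('pt-001', 'Team A')]
```

Because the care team rides along on each row, the same pass that finds eligible patients also tells you whom to notify, which is why the report can go out the day of an approval.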
And so, earlier this summer, we back-loaded 12,000 results, and we currently have 16,000 results structured in our health record, which we think is the largest collection in the country. So how did we do that? We went all the way back; this was a test from an internal, very small panel around 2011, our melanoma panel, which just tested for BRAF V600E. Clearly that doesn't go natively into the electronic health record; there was no HL7 message for it. But we could map those results, create our own HL7 messages, and basically make it look to the system like we're our own reference laboratory. It's important, if you're going to do these kinds of projects, that you turn off all the alerting: we didn't want our providers to get erroneous messages from results that were 10-plus years old, and we also didn't want to alert the patients. One of the things we learned is that we also inadvertently requested a bunch of labs to be scheduled, and we were able to stop that pretty quickly. But this also works for more complex testing, and these are examples from Foundation Medicine. So this process also handles variants, fusions, germline versus tumor if you're calling that, MSI, TMB, etc. The goal is just to do a one-to-one mapping from the previous data standard into this Epic data standard, which again is largely based on the Clinical Genomics Working Group's open data standard. It creates an HL7 message in that HL7 format, and then we can fire that against our interface to ingest the data. So not only going forward, but you can also think about going backward to take all those data and leverage them as well. So how do I think we're doing? Well, on the first recommendation, establishing data standards, certainly not my work, but through the work of HL7 I think we've made tremendous progress. And second, integrating clinical decision support: I think we're pushing the envelope, and I think there are opportunities to do this even at the population level today.
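The back-load pattern, mapping a legacy result into an HL7-style message so the system treats you as your own reference laboratory, can be sketched as below. The segment layout here is drastically simplified and the field contents are hypothetical; a conformant ORU^R01 message carries many more fields, and a real interface would also need the alert and scheduling suppression Travis mentions.

```python
# Sketch of turning a legacy variant result into an HL7 v2-style ORU message
# so it can be fired at an EHR inbound interface. Segments are drastically
# simplified and illustrative, not a conformant message.
def legacy_to_oru(patient_id: str, panel: str, gene: str, variant: str) -> str:
    """Assemble a minimal ORU-style message string from one legacy result."""
    segments = [
        "MSH|^~\\&|BACKLOAD|VUMC|EHR|VUMC|20220601||ORU^R01|1|P|2.5.1",
        f"PID|1||{patient_id}",
        f"OBR|1||{panel}",
        # One OBX per discrete finding; alerting must be suppressed on replay.
        f"OBX|1|ST|GENE^Gene studied||{gene}||||||F",
        f"OBX|2|ST|VAR^Variant||{variant}||||||F",
    ]
    return "\r".join(segments)   # HL7 v2 uses carriage-return segment breaks

msg = legacy_to_oru("pt-001", "MELANOMA-PANEL-2011", "BRAF", "p.V600E")
print(msg.split("\r")[0])
```

The one-to-one nature of the mapping is the key design point: each discrete finding in the vendor XML or JSON becomes exactly one observation segment, so nothing is re-interpreted during the replay.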
So what are the next summits, what do we have to tackle next? Largely, the things that I had here were already brought up. Rex brought up this concept earlier about providing a standard way for patients to provide and transfer genetic and genomic information; I think that's certainly going to be key going forward. He brought up the idea of really putting these data in the hands of the patients, and I couldn't agree more. I'll take it one step further and say the patients are going to be consenting for large consortium studies, and healthcare systems are going to then be asked to share those data with that research consortium, and we need to figure out ways to do that. Again, the idea that once you test someone's whole exome or whole genome, you shouldn't have to run that again; you may need to rerun the bioinformatics pipeline, and certainly internally we've thought a lot about reanalysis and the role for reanalysis. And then finally, I think education is going to be key for both physicians and nurses. This has been a huge effort at our organization and everyone pictured here has been involved. I appreciate the opportunity to come and talk today, and this is my email address if anyone has questions. Outstanding. I do see one question, but I think we should defer it, the one from Jonathan Berg, until the discussion, if that's acceptable; it's not really a clarification question in my mind. So our last, but not least, speaker is Guilherme Del Fiol, who's Professor and Vice Chair for Research at the Department of Biomedical Informatics at the University of Utah. Guilherme, please. Thank you. Okay, let me share my slides. Can everyone hear me and see the slides? Yes, we can. All right. So I will talk about a project that has been funded by two grants from the National Cancer Institute, one of them focused on enabling a software platform for population-level genomic clinical decision support.
And the other one is a randomized multi-site trial using this platform for a specific use case. So the idea here, I think, aligns with some of what the previous speakers mentioned in terms of taking clinical genomics to the population level, instead of, or in addition to, point-of-care clinical settings. The original motivation for this project was the finding that about 13% of individuals are at elevated risk for familial breast and colorectal cancer, and most of these individuals are unaware of their risk. At the same time, there are evidence-based guidelines recommending genetic testing based on their family history, and these are the three main sources of guidelines for that. So the goals of the project were essentially to enable a population health management platform that allows us, through computable logic, to identify patients who meet evidence-based criteria for genetic testing, and then use a registry-based approach with patient outreach tools to manage the risk. We leverage family history that's available in the EHR; we do not try to collect or improve the collection of family history, we essentially try to use what's available. Another essential part of the strategy is to minimize primary care effort: primary care providers are kept in the loop of this whole process, but they are not asked to do basically anything in the process. In terms of patient outreach, part of the innovation is to try to use automated chatbots for this patient outreach process, which includes patient education and an offer of genetic testing; I'll talk about this more in a minute. And third, again, we are supporting the BRIDGE trial, which is about to finish enrollment at the University of Utah and NYU, and that's funded by the Cancer Moonshot program. For those of you who would like to read more details about the platform and its architecture, we call it GARDE.
This has been published in JAMIA earlier this year, and I'll give you an overview, getting a little more technical but at a very high level. I'm going to talk about the data flow. We have here on the left side the OpenCDS platform, which is an open-source clinical decision support web service that can be executed agnostically of any electronic health record system. In the middle we have a population coordinator that executes a number of tasks. And on the right side you have an EHR system; in our case, we have worked with Epic at the University of Utah and NYU, and also Cerner at Intermountain Healthcare. So, first step here, the population coordinator identifies a screening population, which could be a broad net; in our case it's basically everyone who meets a certain age range and has been seen in primary care at the University of Utah or NYU. The population coordinator retrieves data for these patients from the EHR and transforms everything into FHIR. Next, FHIR data is transmitted to OpenCDS in bulk, so we're running everything at the population level. OpenCDS has an interface based on the CDS Hooks standard, which is, in a nutshell, a clinical decision support services standard that allows an independent service to receive a request to analyze patients according to certain logic and then respond with the results of those analyses, all in a standard format; CDS Hooks uses FHIR as the data standard for both the requests and the responses. Again, everything is done in bulk for optimization of performance. The results go back; in our case we are running NCCN guidelines based on family history for cancer, and the results are then exported back into the EHR. In our case, for Epic, it uses Epic's population registry solution. So we load patients who meet criteria into the registry, and then the genetic counseling assistants use that registry functionality to manage the population and conduct patient outreach activities.
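The flow described here — package each patient's FHIR data and post it to an external decision-support service — can be pictured as building a CDS Hooks-style request. This is only a sketch of the request shape; the hook name, the prefetch key, and the resource content are illustrative assumptions, not GARDE's or OpenCDS's actual payloads.

```python
# A minimal sketch of the population coordinator's job: bundle a patient's FHIR
# resources into a CDS Hooks-style request body for the CDS service.
# Hook name ("patient-view") and prefetch key ("bundle") are assumptions.

def build_cds_hooks_request(patient_id, fhir_resources):
    return {
        "hook": "patient-view",              # illustrative hook name
        "context": {"patientId": patient_id},
        "prefetch": {                        # FHIR data shipped with the call
            "bundle": {
                "resourceType": "Bundle",
                "entry": [{"resource": r} for r in fhir_resources],
            }
        },
    }

req = build_cds_hooks_request(
    "123",
    [{"resourceType": "FamilyMemberHistory", "relationship": {"text": "aunt"}}],
)
```

Running this in bulk, as Guilherme describes, just means constructing and sending many such requests per batch instead of one per clinical encounter.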
This is just a brief example of the logic that has been implemented. This is for breast cancer; we also implemented it for colorectal cancer. It's just an excerpt, to give you an idea of what's behind the scenes. This is probably somewhat outdated right now and it's oversimplified; there's a lot more logic under the covers. And this is a screenshot of what the registry dashboard looks like in Epic. You can see a list of patients in the population; that's number one. Number two, the ability to filter patients according to any kind of criteria; for example, you can filter by clinic and work on patients that are assigned to a specific clinic or a specific provider. Number three, you can track the outreach status of specific patients, so you can see, for a list of patients, who has not received any outreach message yet or which patients responded, etc. And last, number four, when you select a patient you can drill down: genetic counseling assistants can look at specific data points that are relevant to the outreach and management process. Like I said, this platform is supporting the BRIDGE trial, which is basically comparing two approaches for patient education and outreach, with the goal of offering genetic testing to patients who meet family-history-based criteria. The two arms of the study are usual care, which basically involves a genetic counseling assistant making phone calls to those patients one by one, providing some education over the phone, and trying to schedule a genetic counseling appointment, so it's very involved and effortful; and the alternative approach in the second arm, an automated chatbot, where the chatbot provides education about genetics and then, at the very end of the chatbot conversation, offers the option to receive genetic testing. I'll talk a little more about that in a minute.
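The slide excerpt Guilherme shows is family-history eligibility logic. A deliberately oversimplified sketch of one such rule — any relative with early-onset breast cancer — where the age threshold and the record fields are illustrative assumptions; real NCCN logic has many more criteria:

```python
# Oversimplified sketch of a family-history eligibility rule of the kind shown
# on the slide. The threshold (50) and field names are assumptions for
# illustration; actual NCCN criteria are far more extensive.

EARLY_ONSET_AGE = 50  # illustrative cutoff

def meets_breast_cancer_criteria(family_history):
    """True if any relative had breast cancer with documented early onset."""
    return any(
        rel["condition"] == "breast cancer"
        and rel.get("age_at_onset") is not None
        and rel["age_at_onset"] < EARLY_ONSET_AGE
        for rel in family_history
    )

fh = [{"relative": "aunt", "condition": "breast cancer", "age_at_onset": 42}]
```

Note how the rule only needs one qualifying assertion to fire, which is why incomplete pedigrees can still be sufficient for screening.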
So this is the workflow for the standard usual-care outreach. Again, we run our population-based algorithm using GARDE on data that's available in the EHR; patients who meet NCCN criteria for breast, ovarian, or colorectal cancer are added to the registry. A genetic counseling assistant does patient outreach through the patient portal and phone calls, and they try to set up a genetic counseling appointment. At the end of this whole process, the genetic counselors write a note in the EHR, they copy the primary care providers on the notes with recommendations, and they may add findings to the patient's problem list. That's usual care. In the alternative workflow, we have two chatbots. For the pre-test chatbot, again we run the same population-based algorithm and the patients are added to the registry, but this time, instead of calling patients, patients receive a message through the patient portal with a link to a chatbot. They launch the chatbot and interact with it, and at the end, they are offered an option to test. If they decide to do testing, they receive a kit in the mail, collect the sample at home, mail the sample back to the lab, and an outreach note is written back into Epic with the patient's decision and the transcript of that conversation. Then the post-test chatbot: once the results are received, a genetic counseling assistant reviews them. Patients who test negative receive another message in the portal with a chatbot link and go through a different chatbot that explains the implications of that negative result. For patients who test positive or have a VUS, there's a genetic counseling appointment, so we're basically back to the usual-care workflow; but most patients test negative and are managed in an automated fashion using the chatbot. Here again, in both cases, a note is written into the EHR with clinical recommendations; for the patients who test negative, that's an automated note.
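The post-test routing just described is a small decision rule: negatives stay automated, positives and VUS results escalate to a counselor. A sketch of that branching, with the labels as assumptions of how the workflow states might be named:

```python
# Simplified sketch of the post-test routing described in the workflow:
# negative results stay in the automated chatbot path with an auto-generated
# note; positive or VUS results route back to a genetic counselor.
# The state labels are illustrative, not the system's actual terms.

def route_result(result):
    if result == "negative":
        return ("post-test chatbot", "automated note")
    if result in ("positive", "VUS"):
        return ("genetic counseling appointment", "counselor-written note")
    raise ValueError(f"unknown result: {result}")
```

Because most results are negative, this split is what lets the chatbot arm scale: only the minority of positive/VUS cases consume counselor time.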
And for patients who test positive or have a VUS, the note is written by a genetic counselor. So, a little bit about the chatbot and what it covers. These are the main topics that the chatbot goes through, and I'm just going to go over an animation here that shows how it works. You click on the link in a patient portal message, and it launches the chatbot. You see that it builds item by item, so the patients can control the pace of the conversation; they can keep going. Here I can ask for more information: if I would like more details, I'll click "Tell me more"; if I'm satisfied with the content and understand, I'll just click "Got it" and move on. And then, walking through here, at the very end it asks whether I would like to test, and I could say yes, no, or I'm not sure yet. If I click "I'm not sure yet," the genetic counseling assistant will give a call and address any concerns or questions. This is a paper about a pilot with seven participants that preceded the trial, showing a few interesting findings; we looked in detail at how patients interacted with the chatbots. We found that 70% of people who completed the chatbot agreed with testing. And interestingly, the chatbot allows free-text questions and matches them with answers in a question-and-answer bank, but patients rarely used that functionality, so it seems like the script in the chatbot had broad enough coverage that patients didn't really need to ask open-ended questions. Okay, so this is going to show a little bit on the BRIDGE trial and some preliminary results. We ran the algorithm against 445,000 patients at University of Utah Health and NYU; about 5% of the patients met NCCN criteria across the two sites, a random sample was extracted from those, and then patients were split into one of the two study arms. Like I said, trial enrollment has been completed at the University of Utah and is about to complete at NYU.
So far we have over 3,000 patients who received outreach in one of the two study arms. I'm not going to make comparisons here because the trial hasn't been completed yet, so we're just looking at overall, you know, PRISMA-type flow numbers. 23% of the people who received an offer to use the chatbot or received the phone call for genetic counseling completed the entire process: either they completed the chatbot interaction, or they scheduled a genetic counseling appointment and had that appointment completed. Of those patients, 65% agreed to do genetic testing and had it ordered; 5% of those who tested, tested positive for a pathogenic variant, 50% were negative, 44% had a VUS found, and we still have 8% of those patients with results pending. Some lessons learned in this whole process. We found chatbots do seem to be a scalable approach for patient outreach and engagement; it really minimizes manual effort. On data availability: family history, we all know, is incomplete and inaccurate in the EHR, but when you do have a family history assertion in the EHR, we found that it's largely correct; rarely does it lead to false-positive patient identification. On the clinical workflow, as we mentioned, we barely touch primary care in this approach. We tested interoperability with two EHR systems at three institutions; at Intermountain Healthcare we demonstrated that this works with the Cerner system. But we found health disparities: there are significant disparities in family history documentation across different patient populations. There are gender disparities, ethnicity disparities, and also disparities by low socioeconomic status. A problem with the chatbot is that it relies on smartphone technology, and we know that 75% of the people in low-SES and rural populations don't have a smartphone. And we also know that our approach needs to be adapted for EHRs that are used in low-resource settings. Thank you. And thank you. I apologize, my Zoom and computer both crashed, so I'm reduced to a laptop.
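The funnel numbers quoted above can be chained as back-of-the-envelope arithmetic, taking the percentages exactly as stated in the talk (these are preliminary overall figures, not arm-by-arm comparisons):

```python
# Back-of-the-envelope funnel using the round numbers from the talk.
# Integer arithmetic keeps the sketch deterministic; real counts differ.

screened = 445_000
met_criteria = screened * 5 // 100   # ~5% met NCCN criteria across both sites
outreach = 3_000                     # patients who have received outreach so far
completed = outreach * 23 // 100     # 23% completed chatbot or counseling visit
tested = completed * 65 // 100       # 65% of completers had testing ordered
```

Even with a low-touch chatbot, the compounding drop-off at each stage (5% → 23% → 65%) is why population-scale screening needs automation: the top of the funnel is enormous relative to the counselor-facing bottom.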
But nevertheless, I did hear most of that. I believe I've lost my question thread because it was in chat, but Travis, I do believe Jonathan Berg had a question for you, if we may, in our discussion. Were there clarifying questions for Guilherme before we go on? I don't see any hands. If that's the case, let's go back, Jonathan, to your question for Travis. Travis, you've read it, so... I'm happy to summarize. Jonathan asked, in the data back-load process, what kind of clinical validation did we think about, which is a fantastic question. This is something I really championed at our organization. As we did the mapping, there was certainly a chance that we would get something wrong. And so we loaded all of those messages into a test environment and then did a PDF-versus-load comparison side by side. I think we did that on about 15% of our data set and did not find any errors in that population, and so we then felt comfortable proceeding with the rest of the load. But that was actually probably the longest piece, getting those manually reviewed, which I guess was about 3,000 results as I recall. And Bob Dolin, I recall you had a question about terminology; Travis said he would answer it in his presentation. Did you have a follow-on to that, or did you feel he addressed your question? Okay, well then, let's move on to the questions for Guilherme. I see Heidi Rehm has both a hand up and a question in the chat. Heidi, please go ahead. Nice talks. So my question to Guilherme is around, well, two questions. One is, it sounds like this was maybe a research study, so insurance coverage wasn't the issue, but do you anticipate that if you have no involvement of a genetic counselor, like a live human one, would insurance companies not necessarily pay for the testing?
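The validation Travis describes — draw a random ~15% sample of back-loaded results for manual PDF-versus-record review — is easy to make reproducible. A hedged sketch, with the seed, fraction, and result IDs as illustrative assumptions:

```python
import random

# Sketch of the audit-sampling step described above: pick a reproducible random
# ~15% of back-loaded result IDs for side-by-side PDF-versus-record review.
# The seed and ID format are assumptions for illustration.

def draw_validation_sample(result_ids, fraction=0.15, seed=42):
    rng = random.Random(seed)  # fixed seed so the audit list is reproducible
    k = max(1, int(len(result_ids) * fraction))
    return sorted(rng.sample(result_ids, k))

ids = [f"r{i}" for i in range(100)]
sample = draw_validation_sample(ids)
```

Finding zero errors in the sample then supports (though never proves) that the full mapping is sound, which is the judgment call Travis describes making before loading the rest.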
That's one question, and the other one is why VUSes are being returned in a screening context. One suspicion I have, and what we're experiencing in our healthcare system, is that insurance won't cover screening tests but will cover diagnostic tests; but if you order the diagnostic test, you get VUSes back. So it's like a practical issue of insurance coverage, and they sort of tie into each other. I'm probably not the best person to answer these questions; I would hope our genetic counseling collaborator Wendy Kohlmann, who is the director of the genetic counseling program at the Huntsman Cancer Institute, would be here. We do know that, not only because it's a research project, but also because the Huntsman Cancer Institute at least covers that kind of genetic testing for free for patients who are uninsured. But I'm not sure what the implications are in terms of there being no pre-test counseling. Thanks. Thanks. Go ahead, please. Chris, since your Zoom crashed you might not see all of the questions in the chat. I certainly do not, so please save me. Yes. So, Geoff Ginsburg is asking again about the chatbot: whether it's available in languages other than English, and how you handle individuals who are not technology savvy or on the other side of the digital divide. It's a multi-part question, along with any experience with the chatbot in underrepresented populations. Yeah, great questions. The chatbot is offered in English and Spanish currently, and the patient portal message that invites the patient to use the chatbot is also language-specific, so if the preferred language documented in the EHR is Spanish, they'll receive the messages in Spanish and use the chatbot in Spanish. The chatbot is currently undergoing a cultural adaptation for patients of multiple ethnic backgrounds and also low socioeconomic status.
And we have a renewal proposal for our platform, the GARDE platform, that involves adaptation to really address health inequities. One of the major problems, as I mentioned in the talk: we did an analysis of family history documentation and found substantial disparities, not only in family history documentation but also in patient identification, all over the place, across all sorts of different categories of patient populations. We don't know what the underlying factors are, but the fact is that family history documentation is much more sparse for certain groups. So we are trying to use natural language processing and partial matching criteria; for example, patients who do not have an age of onset documented in the family history we would still match against NCCN criteria, and then try to use other approaches to collect that information, for example using a chatbot to basically ask, do you know when your aunt had breast cancer, or using patient navigators for that. We need to use some more deliberate approaches to try to reduce those gaps. So there's another question about language preference for those who used or didn't use the chatbot: what is the difference in chatbot uptake depending on the primary language? Yeah, great question; we have not looked into those outcomes yet. Probably starting within the next month or so, once the trial enrollment is fully completed, we can drill down into those analyses. Did you capture reasons, so there are 30% that didn't use the chatbot, did they give a reason why? Did you try to capture why they weren't interested in using the chatbot? We do not capture that directly. Well, there's a survey at the end of the whole process; they're invited to complete a REDCap survey, and that asks questions about usability, satisfaction, and also reasons for testing or not testing. We have not analyzed those data yet, so stay tuned.
And I think the last question that's specific for you: is there a difference in the percentage of people who agreed to testing, depending on whether they consented via the chatbot or the genetic counselor? That is the primary outcome of the trial, and we cannot analyze those data yet; again, hopefully within a month or so. I think, Chris, that takes care of all the questions I saw in the chat. And I appreciate your saving me, Carol. I do see a question in the Q&A from Sandy Aronson, just for Travis. He says, it's awesome the way you loaded historical variant data; I have two questions. Would it be possible to provide a sense of the resources that were required to accomplish this? That's the first question. Yes, Sandy, it's a great question, happy to address that. Internally, I joke that this was a labor of love. Largely, in the meetings, it was me and one of our data scientists, who has been responsible almost completely by herself for doing all of our genomic queries for all of the vendor systems and for our internal systems, whenever we did clinical trial feasibility, for instance, and wanted to know how many patients might be able to enroll in a new clinical trial, along with usually two representatives from our electronic health record vendor. That piece, I think, was as much for their benefit as ours; they were very interested in basically testing the edges of their data model, and we had a really big sample that we were looking to push in. I think that was mutually beneficial and hopefully helps things move forward for others going forward. But I would say we probably did three hours per week of meetings for the better part of two months, just to do the column-to-column piece, and then, as I talked about before, the clinical validation was several months.
We got it set up in our test environment because, as you know, it just takes a long time to look at a PDF and then make sure that it was incorporated into the electronic health record correctly. Your second question about whole genome sequencing is who I feel those variants should be handled by. The way we're organized here is we do have an internal whole genome sequencing group. They do the sequencing; you can either order the whole genome, or you can do a panel-based test where we only release the panel results, but they're leveraging the variants and the VUSes from other tests as our molecular pathology team is trying to call variants on our internal tests. So for us, that's largely driven by molecular pathology, but it still fits within our genomic data strategy of trying to save as many of those upstream files as we can, because you don't know when your internal molecular pathology team might need them to inform the variant calling on the patients. It's a great question. Thank you, Travis. Our question queue is diminishing, but Ken, that was an outstanding summary of what happened last year in the virtual summit. I guess the plans were, how do we phrase this, holistic and clear and desirable, but the obvious question is what kind of progress do you think has been made on some of those, particularly for the engagement of diversity in education and patients and other modalities. Across NHGRI, they're actively engaged in trying to address this, and you're starting to see funding opportunities come out that specifically highlight how you're going to engage diverse populations in the tools and resources that you're developing. Again, we have a new office that's been set up that's actually addressing this very topic too. So you're seeing NHGRI actually being part of this, both across NHGRI and outside of NHGRI at NIH.
You're seeing other groups such as HL7 looking at how to incorporate this information. You're seeing standards groups, you're seeing these discussions happening, and people are actually investing in developing these tools and resources to include diversity and inclusion of underserved populations in their capabilities. So I'm hopeful, because of the evidence I'm seeing. The other issue we need to address is, you know, how we are going to work amongst ourselves in collaborations to help facilitate a more standardized way of developing these tools so that they can be deployed in this heterogeneous environment and help diverse populations; because right now everybody's doing their own piecemeal approach, and you really need to start looking at how we can collectively do this in a manner that addresses this need. So I don't know if I answered your question or not. No, that was very helpful, indeed. I see a hand from Mark Williams. Yeah, thank you. I just wanted to add on to what Ken said. First of all, in terms of framing Genomic Medicine XIII, it was really to try and set out a research agenda for the field of informatics, and so I think this is really a critical aspect that's going to have to be incorporated into research, and I think the message that we need to be doing this came across loud and clear. The other thing that was interesting to me is that it's not just that we don't have information about how diversity is being impacted by the systems that we have; there are plenty of publications that are showing that even things like clinical decision support algorithms and things like that.
These can incorporate inherent biases that, you know, lead to differential performance, but we don't have methods really to begin to dissect that. So we not only have a results deficit, we have a methodologic deficit in terms of how we actually address this. One of the other points that was a little deeper in the weeds, so Ken didn't present it, is the idea that there are going to have to be new research methodologies, particularly in the informatics space, that try to get at these specific issues, because there's so much that's just baked into our systems that we're having a hard time disentangling. And Mark, I appreciate that expansion. There was a question for Guilherme in the Q&A, and I see Guilherme is typing his answer, but why don't you tell us. Yes, thank you. So the question is whether there were any specific security requirements to use OpenCDS to identify patients, for the GARDE platform, which is based on the open-source OpenCDS platform. We had to go through security clearance both at the University of Utah and at NYU. In terms of deployment, there are a few options. At the University of Utah we have it hosted internally, inside our firewall, so we have a virtual machine that hosts the software; data never leaves our institution, everything is run internally. NYU uses cloud-based solutions for their EHR, they are really a cloud-based shop, and they preferred to deploy GARDE onto one of their cloud-based servers. However, the way the architecture works is that GARDE never touches any of the NYU EHR software infrastructure. De-identified data is sent in a flat file from NYU into GARDE; GARDE analyzes all that de-identified data and sends the results back to NYU, and then they re-identify and write the results back into Epic. So that approach allows you to deploy this solution in a cloud-based way, somewhat addressing the security concerns of, you know, external software having direct access to identified data within an institution.
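The NYU deployment pattern just described — tokenize identifiers before data leaves the firewall, analyze externally, re-identify locally — can be sketched in a few lines. The token format and field names are illustrative assumptions, not GARDE's actual implementation:

```python
# Minimal sketch of the de-identify / re-identify round trip described above.
# The token scheme and record fields are assumptions for illustration.

def deidentify(rows):
    key = {}    # token -> real MRN; this map never leaves the institution
    out = []
    for i, row in enumerate(rows):
        token = f"T{i:05d}"
        key[token] = row["mrn"]
        out.append({"token": token, "family_history": row["family_history"]})
    return out, key

def reidentify(results, key):
    """Map analysis results keyed by token back to real identifiers."""
    return [{"mrn": key[r["token"]], "meets_criteria": r["meets_criteria"]}
            for r in results]

rows = [{"mrn": "12345", "family_history": ["aunt: breast cancer, age 42"]}]
deid, key = deidentify(rows)
back = reidentify([{"token": "T00000", "meets_criteria": True}], key)
```

The security property rests on the key map staying inside the firewall: the external service only ever sees tokens and clinical attributes, never direct identifiers.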
Thank you, that's extremely helpful. I see a hand from Teri Manolio. Thanks, Chris. You were talking about family history data and how they're incomplete, and not randomly incomplete but incomplete in parallel, oftentimes, with health disparities populations. But you also said that they may be sufficient in large numbers, which I sort of take to mean, if you see something it's probably real and it's probably useful; maybe could you expand a little bit on what you meant by that point? Yeah. So, there have been several studies looking at the quality of family history in the EHR, and we don't have full pedigrees in there, by no means is that guaranteed. But for the analysis of evidence-based criteria such as NCCN's, all you need is for the patient to meet one of the criteria to justify offering genetic testing. One example is a patient who has an aunt with a history of breast cancer with an early onset; that's sufficient to meet NCCN criteria, and you don't need to know about all the other family members and other cancers for that. No, that's very helpful. You know, I guess as a former medical student who was traumatized by having to collect three-generation family histories and pedigrees, I think we do ourselves a disservice by suggesting that that's the gold standard. And I know within the eMERGE Network we looked at family history information early on; only 20% of the medical records had any mention at all of any family history. And, you know, hopefully things have gotten better, but do you have any ideas, and this is a toughie, lots of people have tried to address it, but do you or do others have ideas of how we can possibly improve that?
I think we can say that things are getting better. We have a paper on that in JAMIA as well; it's a qualitative mixed-methods analysis of family history collection in primary care, where a lot of the family history is collected. And we found a few interesting patterns, one of them that seems to generalize to other institutions as well. Medical assistants seem to be the main individuals collecting family history; that's part of the standard workflow. They prioritize according to specific areas that are basically high priority and are available in the family history section of the EHR, so you have pre-populated dropdowns and items that are easy to collect, and they can easily go through those items; as you'd expect, breast cancer, colorectal cancer, and pancreatic cancer are all on there. The other pattern we found was an increased use, at least at our institution it became kind of standard of care, of patients being asked to complete pre-visit questionnaires in the patient portal, and those include a short family history questionnaire. And of course, we believe this helps exacerbate disparities, because very likely patients who have better access to technology will be more likely to complete the pre-visit questionnaires and therefore have better family history documentation, so that's one of the issues. One of the things we're planning to do is to see if there's a relationship between patients completing those pre-visit questionnaires and actually being identified later on. So one of the solutions might be to try to promote the use of those pre-visit questionnaires more and make those questionnaires easier to access for patients who do not have access to broadband internet. Great, thank you. Thank you, and I see HD has commented on their initiatives to link parent and child medical records, as noted in the chat. Oh, Geoff, your hand is up, sorry. That's okay, thanks. Great session.
This question is mainly for Travis, but anybody could jump in. First of all, I'm wondering whether any of your systems are returning genetic information directly to patients, whether that's an option, given the discussion we had in the last session. And the question is really about establishing the clinical utility of return of results: I'm wondering, and maybe you said this and I missed it, is there a systematic way to capture outcome data downstream of the decision support that's being given to the provider, so we know what's happening behaviorally, decisions that are made or not made, and ultimately what the economic consequences of returning results might be? So, both great questions and certainly things that we think about here. Institutionally, we are in favor of returning results immediately to patients when available. And so, whether it's an internal or external result for genetic testing, those results, with very few exceptions, are made immediately available to the patient in our patient portal at the exact same time that they come to me. What we've typically advised our clinicians is just to make sure you have that conversation with the patient, that they may receive the results before you have a chance to review them, and to set the expectations accordingly. We have not found that that has had as many negative ramifications as I think many of us expected, so I've certainly shifted my perspective on that. And I think our patients generally appreciate having that information, for exactly the same portability reasons that were brought up earlier. And then the second question is really about clinical utility and how we measure the efficacy of clinical decision support. We have a whole myriad of dashboards that we use to see how often we're alerting clinicians. The harder part is what action they take. And the challenge here is whether the action is incorporated into the alert itself.
If it is, it's pretty easy to track. But when we dive into alerts that we think should have a high probability of changing behavior and find that they're just dismissed, if we actually look at the charts, not infrequently the alert is dismissed but the clinicians take the action outside of the alert. So we may recommend a different antiplatelet agent, and they choose yet a third antiplatelet agent. That shows up in our dashboard as a miss, that they've dismissed the alert, but really they took the appropriate action. So that continues to be a challenge that we have not yet cracked. We are most mature in this space with pharmacogenomics. Even for some of our guideline-based testing, around getting patients with metastatic prostate cancer germline tested, we're still trying to develop dashboards to make sure we follow those. I think you need an institutional strategy to do so, but if you dig in, you can at least get some data; the data may not be perfect. Thank you. And maybe the last question from Mark Williams before Carol leads the wrap-up. Yeah, I wanted to reinforce the point that Travis made about the simultaneous release of information to patients and clinicians. When we initiated our MyCode return of results over five years ago, we built in a delay, which was not consistent with our institutional standard of simultaneous release of lab and imaging data to patients and clinicians, because we said, this is new, we really just don't know how patients are going to react, and we want to be conservative. Then we had a clinician advisory committee meeting about two years ago, and the question was raised: why do we have this delay, why aren't we releasing this information at the same time as we release all of our other information? And we said, well, remember when we initially talked about it, we thought that was a good idea. And they all looked around and said, well, that was kind of dumb.
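The "dismissed but acted outside the alert" problem described above can be sketched in code: join the CDS alert log with subsequent orders and reclassify dismissals that were nonetheless followed by an appropriate action within a time window. This is a minimal sketch; the field names, the antiplatelet example, and the 24-hour window are all illustrative, not any institution's actual schema or policy.

```python
from datetime import datetime, timedelta

# Illustrative CDS alert log: the clinician's in-alert response and when
# the alert fired. Field names are hypothetical.
alerts = [
    {"patient": "P1", "drug_class": "antiplatelet",
     "response": "dismissed", "time": datetime(2024, 1, 5, 9, 0)},
    {"patient": "P2", "drug_class": "antiplatelet",
     "response": "accepted", "time": datetime(2024, 1, 5, 10, 0)},
]

# Subsequent medication orders pulled from the chart.
orders = [
    {"patient": "P1", "drug_class": "antiplatelet",
     "time": datetime(2024, 1, 5, 9, 20)},
]

def classify(alert, orders, window=timedelta(hours=24)):
    """Label an alert outcome, catching dismissals followed by action."""
    if alert["response"] == "accepted":
        return "accepted"
    # Did an order in the recommended drug class follow the dismissal?
    acted = any(
        o["patient"] == alert["patient"]
        and o["drug_class"] == alert["drug_class"]
        and alert["time"] <= o["time"] <= alert["time"] + window
        for o in orders
    )
    return "dismissed_but_acted" if acted else "dismissed"

outcomes = [classify(a, orders) for a in alerts]
print(outcomes)  # ['dismissed_but_acted', 'accepted']
```

A dashboard built on this kind of reclassification would count the first alert as a success rather than a miss, which is exactly the gap Travis describes.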
So we now also release it simultaneously, and when we talk with our patients as part of our ongoing engagement, they say, hey, we hear bad news from you all the time, whether it's an imaging study or a lab study; we can handle it, so don't treat this as exceptional. So I think this is a really important point in terms of where our level set should be regarding the sensitivity of this information. I couldn't agree more, Mark. We're trying to get to a place where genetic and genomic information is just like any other lab. Thank you, Carol. I appreciate your rescue earlier, and I'm asking you to do it again. If you would graciously lead us through some wrap-up discussion, that would be helpful. Sure, but I think I have one more question I'd like to hear all the panelists' perspectives on, and that is going back to the patient-centric aspect of this and how to deliver this information to a patient. Ken, this was a topic that was part of the research agenda meeting. And Travis, you mentioned in passing that there's an interface that patients can use. I'm really curious about each of your perspectives on how to go about building interfaces that are going to be interpretable by patients. How do we go about actually building an interface that a patient can log into, see the information, and really grasp and understand what it is they're seeing? So maybe we can start with Ken, then go to Travis, and then Guilherme. So, from my perspective, this is happening already; you're seeing the private sector starting to develop these apps, and companies are being stood up to do this very thing. I think the key point about these tools being developed is making sure there's a consistent message being provided that patients can understand and work with their clinicians on.
In addition to that, we really don't have a choice but to move forward with those kinds of efforts, because the information blocking provision of the 21st Century Cures Act puts this additional responsibility on us to provide this information to patients in a reasonable time, with certain exceptions. But again, on the research side, I think the focus should be on understanding what the appropriate language and documentation is that needs to be provided and could be consistently shared, because these tools are being developed. Carol, it sounds like we're all very aligned on this. I think we can take a lot of information from the OpenNotes efforts showing that patients are increasingly knowledgeable. We have to be sensitive to health literacy, but I think we have to release the results immediately to patients whenever possible. Then the next grand challenge is how we support patients who are interested in investigating those things further; I think there are opportunities for centralized learning, etc. Genetic counselors certainly know much more about that than I do, and I don't think we'll ever be able to replace the personal interaction of a genetic counselor reviewing the report. So I think we need to decrease the barriers and delays in getting patients in touch with genetic counselors to help with that interpretation. I think we need to be all for making results immediately available to patients and making those results easily portable, meaning that patients can take those results to another health care system, either by request or by transfer, or to a research consortium if they want to participate in All of Us or another large project. Yeah, I think there are lots of opportunities for research into ways to better communicate results and educate patients about what they mean.
Among the potential ideas in the chatbot space, you could imagine any genetic result being coupled with access to a chatbot that patients can use anytime. There's some evidence outside genetics that chatbots have advantages over static materials like handouts or static web pages, because patients control the pace and also the depth of the information; they can choose which topics to look at in more depth. There are also ideas like transferring to a human at some point: a patient might say, I don't understand this, can I talk to someone? And a genetic counselor or genetic counseling assistant could call. So yeah, I think there's lots of opportunity for research in this area, and in the realm of communication as well, on better ways to communicate results using these kinds of technologies. Great. Well, in wrapping up, I want to thank Chris, my co-moderator, for working on this session, and all the presenters; I really appreciate the insights each of you brought to this session on IT infrastructure. Ken laid out the environment and the background based on Genomic Medicine XIII and the research agenda for clinical genomic informatics, which really touched on many of the topics we've heard over and over today: things like patient-centered approaches; the importance of taking diversity and the needs of underserved populations into consideration; semantic frameworks and data standards; and things that promote adaptability, interoperability, and data reuse. All of those things are so key to making genomic medicine and genome-informed learning healthcare systems possible. So I think Ken really laid out a lot of the overarching themes that have to be paid attention to in this space. And in the two examples we got about integrating genomic data into EHRs, the use of HL7 and FHIR standards came through as really key to making these systems possible.
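As a concrete illustration of the HL7 FHIR standards mentioned above, a structured pharmacogenomic result might be carried as a FHIR Observation resource along these lines. This is a minimal, hand-built sketch: the patient reference and values are invented, and a real implementation should conform to the HL7 Genomics Reporting implementation guide rather than this fragment; the LOINC code shown should likewise be verified against that guide.

```python
import json

# Illustrative FHIR Observation carrying a structured genotype result.
# Contents are invented for illustration; real resources should follow
# the HL7 Genomics Reporting implementation guide.
genotype_observation = {
    "resourceType": "Observation",
    "status": "final",
    "category": [{
        "coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "laboratory",
        }]
    }],
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "84413-4",  # "Genotype display name" (verify against the IG)
        }]
    },
    "subject": {"reference": "Patient/example"},
    "valueCodeableConcept": {"text": "CYP2C19 *1/*2"},
}

print(json.dumps(genotype_observation, indent=2))
```

Because the result is structured rather than a PDF, both clinical decision support and patient-facing applications can consume the same resource, which is part of why these standards matter for the systems discussed in this session.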
And one of the very specific examples showed how incorporating genomic data and having structured genomic results available drives both advances in, and speed of, delivery of new information from the genome to the healthcare provider, and does so in a way that serves multiple communities, the clinician, the patient, and the researcher, facilitating that workflow, while also highlighting the continued need to better educate physicians and nurses in genomics so that information can be initiated and acted on. And then the population-level example showed, again, the value of having genomic data integrated into the EHR to identify populations of patients with shared characteristics for outreach and follow-up, for counseling or both, and how to do that in a way that minimizes the effort required from primary care providers who are already so pressed for time. So I think these were two really great examples that illustrated many of the general ideas that Ken laid out from the previous genomic medicine workshop. I know I missed a lot of very specific points, but to me the session was very valuable in going from the big picture down to very specific examples. And again, I want to thank the presenters for all of their work on this session. Hear, hear. Thank you. I think we have completed the session. I know there's been discussion in the chat of ending early, which I guess we'll do, Terry. Great. Thanks. Thanks so much, Chris. Yes, we'll adjourn, take one minute off the break, and reconvene at 2:45. I need to start my video, there we go, so you can see me, but the message is the same: we'll be back at 2:45. Enjoy your break, and thanks to everyone for two really great sessions. Thank you.