Okay, we are back and we're going to move on to session two, the need for research and advanced technologies to support genomic medicine. Our co-moderators for this session are Carol Bult and Jim Cimino. Dr. Bult is a professor and the Knowlton Family Chair at the Jackson Laboratory Mammalian Genetics Center in Bar Harbor. She maintains the Mouse Genome Informatics database and is also the PI of the Alliance of Genome Resources. Our other moderator, Jim Cimino, is a professor of computer science, a professor of medicine, and director of the University of Alabama at Birmingham's Informatics Institute. For those of you who are of the NIH persuasion, you'll recognize Jim from his many years of service as chief of the Laboratory for Informatics Development at the NIH Clinical Center and his work at the National Library of Medicine. So Carol and Jim, I'll turn it over to you. Oh, thank you, Mark. So I think I'm just going to jump in with some short introductions of our speakers, and maybe add a few things that are not in the handout. First is Gil Alterovitz. He has a PhD in electrical engineering and biomedical engineering. He is a fellow of the American College of Medical Informatics, a fellow of the American Medical Informatics Association, and currently director of the Biomedical Cybernetics Laboratory and professor at Harvard Medical School. Also relevant here, he's co-chair of HL7's clinical genomics working group, and he works in precision medicine and genomic machine learning. Second, we have Lisa. And I asked her how to pronounce her name, so I'm going to mangle it. Lisa Bastarache. Bastarache. Bastarache, thank you. She's an assistant professor of biomedical informatics at Vanderbilt and the Center for, I'm going to have to check my notes here, the Center for Precision Medicine. She has a master of science from the University of Chicago in linguistic anthropology. She develops and implements new PheWAS and phenotyping methods, and most recently has published on phenotyping methods for COVID-19.
And her area of research is using EHRs to find cohorts and phenotypes. And then last but not least, Kevin Johnson. He's in here somewhere, sorry. So Kevin is an MD from Hopkins. He also has a master's in biomedical informatics from Stanford. He's also a fellow of the American College of Medical Informatics and a fellow of the American Academy of Pediatrics. He's known for researching and evaluating clinical information systems to improve patient safety and physician adoption. He has a very well-known podcast called Informatics in the Round. I highly recommend you check it out. And he's also the director and producer of the award-winning film No Matter Where, on electronic health records. One of the few informaticians with an IMDb page. Okay, so we're gonna start with Gil. All right, yeah, thanks Jim for that introduction. And I really have enjoyed the talks so far. It's been really great. So I was asked to talk about intelligent automation at its finest: how to use genomic-based clinical informatics to change the clinical culture that supports genomics. So let's get started. I think many of you may have seen this already, but I just wanna point out that there are a number of different genomics use cases that can leverage artificial intelligence and automation, which have been put together in the HL7 domain analysis model document for clinical genomics that was published. So that may be a good place for those who are interested in getting started. But what I wanted to do today is really talk about how the field is moving forward in a number of different ways, take you through a little story on that, and see where things are moving and what some of the ideas are on that front. So when we look back, and by back I mean maybe four or five years ago, there were really few defined or accepted standards. There were a few different efforts here and there.
And then there was this need to try out standards, to pilot them, to define the standards. And the good news is that that has been done. That has been successful. There are now standards in a number of these different areas. We've learned a lot from the piloting, and a number of things have been developed to make things patient centric. So that's the good news. Maybe the not so good news, or the other aspect, is that there's only so much we can learn from piloting. There's only so much we can continue, of course, to refine standards and so forth. But there's really a need to take the next step. And in tracing and thinking about the areas that we've worked on and are now focusing on, it really got me thinking: what are some of the steps that we're working on, that others we're collaborating with are doing, that can really help us take the next step? So scalability is really the important key here, as opposed to piloting in small circumstances. Scalable workflow and process solutions, those are really the areas that need development now. It is at the interface of research, development, and practice, and there's the need for interconnected applications, modules, and interfaces. Whereas in the past a lot of the work has been on individual applications, one here, one there, working with different types of data, sometimes different standards, different versions of standards, now the next step is this interconnected framework. There's a need for AI-able, or some people call it AI-ready, and research-able or research-ready genomic medicine: essentially the merging of clinical lab data as well as the provenance, so that you can have research capabilities from information derived at the point of care. So essentially thinking about research from the beginning. I've seen this in a number of different areas.
It can be as simple as thinking about how to do the consent forms and things like that at the very beginning. Then the next thing is a networked ecosystem, all these different pieces working together. Now that there are these different standards, the pilots have been done, and a number of pieces of the puzzle have been put together, there are different organizations at different levels of adoption. And so understanding, promoting, and evaluating those levels so that we can have an interconnected networked ecosystem is an important area that we've been working on. And then finally, in addition to the standards around how the data is communicated, there's a need to really work on the authorization, authentication, security, and privacy concerns. So for example, privacy-preserving AI on clinical genomic data, so you can process the data without actually seeing the underlying data, and create modules for that. Smart contracts, I'll talk a little bit about that in a bit. It doesn't have to be via blockchain or anything like that, but it's this idea of empowering the patient to be able to make these decisions. And then finally, the idea of promoting diversity. We heard about that from Dr. Green earlier, and that's an important note as well. And so we've been working on this consortium to bring together a number of organizations around these themes, the next steps we need to create a scalable infrastructure with these different properties. So with that, we'll do a quick review of some of the aspects that have gotten us to this point and how the next steps can enhance some of this work. This just shows an example of a patient-centric app which looked at a chronic condition, diabetes. In this case, juvenile diabetes, which would involve a patient, possibly a child, who may relate more to an avatar.
There's actually a physical stuffed animal that relates to this in the application. Information that the physician can see and present, as well as information from patient-generated data and their caregiver, is all presented together in one screen, and the app embodies a genomics module which is standardized so that it can fit into different types of applications. So it's an early example of this idea of interconnected apps and modules working together: you would only have to write the module one time and it can work for different diseases. Another one is the SMART Precision Cancer Medicine navigator example, which was done in collaboration with Vanderbilt, and a number of interested parties are now taking that to the next steps. Just a little note on that one: it basically allows for different views, your view as a patient compared to all the other patients, in a view that a physician and a patient can go over the data in together. Next up, just to show another example, is the idea of taking policies of different organizations and implementing them. So ASCO, which focuses on the oncology area, made a series of recommendations on enabling precision cancer medicine. And this application, which we put together, basically implemented those recommendations: it securely links patient-specific data from the EHRs through FHIR with multiple knowledge bases for information and treatment options, so that the patient and the physician together can go over the data for that patient and decide on next steps. Just an example of a more recent one, the COVID navigator, about empowering patients: it enables over 50 million patients to have a look at personalized COVID-19 risk factors. It takes the latest papers that are out there and, when they log in, shows them their condition in relation to the most recent literature.
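The FHIR linkage described here follows the standard FHIR REST pattern: an app queries the EHR's FHIR endpoint for resources tied to one patient and walks the returned Bundle. A minimal sketch of the Bundle-parsing half, with a hand-made Bundle; the query shape in the comment is standard FHIR, but the genotype values and code texts are invented for illustration:

```python
import json

# A hand-made FHIR Bundle, shaped like what a search such as
# GET [base]/Observation?patient=123&category=laboratory would return.
bundle_json = json.dumps({
    "resourceType": "Bundle",
    "type": "searchset",
    "entry": [
        {"resource": {"resourceType": "Observation",
                      "code": {"text": "CYP2C19 genotype"},
                      "valueString": "*1/*2"}},
        {"resource": {"resourceType": "Observation",
                      "code": {"text": "TPMT genotype"},
                      "valueString": "*1/*1"}},
    ],
})

def extract_observations(bundle_text):
    """Map each Observation's code text to its string value."""
    bundle = json.loads(bundle_text)
    results = {}
    for entry in bundle.get("entry", []):
        resource = entry.get("resource", {})
        if resource.get("resourceType") == "Observation":
            results[resource["code"]["text"]] = resource.get("valueString")
    return results

print(extract_observations(bundle_json))
```

A real app would add OAuth-based authorization (SMART on FHIR) in front of this, which is exactly the authentication and authorization work the talk flags as a next step.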
And just to show how this ecosystem is being developed: there are a number of different applications, everything from ordering genomic tests all the way to reporting and viewing the results at the point of care. There's integration of information from the medical record with information from genomics data, which may be off-site, maybe at a different location, and enabling that and defining those processes and workflows is really the step that's going on now. In the past, the work would be around piloting one of these, or having them work in isolation, and now it's about having all the different pieces working together. And so this shows the change in how some of this has been going: defining some of the standard use cases has been done; of course, that will evolve over time as new people and new organizations discover and develop new use cases, and the documents will be enhanced over time, I'm sure. And then there's actually using the different standards, in this case FHIR, where there can be connectathons at the local level, different pilots, and now really getting over into production and getting these things used to make a difference for patients. And so the next thing is, once things are in production, and we are already seeing a few of these going into production, how do you evaluate the different levels? Because not everyone adopts all the different standards, or all of a particular standard, at the same time. And so you want to make sure that these systems can communicate together, because often we see patients with data in different locations, and genomic data can also be in different locations. So how do you make sure these systems communicate together? The notion going forward, to enable this decision making, is that we really need to build toward an ecosystem.
So rather than thinking about things as point to point, which has often been the case in the past with these apps and applications, it's more about an interconnected ecosystem, and there are gonna be different levels of information available from each of these. There are different speeds of adoption. There's a need to really communicate between these multiple parties that may adopt different parts of the standard, different versions, and so forth. And so how do we go about doing that? And then the other piece is around empowering the individuals, as I mentioned. Individuals a priori can decide what it is that they would like to make available to themselves, relatives, and others through smart contracts that may also evolve and change over time. The notion of smart contracts has traditionally been associated with blockchain, but it doesn't really have to be. The idea is that by empowering the patient to make a decision about their data, it can be shared with them and with others as they'd want. So just as an example, they may not want to know that they have a certain genetic condition. So they may actually want to prevent themselves from learning a certain piece of information in the future if there's a particular disease that they may not wanna know about, like Alzheimer's or something else. So that's the idea behind smart contracts. And then I just wanted to make sure to leave time for any questions or any thoughts, and I'm especially eager to explore any of these ideas that we're now looking at and combining with a number of different parties. So I'll stop. Thanks, you're right on time. We have time for questions. I see there are two in the Q&A, but they're from a previous session, I believe. Oh, okay. Feel free to answer them if you want, but I'm gonna give people another chance to type. So Gil, I was interested in your COVID example. Yes.
And it looked like you were presenting to individuals research papers related to different risk factors. So how do you measure the impact of that kind of access on behavior or outcomes? Right, right. So in this case, that would be basically the next step. This is an application we just finished recently, where we've actually integrated a few more things; this is just one of probably 10 screenshots of other things that can be done here. So the idea is to first learn and develop a way to assess what kind of impact this would have, right? Both on the patients and on the physicians: does knowing these pieces of information lead to a different actionable decision? Because it basically combines all their conditions together and then looks for relevant new papers and new findings that may have come up related to that in COVID, because COVID is really a rapidly changing field, same with variants and things like that, right? So the thinking is that now that this application is done, working through focus groups and then through basically an IRB-based study, we would record and ask them whether there were changes made in care based on this, or whether it affected the way they did the standardized processes for triaging patients and so forth. I will mention, not for this application but for another one that I'm not showing here, it was kind of interesting: it was developed for artificial intelligence to look at prognostic information for the COVID patient, and we thought it was gonna be used for one purpose but it actually ended up being used for a different purpose. That was really interesting, and we are now changing the type of questionnaires that we're giving to the physicians, because it ended up that they're using it more for palliative care decisions and when to do that timing, rather than for resource allocation, which was the original thinking, with beds and things like that.
So there's a question about whether the COVID-19 navigator is in Spanish or other languages. Well, that's a great question. It was just completed, some of these screenshots are just a couple of weeks old, and currently it's in English. We will definitely, that will be an item that I'm gonna put on the feature list. We had looked at Chinese, because when we were first designing this, the outbreak was in China and there were a number of things there that we wanted to explore as a potential collaboration, but I think it would be useful to have it in a number of different languages. Right now it's only in English, just to point that out. Let's see. It seems that we need, okay, important consideration. Yes, and as was mentioned, one of the priority items was promoting diversity. So Spanish is of course gonna be one of the languages when we do add support. One of the features that we've been working on right now is adding in a couple of new databases that are gonna be updated around different drug regimens and things like that. But yeah, so thank you. Okay, we need to move on to the next speaker. Lisa, do you have slides? Yeah. There were a few questions in the chat and I'm gonna save those for the discussion. So if you asked a question that didn't get addressed, we'll come back to that. Great. Hi, I'm Lisa Bastarache. I'm a faculty member at Vanderbilt University Medical Center, and today I wanna make a case for what we can do using clinical genetic test results, both for research and to improve patient care. I was really happy to see Casey Overby Taylor present some recent progress in getting genetic data into machine-readable format, integrated with the electronic health record. But I can say from experience that over the last 20 years we have not had a real streamlined process to do this. I think I'll show some examples to demonstrate that.
Nevertheless, I think that this data is so valuable that it is worth the pain of trying to find this information, because I think it can really advance the field in a lot of different ways. And I'll start with a particular example. This comes from the Undiagnosed Diseases Network. The UDN sees patients who have mysterious medical conditions with no underlying explanation, and this patient was no different. She was a 26-year-old female who had multiple medical problems, including autism, some learning disability, and hypermobility. She was sequenced, and a really appealing candidate variant came up in a gene called MSL2. The variant was de novo and frameshift, super rare. The problem was that this was not a known disease-causing gene. This kind of thing comes up all the time in whole exome sequencing. The UDN team was able to find a research paper linking MSL2 to autism, but it left a big question mark around these other phenotypes that the patient had. So we were at an impasse at this point. Or were we? What if we had seen another MSL2 patient at Vanderbilt at some point? Sure, we asked the people at the UDN, who run the genetics clinic, if they ever remembered an example, and they couldn't think of one off the top of their head. But as you all know, a lot of people are doing exome sequencing at this point, not just people in the genetics clinic, but also in neurology. And so we felt this was a shot in the dark, but it was worth a shot. One thing that's nice about gene names is that they tend to be unique combinations of letters and numbers that don't come up in regular conversation. Because of that, we can just do a string search on a bunch of different medical records to look for these gene mentions. When we did this on the over three million patients that we have in a database of electronic health records at Vanderbilt, we came up with only two matches.
And a quick perusal of those matches showed that we had two additional patients who'd had whole exome sequencing at Vanderbilt and had de novo mutations in MSL2. We sent these records to Ellie Brokamp, who's a really talented genetic counselor and researcher who works with the UDN. And when she looked at those records, she found that the patients had really striking phenotypic overlap with our proband. This was enough information for the UDN to consider this to be basically a solved case, and the information was relayed to the patient and her family. So I gotta tell you, this was super exciting to me, because I'm a programmer; I don't get to do this kind of stuff very often. But this raised the question: is this the only situation like this that we're sitting on here? Might we have more people who are UDN cases, or even just patients at Vanderbilt, where we could put two and two together with data that we're just sitting on in the electronic health record? Now, in order to systematically address this question, what we would love to have is a clinical genetic database with all of the information, who was tested, what genes were found, all curated nicely so we could just do some queries, right? But that is not what we have in reality. In reality, for the past couple of decades, most genetic information has come into the medical record as PDF reports which are scanned as an image, which doctors can see, but which I, as a researcher with a research database, cannot see and cannot search. The saving grace here, and the reason we were able to find the MSL2 variants, is that a lot of times this information is actually hand-keyed into text notes, at least at Vanderbilt; I can't speak for other institutions at this point. And so that's why we were able to reveal these two patients with a search.
So this spurred a large, ongoing, and I admit potentially quixotic effort to index the medical record with all genetic findings from the last 20 years. And one thing that we've observed in this process so far is that if you ask where these genetic findings end up in the EHR, they end up all over the place, in all different kinds of notes. About half of the notes actually are structured or quasi-structured. So they come in the form of things like chromosomal microarrays, which are done in-house at Vanderbilt and have a kind of standard reporting system, so we can write quick little parsers to extract that information automatically. There is also a lot of genetic data that comes through genetics clinics, and because they have such a high volume of this stuff, many of them, and I'll name-check Georgia Wiesner, who's done a fantastic job with this in the hereditary cancer clinic, have come up with templated ways of mentioning these results in the medical record. But we've also identified this huge bolus of data that's just hanging out in different notes: your endocrinologist, your primary care doc, some in clinical communications and problem lists. In this case, the problem is a lot harder, because the fact of the matter is genes, and even specific variants, are mentioned a lot of times in contexts which don't pertain directly to the patient. It's a variant that was found in their paternal aunt. It's a variant that was found in their tumor. It's the variant they may have if ever we get around to testing, or get insurance to pay for testing, et cetera, et cetera. I have to say it's a little embarrassing, but I've taken a crack at this problem, and my background's in NLP, and I have not been able to come up with a fully automated way of extracting this information from the medical record without the complement of some sort of manual review at the end of it.
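The ambiguity just described (a variant found in an aunt, in a tumor, or merely proposed for testing) is exactly why a plain string match is not enough. A hedged sketch of the kind of crude context flagging one might layer on top; the cue-word lists and example sentences are invented, not a validated lexicon, which is precisely why manual review still has to follow:

```python
# Cue words suggesting a gene mention may not describe a germline
# finding in the patient themselves. Illustrative only.
FAMILY_CUES = ("mother", "father", "aunt", "uncle", "sister", "brother")
NON_GERMLINE_CUES = ("tumor", "somatic")
HYPOTHETICAL_CUES = ("if tested", "recommend testing", "insurance")

def classify_mention(sentence):
    """Assign a gene-mention sentence to a coarse context category."""
    s = sentence.lower()
    if any(cue in s for cue in FAMILY_CUES):
        return "family member"
    if any(cue in s for cue in NON_GERMLINE_CUES):
        return "somatic/tumor"
    if any(cue in s for cue in HYPOTHETICAL_CUES):
        return "hypothetical"
    return "possible patient finding"

print(classify_mention("BRCA1 variant identified in paternal aunt."))  # family member
print(classify_mention("BRCA1 c.68_69del detected on exome report."))  # possible patient finding
```

Rules like this catch easy cases but miss negation, cross-sentence references, and copy-forwarded text, which is consistent with the talk's conclusion that full automation hasn't worked so far.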
After over a year working on this problem, I've actually come up with what I think is an accurate visual representation of the process, over the last 20 years, of adding genetic data into the EHR. If you imagine the EHR as a car and you imagine the genetic data as water, it sort of looks like this. It goes all over the place. Some of it lands in the car, some of it doesn't. So it is a mess, okay. But even though this has given me huge trouble, and I probably wouldn't have even started this if I'd known how much trouble it was gonna be, I still love clinical genetic variants, and I think you should too. And here are some of the reasons. Number one, they are free. Even though, as Eric pointed out, the cost of sequencing has gone down, research sequencing is not free, even though I love that too. Secondly, there's more of this data every single day. This is a chart showing the number of patients, not just with tests but with actual findings mentioned in their medical record, that we have so far curated in our database, and you can see that it grows each year. Number three, they are consequential variants. This is a boon and a bane, but you can see that there's a major bias towards pathogenic and likely pathogenic variants if we look at those that are mentioned in the medical record. These are the variants that are most likely to impact patients' care and their life and their health. And so this brings up a lot of interesting research opportunities to use this type of information to further characterize these patients. And I'll also say, along these lines, as a data scientist and a non-MD, I really love working with this data because I feel like it puts me closer to the action. Clinical genetic testing is a way that, every single day, people are getting diagnoses and getting better treatment as a result of this process. And so I think it helps me put myself into a mindset of: how do we build on the foundation of this process?
And as a data scientist, there are ways that I can enhance it and make it better. Another reason I love this data is that I think having a database like this can help us take a fresh perspective on data that is usually looked at in a different context. Typically, genetic testing results are looked at in the context of a patient or their family. But if you start aggregating this data up, you can start seeing broad patterns at the population level. This is just a toy example from prototypes that we're putting together based on this database: the percentage of patients who have breast cancer with pathogenic genetic variants that are related to their breast cancer, and how that has changed over time. And you can also look at what kinds of genes are underlying the molecular diagnosis for these patients. Another point, and this pertains to something that Janina Jeff pointed out earlier in her wonderful talk: I think that if we aggregate this data together, we can monitor the super important issue of disparity based on genetic ancestry. As we all know, patients who are of African ancestry are much more likely to get a VUS than those who are of European ancestry. But this is not a static problem, and it's certainly not an inevitability. There are things we can do about it. Using EHR data, in a project that I'm working on with Georgia Wiesner, we've been able to look at the trajectory of the fraction of variants of uncertain significance by ancestry, and we see that that disparity is actually narrowing, though it hasn't disappeared. This is looking only at hereditary cancer variants; if we widen the scope to other types of genes, I think the gap will be much larger. But I think this is something that we should consider monitoring, to see that we actually tackle this super important disparity problem. And finally, I wanna loop back to the patient with the MSL2 variant.
I think that having a database like this can help us take a little bit more rigorous approach to the patient matching that I mentioned earlier. Some of you might've been thinking, when I was presenting this case, the skeptics among you, and I count myself among you: what if this wasn't truly related, right? Of course, people who get whole exome sequencing at Vanderbilt are gonna be really sick. They're gonna have a lot of medical problems. So the fact that they had this variant and then they had the matching phenotypes could have just been coincidental. But if you have this data at the population level and you have a decent method to do some high-throughput phenotyping, you can answer this question very systematically, which is what we attempted post hoc. So if you take the patients who had the MSL2 variant, starting with the proband, and use human phenotype ontology terms that the UDN folks came up with in a separate exam, you'll find that among 18,000 patients, just using retrospective EHR data, the proband is number one out of 18,000 people in terms of their phenotype risk score. So they match themselves very well. If you look at the other two gene-matching individuals, you'll see they're ranked 35th and 38th. So they are way out on the long tail of the distribution. This means that at a population level, these two patients do in fact match the proband much more than you would expect by chance. And if you narrow this down to the over 500 patients who we found received whole exome sequencing, they're ranked first, ninth, and 10th respectively. So anyways, I regret that I didn't have time to talk about some of the awesome research that's being done on these clinical genetic variants that we're putting together, but I want to show some pictures. Here's Georgia Wiesner and Chen Zhizeng, who are doing a project in conjunction with eMERGE and some EHR data that's really amazing.
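The phenotype-risk-score ranking just described can be sketched with a toy example: each patient's score sums inverse-prevalence weights for the disease-relevant phenotype terms they carry, and the cohort is ranked by that score. The patients, terms, counts, and cohort size below are all invented; real phenotype risk scores use HPO terms mapped to EHR codes.

```python
import math

# Inverse-prevalence weights: rarer phenotypes contribute more.
cohort_size = 18000
term_counts = {"hypermobility": 900, "autism": 600, "learning disability": 1500}
weights = {t: -math.log(c / cohort_size) for t, c in term_counts.items()}

# Toy patients with their documented phenotype terms.
patients = {
    "proband": {"hypermobility", "autism", "learning disability"},
    "match_a": {"autism", "learning disability"},
    "control": {"learning disability"},
}

def phenotype_risk_score(terms):
    """Sum the weights of the disease-relevant terms this patient carries."""
    return sum(weights[t] for t in terms if t in weights)

ranked = sorted(patients, key=lambda p: phenotype_risk_score(patients[p]), reverse=True)
print(ranked)  # → ['proband', 'match_a', 'control']
```

Ranking the whole cohort this way is what lets you say a gene-matching patient sits far out on the tail of the score distribution, rather than relying on an eyeball comparison of two charts.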
Doug Ruderfer and Ted Morley are working on trying to find patients who are undiagnosed, training algorithms on this data, and I have my UDN friends up top and my wonderful advisor, Josh Peterson, at the bottom. And I just want to say that even though it's a total pain, I think we should be trying to exploit this data to the nth degree, because it'll help us figure out what we're going to be able to do once we solve these problems tomorrow. And I want to thank you so much for your attention. Okay, Lisa, we have time for maybe one question. So Lisa, there were a couple of comments, which we'll save, but the question was: how do we scale these kinds of queries to EHRs outside of Vanderbilt? Could there be a FHIR-enabled app that can send these queries nationally? I think that would be amazing. The first step is actually just to extract. If we want to take advantage of the last 20 years of data, we're going to have to find some process to extract that information from the EHR, and I suspect that that is going to be a per-institution type problem. Early on in this process, I had ambitions of developing, with a student I'm working with, an algorithm that we could send to all of our friends who have EHR data to extract this information. But we have since been sort of overwhelmed by the difficulty of doing that, given how complicated the format is. Once that data is structured, I don't know, I'm not really a systems person, but I hope that my friends here who are systems-type people start contemplating that, because I do think it'd be very valuable, even just for the patient matching stuff, to have a sort of network like that, powered by EHRs, in the same way that the Matchmaker Exchange does it. Okay, we need to move on to Kevin. Thank you, Lisa. Sorry, good day everybody. Lisa, fantastic job. I feel like I should start this off by saying hello, Newcastle.
I have some kudos for you all, because so many of the people who are hearing this and who had a chance to present have really done so much of the work that leads up to what I'm going to talk about today. But I'm delighted to tell you a little bit about what we're doing at Vanderbilt and how it relates to the strategic vision that NHGRI has already put out as it relates to genome-friendly connected care. So I think it's important to start out with where medicine is going. If you look at the upper left of this slide, you can see, I think, our new reality, which is the sudden acceptance of telemedicine as a very viable alternative. With telemedicine there is likely going to be a real resurgence in our interest in digital health, or consumer health informatics. To the right of that, I show a program that we're going to spend a teeny bit more time discussing, which was developed by Dan Roden and Dan Masys and others, called PREDICT, something that we've been doing at Vanderbilt and that I think really was a harbinger of what you now see in eMERGE. In the bottom left, we've talked quite a bit about this, but I think it's clear that part of what medicine is now understanding is the role that not just genetics but things like individual behavior and social and behavioral determinants play in terms of our risk of disease and our likelihood of successful treatment with the options that we have available to us. And then I think the fourth thing is the emergence of AI. We had a recent paper that came out in Clinical and Translational Science talking about the fact that what we think about in precision medicine and the techniques that might be available to us through artificial intelligence can be brought together to help us think about new ways to plan therapies, and new ways to do risk prediction or diagnosis, that are both genomic and non-genomic. And this encapsulates the conversation that's going on right now in the field.
The other key preliminary component is that there are these enabling platforms. We've talked about FHIR a number of times, and I anticipate we will even more, so I'm not going to, but there are two others. One is the 21st Century Cures Act, which really represents a sudden acceptance by the federal government of the role that electronic health records and data, a multi-trillion-dollar industry, now need to play in helping to move the way we take care of patients forward. If you haven't had a chance to read it, I think the key summary is interoperability, data sharing, and information blocking, among other things. The other big thing, and I bring this up because it was one of those musings while I was watching video from the Consumer Electronics Show, is that one of the enabling platforms we have to recognize now is that many of the technologies that were very science fiction in the 70s and 80s are actually commonplace at some point, at some place, in our country or in our world right now. This is just one example of many: automated robots, for example, here to provide people access to masks when they're going into public places. It reminds us that "the future is here, it's just not evenly distributed," which is a great quote from William Gibson, and that I think relates both to the talk that Mark started us off with, about clinical decision support and genomic standards that are here in certain places, and to the comments that Ken Kawamoto and others have already made, which is that while they may be here if you're an Epic or Cerner site, they actually might not be here if you're on an Allscripts or eClinicalWorks EHR. That whole idea, I think, is one of the big challenges of today. So at VUMC, one of our current capabilities, which is unabashedly Epic-based now, is a system that allows us to do, relatively at scale, both prediction and pharmacogenomic, genome-informed prescribing.
We have 14 medications that you can see on the slide here. We have a series of best practice advisories that follow the Jerry Osheroff and other best practices for building an alert and a reminder. Depending on which study of ours you've seen, we get somewhere between 30 and 70% acceptance of these best practice advisories. But we also have patient access to this information via a genetic profile page in the patient portal, and we have a website called My Drug Genome that walks through the entire program and is available for patients to see. We've done all of that with the hope that this system we seeded in 2010 could continue to scale in an environment that supported what we've been talking about today. We have tools allowing us to browse the star alleles and map them to specific drug-genome interactions, a term I coined around 2008 that managed to stick for some reason. And if you look on the right here, you can see that we've relied on data from groups like CPIC and NHGRI-funded work to develop not just the mapping of the star alleles but also the actual wording for adults and pediatrics, and custom versions of this, and research about literacy, to help us make the best clinical recommendations that we can, all of which then feeds into a typical best practice advisory. This is work we've been doing, and many, many other people are doing it: Cincinnati Children's, many of the eMERGE sites. A lot of those data have come out in papers as recently as a JAMIA paper from this year, or from 2019, describing the extent to which this is happening. So this is all work that a lot of people are doing. The good news is that we have a lot in production, and we're using the standards that are currently available, to some extent.
The bad news is the rest of the talk, which is the barriers that stand between us and the vision we have of combining health system comfort with genome-informed care and patient-level comfort with genome-informed care. That implies that we need to be able to deal with things like reimbursement, with scaling technologies that may not be as comfortable for everybody, and with the fact that not every patient seen at a place like Vanderbilt is going to have all of their care either starting or ending there. So we got a group together this last year to start looking at the workflow that needs to be supported for us to accomplish that vision. This is not yet published work, we're still working on it, but I thought it would be really useful to share with this group. Sorry, hopefully that doesn't happen again. So the case, just to drive this: a 19-year-old woman is evaluated in primary care after a spontaneous pneumothorax, with a heart murmur, and the clinician suspects Marfan syndrome. The typical clinician, if you go to the very left, will review the electronic health record, see the patient, and update the electronic health record with the information they currently have available. I think my slide might be automatically advancing; I apologize for that. We then have to ask: to what extent is that information going to trigger clinical decision support? The clinician then may not be sure whether or what to order, and may want to order a test or may need to get a consult. That consult needs to be easy to access and orderable, which means there has to be work done in the electronic health record to support those orders. The consult occurs, and the test is reviewed by genetic counselors. To do that, we have to make sure that the correct test is ordered consistently and easily, and, by the way, if we're doing those studies internally, that we have panels in house.
The results have to be returned, which means they have to be available in a consistent format and location, and be computable. In our case, that means we have to use our existing technologies, which are two systems that allow us to get the sequencing results into the EHR. And then finally, any variants of unknown significance have to be thought about; we call Lisa, where we do some other work. If the result is clear, we have to make sure that those data are easy to access for interpreting the results, and to do that, we probably have to expand the e-consult system we've already described, which would help people who don't know whether the VUS is significant or not. So if you look at this technical foundation and the workflow, and look at the words along the bottom, it's clear that there's still an enormous amount of work we would need to do, even in a situation where we think our EHR and other systems are capable of it. There are also questions we should anticipate. Which drug will be most effective in this patient? Should I be considering genetic testing, and if so, what tests? And by the way, many of these questions will come from our patients as well as our providers. How do I interpret these test results? They don't have clinical meaning. What does "indeterminate" mean? The literature isn't specific to my patient. What have other patients like her at Vanderbilt experienced? Is there any clinical trial out there for my patient? What is the best way to treat my patient's tumor or other disease? So we recognize that to do this at scale, all of these questions have to be addressed, and the infrastructure we showed on a previous slide has to be part of that solution. The health system challenges that we recognized right from the beginning fall roughly into four categories. First, interoperability and data flow, where we need computable data from the outside lab, and we need to be able to recontact as information changes.
Second, provider knowledge, where we need providers to understand the nomenclature, star alleles, et cetera, and to address any family concerns that might relate to these results. Third, information literacy, where we need consistent knowledge representation, especially, to the point that Lisa just brought up, if some of the data being brought in from the outside world or from an external lab are actually not computable and we are relying on hand-entered versions of them, which may have all sorts of interesting transcription and autocorrect problems, especially since people are dictating more and more; and the results have to be understandable to the lay public. And fourth, return on investment for screening, which is a gigantic problem, where we need quality evaluation studies and payer and self-insured-party support, so that we can actually get the tests done, through whole exome or other arrays, in advance of the patient needing that alert to be called and the clinical decision support to fire. From the patient perspective, there are also daily-living challenges. Those include equity, access to tailored care, literacy, understanding the nomenclature, family concerns, fear and misinformation, which you see on the right, and then life integration, technology literacy, and recontact. Jim, am I out of time? Looks like I might be. Yes, you are. Thank you. Just real quickly: equity we've talked about quite a bit; I won't go through it now, but there are major issues there, and genetic discrimination fears are also a giant issue, as is what we could automate, which could include some of the ways we deal with the workforce shortage if we think about it. So in summary, this is the research that is necessary now: provider and patient literacy, expanding the systems we have to help with variants of unknown significance, developing standards, return on investment, and then basically making more genomic expertise available for patients and everyone else. Thank you. Great.
So we've had three amazing talks on very diverse topics related to advanced technologies to support genomic medicine, from very different perspectives. Because we had a number of questions and comments that we didn't have time to get to, and we have a little bit of time now, what I'd like to do is go back and have the speakers address some of those questions and comments, starting from the last talk. So Kevin, there was a comment about the impressive alert acceptance rate; I think you said it was around 70% for your system. Did that change over time? In other words, was it 70% right off the bat, or did it take time for that acceptance rate to climb to that level? Yeah, what a great question. So, full disclosure, these are all published data. We started out with our first paper showing an acceptance rate of about 55%. We then started to do some additional work, and many of the reasons why people chose not to accept our original alerts, which were primarily for clopidogrel, had to do with the fact that they did not believe they were really the authorized prescriber. In other words, although the alert was being fired for one provider, their belief was that other providers should be responsible for this drug. After education and some additional building of tools to allow pharmacy-based workflows and other things, we were able to get the change rate up to 70%. So while the alert wasn't always accepted, the recommendation was followed. Actually, over the last six years, it's been going down, and one of the studies we're now in the process of doing is trying to understand why. Our hypothesis is that there are now many more indications for prescribing these drugs, which makes it less clear what the equally efficacious alternative therapy should be, and therefore people are not accepting those alerts.
We undoubtedly took a bit of a hit as we first went to Epic, because we had to build the system differently. It now works exactly the same, but I think that also affected some people, and that's where we are. So one of our requirements now is to get a tracking system in place so that we can actually understand that better. But thanks for bringing that up. Another question that just came in for you: to what extent is the general public wary of genetic testing due to things like the potential for disability or life insurance discrimination? Well, I think if Janina were to talk about this, she would say it depends on which part of the general public we're talking about, right? If you are in the upper-middle or upper-class part of Tennessee, more people are now comfortable with the idea, and would like this, than ever before. One of our beliefs is that if we go to patients who've had any previous adverse drug event, they're likely to be very excited about genetic testing, so that's something we're going to explore in the next year. If you go to populations where the system is largely untrusted, and that's why I bring up GINA and why there's so much important work to be done there, what you're going to find is they don't understand it and they're not completely sure what the collateral damage of getting it is. And it gets back to the point Janina was making, which is that even if we do the test, people don't know why it's going to benefit them. One of the challenges of doing predictive genomics, that is, whole exome before the patient actually needs it, is that patients won't get results back when it's first done. There isn't a very clear trade-off until potentially five, six, ten years down the road. And so there's an enormous educational component to this that we have to begin, that I hope NHGRI takes on; it's in the vision, and hopefully it's really something we take on across the entire health system.
And that includes the frontline registration staff, who in our place have been known to say things like, "You don't really want to sign up for that, do you?" So I think we have to actually educate the whole place. Great. Lisa, a question for you that we didn't have time to address: are there more large databases that could be pulled into or linked into your EHR queries to help with the gene matching, or could these be connected to the places doing a lot of clinical sequencing? Okay, thank you for that. One database that we do use regularly, and I didn't have time to get into the details, is OMIM annotated with HPO terms, because you can then figure out which genes actually are related to, say, breast cancer, then find people who have genetic results related to those genes, and do some of the visualizations that I showed. But I think the question is probably pertaining to other institutions that have both genotype and phenotype data. To my knowledge, and I might be naive here, a lot of the places that have the most genotype data, like clinical sequencing labs, are phenotype-poor: rich in genotypes and poor in phenotypes. But in our experience at Vanderbilt, once we started looking into the medical record to see what kind of genetic testing results we could find, we were surprised by how much was already there. I have a feeling that if you looked at other large medical systems, they would have the same experience. The question is, is it worth going through the pain of extracting that information to make a resource? I think it is, but others may disagree. Well, a related question to that was: how do you scale the kinds of queries you talked about outside of Vanderbilt? Could you send queries? I would imagine your vision is that you'd be able to send these queries out nationally or internationally and have meaningful results returned.
I think a lot of the work that's been done on patient matching, with things like PatientsLikeMe, has actually addressed many of these problems of how to do scalable queries. The question is, do we have the data structured in the back end in the first place to support those queries? And right now, like I said, no. One thing that I didn't delve into, though: I think the problem of getting structured, machine-readable genetic data linked to the EHR is going to be solved in the near future, right? And there is a fairly standard nomenclature that lets you say, with total precision, this is the variant that somebody has. I'm not sure that we'll ever get to a point where we can do that with the phenome. It's just too slippery; there are too many nuances to pinpoint the phenome with the same level of precision. But something that a lot of people at Vanderbilt and elsewhere have worked on, of course, is how we extract accurate phenotypic information out of the EHR. So that's another barrier, but it's one where, while we may not have perfect solutions right now, I think there are solutions ready enough that we could start experimenting with networking them out. Carol, may I make two comments? Of course. Okay, first, thank you. I don't know what every vendor is doing, but I will tell you that I know Epic has a plan to use Cosmos to implement some of what Nigam Shah and others have done in a PatientsLikeMe fashion. So perhaps at some point it would be very useful for NHGRI to bring together some sort of vendor panel to understand how they are viewing this: to what extent are their information blocking and other strategies going to confound our ability to scale some of this work? And that's just one point.
Lisa brought up the point about unstructured data, and I have to respectfully challenge the assumption that we're getting closer, because as long as we have things like direct-to-consumer testing, and as long as we don't have standards in that space, we will likely always have this fragmented, rich data source. And I think that's going to be a problem until we really figure out how to address it. What's very clear is that variant information is lexically and semantically complex, which means really, really small changes in one character completely change how we interpret it. Therefore OCR is very risky without a very significant quality assessment process around it, to make sure there's no possibility of a false positive being introduced. Great, thank you, Kevin. Lisa, one last quick question for you, and then I have some questions for Gil, and then some panel-wide questions. So, Lisa, are you still using ICD-10 codes, or have you moved past that? That was one of the questions. Me personally, I'm dipping my toe back into NLP after I ran away screaming, realizing how hard it was to do in the EHR. I love billing codes. If you want to replicate a finding and you say, "We just need billing codes," people say okay; if you say, "We need you to run this NLP pipeline on all your notes," they say no, right? So the scalability, the ease of use, and the fact that a billing code, when it's applied to a patient, usually means, even if it's sometimes interpreted incorrectly, that something is happening with that patient, and not their aunt, and not something they're merely worried about: those are all wonderful properties. But of course, the medical record contains a lot more information than can be gleaned from billing codes.
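The phenotype risk score work mentioned in this session builds disease-level evidence out of exactly these billing codes. Here is a much-simplified sketch of the idea, with invented codes, feature sets, and prevalences; the real method maps billing codes to a disease's clinical features (via HPO/OMIM annotations) and weights rarer features more heavily, but everything concrete below is a hypothetical stand-in.

```python
import math

# Simplified phenotype-risk-score-style calculation: a patient's
# score for a disease is the sum, over the disease's characteristic
# billing-code features that the patient carries, of a weight that
# grows as the code gets rarer (-log10 of population prevalence).
# All codes and prevalences here are made up for illustration.

CODE_PREVALENCE = {"278.1": 0.20, "401.1": 0.30, "759.1": 0.001}
MARFAN_FEATURES = {"759.1", "401.1"}  # hypothetical feature set

def phenotype_risk_score(patient_codes: set[str],
                         disease_features: set[str]) -> float:
    """Sum rarity weights over the patient's codes that are disease features."""
    return sum(-math.log10(CODE_PREVALENCE[c])
               for c in patient_codes & disease_features)
```

A common code contributes almost nothing, while a rare, disease-specific code dominates the score, which is why patients with undiagnosed Mendelian disease can stand out even in billing data alone.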
One thing that we've just started doing in the last month: we have a really amazing programmer who has done NLP work and built a custom NLP pipeline over all of our notes. So I'm going to work on integrating some of that into the phenotype risk score, and basically measure how much more easily, and with how much more certainty, you can establish that people have a particular genetic diagnosis if you integrate the information you get from NLP. I actually don't even have a guess as to how valuable that is. I will say the pain of getting NLP out of notes means that you should require a pretty significant bump in performance in order to go down that road, but I'm willing to try again. Great, thank you. Gil, I want to pull you back in, because there were a few questions for you as well, and then I have a few here that are good for the whole panel. So Gil, one of the questions is: are the barriers to these networked ecosystems you've talked about sociological, technical, or both? Right. So there are a number of different ones that we've seen. I think we've seen both the sociological and the technical, but the cultural, sociological ones are the larger ones at this point. A lot of the technical challenge, and this maybe goes to another question, is that in terms of standards and so forth, they've been developed; there's a little bit of fine-tuning, perhaps, here and there, but I think it's more a matter of there being different types of organizations with different incentives for this work. So, for example, genomic information is just on the tail of the distribution of what the electronic medical record vendors are looking at; there are a number of other priorities that may be ahead of it. And if you look at the lab vendors, it's the same thing: it may be that genomics is really the main area of focus for only a few of them.
But I think there was a sense that the field was potentially changing rapidly, which it was, and now things are a little more stabilized. So I think it's really just a matter of time before the different parties, with their different incentives, see that when you look at this as an overall network, the return on investment will be there in adopting the technologies that already exist. Great. Another question that came directly from your talk: the social contract process is interesting for patients to specify preferences for care; is there a vision for how this could be coupled with clinical or patient decision support? Right. So, yes, it can certainly be integrated; the goal is for it to be integrated with that. And I'm not sure if it was this question or another one that noted these preferences can change over time; there are smart contracts, and those points are all true. That's why there are a number of different approaches to doing this, and the approach should be one that allows preferences to be updated and amended. Just like today: you may have a will, and you can make a change; you have a bank account, and you can decide to change who gets to see your information, whether you want to share it with one organization or another, to set up automated deposits, and things like that. That's going to be the very nature of it. And who you share with should reflect how you want it used: if you want it used for clinical decision support, then of course you would allow that, but others may just want to share with their offspring or their family, or keep part of it to themselves. All of those things can be part of it.
That's one of the nice things: the smart contract can be triggered based on different rules and different time periods; people may want to make changes at certain times in their life, or sunset certain provisions later on as well. So, there are a couple of questions I think it would be interesting to get the perspectives of all the panelists on. The first is: how much more work is there to do in developing genomic data structure standards for electronic health records? There was optimism expressed about progress, especially in your talk, Gil; how far is there to go? So, Gil, maybe you can start, and Lisa and Kevin can jump in if they have thoughts. Sure, and it may depend on how we interpret the question, because there are a number of different standards, including ones related to structuring unstructured information and so forth. In some areas there are standards out there that are being used, in pre-production and in production. And there are others that are still maturing; they may contribute other ways of capturing certain pieces of information, and they'll be ready with time. But for one of the topics we concentrated on, delivering clinical or genomic information at the point of care, those standards are there. They're being used, and with time I think you'll see adoption go from some of the larger institutions to the others out there that want to practice genomic medicine. My only comment is that, first of all, I appreciate all the work that's being put into getting these standards together, because, as you can tell, I think the data we can get from this is super valuable.
But I just want to say, once again, in case it didn't come through: I think the time to start developing methods around what is going to be a coming wave of clinical genetic variant data linked to the EHR is right now, and we have, at our medical institutions, the materials to start those studies. I think we've barely scratched the surface of the type of things we're going to be able to do, and we'll need time to develop those methods in anticipation of when all you smart people figure out all those problems. I guess I would go back to education again. I had to gloss over the inequity slide, but that's just such a big one. I believe that the standards are far enough along that they can be tested and iteratively refined. But we need an equally informed group of experts across the equity spectrum, so that we can get various systems and their various implementations to the table to make sure the standards adequately represent the information that needs to be conveyed. So I guess what I would say is, it would be really helpful if NHGRI could begin to think about a research agenda, as they are, that looks not just at workforce equity but also at implementation equity, across the various levels of what we're going to have to create. There have been a couple of grants that have already started to address this, the IGNITE grants among them. But I think there's a need, and again, I think the chat brought it up, to make sure that some of the sites that don't have as much access to these larger electronic health records are also understood. So I guess I would ask for implementation science research from NHGRI. Great. A question that came up that I think you all would have an interesting perspective on has to do with the longitudinal aspect: how do you track care from a pediatric setting to an adult setting to old age?
How does a decision made at one point in life influence decisions and outcomes at later points in life, and how are these systems going to grapple with this kind of longitudinal tracking issue? I'll just start by saying: brilliant observation. I'm a pediatrician, so I've been living in this world for a while, and I would say this is a space where we actively need to do research. We also need to understand the law, right? Every pediatrician wants to ask the question: how do the things I've learned about the mother get transmitted into the baby's chart? We've been asking that question for decades, and it's an extremely hard problem. So what you just got into is not just chart-by-chart interoperability, but the point that was made earlier: if a paternal uncle has something, is that relevant? Yeah. Are people really going to fill out family-history-type tools so that we can have the data we actually need to do the research? It's a really big challenge; I appreciate you bringing it up. Gil, Lisa, any comments from you? Mark? Yes. This is something I've thought a lot about, and it has to do with some of the unique aspects of genomic data. The problem with electronic health record systems is that they are system-focused, not patient-focused. They serve the needs of systems; they serve the needs of clinicians less so, and they almost never serve the needs of patients. But germline genetic data is something that has relevance for a patient across their entire lifespan. So I think one of the things we need to be thinking about is: how do we move this information with the patient as they move through the healthcare delivery system? Because even though we have a fair number of patients who get cradle-to-grave care here or at Geisinger, that is not the typical situation in this country.
And absent a national healthcare system, or a national healthcare informatics infrastructure, this is a key problem. I've heard from several talks the idea of more engagement with the patient and making this more patient-centered, and I think that's really foundational for this particular problem, because the data has to move with the patient. It's not system data. It's not provider data. It's patient data. Do you want us to tell you how we feel about that? Sure. Gil should probably go first on this one. Can you repeat the issue, the patient-centricity of it? Yeah: how do we move the genomic data with the patient as they navigate through the entire healthcare ecosystem, as opposed to an individual system? Right. So there are a number of ideas and approaches being taken on this now. One is this notion of almost a bank account: if you have currency in an account, you can go to another bank, you can move it somewhere else, and you have a credit card or something with you that links you to that piece of information, which you can always carry with you. That is one approach. Another one is these systems people have been using that many times end up being linked to an institution, these genomic archiving and communication systems, almost like a PACS, where imaging data lives. But we are starting to see trends. You may have heard of different electronic health record vendors starting to work with different cloud vendors and so forth. Once you see some of that coming to fruition, and those efforts are starting now, then you'll start to see that data potentially being stored in a cloud type of environment that could be accessible in multiple different locations.
And that's why, and I don't have slides up now, some of the key things I mentioned for the future are around authentication, privacy, and security, all those kinds of issues. Because imagine if someone had it: you're not going to carry around a USB thumb drive, which was the old way of thinking about it, but essentially it almost is like that, in that it's out there somewhere, and you have to make sure that if multiple people can access it, they are the people you want to be able to access it. Because if you can move that data, potentially someone else can move that data. Yeah, so coming back to the purpose of the meeting, and I'd be interested to hear Kevin's and Lisa's perspectives on this as well: what are the research questions around this type of patient-centered approach? You just raised one set of them, the authentication, privacy, and security aspects; that could be a very interesting area for research. What are other research questions that might be relevant here? Two that come to mind immediately are recontact, and especially recontact in the absence of provider oversight, something that we all talk about; we know it's likely going to happen with all of us in some environments. And then, we used to have a video that we made here, a vision video of a patient scanning a barcode at the grocery store and finding that there was an over-the-counter med they shouldn't take due to some patient characteristic; let's just say it's genomic. I have absolutely no idea what knowledge would be necessary for a patient to be able to do that, to understand it, and to find the right alternative, other than to barcode-scan everything else on the aisle.
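Purely as a thought experiment, the grocery-store vision described above might reduce to something like the following sketch: a patient-side app maps a scanned barcode to a drug ingredient and checks it against the patient's stored pharmacogenomic findings. Every identifier, mapping, and message here is invented for illustration; the hard, unsolved parts (the knowledge base, the plain-language interpretation, the alternative-finding) are exactly what the speaker is pointing at.

```python
# Hypothetical sketch of a patient-facing barcode check against stored
# pharmacogenomic findings. All barcodes, ingredients, and messages are
# invented placeholders, not real product or clinical data.

BARCODE_TO_INGREDIENT = {"0123456789012": "omeprazole"}

PATIENT_FLAGS = {
    # ingredient -> plain-language caution stored for this patient
    "omeprazole": "Your CYP2C19 result suggests this may work differently "
                  "for you; ask a pharmacist about alternatives.",
}

def scan(barcode: str) -> str:
    """Return a patient-facing message for a scanned product barcode."""
    ingredient = BARCODE_TO_INGREDIENT.get(barcode)
    if ingredient is None:
        return "Product not recognized."
    return PATIENT_FLAGS.get(ingredient, "No known issues for you.")
```

The lookup itself is trivial; the open research questions are where the two tables come from, who maintains them, and how the message is worded so a lay user can act on it.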
So I think there's a whole conversation to have here about patient empowerment and about alternative models for helping patients in a world where there may be information on their Apple HealthKit or other devices that could improve their care or that of their loved ones. Thanks, yeah. I call that the Google Maps problem: almost everybody can use the navigation in their software. We don't have to know the street-level data, but there's some level of ability to interpret what the app is telling us that allows us to get from point A to point B. I see us needing that sort of thing in genomics. So, there has been a robust chat and discussion going on, and there's no way I'm going to be able, in the next two minutes, to capture it all. I hope I've been able to present most of the questions and points posed by the audience members, and thank you very much for all of your questions. Thank you very much to the panelists for your really excellent, thought-provoking presentations. And Jim, I'll turn it back to you to wrap up. Okay, thanks. I don't have any closing remarks. We'll just break now, and I think we come back in 30 minutes. Is that okay? So I'd like to thank all the participants and the co-moderators for session two. As Jim pointed out, there'll be a 30-minute break, so we'll be back at 3:40. And relevant to Carol's comment, we are capturing, I hope, all of the comments and everything appearing in the Q&A. So after the meeting, even if you don't have a chance to discuss it now, we might have an opportunity tomorrow to bring some of these points back for additional discussion, and we'll certainly synthesize all of this after the meeting is over and put it together in the post-meeting materials. So please feel free to enter information into the chat and Q&A; it will be dealt with. So we will see you in 30 minutes.