You know, it's required that I stumble on the down arrow and the space bar for a while, and then I'll go ahead and start. So I want to thank the group that worked with me here, Mike Gaziano, Stephen Kingsmore, Erin Ramos, and myself, around these issues of electronic medical record workflow: the issues around transportability, the interaction with databases, and those sorts of topics. Also, I shamelessly stole Carol's slide format, literally used the slide format that she'd sent out. And so hopefully I have changed enough words, Carol, where it won't look exactly like yours. But anyway, I thought I would make that plug before you got me later. In terms of the importance and impact of what we're trying to look at, the quantity of data that's being generated for each patient has just made it so that medicine can't be practiced the way it used to be. And it's still quite remarkable, interacting with colleagues, even in the academic setting, how much they still want to have that old format. There are still quite a few folks, mainly older than me, who have the cards in their pocket or have the iPhone app that they call their peripheral brain, which really is not very much of a brain; it's more just a list of things and acronyms and such. And so there's an opportunity to take advantage of the informatics space that really is not being taken now. But there's also a necessity that just wasn't there before. The complexities are also not just in one part of it. There are complexities in the generation of genomic data, the annotation of it, the interpretation, implementation, the application, probably other "-ations" that I didn't put in there. The idea that we can just do it and it's simple is not there. There's still a lot to be learned. And this is especially true when thinking about our electronic health record, electronic medical record, whatever you like to call it, which was principally built as a billing tool and has now been morphed somewhat into a clinical management tool.
And yes, there are centers that built their EHR as a clinical management tool. But most of the commercially available products started as billing tools that are now being morphed, and we have all the pains that come with something that is not purpose-built. There's also a need for bi-directional sharing of genomic information: not only taking data from ClinVar or other sources to interpret a readout for an individual patient at the clinical level, but also being able to put that data back into databases in order to then benefit from that Bayesian-type strategy. And then the point I jumped over is that, the way clinical care is delivered now, patients are seeing a number of different clinicians. Certainly in our area, not only do they see a number of different clinicians, neurologists, cardiologists, oncologists, but they might have one up in Ohio as well, where they spend half the year, or as little time as they can, before they come back down to Florida. So there are some complexities there that really make this important. And then, of course, there is the data-information-knowledge-wisdom paradigm, in which we currently spend a lot of our time around data, some around information, very little around knowledge, and then we have no wisdom. So the idea that we need to try to build in approaches to tackle that is part of this. Now, let's see, I hit the wrong button there. Mike gave a couple of slides around the VA program, and I'll talk through them, and then he can, at the end, correct everything that I said. But the reason for showing these slides was just to try to illustrate the complexity of data that is in his system. And I think most of us, or many of us, use the VA electronic health record as our example of how it should be. And certainly on the calls, he was very quick to point out that, first of all, it's not a single electronic health record. Secondly, even within one site, it's a conglomeration of health records.
And so our gold standard, I don't know what he would call it, maybe it's a bronze standard, is what we have in terms of what we're shooting for. And we need to learn from it and work forward. But here's showing just some of the complexities: for an individual participant, there might be survey results, clinical results, non-VA clinical results from CMS, et cetera, and biospecimen and molecular data that come with that, all sorts of data within the surveys and the other sources, the National Death Index, CMS data. And then, of course, I'm not sure how well that shows up, but the amount of data coming in is really quite large. It looks like this slide got squished a little bit when I put it in here, but there's quite a complexity within the data warehouse and then, of course, across the different processes. And so as we're thinking about clinical flow and the issues, there is a high level of complexity just in the structure of the way the data will flow, much less in the analytics that are implemented on that structure. And we'll come back to that in a second. There are related programs playing in this space. Certainly within NHGRI, the eMERGE Network is a key one, and we'll talk a little bit more about that. CSER has some activity, or a lot of good activity, in this space. ClinGen has a lot of activity there. So those are NHGRI programs, and there are probably more that didn't make it onto the slide. And then, of course, at the NIH more broadly, NCBI is active here, and other institutes have activity within the informatics infrastructure. I listed some of the barriers, and more of them will come up during the discussion. But one of them is that the curation of the data is really unique within each of the EHRs. It's not as if we can have an analytical tool that will work in Epic at one site and then work in Epic at another site.
So even within a given EHR, there's a lot of variability, and the idea that one can build some algorithms and quickly implement them is not always true. And we'll come to a slide about that in a second. eMERGE is making progress on that, but there's still a lot to do. Many of the EHR elements are not biologically driven constructs, and this came out of some of the discussion on our phone call. One of the examples that Mike gave was around cancer and how cancer is moving away from an anatomical basis to much more of a molecular-similarity basis. Before, one would identify that it's a breast cancer, then identify that there was estrogen receptor expressed in it, and then use a hormone therapy of some sort, an aromatase inhibitor or a SERM like tamoxifen, to block that estrogen receptor. And one still does that, except one now, especially in advanced disease, looks more broadly, and certainly we have thyroid cancers and renal cancers being treated with the same drug even though they're quite distinct in histology as well as anatomy. And so our EHRs are built under the anatomical paradigm and are not necessarily as useful for the way patients are now being treated. There are also very few practical analytics to aid the use of EHR data. This came up a little bit yesterday, but it is not trivial to develop an app and stick it on the EHR. I mean, those of us who don't do that for a living talk about it quite trivially: well, you know, there's an app for that, we can just do that, right? But the actual implementation is quite challenging, and if it's now something we're using for clinical care, or for patients to use in their own management of their lives, there's a level of robustness required there. And then also there's very little investment in mapping within electronic health records. And, as a non-informaticist, I'll admit it's an extremely boring issue.
It is something where you ask, do I really want the federal funds being spent on that? And yet it is critical if we're trying to make progress with this. So Mike gave the example that across the VA system there are right around 3,000 variables that have the word albumin in their title. Some of those are the measurement of albumin in serum. Some are albumin in another biologic fluid. Some are albumin that has been glycosylated or has had some other modification happen to it. And there are some very distinct uses of each of those. Some of them are redundant. Some are useful by themselves. And the idea that those are just out there and can be easily used is simply not true. There are also differences in the way clinical groups use ICD-9 codes, or soon-to-be ICD-10 codes, in that a lot of the specialty groups are very careful about their use of the codes. Mike mentioned an example around the use of some of the Alzheimer's codes, if I remember correctly: within the neurology community they're very precisely used, but in the family medicine community they wouldn't map back; those codes were really just reflecting dementia of some sort. And so again, using the ICD-9 codes without cleaning the data from a research standpoint is problematic. And if we then want clinical decision support or other things to fire based on those codes, that same mapping needs to be conducted. Now, there certainly are synergies within some of the existing programs. eMERGE is a natural one because it's really doing a lot of things in this space; there's been great progress made, and many representatives are here. For ClinGen also, there are some very nice links there, and I'll come back to that. I didn't put in a slide for IGNITE, mainly because one wasn't easily available, but it could have been.
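The albumin example is, at its core, a variable-harmonization problem. As a minimal sketch, assuming a hand-curated mapping table (all variable names, concepts, and units here are invented for illustration, not actual VA fields):

```python
# Hypothetical sketch of harmonizing messy lab-variable names to single
# concepts, in the spirit of mapping ~3,000 "albumin" variables.

import re

# A small, hand-curated mapping table: the expensive, boring, critical part.
CURATED_MAP = {
    "ALBUMIN SERUM": ("serum_albumin", "g/dL"),
    "ALBUMIN, CSF": ("csf_albumin", "mg/dL"),
    "GLYCATED ALBUMIN": ("glycated_albumin", "%"),
}

def normalize_name(raw):
    """Collapse case, periods, and whitespace so near-duplicates share a key."""
    return re.sub(r"\s+", " ", raw.upper().replace(".", "").strip())

def map_variable(raw_name):
    """Return (concept, unit) if curated, else None. Unmapped variables
    should go back to a human curator, not silently into analyses."""
    return CURATED_MAP.get(normalize_name(raw_name))

print(map_variable("albumin  serum"))   # matched via normalization
print(map_variable("Albumin, urine"))   # unmapped: returns None
```

The point of the sketch is the asymmetry: the code is trivial, but the curated table it depends on is exactly the mapping investment described above.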
The Precision Medicine Initiative, well, the beauty of an initiative that hasn't yet been formally constructed is that it can do everything that we need, and certainly there are very few problems that it is not gonna be the answer to, according to this meeting. So I think as reality comes forward, as we decide what is the bite-size part of precision medicine as opposed to just shoving the whole thing in our mouths, we're gonna figure out that there will be some issues that we can solve, and there will be others that we need other solutions for. And so that, again, is to be determined. Now within ClinGen, some of the work on data flow is certainly being worked on. It's relatively early days, but you have expert groups and other research projects looking at ClinVar in terms of variant-level data, developing a ClinGen knowledge base that is being curated by the expert groups, pulling in data from other sources, and then you get expert curation at the various levels and the different working groups that are out there. And as a participant in this, I think that is a very successful model, but it's a very challenging model in terms of scaling, given the amount of data that's there and the amount of effort that has to be put out even to curate the easy areas. Pharmacogenomics, to me, is the easy area, because there aren't that many genes and there aren't that many drugs. And yet it has been very challenging, not only in terms of getting a consensus around what is actionable and what's not, but also even the wording. Pathogenic has very little meaning in the context of a gene and a drug. And so one of the points of slowness has been working through some naming that will resonate in pathology labs, in clinical pharmacology suites, and in the doctor's office, as well as informatically. Even the simple things still take a fair amount of work.
The good news is that something like ClinVar has a large amount of data, and the number of variants is growing. I think ClinGen efforts will help these efforts grow even further. But certainly there's a large amount of data coming in, and a lot of different groups putting it in. This is just showing some of the groups, a screenshot from Heidi, actually via Erin, of the large number of groups that are submitting data, and then some of the folks that are working on the different ways of naming. You know, as we dig down further, we see a lot more new names coming up for describing different phenotypes. Sorry, Jonathan, for not drawing a beard on your picture there, but I think people will still recognize you. So there's a lot of effort trying to prepare for the clinical flow, for the databases being useful in that model, but there's still a lot of work to be done. And then this is just a reminder from the eMERGE standpoint. There's a phenotype of interest, an algorithm that can be developed, a manual review that can happen, and some analytics around that in terms of positive and negative predictive values. It can be deployed at one site, the testing can happen, and then you get validation at other sites. What's not shown here is that these small little steps that look so easy take a huge amount of work, and eMERGE is one of the few groups that have developed algorithms at one site and then worked with expert collaborators at other sites to really make that happen. And so we have examples where, say, Chris's team might have developed something when he was at Mayo, and it works great on the Mayo EHR, and then the hard work begins to make sure that those analytics can also work at Marshfield or at Vanderbilt, and each of the sites has its own version of that. It's really been illustrative of the hard work that has to happen.
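The predictive values mentioned for this validation step are simple arithmetic once the manual chart review is done; the hard part is the review itself and repeating it at each site. A minimal sketch, with made-up counts:

```python
# Sketch of the validation arithmetic behind deploying a phenotype
# algorithm: compare algorithm calls against manual chart review and
# report positive/negative predictive values. Counts are illustrative.

def predictive_values(tp, fp, tn, fn):
    """PPV = TP/(TP+FP); NPV = TN/(TN+FN), from chart-review counts."""
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# e.g., 90 true cases and 10 false alarms among algorithm-positives;
# 95 true non-cases and 5 missed cases among algorithm-negatives.
ppv, npv = predictive_values(tp=90, fp=10, tn=95, fn=5)
print(f"PPV={ppv:.2f}, NPV={npv:.2f}")  # PPV=0.90, NPV=0.95
```

These numbers only transfer across sites if the underlying data elements mean the same thing at each site, which is why the cross-site validation step exists at all.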
And right now there are very few efforts outside of eMERGE that are really focusing on this kind of phenotyping for the types of diseases we're thinking of. So I think there's an opportunity to enhance that further. Sorry, I'll skip over that. The other point we were supposed to touch on was training opportunities. And really there's not a lot of NIH focus right now on the training of electronic health record scientists in the way we've been talking about it over these two days. There are programs for training computational scientists and informaticists in working with electronic health records, but the ones I'm familiar with are very much focused on structural challenges, not as much on the analytics part of it. That doesn't mean there are none; I said not a lot, not none. But there really hasn't been an emphasis there, partly because there's not a national institute for electronic health records, and partly because NCBI and the others that work in this space have a lot of territory to cover. But we're seeing a little bit of activity in the universities in terms of growing these programs. Not as much activity in the private sector yet, although they like to hire the people from universities. But there is an opportunity there from a training standpoint for someone to try to enhance that. So I put a couple of points on here, and then we really want the discussion to flesh them out into more specifics. But one point I wanna make is that NHGRI should not, cannot, and does not need to solve all of the electronic health record workflow and transportability issues. Now, they are in our way. But NHGRI is not the electronic health record institute, and the problems that we're having, the whole field is having. There's a temptation to come out of here and say, all right, we're gonna recommend that this program be developed to solve the EHR.
It would take all the money the institute has and totally change the focus. In some ways that bullet point is just to make sure that the NHGRI program staff know that it's not solely your responsibility to solve all these problems. A lot of the tools that were designed for billing, which is almost all the EHRs, are not gonna meet all of our needs. And so I think some of the discussion yesterday really brought out that some of the solutions will be morphing the EHR, but some of the solutions are gonna be extra-EHR: something outside the EHR that interacts with it. And certainly within pathology right now, within radiology, within a lot of the clinical tools, the metadata might be available in the EHR, but the absolute data, or whatever the right term is for that, is in a different place, partly because of the practicalities of size and partly because of the lack of need for that level of data. And so there might be some solutions there. I think the new Precision Medicine Initiative will require these kinds of informatic workflows and analytics, and so there are some opportunities to ride on that initiative in terms of addressing some of these issues. And then the last little piece is that training is really not organized on a national level. There is not a K23 or a K08 or such focused around bioinformatic EHR analytics, at least not that I could find; someone may correct me. So I think there are some opportunities to develop there. So I'd like to stop at this point and ask Erin and Steven and Mike to fill in the blanks, and then we can open up more broadly.

Thanks, Howard. I just sort of had two points to reiterate what you already said, not really to fill in any gaps, but I was glad to hear you say it wasn't our responsibility to solve the EHR workflow. Solely. That's "solely" responsible. Right, solely.
But that made me think: what can we do to better engage and collaborate with the Office of Data Science and the BD2K program? I know they recently had a call, for example, for administrative supplements looking at promoting data interoperability, but I don't know, and I'm really curious to know, whether any of that work is in the EHR space. Eric, do you know if they're doing anything in the EHR space?

You know, I'm not entirely sure. I was gonna make a slightly different point; do you want me to make it now? It's about where loci of expertise might sit in the EHR space, and I was actually just looking something up so I could give you a precise time. One thing that is relevant at an NIH corporate level is that this area has gotten attention, especially around the Precision Medicine Initiative, with the recognition that this is one of those classic examples of an area that slips between the cracks of institutes and centers. So keeping that in mind, and recognizing that it has been on many of our minds of late: the genome institute is certainly not gonna solve this problem. We are certainly interested and we'll help if we can, but this is much bigger than us. Some of you may know that Donald Lindberg, the longstanding, many-year director of the National Library of Medicine, retired recently. Rather than immediately launch a new search, since many things have changed in this world since the National Library of Medicine was created, and even since the last time it had a director appointed, Francis Collins appointed a working group of his highest-level advisory group, the Advisory Committee to the Director, to look at the NLM and to rearticulate a contemporary vision for it going forward, one that could serve as a blueprint, if you will, for the new director. And I was asked to co-chair that working group along with Harlan Krumholz of Yale University.
For those who are interested, and this is why I was looking up the time, at 11:15 on Thursday the report of that working group is being presented to the Advisory Committee to the NIH Director in an openly webcast, totally public event. The working group report will be posted online at that time for you to read, and a series of recommendations will be presented; actually, my co-chair is the one making the formal presentation to the ACD. So if you're interested, I would encourage you to watch. And the reason it's relevant is that this was one of the areas the working group recognized as an example of something NLM should play a larger role in, providing intellectual and programmatic leadership, recognizing that it is of relevance not only to the precision medicine issue but to many of the things going on at many of the institutes, and yet it does slip between the cracks. So this is not a solution yet. What's gonna happen now is a search for a new director, who will have the working group report as a blueprint for going forward. Obviously this won't change overnight and will probably take several years, but I do believe this is an area that is now gonna be recognized as important for NLM to show leadership on, and so within a few years one might imagine having a clearer locus of expertise. On an interim level, I think the Big Data to Knowledge program and the Associate Director for Data Science's office are sort of in this area, but I don't know off the top of my head, maybe somebody around here does, whether any of the funded programs under BD2K hit this head-on. But again, any of that I think is gonna be interim; over the long run I think this is likely to be looked at much more by the NLM. Thank you. Erin, did you have other points?
Well, the last thing I was gonna say is that we've gotten pretty far in ClinGen, for example, and some of the other programs, in developing the pipeline for pulling in data from ClinVar and then doing the additional annotation and interpretation of validity, pathogenicity, and actionability. And we're working on getting that out and making it available to EHR systems through some of the work that Mark's doing. But we haven't figured out the best approaches for pulling the outcomes data, like you alluded to, and bringing that back into the system to improve and iterate on the interpretations that we've already made. I think that's a barrier we do need to spend more time thinking about.

Well, I think the EHR group within ClinGen is quite an active group, and so I think there'll be some opportunities there to look at transportability, at building a tool and how we get it into the various places. So maybe we'll come back to you after that. Steven or Mike?

I think on your first slide you made a very provocative point. What NHGRI is doing in this genomic medicine effort is fundamentally different from anything I've experienced before in terms of changing the practice of medicine. Genomes are really the first place where the old system of practicing medicine becomes broken irrevocably, and where physicians, as you pointed out, can't compute, can't cope with the computational needs. That, at a very high level, is revolutionary, and so how do we deal with it? We're essentially going against the grain of routine medical practice. How do we win over physicians and persuade them that it's in their best interest to let go, when all of their training has been about not letting go, in fact doing things repetitively to the point that they can do them in their sleep? So there's a fundamental issue there that's gonna be tough in terms of practical implementation of genomic medicine.
There seem to be two ways of doing that. One is to win physicians over, but the other is to build an essentially automated delivery system for genomic medicine, and I think that's inevitable. I don't think we can retrain medicine to cope with genomes. So the overarching theme here is building a system that delivers somewhat automated genomic medicine: it starts with obtaining phenotypes by natural language processing, to computing which patients are gonna benefit from testing, to then running the testing and automatically interpreting it, and then turning that into clinical practice guidelines and alerts that go back to physicians. It's almost autonomous. There are elements of that that I don't think we've talked about, but I do think at a high level there has to be some consideration of whether this is the right way to go. Are there alternative approaches? And how do we engage physicians, physician leadership, medical school deans, all those sorts of folks, and have them participate in this dialogue about recreating genomic medicine?

Yeah, thank you. And I don't think that this is a revolution. This isn't the defenestration of Prague or something. This is more the British invasion, in terms of the Beatles, where the kids are won over and then pretty soon the adults are humming along too. Certainly many of us at our centers are trying to figure out how to do this so it can be delivered in a way where people almost don't even know what's happening, so they can use it in practice without having to think they have to retrain. So I think that's great. Mike, before we open it up more broadly?

I think that's an excellent point. Historically, we did the computations in our heads, and we got some data from these clunky billing systems, but now some of that computation has to happen in the system.
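As a toy sketch of the somewhat automated pipeline just described, where every function, rule, and vocabulary entry is a hypothetical stand-in (real NLP phenotyping, eligibility criteria, and variant interpretation are each major systems in their own right):

```python
# Toy skeleton of an automated genomic-medicine pipeline:
# NLP phenotyping -> test eligibility -> interpretation -> alert.
# All names, rules, and vocabulary entries here are invented stand-ins.

def extract_phenotypes(note_text):
    """Stand-in for NLP: naive keyword spotting over a clinical note."""
    vocab = {"breast cancer", "dementia", "pneumonia"}
    text = note_text.lower()
    return {term for term in vocab if term in text}

def eligible_for_testing(phenotypes):
    """Toy eligibility rule; real rules would come from clinical guidelines."""
    return "breast cancer" in phenotypes

def alert_for_variant(variant_class):
    """Turn an interpreted variant into a CDS alert, or stay silent."""
    if variant_class == "pathogenic":
        return "ALERT: pathogenic variant found -- see management guideline"
    return None

note = "58F with ER-positive breast cancer, here for follow-up."
if eligible_for_testing(extract_phenotypes(note)):
    print(alert_for_variant("pathogenic"))
```

Even in this caricature, the design question raised above is visible: each stage can run without a physician in the loop, which is exactly why physician leadership needs to weigh in on whether and where that is appropriate.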
We need to assist the practitioner. But just a couple of points about the EHR problem at large, and then the parts of it I think we should try to focus our attention on. There are many people working on various aspects of the EHR problems. One is interoperability for clinical purposes, but that's a different problem from the one I think we're trying to solve. You want a doctor sitting in Ohio to be able to see the same data that was there when the patient was admitted with pneumonia in Florida in February; that's one flavor of the problem, that kind of interoperability of systems talking to each other. What we're trying to do, and where I think we should play a bigger role, is some level of curation of that data. And I do believe, to echo Dan and Chris, that a major investment in mapping the human phenome, for our particular purposes of understanding the relationships between omics and phenomics, is well worth it. Each EHR, and even a quote-unquote single EHR like the VA's, has very unique aspects to its environment, and so I don't think we can underestimate the importance of what you said and what eMERGE has done in engaging experts within those systems. I think some people have the misconception that you can just take data from lots of EHRs, dump it into some central place, and have it be quite usable, and there are a lot of reasons why we need to rethink that particular construct. And then there's the curation of the phenotypes in ways that we've never thought of before. It's not just the simple grabbing of the ICD code, because the ICD might be wrong; the codes were made for billing. We've had some successes in common-variant issues, but in a lot of cases we've grabbed low-hanging fruit. We tried to do neuroleptic malignant syndrome as a phenotype, and we just decided we couldn't. There was just no way. It was a diagnosis of exclusion, and the diagnoses were so circular.
There are codes for it; it's just that when you look at the charts, we couldn't come up with a gold standard from a clinician. The other thing is that what we're doing is gold-standarding to maybe 95% certainty from a clinician's perspective. The clinicians are looking at the chart and saying, yeah, he's got the disease. But what if the disease is a syndrome that's actually 20 different physiologic entities? We've created an algorithm for something that's heterogeneous. So we need, I think, to begin to really rethink the field in general. The purpose we're mapping for matters: interoperability is different for different purposes. A clinician wants to see exactly the same data that the other hospital has. What we need to do, when we talk about interoperability, is make sure we're talking the same language from a biologic perspective. So I think NHGRI could help in defining that process. And I think any collection of bright people trying to solve the same problem could create some standards in setting the process, perhaps something like eMERGE has done but on a grander scale: creating libraries of curated algorithms, of solutions to some of these problems, that are accessible by others doing genomic or non-genomic research from that biologic perspective. So I do want to reiterate the need for investment in this area, in thinking about how we get lots of these different systems to talk to each other as we're trying to compile larger and larger resources, with PMI being a very good example. I think NHGRI can do an awful lot in this space.

Yeah, it is. It is. And one of the things we talked about last night is that we're excited about trying to engage more with Canada. And this is one of those areas, among many, where there's a commonality of the issues.
And as we're looking at transportability across EHRs, each of your centers has EHRs, each of our centers does, so there are things like that that we could look at at a grander level to try to work on the genomic component. And so I think there are some good opportunities there. Mark, I know you had your hand up, and then there were a bunch of others; I think Heather was maybe second. Go ahead, Mark.

Yeah, two things related to the points that were brought up. One, just a brief reiteration that in the training space, ACMG and the American Medical Informatics Association are exploring training opportunities related to the Clinical Informatics Fellowship and genomics. I'm leading that group along with Bob, who is involved as a past chair of the Genomic Medicine and Translational Working Group of AMIA, along with Jessie Tenenbaum from Duke. One of the things we need to follow up on is to see what training NLM is doing in this space and whether there would be opportunities to partner with them and develop something; we'd also be looking for potential funding opportunities there. The second thing relates to the electronic health record. You mentioned that one of the things we're looking at within the ClinGen space is information movement back and forth. I would have to say that for this grant, what we're really looking for is access to the ClinGen resource, as opposed to actually having clinicians put information into the ClinGen resource through the EHR. Ultimately we wanna be able to get there, but I don't think that's envisioned within the first round of funding. But if we have a vision that that's where we wanna go, then what we can begin to instantiate within the connection is use of the standards that we know would allow that going forward.

I just wanted to mention briefly that we do have a couple of training opportunities within the Extramural Research Program at NHGRI.
We offer the K01 and the K08; these are mentored scientist development opportunities. For the K01, we focus on genomic science. So if there's a biologist or an engineer who wants to train in informatics or technology development, they would be eligible to apply to get that cross-training. Similarly, for the K08s, we're looking for MDs or PhDs who wanna receive cross-training, again in informatics, along the lines of electronic health records and technology development, to really be able to push this implementation of genomic medicine into the clinic. These are new areas for NHGRI, but we have those two opportunities as well. And then, stepping back, we also support several pre-doc and post-doc fellowships within the realms of genomic medicine and genomic science. And we have a new genomic medicine T32; these are institutional training grants. The T32 for genomic medicine is limited to post-docs only. We're looking at supporting somewhere between four and six trainees over a five-year period, to really immerse them in an institutional training program where they receive didactic coursework, they have mentorship, and they really start to understand how to put this into the clinical workflow.

Great, thank you.

Yeah, I wanted to come back to the points that Steven and then Mike made a little while ago, and I think it's a central issue that needs to be addressed. One, as I also mentioned yesterday, is the focus on the provider, where the providers and the doctors are overwhelmed with the demands of genomic medicine and need help. And hence the idea of, well, how can we create an automated tool, a tool that can automate the process of injecting genomic information into the decision-making process? I think it's an essential one. And I would also add that looking to the EHR providers for this may be a bit of a frustrating experience, and has been to some extent already.
And hence the conclusion would be, in my mind, to think seriously about how we can actually push and incentivize the development of systems that take on this task of automating genomic medicine but live outside of the EHR. And essentially all they have to do is interact in real time with the EHR. And all the EHR solutions that I'm aware of really have that plug-and-play capability. And so I think one important aspect of this particular conversation can be to think in a concrete way about promoting efforts toward automated systems that enable genomic medicine, support physicians who are overwhelmed, and interact with the EHR without directly being the EHR. Thank you. Rex and then John. So I think Heidi's not here, but I sort of feel obligated, since it was in the title, actually, to talk about the importance of us trying to figure out how to facilitate all the various clinical laboratories out there being able to submit in an easy way to ClinVar. So creating maybe, to use your term, plug-and-play pipelines that some of the clinical laboratories connect to, and thinking about the fact that we could really maximize the flow of information if we made it easier. So it seems to me that should be high on the agenda, to think about how to achieve that. Heidi would have said it much more elegantly, but I think it's an important thing for us to be doing. Thank you. Thank you. Jonathan, did you have your hand? Oh, I guess he was holding his hand up for you. Thanks, Jonathan. I wanted to say a few words about the network formerly known as the HMORN, which Mark has mentioned a few times. The HMORN has actually been working for about two decades on the problem of trying to extract phenotypic information from the medical record, and doing this in a way that allows collaboration across the 17 sites in the HMORN.
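Coming back to the point above about systems that automate genomic decision support while living outside the EHR: a minimal sketch of that pattern might look like the following. The observation fields and the rule table here are illustrative assumptions, not any vendor's actual interface; a real integration would go through standardized mechanisms such as SMART on FHIR or CDS Hooks.

```python
# Sketch of an EHR-external genomic CDS service. Assumptions: simplified,
# FHIR-like observation dicts and a toy rule table; a production service
# would authenticate to the EHR and use standard resource formats.

# Hypothetical knowledge table mapping a (gene, phenotype) finding
# to advisory text a clinician can act on.
RULES = {
    ("CYP2C19", "poor metabolizer"): "Consider an alternative to clopidogrel.",
    ("TPMT", "poor metabolizer"): "Reduce thiopurine starting dose.",
}

def genomic_cds(observations):
    """Consume structured variant observations pulled from the EHR in
    real time and return advisory 'cards', without living inside the EHR."""
    cards = []
    for obs in observations:
        key = (obs.get("gene"), obs.get("phenotype"))
        if key in RULES:
            cards.append({"gene": obs["gene"], "advice": RULES[key]})
    return cards
```

A service of this shape only needs a real-time read path into the record; the EHR itself stays unmodified, which is the plug-and-play property being argued for.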
And I think one of the challenges that the HMORN has had in developing collaborations with people outside of the HMORN is the lack of recognition of this area of expertise. And I don't even think that we have come up with a term at this meeting that is consistent across all the panels for what we even call this area of expertise. I call it medical informatics, but I'm not sure that's the right word; we need a name for it. And I'm reminded that even when I was training in biostatistics, that area of expertise was not fully recognized as something that you need to have on your project. And even at that time, biostatisticians often wouldn't be considered important enough to be a co-author on a paper, but would just be in the acknowledgments section. And I think we really need to recognize what this area of expertise is, give it a name, and require it on our projects. I think it exists out there, and there have been people working on this for a very long time, but it's really under-recognized. Yeah, that's right. As a physician who has been practicing genetics, and whose first job in human genetics was in 1979, I've been steeped in genetics for a long time, but I did want to push back a little bit on the concept that physicians can't compute, or will have difficulty incorporating genetics and genomics into their practice. I think the key point is that it needs to be made relevant to what they're trying to do for their patients. The practice of medicine involves on-the-fly computation of probabilities in your head all the time. That's what we do. So what you need are tools that assist you in doing that. Physicians use computational tools, or computation-intensive data, all the time. Think about CT scans and MRIs, where the computation is done, but they don't spit out an answer that says there's a brain tumor here. They spit out an image, which still requires careful interpretation.
So I think that we need not assume that the providers can do everything; we also need not assume they can't do anything. We need to provide them tools that help them do exactly what they're doing now. Yeah, that's good. Mark? Yeah, what we have here is a problem in the sense that even if you take genomics off the table, the amount of data exceeds human cognitive capacity, and that's inarguable. And so the idea, and I think the point that was being made, perhaps not as clearly or elegantly as it could be, is that the best a human can do is about five variables. And medicine, every single time we see a patient, involves way more than five variables. So what do we do? Well, we pick five variables, and then we make a decision. And that's why medicine operates at half-a-sigma reliability. So the issue here is that we're in an era of data complexity, even without genomics, that is going to require expert systems to assist in consistent, high-quality delivery. And that means developing fully functional electronic health record systems that can really provide that synthesis, so that we can present to clinicians the data pieces that are most critically important for clinical decision making, and then let the clinicians use judgment, based on their patients, with a number of variables that they can actually deal with in a reasonable way. So genomics is adding to that problem, but it didn't create that problem. But we need to add to the solution, which is now just in the early stages of how we develop a highly reliable approach to patients. And the only group in medicine that's done it is anesthesia, where they've gone to essentially airline-style checklists and other things to deliver care that's at a four-sigma reliability. So this is a major issue for healthcare, and it represents the move from craft-based training to a 21st-century mass-customization approach, which is what precision medicine is really all about.
And our training programs at the present time are still essentially working off of a medieval apprenticeship model. I agree with all of that. So I'm just wondering if we can kind of come back to where NHGRI might be able to help in this space. Sounds like... It was obvious, Gary, for God's sake. Like probably, maybe, yeah, that's right. So the world is coming apart and it's a terrible thing, but hopefully there are places where we can work. So where we have focused in the Emerge program, and maybe some of the others, is in defining phenotypes. Not because phenotyping is our business, it's very much not our business, but the only way to do genomic research was to define these things, and nobody else was going to do it, and so we did. So we would like to see that farmed out to our sister institutes, and have you take on a number of others that we just can't handle within our programs. But in addition, there's this question of how do you best integrate genomic information? How do you provide it in APIs, or however it is that one does that? That seems as though it is within our remit, and it's something that we can do. And are we doing enough in that area? Or, we're never doing enough, but can we do it better? Can you help us to sort of figure out and prioritize what issues we really can tackle? Well, you know, I'm stuck on the naming problem, and I very much liked your presentation with the 57 flavors of albumin, or 3,000 flavors, or however many it was. Shades of gray. There you go. In the context of genomics, I mean, this point was raised yesterday, and Bob and others have been working assiduously to solve it, but when I was babbling on about grammars yesterday: the way we name genomic variants is not amenable to their application and use in a clinical context. Clinical decision support environments have to be able to grab a nameable entity. And we all know that the star-allele system is collapsing under its own weight.
That will not serve as a framework for nomenclature of genomic variants. I think one thing that is relevant to NHGRI's remit is how we go about consistently and comparably naming and identifying genomic variants so that they can be inserted into the clinical process in a consistent and reliable way. Until we have that, it's sort of left as an exercise for each organization to solve. It goes back to the old, dreaded laboratory test naming problem: why every laboratory in the country feels it has an inborn right to come up with its own darn names when perfectly good LOINC names exist. We're seeing that phenomenon again on the genomic side, and it might be timely to nip that one in the bud with a concerted effort on nomenclature for genomic variation. So just to address that, aren't there HL7 groups and others that are tackling this? And Jonathan, maybe you're familiar with them, or Chris? I'm not familiar with the HL7 group. I just wanted to put in a plug for the ClinGen data model working group, which has basically spent probably the last six months really fleshing out an allele model to describe exactly what you're saying. The very particular definition of what an allele is and how you represent it, so that you can have essentially a standardized naming system for every variant that's possible. And so that is something that ClinGen is working on and trying to harmonize with GA4GH and other groups that are doing that. I would urge you to harmonize also with the clinical space. Yes. Because as I said yesterday, for reasons that are obscure to me, although I'm guilty, academics feel they have a blank sheet of paper and that there's no real world out there. And the fact that you're not familiar with the HL7 activity in this area is actually somewhat disturbing. I'm not running that data model group. But they're actually discussing FHIR and other systematic terminologies.
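The standardized naming the allele model is after can be illustrated with a toy normalizer: every surface form of a variant, whether star allele, legacy lab shorthand, or HGVS string, resolves to one canonical, computable identifier. The alias table below is a simplified stand-in, not an authoritative mapping.

```python
# Toy variant-name normalizer. The aliases are illustrative placeholders,
# not authoritative HGVS or star-allele assignments.
ALIASES = {
    "CYP2C19*2": "NM_000769.4:c.681G>A",             # star allele
    "681G>A": "NM_000769.4:c.681G>A",                # bare legacy shorthand
    "NM_000769.4:c.681G>A": "NM_000769.4:c.681G>A",  # already canonical
}

def canonical_name(name):
    """Resolve any registered alias to a single canonical identifier,
    so decision support can 'grab a nameable entity'."""
    norm = name.strip()
    if norm not in ALIASES:
        raise KeyError(f"unknown variant name: {name!r}")
    return ALIASES[norm]
```

The point is only that all three spellings compare equal after normalization, which is what lets a decision-support rule key on a single identifier.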
I was just going to add, they do have representatives, like Sandy Aronson, that are actively engaged in HL7 and working with the IOM, I guess it's the DIGITizE program now. So we're trying to make as many links as we can, but on the point you raised about having the clinical perspective: we were discussing it at our steering committee meeting, and we recognize it, and we are trying to bring in folks like Heidi and others that are sitting on that group. But it's a good point. Yeah, I'd like to get back to what Chris initiated here in terms of the conversation about standards and terminology with regard to variants and alleles. And while I absolutely agree that it's very critical, and that groups need to work on that, and I'm glad to hear that this is happening, I don't believe that this is the essential task for delivering genomic information into clinical practice. I think this gets us to the last mile. But the last mile, to make it useful for physicians, and for us all to achieve the outcomes that we hope to achieve, so that the physician makes the right decision and acts on genome information, needs to be a translation of alleles into what they mean for the care of the patient in front of the physician. I think the physicians are not going to act on us telling them about alleles. We need to actually translate what it means in terms of risk, or in terms of prescribing practice, for the patient. And so I think there's a translation tool that's needed, rather than a convention and nomenclature tool. And I actually think this is something that we can focus on: going the last mile into practice, which takes it from all the conventions and nomenclatures and translates it so that the physician is comfortable using the information.
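The "last mile" translation described here, from raw alleles to something the physician can act on, might be sketched as a two-step lookup: diplotype to phenotype, then phenotype to recommendation. The tables below are simplified placeholders for curated knowledge of the kind CPIC publishes, not actual clinical guidance.

```python
# Sketch of allele -> phenotype -> action translation.
# Both tables are illustrative only; not real prescribing guidance.
DIPLOTYPE_TO_PHENOTYPE = {
    ("*1", "*1"): "normal metabolizer",
    ("*1", "*2"): "intermediate metabolizer",
    ("*2", "*2"): "poor metabolizer",
}

ACTION = {
    "normal metabolizer": "Standard dosing.",
    "intermediate metabolizer": "Consider alternative antiplatelet therapy.",
    "poor metabolizer": "Avoid clopidogrel; consider an alternative agent.",
}

def last_mile(allele1, allele2):
    """Translate raw alleles into a phenotype plus an actionable message,
    so the clinician sees guidance rather than nomenclature."""
    # Sort so ("*2", "*1") and ("*1", "*2") hit the same table entry.
    phenotype = DIPLOTYPE_TO_PHENOTYPE[tuple(sorted((allele1, allele2)))]
    return phenotype, ACTION[phenotype]
```

The design choice is that the nomenclature never reaches the clinician: it is consumed at the lookup stage and only the phenotype and recommendation surface.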
But I think part of our goal needs to be to make genomics less special at the clinical level, because from a practitioner standpoint, genomic information is not that different from the other types of information that are coming in. We think it's special, and there definitely are nuances that we need to solve. But often we try to make it more special from a job-security standpoint, and I think our goal needs to be the opposite. I think Bob, or do you have a point directly on that one here? And then Bob, bear with me. Yeah, so, more related to physicians using the genomic information: we've had some recent progress from the CSER and Emerge consortia, who have recently co-authored a paper regarding sources of heterogeneity in the EHR related to genomic information and how it's stored. And that paper really highlights the sources of variability that come into the EMR from the various clinical labs, the genomic information, and clinicians' notes, and just how heterogeneous all of that information really is, and how hard it is to standardize across various categories of information. So I think we're starting to define the gaps and the opportunities; that paper is in press at JAMIA, I believe. So I think we've had a little bit of progress in that area, and I think it directly speaks to the continued need to do additional work in this area. Great. Bob? Thank you. Just to clarify, the ClinGen project is actually quite tightly aligned with what's going on in HL7 and the IOM Action Collaborative DIGITizE. Larry Babb and Sandy Aronson have done an incredible job of reaching out to all the groups that play in this space and trying to align what's going on in ClinGen with these other efforts. And so, it is true that it's a bit like herding cats, but at least they're on the job and achieving some success.
On the topic of genetic nomenclature, you know, each of the systems in current use today was designed for a particular purpose. They're fit for that purpose, and that means, by extension, that there is no one right tool for every job. What a computer needs, and what an EHR system would work best using, is not what a clinician wants to see in front of them as they're caring for a patient, and vice versa. And so we shouldn't take a nomenclature that was designed for human readability, shove it into the EHR, and pretend the EHR is going to be able to compute on it. So there are different systems that are needed, and what we need to do is make sure that each system that is required is fit for the purpose it is intended for and does that job well, and then not create a square-peg-round-hole problem by reusing that same system someplace else where it wouldn't necessarily be as good a natural fit. Then, of course, we do need very robust translation systems that will allow us to translate one of these naming schemes into another, so that we can cross that bridge between humans and computers. Thanks. And many of us survived the change in liver tests from SGOT to AST. It took us a while to change the way we said it, but we did survive, and we still knew how to use it and all that kind of stuff. So we can go from star nomenclature to something else, or whatever it needs to be; it can happen. I think Trina and then Rex. I wanted to make the point that I think a lot of the consortia that are here today are really focused more on the discovery and identification of people who carry risk alleles. But we need to think a little bit beyond that. The job is not done when we've identified people and given them a diagnosis; we actually need to think about management of populations, and that's made really difficult by the fact that there are not specific diagnosis codes for most of the genetic conditions.
Not even for something as common as Lynch syndrome is there a specific diagnosis code, and it's really hard to find people after they've received that diagnosis and it's in their EMR, because they're mixed in with so many other diagnoses. That makes the problem of managing that population very difficult. So I think that's another area where we need some work on standardization. Even in ICD-10 there's not a code for Lynch syndrome. I think NHGRI can have a role in the genomics part of that. So I think that is something where, I don't think there's necessarily money in this particular case; it's more trying to put focus and pressure on some of these larger initiatives to pay attention to the genomics portion. And even with the EHR vendors, there was a little bit of a turning point when they were brought together at one of the previous meetings, for GM5 or whatever it was, when the EHR vendors were there. And they were like, okay, I guess somebody does care about this, maybe we should pay attention. And now the market is causing those changes. So that's good. Rex? Yeah, I just want to emphasize something you said, Howard, and I know that Emerge is paying a lot of attention to this in the EHR working group, and I assume the interaction between Emerge and CSER probably is doing the same. And that is the simple fact of putting a genetic variant into the electronic health record as a lab value. That seems to be, I think, making good progress, and simply by doing that, you create it in the health record in a way that it becomes a computable entity that, as I talked about yesterday, you can merge with a knowledge base such as ClinGen to actually produce clinical decision support that can then fire just as we do for any lab value in the EHR. Yeah, I think we've all lived through trying to hunt down the PDF that was uploaded someplace in the EHR and has the genomic data.
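Rex's point about the variant as a discrete lab value, versus the buried PDF, can be made concrete. The field names and the LOINC code below are illustrative assumptions; the contrast is simply that decision support can compute on the structured form and cannot on a scanned report.

```python
# A variant buried in a PDF is opaque; the same finding stored as a coded,
# lab-style result is computable. Field names and code are illustrative.
pdf_report = "See attached PDF: pathogenic variant noted."  # not computable

structured_result = {
    "code": "LOINC:69548-6",         # hypothetical coding for the result
    "gene": "MLH1",
    "interpretation": "pathogenic",
}

def cds_should_fire(result):
    """Fire decision support exactly as for any other discrete lab value:
    only structured results with an actionable interpretation qualify."""
    return isinstance(result, dict) and result.get("interpretation") == "pathogenic"
```

The rule never fires on the free-text report, which is the sense in which an uploaded PDF "hides" the information from the system.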
And it's a phenomenal way to hide information. I mean, the NSA can't even find it, so, anyway. Yeah, I just want to come back to the vendors. And this is something that Aaron and I have talked a little bit about. We've talked about making sure that we're connected in with all the different standards organizations and are using that. But there's also some utility in having the vendors engaged, to understand what the end game is here. We were meeting with one of the major vendors a couple of days ago, and it was interesting to me that when we were talking about some of the strategies that we were using, like info buttons and that sort of thing, they said, well, we think info buttons are a dead end, we're going this direction. But they're in meaningful use. So there's some push-pull within the vendor community in terms of what they're doing. And the same thing goes for HL7, where they're saying, well, we think HL7 is not going to be the way to go on this. And so the crux then is, well, what are the rules for engagement? If we'd like to have vendors involved in the EHR section of ClinGen, because we have an interoperability goal there, under what circumstances can we involve them? Do we have to have all of the vendors there? Some of the vendors? Could we do it if there's only one vendor interested? And so that's not an interesting research question at all, but it's a logistical issue on which we need some guidance, about how we can actually effectively engage and utilize the resources that are available. The IOM has the space to be able to operate in that type of thing, but it's not as clear that that can be done within the context of an NHGRI grant or cooperative agreement. Thank you. Other points? So Howard, did we get that sort of crystallization of the things that NHGRI can do or should do in this area? We've heard what it shouldn't do. It shouldn't be responsible for all of it.
I think crystallization might be an extreme word, but I think there's a couple of elements that came out. One was NHGRI advocating at the pan-NIH level to make sure that attention to the EHR, and certainly the genomic component, is part of that. I think that from a training standpoint, I mean, Heather mentioned some of the training opportunities; maybe some of those need to be crafted around this area, or some joint training with NCBI or others could be something that comes out of this. I think the Emerge activities around transportability and such need to be broadened, and I think they will be. Certainly interactions with the VA, with Canada, with other groups would help with that. I know a lot of that is happening, and certainly the Air Force is active in there, and others, so it's broadening out beyond just Epic and Cerner, but I think that that's one area that could be put forward. I don't think that we wrote the start of an RFA, nor would we ask to, nor would we want to exclude ourselves from applying by doing that, so we didn't give you that part. But those are at least some of the things that came out of it, and hopefully during the exchanges after the meeting we can enhance that even more. Steven? Just to add one thing: I think we've identified a need for additional software development. There are gaps in terms of the software pieces, and it's unlikely that without NHGRI involvement they will soon be filled. That would be a place where there has been historical precedent for investments that have been transformative, say, in genome analysis, and now we need to migrate that out a little bit into patient identification and clinical decision support, helping to make the entire enterprise more usable by physicians and other healthcare providers. I think that would be huge.
And I would also ask: we've talked a lot about the electronic health record aspects of this, but we haven't talked much about clinical workflow. And it seems as though that is maybe not as big a barrier, but at least is a barrier in some places, to being sure that we have a rapid enough turnaround. And Steven, obviously you've done fabulous work in that area with the NICU. So are there issues that we need to address there, or are we kind of confident that that's moving along at its own pace? Clinical workflow is always local. And so I think the issue there is, as the point was made, I think by Rex on Heidi's behalf, that the laboratory workflow, in terms of getting information into ClinVar, is probably something that's more amenable to a generalizable solution. But for the clinician workflow, I think the point of emphasis needs to be more on having the tools available that would allow the potential for deposition; I don't think there's a way to really research a generalizable solution for clinician workflow entry. So I would tend to deemphasize that as an area of priority for research. Bob, and then Jeff. I'm just going to sort of second that as well. And I think that the focus for clinical workflow should be on the kinds of tools that help manage the data, and on thinking about timing. I mean, besides the income from practicing medicine, the second most valuable thing to providers is their time. And the kinds of efforts that you have to go through now to put together a case, figure out what testing needs to be done, interpret it, and deliver that to the patients and so forth take a lot of time. That's a huge area where clinical workflow can be improved. I think one thing that's new about genomic medicine is that it's pulling genome data to users who traditionally weren't involved in it. And they don't necessarily have the informatics support of a powerhouse genome center. And the software universe is very, very fragmented at present.
And so we wind up spending months. I mean, if I just think about our insight teams, and how some of them spent months and months collating, collecting, and comparing the various bits and pieces that they needed to put together to build a workflow or a pipeline, I think a lot could be done to help with that, and it would assist an awful lot of new entrants to this area in getting up to speed. Jeff and then Erwin. So a lot of what we've been talking about in this section has huge economic implications for the electronic health record business community. So I wonder if there's a research stream here that could actually be done, and I don't know if this is NHGRI, that could build the business case for why the vendors really should be paying attention to this area, so that it engages them in why it's better for their business to move in this direction, as opposed to having it sort of pushed on them from a very engaged and important research community. So I just want to throw out the notion that an economic model for the incorporation of genetic and genomic information into the EHR could be a topic for research and investigation, just as the director's office has broadly engaged in the economic model for personalized healthcare. This should be at least a component of that. Yeah, I'm glad you mentioned that, Jeff, because I have a different opinion, and that is, if I may: we've had a lot of experience working with some of the big vendors, and I think we all, in Emerge and outside of Emerge, have had the experience that the EHR vendor community is focused on satisfying and responding to very different pressures. And I think that, as a matter of approach, we don't necessarily need solutions that come from within the EHR community to get genomic information into the EHR, or to the fingertips of the clinician.
The EHR systems, by and large, are interoperable, and, as was mentioned, I think the focus, in my view, for a more speedy delivery of genome-informed CDS, should be on developing tools that interact with the EHR and translate the message just as well as if the message had been generated within the EHR, rather than trying to engage the vendor community again to that cause. Can I just reinforce what Erwin said? In Emerge, we've had at least one experience, and in the broader Emerge-CSER community I think even a second experience, of actually bringing in the EHR vendors, and it's pretty clear they really are sort of waiting for us to tell them how to do it. And the second piece: I couldn't agree more about the need to make the economic case, but before we can make the economic case, we've got the bigger problem to solve of how the payers are actually going to pay for doing this. And I think that becomes the second- or third-step-away problem; I think we need to get some of the more proximal ones solved first. It's a real problem, but I just don't know how we tackle it, at least not at the NHGRI level. This has been very, very useful. Teri and I... Do you want to see if there's anything else? Yeah. Chris looked like he was about to, but no. All right, so Teri and I, from a time standpoint: we still have plenty of time, and we're not planning on going over our appointed time, but we will move the break up. Right now we'll go to a 20-minute break, and then we'll start off with Mary and the rest of our team in panel eight at that point. So we could be back here at 10:40, sorry, 10:39.