Let's head through the final session. I've asked them to leave the food in the back until we're finished, so don't feel that you need to go racing back there to grab something. You can get it on your way out. In fact, you could probably get quite a lot on your way out, but at any rate. So you may have noticed that we sent around these slides from last night to everybody. I got a few comments back from some diligent folks who were looking at their email late at night, and more comments would be welcome. There were Blackhawks fans trying to get over the Tampa victory, but we'll leave that one alone. Well, Tampa, eh? Yeah, but at any rate. So what I did was just to show in yellow, because you know I'm a color-oriented person, the things that have changed from the six panels that you saw previously, and again we'll send this around. The only real changes here were the addition of the need for criteria for quality and types of evidence that we should support. Oh, this was the mandate question, that we should, you know, mandate, as it were, identification of types of evidence across these programs, and it seemed like maybe calling that supporting or encouraging or whatever would be better. And then it was suggested, and we wholeheartedly agree, developing collaborative projects with Genome Canada and other groups, but particularly Genome Canada up here. We would love to have the chance to do that, so that would be great. There was a point made that we should place an emphasis and structure on sharing phenotypes similar to the emphasis that we've traditionally placed on sharing genotypes, because we're currently at a point where we really need the phenotypes to be able to understand the genotypes. Having said that, I did not appreciate the complexities involved in sharing phenotypes. I was trained as a cardiovascular epidemiologist, and we used to share blood pressures and all that all the time, and it really wasn't a big concern.
When it comes to somebody's actual medical record, there are big concerns there, particularly having to do with identifiability, but also a lot of regulations and statutes around HIPAA and what can be shared and what can't. And so the challenges involved in that are not trivial. And I just kind of wondered from this group: what do you see as NHGRI's potential contribution in the sharing-phenotypes area, other than to tell everybody they have to do it? I mean, are there ways we can facilitate that? I think that perhaps participating in discussions about safe harbor, about under what circumstances data can be shared and under what conditions. I think that's the sort of thing that needs to be done. And I don't think it's something that NHGRI can do on its own, but you might be able to facilitate discussions on the creation of safe harbors so that there's at least a level playing field and a shared understanding. So could I ask, because not everybody is familiar with that term, and not all of us use it in the correct way, perhaps, could you define it? So this is certainly not a definitive definition, and I can't get to Wikipedia fast enough. But a safe harbor would be a situation under which data that usually could not be shared under certain rules, regulations, legislation, whatever, is allowed to be shared under a certain set of conditions. Basically, if you meet those sets of conditions, then you can share without fear of liability or other sorts of legal exposure. It's a protected type of thing. And so this is not a perfect analogy, but we heard talk about FISMA compliance. FISMA is a certification process and a set of rules under which somebody can collect data. So the Newborn Screening Translational Research Network is a FISMA-compliant repository, which allows them to collect data and hold it under a certain set of regulations and policies and procedures that provides a safe harbor for the sharing and use of those data.
Alexa, you have some familiarity with that. It's FISMA compliant, right? As I understand it, the UDN is FISMA compliant because it takes data from an NIH program; it takes federal data. Exactly right. But I think, as we've said in our discussions, that's probably going to be the way of the world going forward. So I'm wondering if there's an opportunity. I think there's an assumption, for example, in dbGaP, that all the data go up there, and in general, while there's some approval process, somebody can access and use those data without the involvement of the primary investigators. And I think the irony is that to access NIH-funded clinical trials, that's often not the case. So, for example, we just put in a proposal that is not genetics-related but would use ALLHAT data, and the rule for accessing ALLHAT data was that you had to have an ALLHAT investigator. So I'm wondering about one of the solutions to the phenotype thing, which has in part to do with just getting the data up there. But a lot of times the phenotypic data, whether they're from a clinical trial or from an electronic health record, are very complex, and they're not easily understood as a data set if you haven't been part of that data generation. And I'm not sure that this is completely different from what Mark was saying: you put up what phenotypes are available, or what kind of phenotypes might be available, but then someone would access those through a collaboration with the investigators at the site. So it's kind of a middle ground, and it really aligns more, I think, with how data are shared from NIH-funded clinical trials. Well, it depends on the clinical trial and also the vintage of it. ALLHAT was something that began in the late 1990s, so it really predated, and in many ways helped us put pressure on, some of the data sharing models.
But NHGRI tries very hard, using the model of the Human Genome Project, where there were absolutely no restrictions whatsoever on using the data, other than that you couldn't publish before other people. Basically, I think requiring people to have a collaboration just wouldn't work; that's just not in our DNA, as it were. On the other hand, we do try to encourage collaboration, and when we began the GAIN project, the Genetic Association Information Network, which was our first foray into genome-wide association studies, that project and the Framingham study, again their first foray into genotyping, were the impetus for putting dbGaP together. And when we did that, we said we don't want there to be any strings attached whatsoever, but we encouraged, in the design papers that described it, that you'll be much better off if you work with the investigators involved, if they're willing to work with you and they have the bandwidth and that sort of thing. So I think we may not be able to go quite that far, but it's good to think about. Right, and I don't necessarily see it so much as being about the investigators. I mean, I think you have a lot better end product if the primary investigators are involved, because they really understand the data. But in all honesty, there's a lot of phenotypic data that's not in dbGaP, because people are meeting their minimum requirements from the phenotype perspective, and for a variety of reasons of complexity and data sharing, there's more there. And so you just wonder, especially when you move into talking about electronic health records, which clearly you couldn't put up, whether there's a mechanism that makes it clear that there are additional phenotypic data that might be accessed, but it might take a special process that includes a data sharing agreement between the two institutions or whatever.
Well, I think we had such a great conversation over these two days about phenotyping and about the different dimensions of challenges in phenotyping. And so, I mean, it's sort of self-evident, but taking a step back, in a year I might want to look at supporting or requesting people to do interesting experiments in how to collect these data. We've heard about novel ways to make value out of the EMR with all its warts. We've heard about patient reporting and what could galvanize that. We've heard about wearables in terms of automated and scalable phenotyping. And we've heard about gamification that some of the DTC companies are using to try to have an ongoing relationship. We know about the PEER platform; Sharon Terry is advocating for allowing people to be involved in a granular way. And we heard about the PCORI work, which is deeply involving patients in the decision-making process. So, you know, there are so many initiatives afoot. And I would think that initiatives that said, okay, we're going to compare these, or we're going to try this clever way of combining them, would be really interesting before the world decides on how to standardize phenotype collection. So just to bridge the second and third bullet points on that slide, and Alexa might want to say something to this as well: standardizing, or continuing to support efforts to develop standards for describing phenotypes, especially in ways that allow crosswalks between humans and model organisms. I think that will be really important for variant interpretation and functional characterization, for identifying variants that are going to be of real interest to the clinical community, and for finding the appropriate model organism systems in which to do the validation. So I don't know, Alexa, if you want to say anything about that too. I want to support that.
And I think, as we said yesterday, there's already some interesting work going on through the Monarch collaboration and others who have begun to look at this. The idea being that there's more data for the model organisms than there is for the humans, so let's take advantage of that. But there's work that needs to be done to do the appropriate translation and so on. So I think it fits with the comments that Robert has made as well. There really has been a lot of discussion about phenotyping in the last couple of days, and I would love to see some kind of emphasis on that. And, you know, what's enough data? So you were saying, Julie, that there isn't enough data. What counts as enough? We often know what's too little, but do we know how much we actually need in order to do the science and look for the outcomes? I don't think we really know. So there's a lot of sequencing going forward. The leverage is the sequencing, and the value added is the phenotypes, so there's lots of pressure that can be brought to bear to sequence that which is best phenotyped. The other issue that we almost give lip service to is that we're dynamic organisms. Longitudinal data is incredibly important, but we've never come to grips with it. And I think there are ways of leveraging the sequencing with those resources that are already in freezers in lots of places. We're not going to do it so much going forward, but we can do it going backwards, because there are resources available. Broadening the sharing of phenotype data, encouraging maximum sharing: we do that already, but maybe putting a little more teeth into it, yeah. Well, I was just going to say, I think this is a particularly ripe area for maybe GM9. In terms of the phenotypes that will be useful?
Just how to bring the basic model organism investigators together with human investigators and think about best practices in terms of sharing phenotypes, and think about how to structure the data in a way that it's maybe even computable. All of those things seem to me like a great topic for a meeting. Yeah, I would agree with that. And you could imagine, in fleshing out the meeting, that you would have, as we talked about earlier today, some examples of fruitful collaborations, but then take some of the comments that have been made in this section, and perhaps others, and say: here are some topic ideas where we want to have more of an open discussion about how we can really do this. Have somebody present, set up the topic, and then have discussion around that. So I think that would be a very fruitful organization for that GM9 meeting. Given that it'll be GM9, we can remember that. Yes, Howard. Our problem is we need an institutional memory. So I think, along those lines, the other part of this is not just the basic research from the standpoint of gaining knowledge; the other part that would be interesting is, can we learn to do this at a speed that benefits patients? Because I think that's a very different endpoint than wanting to get a paper or wanting to get a grant. So I think that's another topic to add into this. Because a basic scientist would not consider it basic science; we're not looking at this zinc finger on this protein, we're looking at something more translational, coming from the clinic and understanding mechanisms. So I think we'll have to get the right people there in order to answer the questions we want. It's not just science for science's sake, I guess. I think some of, what? Oh, yeah, some of the basic scientists would. Oh, they may all surprise me, but they may not call it basic science. Okay.
I just wanted to add, I think in that conversation it'd be worth bringing industry into this, because they have a very big foothold going forward. I mean, they're the ones that we're expecting to take forward the things that we're translating. So it would be good to be able to connect the R&D, the early clinical discovery phase, that we're talking about here. Okay, that was topic one. So no, actually that was topic two. So did I skip topic one? No, we talked about time. All right, so we've done two of them so far. Institutional memory. Institutional, yeah, I know; that's why I said you guys are hurting. So anyway, this was the first of those slides. We also added looking at the potential of crowdsourcing for phenotyping, which is partly mobile technologies and other things. Are there better ways of doing that? It was suggested over dinner, could we find a way to add a family history tool? I think somebody, perhaps Jeff, said that NHGRI, going forward, should not sequence anybody who doesn't have a three-generation pedigree. I think he said that to be a bit provocative, but then became a little more focused: gee, wouldn't it be cool to have family history information on 20,000 sequenced people and really be able to compare in a systematic way what the family history information adds to the sequence, what the sequence adds to the family history, and how we synergize them better, et cetera? Did I capture your thought? So that institutional memory comes and goes. Anyway, that would be something that could almost be an add-on, because with family history you almost want to collect it as close as possible to the sequencing, so that, sadly, as many people as possible have had events if they're going to have events. So that could be a relatively painless add-on. And then we talked about encouraging more extensive data sharing and accelerating the exploration to benefit patients.
This did not change, so I'm going to flip through, because we have three, oh, sorry. You want to change? Yes, I actually had a chance to think about this a little bit more. So for the first one, "add clinical-trial-type studies," I wasn't very clear when I wrote that. What I'm trying to say is that as part of the existing studies that are going on, the dynamic nature of data return is something that could be looked at. In a lot of the projects we're looking at the first-pass analysis, and then, well, what are we going to do across time? So I think just putting in some of the dynamic nature around this and looking to see what the impacts are. Robert is in a better position than most to look at return of data and how people respond to this. But I think that's a question of the dynamic nature that can be handled across the board. And I would add to that, and we've talked about it a little bit, the question of re-annotation. So I think places like Robert's study, working with ClinGen: it would be interesting to look at examples, and I know there are a few, where somebody was given a result that says we think this is likely benign, or this is likely pathogenic, and then that annotation has changed. To really begin to learn what the consequences of that changed annotation are for the participants would really help inform us going forward about, in general, how we return results. That's a great idea for a question that could be practically asked across almost all of the sequencing consortia. It's also a very interesting question from a patient engagement perspective, because obviously we're thinking about it from the actionability perspective, but we'd also want to ask: what's the impact on the patient of being told something different going forward? Is that perceived as useful, not useful, harmful? I think that would be a really interesting research agenda.
And the physician engagement as well. What does the physician have to go through, presuming it might involve non-geneticists, and how does this impact the way they think about genomic medicine and want to practice genomic medicine after something like that changes? I personally can't think, in clinical practice, of any other field where a year or two years later we get a report saying, oh, we have a better way of analyzing the X-ray, and there was a brain tumor or something like that. I think the scale of the problem is certainly new with genome sequencing, but there was a time when we didn't understand certain chromosome abnormalities and told people about an abnormality that then later was refined. It is an old question, but I think the scale of it is very different now. We're seeing this as a research opportunity, or a problem, or at least a gap. Many people are seeing it as a business opportunity: you sequence everybody's genome, and then when they get ready to be pregnant, you send them one type of variants, and then in midlife you send them another type of variants. Just as an aside. And just to follow up on Jonathan's point: the discussion that we had previously was virtually always in the context of a traditional genetics visit, and we had a certain comfort level with uncertainty, so we could always say, well, we'll continue to work with you on that. But again, as Robert pointed out, the model now is not going to be geneticist-focused, and that is a dynamic where there probably hasn't been that level of comfort. And of course, Robert, to your scenario: as we change more and more annotations, they have yet another reason to go back.
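The re-annotation tracking idea discussed above lends itself to a simple data structure: record every classification with its date, then flag variants whose current call differs from the one originally returned to the participant. This is only a toy sketch; all variant identifiers, classifications, and dates below are invented for illustration.

```python
# Toy sketch of tracking variant reclassification over time, as discussed
# above: keep every classification with its date, and flag variants whose
# current call differs from what was first reported to the participant.
# All variant identifiers, classifications, and dates are invented.

from datetime import date

history = {
    "VAR-A": [(date(2013, 5, 1), "likely benign"),
              (date(2015, 2, 10), "likely pathogenic")],
    "VAR-B": [(date(2014, 7, 3), "pathogenic")],
}

def needs_recontact(variant):
    """True if the latest classification differs from the one first reported."""
    entries = sorted(history[variant])              # chronological order
    first_call, latest_call = entries[0][1], entries[-1][1]
    return first_call != latest_call

flagged = [v for v in history if needs_recontact(v)]
print(flagged)   # ['VAR-A']
```

A real system would also record who ordered the original report and which laboratory issued it, so the re-contact obligation lands with the right party, but the core query is just this comparison of first versus latest call.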
I just came in, so I'm coming in on the end of this discussion, but I think I know what you're talking about, and I think there's an opportunity here, which is to use the current system of indication-based testing, you know, putting down an ICD-9 or ICD-10 code to say I want to do this test, and use that mechanism to essentially order the test result, which is already out there, and have it come back. And Mike and I were just going back and forth. He mentioned that when guidelines change, there can be some re-evaluation that goes on in terms of which cholesterol value, which type of cholesterol, which PSA. That's a good point. And again, so there is some precedent there. Make sure it gets full credit. Well, and just recently, you know, now fat in the diet is not so bad, right? Right. Right, I mean, I've known that all along. Anything else on this one? Okay. And this slide we didn't change. Howard, did you want to change anything here? No, you're good. Okay. This metrics-and-impact slide, I didn't hear anything last night. And again, you know, this is not your last chance, so do take a look and give some thought to these. We'll send these out and around. Changes there? Or on EHR functionality? Sorry if we're going through so fast, but as I said, we have three panels to do. Diversity. Yeah, so I didn't get any changes on those. All right. So we started this morning talking about clinical workflow, ended up talking mostly about EHR integration, and came to the conclusion that clinical workflow really is a local problem and probably not one that we can contribute much to, other than to say: you guys have a real problem, and I hope you can deal with it. So, specific roles for us in the area of the EHR or EMR: agreed-upon nomenclature for alleles, which would help in pulling information by clinical decision support systems and other systems, reporting, and that sort of thing.
That seemed as though it was not easy to do, but at least something squarely in our wheelhouse. Would anybody disagree with calling that blue? You're frowning, Carol, so maybe you don't think so. No, I think it's well within NHGRI's wheelhouse. I just think that there are standard nomenclature groups, gene and strain and allele nomenclature groups, out there in the human and model organism communities that could be tapped into. And it's not necessarily the name of the alleles that's the important thing: gene symbols change all the time, but those gene concepts have unique, stable identifiers associated with them, so that when you develop systems that compute over this information, you share those IDs, which are not human-readable. We heard this comment this morning. And the name can change depending on what's known about it. So there are systems in place already to deal with this, and rather than try to reinvent the wheel, we should tap into the groups that are already working on this and get them involved in addressing this particular issue. Good point. I think we did hear, in terms of some of the clinically relevant ones, that there are multiple annotation or naming systems, particularly the star alleles in pharmacogenetics, that have nothing to do with the standard ontologies, as I understand it, in the experimental realm. Yeah, but I don't think that's any different from the rest. There used to be a wild west of naming genes, right? And so it might be the same wild west for alleles now, but the fact is that there are rules, existing rules, at least in some organisms. I don't know if the human gene nomenclature group deals with alleles or not, but I think we can tap into existing expertise and methods for developing these rules that will fit in well with things that are already in place.
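The point about computing over stable identifiers rather than display names can be sketched in a few lines: systems exchange a permanent, non-human-readable ID, and the display symbol can be renamed by a nomenclature committee without breaking any stored reference. The registry class, IDs, and symbols below are all invented for illustration.

```python
# Toy sketch of the stable-identifier idea discussed above: downstream systems
# store and exchange permanent IDs, while the human-readable symbol attached
# to each ID can change freely. All IDs and symbols here are invented.

class ConceptRegistry:
    def __init__(self):
        self._symbols = {}   # stable ID -> current display symbol

    def register(self, stable_id, symbol):
        self._symbols[stable_id] = symbol

    def rename(self, stable_id, new_symbol):
        # Renaming changes only the label; the shared ID is untouched.
        self._symbols[stable_id] = new_symbol

    def symbol(self, stable_id):
        return self._symbols[stable_id]

registry = ConceptRegistry()
registry.register("GENE:0001", "ABC1")      # hypothetical gene symbol
stored_reference = "GENE:0001"              # what a decision-support rule stores

registry.rename("GENE:0001", "XYZ2")        # committee renames the gene
print(registry.symbol(stored_reference))    # the stored ID still resolves: XYZ2
```

The design choice is the one raised in the discussion: anything that computes over the data keys on the ID, and the symbol is treated purely as presentation, so a rename never invalidates a clinical decision support rule.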
Yeah, I think the spin I might put on it is that, you know, we don't have a HUGO to say, this is what we're going to be doing in the naming arena. So is there a role for NHGRI, because it has funding around so many different projects that are addressing this issue, to take the high ground and say, we really need to solve this? I mean, the star allele issue: it's not just that the star alleles are there, it's that different laboratories report out the same star allele but use a different combination of variants to define it, and that's obviously a problem in the clinical realm, where we need to be assured of what was tested. And so it's not just a matter of adopting existing nomenclatures, because some of them are inherently flawed. Since this has come up again, and I didn't mention it the first time: Lisa Kalman at CDC has gotten a large group of people together to comment on this issue with respect to pharmacogenetics, with lots of members from CPIC and PharmGKB and international groups participating, so there should be a lot of consensus, and they're putting together a manuscript that addresses these issues. And I'd say the role is also to document what test was done, what that test was capable of detecting, and what it was not capable of detecting, because when alleles are reported out, it's not always clear what could have been missed, or, if a new test was done, what was changed. So at any rate, I think at least for pharmacogenetics, somebody's trying to come to the rescue. I'm just going to say that I think one of the issues around this is the decision of whether we report something out as qualitative or quantitative.
So for example, when we're talking about whether someone is a high metabolizer or a medium one: that's like ordering a sodium and getting back only the flag that says it's critical or normal. Physicians can manage quantitative data, and fundamentally those results are quantitative. The same is true of next-gen sequencing: there's actually quantitative data behind it, and boiling it down to a single string of figures actually obscures some of the nuances that may be important in interpreting the result. Yes, Craig. So just within the workflow, and this is just my ignorance: what's the timeframe this has to be done in to make it useful? And is that a role for NHGRI? Yes. I would ask the clinicians around the table who are dealing with this. I mean, obviously the shorter the better, but... Well, it depends. We have patients at our place that are having surgery, and that tissue is being tested, and they are not going to be treated in the next three months while they heal, so you could have three months to get the result back. There are others that are in the ICU for a fungal infection, and you want that data in 12 minutes or less, and there are assays that will give you a 12-minute readout for a very limited amount of genomics, and everything in between. So it kind of depends on the context you're looking at. But I guess you could ask what is, if not ideal, optimal, or what should be standard of care? And you think, I mean, for most clinical tests a week seems long, and you probably wouldn't want to go much beyond that. I mean, is that something we should be aiming for, or is this not something we should... I mean, I know there are groups, and Stephen, please speak up, that do it way faster than that. How could you help us here on what we should be aiming for or encouraging?
I think a good concept is acuity-guided: each patient has their own acuity of illness. Some patients are seen in ambulatory clinics, and they'll be seen once a month at most, but more likely several times a year. For inpatients the acuity is different, and in intensive care situations it can be different again. And so it really depends what the information is and what the acuity is. There are very few instances where it's, you know, 12 hours; infectious disease is one, really, but for genetic diseases there aren't that many. You think of maple syrup urine disease, where a couple of days really is very, very important, but there aren't too many others that rise to that bar. The other issue is that often patients are not referred by their physicians, so part of it is physician behavior, where it becomes an emergency because consideration of a genomic test is thought of so late. There are also workflow issues around that. Having practiced in a rural environment, I know some of the problem is that the patient may come from four hours away to see you, and you do a test, and then they have to schedule coming back to you whenever that result is back. And the faster it comes back, the easier that is to do, and the fresher the pre-counseling you did is in their mind. So there are lots of other reasons besides medical acuity to try to do the best you can to make it as short as possible. And I would suggest the process be: you figure out what your ideal is and then peel back from there based on what is practical and feasible. And then Teri, behind you. I mean, for me recently, in discussing turnaround time within CSER, it also gets very linked to cost, and so there's always this issue of, well, this could be done in a different time frame, but there's a huge implication for what it would cost you.
I think we do need to keep in mind that ideal is ideal in what context, and not only cost but accuracy. You could give up a fast answer that's a variant of unknown significance, or you could really do a lot of tracking down, genotyping other family members, that sort of thing, and get an answer two or three months later that is definitive. Just to follow up: for me the ideal is that the genome is there, it's electronically queryable and matched up with the current interpretation data, so it takes a matter of seconds between when the physician enters the order and when the result comes back into the electronic health record system. To me that's the ideal. Ellie, do you have the code for... oh, never mind, something else. Oh, to turn on the projector, so I can see the slides and the fifth point. And then you can pass your laptop around. Yeah, that's right, the fifth point is really a good one. So the other blue-highlighted one is on joint training opportunities: can we, in the EHR informatics space, maybe in collaboration with BD2K and the National Library of Medicine as they are reinvented, look at training opportunities specifically in electronic health records, and could that be explored? It sounds like ACMG might have some activity in that space as well. Right, ACMG and AMIA. And actually Alexa and I had a chance to chat at the break, and she's going to be joining. Bob, I've been invited to... You've been invited. I haven't yet accepted. No, I've already... I've accepted for you. So welcome. Don't talk to him if you're not sure. Exactly, that's exactly the point here. So I operate on the same rules that Terry does. But I think you could probably pass that off for updates to me as a report-out. And Alexa in particular, because of her connection with some of the NLM work: that's something we don't have as much experience with, so that'll be a nice addition there.
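The "ideal" described above, a stored genome joined against the current interpretation knowledge base at order time so no new lab work is needed, can be sketched as a pair of lookups. Everything here is invented for illustration: the data structures, patient ID, allele names, and interpretation strings are assumptions, not any real system's schema.

```python
# Minimal sketch of the "genome on file, interpreted at query time" ideal
# described above: variant calls are stored once, and each physician order is
# answered by joining the stored genotype against the *current* knowledge
# base, so updated interpretations flow through automatically. All data are
# invented.

stored_genotypes = {                      # per-patient variant calls, stored once
    "patient-001": {"STAR-2": "het", "STAR-17": "absent"},
}

interpretations = {                       # knowledge base, updated over time
    ("STAR-2", "het"): "intermediate metabolizer",
}

def answer_order(patient_id, allele):
    """Resolve an order in one pass: stored genotype joined to current call."""
    genotype = stored_genotypes[patient_id].get(allele, "not assessed")
    call = interpretations.get((allele, genotype), "no current interpretation")
    return {"allele": allele, "genotype": genotype, "interpretation": call}

result = answer_order("patient-001", "STAR-2")
print(result["interpretation"])   # intermediate metabolizer
```

Because the interpretation table is consulted at query time rather than baked into a static report, reclassification becomes a knowledge-base update rather than a re-test, which is exactly the seconds-scale turnaround the speaker is describing.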
So that'd be super. And I think we also talked yesterday a little bit about not only joint training opportunities but whether we could develop fellowship projects. Yeah, there are a couple of times, well, many times, that we run up against real barriers and say, gosh, if only we had somebody to work for three months on such-and-such a topic, and we're trying to find ways that we could encourage people to do that. So is that a mechanism... I want to follow up specifically on that. So short projects, small projects, those types of things: how does that work? Well, there would be a variety of ways one could do it. As you would do as a PI: if you have a thorny problem and you want to bring in a collaborator, you say, let me support your statistician for three months or whatever. Or could that be a supplement, or could it be a way of shifting funds around? I mean, I think the how is less important than being able to identify and make these links, and then we'll figure out how to make it work. The American Society of Human Genetics has taken a real interest in this as well in the last year or so, so could you consider them as well? Sure. NHGRI may not need to be in this space. No, seriously: if there are this many groups that are working here, obviously we like to think we have a convening role and all, but we're also happy to have others convene, so maybe give some thought to whether there are other groups that should take the lead here. Sorry. Well, and certainly to the extent that at least AMIA is pretty well aligned with some of the informatics fellowship programs, they might be an obvious place to take a lead on some of those. Maybe our role is pointing out how they should be doing all of their work on our problems. That could be a role for us. Yeah, exactly. Let's see. So I think we got all of them, yeah. And then, also related, and I'll change the name of this to EHR integration:
We just heard about exploring the balancing of turnaround time with acuity, cost, and other needs, and promoting software development for presenting genomics to clinicians. One that we heard multiple times was that while clinical workflow is always local, we could focus on tools that help to manage data in multiple settings, not necessarily at a single local one. And that laboratory workflow may be more amenable; at least we could develop some tools or work with ClinVar and NCBI colleagues to facilitate ClinVar submissions. And let's see, assisting new entrants, building on tools and knowledge from more expert settings. So again, this is somewhat the Ignite model. It's almost the June 2011 colloquium model, where we said, can we kind of lay out what it takes to be able to implement one of these programs in a new setting. And then building a better business case for EHR vendors, which is again a tough nut to crack, but rather similar to other NIH health economics efforts. Teri, a comment prompted by the last bullet, but more generalizable than that. So one of the strategic questions about where to put money is where the market is not going to solve the problems that are out there, right? And a couple of examples of that are things where the investment horizon is so long that companies aren't going to go there. You know, there needs to be seed money to invest in things that are really way downstream, and things where there isn't a business case for industry, where there aren't sort of market incentives to do things. But for some of these things, like the last one, hopefully we have a functioning market that should be able to solve that problem without a lot of public direction or public push or public monies. So some of the strategic questions come down to, you know, where can industry and the market solve these problems for us? That's an excellent point.
And, you know, I think the intent of this bullet point, and I think it was Mark who suggested it, is maybe, you know, doing a better job of leading the horse to water. And, you know, is there a way that our research can kind of point in that direction? Was that reasonable? Great. All right, so that's the EHR integration. On clinician education, we heard a number of challenges with the ISCC. This is a society of societies that they encouraged us to form. But it is sort of a volunteer effort. And probably at this stage, now that it's two plus years old or so, it needs to have a little more support than just what we can, you know, glean from the voluntary societies. So we're working on that. There's this opportunity of the UK effort, and I think it was Erin who sent me the link. Did you send me a link? Could you describe it? It's actually a very interesting model that they're doing. Did you get a chance to read the link? Oh, I just skimmed through it. So it was the NHS genomics education program. And, I mean, it looks pretty comprehensive. If you just Google that, that'll take you to the website. And, I mean, they're doing so many things. For example, any clinician that uses their system gets free training in medical informatics. They're putting together, starting in September of this year, a master's in genomic medicine program that they're offering free, I think, to any of their providers. And they describe a lot of different activities. There are newsletters that go out that push particular topics that are of interest to the clinician. There's so much to it. I really couldn't describe it all in a short period of time. But it looks pretty exciting and comprehensive. Thanks. I mean, I think you've summarized it quite well, at least from what I was able to skim in a short period of time. And, boy, if there are newsletters, for heaven's sakes, can we, we have friends in England. Can we get some of those? This master's program is really an interesting idea.
And they're proposing to train, I read, 450 some clinician scientists in this area. That is really cool. And really something, I think, that we need to learn from. So I'm not sure, we'll have to talk within NHGRI as to where the nexus for that should sit. I'm thinking it might live in a division that has E in its name, which is our division of policy, communications, and education. But, obviously, it's relevant to all of us. So, okay, great. The question, kind of the eternal question, or at least a question that's been around at least as long as I've been trying to encourage genomic implementation in medical care, is how can clinicians provide useful consultation without being board certified geneticists? And there clearly are ways of doing that. This master's program in Britain may be one way, but there are folks currently who are doing this kind of work in a genomic consult service without that kind of certification. So is this something that NHGRI or others can help to promote and encourage? Because it does seem as though it's going to be needed. Otherwise, these results are going to be misinterpreted and misused, and then people will get discouraged and won't order any of them, and the patients won't benefit. Are you raising your hand, Julie? Yeah, I really liked Howard's idea about sort of the CDE equivalent. Because, I mean, there are certified diabetes educators. So, I mean, you know, a whole bunch of different types of clinicians can qualify for it. There are clear standards about what you have to do. It might be interesting to explore. So there has been, and I think you heard from some of the folks on the ACMG side, there's been a lot of resistance to this idea from that group. Largely, as I understood it, because it's a huge effort to put something like this together and to maintain it and make sure that it meets standards, et cetera, and the estimated uptake was quite low.
Now that may have been several years ago, and things may be much better now, that sort of thing. But I don't think it'll be easy. We also have a challenge in that we tend not to use NIH dollars for clinician education, because otherwise we would be supporting, you know, educating neurosurgeons to make a million dollars a year, and that wasn't felt to be a good use of public funds. And so where do we fit in this? Maybe again with our convening power and encouraging people to do it, but, you know, what role could we play potentially in encouraging this kind of thing? Pa? Well, I think experimental studies are important. Ours is looking exactly at this with primary care physicians. But I guess one of the more fundamental questions is, are we documenting anecdotes, of which there are many, or are we really systematically finding out whether it is possible to roll out genomic medicine with a modest amount of training, such as the kinds of training that were mentioned, you know, putting it into your re-accreditation or your specialty training. In other words, is this as big a problem as everyone in genetics is saying it is? And I'm not 100% sure that's true. And I think this is heresy. But I think these things are going to diffuse out into the practice of medicine with early adopting clinicians first. They will be able to use resources like genetic counselors and specialists, the way they've always used them. And I'm sure there will be some rough spots. But our medical system depends on generalists and moderate level specialists being able to titrate their degree of comfort with specialty situations. So I think one question is documenting this, doing this in experimental ways and documenting whether this is as problematic as we have all convinced ourselves that it is. And maybe, yeah, I think that's a fair point. Maybe right now there's a problem, but that doesn't mean five years from now there'll still be a problem.
And I think, you know, are we solving, are we filling the gap, or is there a need long term? And part of it's just us all guessing. But, you know, certainly, when the diabetes educator program came out, and I am not one, so I don't know from an insider view, the endocrinologists weren't all that in favor of it. And then as their workload went through the roof, they were the biggest supporters. And it's something that's carried on just because the volume has been a problem. And I think we're going to get a bit of that as some level of sequencing becomes normal, which it's not today, but as it becomes more normal, then I think there may be some level of sustainability there, but maybe not for everybody. You know, so we'll see. I think one thing we need are a few more pilot projects. I mean, we're talking about impressions. We really need to generate some data around this. There are a few studies that are starting to do this, but I'm not aware of studies where we're really saying, okay, can we scale genomic medicine in this particular clinical application to 500 people a year or some significant number, and then say what are the barriers to that and collect that evidence? Well, Geisinger's going to try it for sure real fast. In addition to Geisinger, to some degree, the Emerge programs are doing this on a much more system-wide scale, as well as Ignite, but you're right, we need to collect data on them and figure out what the barriers are. And this brought me to a point that is not represented on the slide, and I think it should be. And that would be the NHGRI function of consolidating and aggregating a clearinghouse of all the materials that are being developed and all the approaches that are being developed, as well as how they're being studied. So that, again, it's not just Emerge doing their thing and Ignite doing their thing, but we can all contribute. And so if there's something from Emerge that says, boy, this looks great.
I'm going to grab this for Ignite and use it. And we begin, again, to define, not imposing a standard approach, but using best practices and testing them in different settings to see how generalizable they are. And that's a relatively low resource investment cost. And you could even imagine transitioning that into G2C2, where you create a repository. It seems like what you're describing is G2C2. It is, but I think there's an intentionality related to your programmatic direction for the cooperative agreements, which is to say, if you're thinking about returning results, here is a group of things that have been tested, and we want you to contribute, A. And B, we want you to tell us how you went about testing the effectiveness of this particular intervention. So I think it's beyond just contributing things to a repository. It's really also show us your work. Well, I was going to say, this morning I talked about what motivates providers to get educated. And another question is what motivates us to think that they need education. And I think one of the things that we fear is patient harm. Either from not having access to genetic testing or genetic medicine that would benefit them, or from misuse of it that will harm them. And so that's another sort of, you know, tunnel to look down, the patient safety direction, to think about an endpoint with respect to education. So this is not something NHGRI can fix, but just in the context of this discussion, I sort of feel obligated to say it. And that is, we would go a long way, I think, if we could figure out how to better empower and appropriately bill for genetic counseling services. That would have a big impact in this area. Okay. Yes, Adam. I think I want to combine a little bit of panel seven and panel eight with this comment. But, you know, we're talking about clinician education, and I'm thinking about how the electronic health record is going to be used in this manner.
And I'm thinking about the knowledge vendors that are feeding that information into this. And we've already heard earlier in our discussion that not everyone's info button is pointing to the same place, because there's a whole bunch of them, you know, whether you're using up-to-date or clinical key or any of the other probably half dozen that exist out there. And I wonder if there might be a need to sort of bring those players in to have a discussion about their best practices for actually collecting and utilizing the information that everyone in this room and hopefully everyone watching on TV is helping to generate. You know, I wonder if we can start thinking about feeding the knowledge base itself that's going to be driving some of the clinical decision support that's going on, which will eventually lead into the clinical education. Buying knowledge bases? Yeah. Yeah. Does that seem like... That seems a little bit of a big order, a tall order to me, but maybe something that we could work with our partners in the IOM and elsewhere to try to address. I just wanted to pick up on a term that Robert used, the early adopter, his view of diffusion and early adopters and so on. And it may not fit squarely into this topic, but it seems like this group represents a small fraction of the practice of medicine and the 95% of the practitioners are... Yeah. Of the groups participating in these programs are in the community and other places. And I guess I'm thinking about how to... It's a low cost but potentially high impact opportunity is to really embrace the affiliates notion. So bringing in the early adopters from the community to be part of the programs here, a lot of them may find it quite attractive to just be with the cutting edge scientists and thinking about how to bring their local practices and community hospitals to be at the cutting edge. 
It gives them a competitive advantage if nothing else, and it may actually satisfy an important aspect of their education and intellectual challenges. We haven't really talked about how we manage affiliates across the various NHGRI programs, but I know in Ignite we're really beginning to expand our repertoire into a lot of groups that are not funded by the program but are willing to make contributions and participate. This is actually something that we borrowed, Mike, from ENCODE. So the ENCODE project has had an affiliate approach, I think, for two or three renewals, if I'm not mistaken. The notion, I think, was that they would contribute almost equally, data sets and that sort of thing, just on an unpaid basis. Maybe I'm describing it poorly, but they had to contribute something as well as get something back, correct? Yeah, so ENCODE is an open consortium, which I think is what Terry's referring to, and we have on the public website that if people want to join the project, they can, and it says these are the expectations that the project would have, that you would contribute, you would say what you do and the rules you would have to follow, but people could join without having been funded to be part of the project. I think there's another opportunity, because there might be another tier of affiliates that just feel that they are attending, if nothing else, a scientific meeting that's educating them about where the field is going, and that could create quite a large amount of value in its own right, in addition to having ones that are more willing to participate in the research agenda. Particularly if those who wanted to could get themselves sequenced. Well, we'll send them to you, Robert. But I think, you know, we've done this in Emerge for quite some time. Is that what you were going to, yeah, why don't you comment on that? No. Because I've been talking a lot, so go ahead.
Just that we have, I mean, the Air Force, for example, was I think our first affiliate member, and we actually just worked out a deal with ENCODE to be a participating member as well. So I think there's a lot of value to that. The only danger to that is the Steering Committee meetings grow quite rapidly. And we also have, and I sort of put affiliate slash associate, because we do have groups that just want to hang out with us. And so there have been a couple of private hospitals from different parts of the country that are really interested in what Emerge is doing, and they come, and eventually one of them, I think, became part of Ignite. So yeah, so that's something we can do as part of our dissemination role. Yeah, because I was going to say that I do also think that's an important way to think about some of the dissemination opportunities outside of the major research universities and major research hospitals. Great. Okay, and then another several bullets. I'm sorry, Wendy, I forgot, I'm sorry. Since we were going across what the groups are doing, from group seven and eight, I was going to go from group eight to nine and react to how strongly, you know, patients can be engaged in the process, either through patient advocacy groups or because they're early adopters themselves and they get the 23andMe test and go to their doctor. So using patient engagement as a way to get the physicians engaged and the professional societies engaged could be pretty powerful. One thing that I don't think came up that I find is a big issue in ISCC is engagement of physicians. And so the use case effort, which Mark Williams started and I took over with Reed Pyeritz, you know, we're grappling to find use cases which clinicians find important and useful in the context of their practice. And yet it's hard to have people come to the table, you know, and say, what do you really need?
But if instead it's the advocacy group saying, my doctor doesn't understand anything about this important disease, Lynch syndrome, or, you know, something a little more rare, if we combine those and prompt the physicians that this is what your patients are clamoring for, that might be more powerful. Yeah, that's an interesting thought, you know, really talking about patient engagement. Some patients, as you point out, are really quite knowledgeable. Sometimes they are the most knowledgeable people on the planet about a given condition. On the other hand, if it's that rare, it's very difficult for us to engage, you know, your average physician, but maybe gleaning from the patients, you know, what is it that you would want a physician, whether it was your disease or not, what would you want a physician to know, to understand, to ask, that sort of thing? Yeah, I think the rare cases can be used in a way to generalize situations of, gee, this is rare, I don't even know where I would get started looking it up. You know, what are resources that I can really respect, and, you know, where do I get started? Yeah, we're almost to panel nine, but we actually have another slide of panel eight yet, and we have to get on to, we have to talk about next steps as well. Did you have a closing comment? No? Okay. And then, let's see, and again, you know, these start to look all alike, at least to me. So, we'll be sending them around to everyone. The two that sort of seem to stand out here a little bit were the point that the education around when to order the test is harder and probably more important than what to do with the results. I guess maybe not the more important part, but probably the education about when to order is harder. Would people tend to agree with that? Maybe that's an area where we should have a little bit more focus than we've had in the past, because I think we've been focusing on returning results and not on when and how to order. Mark.
Yeah, I just wanted to highlight the comment that Erin made that you do have one project in the space that's actually studying this, and so perhaps a first step would be kind of a report out to the Genomic Medicine Working Group about some of the learnings from the early efforts. I know it is early, but that could be informative. The other thing I just wanted to... So, I'm sorry, so the Erin, so this is in ClinGen or a specific... No, this is the SBIR. This is the SBIR grant that we have. Oh, I'm sorry. For a final consult. Oh, yeah. Thanks. And then the other thing I just wanted to clarify, Ruth had mentioned info buttons being useful with laboratory reports, but info buttons can be used in the lab ordering system as well, to provide information about how to order a test. So it's not that info buttons are only after the fact; they can be before the fact as well. Good point. We also heard that we should have more engagement with clinician end users as to what they need. I think we aren't doing a real good job of engaging. I think maybe on a more local level that is happening; whether we're bringing that back in a more systematic way, I'm not so sure. Okay. And then moving on to participant engagement. Again, just the highlights. We noted, and perhaps shame on us, something for us to tackle: there's little patient engagement in our genomic medicine programs, at least at the sort of systematic or overarching level. While there is some going on locally, we could probably learn more from that, and there is more to be learned there. And something we may not be doing as well as we should is developing tools in clinical settings and evaluating them in clinical settings. We're often doing the development and evaluation sort of separately from that, because you don't want to put it in a clinical setting until you're sure that it works. But given that you've made that initial testing, then there needs to be further follow-up in a clinical setting.
That seemed to be what people felt was important. That may be it. I thought there were two for participant engagement. I'm sorry. So anything we missed in the participant engagement space or any here that you think would be important to highlight? Yes. Well, just a comment that I was making earlier, which was that when you're looking at your existing portfolio and sort of the requirements that you have on returning results and making sure that patients, participants actually have access to that data. And I know that it's a little bit fraught, but at least looking at that problem of how you could do that. I know it's fraught. Yeah, it is fraught. But nonetheless, it's... Patient access to data, we'll just leave that. Thank you. Yeah. Okay, but it is a real challenge. So I think then given that, we have a very nice picture that we will share with everybody. And we should talk a little bit about what we're going to do here in our sort of our next steps. So, well, you know, I love... I like Caesar's Gallic Wars, what can I say? But we will certainly do a meeting summary as we always do for these meetings, as well as... And this is next steps, guys. Of course, you know that. So next steps, a meeting summary that will be posted on our website. All the video, Alvaro and Chiara are incredibly fast at putting these videos together. And that will also be posted with the slides on our website. And I do want to take just a minute to thank Alvaro and Chiara for the incredible work that they've done coming in early and late. So that's super. And then, you know, sort of what kinds of hard products do we want from this meeting? We sometimes do white papers. It seems like there might be a white paper in kind of what are the research directions, the new directions for NHGRI and others. We, you know, would never limit ourselves to our little teeny budget. We want to actually co-opt, you know, everybody's budget if we can. And so these can be long or short. 
Howard and I were just kind of talking a bit. You know, the long version is like a 4,000 word review. The global leaders paper that just came out, that was from our GM6 meeting, and the original implementation roadmap. Both of those were long. The ISCC paper that Mike Murray and I did was short. That was 1,200 words. And it was really a very focused, very targeted thing. We're kind of leaning toward the 1,200 word version, you know, targeted in terms of what are the research directions, rather than trying to kind of review the field. Do people feel that's a comfortable place to be? Okay. And then the question is in terms of... Julie, go ahead. Oh, I'm sorry. Julie, go ahead. I didn't have a comment on that, maybe the next point. So, well, in terms of a related, a different deliverable than a white paper. So, you know, I think one of the things that we heard especially a lot yesterday was, you know, how, across the different networks, do we take advantage of, you know, common measures or things that have been learned or whatever. So, in a very specific context, my understanding is Emerge 3 will be constituted soon. Ignite meets next week. And so, you know, probably some of that is easier to do as networks are new or reconfigured or whatever. And, I mean, maybe you would want to make it a charge to Ignite and the other networks to provide input to Emerge on things that they could think about adding. Well, and... Think about doing collectively. It's timely because I think in September, the plan is for Emerge and Ignite... January. Emerge and Ignite are going to actually have a joint meeting, so... Right, but I think as Howard was saying, if you do these constant one-off meetings, this group with this group, it's impossible to cover all the possibilities.
So, you know, it might just be an opportunity to charge the other networks to think about what are the common measures that they have or things that they could share, and Emerge could regard them or not. But it might be a way to sort of start that collective process. No, that's an interesting idea. And, you know, maybe asking each of the programs to recognize, okay, the mission of this program is this, but within that mission are there things that are relevant to the work that you're doing that you'd really like them to highlight? Or, you know, where there are commonalities, because there are many, you know, can we do some things in, you know, joint or collaborative ways? And maybe at the least we should ask the representatives from each of these groups that were here at this meeting to actively feed back, you know, maybe wait until the meeting summary exists. Because, you know, I don't know what you're... I don't know if I'm putting you on the spot, but I don't know what your plans were, but it could have been that you remembered it, or it could be that you've been charged with reporting back. It's a very different scenario. Right, right. And that would at least be a step towards that, but I do agree that if it's carefully thought through, having people actively look for natural collaborations, I don't think forced ones would be as useful. Steve? Yeah, you know, to build on that, it's been a bit frustrating for me at this meeting to learn about how much is going on across consortia that I'm not involved in, that overlaps with the things that are going on at the consortia that I am involved in, and that we're all sort of, you know, feeling around the elephant from our own consortium's point of view.
And I'm wondering how much thought has been given in the past to, you know, an approach to informed consent or an approach to actionability or an approach to electronic health records, such that it doesn't happen consortium by consortium, but there's a single informed consent group that all of the consortia can participate in if it's relevant to the aims of that consortium. I'm certainly not suggesting adding another layer of groups on top of the existing groups, but rather something that crosses consortia, that replaces the sort of group by group, consortium by consortium approach that we have right now. Lucia, do you want to speak about the actionability group as a model of that? Yeah, we can say a little bit about that. So we have a voluntary interest group called the Actionability Interest Group. It's open currently to the members of NHGRI consortia that are doing research in the area of actionability and return of results, and we meet once a quarter, actually. And it's completely investigator-driven, so we've had presentations from Emerge, from CSER, from ClinGen, from Ignite, I think, and it's just a good opportunity for people to convene and hear what other colleagues are doing. I think we've tried to keep it low burden, so sometimes people tend to forget that we have them, but we have, for example, been able to tackle issues like FDA IDE regulations, so we had a series of investigators present on that, and then in the next one, we'll have the FDA come and talk with us. So that is one model. We've had, you know, two to three dozen people, including NIH staff, on those calls. Well, and also... Sorry, go ahead. Who's that? And also, Emerge and CSER have a joint group teleconference. If we really have some common issues among these different consortia, we can have a joint teleconference monthly or quarterly to share those common issues. Yeah, and don't take this the wrong way. I hope I say this tactfully.
Terry, you've done an amazing job in... In taking us from GM1 to GM8, and I'm looking at these nine slides, and I'm wondering whether you have enough staff to help with the oversight of everything that we're talking about and bringing together the different groups as others have suggested. It just seems like now is a time to potentially take a step back and look at how this can actually work. So the answer to your question is no. And, you know, and I won't turn to the gentleman, too, to my left and say, Eric, you've got to solve this problem. Oh, the wrecks. He has it. Yeah, I have it. I'm sorry to be clear. That's my wrecks. Because in his defense, I think at GM4, was it? Eric said, Terry, how are we going to handle all of these things that are spinning off of this effort? And this has been a source of perennial anxiety for us, and yet it's really important work to be done. So there are some things that we do need to spin off and have other groups take on, unfortunately, or, you know, fortunately for the global collaborative that has been one that we've just not been able to be as active in as we had been in previous ones. And so we've made some choices not because one area is less important than the other. It really has had to do with timing and workflow. But that being said, you know, are there ways we can leverage the kinds of things we are able to do and choose? We need you guys to help us prioritize, so we'll be sending these around. And that's another, I left that out. And then there's precision medicine. Oh, God. That's not even talk about lists for priorities, so we will ask you to do that. But yeah, and I think the other thing, and we talked about this a little bit yesterday, you know, it's hard enough for applicants to, you know, run the gauntlet of getting funded in that and get their grant and they have their project that they need to do. 
And then we asked them to participate in a consortium where they not only have to do what they said they were going to do, but as Dan diplomatically called it, the unfunded mandates of cross consortium, you know, common elements and other things. Then to ask those consortia to participate in consortia of consortia, you know, you sort of feel like you've just got everybody kind of going around in circles. And so we need to find a balance there. And I think this idea of, you know, if there's a topic that really is overarching across all and is not the primary mandate of one where they are so responsible for it that, that, you know, they live or die if it gets done and the others could be, could come along or not. You know, maybe those are places where we can look for groups that would be cross consortium. And we would rely on y'all who are participating in those programs to identify those for us. So give some thought to that as well. So come back to Steve's point. This is reactive, not really planning ahead, but some of the groups have some natural overlap because there's single institutions involved in multiple. And so there's an inbuilt liaison. And I'm wondering whether some of the groups that don't have that need to think through do they want to have liaisons to some of these, you know, some of the natural partners? It's not ideal, but if there was somebody, I don't know which group you were referring to, but if there was somebody who was, you know, part of their job voluntarily was to go in and listen in on the, and be in on the calls or go to the meetings or whatever, and report back in a very active mode, you know, maybe some of those missed opportunities would be found. But there's always going to be a problem of Mr. 
I mean, I'm talking about something more fundamental, which is to replace the idea of liaisons, and replace the idea of sort of voluntary second layers of groups that we can choose to be on or not in addition to our own consortium groups, with, for those topics that are truly cross-cutting and that affect maybe not all groups, but many groups, electronic health records, informed consent, actionability, those sort of longstanding challenges that we face as a community, why not have one set of groups that address them that people can plug into from any of the consortia? Just a single layer. I'm sorry? You know, one of the things we really tried to do in the GWAS era was, you know, anybody could come in and sit and listen in on our association analysis calls, because we wanted everybody to learn how to use these things. You know, maybe that's something we should do. I think it's very useful to webcast these meetings. It obviously takes a tremendous amount of staff time and cost to do that, but WebEx is not terribly expensive and those sessions can be recorded. So might we consider, you know, making those available and seeing whether people look at them or not? I know we use WebEx for many of our meetings just to facilitate them, and maybe that's a way of at least having some record or something that people can come back to. Yes? I was going to say, to that point, you know, that grid that we've seen a number of times, where each of the sites or each of the projects and what they were sort of involved in is laid out, is sort of the starting point for what the cross-cutting topics are, and we could maybe redistribute that to the various programs and make sure that everybody's on board and has filled in all the blanks. I don't know how that matrix got generated, but I could envision that some of the groups, in looking at it, might add X's for things they think they're involved in. I pointed to you, but I probably should point to Jackie.
No, no, you know, that one I didn't leave to Jackie. That was done on my patio a couple of Saturdays ago. I mean, frankly, what we did was ask each of the groups to identify their objectives and their barriers, and then I just tried to pull it together. So it has had absolutely no curation. Do not blame Jackie for that. It would have been much better, but I think you're right. What we should do is redistribute that and have people add to it and correct it. And particularly, if there's something a program just touches on, it's good to note that, but if there's something that's really an emphasis, you know, I think maybe we want to have like one plus and two pluses or something like that. And I saw a hand, was it Janet's hand or somebody's hand up there? No, I was hallucinating. I was trying to ignore Mark. I know you were. Good luck with that. And Janet's been trying to do that for 37 years. Yeah, I can speak up here. No, I think one of the categories I would actually add would be some more information specifically about patient engagement activities. Right now clinicians and patients are lumped together, and maybe just separating those out would be helpful. But I think that the objectives-and-barriers grids would then be the basis for the survey that would go out afterwards, because in addition to making sure that we mark it up correctly, like whether we're in the space or not, it could easily lend itself to asking how important each issue is and then how tractable each issue is to solve, having people rank on both of those, and then you could begin to aggregate across programs. Same thing with the barriers, you know, how important is this as a barrier, how easy is it to solve? And that would be a nice framework to work off of. So again, almost there. We were talking a little bit about having a white paper, and I realized, you know, you wonder, are these really useful?
I am told that they do tend to be used by groups who are trying to do this kind of work, and so it feels like a dissemination activity that we should do. So what I would propose is that everybody who was a panel member or a moderator, which is sort of everybody around this table, would be a co-author. If the journal will accept that; depending on the journal, some of them, for the short reports, will only take two authors, in which case I would propose it be Howard and me, since Howard had to put up with me all through doing this. We'll go for the bigger group. Yeah, but let's try for the bigger one. Are people amenable to that? And then recognizing that if your name is on a paper, you have to have participated, by ICMJE rules, or whatever they are. I'd just correct your middle initial. Yeah, yeah. Well, it would be nice to do more than that, but yeah, really. But we do need to hear back from you if you're going to be a part of a paper. And so please don't be offended if we try multiple times and, if you don't respond, you know, we have to drop you. So anyway, I think that's everything. Any other next steps? Well, we heard, and actually, Pierre, please go ahead. I was just going to read out my action list that I'm going to take back to Canada. First of all, thank you so much again for inviting me. It's been a very enriching experience for me and I've learned a lot. And I think there are several things that I'm going to take back to both Genome Canada and our colleagues at CIHR. CIHR are very involved in all of the training aspects of what we've been talking about. And so, you know, looking at the EHR issues, Canada is certainly not a model for you there. It's very, very complex, very fragmented, and I think we're not where we should be. But in terms of the education and training piece, I think there are things that could be done together.
Very interesting discussion around the genetic counselor piece. I think this is something where we share a lot of the issues. We don't have enough genetic counselors in Canada, for sure. Also, the genetic counselors in the States, I think, are more empowered than the ones in Canada in terms of being able to order genetic tests. No, Mark is shaking his head. No, genetic counselors don't have prescriptive authority. And so while they actually do all the counseling around the tests and may make specific recommendations about the test, it does require a physician order. Is that right? Not in every system or in every state. Yeah. That's what I'm going to do. They can't practice independently in most of the states. They can have private practices. It's regulated at the state level as opposed to nationally, so there's a lot of variability. Got it. Okay, okay, okay. So it's not that great. Let them in. But I think there's maybe a lot of cross-fertilization that we could look at in that space, because we've certainly identified it as a key space and we want to encourage genetic counselors to get involved in our own projects and so on and so forth. And the other link is that I've noticed that the president-elect of the National Society of Genetic Counselors in the States is actually going to be a Canadian. So that's very good. Yeah, I think it's the first Canadian that's been nominated to that position, so that's nice. And then the last thing might be around industry links. So, you know, one of our big programs is not in our genomics and personalized health portfolio, but we have coordinated from Canada the Structural Genomics Consortium. And this is an international partnership; the leads are in Toronto and Oxford in the UK. But there are 10 pharmaceutical companies that have been investing in this, with no intellectual property position taken, since 2004.
And they've all come back and said, we're putting in another $4 million for phase four of the Structural Genomics Consortium. I can tell you that most of the R&D leads of the pharma companies are based in the United States. So I think there are potential links that we could explore through that. And oh, yes, there's now a node of the SGC that is being spun out of GSK in North Carolina, and I think it will be based within a university campus or something. So that's something to look out for as well. And then lastly, I think the link with the Global Alliance for Genomics and Health is going to be a critical one for us, for sure. But I've noted that we should be encouraging that group not to reinvent the wheel. And if there are things that they should be using, in terms of APIs or other things that already exist, we should know about them, and we should encourage them to use those. But I think you are sufficiently linked in with that group. I know that some of you have already gone to Leiden for the meeting, whenever that was, or maybe it's today. Yeah. And once again, I will make sure that I link back in with CIHR on more of the training stuff, the EHR stuff, and other items that will certainly be of great interest to them. So thank you again for inviting me. No, thank you so much for coming. We appreciate it. Did you want to comment on that, Jeff? Yeah, I just wanted to thank you as well for being part of the panel yesterday and also for your input, Pierre. I wonder if you would consider two things. One is, once you have a chance to bounce these concepts off of your colleagues, setting up a telecon or something to think about ways to really enhance the partnership, which I think we're hoping would be the case.
And secondly, I wonder, particularly with GA4GH running in parallel with a number of the programs here, whether it's appropriate to think about a joint scientific symposium that really showcases what's going on in both programs, and maybe to reinforce some of the potential for collaboration. Both great ideas, Jeff, and I'll certainly follow up on those, for sure. Great. Thank you so much. I think this brings us to the end, and we promised that we'd end on time, and we're a little bit late. Just a quick reminder that we did also talk about having a GM9 as sort of a basic science collaboration meeting, and another scientific meeting that we would find a way to encourage to be funded. And Eric, did you want to make a point? Thank you, everybody, for a great couple of days. We'll be following up with you, and safe travels. Thank you. Take care.