Yeah, you're trying to get rid of me. I tried my best, Mark. We were the second smallest group. I have had plenty of things to say, but there's never any shortage. We actually shared the room here with Murray, and it was quite interesting because I noticed that Murray was unable to come to consensus with himself over several points, which, you know, just as a comment. So our group was Betty Graham, Pearl O'Rourke, Laura Rodriguez, and myself, and we ended up talking about some overarching issues that have come up across many of these projects, and I'll go through these very briefly. One that came up quite a bit yesterday was IRB-related issues and the fact that there's a high amount of variability in how our institutional IRBs approach these types of projects. So the idea is, would there be the potential to develop some general guidelines for review (perhaps points to consider, an FAQ, or best practices) that could be disseminated? There was a fair amount of pessimism about whether we could engage with OHRP to actually have them issue guidance on some of these issues that we're coming up with. But there is also perhaps the opportunity to do some discovery, and so we were thinking about taking information that had been collected in a survey that Wylie Burke had done and recently published, and using it as the basis of a tool that could be distributed to members of the Genomic Medicine Group, to investigators, and also to the group's institutional IRBs, to try to answer the question: can we understand why IRBs are giving different answers for the same project? And if we can identify some of those areas, is there a way that NHGRI or other entities could provide some guidance or fill that gap? This turns out to have been on Laura's to-do list for a while, so she volunteered to take leadership of it, and there will be more to come.
The second issue that came up frequently was the clinical-research interface. What is clinical care? What is discovery research? These lines are blurring, and we came up with several case studies or examples. Some relate to logistics and electronic medical records: in some places, research results are contained within the same data warehouse as clinical results, and while there are attempts to firewall research results, in some cases a clinical investigator actually has access to them, which can lead to mixing of research and clinical data, probably not the best way to do that. This is not unique to electronic health records; it certainly can happen, and has happened, with paper. It's just that the energy barriers are lower. The example given was an oncology clinical trial where genetic testing was done as part of a trial for discovery, and somehow those test results were then available in the electronic health record even though no one had any idea what to do with them. HIPAA treats family information differently than OHRP does. There was an example from the Virginia Twin Study where OHRP issued guidance saying that if you have an identified subject and then identify a relative of that subject, say the mother of an identified subject, then that relative is by definition identified and is a research subject who may be subject to consent requirements. And while we've talked about this to some degree as a hypothetical, the reality is that our clinical laboratory colleagues deal with it daily when trying to assign whether a variant is pathogenic, benign, or uncertain.
And, as came up more at our previous meeting last week, there are issues relating to when that crosses the line into research; at least one clinical laboratory has an IRB-approved protocol that allows it to contact family members to determine whether variants are de novo or familial and to get additional phenotypic information. Other examples were the return of actionable research results, which we discussed today, and the blurring of the term consent. We give consent to receive clinical care, but in some cases the consent to receive clinical care also includes language saying that residual specimens will be used for research or other purposes, so there is some blurring in that area as well. There is the potential to do discovery around these questions of blurring. There may also be the opportunity to evaluate or inventory different approaches, what are we all doing to deal with these issues, and to study the consequences of this blurring. This just by chance happens to be the subject of the panel that will follow my talk, so we'll have plenty of opportunity to discuss it. Variants for clinical use: different groups are making recommendations about what's ready for primetime, and we've heard about those. For the most part, with a few notable exceptions, these decisions are being made on a siloed, institution-specific basis. Would it be possible to develop criteria we could agree on that would allow us to evaluate the clinical implementation of variants? And if so, how could we facilitate the consensus building around this? Is that a role this group could fulfill? This issue also came up at the previous meeting, so one of our proposals would be to create a work group drawn from the membership of both this group and the previous meeting to address these issues, since they're relevant to both.
We've heard about implementation science, and while we have a lot of varied expertise in the room, we don't have many people who are actually implementation scientists, people who study implementation. So the idea is, could we create a group of consultants, implementation scientists and quality experts, for members of this group that are looking to implement something, to help with some of the issues that need to be considered in actually implementing it successfully? I mentioned yesterday the liaison with the dissemination and implementation group at NIH, of which we have some representatives here, including David Chambers, whom we'll be hearing from shortly. So that may be a possible opportunity to enhance our knowledge in those areas. Another idea was developing a suite of validated methodologies to collect data out of the clinical setting, to answer clinical and research questions, to complete that dynamic loop, so that when we put something into clinical practice, we can actually collect the data to show that it really is having an impact. An example is what we talked about earlier with Murray's project: if we can show that we're canceling fewer surgeries, that could be a signal to administrators that says, hey, there's something important about this genomic medicine and maybe we should be looking for more opportunities to take advantage of it. We also probably need to understand more from the colleagues we're going to be imposing this on about what their experience with genomics has been. This was very enriching for us when we were looking at family history implementation: actually finding out what it is they were doing and how they were experiencing it. We just don't often ask them, what are you actually doing with this and what do you think about it?
We learned some things that we really didn't suspect going in, and had we gone forward on our preconceived notions, we would not have had a successful implementation in a couple of different areas. Those are the ideas we kicked around, with the possibility of collaboration or, perhaps more importantly, of incorporating these concepts into some of the other projects that are more specific to the content areas presented previously. I'll take questions.

I have a question. No, but will you be able to come to consensus with yourself?

I'm of two minds on that.

Mark, I think the idea of aligning the rules for sharing family information, both family history and genetic information, is critical right now, because given the state of Massachusetts law, the lawyers at Partners have actually discussed not allowing some of the practitioners who care for a patient to see the genetic results on the patient they're caring for. So it's a long step from there to my saying that I want to know the mother's and father's genetic results in order to care for the patient. That may be a local issue in part, but the broader issue of really defining when you can share medical records among family members is going to be important. And I know Jeff's laundry list is pretty long for the family history group, but maybe this is something that could be incorporated into that as well. On the issue of making group decisions about variants, which we heard about with BRCA1 and BRCA2, as recently as yesterday someone to my left, who is leading the CPIC activity, has brought a peer group together, and they are going through all the pharmacogenes one by one.
And there are some things that are obvious slam dunks, but there are other things that are not, and I think this is an example; the BRCA1 example is one, and this is another. And Mary, you're too modest to say something, but how has that gone? Because those are models that I think you're implying we should be applying.

Well, I would nuance that just slightly, in the sense that decisions about implementation are always going to be local decisions, because it's always going to be the local people who decide whether this is ready for primetime for their system or not. I think what we were talking about was more a consensus around the necessary information or criteria to at least make the decision. CPIC is one example of a consortium that's coming together, although I think it's a consortium of like-minded folk, and in some sense advocates, so it could at least be subject to someone saying, well, these are all people that just want to do this, so are they really being objective about their criteria? But it's at least a model of people trying to move this forward.

Yeah, I agree, and we have as a goal, after we do the first few examples, to ask what made us consider something a slam dunk.

Right.

And we haven't yet had to delve down into the gene-drug pairs where there's a lot of disagreement. A perfect example is CYP2D6 and tamoxifen: there are some very strong advocates, and there are some people that are really against it. When we come to those controversial issues, our hands will be forced to really say what the criteria are to advocate for implementation of this, this, and that variant.

Yeah.

So we haven't really been pressed to make those difficult decisions yet, but I guess our feeling was you start with the things that are non-controversial, then you figure out why you thought they were non-controversial.
Right, and I think that's exactly what we want to do. If we take the process CPIC is using, plus the process that many of us are using when we're making these decisions, and we gather the information together about how we came to consider something controversial or non-controversial, we should be able to identify some common themes, which I think could lead to some standardization around a methodology. The other thing I would point out is that controversy in this context is a really good thing, because that's going to set out your discovery projects: okay, half of you think the tamoxifen thing is good, half of you don't, so let's develop a research agenda to answer the specific questions we have concerns about. A lot of times what we find is that the studies being done aren't answering the questions. If there's one thing I can point to for EGAPP that has been very useful, it's that even though the majority of the recommendations have said there is not enough evidence, what they've done in their recommendation statements is to say precisely what kind of evidence is needed to actually answer the clinical question. We'll have to take into account how much of that is pragmatic enough to be captured, but at least you then have a chance to direct people toward answering the question that is important, as opposed to answering other questions that might otherwise not be useful.

So Mark, the five clinical sequencing U01s that were just funded all have the aim of returning either exomic or genomic results to their subjects, so they would be good partners in a clinical action work group, I think.
Good, and eMERGE obviously also has a subgroup working on how we make decisions about this, so there are a number of groups in the space, but as usual the cross talk is not necessarily being facilitated, and I think that's something this group could potentially do.

So again, I don't want to belabor the point, but I want to go back to really having this done in a very systematic way. When we do this for BRCA1 and 2, it's really based on probabilities: something classified as deleterious has a probability of at least 99 percent, and something suspected deleterious a probability of 95 to 99 percent. There are different levels of evidence, and it has been determined how you weight those levels of evidence. So I really think this is an area, and Karis can also chime in, where we really thought about this a lot. I was talking to Mary about the types of evidence you have to use. We've seen functional studies be wrong, so they aren't weighted that strongly; conservation is weighted depending on how far it extends across species; you weight co-segregation, you weight things in trans, you weight LOH. There are lots of levels of evidence that are used together to move these variants between categories. You do modeling of people who have deleterious mutations in large sample sets versus people who have an unclassified variant, a UCV or VUS. There are lots of ways to approach this, and certainly in the BRCA1 and 2 field there are a lot of people who have thought very hard about how to do it and how to be very rigorous about moving things between categories. I get a little nervous about this conversation when it's sort of, we think it's good or we think it's bad; it really has to be done in a very rigorous, thoughtful fashion.

Yeah, I don't disagree with that, although I think we're talking about two different things. There are things that we clearly have rigorous evidence about, where we can look at implementation; that's really a very different question than arguing about the level of evidence or the binning. And there are discovery things that are going to be much farther down the pipeline. We need to account for both of those ends. But I think we also have to be pragmatic: the current output of EGAPP, to pick on one systematic synthesis of evidence, is one variant a year, and that's not going to cut it. So we have to figure out ways to do this in a systematic way, and do it rapidly, and then be willing to learn quickly whether it is having an impact or not. This is an issue we deal with in medical practice all the time. There's almost never certainty about what we do in medical practice, but we have to be able to put it into an environment where we can actually define and capture outcomes to say, did we make the right choice or not? So there are compromises in how we're going to do it, and we need approaches that are not only systematic but also scalable.

So I'm going to follow Mark's admonition to be practical and suggest that we break now. Lunch is ready; if everybody will just grab their lunch in the next 15 minutes or so, we'll start Pearl's session at 12:30.
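[Editor's illustration] The multifactorial variant-classification approach discussed above, a prior probability combined with weighted lines of evidence such as co-segregation, occurrence in trans, and LOH, can be sketched as a Bayesian odds calculation. This is a minimal sketch, not any group's actual model: the evidence categories and the 0.95/0.99 class thresholds follow the discussion, while the function names, the example prior, and the likelihood-ratio values are hypothetical and chosen only for illustration.

```python
# Minimal sketch of multifactorial (Bayesian) variant classification.
# A prior probability of pathogenicity is combined with likelihood
# ratios (LRs) from independent lines of evidence (co-segregation,
# occurrence in trans, LOH, ...). All numeric values are invented.

def posterior_probability(prior: float, likelihood_ratios: list[float]) -> float:
    """Combine a prior probability with evidence LRs by multiplying odds."""
    odds = prior / (1.0 - prior)          # convert probability to odds
    for lr in likelihood_ratios:
        odds *= lr                        # each independent LR scales the odds
    return odds / (1.0 + odds)            # convert odds back to probability

def classify(posterior: float) -> str:
    """Assign a five-tier class from the posterior probability."""
    if posterior > 0.99:
        return "pathogenic"               # probability of at least 99 percent
    if posterior > 0.95:
        return "likely pathogenic"        # 95 to 99 percent
    if posterior < 0.001:
        return "benign"
    if posterior < 0.05:
        return "likely benign"
    return "uncertain"

# Example: a neutral 0.5 prior plus three supportive (hypothetical) LRs.
p = posterior_probability(0.5, [10.0, 5.0, 3.0])   # odds 1 * 150 -> ~0.993
print(classify(p))                                  # prints "pathogenic"
```

The design point the speakers raise is visible here: the rigor lives entirely in how the LR for each evidence type is derived and weighted (functional data weakly, co-segregation and trans data more strongly), not in the arithmetic, which is a simple odds product.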