Good morning. My name is Sarah Hull, and I am the chair of the NHGRI Intramural IRB. It's an IRB that has been around for a little over 17 years and was constituted specifically to handle the ethical review of research involving genetics, transitioning into genomics over the course of that time, ranging from social science protocols to gene transfer. In between, the majority of the protocols we review are natural history studies of rare genetic diseases. In my time there, I've also seen a shift from a presumption that most genetic results would not be disclosed (they'd be experimental, used to deepen understanding of the diseases under study) to an almost inverse presumption now that there is some affirmative obligation not only to disclose primary as well as secondary, or what we used to call incidental, findings revealed over the course of a study, but perhaps even to affirmatively look for certain findings considered so clinically beneficial that, if you have the data in front of you, there is an obligation to look at them. So I'm coming from that perspective: working with an IRB and a group of people that includes genetic counselors, medical geneticists, social scientists, and community representatives, including parents of children with the kinds of rare disorders often featured in the studies we review, thinking about how to assess risk holistically in the context of a study, with relatively little experience thinking about genomic sequencing as a device, as it has been defined in the proceedings thus far today.
Part of my disclaimer is to let you know that I'm speaking for myself, not on behalf of NHGRI or any of the other federal agencies that employ me. I'm also speaking as somebody who is relatively new to this, who has already learned quite a bit from the presentations this morning, and who is looking forward to the discussion as an opportunity to learn more from everybody. I'm going to begin a bit more broadly by sharing what our usual role is in assessing risk, and how, according to the Common Rule, 45 CFR 46, and the other guidelines that inform our work, we gauge risk on a continuum from minimal to greater than minimal risk. Then I'll offer some reflections on this role; it's not a new role, and certainly not new with respect to devices in general, but thinking about genomic sequencing as a device, we might shift into thinking about the continuum from non-significant to significant risk. In general, we think of many of the risks of genetic research as stemming from the risks of disclosure. This might be the risk of unauthorized breaches of confidentiality to third parties who may in some way use those results to harm individuals. It could be harm in the form of depriving them of some kind of economic benefit or entitlement; discrimination is often one that gets mentioned, along with harms from stigma and other kinds of psychosocial harms. There are also the risks of disclosing findings, I listed secondary, but even primary findings, to the patients and participants enrolled in the study themselves. People have talked about the potential for anxiety from receiving information about disease risk.
There is also the possibility, especially if the information isn't correct, that results could lead somebody to engage in another kind of risky procedure, such as a mastectomy, particularly if that wasn't warranted because of a false positive result. And then there is perhaps another category of concerns around uses of genetic information in ways that would conflict with fundamental values of a donor or a donor's community. I just came back from a conference hosted by the Native Research Network that focused in great detail on how some indigenous communities have been harmed and wronged by the use of genetic information in the context of research, in many cases without their agreement; that would be one example here. Now in reality, except for that last example, I am not aware, and I try to solicit this information when I speak to different audiences, of any cases in which a breach of confidentiality or a security breach related to genetic information in a research protocol has actually occurred. There is a great deal of attention on the theoretical risks of this happening, but few if any case examples. There is also emerging evidence of a low frequency of adverse long-term psychological consequences from learning genetic information, not a negative result, but unfavorable information, meaning information about a potential risk or diagnosis. I refer often to a relatively recent analysis by Dave Wendler and Annette Rid, current and former colleagues of mine in the Department of Bioethics at the NIH, where I hold a joint appointment. They argue that, for the most part, and given all of the protections we currently have in place for genetics research, most genetic research should be classified as minimal risk, and they go through a very thoughtful analysis. I have the reference to the paper at the end of my slides.
If you're interested, they compare genetic testing, genetic information, to a few ways of thinking about risk. The regulations talk about the risks of daily life: the risk of taking a left turn across Rockville Pike to come to this meeting, for example, might be a risk one could compare to genetic analysis. There is the routine examination standard, and I actually think there's some traction here; this is perhaps the category most analogous to what we're talking about in terms of IDEs and FDA evaluation. And then there is a charitable participation standard: if the point of research is to do something societally beneficial, to generate knowledge that benefits society as a whole, perhaps charitable participation is the right comparison. So if I volunteer to build a house for Habitat for Humanity, and I expose myself to some construction-related risks of using a hammer because I'm a klutz, perhaps that's the right standard to compare it to. They walk through all three and conclude that, for the most part, genetic research on their analysis would qualify as minimal risk, assuming certain safeguards are in place; I'll come back to what some of those safeguards might look like in a moment. Next are some summary statements from a working group I participated in, chaired by Les Biesecker. It was an intramural working group at the NIH that looked at what resources should be in place to support investigators who increasingly are being called upon both to look for and to disclose secondary findings from genetic research to participants in their studies.
The paper focused on an ethical justification around the idea of clinical benefit: that identifying, validating, and communicating these kinds of results could provide substantial clinical benefit to participants, and so it builds a case for an obligation. But it then acknowledges open questions about which studies this kind of obligation pertains to, precisely which findings, and how one would have the resources to do this responsibly. Perhaps it's no coincidence that the conclusion on the next slide is framed the way it is, because there were actually three IRB chairs represented on this working group. The group concluded that the IRB, in consultation with the investigator, is the appropriate body to determine which are the right studies and which are the right kinds of findings to be disclosed in a program of looking for and revealing secondary genomic findings: because of its experience analyzing the potential benefits and harms of research, because of its positioning within an institution (although the centralization of IRB review might change this to some extent), and because of its awareness of the availability of counseling, analytic, and other resources to support the investigator in this endeavor. So, steps that we look for as an IRB to help minimize the risks of genetic research: we look for the magic acronym, CLIA. We want confirmation in a CLIA-certified laboratory to be part of the research plan, and I can't think of a study we've reviewed where this wasn't the case.
We work very hard at getting better at identifying the appropriate threshold for what kinds of results will be returned, and at defining what counts as clinically relevant and actionable: in other words, not just a pathogenic variant, but some clear course of action available to a research participant, based on that information, to improve their health outcomes. Increasingly, we're able to rely on lists generated by professional societies; we heard a reference to the ACMG list earlier. We've also had investigators take advantage of the expertise we have in the NIH intramural research program, and externally as well, to convene expert committees who will help review these kinds of findings and define the results that would be returned. We also pay very close attention both to the plans for genetic counseling resources and to the information contained in the informed consent process, to help ensure that a participant who is getting results actually knows about and wants to receive them, has an opportunity either to decline the receipt of results or to decline research participation (depending on how a study is set up), knows this upfront, and has help with the interpretation of the results once they receive them. One particular point our IRB has been very tuned into is how participants understand the potential for false negative results. In other words, just because a pathogenic result is not identified does not necessarily mean they are not at risk; it depends entirely on what is being looked at and how the testing pipeline has been set up. So a lot of attention is paid to how the consent process and genetic counseling will help manage expectations around these kinds of findings.
So now we're being charged with thinking about genomic sequencing in a slightly different way. We've heard presentations this morning about how this entire pipeline is defined as a device, and I'm one of the people who thinks of this more as a process than as a device, but I'm learning to use the terminology appropriately. We understand that there are three different categories into which a device could fall. First, IDE exempt: "Investigational Device Exemption exempt" seems awkward, so I prefer to just say that these regulations, 21 CFR 812, don't apply, and avoid that double "exempt" issue. Then there's the abbreviated IDE process, if there's a determination that it's a non-significant risk device; there are still requirements that go along with that, but it doesn't require the full IDE submission to the FDA. And then there's the full IDE process. I was very happy that my next slide, defining what we're talking about when we're talking about a significant risk device for genomics, matches the last speaker's, so I can skip over it. Now I'm referring to the points to consider document that I noticed was included in the folders that folks here at the conference received. In considering the risks of genomic sequencing in terms of device definitions, we would be concerned about the possibility of incorrect results: results that might either lead somebody to forego a medically important or necessary treatment because of a lack of information, or a false positive that would motivate them to receive an intervention that was both risky and unwarranted based on incorrect information. The points to consider also talk about assessing risk in terms of who is getting the testing: are these healthy volunteers enrolling in research, or are these sick participants for whom the information has a particular kind of importance or salience?
These are some of the factors mentioned in the points to consider document, but I would also argue they are consistent with the range of ways IRBs have been thinking about the risks of genetic information all along. So who determines the IDE risk level? Again, I was relieved to see this maps onto what some speakers said earlier. It's primarily the responsibility of the sponsor, the investigator, or the sponsor-investigator, but that determination has to be confirmed by submission to the IRB, which will indicate whether or not it agrees with the investigator, and, depending on the ordering, by the FDA, which has expressed a willingness to take a look as part of a pre-submission review process or consultation that could occur prior to, in parallel with, or after the IRB review. And from some of my offline conversations, I understand that it's actually the FDA that could ultimately trump an IRB's decision: if it comes to the FDA and the FDA disagrees, that's how it would play out. Here's a graphic I've adapted from my former colleague at NHGRI, Jonathan Gitlin, who used to be in the policy office. I've found it a very helpful way of thinking through the different decision points when I'm consulting with investigators. I should mention, as I had meant to earlier, that our IRB is connected with a unit called the Bioethics Core, and we have a habit of encouraging investigators to talk to us at any point in the review process: very early on, when something has been submitted to the IRB, and afterwards, when questions come up. I always encourage our investigators, and they know, that they can talk to the IRB through the Bioethics Core. So if they were to come to me, this is the tool I might use to help an investigator decide what he or she needs to do. The first question is: will the results be used clinically or returned?
Increasingly, the answer to this question for genetic analysis, as I mentioned, is yes, although there may be some circumstances in which it would not be required, or even appropriate, to track people down and give them results, in the context of certain kinds of biobank studies, for example. If that were the case, the IDE regulations wouldn't apply. If the answer is yes, the next question is whether there is confirmatory testing according to what I'll just call the best standard available, because clearly there are open questions about what counts as adequate confirmatory testing; we've been considering Sanger confirmation to be appropriate. If the answer is yes, it has been suggested that this also means the IDE regulations don't apply, and that an IDE doesn't need to be submitted or considered further. If the answer is no, the PI is charged with deciding whether the device poses significant or non-significant risk. If they don't think it poses significant risk, they can submit this determination to the IRB, which may either agree or disagree. If it agrees, the IRB is able to approve the study under the abbreviated NSR IDE provisions. If the IRB disagrees, or if the investigator already has the foresight that this is likely a significant risk device, they would need to submit the IDE application to the FDA. Again, the arrows might go in different directions if an investigator decided to consult with the FDA first, or to go through that pathway before submitting to the IRB, and there may be questions about timelines and efficiency. Our office, at the very least, is willing to work with an investigator to think through the most likely outcome and help them figure out the most efficient approach.
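The decision points just walked through can be sketched as a small decision function. This is only an illustrative sketch of the flow as described in the talk, not an official FDA or IRB tool; the function name, parameter names, and category strings are my own invention.

```python
# Hypothetical sketch of the IDE decision tree described above.
# All names and category labels here are illustrative, not regulatory terms of art.

def ide_pathway(results_returned_or_used_clinically: bool,
                clia_confirmatory_testing: bool,
                investigator_deems_significant_risk: bool,
                irb_agrees_nsr: bool) -> str:
    """Return the likely IDE pathway for a genomic sequencing study."""
    if not results_returned_or_used_clinically:
        # Results stay within the research realm: 21 CFR 812 does not apply.
        return "IDE regulations do not apply"
    if clia_confirmatory_testing:
        # Confirmatory testing (e.g., Sanger) by an accepted standard:
        # it has been suggested the IDE regulations also do not apply.
        return "IDE regulations do not apply"
    if investigator_deems_significant_risk or not irb_agrees_nsr:
        # SR device, or the IRB disagrees with an NSR claim: full IDE to FDA.
        return "submit IDE application to FDA"
    # IRB concurs with the non-significant risk determination.
    return "abbreviated (NSR) IDE provisions"
```

As the talk notes, the arrows can run in a different order in practice, for example consulting the FDA before the IRB, so this linear sketch is a simplification.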
But given how many open questions there are about precisely how these rules apply in different kinds of situations, we might also want to consult with the FDA ourselves, or encourage an investigator to go there first, just to make sure we're getting this right. I wanted to present a case based on the very limited number of examples of these kinds of studies we've reviewed. I realize I'm coming close to the end of my time, but I want to describe a case involving a natural history study, a rare disease protocol, with a process for disclosure of secondary genomic research findings built into the study. It is structured so that a confirmation process maximizes the positive predictive value of the ability to identify the secondary variants that might be disclosed, with a very careful list on the basis of which those variants are identified as clinically actionable and appropriate to disclose, but perhaps trading the possibility of some false negative results for that positive predictive value. The kinds of factors an IRB might weigh in a non-significant risk determination would include reliance on this very carefully vetted list of gene variants, perhaps in consultation, again, with an expert advisory group to inform which results would be included on this sort of panel for evaluation and disclosure; and, again, the plans for counseling, consent, and the reporting out of results, both positive results and the possibility of a no-findings result, to make absolutely sure that participants understood that the absence of findings was not equivalent to a clean bill of health or to no need for genetic testing.
In a number of cases, given that we have a Social and Behavioral Research Branch, many of our studies include surveys or sub-studies of how participants understand the process they're going through, to try to improve their understanding of genetic findings. Doing research in real time on these uncertain questions is, we find, a helpful adjunct to our ethics review, and our investigators have been very willing to report back to us and help us improve our processes based on the science built into these protocols, so we can get a better understanding of whether, in fact, the consent process and the counseling process actually work and help to minimize the negative outcomes that could result from these kinds of findings. Here are some examples of language I extracted from our consent forms that focus on the management of false negative findings: "You could be falsely reassured by receiving no results." "This is not a complete genetic health assessment." "If your doctor thinks you need a genetic test, you should act on that rather than on the results of this study." There is also language suggesting there are limits to the analytic capabilities of the study. This information goes along with an oral consent process, genetic counseling, and other things like that, but it gives you a flavor of the kinds of language some of our consent forms include. Here are some references. The second bullet point is the standard operating procedure, one of 43 operating procedures we have for our human research protection program, that instructs IRBs on how they're supposed to do risk assessments in the context of IDEs, not limited to genetics. I found it an extremely helpful resource, given the experience IRBs have in looking at the other kinds of devices on the list.
It's open to the public, and it might be helpful to IRBs out there who do this kind of work. Thank you.