but I think we need to move to our overall discussion. And I believe there were a few unasked clarifying questions. I think Dr. Weinshilboum and Lynn, you had questions for Steve, and Bob, you had a question for Laurie. So if we wanted to start out our discussion with those, feel free to do that, or we can move into general discussion as well. Actually, what I was going to do is bring to the group's attention something that Steve said in passing. I'm on the advisory committee for his U54, so I've seen it up close and personal. And he showed you that one page where they teach the children about what extensive metabolizers and poor metabolizers do. That's part of a booklet that I would hate to see loculated within NICHD only; I think the NIH would find it really interesting. And I would hope, Steve, you would send Terry and Eric examples, because what they did was collaborate with the Art Institute. He's being far too Midwestern modest. I think that'd be great, because what they did was make it a fun way for the kids to explain genomic testing. And then when the child went back home with it, when they took the drugs, they got little stickers to put in the book. And I have to admit, dealing with my geriatric hypertension population, I said maybe the stickers would work with them too. But I honestly thought this was extremely creative and the sort of thing that anyone dealing with children immediately said, that makes sense. But if you have a few images, I think the group would actually find this helpful and interesting. We will make this booklet available to anybody who wishes to see it. I don't have it loaded here. I can show you the PDF document on my computer, but we'll send that to anybody. Yeah, actually, Steve. Steve, if you'd send it to me, we'll include it in the meeting materials when we distribute them. That'd be great.
Yeah, I actually whispered to Mary that I thought we should just close up the CPIC nomenclature reconciliation discussion and just go with the turtle and the rocket, because I think it would work better. Yeah. So this is one of the images that explains the trial; I glossed over a lot of the details. The study that we proposed in our U54 is an exposure escalation study. And so this is the image where the clinician sits down with the family and explains whether we're going to increase the dose, or there's going to be no change in the dose, or whether we're going to reduce the dose. This is the image that is part of the permission and assent process and explains that anybody can terminate the study. The investigator on the left can terminate the study. The parent can terminate the study, or the child, the participant themselves, can terminate the study. And so there is a series of images that explain each of the concepts, accompanied by text. So yeah, I will send you the booklet. It's the only tangible product of the grant so far. Do you have it on your website? I mean, you know, posting it there. No, we don't have it on the website. We haven't actually started that particular study, so we rolled it out to our center advisory committee and sent them home with a copy. And Steve, before you leave, and I know Lynn wants to ask you a question, can you comment? I mean, it looks great that there'd be some sort of dosing algorithm, you know, wrapped in this sort of educational format, and Terry was encouraging you to think in terms of implementing it in a randomized design. Can you say a little bit more about what you would compare this to, in terms of how you would actually tailor precision therapy in the pediatric population? Yeah, so what we have done is we have taken the pharmacokinetic data for those 23 subjects from the genotype-stratified PK study and we have built a population PK model around that.
One of the reasons why we've delayed the clinical study is the question of what happens if we dose everybody to certain exposures. Buried within the literature, Lilly has some reports of a maximum concentration, a peak concentration, of 800 nanograms per mL being associated with a higher probability of clinical response to the drug, and it's mostly the PMs that achieve that and not very many of the non-PMs. And in the clinical trials that was also the threshold for making a decision as to whether somebody would continue on to a higher dose escalation study. So when we start to look at what doses would be required to achieve a peak concentration of 800 nanograms per mL in everybody, we will exceed the FDA-recommended dose for some of these kids, the more rapid metabolizers. So before we kick off the study, the clinicians are prepared to do that, because they understand it's not the dose, it's the exposure that's the important thing. We want to make sure that the algorithm, or the dosing model that we have right now, is reasonably accurate, so we are conducting a validation study to see how well it performs and then to update it; instead of just 23 patients, it will be updated with data from 47 patients. Now the problem comes in with respect to the design of this study. There are some biomarkers that we're also looking at in this study, so we're not really ready for prime time with a comparative effectiveness trial of what may have to be regulated as a device. And so this is where it becomes challenging, in that we know that if we dose atomoxetine the way it's recommended in the PDR, most likely a lot of the PMs will become toxic, because they have extremely high exposures, and some of the kids have no probability of benefit. So I don't know how we're gonna address that with the IRB given our current status of knowledge. And so I think we need to think that through.
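The exposure-targeting idea described here can be sketched in a few lines under a linear (dose-proportional) PK assumption. The reference dose and the per-phenotype peak concentrations below are invented for illustration only; they are not values from the study's population PK model.

```python
# Hypothetical sketch of dosing to exposure rather than to dose: scale
# each metabolizer group's dose so the predicted peak concentration
# reaches the 800 ng/mL target. All numbers below are invented.

TARGET_CMAX = 800.0      # ng/mL, response threshold cited from the literature
REFERENCE_DOSE_MG = 40.0  # hypothetical common starting dose

predicted_cmax_at_ref = {  # ng/mL at the reference dose (invented values)
    "poor metabolizer": 900.0,
    "intermediate metabolizer": 450.0,
    "normal metabolizer": 250.0,
    "ultrarapid metabolizer": 150.0,
}

def dose_for_target(cmax_at_ref: float, ref_dose_mg: float, target: float) -> float:
    """Assuming linear PK (Cmax proportional to dose), scale the
    reference dose so predicted Cmax hits the target."""
    return ref_dose_mg * target / cmax_at_ref

for phenotype, cmax in predicted_cmax_at_ref.items():
    mg = dose_for_target(cmax, REFERENCE_DOSE_MG, TARGET_CMAX)
    print(f"{phenotype}: {mg:.0f} mg")
```

Even with made-up inputs, the pattern matches the point being made: the faster the metabolizer, the larger the dose needed to reach the same exposure, which is exactly why some doses would exceed the labeled maximum.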
And number two, we don't have the resources to do it. We don't even have the resources to do the study that we proposed in the U54. The only reason we're able to do it is because the institution believes in it and is making resources available. Okay, next Lynn. Steve, right at the end of your talk you started to discuss ADHD. And in our area many of the pharmacogenomic laboratories are marketing to pediatricians and parents directly. And our pediatricians, who are very interested in pharmacogenetics, initially contacted our group because parents were coming in with results from panels predicting response to ADHD drugs. And it would be a great service to help distinguish which of those markers are actually medically actionable and evidence-based, because the parents are refusing to have the physicians give certain drugs that are on the red light report. And some of those associations just don't have any literature, or very little evidence base. And so in the real world, they're really needing guidance from groups like these, whether it's Mary doing something in CPIC and/or you working together. That's happening every day. So this was the fifth point I was gonna make had I not rushed through everything, so focused on being on time for the first time in my life. The fifth point is lack of prospective validation for pharmacogenetic tests. So one of the tests that you're talking about is this one for the adrenergic alpha-2A receptor, the -1291 position upstream. And here's one of the studies that is cited in support of it. It's a methylphenidate study done predominantly in the inattentive type of ADHD. And if you go into the paper, you can actually pull out enough data to construct a two-by-two table, and you can see the sensitivity and specificity.
To me, it's the positive and negative predictive value that's most informative, because if you think of this in terms of what's actually gonna happen, well, somebody's gonna come in with a test result and present it to the pediatrician, or the pediatrician's gonna get the information from the testing lab, and it will say whether or not the risk variant is present. So if the risk allele is present, the positive predictive value for this test is 72%. So that's maybe not so bad. But here's the second study that's cited in support, and interestingly the other allele was associated with a beneficial response. Not the G allele as in the previous paper. This was done in patients with autism spectrum disorders with comorbid ADHD. Now the positive predictive value is 40%. So it's a coin flip, right? So not very useful. So what's this telling us? Well, the sample sizes are really small. I think it was 50-some patients in the first study and, what is it, 58 in this one, and it's a similar size in the other one. Are these just sample-size related? Are they related to the fact that there are two different processes, comorbid ADHD as opposed to inattentive type? I mean, I don't know. So this is why we need the prospective validation if we're going to figure out what to do with these test results. Bob? Oh, I'm sorry. Is that a follow-up to the point that was just made, Mary? And then, Bob, we'll go to you. Can I just add that we're aware that the ADHD population is one of those groups that's heavily marketed to by genetic testing laboratories, almost exclusively inappropriately. That's why for ADHD drugs with CYP2D6, where there is some reasonable evidence, we are working on a CPIC guideline, but for these other genes that are widely touted but for which there's no evidence, we won't have a guideline.
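The predictive values being discussed fall directly out of a two-by-two table. A minimal sketch follows; the cell counts are invented for illustration (chosen so the PPV comes out near the 72% figure mentioned) and are not the counts from the cited papers.

```python
# Sensitivity, specificity, and predictive values from a 2x2 table of
# risk-allele carriage versus treatment response. Cell counts below
# are invented, not taken from the cited studies.

def predictive_values(tp: int, fp: int, fn: int, tn: int):
    """tp: allele carriers who responded; fp: carriers who did not;
    fn: non-carriers who responded; tn: non-carriers who did not."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)  # P(response | risk allele present)
    npv = tn / (tn + fn)  # P(no response | risk allele absent)
    return sensitivity, specificity, ppv, npv

sens, spec, ppv, npv = predictive_values(tp=18, fp=7, fn=12, tn=21)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, "
      f"PPV={ppv:.2f}, NPV={npv:.2f}")
```

Note that with counts this small, shifting a handful of patients between cells moves the PPV dramatically, which is the sample-size concern raised in the discussion.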
That's another example, one of those level C associations where publishing in the negative might be helpful to clinicians and patients, but right now we just don't have the bandwidth. Very helpful. So for the outcome studies, the outcome that you reported on was, for example, the MACE events. Did you track other events, potential negative outcomes from switching drugs, for example? And I know, for example, in Mark's case, valproate isn't without its own risks. So if you just look at the specific event, or type of events, that you're trying to prevent with respect to the drug that you're not giving, are you missing something potentially? Yeah, the issue with models is that of course they can become infinitely complex, and you're correct. And I was actually a strong advocate, as we were initially discussing, that if valproate was gonna be the alternative medication, then we needed to look at the adverse event profile of valproate compared to carbamazepine and we needed to model that. It turned out that when we actually looked at the literature, and assuming best-practice use of valproate with appropriate monitoring, that sort of thing, the risk profile is actually very favorable. And so we were able to say there's really nothing that we need to include in our model relevant to this, because you have to make some assumptions; we can't model the fact that people don't do what they're supposed to do. Well, you can, but it becomes a really disheartening enterprise. And so those types of critical questions inform who needs to be sitting around the table, and Larissa will recall, as she was the one who was convening the group, that as we were talking, I said, you know, we really need to have an epidemiologist kind of talk to us about this. And so we had one meeting that was just devoted to talking about issues of alternative therapies for this type of epilepsy.
We had another one where we had a dermatologist, an expert in this work, to talk about the different types of severe cutaneous adverse events. So we knew that we were in the right ballpark. And I think that's the danger: if you don't have that expertise around the table, at least in the initial stages of developing a model, you can miss something that's critically important. Dan, I think you said you had a question for Laurie, or for Laura. For Laurie. Laurie. Laurie, but it's for all of us. I want you to talk about the TAILOR study for a second, the TAILOR-PCI study. So the generic issue is, can you randomize people who are homozygous null? Mary has the same issue in randomization of people who have TPMT null alleles. So what is the approach in that study to randomizing people with that? And it's a paradox in our world, because those are the people with the most obvious effects, the highest effects, the most likely large effect sizes when you do a study. And those are the people who may not be included, so you end up doing a very large study in heterozygotes. And I showed some heterozygote data just because I think that's much mushier and you're gonna have to study lots of people. So tell us about the TAILOR-PCI study and how they're approaching that, because I should know, but I don't. Dan, I think since it comes from our place, I can probably do a better job than she can. So here you go. But it's a generic issue that I want people to consider too, so you could talk, and I wanted her to talk, and also Mary, about this: sort of the whole question of, we're obsessed by randomized clinical trials and at the same time we're totally two-faced, because we then turn around and say, well, yeah, okay, but we can't do it in those people, because we already know that those people are bad. And the problem was, just to remind people of the history, that the FDA came out with a black box warning in March of 2010.
In June of 2010, the American Heart Association and the American College of Cardiology came out with a position paper; the first author was the president of the American College, and he comes from the Mayo Clinic and runs our cath lab. It said this is premature until there's a randomized clinical trial. So you now have the situation of the FDA saying it's a black box warning. So in order to do what the cardiologists wanted, because we know that the vast majority of them don't genotype, there were extensive discussions with the FDA to say, how would you design this study? So the study was designed by the FDA, basically. And it was a two-arm study. In one arm, a blood sample is taken, but no genotyping is done. That's what Dan is talking about. And some of those people are gonna be homozygous variant. The genotyping will be done one year, which is the observation period, after the blood sample's taken. The other arm was what you're asking for, and which I think makes the most sense: the genotyping's done instantaneously in the cath lab, in 40 minutes. And so you don't get that push bolus of clopidogrel, which we give to all patients no matter what their genotype, because we didn't know the genotype until now. And if they are homozygous variant or have a variant allele, they're then randomized to an alternative arm with an alternative medication. And that's one of the reasons why you showed the recruitment numbers. The power calculation said you had to have about 5,500, which is the number that you showed, in order to have adequate power with that design. And it's a design that came from the FDA. They said they were comfortable with that. Now does that sound... It's sort of, see no evil, et cetera, et cetera. The study is now, as of last week, at 4,033 patients. So it's clipping right along. It's 27 centers, three of them in Korea, because of the increased incidence of the variant there.
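For context on where a total on the order of 5,500 comes from: a standard two-proportion sample-size calculation with modest event rates lands in that range. The event rates, alpha, and power below are illustrative assumptions, not the trial's actual design inputs.

```python
# Two-proportion sample-size calculation (normal approximation),
# showing how totals in the few-thousand range arise for modest event
# rates. The rates used here are invented, not the trial's inputs.
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p1: float, p2: float, alpha: float = 0.05,
              power: float = 0.80) -> int:
    """Patients per arm to detect event rates p1 vs p2 with a
    two-sided z-test at the given alpha and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# e.g. detecting a 5% vs 3.5% event rate at 80% power
per_arm = n_per_arm(0.05, 0.035)
print(per_arm, "per arm,", 2 * per_arm, "total")
```

The smaller the absolute difference in event rates, the faster the required total grows, which is why trials built around hard clinical endpoints like MACE end up needing thousands of patients.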
And I think five centers in Canada, and the rest across the United States. So the recruitment is going along. This is funded by a Heart, Lung, and Blood grant, so it's undergone peer review. But I think Dan's asking a really good question: what do you do in this kind of situation? This was complicated immensely by the black box warning, and then equally immensely by the strong objections from the cardiovascular community. Now with that as background, I think Dan's asking a really good question, because TPMT, which some of us have some strong feelings about too, I think creates a real problem. How do you randomize somebody who's, I guess, *3A/*3A? I think that would be pretty difficult to do. I wouldn't wanna do it. So does that help? Does that set the stage? I just want the stage to be set, and I wanna sort of generate a discussion, because I think that as we go towards thinking about multiplexed randomized clinical trials or some other model, those are the kinds of nitty-gritty trial design issues that really get in the way. Mary and I are both on the SAB for U-PGx, and that has been an enormous issue. And continues to be. But this was really complicated by the black box warning. I think it's... Well, I mean, had the black box warning not been issued, the whole trial would have never happened anyway. I think that's the other way to look at it. The black box warning provoked the whole field. Okay, I know Terry would like to weigh in and so would Mark, but before they do, Laurie, I know the question was directed to you. Do you wanna say something first, and then please have the others weigh in? Dan just asked you to talk about the effect sizes in the homozygotes and the heterozygotes, if you have those data. Yeah, so we couldn't actually look in the homozygotes, because almost everybody got switched to alternative therapy. So there was no comparator group.
But we did look in the heterozygotes, and there was still, I don't remember the exact effect sizes, but it's very similar to what we observed in the overall population. So it was a significant difference. You know, I mean, it's a good question, because I don't think we would participate in a trial at this point where we had to randomize people undergoing PCI to genotyping or not, because based on these data, we would feel kind of weird about that. There was a trial presented at AHA, with peripheral artery disease, I think, in which they were using clopidogrel versus ticagrelor or, I can't remember, one of the other antiplatelet agents, and they actually excluded poor metabolizers. So they genotyped people ahead of time and they were not included in the trial, and I think it was because of concerns about treating them with clopidogrel. Homozygotes, yeah. So they still had heterozygotes included, but I think it's a really good question. It's sort of similar to the TPMT issue. Do we feel comfortable randomizing these individuals when we've been treating them? Maybe it's different if you're not in a position where you're already doing genotype-guided therapy, but here we are. Great, thanks. Terry, go ahead. So on this point, I mean, this is the challenge of trying to do a randomized trial. What we're doing around the table is designing a trial that will have the most important and most effective genes and variants and drugs taken out of it, because we all know that it works. So, and we're all sort of in the choir here, are we at the clinical equipoise level, where some of us fervently believe but don't have evidence that this does work, and there are others who fervently believe but don't have evidence that it doesn't, and a trial is needed? Or is it really unethical? And I don't know how to answer that question. Mark, go ahead. Yeah, and that was the point that you made, as you responded to Steve, about including randomization as you do the implementation.
And I'm thinking, at what point does the IRB hold their hand up and say, we don't think that's ethical, because we think there's sufficient evidence? So it's not just what we think; it's gonna be what other people that are involved in the decision-making think about that as well. And that's why I would come back to the idea of saying there are other ways to do this. If you had a staged implementation in a large system, you could do a cluster randomization, and you could randomize at the level of institutions. And so there would be ways that you could take advantage of the fact that implementation just doesn't roll out overnight everywhere to develop data, to look for differences, as part of the normal activity. And that's where I think we need to be more creative, and I think where Laura's talk is very instructive: the whole point of implementation science and dissemination science is to take advantage of what happens in dissemination and implementation to try and derive knowledge, not only about how the dissemination and implementation works, but also about what the impact of the intervention is as you do the dissemination and implementation. And we really haven't explored that very thoroughly. And there's expertise and funding within NIH, through the dissemination and implementation program, of which NHGRI is a contributor, to do that sort of work. But isn't the U-PGx trial a cluster randomized trial? And Dan and Mary, you're still having major ethical issues with it, or am I misinformed? I think that the way it works is that every center will decide whether they're going to first be in the arm where all their patients get genotype-directed dosing, and then in the second half of the study switch to the other, or vice versa. And that's the ethical problem.
Well, to me, the ethical problem is that they're randomizing on many genotypes for which I consider the evidence to be solid that there's not equipoise, and that the right thing to do clinically is to change the dose. I mean, I am curious, for this TAILOR trial, is there a DSMB? What does the consent form look like? It's an NHLBI trial. Okay. Of course there's a data safety monitoring committee, and they meet regularly. I mean, how could there be anything close to this level of difference in events and it not be obvious at this point, if they're actually enrolling patients? Do you understand the mindset of a cardiovascular physician better than I do, then? Because frankly, I had thought that what we originally wanted to do was to design a trial that was not randomized at the level of genotyping, and that was absolutely insisted upon by the cardiovascular community. And maybe Terry can give us some insight into the psyche. Yeah, we'll ask Jeff. And does the consent form say that there's equipoise? The consent form says the medical community really doesn't know whether there's a difference in outcomes. At the time that the trial was designed, that was the insistence of the cardiovascular community. So Jeff, maybe you can give us some insight. It was fascinating to me to watch this whole process roll forward. Sorry, but I think you're still struggling with some of the cultural issues that we discussed yesterday about the cardiologists generally favoring level one evidence, particularly when it comes to genetic studies, for reasons that are completely unclear to me. Me too. I mean, I think there's really two things here, right? There's evidence, and then there's adoption of that evidence, and there are many, many examples in cardiovascular disease: 20 years for aspirin post-MI to be widely adopted; beta blockers, 20 years to be widely adopted post-MI. And not until they were JCAHO standards did these things really begin to happen sort of universally.
And so I think sometimes we're confusing these two things, right? That the physicians don't buy into it means that there's not sufficient evidence, right? They really are two separate things. I don't think anybody would argue with the fact that it was the early 1980s that we had the evidence about aspirin post-MI. And so for our physicians not using TPMT to guide thiopurines, or not using CYP2C19 to guide clopidogrel, I think for those two, for sure, we can argue now that there's substantial evidence that supports that you should. The fact that they don't is not surprising, because we know that for many things it takes 20 years for adoption. The fact that they don't, in my mind, wouldn't make it possible or ethical to write a consent form. Right? You would have to say in that consent form to patients, the medical community is slow to adopt science. Right. So I'm in complete agreement with you; I don't think you can randomize those examples, even if in a local center they're not doing it, because I don't think you can really argue the ethical point based on the evidence. Just like I don't think there were aspirin trials, right? People didn't continue randomizing to no aspirin. It just took a long time for it to be adopted. Right. So that's a great example. Practice changed for aspirin, or maybe it still hasn't changed. No, it's guidelines. Okay, and so guidelines influence practice. Sometimes you just keep ringing the same bell over and over and over and over again, and eventually somebody listens and pays attention. But the right way to address that intransigence in the medical community and change their practice might not be to do another randomized trial. There are other things that we have to do, including practice guidelines, which we need more of.
But Mary, I think in fairness to the cardiovascular community, which has been beat up here pretty badly over the past few days, the oncology community astonished me, as a hypertension doc, with TPMT. Because frankly, and I realize I'm a little biased, I was convinced long before the oncologists were willing to accept that kind of genotyping as a help in the clinic. So I think it's only fair to say this is universal in clinical medicine. It's not one discipline over another. And I had never really understood the psyche of an oncologist, who feels that his or her professional competence is to push the patient to the edge of life-threatening toxicity and then swoop in like Superman and save the patient. In hypertension, we avoid adverse drug reactions. That's very different in oncology. It took me a long time to finally figure that out. So I think that the cultural issues are very real, and I think Mary's right: we keep going back again and again, and after a while, and it may take 20 years, we begin to see some acceptance. And if the cardiovascular community insists on a randomized trial, which they did at the time TAILOR began, and they still do actually when I sit down and talk with them, our job is to keep bringing the evidence back again and again. And probably this is not the private purview of any particular discipline in medicine. Medicine is a very conservative activity. And our job is to be out there pushing the boundaries a little bit. Do I share your concerns? Of course we all do, but this may be one of the pieces, and what Julie's doing is another, of actually getting the cardiovascular community to begin to adopt this aspect of science, which is the use of genomics to guide drug therapy. I think the reason it's coming up in the context of this meeting is that one of the objectives that we have written down for this meeting is to talk about whether we can or should try to plan some national clinical trial.
And I guess to me this is the point that some of us are trying to make: maybe there is room for some kind of national clinical trial, but a randomized trial comparing genotyping to no genotyping, and acting on it, for variants that already have the level of evidence that some of these do, would be unethical. And just because it might be effective at bringing over a few more intransigent physicians to change their practice doesn't make it the right thing for us to do or advocate for. So I just wanna make one comment before I go to Rex and then to Mark, and then I see some hands down here. I just wanna make the point that I think Laura, in her talk, really did us a service in distinguishing typical clinical trials from implementation research. So even though all of us are passionately convinced that the evidence for genotyping for clopidogrel is solid, there's still a call for implementation research to say that even though the evidence is clear, we need to study the structural characteristics, the context, the culture, the implementation climate, all of those things that we know will still need to be addressed, because the evidence is not enough to convince and get behavior change. And I think we already know that practice guidelines are critical, and hopefully the TAILOR-PCI trial results will get the American College of Cardiology and American Heart Association practice guidelines to be in place, but it's not sufficient. That still would call for implementation research to address those other factors. And so we still might be designing that study, and not just for clopidogrel, but for subsequent pharmacogenetic variant testing that would follow with sufficient evidence. So, Rex. So I think this really highlights where language is important, and one of the things this discussion has highlighted for me is that we really need to stop talking about evidence gaps in certain of these cases and start talking about implementation gaps.
Because I think the evidence for some of these is absolutely, just crystal clear. And for us to even pretend that, oh, we need to go back and do more work just to convince more people that they should adopt it is fundamentally flawed. I mean, what we really need to be focused on is, why aren't you adopting based on the evidence that is crystal clear? And so I think clarifying the language, to say no, there is no evidence gap in this case, but in these cases there is an adoption gap, is one of the ways we need to start to think about this. Thanks. And since I invoked your name, I'm gonna allow you the floor here. Thank you for summarizing my points. I think you did it more saliently than I did. I'd also say, again, I'm not an expert in implementation science, but I've been reaching out to various people for slides, and the implementation science community was so excited to hear about interest in this area from PGx. In general, I think it's been more the purview of sort of behavioral interventions and quality improvement work, although one person I spoke to said quality improvement and implementation science are essentially the same thing; it just depends on which funder you're talking to. But there was excitement, her words, not mine; there was excitement from that community about PGx work in this space. So the point I was gonna make was that it seems like there's an opportunity, and I'm kind of looking at another Laura here, for an interesting white paper around the ethical aspects of this conundrum that we've been debating here for the last 20 minutes or so. I think that would be something that this group could potentially weigh in on as a product, to hopefully inform the discussion about PGx. I mean, I think there's two points. There's the point that Rex brought up, evidence gap versus implementation gap, which, as Dick mentioned, is, you know, endemic in medicine, and we know that that's the case.
But the idea of saying that we need to go back to randomized controlled trials where there's sufficient evidence has ethical implications that perhaps are not being considered sufficiently as these trials are continually being, you know, redone. But we've got to stop letting people who, for whatever reason, cultural, inertial, whatever they are, are standing in the way of implementing something, we've got to stop letting them say, oh, but that means there's no evidence. It's not that there's no evidence. It's that there are other reasons that they're not implementing. Okay, Jeff was going next, and then Terry, and then... Maybe it's the guideline gap. Bob, Bob, Bob's been... Oh, my God. It's a tsunami. Maybe it's the guideline gap, right? Because as Julie said, you know, aspirin and other drugs have been in the guidelines, and yet it takes this long time horizon to get people, practices, to adopt them; but also, pay-for-performance and quality metrics that include those things have really been a significant impetus for adoption. So we heard from Mary yesterday that a number of professional organizations have aligned with CPIC, but I didn't see the ACC, for example, as one of those organizations. So I guess I would ask, Dick, when you designed the TAILOR trial, did you have a conversation with any of the guideline committees, and did they indicate that if this trial were successful with the endpoints you've created, that that would be sufficient to get them into the guidelines? And I would even extend that to the payer community. Did you have that discussion? I'll just say one more thing. I listened to a talk on the Ubiquitous Pharmacogenomics trial by one of the investigators last week, and I asked them, if this trial is successful, will all the participating nations that are involved then take this up and make it part of their standard of care? And the answer was no. It was not part of their discussion when they designed the trial.
So the author of the position paper for the American Heart Association and the American College of Cardiology is a member of the Data Safety Monitoring Committee for the trial. And there were discussions that if the trial were to be successful, that would be a powerful message with regard to the guidelines. There was some malice aforethought there in terms of trying to be sure that the opinion leaders were involved in the design of the trial and in the monitoring of the trial. I hope that makes sense. Okay, Bob, you're next. So I think, in my opinion, the time gap for adoption that's been talked about is most likely related to a black sheep phenomenon. Nobody, especially in a litigation environment, is really willing to go out there and be that single person at the institution to make a difference. And there's also a cost to the change. So if there's gonna be a randomized clinical trial, maybe it should be around how to turn a black sheep into a champion. So that's my comment. My question, or clarification, is, with respect to the original discussion, does this trial that's ongoing now have an arm that was designed by the FDA that contradicts the black box? That was the whole point of having the FDA involved in the discussions. And the purpose was that, at that time, 99% of the patients were not being genotyped in this country. They were perfectly happy with saying draw a blood sample, put it aside, and wait until after the one-year observation period to do the genotyping. Because that was the standard of care at the time the trial was designed. And probably still is. I think, frankly, it still is in this country. So most people aren't getting genotyped before they go to the cath lab. So what does the black box say? The black box warning said that if patients had that genotype, then an alternative drug should be used.
I don't think we should get into the details of the design of the trial. Clearly this does illustrate, and in this room there's going to be one point of view. I mentioned my feeling about TPMT, which has been a standard test at the Mayo Clinic since 1990. The FDA held public hearings on TPMT genotyping and on the labeling in 2002. Lynne Lennard and I had demonstrated already in 1987 that this was a critically important way to avoid life-threatening adverse responses with thiopurine drugs. I think it's not fair, really, to talk about the medical profession. The FDA took how many years, Mary, before they came around and made this a part of the labeling? And it was a standard test at St. Jude, it was a standard test at Mayo. So I think, frankly, having watched this over decades, we need to be very careful that we not become an echo chamber among ourselves either, and realize what's happening out there in practice. Today in the United States, and I know Julie would agree, 95% of the patients aren't being genotyped. Yeah, yeah, probably more than that. So frankly, what we need to do is find the best way, and we're talking about implementation science, so I'm beginning to move it a little bit. We're never going to change the basic and appropriate conservative nature of physicians, particularly given the pressures they're under in practice, to adopt this new science. So what we need to do is work with what's real-world out there, which is still that 95-plus percent of patients aren't genotyped. And will this piece of evidence help to bring the cardiovascular societies forward so that they change the guidelines? I think it will be one piece of evidence; clearly what Julie's doing is another piece of evidence. And the one thing that you learn after spending decades doing this is that no individual piece of evidence ever changes anything totally.
It's an accumulated series of events that eventually changes the standard of care. Now, that may sound too philosophical, but as a matter of fact, it also happens to be the way the real world works. The vast majority of people in this country aren't genotyped for anything. And pharmacogenomics, as I said yesterday, really is clinical genomics for everybody everywhere. And what we need to do is use this as a wedge, or whatever analogy you want, the camel's nose coming into the tent, because there'll be a lot of other places where genomics should be used and is not being used. So I think you're right, Dick, that we do run the risk of being an echo chamber. And just to take us back a few years, think about an area where there was really strong, compelling, repeated, worldwide observational data showing that hormone replacement therapy after menopause prevented heart disease. Hands down, no question about it. And a trial was designed to test that question, and it went through an incredible gauntlet, including a congressional hearing, to determine whether that was ethical or not. It was done, and it disproved the "fact," the alternative fact, that everybody believed: that it would have a positive impact. And careers have been built on trying to figure out why there was that difference between the observational and the randomized data. Now we can say to ourselves, well, genetic data is different, you're randomized at birth, and there's not an adherence question, and that sort of thing. But I think we did see in Laurie's data that there were differences between the loss-of-function allele carriers who got alternative therapy and the loss-of-function allele carriers who didn't get it. So there could be some subtle biases in there that we need to at least be open to and recognize. Can we as a group agree, when is it important to do a clinical trial of this and when is it unethical?
Rather than sort of, as you say, being an echo of, gee, we are convinced we have enough data and anyone who thinks differently is doing something unethical. Great point. That speaks to Mark's white paper idea. Yeah, okay, great. And Howard? Two things I wanted to mention. One was that there are plenty of centers in the United States that would be more than happy to participate in a randomized trial of a personalized approach versus a standard-of-care approach. They're just not represented in this room. Most of us are at centers where it's unethical, unthinkable to do the trial, but centers not very far from our own would be more than happy, because they're not doing it now. They'd love to know. And so I think there is a way forward; it's just not with quote-unquote academic centers. And I think COAG fell into that problem: COAG was done in a more academic model and may not have had the same issues if it had been done in a community-type approach. Second point is that the thing that has moved the needle faster in oncology than anything else is futility. If you can demonstrate futility, especially if it's associated with dollars, adoption happens very fast. Take KRAS testing, which was one of the first big tests in adult oncology. We'd known about KRAS mutations for years. Then we learned that they were associated with futility of epidermal growth factor receptor antibodies, and insurance companies stopped paying for the drug unless you had done the test. Suddenly testing was done very often, because no one wanted to carry a $20,000 cost in their practice when they might not get paid for it. And so having some sort of futility-slash-Benjamin-related stick, as in the $100 bill, is really important. I know you did because you're a rapper, but I was talking about the others, Dan. But the idea of really going after that, because that'll change that adoption time period that Rex was mentioning. Okay, oh, no, I can see you, Marilyn.
This conversation is making me think about a presentation that I saw by Edward Tufte, who does data visualization work. He talked about the space shuttle Challenger and how the engineers tried for months and months to convince senior leadership that there was a problem with the Challenger, that there was something wrong with the O-rings. And so they kept trying to show the data in different ways. They showed charts and then graphs and then pictures, and they tried lots of things, and they could not convince the leadership that there was a problem. And hence the space shuttle went up and it blew up. And this conversation is making me wonder, so Mary, to your point that we just need to keep going and ringing the bell and ringing the bell: I would challenge us, oh, that was a challenge pun, that's good, I didn't even mean to do that, to think about, every time we ring the bell, are we showing them the evidence in the same way? Or are we taking it to them in different ways? Because I worry a little bit that if we just keep showing them a Kaplan-Meier curve again and again and again, they're starting to gloss over and not look at it, because they've seen it. And so I don't have the solution for what the other ways are to show the evidence, but I think we should think about that. And maybe if we put the information in front of them in different ways, not just the same way every time, maybe that will help. So just something to think about. We need Richard Feynman to dunk the O-ring in ice water. And we have a number of opportunities to look at how we've shown the data in different ways with the natural experiments, for example, that we have from eMERGE-PGx; that's just one example. So I just have a question. We have a number of individuals who have indicated they still have questions, and Terry, we're at time. What would you like to do? Sure, it's your meeting. Okay, great. Our meeting, okay.
Well, so much of this discussion echoes a lot of my own experience as well. Going to the primary care environment, and maybe to someone like John from UnitedHealthcare, to help with testing in a preemptive way, and actually trying to meet the medical necessity: someone has high blood pressure and shortness of breath and could be a real candidate for a PCI procedure. Having that information in the medical record, and I know that primary care physicians and cardiologists do not all share the same EMR, but at least within those systems where they do, it's just like what the FDA black box warning says: if the information is there, people have to act on it. You just can't ignore that it's there. So, although it's much more direct to go to our cardiologists and say, hey, how can we convince you that this is important? And I think it's a great idea to understand culturally why different cardiology practices, and it's not just the practice, it's the individual cardiologists within the practice, may not be taking this up, because there are all different reasons, and there are different reasons every year and every month. But if we go about it by having that information available, then physicians will have to use it. So, is that the implementation research that needs to be done to move this into primary care practice, where you can actually say as a payer, this test is worth covering for $249? It gives me 18 other markers, but I'm only looking at it for this medical necessity, for this potential ACS patient down the road. So, I think there's a couple of answers to that. It's a good question. And just for the record, I've got to be clear with y'all. I'm at Optum, which is part of UnitedHealth Group. I'm not at UHC, so just make sure everyone's aware of that. As I listen to this conversation, let me share a few thoughts. The first is, inside Optum, we have a division called Optum Labs.
Some of you folks may or may not be familiar with it. That's the mechanism by which we provide a lot of academic researchers access to our data assets. And as I mentioned yesterday, a lot of those data assets are substantive. It strikes me, and you folks will be much closer to this than I am, that some of this activity is already going ahead; it's just not going ahead everywhere. And so, given that, given the mass of data that we have, one opportunity is to see if there is leverage in those data sets to help answer this question in a little bit more depth. Or, put simplistically, with a large enough population, you've got a natural study. If you can aggregate the data from everyone and run comparisons, you get some pretty interesting answers. So, I'd throw that out as y'all consider whether this is appropriate or not. There may be another way to solve the problem. To answer the immediate question of implementation, I'm going to wear kind of two hats. The challenge of getting a physician to adopt a behavior is something we have teams of people thinking about every day. And we think about that from both the UHC side of the house as well as the Optum side of the house. I think it warrants further discussion, probably a separate discussion out of this meeting, to answer your question in more depth about how we should think about creative ways to leverage some of our capabilities, on the basis that we believe the evidence is there, to get it more established. So, put simplistically, I would almost want to take your question and spend a couple of hours on it versus give a 30-second response to it now. I would love that. Okay, great. Thanks. Our next question is from Melissa. Melissa, go ahead. Yes, I think it's been a great discussion. I just didn't want to lose the point of what great emphasis physicians place on whether something will be covered or not.
And we've talked around a lot of, there are multiple reasons, but I think that's one of the reasons at the top of the list, and surveys have shown that. And we also can't underestimate sort of the unethical lab practices that have been out there with balance billing. So it's really made physicians gun-shy: between having to go through the prior authorization process with payers, and then patients being dissatisfied on the back end, coming back to the physician and saying, I thought this was gonna be paid for. So overall, I think it's made physicians very jumpy about ordering pharmacogenomic testing, even if there is evidence for efficacy with a single drug. And so between clinical guidelines and payment practices, I think those are the two biggest implementation pieces that need to be looked at. The other piece I think is important, especially among physicians that serve minority communities in which there has not been adequate testing, in terms of inclusion of diverse populations in looking at some of these questions: I think there's also a large degree of skepticism among those individuals, because they don't see that the results that have been obtained necessarily apply to their patient population. So I think, again, along the lines of the implementation science, those are a lot of questions that still remain to be examined and looked at. Great comment, Melissa. Anyone else have questions? We've got a few more minutes. Apparently one of our other panel discussions is gonna go a little shorter. So as long as your bladders hold out, we can stay here for another five minutes. Sure, Mary. I was gonna amplify on the points that Howard made: in the cancer community, there are many, many examples where they've taken up testing based on a single study, sometimes not even based on a single study.
So the cancer community took up proton beam therapy before there was a single randomized trial showing that it was beneficial, even though it was an incredible outlay of capital and resources. It was immediately covered by third-party payers for some miraculous reason, despite the lack of any evidence. So this has come up at one of our previous GM meetings, I don't remember which one: maybe more attention to the various reasons why things sometimes do get covered would help to address the point that was just made about the coverage decision being an impediment. But many laboratory tests, especially in pediatric oncology, are taken up not because they're going to save money, but because they're going to save lives. Detecting minimal residual disease in leukemia patients is something that started as a research test, in research labs, for 20 years. It was covered off of grants. NIH finally decided, we're not gonna pay for this anymore; you guys figure out how to do this in the clinic. And it's happening. It's messy, but clinicians see that it's helpful in directing therapy and saving a few lives. Probably not very cost-effective, but we don't really care, because it's saving children's lives. So there are other things that go into decisions about how laboratory tests get adopted, and there are certainly other things that we can advocate for besides just saving money. Thanks, Mary. Anyone else wanna weigh in? I guess a couple of quick things. First, I think we've talked about a lot of the problems, but let's make sure we don't lose sight of the fact that, to NIH's credit, we are now in a situation where we do have a whole bunch of genetic data that a lot of us feel we should be acting on, and that's due in part to a lot of stuff that NIH has funded. So let's not get too lost in the negatives here, and think about that too.
But the other thing, thinking about, if we did a prospective trial, and I agree with what Laura and Pat are saying about the implementation piece, that it's really two separate questions: maybe if we go at it from an implementation standpoint, we genotype some people who were not originally gonna be genotyped, because they're in the 95% of the population that was not originally gonna be genotyped. And then, after we genotype them, go back to their physicians and say, why did you not genotype them? So we have real evidence, rather than sitting and speculating about whether it was because it was gonna cost too much, or because, for specific drugs or in general, they didn't believe in the evidence, or whatever. We'd actually accumulate the hard data that says, this is the reason why it wasn't implemented. And then we can try and address it that way. It's an implementation science trial for the drugs where we think there is enough evidence, clopidogrel, TPMT, those ones. It's an implementation thing, but in the same way it may generate more evidence for some of the other drugs. If I could just follow up, and my understanding of implementation science is imperfect at best, but my understanding is that one of the questions one asks there is how to implement something, what's the best way. And we'd wanna be careful that we weren't designing a trial where we were basically trying to have a really bad implementation so that we could test the question we really wanna test but feel is unethical. So I'm not sure how to get at that. So just to clarify, I think implementation science rarely comes up with the best way to do something, but the two metrics that Laura alluded to are, first, what is the fidelity of the implementation?
In other words, how close do you get to what it is you're trying to get to? And second, how much customization is required to do that implementation at a local site? The preferred strategies are ones that have high fidelity and minimize the amount of customization, but there's no elimination of accounting for local factors. It's that learning about the differences, which include many of the things that Todd was referring to in terms of practitioner attitudes and leadership and stakeholder perspectives, that have to be included as part of the environmental assessment of the setting in which the implementation is taking place; that's the preparatory work that we didn't have the opportunity to do in as much depth in eMERGE-PGx as we wanted to. And Laurie, you can probably comment on this, but my understanding is the IGNITE network did use the Consolidated Framework for Implementation Research, and there are even some tools in the toolbox, a standardized assessment based on CFIR. So this isn't my area of expertise and I didn't work much on this, but there are some surveys that were done for both patients and providers.