 Great. Thank you. And thank you so much to the organizers for asking me to be here today. Is this better? I could hold it. How's that? Alright, I will hold it. Okay. Thank you for inviting me to be here today. I am from the Preventive Medicine Department at Northwestern, but today I'm representing the eMERGE PGx Working Group, and I would like to acknowledge that many people in that working group are in this room, and many other people who aren't here have been tremendously helpful, particularly Dan Roden and Josh Denny, who are both former co-workgroup leads with me, and Cindy Prows, who isn't here today, is my current workgroup lead. So I want to quickly describe eMERGE PGx. It's been mentioned a couple of times today, but I just want to give the minimum amount of information so we can all understand what we're talking about, detail the outcomes that we've worked on to date, highlight areas where we've encountered some challenges, and then talk about future directions for eMERGE PGx outcomes and other outcomes projects, with an eye on promoting implementation. So the eMERGE Network: this is a map showing all the current and former participants in the eMERGE Network. eMERGE PGx was funded as a supplement during eMERGE 2; we're currently in eMERGE 3. And the supplement came about because a bunch of fortuitous things came together. First of all, the PGRN came out with the PGRN-Seq platform; this is the paper describing the platform here at the bottom. It's a targeted-capture sequencing panel for pharmacogenetic research, covering 84 pharmacogenes on which you can generate next-generation sequence data. The PGRN came together to develop that panel, and it became available. At the same time, there was supplement money available for extra sequencing. And we said, what can we do in eMERGE to promote both discovery and implementation using this nice resource? So we came up with eMERGE PGx, which had three aims. 
So the first aim was to recruit people who we thought were at high likelihood of being prescribed a drug related to one of our pharmacogenes of interest. And how we determined they were at high likelihood varied very much between the different eMERGE sites. Each of the people recruited was then sequenced on PGRN-Seq. Then, as I said, we had an implementation aim; this was our second aim. And at that time, very few people were running next-generation sequencing platforms in a CLIA environment, so we had to generate validation genotyping for the genotypes that we wanted to put in the electronic health record. So we performed clinical variant validation. Then we put these results into the electronic health record with associated clinical decision support; each site developed clinical decision support for a certain number of gene-drug pairs. And then we had associated patient and clinician education. Now, the CDS handled some of that education, but as we've discussed previously, the CDS isn't usually entirely sufficient, so we also had dedicated efforts for patient and clinician education. And then, for our third specific aim, we populated a variant and phenotype data repository to promote discovery efforts using PGRN-Seq. So pretty early on in the process, we decided we needed to publish a design paper, and we did this for two reasons. First of all, the implementation of PGRN-Seq by necessity had to vary a great deal between the different eMERGE sites, so a lot of what this paper is, is tables describing the different ways in which we had to implement eMERGE PGx at the different sites. And this is just one example of ways we've tried to summarize some of these implementation differences: different sites implemented different drug-gene pairs, some sites were pediatric, and some were primarily adult. Again, if you read the paper, there are lots of tables that summarize this. 
The other thing we wanted to do in the paper was lay out a roadmap for the outcomes we were intending to measure. The first outcome we said we would achieve was recruiting 9,000 people. And we did: we recruited 9,015 people. This is a breakdown of the number of people recruited by eMERGE site. I also wanted to highlight here the number of people recruited from different racial and ethnic groups. We did have a fairly large number of African Americans, over a thousand, but similar to many previous studies, the study population was predominantly Caucasian. Okay. So what were some of the outcomes? I did the somewhat painful exercise of going back and re-reading the paper to see what we said we would do, and I thought I would tell you what we said we were going to do. We have actually done a lot of it, which was nice to see as I did that. So we said we would sequence 9,000-plus people on these 84 pharmacogenes and document variation. Marilyn mentioned this paper yesterday; it was on the first half of the eMERGE PGx samples, and there's another paper in progress that will document the second half. HLA is sequenced on this platform, and CYP2D6 is as well. Obviously those pose particular challenges, but there are workgroups investigating them, and I think there will be papers coming out about variation in those areas as well. We said we'd create a searchable variant repository. And we did that: this is SPHINX. I would urge you to go to the website and check it out. It has a public-facing and a private-facing side. It summarizes a lot of the sociodemographic variation; you can look at different gene variant frequencies, and you can look at the number of people who have a certain ICD-9 code. We don't have individual-level data in it because, considering we're using electronic health records, that could potentially violate patient privacy. But there's a lot of population-level data here. 
So what else did we say we'd do? This is a sentence from the paper. We said we'd talk about recruitment and sequencing; we've done that. We said we would summarize genotype validation; again, we did both PGRN-Seq and an orthogonal platform, and that's in press. We talked about provider education; that's in press. Patient education is working its way through. We talked about the complexities of EHR integration for the CDS. And, Mark, I apologize: this wasn't entirely a PGx paper, but elements of it did include things from the PGx experience, so I've borrowed it here. We also said we'd summarize actionable rare variation. Six of the ACMG genes were on this platform, and one paper has already been published focusing specifically on the arrhythmia genes; there's another one in development looking at all six ACMG genes. So that's what we've done so far, and we actually accomplished many of the goals we set out in that paper. But the bigger question is, what's missing? I think the first very obvious thing that's missing is cost. Actually, when I got invited to speak here today, I got an email asking, will you talk about cost-effectiveness analyses in eMERGE PGx? And I emailed back and said, I would love to, but I can't, because there haven't been any. There have been no systematic network-wide efforts at cost-effectiveness analysis in PGx. There may have been a few minor efforts at individual sites, but nothing that I have seen has really come to publication or moved very far ahead. Frankly, the other thing that's missing is any real post-implementation assessment. Other than the genotype validation paper, most of what we published was about what we did to get PGx up and running; we haven't published very much about what we've done since it's been going. And I'd say the plan at baseline was that the outcomes would largely be individual-level and would be assessed through the EHR. 
And we do have several projects in progress. We call these our PGx phenotypes, things like [unintelligible] and malignant hyperthermia. People are working on these, but as we've mentioned, when we're doing pharmacogenomics, the number of people who have relevant genotypes is often pretty small. And then when you take a general population like the PGx cohort, not everyone is prescribed the drugs of interest, so you start getting to really small sample sizes very quickly. So we just haven't had a lot of power to do many of these analyses, and in a lot of cases we're waiting, hoping more samples will accrue. But the other thing we haven't done is really summarize the challenges involved in getting eMERGE PGx up and running. And honestly, they were considerable. So what I wanted to talk about for the rest of my time today is this: eMERGE PGx was built in a highly heterogeneous way, and in many ways I think that has inhibited our ability to talk about outcomes to date. How do we account for this, but also really capitalize on it, in future research? And I thought it would be helpful to present a specific example of ongoing work. There is a PhD student at Northwestern, Tim Herr, in biomedical informatics. He's really interested in CDS implementation and what goes on behind the scenes in clinical decision support: what are clinicians doing, and what happens after they click? He really wanted to do a project around that. So he proposed a network-wide project where he asked, how did you implement CDS, and how well did it work? He did informal interviews with different people, and he found the right people at each site who really knew a lot about the CDS and its back end. He started with informal interviews, then moved on to a formal questionnaire, and it was a highly detailed formal questionnaire. 
He needed a lot of technical expertise in the EHR and in the CDS to understand all this. Then he tried to do an analysis where he aggregated the responses and identified trends, and he presented this at AMIA. I put these numbers up here because Tim did a ton of work to generate them, but when he went to present them, he said it's just really, really hard to come to any conclusions based on these, because the sites were so different. What "ignore" meant at one site might essentially be the same as "override" at another site. At some sites, the CDS alert wasn't the first time clinicians were seeing the information; maybe they'd already received it as a test result. So what does it mean if they ignored it, if they'd already had a conversation with the... So trying to analyze this across sites was becoming incredibly, incredibly difficult. So Tim has rewritten the paper and is about to circulate it, and I've quoted some things he said that I think are really true. There's significant variation in how these alerts are designed, so they create a real barrier to trying to analyze the target physician response. But instead, on a positive note, what we've done is create a series of natural experiments with a variety of alert-design and DGI choices (DGI here means drug-gene interaction; I apologize for the acronym). So a single-site study could compare physician response across different DGIs within similar technical infrastructure: maybe we can compare how people respond to a simvastatin alert versus a clopidogrel alert, but in the same EHR system where the alerts were designed essentially the same way. Or maybe multi-site studies could focus on closely targeted analyses of specific DGIs where design choices permit meaningful comparisons: maybe we could look at clopidogrel implemented at one site versus clopidogrel implemented at another. And this is where I started talking about implementation science yesterday. 
I feel like if I had read a little bit more of the implementation science literature before Tim started working on this project, I could probably have alerted him to a lot of these pitfalls before he did some of this work. So I just wanted to talk briefly about implementation science. I am in no way an implementation science expert, but this slide has been really helpful to me in thinking about it, and there are a couple of things I just wanted to point out. The traditional unit of analysis and randomization in implementation science is not the individual; it's the clinic, the team, the facility, the school. So in eMERGE PGx, I think the unit of analysis we should consider for some things is probably the site, not the individual. And again, another thing that I think is helpful is looking at the outcomes they discuss: adoption, adherence, fidelity, level of implementation. These are things we heard come up over and over again when we were talking about challenges in implementing eMERGE PGx, and we weren't necessarily capturing them in a consistent and cohesive way. And I think we had a good reason for that. When I dug a little further, I found a paper from Implementation Science published in 2015, again listing some of these important implementation science outcomes. Their conclusion was that the instrumentation is underdeveloped: there are many different instruments available, but most of them are not validated and there's very little consistency; each group is sort of creating its own. And this is a real problem, right? If we want to do this, we want to do it well, and we need well-validated instruments to assess these outcomes. Another thing I want to bring up, which I've seen come up a lot in the implementation literature, is the notion of hybrid designs. So we have effectiveness research, and that's, again, where we're looking much more at the individual level. 
And we have implementation research, which is where we're looking more at the site level. And multiple people have proposed that we should really be thinking about these hybrid designs; the difference between the three hybrid types is really just the balance between the emphasis on the individual and the emphasis on, say, the site. And I think as we're moving forward with different implementation studies, and with studies we want to use to help other people implement pharmacogenomics, we need to be thinking more about these hybrid designs. So I just wanted to leave you with a couple of lessons that I think we learned from doing the eMERGE PGx study. The first, and this is something I have now stumbled upon a fair amount in the implementation literature, is that it's really important to document where you started, particularly in a multi-site study. And this came up for us when we went to publish the provider education study. We were describing the different educational techniques we used at the different sites to educate providers, and a reviewer very appropriately asked: well, where did you start from? Had you ever introduced the concept of pharmacogenomics to anyone at your institution before? Or were you implementing several other similar programs, so people were already very conversant in this idea? The reviewer was exactly right, of course: how much education you need to do depends on where you're starting from. So this is a very small table, and I'm not assuming that you're able to read it, but it's something Cindy Prows put together in response to that criticism for the provider education paper. It went back in as a revision and it's now accepted. But in eMERGE PGx, we really needed to document where we started from, and the reality is that across the eMERGE PGx sites, we started from really different places. 
Again, I doubt you can read this, but Cincinnati had, to some extent, an existing PGx program for eight years, and the same was true at Mayo Clinic, whereas CHOP said that their model had really only been implemented for one year. And some of these sites really had no previous program. So it's, of course, not surprising that they needed to use very different provider education models to get everyone up to speed. So just some other lessons learned, and these are entirely my own ideas; I didn't have time to run them by the eMERGE PGx working group. I think one of the important lessons learned is that you need to plan to capture outcomes in advance, and we didn't really have that luxury in eMERGE PGx. We had this really nice situation where we had this extra sequencing money, but at the same time, we had to start recruiting people immediately. So we didn't always have a lot of time to think about exactly how we wanted to set things up for analysis in advance, and obviously, that planning helps. But more than just thinking about outcomes in advance in general, I think you need to decide: are we focused on clinical effectiveness, on implementation, or on both? I think there's a really strong argument to focus on both and to plan to collect outcomes in both areas. If you're going to do that, though, and this is a need that is not unique to PGx but spans implementation science, we're going to need some better, validated ways of measuring implementation outcomes. And finally, I think a challenge we have as PGx researchers is that we need to get better at learning how to share implementation challenges. Part of the reason I like implementation science is that it gives us a nice framework for doing this. But lots of us are grounded in clinical trials and cohort studies, where fidelity is king: you have to adhere to the protocol, and if you don't adhere to the protocol, you are punished in many cases. 
You don't get to continue in the trial. You don't get to enroll as many people. But when you're doing an implementation study, you have to make adaptations to make the intervention work in your local environment. And I think we need to be better about being very open about that, talking about it, and using it as a learning experience rather than something you're sort of ashamed of because you couldn't get X, Y, and Z to work at your institution. And again, I just wanted to acknowledge the members of the working group, Cindy, Dan, and Josh, and anyone else here who's been helping with PGx along the way. Yes. Go ahead. So I can't overstate the importance that this project had for the installation of a whole series of things at the Mayo Clinic that you haven't captured there and that I think you need to include. Number one, John Black at our place did the sequencing in a CLIA environment in his Department of Laboratory Medicine. This project acted as the proximate stimulus for what I think is the largest academic reference lab in the country to really go whole hog into next-gen sequencing. It was a test bed for that, and I think that needs to be something that you take credit for. Number two, it stimulated our educational efforts for both providers and patients, et cetera, which are now quite mature, but this was the proximate stimulus that kicked those off. And number three, it was clear to us that the 1,013 samples (John Black keeps reminding me it wasn't just 1,000 at Mayo, it was 1,013) were clearly inadequate, which is why we're moving on and taking many more; ours were samples from the Biobank. So I remember that there was some concern that at each site this really wasn't an adequate number, and we were underpowered, Terry, but the fact of the matter is it was a very strong stimulus, and somehow or another I'd hope that when you move forward with saying what the project did, that's reflected; it certainly was at our place. 
It was an extremely important step toward much broader implementation. So congratulations, and I hope that can be reflected in what you end up saying. Great. Thank you for that clarifying addition, and that was an excellent presentation. I think while Steve's coming up to get his slides loaded, we have time for one more question. Yes, go ahead, Sandy. So, really, really neat work. A question that I'm wondering about: when you were implementing, about how long did it take, on average, for the sites to set up a clinical decision support rule? Assuming these are event-based, intervening when a drug is ordered and it appears that something different should be done. So I'm going to take the cop-out answer and say it varied a lot. And Mark, chime in, because the EHRI working group was really involved in that. I mean, for some groups I feel like it was less than six months, but for others more than two years. Yeah, so the EHR paper that Laura referenced does have some of that information in it. And it's also worth pointing out that in this current version of eMERGE, along with CSER, we're in the process of trying to really cost out what it actually costs to stand up a CDS rule across different organizations. So we're very interested in quantitating that type of information, because I think our IT staff are very interested in knowing how much of my life is going to get sucked into this cool science project that you guys are doing. Thank you, Laura. Our next speaker is Steve Leeder from Children's Mercy in Kansas City, and he's going to speak to us about the GOLDILOKs Project.