So I'm going to cover both the efforts at Vanderbilt and then talk about the emerging, pun intended I guess, efforts across the eMERGE Network to bring sequencing into pharmacogenetic prescribing. The one serves in some ways as a model, using a genotyping platform, for what we're going to do in eMERGE. I think this picture is a favorite at Vanderbilt that Dan has taught us all to use, and it's probably familiar to many of you. It ran in the New Yorker in 2000, and we like pointing out the complexity of it: you hand your genetic sequence to your pharmacist, who looks a little bewildered amongst all those medications, and how do you actually translate this into practice? So we start with a lot of rich biomedical research. As we've talked through this meeting, I think we all recognize the importance of information technology on both the research side and the clinical side. Then we have to think about new models of the healthcare system, learning models of the healthcare system, as was talked about yesterday and again today, and think about the healthcare system both in a discovery fashion and as a closed loop, because we're going to find many, many rare variants, and the system has to be able to adapt and change quickly. That's another area that highlights the importance of information technology as a driver for delivering this evidence. This is another shot of the article last year in Nature talking about where we're headed, and I think we see PREDICT and eMERGE-PGx sitting at the far end of that curve, moving toward implementation and using implementation to drive new discovery as well. We've talked about the FDA's effort to integrate pharmacogenetic information into drug labels, and I just want to highlight that there are now 83 medications with germline variants listed in FDA labels.
And this number has been increasing since the list started in 2007. We looked at it in detail about a year and a half ago, when there were 57 medications. We took those 57 medications with germline variants and looked at how many patients in a medical-home population, meaning patients who get recurrent care in an outpatient clinic at Vanderbilt, would be exposed to one of those medications over five years. We found that 65% of those patients received at least one of those medications within five years; incidentally, one patient received 18 different medications in that period, and about 15% received four medications or more. So this is not an uncommon problem as we begin to have dense genotype or sequence information available in the record. Why are both of these programs looking prospectively? I think we all know that this information is most valuable before the prescription event if possible. Not only does it lead to the right prescription the first time, but adverse events more commonly occur near the initiation of a medication. That's not true for all medications, clearly, but for many of them the risk is greatest when you start. So this is one of our screenshots for PREDICT. PREDICT is a local effort we've been working on for the last two years. We started genotyping in September of 2010 using the Illumina ADME panel, and our goal is primarily to prospectively identify patients, genotype them on ADME platforms, and embed decision support in the clinical record. Another component is reactive, or just-in-time, genotyping in certain patient populations where it's easy to do. When you know a patient is going to get a joint replacement, for instance, you have plenty of time to do the genotyping before they get warfarin. We also do it when patients come into the cath lab.
An important part of this is the workflow to actually get clinical buy-in, and we've talked a lot about this. Before we unveiled the test, we started talking to cardiologists, since the first thing we went live with was clopidogrel, and we worked with the Pharmacy and Therapeutics Committee, even convening a special subcommittee of the P&T committee to work through these processes with us. After we unveiled the program, we ran a lot of focus groups to get clinicians' reactions, not only on whether they think genotyping is right or whether we should be doing it, but on how you physically or electronically implement this in the workflow to make it as smooth as possible and make them aware of what we're doing. So if you look at the little over 10,000 patients we've genotyped so far, who has been genotyped and why? We have about 400,000 unique patients that have come through the system, and about 90,000 visits in our targeted clinics, mainly primary care, cardiology, nephrology, and vascular surgery. Amongst those, 24,000 patients were flagged for prospective testing, and 5,000 of those received the test. There's some implementation lag in there, and not everyone orders a test that's recommended; some of that is just workflow issues. About 5,000 more have been tested for a reactive indication; initially we started testing in the cath lab and for joint replacements, things like that. So that's a total of about 10,000, and you can see the chances that an advisor would fire. About 22% of the people prescribed clopidogrel, if they had genotyping, would receive the advisor based on being a poor or intermediate metabolizer, and about a quarter of the simvastatin users. The warfarin advisor fires in everyone, really, regardless of whether they have genotyping, because we fall back to clinical recommendations if they don't have genetic information.
You would rarely see it fire with a thiopurine, and that's obviously a rarer medication that we don't try to predict exposure for. So, identifying people prospectively: we use an algorithm based on demographics as well as clinical diagnoses to predict exposure to warfarin, clopidogrel, or simvastatin within the next three years. We trained it on a medical-home population, looking for first exposure to the medication. It's generated from easily available billing and demographic data so that it would be fast and easy to implement within different EHR systems, and in fact pretty much all the eMERGE sites have done something like this as they look forward to eMERGE-PGx. The model for what we do with genetic information, which has been alluded to in other discussions, is that we get those 184 variants off the ADME platform. We know which variants don't perform well, and we drop them. We put all of the variants that do work in a database, and only a select few of those, for which we have validated results, existing clinical decision support in the medical record, and of course review by the Pharmacy and Therapeutics committee, enter the medical record. The rest sit in that repository, and as we implement new drug-genome interaction advisors, new decision support that has been validated and has passed the required level of evidence, those results go back into the EMR. So this is what it looks like, and I realize it's very small. This is a patient; we have a section on the face sheet for drug-genome interactions, and this patient has variants for warfarin, simvastatin, and thiopurines that would change their risk. The patient happens to be on warfarin and amiodarone, which affects warfarin dose. They're also on simvastatin, at a low dose, fortunately. They are not being prescribed azathioprine or mercaptopurine at this point, but of course could later develop that exposure.
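The exposure-prediction step described above can be sketched very roughly in code. This is a hypothetical illustration, not the actual PREDICT model: the feature names, weights, and threshold below are made up, only to show how demographic and billing-code flags can feed a logistic score that flags patients for pre-emptive genotyping.

```python
import math

# Illustrative weights, NOT the real PREDICT coefficients. Features are
# binary flags derived from demographics and ICD billing codes.
WEIGHTS = {
    "intercept": -3.0,
    "age_over_60": 1.2,        # demographic feature
    "dx_atrial_fib": 2.0,      # billing-code features
    "dx_hyperlipidemia": 1.5,
    "dx_cad": 1.1,
}

def exposure_probability(patient):
    """Estimated probability of exposure to a target drug within three years."""
    z = WEIGHTS["intercept"]
    for feature, weight in WEIGHTS.items():
        if feature != "intercept" and patient.get(feature):
            z += weight
    return 1.0 / (1.0 + math.exp(-z))  # logistic link

def flag_for_genotyping(patient, threshold=0.3):
    """Flag the patient for pre-emptive genotyping above a chosen cutoff."""
    return exposure_probability(patient) >= threshold

# An older patient with atrial fibrillation is likely to see warfarin.
print(flag_for_genotyping({"age_over_60": True, "dx_atrial_fib": True}))  # True
```

The appeal of this design, as noted in the talk, is that billing and demographic data are available in essentially every EHR, so the same scoring idea ports across sites.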
If you look at the current 10,500 patients, 2.7% are at high risk on clopidogrel as poor metabolizers, and another 21% are intermediate metabolizers. The simvastatin rate is a little higher: about 28% are at some risk of myopathy due to one or two copies of SLCO1B1*5. And if you look at the four drug-genome interactions we have available now and the incidence of having a variant in any of them, most of the population is actually not normal once you add them all up. That goes back to what we said before: with multiple exposures, you get a greater chance to actually use the information, especially when it's already in the medical record. About half have one variant, another quarter have two variants, and a very small percentage of people have four variants. Only 17% have no variants. This is what the decision support looks like for clopidogrel. If you try to order it and the patient has genetic information, it pops up and advises you to prescribe prasugrel; this screenshot is actually outdated, since you could now also prescribe ticagrelor. If you don't follow it, it asks why you're not following the advice. Initially we launched this just for poor metabolizers; about six months later we rolled it out for intermediate metabolizers as well, based on changes in the evidence, and we've since made adjustments, for instance on whether to include higher-dose clopidogrel as an alternative. The reason I mention those changes is to emphasize how rapidly this sort of decision support evolves compared to, say, dosing for kidney failure; those recommendations don't change as fast. These are our data amongst patients for whom clopidogrel was prescribed, or an attempt was made to prescribe it, with genetic information available.
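The point that "most of the population is not normal once you add them up" follows from simple multiplication. The carrier frequencies below are illustrative approximations, not the exact PREDICT figures, and the independence assumption is a simplification, but the arithmetic shows why only a small minority carries no actionable variant at all.

```python
from math import prod

# Illustrative per-drug carrier frequencies (fraction of patients with at
# least one actionable variant for that drug-gene interaction). These are
# rough, made-up-for-illustration numbers, not the reported PREDICT data.
CARRIER_FREQ = {
    "clopidogrel (CYP2C19)": 0.24,
    "simvastatin (SLCO1B1)": 0.28,
    "warfarin (CYP2C9/VKORC1)": 0.60,
    "thiopurines (TPMT)": 0.10,
}

def fraction_with_no_variant(freqs):
    # Assuming independence across genes, the chance of carrying no
    # actionable variant is the product of the per-gene complements.
    return prod(1 - f for f in freqs.values())

p_none = fraction_with_no_variant(CARRIER_FREQ)
print(f"{p_none:.0%} of patients carry no actionable variant")  # roughly 20%
```

Even with modest per-gene frequencies, the "no variant anywhere" fraction shrinks quickly as more drug-gene pairs are added, which is the argument for panel-based pre-emptive testing.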
On the x-axis we have poor, intermediate, and then normal metabolizers, and you can see that the vast majority of normal metabolizers are getting clopidogrel, in blue. Of the poor metabolizers, actually about 60% get switched on manual review, and a fair fraction of the intermediate metabolizers are getting switched to prasugrel. We're really not seeing much use of ticagrelor yet; I imagine that will increase over time. So we can see it is making a difference in prescribing habits. It's not 100%, but we haven't excluded from these numbers people who have had a stroke or who are over 75, which are relative contraindications to prasugrel. This is what the warfarin advisor looks like. Based on discussion with physicians, we do show the information used to calculate the dose recommendation. We give the weekly dose recommendation and a daily dose, with a link that explains how to break that up into tablets per day, and we try to recommend a single tablet at a single dose every day, as opposed to starting someone who has never taken warfarin on a complex regimen off the bat. This doesn't fire for people we know have received warfarin before. Looking at the first week of this, which just launched at the end of last year: 31 new inpatient starts. Seven of those 31 patients had been genotyped for another reason in the past, and two of them had a dose difference. What I found really interesting, remember that this fires whether or not you have genetic information in the chart: only six of the 31 patients were actually given the sort of industry-standard five milligrams a day by the prescriber. So most people were using the guidance to tailor their therapy. They may not have done exactly what it said, but they anchored in one direction or the other based on it. Here's one example of someone who didn't follow it: the system recommended nine milligrams.
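The tablet-breakdown step the advisor links to can be sketched as follows. The tablet strengths listed are standard commercial warfarin strengths, and the preference for one uniform daily dose mirrors what is described in the talk, but the selection logic here is a simplified assumption, not the advisor's actual rule.

```python
# Available warfarin tablet strengths in mg (standard commercial strengths).
TABLET_MG = [1.0, 2.0, 2.5, 3.0, 4.0, 5.0, 6.0, 7.5, 10.0]

def daily_regimen(weekly_mg):
    """Convert a recommended weekly dose into a simple same-dose-every-day
    regimen, picking the single tablet strength closest to weekly/7."""
    target_daily = weekly_mg / 7.0
    best = min(TABLET_MG, key=lambda t: abs(t - target_daily))
    return {"daily_mg": best, "weekly_mg": best * 7}

print(daily_regimen(35.0))  # {'daily_mg': 5.0, 'weekly_mg': 35.0}
```

A real advisor would also handle weekly targets that don't divide evenly by seven (alternating-day regimens), but the talk's point is exactly that a new warfarin patient is better served by the simplest regimen that gets close.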
They were prescribed one milligram, anchored in completely the opposite direction from what we recommended. You can see in red their change in dose over time, and in blue their change in INR. Eventually, about a month and a half later, they got to what we recommended, nine milligrams a day, and reached the target INR. So now let's talk about eMERGE and eMERGE-PGx. eMERGE is a network familiar to many of you, and obviously many people in this room are from eMERGE. The nine or ten sites are listed here; the green ones are adult sites, the blue are pediatric sites, and you heard from John just a minute ago. One of the goals of eMERGE II is not just discovery but actually implementing findings into practice: integrating results and other data with the EHR, along with decision support. The goal of eMERGE-PGx is to use sequence data, embed it in the medical record, and build decision support around it. It is a collaboration between the PGRN and eMERGE. The PGRN has developed the PGRN-Seq platform, which I believe has been talked about here before, and I'll say more about it in a second. There are other PGRN efforts as well, such as the CPIC guidelines we've talked about and the Translational Pharmacogenomics Project, which looks at putting data into practice, plus the platform itself. eMERGE brings expertise in finding phenotypes in the medical record, a strong informatics component for integrating this information into the medical record, and experience developing decision support. I don't think I need to tell anyone here about the importance of rare variation, so I'll skip this. These are the aims of the eMERGE-PGx project. Aim one is to identify target patients for whom we think we can make a difference in the future, looking at important pharmacogenes and actionable variants, using things like the CPIC guidelines to inform what counts as actionable.
Aim two is putting those actionable variants, once they're validated and we have validation methods, into the chart with decision support around them, and looking at outcomes around performance metrics, process measures, attitudes, and impact. Aim three is a discovery aim: we collect the sequence data, we'll have a rich EHR record on these patients, and can we start to use that to learn new things? The PGRN-Seq platform covers 84 pharmacogenes. It was designed through the PGRN, across 14 sites with multiple rounds of validation, and has been well tested. It's available through NimbleGen as a custom capture array that includes these 84 genes and flanking regions, and it can be ordered beyond just eMERGE-PGx. This is something I don't know as much about, so I would refer you to its creators, Steve Schurer and Debbie Nickerson, if you're interested in being involved. One of its unique aspects is that it gives very good capture of those 84 genes. This is the mean read depth per individual; on the axis here we're looking at roughly 400 to 600 per individual, measured on HapMap trios, and then the mean read depth per gene across these genes. I've highlighted the genes of interest: we have reads in the multiple hundreds for really all of the genes we're targeting initially in eMERGE-PGx. There are some challenges. It did very well compared to the ADME platform, concordant in 88 of 95 samples across 150 sites, but the problem areas, as with many of these platforms, are CYP2D6 and the HLA variants, which is nothing new to most of you. I keep coming in and out, don't I? Yeah. All right. And you do have a lot of competition, so I don't know what's going on next week. Yeah, they're having a lot more fun next week. Yeah, they are. Could they be having more fun? Could you step up your game a bit? Right, right, OK. Food's coming, right? It's all I can offer. So look at that.
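The per-gene coverage summary on the slide boils down to averaging per-base read depths over each gene's target region. A minimal sketch, with made-up depth numbers standing in for real capture data:

```python
# Per-base read depths within each gene's target region. The gene names
# are real PGRN-Seq targets; the depth values are fabricated examples.
coverage = {
    "CYP2C19": [412, 455, 438, 470],
    "CYP2C9":  [390, 405, 421],
    "VKORC1":  [510, 495, 530],
}

# Mean read depth per gene, the statistic plotted on the slide.
mean_depth = {gene: sum(depths) / len(depths) for gene, depths in coverage.items()}

for gene, depth in sorted(mean_depth.items()):
    print(f"{gene}: {depth:.0f}x")
```

In practice these depths come from aligned BAM files over the capture intervals; the point of the slide is simply that the targeted genes all sit in the multiple hundreds of reads, well above what variant calling requires.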
These are the candidates we've identified through eMERGE so far; these are the primary ones, and some individual sites are doing other things as well. They're the same drugs we looked at in PREDICT, and they all have CPIC guidelines behind them, perform well on PGRN-Seq, and have easy orthogonal validation methods. We plan to genotype a total of around 7,000 to 8,000 people across all of the eMERGE sites; again, this will include both children and adults, and you can see how the distribution plays out across the sites. In terms of how we're going to store things and where we're going to put them after we sequence, there will be validation, and only validated results will go into the EMR. Every eMERGE site is going through this sort of local buy-in process now: working with the equivalents of pharmacy and therapeutics committees, working with providers to figure out what they want to explore, how they would build decision support and integrate with the EHR, and what the workflow processes are for identifying these individuals and sequencing them. Of note, this is a consented study, and we plan to go back and survey providers and patients and look at outcomes. The sequencing and validation are being done in different places under different models. Several of the sites are doing their own genotyping: Mount Sinai, CHOP, Geisinger, Mayo. Several of the sites are using UW, and some are using CIDR. There are different validation platforms as well: ADME, TaqMan, Sequenom, Sanger. So we have different ways of getting there, but everybody is going after sequencing and validation for the variants they're placing in the medical record. Everyone is doing the three drug-genome interactions I talked about before, and some sites are adding others, like carbamazepine and the thiopurines.
There are also some efforts around CYP2D6 and codeine using local validation, which will provide some evidence on the performance of PGRN-Seq there. Everyone is looking at some sort of predictive evaluation or identification of patients, so that we get sequence information, and then specific called variants, into the chart before, hopefully, the patient is exposed to these target medications; then we can evaluate that exposure and what happened with prescribing. The exception is the pediatric sites, which are using more focused identification. Initially, we're going to do some Sequenom-based validation of at least these genotypes, and probably a few more as well; there's some discussion about what those will be. CYP3A5 for tacrolimus will probably be included, and a few others, like CYP2C19*17, may be included as well. Integration with the EHR will be a key part of what we do, and a lot of the workflow we have to work through for physician acceptance involves this component. There are still a lot of unanswered questions at this point about how we're actually going to put the data into the chart. Clearly we have to do it in a structured way, use accepted paradigms like HL7, make the results available for decision support, and then build our advisors to use that electronic information. We're working with standards groups. Standards are always plural; there are many standards out there in informatics, so we may use one and borrow from others to come up with what works best in our different EHR systems. It's important to note that we have a lot of different systems represented: depending on how you want to count, a couple of different homegrown systems, many sites on Epic, and GE as well as Cerner in use. So we're kind of the real world of working through EHR issues that will be applicable to many sites as they come onto EHRs for meaningful use.
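The "structured result that decision support can key on" idea can be sketched as below. This is not the actual HL7-based representation the sites use; the record fields and advisor table are hypothetical, meant only to show how a called phenotype in the chart lets an order-entry advisor fire deterministically.

```python
from dataclasses import dataclass

@dataclass
class PgxResult:
    """A structured pharmacogenomic call stored in the chart (illustrative)."""
    gene: str
    diplotype: str   # e.g. "*2/*2"
    phenotype: str   # e.g. "poor metabolizer"

# Advisor text keyed on (gene, phenotype). Wording is illustrative, loosely
# following the clopidogrel advisor behavior described earlier in the talk.
ADVISORS = {
    ("CYP2C19", "poor metabolizer"):
        "Consider prasugrel or ticagrelor instead of clopidogrel.",
    ("CYP2C19", "intermediate metabolizer"):
        "Consider an alternative antiplatelet agent.",
}

def advise(drug_order, results):
    """Return advisor text if the ordered drug has a matching stored call."""
    if drug_order != "clopidogrel":
        return None  # only the clopidogrel rule is sketched here
    for r in results:
        msg = ADVISORS.get((r.gene, r.phenotype))
        if msg and r.gene == "CYP2C19":
            return msg
    return None

chart = [PgxResult("CYP2C19", "*2/*2", "poor metabolizer")]
print(advise("clopidogrel", chart))
```

The design point is that the advisor never re-interprets raw genotype data at order time: the interpreted phenotype is the structured element in the record, which is what makes the rule portable across EHR systems.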
Hopefully that will help make our work more translatable to other sites. After we get our data, we're going to have a lot of dense genotype data, and we want to make it available to the community. We're going to have a kind of variant server; exactly what it looks like and where it will be housed are still being worked out, but it will hold the sequence data, and we want to combine it with some sort of phenotype database derived from the EHR. We're working out those details as well; we may integrate some biological function data too, and have some sort of web interface to query it, probably with different levels of access. In some cases you might have access to just the variants; some levels of access might give you aggregate counts; and deeper levels, with a login and password, or perhaps limited to eMERGE, would let you get more and more information around specific exposures. The phenotype database is very much in development, and we're talking about what we can put into it. Developing these phenotypes is hard and can take six to 18 months. We've done a couple of pharmacogenetic phenotypes, and they have been particularly hard to validate, so curated pharmacogenetic phenotypes of the kind we often build in eMERGE may or may not be in scope for what's initially in this phenotype database. But we are going to feed it with a lot of data that we can easily get out of the EMR, and the current proposal is based on counts of what we find in the chart. We may go after specific things in more detail, like INR with warfarin, trying to get at warfarin dose, for instance, since that tends to be a very common exposure we'll see before the end of eMERGE round two. So we're working through this.
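The tiered-access model described for the variant server can be sketched as follows. The tier names, record fields, and counts here are assumptions for illustration only; the talk makes clear that the real design is still being worked out.

```python
# Hypothetical access tiers: public sees variant identity only, registered
# users see aggregate counts, network members see linked exposure detail.
TIERS = {"public": 0, "registered": 1, "network": 2}

# A made-up variant-server record for illustration.
RECORD = {
    "variant": "CYP2C19*2",
    "carrier_count": 1823,
    "total_genotyped": 9010,
    "drug_exposures": {"clopidogrel": 214},
}

def query(record, tier):
    """Return only the fields the requester's tier is allowed to see."""
    level = TIERS[tier]
    view = {"variant": record["variant"]}
    if level >= 1:                       # aggregate counts
        view["carrier_count"] = record["carrier_count"]
        view["total_genotyped"] = record["total_genotyped"]
    if level >= 2:                       # linked EHR-derived detail
        view["drug_exposures"] = record["drug_exposures"]
    return view

print(query(RECORD, "public"))
```

Filtering at the server, rather than trusting clients to ignore fields, is what makes the "deeper levels with login" model enforceable.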
And this could be something with broader levels of openness, maybe just aggregate counts, like I said, not limited to the eMERGE network. We have a number of process measures that each site is going to investigate; I alluded to this before: surveys, accrual measures, performance of PGRN-Seq against other validation methods, and genotyping, which is obviously very important, including the distribution of genotypes. Many of the sites have patient portals where patients will be able to view this information, so we'll be able to track whether patients look at their genetic information and how often. And of course, as I presented with PREDICT earlier, whether prescribers actually use the genetic information and whether it changes care will be outcomes as well. Finally, we may look at rare variants as specific outcomes, based on the frequency of counts of those exposures and variants. There's a lot of potential for collaboration: as I mentioned earlier, the PGRN-Seq platform is available for others to use, and as that happens, our plan is to make this variant server something that can be opened up, so if other people sequence and want to deposit that data, we would want to build in that capability. And eventually this repository will be available as well. So with that, I will end. Wow, I just had one finger left. So OK, finger, yeah. I'd be happy to take any questions. Great. I have to be careful because I'm on camera. So Mark has a question. But before he does, are there other people with one? OK, so while you're thinking, go ahead, Mark. So going back to your very preliminary PREDICT data on warfarin, it was interesting to see how few actually followed the standard recommendation. And the question I had is: isn't five milligrams a day sort of the default? Yeah, yeah, I agree.
So one of the questions I had was that some of us have done prospective work looking at this that really says, if you're wild-type, the dose is really six, and that the five-milligram recommended dose comes from the fact that we lumped all of the wild-types and the variants together; based on the allele frequencies, the number that falls out is five. But have you noticed that people are maybe aware that six may be closer for wild-type, and any correlation? Or is it just too early to even know? I would say it's too early. I'll just hold on that. One of the things I think we've observed is that when we initially pushed out clopidogrel, a lot of people didn't really respond; it was unfamiliar, people had strong opinions, and we had to work through this idea of genome-guided care. I really believe part of this is warfarin being the third intervention we've launched: we've done all these focus groups, shown the evidence, and so on, to finally come to a drug that everybody knows is highly variable, and they've worked through this. I think there's much more trust of us now. I don't know if that's, you know... I think that, yeah, exactly, it may not always be a great thing, but I think they're starting to drink the Kool-Aid, so to speak. Yeah, we've got to get away from that terminology. But anyway, Pearl, you had a comment? Yeah, thanks so much. You said that in the PREDICT model, you asked people who did not follow your advice to write in why. Are there any major reasons why? Excellent question. I don't know the answer yet; we haven't looked yet. Other comments for Josh? Uh-oh, Wolfgang. Oh, Wolfgang. And then Mike. Just a quick question. There's a lot of new knowledge coming out, even on those standard tests in pharmacogenetics. So how flexible is the entire system when new knowledge comes out? And how do you respond to that on a system-wide basis?
So I guess there are two components to that. One is, do we make a change decision? And the second is the informatics of actually changing the system. On the first part, I think we're fairly conservative: we require a certain level of evidence and a committee of other people reviewing it to say, OK, we want to do this, and we certainly have high standards about replication with the ADME chip and its performance. If those bars are passed, we can generally implement changes pretty quickly. Overall, we've had five different iterations of the clopidogrel advisor in the last two years, which is definitely above average for our decision support, and we've made minor tweaks to the warfarin advisor, for instance, since it launched. Great. Mike? So I'm curious about how you're interacting with the infrastructure that ultimately supports this nationally. You obviously know the rare-variant problem, which doesn't just apply to pharmacogenetics; it's throughout the genome, and it's going to take data from far and wide. But genetics is so low on the radar screen of most standards organizations and hospitals, because it's not a big financial blip on the radar, that we're having to develop our own. So as you roll these out, or standardize your data dictionaries around these pharmacogenetic diagnosis, evaluation, and analytical dictionaries, do you take them through the standards panels so they start to build the national infrastructure that allows this to happen in an EMR environment? Or is this restricted to your studies? You said six to 18 months; my experience has been you can get your data dictionary standardized in 18 months, and then it's another 18 months to work through the standards organizations that make it national policy that Epic and others have to integrate into their systems. That was a great point.
So that timeframe was just for defining a phenotype; I wasn't talking about the standards process, which is a whole other process. We do have an EHR integration work group, which Erwin is one of the co-leaders of, and we are interacting with those national bodies: we've had them on phone calls and shared what we're doing. In our local effort, I would say we have not been doing quite as much of that; we've just been trying to figure out all the workflow pieces. It's kind of a principle of informatics: think through the process and do it on paper first. Before we try to figure out what we're going to standardize, we're working through the kinks, getting it through the system, and seeing where the pressure points are. But we have extracted models, kind of like what St. Jude and others have done, where you can start to break these problems up into reproducible, structurable formats, and we're trying to publish on that, et cetera. OK. I think our last comment, Jonas. So eMERGE is a very visible presence in the standards landscape, but it doesn't have an equivalent presence in the interoperability landscape. If I look for an HTTP API, just to be a little bit technical (this will be recorded, so maybe someone on your team will be able to react to this), it's just nowhere to be seen. And my question is: is this intentional? Is this something you don't want to initiate now? Or is it just maybe a lack of oversight on this particular issue of interoperability? So, Erwin, do you want to take it? I'm just kidding. It's not something that we're intentionally trying to avoid. We certainly are talking with Epic and Cerner; we have GE; we've invited them in, and we're having those discussions. We have a homegrown EMR ourselves.
So some of the standard questions about API integration with commercial EMRs are maybe a little harder for us to address head-on, but certainly we want to, and we want to get at those problems. No, I mean interoperability with academic platforms, not commercial. But there is a lack of interoperability with academic platforms too; you're definitely right on that. Yeah, this is a huge issue. When it really comes right down to it, all of us who are implementing are representing the pharmacogenomic data as individually developed data elements, because we don't have any standard for how to represent genomic data in the EHR. We are so far behind on some of these fundamental needs that to actually proceed with implementation, we're essentially having to say: we can't solve this problem, we have no control over this problem, therefore we will ignore this problem and come up with local solutions. So in this particular situation, when you've seen one implementation, you've seen one implementation, at least at the code level, even though we're on the same page in terms of the narrative and the algorithms and those sorts of things. Yeah, I guess you could say the standard is a PDF document, right? All right, Josh, thank you. So Rex, if you can head on up there. Was there something else? Yeah, OK, so I should explain. Sorry. Rex was actually going to talk a little bit about what our plans are for the next meeting, because we were thinking of having Jeff explain the next meeting, but we're probably going to make that the next next meeting. Rex will make this all very clear when he talks, because I obviously didn't. So, Rex. Maybe we could have the other slide. Yeah, Richard, if you could give us the archism slide.