Thank you. This is something that we touched on a little bit yesterday, and it's something I've been working on for about the last ten-plus years. I want to start off with the importance of perspective in economic analysis. One of the difficulties we have with economic analyses is that most of them in the academic world are performed from the societal perspective; that's where you have the QALYs and all that sort of thing. But if you're trying to make decisions at a different unit than society, that type of perspective doesn't translate very well. In fact, particularly in the United States, since we don't have a national health system, a societal perspective is particularly problematic. So one of the things that I've done, in collaboration with my colleagues, is adapt the tools of economic analysis for different settings. This irritates the bejesus out of the academic economists, but it does work, and so I'm going to give you four examples. We're going to go through these very quickly, so a lot of this falls into the "and then a miracle occurs" category: I'm going to set it up, I'm going to give you the answer, and there's a miracle in between. But I do provide references for everything, for those of you who are interested in checking our work.
We do, in fact, explain our miracles. Dan made an interesting comment yesterday about the afternoon it would take to slap together an economic decision analysis. This is one branch of a tree that my analyst came up with for universal Lynch syndrome screening; the entire thing actually covered the door of his office at eight-point font. These things can get incredibly complex, but fortunately, through a process of pruning and other things, you can get down to the point of getting useful information out. When I was at Intermountain Healthcare, we as a system decided, based on the EGAPP Working Group recommendation, that we were going to implement universal tumor screening for Lynch syndrome. But there are several different ways you can do that: you can use immunohistochemistry with BRAF and MLH1 methylation testing, et cetera, and the EGAPP recommendation was silent on the best way to do it. So what we did was use our economic model. We knew exactly what each of these things cost us, so we used those as our input variables; we didn't have to mess around with sensitivity analyses or assumptions. We knew the numbers. What we found was that the different approaches had essentially no impact at all on the number of Lynch syndrome cases we were going to identify, but there was a significant difference in the cost per case of Lynch syndrome detected, from $10,700 with one approach to over $13,000 with another. From the perspective of a system that's going to have to spend money on this, that's a really important piece of information, because they can spend $3,000 less per case detected with no loss of sensitivity.
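The cost-per-case comparison here is simple arithmetic: total program cost divided by cases detected. A minimal sketch, where the strategy names and total costs are invented and chosen only so the per-case figures match the ones quoted:

```python
# Illustrative only: strategy names and totals are assumptions, not the
# actual Intermountain figures. Both strategies find the same number of
# cases; only the cost per case detected differs.
strategies = {
    # name: (total screening-program cost in $, Lynch syndrome cases detected)
    "IHC with BRAF/MLH1 methylation": (535_000, 50),
    "alternative workup": (650_000, 50),
}

for name, (cost, cases) in strategies.items():
    print(f"{name}: ${cost / cases:,.0f} per case detected")
```

With equal case detection, the decision collapses to cost per case, which is exactly the comparison the system needed.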
So that was really important in terms of decision-making. I didn't show the data, but as we were implementing this, we had seven different hospitals doing colorectal cancer resection. One of the hospitals came and said, well, we really should only be using this for patients under age 50; that's really the best place to do it. Usually at that point we would have an eminence-based medicine discussion, "well, I think this and you think that," but instead we went back to our model and modeled out the different age cutoffs. What we found was that, yes, it's cheaper if you use an under-50 cutoff, but you miss half the cases. And when the system looked at the total expenditure, they said, you know, if we poked around in the C-suite couch cushions we could probably come up with the money to cover this; we're screening everybody. So the data were extremely useful in moving this forward in a consistent way across the system.

The second example is IL-28B and protease inhibitors in hepatitis C. Protease inhibitors at that time were routinely used for HCV viral genotype 1, and there were several economic analyses that supported their cost-effectiveness. But because viral genotypes 2 and 3 are more responsive to therapy, the standard therapy for them did not include protease inhibitors. However, we also knew that there was a patient genotype in IL-28B that predicted good versus poor responders to treatment in all HCV viral genotypes, though there was almost no evidence about its impact in genotypes 2 and 3. So the question we asked was: if you did IL-28B genotyping and used it to select candidates for triple therapy from the get-go, so that the poor responders initially received a protease inhibitor just as you would for genotype 1, how much improvement in sustained viral response, the intermediate endpoint of interest, is needed to cross a cost-effectiveness threshold? We did this analysis, and what we found was that if you
administered triple therapy to patients with the treatment-resistant IL-28B genotype, you only needed an improvement in sustained viral response of about two percent to cross a cost-effectiveness threshold, which was stunning, whereas if you treated all patients with triple therapy, you'd need to see an improvement of 11 percent. So this was groundbreaking work, which was immediately supplanted by the new medications. It's all completely irrelevant now, but it was still fun to do. Science marches on.

The third study I wanted to briefly present shows the importance of using a patient perspective. There was a small prospective pharmacogenomic-informed trial of warfarin dosing at Intermountain Healthcare, and we worked with David Veenstra and his group at the University of Washington to develop a policy model using those data to assess cost-effectiveness. What we found was that testing preemptively versus not testing were essentially equivalent; we couldn't really detect much of a difference between the two arms. However, the prospective trial data showed that the tested patients, those who underwent pharmacogenomic testing, required two to three fewer INR checks to get to a stable dose. All things being equal, I think a patient-centered perspective would say: if you can disrupt that patient's life two to three fewer times by not having them come in for INRs, that's a pretty important outcome for that patient, because you're not taking them out of work or out of the home to come down to the clinic, find parking, et cetera. This basically says the costs are the same, so why not choose the option that is less disruptive for the patient? That's a different perspective.
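The patient-perspective point above is a small piece of arithmetic: the direct costs are a wash, but the avoided visits are real to the patient. A toy sketch, where every number (visit cost, test cost, hours per visit) is an illustrative assumption, not a value from the trial:

```python
# Toy illustration of the warfarin patient-perspective argument.
# All numbers are invented assumptions for the sketch.
inr_visit_cost = 60.0   # direct cost of one INR check (illustrative)
genotype_cost = 180.0   # cost of pharmacogenomic testing (illustrative)
visits_avoided = 3      # fewer INR checks observed in the tested arm

# From the payer's ledger the two arms look equivalent...
cost_difference = genotype_cost - visits_avoided * inr_visit_cost
print(f"net direct-cost difference: ${cost_difference:.2f}")

# ...but the patient avoids three disruptions: travel, parking, time off work.
hours_per_visit = 3     # illustrative round-trip burden of one visit
print(f"patient time saved: {visits_avoided * hours_per_visit} hours")
```

The model sees the first number; the patient feels the second. That asymmetry is the whole argument for the patient perspective.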
We haven't done much of this from the patient perspective, but I think we should be doing more thinking about that.

The last thing I want to present is something we've been working on recently: generic modeling. The problem with economic modeling is that it is complex and requires specific expertise, and that expertise is not broadly available. I was really fortunate at Intermountain to have an analyst who was very interested; we kind of learned how to do it together and found it useful, but most places don't have that type of expertise. The other problem is that most models are created for a specific perspective, so they're one-offs: I can build my model for Intermountain Healthcare, but then somebody else would have to build a model for Geisinger or UAB or wherever, and all of the inputs are customized, which means you can't really reuse the model. So the question we asked was: could you take a generic approach and build a generic model that would allow stakeholders to enter their relevant key parameters and generate results relevant to decision-making? We knew it wouldn't be as good as a customized model, but could it be good enough? Could we expand this?
Then we wanted to see how this model performs against a gold-standard approach to modeling. We used the test case of HLA-B*15:02, which we've heard a fair amount about, in association with carbamazepine, to reduce the risk of severe cutaneous adverse events. The rationale was that this is a medically significant issue; it's already been implemented in some settings; and there are significant differences in the allele frequency of 15:02 in different populations, as well as in costs and practice patterns, that lead to variation in cost-effectiveness. So we knew we would see variable inputs, and there was an existing gold-standard economic model that had come out of Thailand. We did this under a supplement through the University of Florida's economic modeling project on pharmacogenomics for prevention of Stevens-Johnson syndrome, so I want to thank our sponsors and our collaborators at the University of Florida and our global collaborators.

This is an example of the decision tree. You have three options. The first is no HLA-B screening, in which case you have HLA-B*15:02 carriers, who might or might not develop Stevens-Johnson syndrome, and the non-carriers. We made the assumption, which is okay for modeling, that there would be no Stevens-Johnson syndrome or TEN in the non-carrier population; that's not completely true, but the rate is low enough that you can essentially treat it as zero. The second is a universal genetic screening arm, where those who tested positive used an alternative drug, in this case valproate, because we heard yesterday about the phenytoin issues, and we assumed that no one would have a severe cutaneous event on it, which is a pretty reasonable assumption. It's not shown here, but we also built in the fact that valproate is not as effective a medication as carbamazepine for seizure control in these particular types of seizures; that was built into the model.
Then, in those who tested negative, the truth is there's the possibility of false negatives on the testing. That rate would be relatively low, and we know those numbers, but those individuals might be at risk for TEN. And we also modeled a third alternative, which was just emerging, in Taiwan I believe, where everybody was simply moving away from carbamazepine; they were saying, I'm not going to test, I'm just going to use valproate. So we modeled that as well.

I'm just going to show you a few pieces of the approach. Here we said: here are all the variables we need. When we looked at the inputs, there were three types. The first type was inputs the users had to specify. For example, what is the prevalence of this allele? That varies from population to population, so it would have to be entered for the specific population in which the generic model was being used. We also needed to make it clear that the prevalence is different from the allele frequency, and that you need to multiply the allele frequency by 2 to get this input value.
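The three-arm tree just described can be sketched as an expected-cost calculation, with the carrier-prevalence input derived from allele frequency as the model requires (for a rare allele, carrier prevalence is approximately 2 times the allele frequency, the 2pq term of Hardy-Weinberg with q near 1). All probabilities and costs below are illustrative assumptions, not the study's values:

```python
# Minimal expected-cost sketch of the three decision-tree arms.
# Every number here is an invented, illustrative assumption.
allele_freq = 0.04           # HLA-B*15:02 allele frequency (illustrative)
p_carrier = 2 * allele_freq  # carrier prevalence ~ 2 * allele frequency
p_scar = 0.05                # SJS/TEN risk in carriers exposed to carbamazepine
false_neg = 0.01             # test false-negative rate
cost_test = 150.0            # cost of HLA-B screening
cost_scar = 30_000.0         # cost of a severe cutaneous adverse event
vpa_penalty = 500.0          # cost of poorer seizure control on valproate

# Arm 1: no screening -- all carriers are exposed to carbamazepine.
no_screen = p_carrier * p_scar * cost_scar

# Arm 2: universal screening -- detected carriers switch to valproate;
# false negatives stay on carbamazepine and remain at risk.
screen = (cost_test
          + p_carrier * (1 - false_neg) * vpa_penalty
          + p_carrier * false_neg * p_scar * cost_scar)

# Arm 3: no testing -- everyone moves to valproate.
all_vpa = vpa_penalty

print(f"no screening:  ${no_screen:,.2f}")
print(f"screening:     ${screen:,.2f}")
print(f"all valproate: ${all_vpa:,.2f}")
```

The structure, not the numbers, is the point: once the tree is written down, each site only has to supply its own inputs.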
So we tried to make it as user-friendly as possible. There are also the costs of care, which vary from place to place; these are the components of the cost of care that would need to be included: cost of disease treatment, et cetera. The second type of input had default values, for things like laboratory test performance, where we had pretty good information that these were going to be consistent, so they could be pre-populated in the model. And then there was a third group of inputs where we had default values based on reasonable evidence, but if a group had a specific value of their own, they could insert it rather than use the default. We then ran this against three models: the Thai model that I mentioned, and then two groups from Malaysia and Singapore that also participated in this project each developed their own country-specific model, which we compared against ours. I'm not going to go through all the numbers, because it gets reasonably complex, but the bottom line is that this actually performed reasonably well. It's feasible, and it's an efficient and timely value-based method. It gets you in the ballpark of, you know, does this look cost-effective or not, using a generic approach.
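The three input tiers described above (required site-specific values, fixed defaults, and evidence-based defaults the user may override) can be sketched as a small merge function. All parameter names and values here are hypothetical, not from the actual model:

```python
# Hypothetical input tiers for a generic model; names and values are invented.
FIXED_DEFAULTS = {"test_sensitivity": 0.98, "test_specificity": 0.99}
OVERRIDABLE_DEFAULTS = {"scar_risk_in_carriers": 0.05}

def build_inputs(site_specific, overrides=None):
    """Merge required site-specific inputs with the model's default tiers."""
    required = {"carrier_prevalence", "test_cost", "treatment_cost"}
    missing = required - site_specific.keys()
    if missing:
        raise ValueError(f"missing required inputs: {sorted(missing)}")
    # Site-specific values and explicit overrides win over defaults.
    params = {**FIXED_DEFAULTS, **OVERRIDABLE_DEFAULTS, **site_specific}
    params.update(overrides or {})
    return params

params = build_inputs(
    {"carrier_prevalence": 0.08, "test_cost": 150.0, "treatment_cost": 12_000.0},
    overrides={"scar_risk_in_carriers": 0.04},  # local evidence replaces default
)
```

The design choice is simply to make the required tier fail loudly when a site forgets an input, while the default tiers keep the barrier to entry low.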
So we think that if we were able to do more of this generic modeling, it would allow more people to rapidly perform this type of modeling exercise without the extensive training needed to create these models de novo, and hopefully that would help us figure out how implementation in different settings would really look. The manuscript describing this has just been circulated to our internal group and will be revised and submitted shortly; that's the one manuscript I don't have in my reference section, since it's still a work in progress.

In conclusion: defining perspectives is critically important, and economic analysis tools can be used pragmatically to rationalize decision-making. I will tell you, it's tough to get these things published, because the reviewers tend to be of the academic-economist variety and they have problems with the way we use their tools. But I think in the long run we've demonstrated that there's enough value from the perspective of the decision-maker that, irrespective of the academic criticism, these tools are still highly useful for decision-making. These are references for the things I specifically mentioned. And for those of you who are really interested, shameless plug here: this is a book that came out two years ago, Economic Evaluation in Genomic Medicine, that I wrote with some colleagues in Greece and the Netherlands. The thing that's a little bit unusual about this book is that we have an introductory chapter on economics for geneticists and then a chapter on genetics for economists, which were very fun to write. We use a lot of pragmatic examples all the way through to illustrate the different perspectives. I find it very useful, but I do get a royalty for every copy sold, so that and another 50 cents will get me a coffee at Starbucks in the morning. With that, I'll end.

Great. We have time for some questions.
Yes, Sandy. I'm just curious, from a computer science point of view, the first thought would be to take the generic model and add parameters to enable it to be as specific as the country-specific models, but I'm guessing that in reality it's more complex than that. I'm just wondering where that complexity lies and why that's not possible. Right, so the complexity lies partly in the model construction itself. The generic model trims down some of the decision points that you would add back if you really wanted a fine-tuned model. For example, if my recollection is correct, when we looked at the country-specific models we realized that the difference in efficacy of valproate doesn't contribute a lot to the end result, so I think we suppressed that in the generic model, whereas including it would give you a little more accuracy. What we tried to do is ask: what are the things that would require a tremendous amount of effort to populate in a model but at the end don't give you much bang for your buck, meaning you're using resources that are probably either not available or could be better used in other ways? That's how we tried to streamline this to some degree. I think what you'd ultimately have to do in any of these situations is accept that you're not going to be able to create a generic model that works globally. You're going to have to have some sort of best-practice model, well understood and ideally tested in a couple of different settings to make sure it's robust, that you would then use to develop a generic model that could be put out. Jeff? Thanks, Mark.
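The pruning argument above can be illustrated with a toy calculation: if a parameter, such as the valproate-efficacy difference, moves the result by only a couple of percent, the generic model can drop it. All numbers are invented for the sketch:

```python
# Toy pruning check: does including a minor parameter change the result
# enough to justify asking every site to supply it? Numbers are illustrative.
def expected_cost(include_vpa_penalty):
    p_carrier, cost_test, vpa_penalty = 0.08, 150.0, 40.0
    return cost_test + p_carrier * (vpa_penalty if include_vpa_penalty else 0.0)

full = expected_cost(True)     # fine-tuned model keeps the parameter
pruned = expected_cost(False)  # generic model suppresses it
rel_diff = abs(full - pruned) / full
print(f"full={full:.1f} pruned={pruned:.1f} relative difference={rel_diff:.1%}")
```

A small relative difference is the signal that the extra input costs more effort to populate than it buys in accuracy.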
So, since the notion of doing some of these economic modeling studies is relatively new, to us at least: do you have a set of standard measures that we should be thinking about capturing in order to allow us to develop those models going forward? Are there things we can incorporate into our ongoing studies? Yeah, so the world of economic modeling does have published best practices for how this should be done, and what we've attempted to do in all of these studies is to go to those reference works of best practices and make sure we're following them. I don't think we need to adapt those for genetics or genomics in particular; I think they work well. Then it's a matter of clearly identifying what we want to study, having the outcomes that are important, and finding the data sources, and that's the big decision. For the Lynch syndrome one, we frankly would not have been able to do it with the then-current published data, but we were able to work with Ohio State University and Heather Hampel, who had done a very large-scale study where they'd sequenced everybody, so we knew, more or less, the right answer for the sensitivity and specificity of a tumor-based approach. None of that was in the literature, but using that sort of gray data we were able to make the model work. So the data sources are the biggest issue.
So I'm wondering, can you or colleagues provide the types of data and where we can find them, to incorporate into our studies? It depends, because all the data would be specific to a given modeling exercise. What you really do at the beginning is say: here's the question, here are the inputs, and then you have to map those to data sources. Some of it is literature sources that can be used; in some cases, as with the IL-28B study, there clearly were no data for certain of the nodes. The way you deal with situations where there are no data is that you generally get an expert consensus on what the right number is thought to be, and then you use a technique called sensitivity analysis that allows you to vary that value: what if I'm off by two or five or ten in either direction, and how does that impact the result of the model? What you find is that for some of those inputs for which there are no data, it makes no difference; the end result is the same irrespective of the value you choose. That was the point I was trying to make yesterday: the real value of modeling is that when you have no data for certain decision points, you can use the model to determine whether we need to invest in getting those data, or whether it really isn't important, in which case we should ignore it and put our resources elsewhere.

Okay, yes, Mary. Mark, for your warfarin example, you mentioned that the genotype-guided group got by with two to three fewer INRs than the non-genotyped group. So how did that not translate into an effect on cost?
Because of the way the cost is modeled: it doesn't include travel time, time away from work, and that sort of thing; it just includes the INR costs. And if you added up the cost of the three INRs, it was essentially equivalent to the cost of the genotyping. So you could legitimately criticize that and say that if you included those attendant costs, it would actually have shown a difference. Huge, right? Not huge enough. Even if you take three days where somebody has to leave work, find a babysitter, get a cab, or get their daughter to take them to the doctor, in the cold hard reality of a lifetime perspective it's several decimal points down the road. But the point you're making is the critical one: from an economic modeling perspective it makes no difference, while from an individual's perspective it makes a huge difference. And that's not to say the economic modeling is inadequate; it's just a limitation. If you're looking at lifetime events, the impact and frequency of a bleed or a clot related to warfarin are so small in the context of the rest of the individual's life that three days don't show up in the model; they're lost in all the other noise. But we can't lose sight of the fact that three days for that individual is a huge impact. So economic models will help, but they don't answer all of the questions we can answer. We have to understand the limitations of what modeling will tell us, and we have to understand that, wait a second,
that's three days, and that's a huge impact in that very short period of time for that individual, in that one week out of their 80 years of life. But you can't measure three days out of 80 years of life and expect it to show up in an economic model.

Mark, what I heard you saying is that economically it was a wash on cost. Right. But when you take into account patient preferences for having fewer tests, to your point, Mary: intuitively, I don't think that you or Dave directly assessed this through a patient questionnaire or anything like that; it just made sense to say that patients may prefer fewer tests. Is that the point you're making? That's the point I was attempting to make, yes.

But if you say the cost of those three days is divided out among 80 years, and you also take the cost of the test and divide that out among 80 years of health care costs, then the cost of the test is insignificant also. That's correct. So then it seems like any economic analysis would almost always be either a waste of time, because nothing is ever significant against the total cost, or always significant, because each piece is such a small proportion. Well, the impact of having a death related to, say, a MACE is economically enormous, and the model detects that signal. The problem with warfarin is that, at least based on the data we had available at the time, we did not have much of a signal on some of those major bleeding or clotting events that would show up well in the model. That study is now about eight years old, and a number of economic models of warfarin came out around that time, and they were all over the map because the
assumptions varied so widely. One model showed it was highly cost-effective because it prevented all events, which we know is just ridiculous; you can't prevent all the events.

So does the sensitivity analysis on the actual cost of the genotyping come out to be significant? Does it matter whether the cost is two hundred dollars or two thousand? It depends on the model you're using. For example, in the HLA model, and we show this in something called a tornado diagram, where you show the impact of all the different input variables on the model from largest to smallest, the cost of the genotyping test was a significant driver of cost-effectiveness for the model. Whereas for other models, the result is much less sensitive to the cost of the test; it's much more about the performance of the intervention.

Okay, I can see that this group is highly engaged on the topic of cost-effectiveness, which is great, but I think we need to move to our overall discussion.