Thank you for having me today. I have to confess I know almost nothing about the topic as billed: I was asked to talk about the financial impact of genomic program implementation, and the organizers were kind enough to allow me to talk about something I do know. Oops, I'm not on. Okay. All right. The organizers were kind enough to allow me to talk about comparative effectiveness, something I am familiar with, and I do appreciate Ned and Sean setting up the prelude to this. It will tie back around to matters of finance and economics, because I think how we think about comparative effectiveness will be central to our ability to maintain a sustainable healthcare system. So I want to go beyond cost and finance, because ultimately we need to think about sustainability and affordability. As an overview, I'll tell you a little about the Technology Evaluation Center at the Blue Cross and Blue Shield Association. Then, thinking about the introduction of diagnostic tests, where perhaps we are engaging in deja vu all over again, I'll draw some examples from the world of imaging that may be informative as we think about genomics, talk a little about comparative effectiveness as a broad concept, and then touch very lightly on what I think of as the third rail of comparative effectiveness, which is cost and cost-effectiveness. So we'll touch that gingerly.

The Blue Cross and Blue Shield system comprises 38 independent companies that make their own coverage, financial, and other decisions. The Blue Cross and Blue Shield Association provides them with core services and representation, one of which is the Technology Evaluation Center. Collectively, the Blues have almost 100 million beneficiaries, about one in three people in the U.S. The Technology Evaluation Center has been in existence since 1986 and represents our system's commitment to using evidence in decision-making around health benefits.
The core of our technology assessment process is rigorous, systematic review of clinical evidence, and the ultimate question, of course, is: does this technology improve health? We have an independent Medical Advisory Panel that has scientific and clinical authority over our reports, and we are proud to have Marin as one of our members. Of that panel's 19 members, only three, or maybe four, are affiliated with health plans. We have a dedicated professional staff within TEC of physicians, epidemiologists, and pharmacists who develop our reports. We are proud to be an Evidence-based Practice Center of the Agency for Healthcare Research and Quality, designated as the comparative effectiveness review center for cancer and infectious disease. And I have been honored myself to be appointed to the Methodology Committee of the Patient-Centered Outcomes Research Institute. So what I'll present today is really an amalgam of thoughts from these various vantage points and experiences. I do want to speak with clarity to one issue that is often confusing outside the health insurance payer world, which is: what are we talking about when we talk about technology assessment? We see the world contractually as divided into medical policy, coverage policy, and payment policy. Coverage policy, what is actually covered, is determined by the contract with the beneficiary. Typically it is a contract with the employer who provides healthcare benefits, and it outlines very broadly what the beneficiary is entitled to under the contract; it may consider matters of cost or cost-effectiveness with respect to including or excluding certain types of benefits. How much the reimbursement is is another contractual matter, a contract between the health plan and the provider, whether that is an institution, a clinical laboratory, and so on; that determines the payment. The part of the world that we sit in at TEC is what we would call medical policy.
This is very much a clinical enterprise. It gives the medical directors in our plans the clinical analyses to support their decisions with respect to medical policy, and contractually that means administering the medical necessity and investigational provisions of the plan. So just understand that our vantage point is the system. As I think about genomics, I'm very influenced by a symposium on comparative effectiveness in imaging that Sean and I both attended, where some fantastic work is being done. It reminds us that in some ways what we can learn from the experience of radiology and imaging can be very informative for genomics. Advanced imaging both entered a different world than the one we now exist in, in terms of the healthcare system and the issues of cost, affordability, and access, and to some degree also helped to create this world, because the 1980s really saw a surge of high-tech interventions and diagnostics, whether CT scanning or organ transplants; there was a tremendous rapidity of innovation, surges of capability and surges of cost in that era. So we inherit the lessons of that era and still some of its dilemmas. I have here the diagnostic model for a continuum of efficacy, published by Fryback and Thornbury in 1991. It was developed with regard to imaging, but it outlines a hierarchy of levels of efficacy, starting with the technical, which we tend to think of as pretty pictures (and pretty pictures are really not enough to establish clinical value), and going up to societal efficacy. I find it of interest because, as you look at it, those of you who are familiar with the ACCE framework, as most of you are, will see here a progenitor, and will see how in various fields of diagnostics we have been grappling with this tension between more information and the clinical use of that information for several decades now, trying to get the right applications and the right balance.
This is from a paper by Golden and colleagues that traced about five technical innovations in diagnostic imaging in breast cancer. This is the timeline for MRI, but in general I think it is sobering and informative. Their overall finding was that utilization of new imaging technologies is driven by regulatory approval and reimbursement by payers rather than by evidence that they provide benefit to patients. I go to many conferences and meetings of many stakeholders, and often the question at hand is how payers cover: what will they pay for, why do they pay, when do they pay, how do they pay. The real question we have to ask is how patients are benefiting, and what we are creating in our healthcare system to maximize our ability to provide benefit and improve outcomes for the population. This is illustrative, again from the world of imaging, but it shows how the number of CT and MRI scans has risen along with capacity; demand has grown in lockstep with the actual capacity that is out there, as you are probably well aware. Healthcare tends to violate a basic law of economics, which holds that if you have more capacity and more supply in relation to demand, you ought to see some moderation of prices; in healthcare it seems that capacity actually drives demand and can be price-increasing. We need to be very cognizant of this as we think of a world where affordability in our own healthcare system is critical. We have 53 million individuals who are uninsured, locked out of the healthcare system except in emergent situations. We are trying to bring them into the tent and balance all of these variables for access and affordability. I have to thank Peter Bach for this slide, and he thanks Dr. Seuss, but we will call this medical Seussonomics, which illustrates that if you have capacity, you will have throughput, and if you have throughput, you will have revenue generation.
What we don't know is whether you will have a health outcome benefit, and that is the challenge before us. So, thinking about this, I call it diagnostics, prognostics, agnostics, because I think we need to take an agnostic stance: imaging, testing, information in and of itself, as Ned has pointed out, is neither good nor bad. It has the potential for both beneficial and adverse effects. As we begin to think about how we evaluate evidence in the age of comparative effectiveness (we have been in the age of technology assessment, and I think we actually need to move beyond it), we tend to start from the test. I've got this test. Does it work? Can I do something with it? What can I do with what I've got? Well, maybe I'll compare it to another test. But we really have to start from the patient and from the clinical decision at hand, and ask: given this clinical situation, given this context, given these attributes, what is really important to making a decision and to management? And what strategies are available to accomplish the best we can for these individuals? So I would like to invite us, particularly as we stand at the doorway to a new era in genomics, to adopt a new paradigm and a new way of thinking about evidence evaluation: to contextualize it, and to work from the clinical problem rather than from the test that we would like to introduce or get reimbursed and that we hope will improve outcomes. If we start from the clinical setting, from the clinical question, we have better prospects of achieving that. We need to realize, too, that in comparing test to test, another part of the context we may lose is that test results are rarely definitive; they are often suggestive, and very often additive in the context of other clinical information.
So when we think of the test per se, we really need to think more broadly about the setting of use and what the test represents in terms of all the information that goes into the strategy. I'd like to suggest that clinical management strategies are the context for evaluation, and that tests and treatments are components within them, best displayed in an analytic framework. All the steps that take you from diagnostic information to an outcome involve treatment decisions, treatment interventions, the knowledge and behavior of providers, and the knowledge and behavior of the recipients, the patients. It is a very long and complicated chain from information to outcome, and in that chain we have a cascade of consequences, some of them for the best, some of them leading to unnecessary testing, misdiagnosis, or harm. Certainly we have become more aware of this in radiology with advanced imaging: the remarkable exposure to radiation from relatively young ages that has simply become part of our medical practice, and whose potential for harm was taken for granted and not appreciated until the last five or six years, when there has been more documentation and more awareness of it. So diagnostic results are in themselves intermediate outcomes; they depend on treatment to produce what are called direct outcomes in the language of technology assessment. And in the real world, in the world of true clinical effectiveness, they are mediated by operator skill, the health delivery system, and patient adherence. So it's really complicated. That is why I think it is important to understand that comparative effectiveness is much more than A versus B, or, as I've heard it put, more than the red pill versus the blue pill. Again, we come back to this idea of a context: the clinical situation, the decision, and the strategy.
So, comparative effectiveness. I draw here from the work of the IOM Committee on Comparative Effectiveness Research and also from the work of the Federal Coordinating Council for Comparative Effectiveness Research. Comparative effectiveness addresses strategies to manage a condition, taking into account real-world practice and variations in patient populations. I found it quite telling that the Institute of Medicine's top 100 priority topics are not strictly A versus B, intervention versus intervention: half concerned the healthcare delivery system, that is, comparisons of how services are provided and with what results; a third addressed racial and ethnic disparities; a fifth addressed functional limitations and disabilities. This is side by side with the usual suspects for evaluation in our healthcare system, where the large utilization, the large populations, the big issues are cardiovascular disease, psychiatric and neurologic disorders, cancer, and others. But I find it very telling that it wasn't just about these clinical conditions; it was about the setting of treatment, about disparities, about function. We also know that the outcomes frequently measured are not necessarily the outcomes that are really important to patients. Sean alluded to this in discussing the evidence gaps, who is really creating the research, and what other stakeholders would bring to how that research is approached. That, I think, is something the Patient-Centered Outcomes Research Institute is very much bringing to the center: trying to move to the vantage point of the patient's decisions, such as, given my personal characteristics, my conditions, my preferences, what will happen to me? What are my options? And we need to keep this in mind especially as we get into the chronic diseases and move beyond midlife.
When we talk about the patient and the condition, we are not talking about an individual with one condition; we are talking about individuals with comorbidities. Chronic conditions are rarely observed in isolation; they cluster, often in sets of three or more, yet guidelines speak to osteoarthritis or to heart failure without really good guidance on how all of these intersect. We need to start thinking in a cross-cutting way about real patients with real complexities in their situations. Obviously, patients want to know about potential benefits and harms, and we hope that patients will become empowered to make decisions and have an impact on their care by being informed and by interfacing with the delivery system. Just to illustrate, and this is not at all from the world of diagnostics, it just happens to be one of my favorite topics: think about the erythropoiesis-stimulating agents. We at TEC have thought about them for about 10 years now, because AHRQ has come back to us a number of times to do comparative effectiveness reviews in this area, related to management of the anemia of cancer therapy. It became quite evident with our first report that the issue is not epoetin versus darbepoetin. There are some class differences, some convenience differences, some pharmacokinetic differences, but there is really not much difference from a clinical vantage point between EPO and DARB. The real issue in understanding the erythropoiesis-stimulating agents is not epoetin versus darbepoetin; it is how you manage the anemia of cancer chemotherapy. And there was a whole transition in thinking, from "this is a safe, benign intervention that will raise hemoglobin, prevent anemia, prevent transfusions," and "more may be better, sooner may be better," to understanding that there are real, truly unanticipated risks involved.
To me, this characterizes the whole notion of comparative effectiveness research. It is not about EPO versus DARB; it is about the situation of an individual undergoing cancer chemotherapy who may become anemic, and what the options, benefits, and risks are in that setting. Comparative effectiveness is not a de novo enterprise. We have been in the evidence-based medicine era for decades now, creating an evidence base for decision making and for practice. Newton said, "If I have seen further it is by standing on the shoulders of giants," and in the same way, comparative effectiveness research has to stand on the shoulders of present knowledge. We know, from our understanding of the evidence base and of the interventions available to us, that there are significant obstacles to assessing outcomes, to really understanding the health benefits. This is not the topic here, but I do need to highlight these: a pervasiveness of reporting of outcome measures that don't really measure health; inconsistent and often absent reporting of adverse effects; selective reporting and publication bias; and a gap between efficacy and effectiveness. I will give one of my favorite examples from the device world: the implantable cardioverter defibrillator for primary prevention. You take the original studies and line them all up. You can see that there is some benefit; you can calculate the number needed to treat, the hazard ratios, and the improvement in survival. You know that you really don't know exactly who is benefiting, but you can see that in that population there is a benefit. Then you get it out into the real world, and what happens? Defective leads, and nobody knows when defective leads should be removed, and we're not so good at how to remove them.
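As an aside, the number-needed-to-treat arithmetic mentioned above is simple to state: NNT is the reciprocal of the absolute risk reduction between the trial arms. A minimal sketch, using purely hypothetical event rates (these are not figures from the ICD trials discussed here):

```python
def number_needed_to_treat(control_event_rate: float, treatment_event_rate: float) -> float:
    """NNT = 1 / ARR, where ARR is the absolute risk reduction
    (control-arm event rate minus treatment-arm event rate)."""
    arr = control_event_rate - treatment_event_rate
    if arr <= 0:
        raise ValueError("No absolute risk reduction; NNT is undefined.")
    return 1.0 / arr

# Hypothetical example: 20% mortality in the control arm vs 15% with the device.
nnt = number_needed_to_treat(0.20, 0.15)
print(f"NNT = {nnt:.0f}")  # prints: NNT = 20
```

In words: with these illustrative rates, roughly 20 patients would need the device to prevent one event, which is the kind of population-level statement the trials support even when you cannot say exactly which individual will benefit.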
A whole unanticipated cascade of events: a gap between efficacy and effectiveness. What we knew from those trials would in no way anticipate some of the tough real-world decisions that had to be made, without much guidance, once these devices had been implanted with manufacturing defects. As we move more to comparative effectiveness, we will be looking to different kinds of studies. The randomized controlled trial has always been the gold standard, and it will continue to be an absolute platform of our knowledge, but we also need to get better at using observational methods. I believe very much that modeling, modeling with a robust empirical basis, will be critical, because there are questions of such tremendous complexity, and such rapid evolution of circumstances and technologies, that they really cannot be captured within the walls of an RCT. It is a question of how you use all of those methodological tools, and I think it is important to understand both the promises and the pitfalls of observational methods, because, just as with diagnosis and any other information, these observational methods offer us both benefits and possible harms. So let's think a little about cost. PCORI has actually been precluded from conducting any cost-effectiveness analysis. We have a political culture that is very reluctant to address these matters, and as I think of it, cost is the third rail of comparative effectiveness: if your research institute touches it, you may be zapped. But the fact is, we can't afford not to think about these things. Everybody has their own projection, but the projections have in common that, left to present trends, healthcare spending could potentially eat up the entire gross domestic product. That is obviously unsustainable. Our position in world markets has changed. We need some way of dealing with this.
So in summary: we are looking at comparative effectiveness from the vantage point of managing conditions, which is different from starting from the technology. We need to think differently; it is a different paradigm. We have to take into account systems of care delivery. We have to think about organizational behavior, individual behavior, and systems aspects. We need to think about the patient's voice in these decisions and the patient's vantage point on benefits. We need to think about tests and treatments as components of the strategy, but not necessarily the object of evaluation or assessment. We need to understand that our evidence base has certain well-known deficiencies, and that we need to improve it, because if we are to stand on the shoulders of giants and see further, we need to improve that base. We see that the politics around cost is very, very difficult in our culture, but I think we need to grapple with it: value and affordability are intertwined, and all stakeholders in this healthcare system are going to have to exercise stewardship in order to make it sustainable. Thank you.

Thank you very much. I think we're running a little bit behind. So if there are burning questions, we'll take one or two, and if there are no burning questions, either there are burning questions that everybody is ignoring because of time, or it's the post-lunch torpor. We'll not figure out which one it is, but we'll have the next talk instead.