Thank you. I must thank Ned first, because when I was preparing the talk I was wondering how much detail and how many specific examples I should get into. I decided not to, hoping that he would cover some, and he did; so thanks, Ned. In my talk I will spend just a few minutes on the background, not discuss the many things that Ned has already mentioned, and hopefully move the conversation to what AHRQ is doing, and to the needs and barriers we are facing right now that need to be overcome.

So, the background. The first piece of context: we have numerous reports, whether from EGAPP, the U.S. Preventive Services Task Force, the NIH state-of-the-science conferences, or many other guideline developers and systematic reviews, showing that there are large gaps in our knowledge of the impact of therapeutics, and especially of diagnostics, on patient outcomes in real-world clinical practice. This is not just for rare diseases, although it is more so for rare diseases; it is true even in common diseases. So that's one challenge we face right now.

The second, which Ned mentioned as marginal benefit, is this issue: especially for common diseases, we don't lack treatments.
We don't lack diagnostic tests. There are plenty, whether for treating high blood pressure, for lowering cholesterol, or for treating osteoporosis; we don't have a shortage of drugs. What we need to know, for anything new, is the added value of this new thing: a new technology, a new drug, a new test. And everyone who needs this information has to be sure the information on the benefits and the harms is valid and credible, regardless of the context of the decision: whether it's a clinician with a patient walking into the clinic, a guideline developer, a payer who wants to make a coverage decision, or a federal agency that wants to make a regulatory decision. There may be some other aspects beyond benefits and harms that they consider, but this is certainly the critical element.

Another issue we face, and there are numerous examples of this, is that for many diseases, even common diseases, the natural history and the pathogenesis are often incompletely understood. This matters when you decide whether you are studying surrogate markers, whether they are actually surrogate markers, and, if they are, whether an improvement in a surrogate marker will translate to a benefit in a health outcome. There have been numerous examples where that hasn't panned out. Even in a common disease like osteoporosis: sodium fluoride increases bone mineral density, but it doesn't decrease fracture risk, and that is really what the patient cares about, not that their bones are dense, but whether they actually have fractures. The same is true for screening for prostate cancer. There are many examples where the natural history of the disease and the unknowns limit the ability of a guideline developer to say clearly that there are more benefits than harms.

So what are the reasons we are facing this? I'll put two points across.
One, I think, is that there are limitations in our existing infrastructure capabilities. The electronic databases that we have don't talk to each other; the information is siloed, and often it's not the right kind of information. So we have problems that need to be overcome from the infrastructure point of view. It's also partly to do with the study methods. Whether it's observational studies or randomized controlled trials, depending on the question being asked, there are often issues that can lead to bias and confounding that affect the validity of the results, and you want the results to be valid and generalizable. And the last point, of course (for several reasons; we could have a long discussion on this), is that the goals of biomedical researchers are not typically aligned with those of clinical providers.

So with this context, one of the challenges we face is: can we improve our health care delivery infrastructure so that we can use it for research, for improving quality of care, and for new information like genetic tests?

The other thing that has been briefly mentioned is comparative effectiveness research. I won't go into the definition in detail; this is the one the Federal Coordinating Council came up with. I'll just highlight three things in it that I think are important. One is that in comparative effectiveness you are looking at the benefits and harms of different interventions. So it's not against a placebo, it's not against doing nothing; it's actually comparing different alternative interventions, whether diagnostics or therapeutics, in a real-world setting, which is important.
It's not an artificial, highly selected patient population in highly selected clinical settings, where you don't know whether you can generalize the results; it's actual real-world practice. And the last part of the definition, which I think is important, is that we are doing this to improve health outcomes. Not surrogate markers, not creating new knowledge for its own sake, but actually improving the quality of life and the care of the patient.

What AHRQ has done in the past several years, and this started with the Medicare Modernization Act, was to create a new program called Effective Health Care, focused on comparative effectiveness. The four goals of this program are: to create new knowledge; to review and synthesize existing knowledge, which is something we have been doing for a long time (the Evidence-based Practice Center reviews that Ned mentioned are part of what we use for reviewing and synthesizing existing knowledge); to translate and disseminate the findings, including tools such as clinical decision support tools and decision aids; and to train and build capacity in this field, which is still new.

I have only one slide on genomics projects, but it is there to tell you that AHRQ has not been inactive in this field. I mentioned the Evidence-based Practice Centers and the EPC reports, which have helped many different guideline developers: EGAPP, the U.S. Preventive Services Task Force, the NIH state-of-the-science conference on family history, CMS and their MedCAC process, CDC, and of course topics that get nominated by clinical societies. We have also done work in creating new knowledge.
We funded a randomized controlled trial, this was at the Marshfield Clinic, comparing a warfarin gene-based dosing calculator with a clinical dosing calculator alone; that was published in Genetics in Medicine. There are two add-on genomic projects in the PROSPECT studies, and I'll tell you in more detail what PROSPECT stands for. We also created a new computer-based clinical decision support tool for assessing BRCA mutation risk in the primary care setting. This was done because the U.S. Preventive Services Task Force had recommended that, in primary care, women at higher risk should be referred for appropriate counseling and testing. The challenge is that the primary care clinician does not have the time, and some can argue the skills, to take the detailed cancer family history needed to know what a woman's BRCA risk is; so we created a tool for that. It's not live, because we spent most of our resources creating the tool rather than validating it: we thought there was much more knowledge about what to do in primary care, and it turns out there wasn't. So we now have a collaboration with CDC to do bigger studies and get a sense of how well this tool performs in the real world.

Then we also had two, I guess, conceptual reports, as I would call them. One was done in collaboration with CDC to look at the existing infrastructure in the U.S. and to ascertain how well we can use it to look at the utilization of genetic tests or the outcomes of genetic tests.
Another one, which we released a few months ago, looked at analytic validity and at quality rating and evaluation frameworks. This was a report building on the work that EGAPP has done, that the Preventive Services Task Force has done, and that CDC has done with the ACCE framework, and on the older Fryback-Thornbury framework for evaluating diagnostic tests. The report essentially looked at the different clinical contexts and scenarios in which you would use a genetic test, who the audience and the user are, and the most important questions that should be addressed in an evidence review.

Now to our work on creating new infrastructure. We started two pilot projects back in 2007 on distributed research networks. For those not familiar with distributed research: the traditional model is that all the participating sites and organizations send their data into one large centralized database. There are issues with that, both about the quality of the data and about the privacy and confidentiality of the information in it; people are always nervous about giving their data to an unknown centralized entity that can use it at any time in the future. One way around this is distributed research, where the data and the databases reside in the different clinical organizations, which partner only on an as-needed, per-project basis to share selected information, so that you are not putting all the information in one repository. This gives you the ability to connect different electronic medical records and different databases, and it overcomes some of the privacy and confidentiality concerns. So we funded two projects. One was the DARTNet project, from the University of Colorado.
They linked six different EMRs in the first go-around, linked the EMRs with claims databases, pharmacy databases, and clinical lab databases, and showed that this can actually be done, and that you can also collect patient-reported outcomes using this linkage, to improve the quality of care and for comparative effectiveness research. The other was to enhance an existing collaboration, the HMO Research Network, which had already spent many years building its virtual data warehouse. The challenge was: can you actually get the virtual data warehouses from the different organizations to talk to each other and generate the information? We published this, two years ago, in the Annals of Internal Medicine, and we learned from both the successes and the challenges of these projects.

Our goal was to build on this and create new systems that are multi-purpose: not just for research, but also for quality improvement, disease surveillance, and clinical decision support. These systems are dynamic, so it's not a one-time static data entry you can't do anything with; you can go back, add new fields, and change the data as needed. They need to be electronic, so they are based on EMRs or EHRs from the get-go, and they can collect prospective data. This spans several of the AHRQ portfolios, which is just to tell you that this has widespread interest at AHRQ and is a new multidisciplinary effort.

Then came our good fortune in getting the ARRA funding, which, for those of you who haven't followed it, was $1.1 billion for comparative effectiveness research. Out of this, about a hundred million was spent on building these new systems. I had mentioned PROSPECT earlier.
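As an aside, the distributed-query model behind these pilot projects can be sketched in a few lines. This is a toy illustration under invented assumptions, not actual DARTNet or HMO Research Network code; the names (`Site`, `Patient`, `ldl_summary`) are made up for the example. The property it demonstrates is the one described above: row-level records never leave a site, and only aggregate results travel to the coordinating center.

```python
from dataclasses import dataclass

# Hypothetical patient records held locally at each site.
# Fields are invented for illustration; no real schema is implied.
@dataclass
class Patient:
    age: int
    on_statin: bool
    ldl: float

class Site:
    """A participating organization: raw records never leave this object."""
    def __init__(self, name, records):
        self.name = name
        self._records = records  # stays local to the site

    def run_query(self, query):
        """Execute a query function locally; return only its (aggregate) result."""
        return query(self._records)

def ldl_summary(records):
    """Example distributed query: returns counts and a mean, not row-level data."""
    treated = [p.ldl for p in records if p.on_statin]
    mean = sum(treated) / len(treated) if treated else None
    return {"n": len(treated), "mean_ldl": mean}

sites = [
    Site("clinic_a", [Patient(61, True, 102.0), Patient(55, False, 140.0)]),
    Site("clinic_b", [Patient(70, True, 98.0), Patient(66, True, 110.0)]),
]

# The coordinating center distributes the query and pools aggregates only.
results = {s.name: s.run_query(ldl_summary) for s in sites}
pooled_n = sum(r["n"] for r in results.values())
print(results, pooled_n)
```

The design choice mirrors the privacy argument in the talk: because only the query function moves and only summaries come back, no centralized repository of identifiable data is ever created.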
This is one of the RFAs I took the lead in writing, on prospective outcome systems that use patient-specific electronic data to compare tests and therapies. We awarded six R01s from it. We also came up with two other RFAs, and because of the time crunch I didn't have enough time to think of creative new acronyms, so these are just as-is. One was on scalable distributed research networks; we funded three R01s there. The third was on enhanced registries that can be used both for quality improvement and for comparative effectiveness research. The fourth RFA said: it's all well and good to do the research, but can you actually bring the lessons learned into a convening forum, so that you can advance the national dialogue on analytic methods, clinical informatics, and data governance issues? So we awarded AcademyHealth a cooperative agreement to create a new electronic data methods forum.

The common themes across these R01 projects: the requirements were that they had to be able to link multiple health care delivery sites, in this case inpatient care, outpatient care, specialty clinics, nursing homes, long-term care. These had to be different care delivery sites; just linking two clinics in one academic institution is not enough. They needed to connect multiple databases, be it different electronic health records, be it linking with claims databases or pharmacy databases. They needed to focus on priority populations and conditions, so that concerns about underserved populations and the generalizability of the results would be addressed. They needed to demonstrate that they can collect prospective patient-centered outcomes and use them for comparative effectiveness research, so that you can ultimately get valid and generalizable conclusions. Another theme we stressed was a focus on governance and stakeholder engagement, all in an effort to make this sustainable.
We knew the ARRA funding was a one-time large bolus, but if the projects do things that are valuable to the different stakeholders, be it patients, providers, payers, clinical guideline developers, or professional societies, then the hope is that once the initial investment is done there will be support to sustain this beyond the three-year timeline of these projects.

Now, the other special features of the registry and distributed projects. For the registries, the requirement was to build on an existing registry, because the three-year timeline did not allow starting a new registry, and then to show they can be used for comparative effectiveness research. Another requirement was to do both comparative effectiveness research and quality improvement. You've heard some of the challenges about the tensions between research and clinical practice; the same tensions exist between people who do quality improvement and people who do research. Generally, quality improvement folks don't have to worry about an IRB, but on the other hand they're not looking to publish findings or to get grant funding. So they do live in different worlds, and the question is: can you bring those two worlds together when you're building the registry, and make it sustainable and therefore, hopefully, scalable?

The other RFA focused on distributed research networks, where the emphasis was on building on multiple cohorts. We asked for at least four different cohorts covering at least two different unrelated conditions. This is in contrast to registries, which can often be disease-specific or patient-population-specific. But, all right, I guess I won't go into this now; there's nothing confidential here, so there's no reason for security on this slide. The other challenge, as you heard, is that it's one thing doing research; it's another thing trying to use the information in real-life clinical practice. So you need data you can get soon; you can't wait a few years and then say, okay, now what do I do with my patient?
So one of the challenges for these distributed research network projects was: can you get near-real-time data collection and analysis? And, as with the registries, can you make them sustainable and scalable?

Let me spend a couple of minutes on something I hope you can engage with: the EDM Forum. This is a central repository and resource for information on collecting prospective electronic clinical data, as is being done in all of these projects. There's a website, which I'll show at the end, that you can access as you want. The purpose is for the forum to collect and synthesize the lessons learned across all eleven projects, to engage the different stakeholders in the science, but also to learn from them what their needs and challenges are, and to build resources and tools to advance the science in this field. The activities of the forum cover analytic methods; clinical informatics; data governance, as I mentioned, which includes security, privacy, and access to information; and there's a new subcommittee on the learning healthcare system, which covers what I would call non-research issues: quality improvement, clinical decision support, and meaningful engagement. This is the organizational chart; I'll just leave it up as my last slide. The PI is Erin Holve at AcademyHealth. There's a steering committee, and Ned Calonge, whom you have here, is its chair. The eleven projects and their investigators are part of the forum. And I'll stop there.

[Question about the mission of PCORI.] Certainly. Well, AHRQ of course predates PCORI; for the longest time, AHRQ's mission has been the effectiveness, safety, efficiency, and quality of health care. From our understanding, and PCORI is still evolving, PCORI is focused primarily on patient-centered outcomes. So what happens with issues that are not directly relevant to patient-centered outcomes?
It's not clear whether PCORI is going to take those on or not. There is certainly collaboration between the two: PCORI has funded, or will be funding, AHRQ activities on dissemination and on training, so there will be some amount of collaboration. But what PCORI will actually do down the road hasn't yet been clarified. I think, from what I heard last time, we will know more in January about their specific topic areas and projects and the mechanisms of funding for those.

I'm going to speak both as a member of the methodology committee and as someone who was very involved with the stakeholders who worked to support PCORI, back when it was called the comparative effectiveness entity; that is, through the Blues. I think the intent is that the vision of PCORI, patient-centered outcomes research, incorporates comparative effectiveness but is larger, and will incorporate new kinds of information that add to it. So it includes that agenda and goes beyond it. What the priorities in the agenda will be is still being worked out by PCORI; the rules of the road are still being set. The methodology committee has a pretty strict task, which is to deliver a comparative effectiveness methods and guidelines report in May. I think there has always been the intent, at least on the part of the stakeholders funding PCORI (it is largely funded through payer funds, some through government funds), that this should amplify what AHRQ is able to do, not replace what AHRQ is able to do.
I think there is a high appreciation that what we often need is new primary evidence; so many systematic reviews and other efforts end with the conclusion that we really don't have the primary evidence. So this was seen as a vehicle to start to fund that primary evidence. There really are no entities now that have that as their mission or their interest. Sponsors going for registration are interested in their product, not in comparisons. The NIH, I think, is more infused with the spirit of comparative effectiveness, but it has not really seen that as its mission. This really is the one place where this important social objective can be lodged, and it's now enhanced with the broader vision of patient-centeredness.

[Question:] Thank you very much for the presentation. The big devil in comparative effectiveness research is channeling, or, put more simply, new drugs are given to slightly sicker people. I wanted to ask, with your methodological research, how you are getting on with that particular issue.

So, in the US there is the FDA labeling, which tells you the clinical scenarios in which a drug can and cannot be used, but there's also what we call off-label use, and comparative effectiveness research doesn't limit itself to only FDA-approved indications. So the main issue for comparative effectiveness research is: do you actually have the evidence, not on what a drug was originally approved for, but on what it's being used for now? If things have changed over time and that change has been captured in publications, then that forms the basis of comparative effectiveness research. But as to how well this is characterized: that's going to be the challenge. Many of the databases that we have, for example when you are doing observational studies, don't capture the severity of the disease or the test results, so it's very hard to know what type of patients were given these medications and whether they are comparable.
Those are all challenges that, once we get more clinical details in the databases and can link them, hopefully we can address.
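The channeling problem raised in the question (new drugs going to sicker patients) can be made concrete with a small simulation. This is a toy sketch with invented numbers, not the speaker's or AHRQ's methodology. It builds a cohort in which the two drugs are equally effective but the new one is preferentially given to sicker patients; a naive comparison then makes the new drug look harmful, while stratifying on a measured severity variable (exactly the clinical detail the answer says current databases lack) largely removes the bias.

```python
import random

random.seed(0)

def make_patient():
    """Simulate one patient: severity drives both treatment choice and outcome."""
    severity = random.random()                          # 0 = mild, 1 = severe
    new_drug = random.random() < 0.2 + 0.6 * severity   # sicker -> more likely to get new drug
    # True model: outcome risk depends on severity only; the drugs are equivalent.
    bad_outcome = random.random() < 0.1 + 0.5 * severity
    return severity, new_drug, bad_outcome

cohort = [make_patient() for _ in range(20000)]

def event_rate(patients):
    return sum(bad for _, _, bad in patients) / len(patients)

# Naive comparison: confounded by indication (channeling).
naive_diff = (event_rate([p for p in cohort if p[1]])
              - event_rate([p for p in cohort if not p[1]]))

def stratified_diff(cohort, n_strata=5):
    """Compare within severity strata, then pool; requires severity to be recorded."""
    diffs, weights = [], []
    for k in range(n_strata):
        lo, hi = k / n_strata, (k + 1) / n_strata
        stratum = [p for p in cohort if lo <= p[0] < hi]
        new = [p for p in stratum if p[1]]
        old = [p for p in stratum if not p[1]]
        if new and old:
            diffs.append(event_rate(new) - event_rate(old))
            weights.append(len(stratum))
    return sum(d * w for d, w in zip(diffs, weights)) / sum(weights)

print(naive_diff)               # large apparent excess risk for the new drug
print(stratified_diff(cohort))  # close to zero once severity is adjusted for
```

The residual bias after stratification is small but not exactly zero, since severity still varies within each stratum; finer strata, or propensity-score methods, shrink it further. The point matches the closing remark: without severity and test results captured in the databases, no adjustment of this kind is possible at all.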