Great. So I'm delighted to be here to give this short talk on perspectives on the future of evidence synthesis, in my role as editor-in-chief of the Campbell Collaboration, for the SMART conference. First, I want to declare my interests. My main role is as editor-in-chief of the Campbell Collaboration. I'm also involved in various Cochrane groups and have had funding from CHR, WHO and NIHR, though not for this talk.

So why do we need evidence synthesis? You're probably all already converted, and some of you may have seen this quote: good intentions are not sufficient for selecting policies. Well-intentioned interventions can in fact cause more harm than good. Scared Straight programs are one example of this from Campbell systematic reviews, where juvenile delinquents were exposed to prison and this actually resulted in more crime. In the social sector, almost 80% of programs don't work. So we need evidence synthesis so we can identify those that do work, identify how they work, in which circumstances and for which populations, and how to make them work better. Evidence synthesis is a way to bring together everything we know on one question.

The third reason for evidence synthesis is collecting all available evidence. If we didn't do this, we could reach the wrong conclusion. The Cochrane logo is a perfect example of this. It shows a meta-analysis of trials of corticosteroids for women about to deliver prematurely, where each line represents a single study and its confidence interval. In 1972, the first trial showed an effect on mortality, but subsequent trials, which you can see in the middle, showed statistically non-significant effects. When a meta-analysis was finally done in 1991, it showed a 30 to 50% reduction in the mortality of the children.

So where are we now with systematic reviews? Some of you may have seen this slide by John Ioannidis.
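The arithmetic behind a forest plot like the Cochrane logo is standard inverse-variance pooling. The Python sketch below uses made-up trial results (the log relative risks and standard errors are hypothetical, chosen only for illustration): each trial's 95% confidence interval crosses zero, yet the pooled estimate shows a clear reduction, which is exactly the point the logo makes.

```python
import math

# Hypothetical trials: (log relative risk, standard error).
# Each trial's 95% CI crosses 0 on its own, i.e. it is not
# statistically significant in isolation.
trials = [
    (-0.40, 0.30),
    (-0.35, 0.25),
    (-0.30, 0.28),
    (-0.45, 0.35),
    (-0.38, 0.26),
]

# Inverse-variance fixed-effect pooling: weight each trial by 1/SE^2.
weights = [1 / se**2 for _, se in trials]
pooled = sum(w * lrr for (lrr, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"pooled RR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
# → pooled RR = 0.69 (95% CI 0.54 to 0.89)
```

With these illustrative numbers, no single trial is significant, but the pooled relative risk of about 0.69 (a roughly 31% reduction) excludes 1.0, mirroring how the combined corticosteroid trials revealed a benefit that the individual studies could not.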
There are many, many systematic reviews and meta-analyses, often on the same topic and with differing findings. As a more recent example, hydroxychloroquine for COVID-19, which we know is not effective, has over 176 reviews, and that's just in the last year; this is from the Epistemonikos database.

So what do I think about the future of evidence synthesis? I think we need to think about four things: timeliness, replicability, stakeholder engagement, and the evidence ecosystem.

First, timeliness. The time to publication of Cochrane reviews was 1.63 years; for Campbell, it's about the same. A quarter of reviews are still not published after seven years, and it takes almost three years for updates to happen. So how can we improve? Again, most of you are the converted: there are many ways to bring automation into systematic reviews, as well as crowdsourcing to bring many hands to bear. James Thomas from the EPPI-Centre published an article three years ago on how automation could be used in each step of a systematic review, and things have obviously moved on since that article. In Campbell, we're working on evidence and gap maps, which map all the evidence on a topic. Each of the circles in this graph represents studies or systematic reviews on homelessness, and this map was actually used to complete three systematic reviews in less than a year, just by making all of the studies discoverable. One example is a review on discharge planning for people who are homeless. So, for future-proofing timeliness: Campbell is working on evidence and gap maps, and we're also developing a policy paper with Evidence Synthesis International to bring together the different organizations that do evidence synthesis. And the hackathon, which I want to return to under each of my topic areas, is working on connected tools in R to expedite reviews and make them more efficient.

Replicability. Most of you will have heard about the reproducibility crisis.
This slide is pretty old now, six years, but we still have this issue; especially in the social sector, the replicability of primary studies is a problem. But what about systematic reviews? Colleagues and I led a paper on when to replicate, or not replicate, systematic reviews. Through a consensus- and evidence-driven process we came up with four criteria: first, that the question is high priority; second, that replication will have an impact on uncertainty about the decision; third, that it will have a large population impact, whether benefit or harm; and fourth, that the benefits outweigh the opportunity costs, that is, the other things that could be done with the time and resources.

Another way to think about replicability in systematic reviews is how we make our reviews more open, and you'll probably hear more about this from Neal Haddaway. This is his framework for open synthesis: we need to think about much more than just sharing data, but also open discovery, open methods, open source, open code, open peer review and open education. So what are we doing for future-proofing? In Campbell, we're encouraging open synthesis and hoping to move to expecting data sharing this year. In the hackathon, we held a series of discussions in 2019 about how to promote open practices with academic incentives, for both primary studies and systematic reviews.

Stakeholder engagement. Again, you've probably heard a lot about this. There are many levels of stakeholder engagement, but co-production remains at the top. In systematic reviews, we're often thinking so much about the details of getting the review done that we sometimes fail to include the people who are affected by the review questions. For example, we currently require stakeholder engagement in the development of our evidence and gap maps, especially in developing the framework of outcomes and interventions that are important to stakeholders.
We also have four systematic reviews registered on how to engage stakeholders and on the impact of engagement on systematic review outcomes. In the hackathon, there is ongoing work to support engagement, such as crowdsourcing, especially for the screening and data collection steps.

I also want to raise the issue of the evidence ecosystem, which I'm sure you'll discuss at this conference. The evidence ecosystem goes beyond systematic reviews: we also have the public that uses reviews, we have the primary studies, and we have the things that collate the findings of reviews, such as guidelines, checklists and portals. At the hackathon in 2019, we carried out some foresight planning about the ecosystem: how to improve it and how to stave off some of the possible negative future scenarios. This is a report we published last year in Nature, led by Shinichi Nakagawa. You can see that we have the public, the stakeholders, at the beginning and the end of the process, and that we also ask how to make primary research more available for synthesis, and how to make syntheses more available to the public.

Another way of thinking about the ecosystem is that systematic reviews are part of a bigger picture. In this pyramid developed by Howard White, the CEO of the Campbell Collaboration, systematic reviews sit fairly low down. What he postulates in his paper is that decision makers actually need more packaged evidence: things like evidence portals, guidelines and checklists, which tell decision makers what the policy implications of the evidence are. So we can't stop at systematic reviews. One example of a portal is the Education Endowment Foundation's teaching and learning toolkit, which has boiled systematic reviews down into three numbers: cost from 1 to 5, evidence strength from 1 to 5, and impact in months of educational progress.
And you can see how this would be immediately very appealing to decision makers, but it is extremely labor intensive for the toolkit developers to come up with a formula to summarize these numbers. So what are we doing for the ecosystem? In Campbell, we're building links with portals such as the Education Endowment Foundation's, so that they are built on rigorous systematic reviews. We're also exploring guideline development in the social sectors. And at the hackathon, we looked at foresight planning on how to improve those links across the ecosystem.

So, the future of evidence synthesis: I think we have an agenda of improving timeliness and replicability, engaging with stakeholders, and thinking about our role in the evidence ecosystem. I hope that you'll join all of us in the evidence revolution by taking these steps together. The hackathon is one way; we'd also really invite you to join Campbell and decision makers at the What Works Global Summit, which will be held online in October 2021.