Hello everybody, I'm Karise Neville and I'm based at the University of Leicester here in the UK. The Complex Review Support Unit, the CRSU, has a suite of web apps for evidence synthesis, and this short presentation will introduce one that is currently being developed by myself and colleagues Terry, Nicola and Alex. This is MetaImpact, and it aims to enable researchers to design future studies whilst considering the totality of current evidence. As with the other CRSU apps, many people work on these projects and their contributions are always appreciated. Here I'd like to give extra thanks to Alex for providing some of the content for this presentation.

So in research, when designing a new study, a balancing act goes on when considering how many people to recruit. If too few people are recruited, the study may be unable to detect an effect even if one exists, due to a lack of statistical power. If too many people are recruited, in other words, if the study could have detected the effect with fewer people, then some participants will have undergone treatment unnecessarily. Obviously this is even worse when the treatment assigned was inferior. Both scenarios are wasteful and unethical and should be avoided where possible.

In the UK, before a treatment is offered to the public, governing bodies must approve it. To aid decision-making parties, it's common practice for systematic reviews to be presented, where all the relevant evidence has been found and combined to give an overall picture. When there is a quantitative outcome of interest, a meta-analysis is also often conducted to review the evidence. With this in mind, there is a school of thought that, instead of powering new trials in isolation, researchers should anticipate their new study being added to the current body of evidence, and therefore power it to influence an updated meta-analysis with that study included.

Let's go through what this looks like statistically. Firstly, we need to conduct a standard meta-analysis of the current evidence in question. In this example, we have a meta-analysis of six studies with a pooled odds ratio of 0.8. Then, using parameters from that meta-analysis, a new study is simulated. This involves sampling a new study effect from a distribution defined by the meta-analysis; here, this is -0.15 on the log odds ratio scale. Then, by setting the probability of an event in the control arm at an estimated value, the probability of an event in the treatment arm can be derived; here, this is 0.18. Finally, using the binomial distribution, as we're working with binary data, combined with the probability parameters we've derived, we can simulate the number of events in each arm for a set sample size. Here, the sample size is 200 in each arm, and so we get 38 events in the control arm and 35 events in the treatment arm.

The next step is simply to redo the meta-analysis with the newly simulated study included, noting down whether the resulting meta-analysis gave the desired effect or not. To then estimate the power of the set sample size, 400 here, one simply repeats steps two and three, i.e. simulating a new study and updating the meta-analysis, a large number of times. The power is then calculated as the proportion of updated meta-analyses that gave the desired effect. In this example, we were looking for updated meta-analyses that gave a conventionally significant p-value, of which there were 304 out of the 1000 iterations, giving a power of 30.4%.
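To make that procedure concrete, here is a minimal sketch of the simulation loop in base R. The study inputs (yi, vi), the control-arm event probability and the pool() helper are illustrative assumptions, not values or code from the app itself; it also uses a simple fixed-effect inverse-variance model and a 0.5 continuity correction, whereas a random-effects version would sample the new effect from the prediction distribution instead.

```r
set.seed(42)

# Hypothetical current evidence: per-study log odds ratios and variances
yi <- c(-0.35, -0.10, -0.25, 0.05, -0.30, -0.40)
vi <- c(0.04, 0.06, 0.05, 0.08, 0.05, 0.07)

# Fixed-effect inverse-variance pooling (assumed helper, not the app's code)
pool <- function(yi, vi) {
  w <- 1 / vi
  c(est = sum(w * yi) / sum(w), se = sqrt(1 / sum(w)))
}

base_ma <- pool(yi, vi)

p_control <- 0.20   # assumed control-arm event probability
n_arm     <- 200    # per-arm sample size being tested
n_sim     <- 1000

significant <- logical(n_sim)
for (i in seq_len(n_sim)) {
  # Step 2: sample a new study effect from the meta-analysis distribution
  theta_new <- rnorm(1, mean = base_ma["est"], sd = base_ma["se"])

  # Derive the treatment-arm event probability from the sampled log OR
  odds_t  <- (p_control / (1 - p_control)) * exp(theta_new)
  p_treat <- odds_t / (1 + odds_t)

  # Simulate event counts in each arm with the binomial distribution
  e_c <- rbinom(1, n_arm, p_control)
  e_t <- rbinom(1, n_arm, p_treat)

  # Log OR and variance of the simulated study (0.5 correction avoids zero cells)
  y_new <- log(((e_t + 0.5) / (n_arm - e_t + 0.5)) /
               ((e_c + 0.5) / (n_arm - e_c + 0.5)))
  v_new <- 1 / (e_t + 0.5) + 1 / (n_arm - e_t + 0.5) +
           1 / (e_c + 0.5) + 1 / (n_arm - e_c + 0.5)

  # Step 3: update the meta-analysis with the simulated study included
  upd <- pool(c(yi, y_new), c(vi, v_new))
  significant[i] <- abs(upd["est"] / upd["se"]) > qnorm(0.975)
}

# Power = proportion of updated meta-analyses reaching significance;
# this estimate plays the role of the 30.4% figure in the talk's example
power <- mean(significant)
```

Re-running a loop like this for different values of n_arm is what traces out a power curve.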
The final step is then simply to adjust the sample size until the desired level of power is achieved.

Right, so as you can tell, the method I've just gone through can be quite complex to understand and follow, and furthermore, it can be computationally demanding. Therefore, our aim was to build a user-friendly web app to allow researchers to utilize these methods easily. As with our other web apps, we worked with the statistical software R and the Shiny package, and as a result, MetaImpact was born. As some elements of the app are hard to comprehend, particularly the sample size calculator, additional features were added. Small features include information pop-ups and help buttons to take the user through each part of the calculator. But bigger, more educational features are set to be incorporated to teach the user how the method works and arrives at the results being presented.

One such feature will be to incorporate the Langan plot. The setup of the Langan plot is similar to a funnel plot: along the x-axis we have the outcome, and along the y-axis is the standard error. The diamond represents the pooled effect from the meta-analysis, with lines extending to represent the 95% prediction interval, that is, the interval within which we'd expect new study effects to lie. Every possible position on that plot, i.e. any new study with a certain odds ratio and standard error, is shaded according to how that new study would affect the updated meta-analysis. So here, the darker shaded area shows where new studies would cause the meta-analysis to give a significant pooled effect, and the white areas show where they would not. Now let's consider all the tiny individual points that you can see on this plot. They each represent a simulated study of the set sample size being tested, generated according to the method we've been describing. Therefore, one can visually see how the power has been calculated: it's simply the proportion of those dots that fall in the shaded area. A sketch of how this shading can be computed is given at the end of this section.

Now, at the time of recording, this app is still in development. Currently, it can run a frequentist meta-analysis and calculate the power of a new study of a certain sample size from that meta-analysis. Furthermore, the app can produce power plots for multiple sample sizes and for fixed- or random-effects meta-analyses. Further features that we are aiming to complete before releasing the first version of MetaImpact include adding functionality to run the meta-analysis within a Bayesian framework and incorporating an interactive version of the Langan plot described previously. Looking further towards the future, a later version may consider how these methods can be extended to network meta-analysis.

Once the first version is ready to roll, we also plan to assess the potential benefit of MetaImpact by utilizing past reviews. Specifically, this will involve taking all the studies in a past review except the most recently added study, and plugging them into MetaImpact. As described, MetaImpact will then simulate new studies based on the reduced review, redo the meta-analyses with the simulated studies included, and obtain new results. An optimal sample size for a new study can then be estimated, such that it influences the meta-analysis. This new sample size, along with the new pooled effect, will then be compared with the original review and the study that was removed, to assess whether using MetaImpact would have been more beneficial.
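As promised, here is a sketch of how the shading behind a Langan-style plot could be approximated with a simple grid computation. It reuses the illustrative yi, vi and pool() from the earlier snippet; this is an assumption about how such a plot might be built, not MetaImpact's actual implementation.

```r
# For a grid of hypothetical new-study results (log OR on the x-axis,
# standard error on the y-axis), mark whether adding that study would
# make the updated pooled effect conventionally significant.
grid <- expand.grid(
  y_new  = seq(-1.5, 1.0, length.out = 100),
  se_new = seq(0.01, 0.50, length.out = 100)
)

grid$sig <- apply(grid, 1, function(g) {
  upd <- pool(c(yi, g["y_new"]), c(vi, g["se_new"]^2))
  abs(upd["est"] / upd["se"]) > qnorm(0.975)
})

# Darker shading where `sig` is TRUE; overlaying the simulated studies from
# the power calculation as points makes the power visible as the proportion
# of points falling in the shaded region.
library(ggplot2)
ggplot(grid, aes(y_new, se_new, fill = sig)) +
  geom_raster() +
  scale_y_reverse() +   # funnel-plot convention: standard error decreases upward
  labs(x = "Log odds ratio", y = "Standard error", fill = "Significant")
```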
Once released, we believe that MetaImpact has the potential to benefit patients and research by encouraging ethical sample sizes and reducing wasteful trials. So please do keep an eye out for its release. You should be able to find out when by following our Twitter or GitHub. Thank you.