Hello, I'm Clareece Nevill, based at the University of Leicester in the UK, and in this tutorial I'll be introducing MetaInsight, an interactive web-based app for conducting network meta-analysis (NMA). Many researchers have worked on developing the CRSU suite of apps, but those who have actively developed MetaInsight include myself, Yi Chou, Naomi and Rhiannon. Acknowledgements also go to all members of the CRSU who have tested the app and provided valuable feedback throughout, and to the funder for this work, the National Institute for Health and Care Research in the UK. Lastly, thanks to Naomi for contributing to the slides and content for this tutorial.

So MetaInsight is part of a suite of online evidence synthesis apps developed by the CRSU. Separate ESMARConf 2023 tutorials are available on MetaDTA and MetaBayesDTA, which are for conducting meta-analysis of diagnostic test accuracy studies under a frequentist or Bayesian framework respectively. I'll also be giving a short presentation at this conference on a new app that integrates evidence-based research, called MetaImpact.

Let's briefly go over some NMA terminology. Network meta-analysis is recommended where there are multiple intervention options for a health condition, combining quantitative evidence that has been systematically collected from a review. This image is a network plot, and it summarises how all the evidence relates to one another. Each treatment has a node whose size relates to the total number of people given that treatment. Wherever there is a line between two treatment nodes, at least one study directly compared those two treatments; we refer to that as direct evidence. The thicker the line, the more studies there were for that comparison. The beauty of NMA is that, with this setup, we can calculate the treatment effect between any two treatments. This can only be done for connected networks, i.e.
if there were some treatments on the outskirts here that didn't have any lines joining them to the main network, then that's classed as disconnected and can't be analysed. For a treatment comparison with no studies looking at it directly, i.e. no line between the nodes, an NMA can still estimate that treatment effect by going along the network lines, producing what's referred to as indirect evidence. Then, for every treatment comparison, an NMA will combine all the direct and indirect evidence together to present the resulting treatment effect.

So MetaInsight is a web-based app for conducting NMAs that has been built in R using the Shiny package, and is hosted on the web using the shinyapps.io server. Within the user interface, researchers can upload their own datasets and then click through the various tabs to conduct their NMA. MetaInsight has the capacity to run NMAs under frequentist or Bayesian frameworks using the existing R packages netmeta and gemtc respectively.

MetaInsight is completely free to use, with two access options: you can access it online via any web browser, or you can import the code from GitHub and run it locally on your own machine with R. You don't need any coding or programming knowledge, as the entire user interface is a point-and-click system. The code used under the hood is from established, well-used NMA packages within R, so you can be confident in the analysis methods being used. For sensitivity analyses that consider the impact of individual studies, MetaInsight can do this in real time: researchers can remove certain studies using a simple tick-box system and quickly see the differences, with the original analysis and the new analysis side by side. MetaInsight also includes a wealth of data visualisations, including different network plots and newly developed graphical displays for presenting how treatments rank.
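MetaInsight's NMA engines (the R packages netmeta and gemtc) handle all of this for you. Purely as an illustrative sketch of the idea, and not code from the app, the simplest form of indirect evidence, comparing treatments A and B through a common comparator C (the Bucher method), can be written as:

```python
import math

def indirect_effect(d_ac, se_ac, d_bc, se_bc):
    """Bucher-style indirect estimate of A vs B via a common comparator C.

    d_ac, d_bc: direct effect estimates (e.g. mean differences) of
    A vs C and B vs C, with their standard errors.
    """
    d_ab = d_ac - d_bc                      # effects subtract via the common comparator
    se_ab = math.sqrt(se_ac**2 + se_bc**2)  # variances (uncertainty) add
    return d_ab, se_ab

# Hypothetical numbers purely for illustration:
d, se = indirect_effect(-1.5, 0.4, -0.5, 0.3)
print(d, se)  # -1.0 0.5
```

A full NMA then combines such indirect estimates with any direct evidence for the same comparison, which is exactly what the app does across every pair of treatments in a connected network.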
Finally, MetaInsight has been available for around five years now, becoming an established app with around 650 user hours per month, and many research groups have used MetaInsight to conduct NMAs that have gone on to be published in peer-reviewed journals. Now, before we jump to the demo, I'd just like to share a note of caution: we did not create this app to replace statisticians or their expertise in NMA. There are lots of elements within NMA that need to be understood to properly conduct and interpret an analysis. We therefore highly encourage users without such knowledge to have someone on their team, or someone they can consult, who understands the intricacies of NMA.

So, if you want to play along with the demonstration, please do use the links on the screen. When you first come onto MetaInsight, the first thing you'll notice is this pop-up window, which asks for consent for Google Analytics to record where you're accessing MetaInsight from and how long you're using it for; this just helps us with our usage statistics. So if you click consent, that would be great, and then this is the front page. You'll see up here which version of MetaInsight we're currently on, and the first thing you need to do is decide whether the data you're using is continuous or binary. For this tutorial I'm going to stick with continuous. Over here on the right you can also see some of our latest updates and links to the full update history, and if you scroll down you'll see some acknowledgements, our publications and our funding statement.

So let's load up some of our own data. To do this you just need a CSV file: click select and choose your file, like in any other program. MetaInsight also has in-built data, so if you want to play with that first, that's fine. When you click 'view data', you can see that at the moment we're looking at the in-built data.
Once you load your own dataset, you should then be able to see that dataset there. There are two options for loading the data: long format or wide format. Long format is where each row of data represents one treatment arm in a study, whereas wide format is where each row represents one study, with extra columns, as you can see here, for each treatment arm. If you follow the instructions for whichever format you go with, and make sure that the columns are titled the way we've specified, then hopefully you shouldn't run into any errors.

Once you've uploaded the data, you then need to add in your treatment labels. You'll see here that we ask people to number their treatments rather than name them when they upload the data, and then you can add the names here on the left. A couple of notes: the names can only contain letters, digits and underscores, so no spaces please, and there needs to be a tab between the number and the label. Sometimes when you copy and paste it over, or type it, it'll end up as three spaces; it needs to be a tab to work.

So once you've got it all set up, let's analyse some data. On the left here are your options. The first option is what kind of outcome you're looking at: mean difference or standardised mean difference, and for binary data you'll have the options of odds ratio, risk ratio and risk difference. The second option relates to ranking your treatments: you just need to tell MetaInsight whether small outcome values are desirable or something you want to avoid. The third is what kind of meta-analysis model you want to run: a random-effects or a fixed-effect model. And this last bit here is the sensitivity analysis functionality I was chatting about in the slides.
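To make the long/wide distinction concrete, here is a small sketch of reshaping a wide-format continuous dataset (one row per study) into long format (one row per arm). The column names here are hypothetical, for illustration only: use the exact headers specified in MetaInsight's upload instructions.

```python
import pandas as pd

# Hypothetical wide-format data: one row per study, numbered columns per arm.
# (Column names are illustrative, not MetaInsight's required headers.)
wide = pd.DataFrame({
    "Study":  ["Smith 2010", "Jones 2012"],
    "T.1": [1, 1], "N.1": [50, 40], "Mean.1": [2.1, 1.8], "SD.1": [1.0, 0.9],
    "T.2": [2, 3], "N.2": [48, 42], "Mean.2": [1.4, 1.1], "SD.2": [1.1, 0.8],
})

# Long format: one row per treatment arm.
long = pd.wide_to_long(
    wide, stubnames=["T", "N", "Mean", "SD"],
    i="Study", j="arm", sep=".",
).reset_index()

print(long.sort_values(["Study", "arm"]))
```

Either format carries the same information; the app just needs to know which layout you've chosen so it can read the arms correctly.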
If you just tick whichever studies you want to exclude, then that becomes your sensitivity analysis, run automatically. On the left here it will show your original analysis, and on the right will be your sensitivity analysis. On this data characteristics tab you can see we now have 21 studies rather than 24. You'll also see that the network is connected in both the original analysis and the sensitivity analysis. Sometimes removing studies causes the network to become disconnected, and you may then need to look at that a bit closer and perhaps remove some other studies, so that you're left with just one main analysis and not two disconnected networks. Tab 1B basically just shows your study results: for every pairwise comparison, it shows each individual study result for that treatment comparison. This is a plot you can download if you want it. And finally, in the data summary are the network plots. These are as I described in the slides; we have two different styles for you to look at, and you can download each of them.

So let's get into some analysis. Tab 2 is where you do your frequentist meta-analysis, and this is all done straight away. We can see the two forest plots already, and we can compare them. The forest plot shows the treatment effect of every treatment versus a reference treatment. The way you decide your reference treatment is by labelling it with the number one in your data, so think about that carefully when you put your data together: whatever is number one needs to be your reference treatment. Here we've put that as placebo. There are some extra options: you can change the limits of the x-axis depending on your needs, and you can also download the plot. At the bottom here there's also some extra information, such as the between-study standard deviation.
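The frequentist analysis itself is performed by the R package netmeta. As a rough sketch of the arithmetic behind a single forest-plot entry (not MetaInsight's actual code, and with made-up numbers), a study's mean difference against the reference arm, and a simple inverse-variance fixed-effect pool across studies, look like this:

```python
import math

def mean_difference(m1, sd1, n1, m0, sd0, n0):
    """Mean difference of a treatment arm vs the reference arm,
    with its standard error (independent-groups formula)."""
    md = m1 - m0
    se = math.sqrt(sd1**2 / n1 + sd0**2 / n0)
    return md, se

def fixed_effect_pool(effects):
    """Inverse-variance fixed-effect pool of (md, se) pairs
    from several studies of the same comparison."""
    weights = [1 / se**2 for _, se in effects]
    pooled = sum(w * md for (md, _), w in zip(effects, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Illustrative numbers only:
md, se = mean_difference(2.1, 1.0, 50, 1.4, 1.1, 48)
pooled, pooled_se = fixed_effect_pool([(1.0, 0.5), (2.0, 0.5)])
```

A random-effects model additionally estimates the between-study standard deviation reported at the bottom of the tab, widening the pooled standard error accordingly.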
If you're interested in the treatment effects between any of the other pairs, you can come to tab 2B to find the results. The top table is your original analysis, the bottom table your sensitivity analysis, and it's basically a look-up table: you can look up any two treatments along the diagonal to find the treatment effect between them. The lower triangle of results refers to the NMA, i.e. combining the direct and indirect evidence, whereas the upper triangle is just the pairwise meta-analyses, i.e. showing you the direct evidence only.

Then finally, tab 2C is for you to assess the consistency assumption. This is again where you might need a statistical expert in NMA to help you. It tests an assumption that is key in NMA: when you take your indirect evidence along a network path, you want to check that it's consistent with what you would have obtained from just the direct studies. So it takes the direct information and the indirect information and compares them, and you want to look here for any small p-values. If you do find any, then you might want to take a closer look at your data to see if you can find any reasons for that inconsistency.

Great, so that's the frequentist analysis. If you want to do a Bayesian analysis, we go on to tab 3. Because this is a bit more involved, if you don't understand any of it, do ask a statistician. We have buttons to run these models, and you get your forest plot the same as in the frequentist analysis. Again, it's all the treatments in comparison with your reference treatment, which is placebo here. Again, you can change the limits of your x-axis and see some extra statistics. The Bayesian analysis also includes some model fit statistics; again, if you don't understand those, ask a statistician to explain how to use them. And then again, you can download those there.
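As a hypothetical sketch of what "comparing direct and indirect information" means (assuming a simple normal approximation, and again not the app's own code), a basic consistency check for one comparison could look like:

```python
import math
from statistics import NormalDist

def consistency_test(d_direct, se_direct, d_indirect, se_indirect):
    """Compare direct and indirect estimates of the same treatment contrast.

    Returns a z statistic and a two-sided p-value under a normal
    approximation; a small p-value flags potential inconsistency
    in that loop of the network.
    """
    diff = d_direct - d_indirect
    se = math.sqrt(se_direct**2 + se_indirect**2)
    z = diff / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

# Illustrative numbers: identical estimates give p = 1 (no evidence of inconsistency).
z, p = consistency_test(1.0, 0.3, 1.0, 0.4)
```

A small p-value here is a prompt to investigate, not an automatic verdict, which is exactly why we suggest involving someone who understands NMA.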
As with the frequentist analysis, if you're interested in the treatment effects for any pair, not just against the reference treatment, you can find them on tab 3B. Tab 3C is one of our newest features: the ranking panel. When it loads up, you'll find that the middle panel here is the focus, and it contains the results from ranking the treatments according to the NMA. This first plot here is the Litmus Rank-O-Gram. It has a line per treatment, and along the x-axis is the rank, so being ranked first means a treatment was ranked best, and being ranked last means worst. On the y-axis is the cumulative probability that the treatment achieved that rank or better. So this top point here is the probability that rimonabant was ranked first, the next point is the probability that it was ranked first or second, and so on. That means you ideally want a line that sits as near to the top left of the plot as possible, showing that the treatment performed best. We've also given the SUCRA statistic, which is basically the area under that curve, to help you interpret and compare the curves. So here, if you just want to look at the SUCRA, that is along there, and we've colour-coded it: green is better, red is worse.

Another plot, which is useful if there are a lot of treatments in your analysis, is the Radial SUCRA plot. Instead of seeing all of the rank probabilities, we just look at the SUCRA statistic, which takes all of the rank probabilities into account. We order the treatments from best to worst according to the SUCRA, in a clockwise direction, and the results are plotted radially: the further out towards the edge of the circle, and the greener the result, the better the treatment performed. For this plot we've also overlaid the network plot, as you can see, which helps with interpretation.
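The SUCRA statistic mentioned above is simply the area under that cumulative ranking curve, rescaled to lie between 0 and 1. As a small illustrative sketch of the calculation (the app computes this for you):

```python
def sucra(rank_probs):
    """SUCRA from one treatment's rank probabilities (rank 1 = best).

    Equals the area under the cumulative ranking curve, i.e. the mean of
    the cumulative probabilities over ranks 1..k-1, so a treatment certain
    to rank first scores 1 and one certain to rank last scores 0.
    """
    k = len(rank_probs)
    cumulative = 0.0
    total = 0.0
    for p in rank_probs[:-1]:   # cumulative probabilities for ranks 1..k-1
        cumulative += p
        total += cumulative
    return total / (k - 1)

print(sucra([1.0, 0.0, 0.0]))  # 1.0  (always best)
print(sucra([0.0, 0.0, 1.0]))  # 0.0  (always worst)
```

A flat curve (equal probability of every rank) gives a SUCRA of 0.5, which is why the colour coding runs from red near 0 to green near 1.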
So here you can see that even though rimonabant performed best according to SUCRA, it actually has a small node, so it wasn't tested on that many people compared to the other treatments, and there was only one study in which it was directly compared. It's relying on a lot of indirect evidence, which can sometimes lead to spurious ranks. So that is just to encourage people to really think about what's going on behind the results. You can also have a colour-blind-friendly version, and a simplified version where, if the network's getting a bit too much, you can collapse it all into the centre point here. If you want to see the actual rank probabilities as numbers, you can just click here and you get a table of the results and the SUCRA values; you can download those here, and you can download the plot as well.

On either side of the panel is some further information to help the user, again, think about the data behind the results. Quite often, if you get a ranking result, say first and second, you might think, ah, first is definitely better than second. But we encourage you to look at the relative effects and really think about the clinical differences between those treatments. And over here, this is to make you think about the evidence that's being used. At the moment it's just network plots, showing how the treatments are all connected, but we have plans to include other plots linked to, for instance, risk of bias, for you again to think more about the data behind the results. The panel we've been looking at has been for the original analysis; if you scroll down, you'll get the results for the sensitivity analysis.

Tab 3D is for the node-splitting model. This is a model that can be used if you're not sure whether the consistency assumption holds, so it's quite an advanced model.
Again, ask an expert in NMA whether this will be suitable for your data. You literally just click the button to run it; I'm not going to run it now because it takes quite a long time. Tab 3E has a bit more detail on your Bayesian results, so again this is for people who understand Bayesian analysis and Bayesian NMA a bit more. I think the key part here is the convergence plots: when you do a Bayesian analysis, you always need to make sure that your chains have converged, and these plots allow you to check for that.

Tab 3F is a deviance report. This produces some plots to help you check the model fit and check whether there are any outliers, or anything acting a bit fishy. I won't go through each of the plots; they've each got a description underneath of what they're showing and what you're looking for. But you'll find that these plots are all interactive, so if you hover over any point, it will tell you the data behind it.

And then finally, some more model details. Again, this is a more advanced option. You can see the model code that was used to fit the model; if you've used WinBUGS or Stan, this will look quite familiar, and you can download it. You can also look at the initial values that were used: by default we have four chains, and you can see the initial values that were put in for each, and download them. And if you want, you can actually download all of the simulations that were run for each chain, so if you want to create any other visualisation or any other results, you can download the data from the Bayesian analysis. The deviance details and the data from all the studies are also shown; again, it's a bit of an advanced option.

So that's all the main features. At the top here, we've also got some of our user guides.
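The convergence plots come from the app's underlying Bayesian machinery. Purely to illustrate the kind of check they support, here is a simplified Gelman-Rubin R-hat sketch (no chain splitting, assuming equal-length, non-constant chains; not the diagnostic code the app actually uses):

```python
from statistics import mean, variance

def gelman_rubin(chains):
    """Simplified Gelman-Rubin R-hat over several equal-length chains.

    Values close to 1 suggest the chains have mixed and converged to the
    same distribution; values well above 1 suggest they have not.
    """
    n = len(chains[0])                        # samples per chain
    chain_means = [mean(c) for c in chains]
    w = mean(variance(c) for c in chains)     # within-chain variance
    b = n * variance(chain_means)             # between-chain variance
    var_hat = (n - 1) / n * w + b / n         # pooled variance estimate
    return (var_hat / w) ** 0.5

# Chains exploring the same region -> R-hat near 1; far apart -> large R-hat.
print(gelman_rubin([[1, 2, 3, 4], [1, 2, 3, 4]]))
print(gelman_rubin([[0, 1, 0, 1], [10, 11, 10, 11]]))
```

In practice you would eyeball the trace and density plots in tab 3E alongside any numeric diagnostic, and ask a statistician if anything looks off.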
This is a PDF user guide, as well as some videos from a Cochrane training webinar that we did on MetaInsight. If any errors are thrown up and you're not sure why, we encourage you to look at our troubleshooting document. This covers the common issues that people often get confused about, and it has lots of information, so do look through it. If that's not helping you, then please feel free to contact us. You can also have a look at our full update history, i.e. every change that's happened since we released MetaInsight, and we also have our privacy notice.

Great, let's go back to the slides. MetaInsight is still in active development, and we have a long list of further features that we'd like to add. For now, I'm going to share two features that are currently in the pipeline. Firstly, coming very soon, possibly by the time this tutorial is released: in aid of transparency and reproducibility, we plan to add an option that essentially takes the analysis methods used and the results presented in the app and puts them all into a downloadable report for researchers to refer back to. We're also hoping to improve the current process for uploading data, and possibly consider options for data imports and exports that would be compatible with other software. Finally, a lot of our improvements and features actually come from user feedback, which is great, so if you have any ideas or feedback, please do let us know.

So thank you for listening. This final slide includes references to our MetaInsight publications and our Twitter handle, if you have any questions or just want to get in touch. Thanks again.