Hello everyone, my name is Luke McGuinness, I'm a PhD student at the University of Bristol Medical School. I'm sorry I couldn't attend this presentation live in the chat, it clashed with something else I had booked in, but today I'm going to talk about incorporating the results of risk-of-bias assessments into systematic reviews and evidence synthesis using a package I've been working on for the last couple of years called robvis. Just to note that the new functionality I'm going to talk about today wouldn't have been possible without my amazing collaborators Alex and Randall, so I just want to recognise their contributions at the very start. So, risk-of-bias assessments are a key part of the systematic review process and will be very familiar to anyone who's done a systematic review or evidence synthesis exercise before. The robvis package was designed to make these data, traditionally presented as tables, more visually appealing, and so it's an R package and web app to produce publication-quality risk-of-bias figures. Currently you can produce two types of plots, traffic light plots and summary bar plots, and I'm just going to show you a brief example of these now. So this is an example of a traffic light plot using the RoB 2 tool for randomized controlled trials. You have your studies presented down the left-hand side, along with your domains of bias, so bias due to randomisation, bias due to deviations from intended interventions, etc., along the top, and then an overall risk-of-bias judgment on the right-hand side. The available levels of risk of bias here are high, some concerns, and low, though that will vary from tool to tool, so if you're looking at observational studies that's slightly different. And another way to present this data is as a summary bar plot, and in this case it's not presenting study-level data, it's summarizing the proportion of evidence that's at a specific risk-of-bias level in a specific domain.
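For anyone who wants to try this, a minimal sketch of producing these two plot types with robvis might look like the following. I'm assuming here that `data_rob2`, the example set of RoB 2 assessments, ships with the package and that the `tool` argument is spelled as shown, so check the package documentation rather than treating this as definitive.

```r
# Load robvis (available from CRAN, or the development version from GitHub)
library(robvis)

# data_rob2 is an example RoB 2 assessment dataset bundled with robvis:
# one row per study, one column per bias domain, plus an Overall column

# Traffic light plot: study-level judgments for each domain
rob_traffic_light(data = data_rob2, tool = "ROB2")

# Summary bar plot: proportion of studies at each judgment, per domain
rob_summary(data = data_rob2, tool = "ROB2")
```

Both functions return ggplot2 objects, so the usual ggplot2 tweaks and `ggsave()` should work on the result.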
So it's just another way to show the same data. While this is better than a table, it's still not ideal, and we've noticed, as we've been reviewing how robvis has been used in the literature, that risk-of-bias assessments are regularly performed but then very briefly discussed in the methods, and any tables or figures are relegated to supplementary material. And this isn't sufficient, because it's not enough to simply perform the assessments, create a figure, and discuss it briefly. You really need to actively think about what this means for your meta-analysis. So, are you putting studies at high risk of bias into your meta-analysis? And does the effect differ between studies at different levels of risk of bias? To answer these questions, and make it easier for the reader to know what's going on, we think it's better to produce paired risk-of-bias and forest plots. As shown on the right-hand side, here you have your traditional forest plot and then an extra panel on the right-hand side, which shows you pretty much the traffic light plot I demonstrated earlier. It's also useful, if you're going to go about doing this, to stratify your dataset by risk-of-bias level to see if there's any difference in effect estimate between studies at different levels of risk of bias. The problem with this approach is that, while it all sounds great in theory, no tool currently exists that allows you to create these figures easily. This figure was taken from the BMJ paper on the new risk-of-bias tool for randomized controlled trials, but the figure was created by hand. So it's not very reproducible, it's not very "R". So what I'm going to talk through today is how we used metafor and robvis to create new functionality that allows users to make these paired forest plots very, very easily. I'm going to talk you briefly through the two datasets we use in our example. The first one is a metafor example dataset.
It's a set of 13 studies looking at the effectiveness of the BCG vaccine against tuberculosis. And then on the other side, we have risk-of-bias assessments for each of those 13 studies. Just to note that these are fake risk-of-bias assessments, purely for illustrative purposes, and I'm saying nothing about the quality of the studies because I haven't appraised them myself. So, following a fairly standard approach to performing a meta-analysis, the first step is to use the raw count data from the BCG dataset to create effect estimates and sampling variances for each study, and then pass that information to the metafor rma() function to perform a meta-analysis and save the results from that as an object, in this case res, for results. The next step, normally, is to visualize this using the metafor forest() function, which gives a very standard forest plot. So now that you've seen what the standard approach is, I'm going to walk you through the two ways in which a very small adjustment to this can add a lot of information. The first one is appending a risk-of-bias traffic light plot, built from the dataset I just showed you containing your risk-of-bias assessments, to this standard forest plot produced by metafor. The function is very creatively named rob_append_to_forest(), because it is just a wrapper for the metafor forest() function that appends this traffic light plot to the right-hand side. You can already see that this is adding potentially useful information to the forest plot. For example, for the Hart and Sutherland study, the fourth one down, I'd be less confident in that effect estimate given that it's at a high risk of bias overall, and similarly for two other studies. So, while this is an improvement, it's still not ideal, because you're not grouping studies and you have no real control over the sub-grouping.
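The workflow just described might be sketched as follows. The metafor calls follow its documented API for the bundled dat.bcg example; the exact signature of rob_append_to_forest(), and the name of the rob_data object holding the assessments, are my assumptions for illustration.

```r
library(metafor)
library(robvis)

# Step 1: compute log risk ratios and sampling variances
# from the raw 2x2 counts in the BCG example dataset (dat.bcg)
dat <- escalc(measure = "RR", ai = tpos, bi = tneg,
              ci = cpos, di = cneg, data = dat.bcg)

# Step 2: fit a random-effects meta-analysis and save the results
res <- rma(yi, vi, data = dat)

# Step 3: the standard forest plot
forest(res)

# Step 4: the paired plot - wrap forest() and append the risk-of-bias
# traffic light panel on the right-hand side. rob_data is assumed to
# hold the RoB 2 assessments for the same 13 studies.
rob_append_to_forest(res, rob_data)
```

The key point is that steps 1-3 are completely standard metafor usage; only the final call changes.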
So this is where the second function comes in, which is a bit more sophisticated and probably has the best name of any function I've worked on in any package: rob_blobbogram(). And again, it just takes your results object from your meta-analysis and your risk-of-bias dataset as standard, and you plug them into this function. So it's really not very onerous on the user to produce these plots once you have the data ready to go. What this function does is take whatever meta-analytical model you've applied, though for the moment it is limited to metafor, and apply it across your studies grouped by risk-of-bias level. What we mean here is that these studies have been stratified by their overall risk-of-bias level, and then you get a sub-group or subtotal effect per group. This function leans quite heavily on the amazing forester package built by Randall Boyes, and it's still in development, so there's still a bit of work to do. For example, it's very hard to tell what's a study versus a summary effect, because summary effects are usually denoted by diamonds, but we haven't worked that out yet. So who knows, potentially by the time you see this next week, I'll have worked out the last few details. Fingers crossed. But just to note, you have a lot more flexibility than just stratifying by the overall risk of bias. For example, if you were particularly interested in bias due to randomisation, which is domain one in this tool, you can specify that that's the domain you want to stratify on, and you see here there are now a lot more studies at low risk of bias for that specific domain. So again, it's starting to get people thinking about study results and risk-of-bias results together, rather than thinking of them as two independent entities, which is what often happens.
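A rough sketch of the two stratified calls just described might look like this. I'm confident about the first two arguments (the rma() results object and the assessment dataset), but the name of the domain-selection argument shown here is hypothetical, so check the function's help page for the real interface.

```r
library(metafor)
library(robvis)

# res is the rma() results object fitted earlier;
# rob_data holds the RoB 2 assessments for the same studies

# Stratify by the overall risk-of-bias judgment:
# each group of studies gets its own subtotal effect estimate
rob_blobbogram(res, rob_data)

# Stratify instead by a single domain, e.g. domain 1
# (bias due to randomisation). The argument name "domain" is
# illustrative, not necessarily the function's actual parameter.
rob_blobbogram(res, rob_data, domain = "D1")
```

The underlying idea is a standard subgroup meta-analysis: refit the same model within each risk-of-bias stratum, then lay the subtotals out forester-style alongside the judgments.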
So, a couple of take-home messages from our experience working on this. For users, the two key things are, firstly, that risk-of-bias assessments should be presented alongside the corresponding results, to make it easy for readers to know the quality of what's gone into the meta-analysis. And secondly, risk-of-bias level should be investigated as a source of heterogeneity between studies. It often isn't, and it's potentially one of the biggest reasons why you might get different effect estimates across studies. For developers: we had a really good experience working with the maintainer of the metafor package to build on their functionality, and it's only due to their foresight when creating the forest plot function in metafor that we were able to wrap it so easily for robvis. On the flip side, if you're developing packages yourself, think about what information other users or other developers might need to build out more functionality around your package. And then finally, just to wrap up, some further information about the tool in case anyone wants to go away and read more. We have a package website, we also have a Shiny app, and there's a very brief introductory paper on the tool, published in Research Synthesis Methods, which doesn't cover this new functionality because it was written quite a while ago now, if you want to go away and have a look. And if you're interested in contributing to the package, we'd be really, really excited to have you. All of the collaborators, Alex and Randall, I met on GitHub; I've never met them in person. So don't be afraid to get in contact and get involved: open an issue on the GitHub repository, tweet at me, or send me an email. And then finally, once again, thanks to my collaborators, without whom the experience would have been much diminished. That's it from me. Thank you.