Hello everyone. My name is Luke McGuinness. I'm a PhD student at the University of Bristol and one of the co-organizers of ESMARConf. Today I'm going to be talking about some of the recent work I've done on one of my packages, robvis, and the new functionality we've produced around informative risk of bias plots.

As a brief introduction to the package, robvis is an R package and web application that allows users to create publication-quality risk of bias visualizations. The main motivation for the tool was that when I started producing systematic reviews myself, no tool or package existed to reproducibly create these graphics. The package can currently produce two types of plots, traffic light plots and summary bar plots, and I'm going to show you an example of each now.

Here's an example of a traffic light plot showing the results of risk of bias assessments performed on randomized controlled trials. Essentially, it's a cross-tabulation of studies down the left-hand side and risk of bias domains along the top. Because we're looking at randomized controlled trials, the relevant tool for those trials has five domains of bias that need to be assessed, plus one overall judgement, which is shown shaded grey on the right-hand side.

An alternative way to present this information is as a summary bar plot, where rather than showing the risk of bias in each domain for each study, you summarize across the domains to show the number of studies at each level of bias. You can see here, for example, that for bias arising from the randomization process, more than half of the studies are at low risk of bias.

However, there are some limitations to this current approach of producing separate risk of bias plots. Risk of bias assessments are regularly performed, and we know this because authors state in their methods that they've done them, but they either don't produce figures or, if they do, the figures are relegated to the supplementary material.
And this means that readers of these papers really have to dig to find this often quite important information. So the new approach we're going to take is to pair the risk of bias assessments with their respective results in the meta-analysis, so that you get a summary of the results of each study and a summary of their risk of bias in a single figure. This also means that authors will have to submit fewer figures with their papers, because everything is consolidated into one. The extension of this is to perform a subgroup meta-analysis by risk of bias level: if you group the studies at low risk of bias and those at high risk of bias and see if there's a difference between them, it could be a way of explaining some of the heterogeneity you see in a meta-analysis. And this is where robvis's interaction with metafor comes in, in trying to build these paired risk of bias plots.

So I'm going to work us through an example, but first let me introduce the data we're going to use. We load the two packages, robvis and metafor, and we're using the dat.bcg dataset from the metafor package, which presents the results of 13 randomized controlled trials examining the effect of the BCG vaccine on tuberculosis. In data_rob, we have the associated risk of bias assessments for those 13 studies. Because these are randomized controlled trials, we're using the RoB 2 assessment tool, which has, as I said earlier, five domains plus one overall judgement.

To get us started, and again this is drawing pretty much completely from the metafor tutorial, we calculate the effect estimates and sampling variances for each of the 13 studies. We then pass these to metafor's rma() function to perform the meta-analysis. And then, to visualize this as a forest plot, we pass the resulting object, res, to metafor's forest() function.
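The steps just described can be sketched roughly as follows. This draws on the standard metafor tutorial example; dat.bcg ships with metafor, while data_rob here stands in for the accompanying risk of bias dataset described above:

```r
library(metafor)
library(robvis)

# dat.bcg: 13 BCG vaccine trials (tpos/tneg = TB cases/non-cases among
# vaccinated; cpos/cneg = the same among controls)
dat <- escalc(measure = "RR",
              ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)

# Fit a random-effects meta-analysis to the log relative risks
res <- rma(yi, vi, data = dat)

# Draw the standard forest plot; the object returned (invisibly)
# carries information about the plot's dimensions
forest_obj <- metafor::forest(res)
```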
This produces the forest plot that you see at the bottom here, showing that the BCG vaccine has a protective effect against tuberculosis. But what's important for what we're trying to do is that metafor's forest() function also returns some information on the dimensions of the forest plot it produces. As seen on the last slide, assigning the output to an object, in this case forest_obj, means we can explore and use this information in subsequent functions. Most importantly for us, and highlighted in yellow on the right-hand side, the function provides information on the limits, or dimensions, of the forest plot.

What I've done, then, is take a four-step approach to appending risk of bias plots to the standard metafor forest output. The first step is to invisibly call forest() to get the dimensions of the standard plot, illustrated here by xlim. We then expand these limits to create some space around the standard plot and pass the new limits as an argument to forest(). This produces a standard forest plot with a lot of space on the right-hand side, as you can see. The next step, as you might have guessed, is to draw the risk of bias plot in this extra space we've created. And finally, we wrap all of this in a function so that users can reliably apply this functionality themselves.

That's exactly what our new rob_append_to_forest() function does as part of the robvis package. It takes the res results object from your meta-analysis and your risk of bias dataset, and appends them together into a paired forest and risk of bias plot.

Just some comments on my experience of working to build on the functionality offered by metafor. It represents a fairly unique scenario, in that it was easy for me to build on the output of metafor because it produced this extra information about the plot.
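The four-step approach above can be sketched as follows. The size of the offset used to widen the limits is purely illustrative, and the argument names shown for the robvis wrapper are assumptions rather than the documented signature:

```r
library(metafor)

# Step 1: call forest() to capture the dimensions of the standard plot
# (res is the rma() model object fitted earlier)
forest_obj <- metafor::forest(res)

# Step 2: expand the x-axis limits to leave room on the right-hand side
# (the "+ 6" offset here is an illustrative guess, not a fixed value)
new_xlim <- c(forest_obj$xlim[1], forest_obj$xlim[2] + 6)

# Step 3: redraw the forest plot with the widened limits
metafor::forest(res, xlim = new_xlim)

# Step 4: draw the traffic-light risk of bias symbols in the extra space.
# In practice, robvis::rob_append_to_forest(res, data_rob) wraps steps
# 1-4 for you (argument names assumed for illustration)
```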
It gave you the limits of the plot as output; if it hadn't done that, this wouldn't have been possible. So one of my key messages is to keep an eye out for opportunities like this, where rather than rewriting or creating an alternative version of a function, you can build on what already exists. On the flip side, I think it's really important that, as evidence synthesists working in R, we think about how others might want to build on what we've produced and what information they'll need to do so.

But by far the best option is to develop packages in tandem. If you think you're going to rely quite closely on another package, reach out to its maintainer and see if there's any way for you to develop together. This is exactly what's happening with a further function that's coming to robvis quite soon: rob_blobbogram(), which is being developed in tandem with the forester package by Randy Boyes. This is what will allow users to automatically subset the data by risk of bias level and then perform a meta-analysis on each subset. As shown in the figure at the bottom, you split your studies into groups such as some concerns and high risk, you get an overall effect estimate for each group, and then one overall estimate for all the studies combined. Again, this will draw on the metafor and forester packages.

Further information about the new functionality I've described, and about the existing functionality of robvis, is available either from the package website or from the Shiny app. There's also a very short paper introducing robvis, available from Research Synthesis Methods. And if you're interested in contributing to the package, you can open an issue on the GitHub repository, tweet at me, or send me an email. I'm always very happy to have people involved and contributing their expertise.

That's it for me. Thank you very much for your time.
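The subgroup analysis that this upcoming function is intended to automate can be approximated by hand using metafor's subset argument. In this sketch, the Overall column name and the matching study order between the effect-size data and the risk of bias data are assumptions:

```r
library(metafor)

# Assume data_rob$Overall holds each study's overall RoB 2 judgement
# ("Low", "Some concerns", "High") in the same row order as dat
dat$overall_rob <- data_rob$Overall

# One meta-analysis per risk of bias subgroup
res_low  <- rma(yi, vi, data = dat, subset = (overall_rob == "Low"))
res_high <- rma(yi, vi, data = dat, subset = (overall_rob == "High"))

# Plus one overall model across all studies combined
res_all  <- rma(yi, vi, data = dat)
```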
I hope you're enjoying the conference.