Hello everyone, my name is Virginia and today I will present the ROB-MEN app, a Shiny app that we developed to implement the ROB-MEN tool. This is a tool to assess the risk of bias due to missing evidence in network meta-analysis, and it was published recently in BMC Medicine. The framework underlying this tool first evaluates the risk of bias in each possible pairwise comparison that can be made between the interventions in the network, by running both a within-study and an across-study assessment of bias. It then combines the judgements about the risk of bias in the pairwise comparisons with the contribution that the direct comparisons make to the network estimates, with possible small-study effects evaluated by network meta-regression, and with any bias from unobserved comparisons. Finally, a level of low risk, some concerns, or high risk of bias is given to each of the network estimates. But I will move straight on to the app to show you the functionality. So the first thing the user must do is upload the network data. This must be a CSV file in long format; precisely, that means it must be arm-based data, with a row for each arm of each study. The instructions on how the data must be formatted are reported here, and the user can then view the data in this tab. What's important to note here is that the dataset should also include, if available, studies for which the outcome of interest is not reported. As you can see, this study has no values for the outcome of interest. This is because such studies might be informative for selective outcome reporting, that is, for the within-study assessment of bias. But let's move on to the data analysis tab, which is what we would do once the data is uploaded. Here the user must select the various parameters the app needs in order to run the analysis and some of the assessments.
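As a rough illustration of the arm-based long format described above, the sketch below builds a tiny hypothetical dataset. The column names (`study`, `treatment`, `responders`, `sample_size`) are assumptions for illustration only; the app's upload tab documents the exact names and format it expects. Note how a study that did not report the outcome of interest is still kept in the file with empty outcome cells.

```python
import csv
import io

# Hypothetical arm-based "long format" CSV: one row per study arm.
# Column names are illustrative, not the app's actual specification.
raw = """study,treatment,responders,sample_size
S1,fluoxetine,30,60
S1,placebo,18,58
S2,sertraline,25,55
S2,placebo,15,52
S3,fluoxetine,,40
S3,sertraline,,41
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Studies with empty outcome cells stay in the file: they carry no effect
# estimate, but they inform the within-study (selective outcome reporting)
# assessment.
missing_outcome = sorted({r["study"] for r in rows if r["responders"] == ""})
print(missing_outcome)  # ['S3']
```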
So in this case, for example, we will select "undesirable" for "smaller outcome values are undesirable", because our outcome of interest is response. We also select our reference treatment, and then we press "start analysis". This can take a few moments depending on the size of the network, and we are using quite a large one. I've already run this analysis before, and I'll show you the network I'm using to demonstrate the app: a network of 18 antidepressants from head-to-head studies. Because it's quite large, it can take a few moments to run the analysis. This is what you get once the analysis is completed. First, you get a data summary with various characteristics of the network, the interventions, and the comparisons. You also get output for the frequentist and Bayesian network meta-analyses, as well as the Bayesian network meta-regression. But what I really want to show you is the main output you get from the app. So first, we go to the pairwise comparisons table. Here, all the possible pairwise comparisons are automatically grouped according to whether they have data available for the outcome of interest, only for other outcomes (like these two comparisons here in group B), or whether they were unobserved, i.e. not identified in the systematic review at all. The app also automatically calculates, for each comparison, the total sample size and the total number of studies, both for the studies reporting the outcome of interest and for all studies identified in the systematic review.
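The grouping logic just described can be sketched as a small function. This is a paraphrase of the behaviour shown in the demo, not the app's actual code, and the group labels "A", "B", "C" are assumed from the talk's description.

```python
# Sketch of how the pairwise comparisons table groups each comparison,
# based on the talk's description (labels and logic are assumptions).
def group_comparison(n_studies_outcome: int, n_studies_total: int) -> str:
    """Classify one pairwise comparison.

    n_studies_outcome -- studies reporting the outcome of interest
    n_studies_total   -- all studies identified in the systematic review
    """
    if n_studies_outcome > 0:
        return "A"  # data available for the outcome of interest
    if n_studies_total > 0:
        return "B"  # observed, but only for other outcomes
    return "C"      # unobserved: not identified in the review at all

print(group_comparison(12, 14))  # A
print(group_comparison(0, 3))    # B
print(group_comparison(0, 0))    # C
```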
Then these two columns are important for the first assessment, the within-study assessment of bias. When a comparison has extra studies, that is, studies that did not report the outcome of interest, as we can see for comparison number six, where there are 14 studies in total but only 12 reported the outcome of interest, the user must decide whether there is selective outcome reporting bias. So the user has to assess whether these studies did not report the outcome for reasons associated with the results found for the outcome of interest, such as the p-value or the magnitude of the result, and also whether the non-inclusion of these studies could make a difference to the synthesized results. If so, then "suspected bias" can be selected, as in this case. But there may also be cases where, even if there are extra studies, as for comparison number nine here with five extra studies, the total sample size is already quite large and the sample size brought by those five extra studies is small relative to the total. Then we don't think it would affect the synthesized results, and that's why we selected "no bias detected". This assessment is done, as I said, for the comparisons that show a difference between the sample sizes in these two columns, as well as for the comparisons in group B that did not report the outcome of interest at all, but obviously not for the unobserved comparisons. The next assessment, the across-study assessment of bias, commonly known as publication bias, can instead be done for all comparisons. This is done mainly using qualitative considerations, such as whether it was possible to get data from grey literature, or whether there is well-known publication bias in that field or for that comparison. Additionally, the user can also look at quantitative techniques if there are at least 10 studies for a specific comparison.
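One quantity the speaker weighs in this within-study judgement is how much weight the non-reporting "extra" studies carry relative to the comparison's total sample size. The helper below is purely illustrative (the function name and the example numbers are made up); it is not part of the app.

```python
# Illustrative helper: fraction of a comparison's total sample size
# contributed by studies that did NOT report the outcome of interest.
def extra_fraction(n_outcome: int, n_total: int) -> float:
    if n_total == 0:
        raise ValueError("no studies identified for this comparison")
    if n_outcome > n_total:
        raise ValueError("reporting sample size cannot exceed the total")
    return (n_total - n_outcome) / n_total

# Hypothetical numbers in the spirit of comparison nine: the five extra
# studies add little relative weight, supporting "no bias detected".
print(round(extra_fraction(4800, 5000), 3))  # 0.04
```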
And yes, you can consider the funnel plots and the test for small-study effects, but as I said, these are only in addition to the qualitative considerations. If there are no comparisons in the network with at least 10 studies, the ROB-MEN app does not calculate any of the quantitative techniques. Finally, we assign an overall bias judgement to each comparison by merging these two assessments, which can be done easily by pressing the button here. The algorithm is not actually complicated: it just checks whether there is suspected bias in either column; otherwise, it's "no bias detected". Once the judgements are done, we move on to the main output of the ROB-MEN tool and app, which is the ROB-MEN table. Here, once again, the network estimates are already grouped automatically by the app, based on whether they are mixed or direct estimates, or indirect estimates, as you can see here. And here, as I said before, the judgement is combined, first of all, with the contribution that the direct comparisons make to the network estimates. The app has already calculated the percentage of contribution that comes from pairwise comparisons with suspected bias, and this is also already split by how the bias is directed: whether it favours the first or the second treatment in a comparison. To do so, the app uses the contribution matrix; it runs, I think it's called, the flow contribution package in R. The contribution matrix is also available and can be downloaded from the app. What the user must do in the next column is evaluate how this contribution is affecting the network estimate, so here some sort of subjective judgement must be made. For example, here we say that if there is a difference of at least 15 percentage points, then that counts as a substantial contribution. So, for example, here we see there is 32%, 25%, going in one direction.
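The simple merging rule described for the pairwise table can be written down directly: suspected bias in either the within-study or the across-study column gives an overall "suspected bias". The labels below are paraphrased from the talk, not taken from the app's code.

```python
# Sketch of the overall pairwise bias rule described in the talk:
# "suspected" in either assessment column wins; otherwise no bias detected.
def overall_pairwise_bias(within_study: str, across_study: str) -> str:
    if "suspected" in (within_study, across_study):
        return "suspected"
    return "no bias detected"

print(overall_pairwise_bias("suspected", "no bias detected"))
# suspected
print(overall_pairwise_bias("no bias detected", "no bias detected"))
# no bias detected
```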
So we have selected "substantial contribution from bias favouring" the specific treatment that direction points to. Same for the second estimate. But for the third and fourth, we see there is only 3%, so less than 15, and in fact zero and zero, so it's "no substantial contribution from bias". There are also cases where there is a substantial contribution in both directions: here 33% and 46%, so the difference between them is less than 15, and we might decide that it's somehow balanced, and so we give a "substantial contribution from bias, balanced" level. As you can see, I'm not going to do this for all of them, because there are 153 estimates in total. So I'll move on to the next part, which is the bias assessment for indirect evidence. The contribution part only comes from the direct comparisons, but in the previous table we also made a judgement for the unobserved comparisons, and this must also count in our final judgements: if the reasons why these comparisons were unobserved are related to the results found in the studies, this might lead to bias. As you can see, for the first group of the table this column is greyed out, exactly because there we have already considered the contribution from the direct comparisons; this bias assessment is only for indirect evidence, so it only considers the unobserved comparisons and is only done, as you can see, for the indirect part, where it is not greyed out. The user doesn't have to do anything here: it is already copied from the last column of the previous table. In the last assessment, it is evaluated whether there are any possible small-study effects, as assessed by the network meta-regression that, as I showed you before, has been run by the app. The user can, of course, look at that output.
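The subjective rule the speaker applies to the two contribution columns could be sketched like this. The 15-percentage-point threshold is the speaker's own choice for this example, not a fixed rule of the tool, and the "balanced" condition below (both directions above the threshold) is an assumption inferred from the 33% vs 46% case.

```python
# Illustrative sketch of the speaker's contribution judgement; thresholds
# and labels are assumptions, not the app's built-in logic.
def contribution_judgement(pct_first: float, pct_second: float,
                           threshold: float = 15.0) -> str:
    """pct_first / pct_second: % contribution from comparisons with
    suspected bias favouring the first / second treatment."""
    diff = pct_first - pct_second
    if abs(diff) >= threshold:
        return ("substantial contribution favouring first treatment"
                if diff > 0 else
                "substantial contribution favouring second treatment")
    if min(pct_first, pct_second) >= threshold:
        return "substantial contribution, balanced"
    return "no substantial contribution from bias"

print(contribution_judgement(33.0, 46.0))  # substantial contribution, balanced
print(contribution_judgement(0.0, 0.0))    # no substantial contribution from bias
```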
But what the app automatically does is report the unadjusted and adjusted Bayesian estimates. I also forgot to say that the network meta-regression is run using the smallest observed variance as the covariate, to give an indication of whether there is a small-study effect or not. What the user must do in the next column is evaluate whether there is evidence of small-study effects by comparing these two estimates, as well as their credible intervals. If, as in this case, they are not very different and there is good overlap between the credible intervals, then there is "no evidence" of small-study effects. Otherwise, once again, we would say "small-study effects favouring" one treatment or the other. In this case, because we didn't find any evidence, to make it easier you can also set them all to "no evidence" at once, and then, if there is evidence for any estimate, the judgement can be changed manually; that's quicker. So finally, we are at the end of our assessment, and we just need to give the overall risk of bias level, combining these two parts, for each of the network estimates. To do so, the user can just press "use algorithm to calculate overall risk of bias judgements", as we are doing now, and it automatically gives a "low risk", "some concerns", or "high risk" level. The algorithm applied here is the one we described in our paper; I'm not going to go through it because I have no time, but it is more complex than the one in the previous table. If the user does not agree with it, that is, if you want to use stricter or more relaxed rules, you can also change the judgement. As you can see here, it will then show in yellow, because it's not the level calculated by the algorithm, but that's perfectly fine. So we are at the end of it.
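The comparison the user makes between the unadjusted and the adjusted estimates can be illustrated with a simple interval-overlap check. The overlap criterion below is a deliberately crude simplification for illustration, and the interval values are hypothetical; the actual judgement also weighs how different the point estimates are.

```python
# Illustrative check: do two (lower, upper) credible intervals overlap?
# A crude stand-in for the visual comparison of unadjusted vs adjusted
# estimates described in the talk; all numbers are made up.
def intervals_overlap(ci_a: tuple, ci_b: tuple) -> bool:
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

unadjusted = (0.85, 1.40)  # hypothetical credible interval
adjusted = (0.90, 1.55)    # hypothetical interval after meta-regression

verdict = ("no evidence" if intervals_overlap(unadjusted, adjusted)
           else "possible small-study effects")
print(verdict)  # no evidence
```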
The last bit about the app is that both tables, although I'll only show it for the second one, can be downloaded as CSV files, so they can also be included in reports and the like. So finally, I just want to thank you for your attention and remind you where to find the app, as well as the paper, which doesn't describe the app's functionality exactly, but does describe the ROB-MEN framework and tool. We plan to publish another paper that would be more of a manual for the app. And I also want to thank the two people who helped me with the development of this app: Theodoros Papakonstantinou and Alex Holloway. If you have any questions, you can ask after the session or just send me an email. Thank you very much.