Hi everyone, thanks for joining this tutorial on the ROB-MEN app, which implements the ROB-MEN tool to evaluate the risk of bias due to missing evidence in network meta-analysis. This tutorial will mostly focus on the functionality of the app, but I will start by giving a brief overview of the ROB-MEN tool and framework. I will demonstrate how to use the app using a real example of a network meta-analysis of antidepressants, and at the end I will also briefly mention how to integrate the ROB-MEN output into the CINeMA web application.

As I said, this tutorial mainly focuses on the app functionality, so I will not explain how to run the risk of bias assessment from a methodological point of view. These steps are described in more detail in the original ROB-MEN publication in BMC Medicine, which also includes two illustrative examples: one that uses the demo dataset that can be downloaded from the app, and another with the network meta-analysis that is used in this tutorial.

This flowchart illustrates the ROB-MEN tool at a glance. You can clearly see the two sections that make up the tool. The first one, in purple, is the evaluation of the risk of bias due to missing evidence in all pairwise comparisons in the network, which is then recorded in the pairwise comparisons table. This part includes a within-study assessment of bias, which is what we commonly call selective outcome reporting bias, and an across-study assessment of bias, which we commonly refer to as publication bias. Based on these two assessments, a level of "no bias detected" or "suspected bias", with the relevant direction, is assigned to each pairwise comparison. The second part, in blue, shows the assessment of risk of bias at the network level, which is the main output of interest of our tool, and it is recorded in the ROB-MEN table.
Here, the overall bias levels for the pairwise comparisons from the previous part are integrated: first using the contribution that comparisons judged at suspected bias make to each network estimate, then checking whether any of the unobserved comparisons leads to bias, and finally looking at the presence of any small-study effects at the network level. These three elements are synthesized using our proposed rules to classify each network estimate as having low risk, some concerns, or high risk of bias.

The proposed rules to assign an overall bias level to each network estimate are shown in this table. This algorithm is automatically implemented in the app, but I will also show you later how to override it in case the user decides to choose different rules, for example stricter rules.

The ROB-MEN method is implemented as a multi-page web application with a workflow that intuitively guides users through the process of analyzing the data. The web application is implemented using R Shiny, JavaScript and Bootstrap. Once the user uploads their data, all analyses are performed server-side, so the only prerequisite is access to a modern web browser. But let's move on and actually use the app.

Here is the home page of the ROB-MEN app. You can find instructions for the data format and upload, and you can also download the demo dataset referring to the first illustrative example reported in the main ROB-MEN publication. You can also find links to the publication itself, other training materials, and the GitHub repository where the source code is available.

Let's go on to the Load tab, which is the tab where you can upload your dataset. We will upload our data file as CSV, although the app also supports text files. You can then have a look at the data that you have just uploaded. You can see that we have column r instead of columns mean and sd, because our outcome of interest, response to antidepressants, is a binary outcome.
I must remind you that it is important to also include in the dataset studies that were identified in the systematic review but do not report the outcome of interest, such as, in our example, the study with ID 38; as you can see, it has blank cells in the columns reporting the outcome of interest. This is because these studies can be informative for selective outcome reporting bias, which we call within-study bias in our framework.

Once the data is uploaded, we can go on to the Analyze tab to select the parameters for the analysis. ROB-MEN should not be used to run your network meta-analysis as your main analysis; there are other specific packages, apps and software for that. The reason we are running an analysis here is that the tool needs the output of the analysis for some of the assessments.

So we select odds ratio as the summary measure. We need to define whether smaller outcome values mean something good or bad: given that our outcome of interest is response to antidepressants, the more the better, so smaller outcome values are undesirable in our case. We then select the synthesis model and the reference treatment.

The last three parameters refer to the network meta-regression model, which is run in a Bayesian setting. By default, the burn-in and the number of iterations are set at 1,000 and 10,000 respectively. You might need to increase these at a later stage if the model does not converge, but I will show you more about this later. You can also select the assumption for the treatment-specific interactions, which are the coefficients of the network meta-regression model. We will select unrelated interactions because it is the less strict assumption, but again, in case of non-convergence you might want to select the other option.

Once all of these parameters are selected, click Start analysis; the app then asks you to confirm. Depending on the size of your network and the amount of data in it, the analysis can take a few minutes to run.
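To make the expected data layout concrete, here is a small sketch of a long-format dataset for a binary outcome that keeps a study with no outcome data, and a check that flags such studies. The column names (id, study, treatment, r, n) and the helper function are my own illustration, not the app's exact specification; check the instructions on the home page for the real format.

```python
import csv
import io

# Illustrative long-format dataset: one row per study arm. Study 38 was
# identified in the review but did not report the outcome of interest,
# so its outcome cells (r, n) are left blank rather than dropping the study.
RAW = """id,study,treatment,r,n
1,TrialA,fluoxetine,23,50
1,TrialA,trazodone,18,48
38,TrialB,fluoxetine,,
38,TrialB,trazodone,,
"""

def studies_missing_outcome(text):
    """Return ids of studies with blank cells for the outcome columns.

    These studies must stay in the dataset because they inform the
    within-study (selective outcome reporting) assessment.
    """
    missing = set()
    for row in csv.DictReader(io.StringIO(text)):
        if row["r"] == "" or row["n"] == "":
            missing.add(row["id"])
    return sorted(missing)

print(studies_missing_outcome(RAW))  # flags study 38
```

A quick check like this before uploading can catch studies that were accidentally dropped because they reported nothing for the outcome of interest.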
Once complete, the analysis outputs are shown in various tabs under the main Analyze tab, which also shows, on the side, the parameters that were selected for the analysis. At this stage these can no longer be edited, so if for any reason they have to be changed, the app needs to be reloaded, the data uploaded again, and the analysis rerun with different parameters.

The first tab, Data summary, shows some descriptive characteristics of the network as well as a network graph, and the actual outputs from the Bayesian analysis, that is the network meta-analysis and network meta-regression, are displayed in the relevant tabs. It is particularly important to look at the Bayesian network meta-regression tab to check the convergence of the model. You can check convergence by looking at the Gelman-Rubin diagnostic values here, or at the trace plots that can be downloaded by pressing this button. If there is an indication that the model has not converged then, as I said before, the app should be reloaded, the parameters changed, and the analysis rerun.

The tab also shows other outputs, but I will not go into detail with these and will move straight on to the main outputs of interest of ROB-MEN, which are the two tables where we record the risk of bias assessment.

In the first table, the pairwise comparisons table, the app automatically groups all the possible pairwise comparisons that can be made between the interventions in the network according to whether they had data for the outcome of interest, as we can see here in group A; data only for other outcomes but not the outcome of interest, like these two comparisons here in group B; or no reported outcomes at all, so the comparisons were unobserved, here in group C. The app also automatically reports the total sample size and the total number of studies, both for the studies reporting the outcome of interest and for the total pool of studies identified in the systematic review.
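For readers unfamiliar with the Gelman-Rubin diagnostic mentioned above, here is a minimal sketch of how it is computed for one parameter from multiple MCMC chains. This is a generic textbook version for illustration, not the app's internal code.

```python
from statistics import mean, variance

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for one parameter.

    chains: list of equally long lists of posterior draws, one per chain.
    Values near 1 suggest convergence; a common rule of thumb is to rerun
    with a longer burn-in or more iterations when R-hat is well above 1.
    """
    n = len(chains[0])                         # draws per chain
    chain_means = [mean(c) for c in chains]
    W = mean(variance(c) for c in chains)      # within-chain variance
    B = n * variance(chain_means)              # between-chain variance
    var_plus = (n - 1) / n * W + B / n         # pooled variance estimate
    return (var_plus / W) ** 0.5

# Well-mixed chains give R-hat near 1; chains stuck in different regions
# give a large R-hat, signalling non-convergence.
print(gelman_rubin([[0, 1, 0, 1], [1, 0, 1, 0]]))
print(gelman_rubin([[0, 1, 0, 1], [10, 11, 10, 11]]))
```

The app reports these values for you; the point of the sketch is just to show why values far from 1 mean the chains disagree and the analysis should be rerun with different settings.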
This information is important for the first assessment, which is the within-study assessment of bias. In this column, but also in other columns in this table and in the ROB-MEN table, this button automatically sets all the comparisons to a specific level, in this case "no bias detected", which can make completing the table faster. For example, we might press this button for the within-study assessment of bias so that all comparisons are first set to "no bias detected", and then change the levels for the comparisons with suspected bias.

As a quick reminder, this assessment is done first by looking at the presence of selective non-reporting of results in each study, using study-level tools such as ROB-ME or as described in the Cochrane Handbook, and then the likely impact of the missing results across the studies can be assessed using the two signalling questions that we propose and report in the main publication. For example, we will change the level for comparison 6 and select "suspected bias favoring fluoxetine", and for comparison 11 we will choose "suspected bias favoring trazodone".

Obviously, this assessment can also be done for the comparisons for which we only have data for other outcomes but not the outcome of interest, but it clearly cannot be done for the unobserved comparisons, which is why we do not even have the option of selecting a level there. For those comparisons, however, we can definitely run an across-study assessment of bias. Once again, to make it faster, we can set all to "no bias detected" first and then manually change those that need changing.
For this assessment we look primarily at qualitative considerations. As reported in the main publication, we used the principle of novel drug bias: we consider that there could be publication bias favoring the newer drug in the comparison, unless we have received data from the authors or from the pharmaceutical company running the study, in which case we do not suspect publication bias favoring the newer drug.

If there are comparisons with at least ten studies, then we can also look at some quantitative techniques. The app automatically calculates these for the comparisons with at least ten studies; specifically, it reports contour-enhanced funnel plots and tests for small-study effects. But, as I said, we will look primarily at qualitative considerations.

So, by the same principle we applied in this assessment, we leave the comparisons with agomelatine at the level of "no bias detected", because agomelatine is the newer drug in these comparisons but the authors of the review received the unpublished data from the manufacturer. For other comparisons, such as number 6, we change the level and suspect bias favoring the newer drug in the comparison; so in this case we select "suspected bias favoring fluoxetine", in the next one "suspected bias favoring fluvoxamine", and so on.

Finally, once both of these assessments are completed, we can apply the overall judgment to each comparison quickly by pressing the button "calculate overall judgment". The rule behind it simply looks at the previous two assessments and checks whether there is suspected bias or not. These overall judgments are then automatically integrated into the main output of interest of ROB-MEN, which is the ROB-MEN table, specifically as the first element of the assessment for each network estimate.
As previously, each network estimate has been automatically grouped in this table according to whether it is calculated using mixed or only direct evidence, or only indirect evidence.

Going back to this first element, the contribution of evidence coming from direct comparisons with suspected bias: as you can see, the app has automatically calculated how much of this contribution favors the first treatment or the second treatment in each contrast. To do this, yes, it has considered the levels that we just assigned in the previous table, but it also uses the contribution matrix, which you can download and view in the Contribution matrix tab under the Analyze tab, and which shows, in percentages, how much each direct comparison (in the columns) contributes to each network estimate (in the rows).

What the user must do at this stage is actually evaluate this contribution from evidence with suspected bias. They can first set all to "no substantial contribution" and then change those that need a different level according to the threshold that was decided. In this example we used a threshold of 15 percent: if there is a contribution of at least 15 percent, or a difference of at least 15 percent, then we change the level. For example, in this case we change to "substantial contribution from bias favoring citalopram", and so on for those that are over 15 percent. There is also a level, as in comparison 59, where there is substantial contribution from biased evidence but, as you can see, the difference between the two directions seems fairly balanced; in this case we select "substantial contribution from biased evidence, balanced".
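The manual step just described can be sketched as a small function: total the percentage contribution favoring each side from the contribution matrix and the bias judgments, then apply the chosen threshold. The data, labels, and exact decision logic below are my own illustration of the 15 percent rule used in this example, not the app's implementation.

```python
THRESHOLD = 15.0  # percent, as chosen in this tutorial's example

def classify_contribution(contributions, bias_direction):
    """Illustrative classification of one network estimate.

    contributions: {direct comparison: % contribution to the estimate}
    bias_direction: {direct comparison: 'first' | 'second'}
    (comparisons absent from bias_direction have no suspected bias).
    """
    favor_first = sum(p for c, p in contributions.items()
                      if bias_direction.get(c) == "first")
    favor_second = sum(p for c, p in contributions.items()
                       if bias_direction.get(c) == "second")
    # No side reaches the threshold: nothing substantial to record.
    if favor_first < THRESHOLD and favor_second < THRESHOLD:
        return "no substantial contribution"
    # Both directions contribute similarly: biased but balanced.
    if abs(favor_first - favor_second) < THRESHOLD:
        return "substantial contribution, balanced"
    side = "first" if favor_first > favor_second else "second"
    return f"substantial contribution favoring {side} treatment"

# Hypothetical example: comparison "A:B" (judged biased toward its first
# treatment) contributes 20% to this network estimate.
print(classify_contribution({"A:B": 20.0, "A:C": 3.0}, {"A:B": "first"}))
```

In the app this bookkeeping is done for you from the downloadable contribution matrix; the user only decides the threshold and confirms the levels.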
The next assessment does not require any input from the user, because once again the app has automatically taken the last column from the pairwise comparisons table, that is, the overall judgment for each pairwise comparison, and copied it into this column here. As you can see, this bias assessment is only taken into account, in the final stage, for the unobserved comparisons for the indirect estimates; that is why it is grayed out for the mixed or only direct estimates.

The last assessment to be done in the ROB-MEN table is to look for the presence of small-study effects. What the app has done automatically is take the network meta-analysis and network meta-regression results, essentially the unadjusted and adjusted estimates from the analysis. What the user has to do at this stage is compare these results, not only by looking at the point estimates but also at the overlap of the credible intervals, and decide whether there is evidence of small-study effects and, if there is, which intervention is favored by this type of bias. Once again, to make it faster, we can set them all to "no evidence". In our case there was no indication of small-study effects, so we will not change any level, but it works as for the other columns: the user simply changes the level where needed.

Finally, we can calculate the overall risk of bias for each network estimate by synthesizing these three parts, and we simply do so by clicking this button, which automatically applies our proposed algorithm that I showed you at the beginning of this tutorial. As you can see, the level of low risk, some concerns, or high risk has been automatically applied. However, as I mentioned before, if the user does not agree with these rules and wants to use stricter or more relaxed rules, then this can be done; for example, say the user wants to change the level for comparison 6 from some concerns to high risk.
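To show the shape of this final synthesis step, here is a deliberately simplified stand-in: it counts how many of the three elements raised a bias signal. These counting rules are an invention for demonstration only; the actual rules are the ones tabulated in the ROB-MEN publication, which the app applies for you when you press the button.

```python
def overall_risk(contribution, unobserved_bias, small_study_effects):
    """Purely illustrative synthesis, NOT the ROB-MEN algorithm.

    Each argument is True if that element raised a bias signal:
    substantial contribution from biased evidence, bias from unobserved
    comparisons (indirect estimates only), or small-study effects.
    """
    signals = sum([contribution, unobserved_bias, small_study_effects])
    if signals == 0:
        return "low risk"
    if signals == 1:
        return "some concerns"
    return "high risk"

print(overall_risk(False, False, False))
print(overall_risk(True, False, True))
```

The point is only that the three columns feed a deterministic rule; for the real mapping, consult the rules table in the publication, and remember the app lets you override any automated judgment.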
We allow them to do so: our framework and tool are flexible. The app will simply show it in yellow to indicate that this is not the automated judgment from the app but has been changed manually. Both tables, and this is particularly important for the ROB-MEN table, can be downloaded by pressing the button here in the upper left corner; the same applies to the pairwise comparisons table.

At this point we have completed the ROB-MEN assessment and our time with the ROB-MEN app is over, but as I mentioned before, I want to show you how this has been integrated into the CINeMA framework, which evaluates the confidence in the findings from network meta-analysis. I will show you how it has been integrated, not using the antidepressant network example but using the demo dataset that you can download, in the CINeMA web application. I will go straight to the reporting bias domain: you can see that here the user is prompted to use the ROB-MEN tool, and there is also a link to the app. Let's imagine that we have already done the ROB-MEN assessment for this example and have already downloaded the ROB-MEN table, which we can upload as a CSV file; you can see how the CINeMA app automatically imports the ROB-MEN judgments and shows them in the relevant domain page. If, as I said before, the tool could not be used or the ROB-MEN table is for some reason not available, then this can also be set manually, as in other parts of CINeMA.

This is the end of our tutorial, which I hope you found useful. I would like to thank Alex Holloway for his help with this latest version of the ROB-MEN app, and especially with the redesign. If you have any questions or anything is not clear, please feel free to contact me either by email or on Twitter. Thank you very much.