Hi, my name's Enzo and I'm going to be presenting an app that we developed called MetaBayesDTA.

Our aims in making this app were to create an updated, Bayesian version of the R Shiny app MetaDTA that is accessible to statisticians and other researchers who don't yet have sufficient programming expertise to run these analyses in software such as R, while remaining useful for people who can. We also wanted it to lower the user burden, since no programming is needed, so that researchers have more time to focus on interpreting the results and putting them into context. Meta-analyses of test accuracy often use insufficient or inappropriate statistical models, so we hope this app will encourage and improve the uptake of more appropriate methods, shorten the interval between new methods being proposed and their routine use in applied papers, and help incorporate, emphasise and improve the non-modelling aspects of the analysis, such as quality assessment, risk of bias, and the presentation and interpretation of results. For example, people will have more time for the discussion and for drawing more nuanced conclusions, it becomes less likely that strong statements unsupported by the evidence are made, and it reinforces practices that are known to be better but are not routinely done, for instance because statisticians lack the time. The app also has some appropriate restrictions in place: for example, there is no option to report credible or confidence intervals without the corresponding prediction intervals. It also helps you visualise and plot priors before the analysis, and it automatically plots model and sampler diagnostics.

I'm going to demo the app now. When you open the app you're greeted with this home page, and from here there's a panel on the left with various options. I'll go through the data tab first. This is where you upload your data; we're going to use one of the default data sets, the example data sets that are already there when you open the app, specifically the example with quality assessment and covariates. You can upload your own data using the box here that says "please select the file": you upload a CSV, making sure it's formatted correctly, and the formatting instructions are on the right. This page also gives a description of the example data set: it uses data from a systematic review investigating the accuracy of a test for screening for dementia, the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE). This is what the data set looks like: there are categorical covariates, whose names end in .cat, and continuous covariates, which end in .cts.

Now I'm going to go to the perfect gold standard tab, where you can run models that assume a perfect reference test, or gold standard. In the meta-analysis sub-tab you've got the standard bivariate model, and the first thing you want to do is run the prior model. The default priors are about as close as you can get to uniform: for the sensitivity and specificity we assume a 95% prior interval between 5% and 95%. The app plots the priors for you here, and you can download these plots and change their size if you want.
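To make that default prior concrete, here is a minimal R sketch, illustrative only and not the app's own code, of how a 95% prior interval of (0.05, 0.95) on the probability scale corresponds to a normal prior on the logit scale:

```r
# A 95% prior interval of (0.05, 0.95) for sensitivity corresponds, on the
# logit scale, to a Normal(0, sd) prior whose sd places 95% of its mass
# between logit(0.05) and logit(0.95).
upper    <- qlogis(0.95)            # logit(0.95) ~= 2.94
sd_prior <- upper / qnorm(0.975)    # ~= 1.50

# Visualise the implied prior on the probability scale:
draws <- plogis(rnorm(1e5, mean = 0, sd = sd_prior))
hist(draws, breaks = 50, main = "Implied prior on sensitivity",
     xlab = "Sensitivity")
quantile(draws, c(0.025, 0.975))    # roughly 0.05 and 0.95
```

The same construction applies to the specificity prior.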
You've also got more advanced options: if you click on this box you can change the Stan sampler options, and here you can also change the prior distributions themselves.

The next tab is where you run the model. I've already run this, but if you press "click to run model" it will run the model for you. In the study level outcomes tab you've got the data set again, now with the percentage weights for sensitivity and specificity, which show how much each study contributes to the summary estimates. In the parameter estimates tab you've got your parameter estimates: sensitivity, specificity, their logit transforms, the diagnostic odds ratio, and, at the bottom, the parameters for the HSROC model; the bivariate model is equivalent to the HSROC model when there are no covariates. Then you've got a "parameters for RevMan" tab: if you're using this app as part of a Cochrane systematic review meta-analysis you'll need to use RevMan, so you can export these parameters into RevMan to create your plots.

Finally you've got the model diagnostics tab, which shows the sampler diagnostics first: you want the number of divergent transitions and of iterations that exceed the maximum treedepth to be zero, which they are here, and the R-hat values to be less than 1.05, which they are; the app will warn you if they aren't. Then you've got your posterior density plots and trace plots, and these all look normal; you can change their size here. You want the densities to be unimodal, meaning they have a single peak, which they do, so they're okay. For the trace plots, you want both chains, or however many chains you run, to overlap like this, which they do, and as the iterations go on the values should oscillate randomly rather than trending off, and the chains shouldn't separate; if that happens, you'll usually get a bad R-hat as well. So these are all fine.

On the right you've got the summary receiver operating characteristic (SROC) plot. This shows your summary estimate there; the black dots are the study-specific observed sensitivity and specificity for each study; the grey shaded region is the 95% credible region; and the dotted line around it is the 95% prediction region. You can click on the study-specific points to pop up a box with the information about each study. You've also got various settings here: we can display quality assessment scores, for example. If we do that for both risk of bias and applicability concerns, once it loads you can see that pie charts appear, and if you click on one, the information for that specific study shows up, giving the risk of bias and applicability concerns and whether each is low, high or unclear.

Then you've got the forest plots tab, which shows the forest plots of sensitivity and specificity for each study. And finally you've got the prevalence tab, which puts the results into context: you select a number of patients and choose a disease prevalence for your population, say 10%, and it shows you the number of people who have the disease and test positive, i.e. the true positives, and then the false negatives, false positives and so on.
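The arithmetic behind the prevalence tab is straightforward; here is a short sketch in R using made-up summary estimates (hypothetical numbers, not output from the app):

```r
# Expected counts in a screened population, given summary accuracy
# estimates. All numbers here are hypothetical.
n_patients <- 1000
prevalence <- 0.10
sens <- 0.90    # hypothetical summary sensitivity
spec <- 0.80    # hypothetical summary specificity

diseased <- n_patients * prevalence     # 100
healthy  <- n_patients - diseased       # 900

true_positives  <- diseased * sens        # 90 have the disease and test positive
false_negatives <- diseased * (1 - sens)  # 10 are missed
false_positives <- healthy  * (1 - spec)  # 180 healthy people test positive
true_negatives  <- healthy  * spec        # 720 correctly test negative
```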
The next tab is meta-regression. With this app you can conduct a univariate meta-regression using either one categorical covariate or one continuous covariate. In this example we've already run it using the test threshold as the covariate; again we've got the priors, for which we just used the defaults, and then we've run the model. The study level outcomes tab looks basically the same as in the meta-analysis tab. In the parameter estimates tab there are now more parameter estimates, one set for each group: we've got four groups, because four different thresholds were observed across the studies, so there are four sets of estimates showing the sensitivity and specificity for each. The first box contains the shared parameters, the between-study correlation and the between-study standard deviations (the heterogeneity parameters), which are the same across the groups, while the means differ between groups. Below that you've got a table of the pairwise accuracy differences and ratios. For example, comparing threshold 3.3 to 3.4, we can see that the difference in sensitivities is about 10% and the interval doesn't contain zero, which is evidence of a difference in sensitivity between these two thresholds; the table gives every pairwise comparison, and because we've got four categories there are six comparisons (a sketch of how such a comparison is computed from posterior draws follows at the end of this section). This is particularly useful if you want to compare different tests: this example doesn't have multiple tests, but you can use this app to compare different tests evaluated in the same studies with each other, i.e. to do a multiple-test meta-analysis. We could actually do that with the reference tests here if we wanted to compare them, but we won't for the purposes of this example, which uses the threshold.

You've also got the model diagnostics tab, which is exactly the same as before, and all the sampler diagnostics are satisfactory. On the right, the SROC plot now has multiple summary points, because we've got a summary estimate for each of the four groups, and again we can click on the study-specific points to get the information for each study. We've also got this new plot, which I'll make a bit bigger: it shows the accuracy plotted against the categorical covariate, the threshold, on the x-axis; each of these points is a summary estimate, and the bars represent the 95% credible intervals.

I've also run a subgroup analysis here, again using the same categorical covariate, the threshold. We've got the priors, we run the model, and the study level outcomes tab is the same. The parameter estimates tab is a little different now, because we haven't got shared parameters: the subgroup analysis essentially runs a separate model for each group. So we've got this table split by the threshold category, and this time we don't have an entry for 4.1, because it only has one study, so you can't run a subgroup analysis on it, but we can for the rest of them. The SROC plot on the right looks very similar to before, except that there's no summary estimate for 4.1: we've just got its study point there, the purple point. The accuracy versus covariate plot has the same form as before, and the model diagnostics tab is also the same as in the previous tab, with all the sampler diagnostics satisfactory here as well.
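As promised above, here is a sketch of how a pairwise comparison like the one between thresholds 3.3 and 3.4 is summarised from posterior draws; the draws below are simulated stand-ins, not the app's output:

```r
# Simulated stand-ins for posterior draws of two groups' sensitivities.
set.seed(1)
sens_3.3 <- plogis(rnorm(4000, mean = qlogis(0.85), sd = 0.10))
sens_3.4 <- plogis(rnorm(4000, mean = qlogis(0.75), sd = 0.10))

# The posterior of the difference is just the difference of the draws.
diff_draws <- sens_3.3 - sens_3.4
mean(diff_draws)                        # posterior mean difference, ~0.10
quantile(diff_draws, c(0.025, 0.975))   # 95% credible interval
# If the interval excludes zero, there is evidence of a difference in
# sensitivity between the two thresholds.
```

The corresponding ratio is obtained the same way, dividing the draws instead of subtracting them.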
We can also run a meta-regression on a continuous covariate; let's choose prevalence as an example. By default the app will centre the covariate at its mean, and we can select the value at which to calculate the pooled summary estimates: let's say we want the pooled summary estimates at a prevalence of 0.1, i.e. 10%. We run the prior model first, and we get our priors for the intercept and coefficients. You can find more detail about how these prior distributions are constructed in the associated preprint, which is linked on the home page, but essentially they're chosen to be weakly informative, enough to stabilise computation but not too informative. In the next tab we run the model as before; when you run it, a pop-up box appears to remind you to check the model diagnostics tab. The study level outcomes tab is the same as before, and then we've got our parameter estimates. The first box has the parameters that depend on the value of the covariate, so you've got your summary sensitivity at a prevalence of 0.1, and the other such parameters. Then you've got the parameters that don't depend on that value, such as the HSROC parameters, the coefficients for the logit sensitivity and specificity, the intercept terms, and the between-study correlation and heterogeneity parameters. On the right, the SROC plot looks different now: the study-specific points are shaded according to the value of the continuous covariate, and the summary estimate in the middle corresponds to the value you selected for calculating the pooled summary estimates. In the accuracy versus covariate plot, because it's a continuous covariate, it simply plots the relationship between the value of the covariate on the x-axis and the sensitivity and specificity on the y-axis.

The last tab we've got is the imperfect gold standard tab. This runs a latent class model, which doesn't assume a perfect reference test. When you run these models I would recommend choosing informative priors for the reference test, because we usually have information about the reference test; just for the purposes of this tutorial I've left the uniform priors. So we run that and, as before, we get the priors for each of the reference tests and for the index test, and then we run the model. In the study level outcomes tab we don't have the study weights this time, because they aren't calculated for this type of model, but the rest looks the same. In the parameter estimates tab we've got the index test parameters and their summary estimates first, and then a second table showing the summary estimates for each of the reference tests, because we've run this model assuming reference test effects that are fixed between studies. In the model set-up and priors tab you have various options here: we're not assuming conditional independence between tests, which is usually not a reasonable assumption in clinical practice, because it would mean that, conditional on disease status, the results of the different tests are uncorrelated; and we've assumed that the reference tests are fixed while the index test is not, i.e. it has random effects. Going back to the parameter estimates tab, some parameters for the reference tests show just a dash: because the reference tests are fixed, there is no between-study correlation or other random-effects parameters for them, but we do have them for the index test.
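To see why an imperfect reference test matters, here is a small numerical sketch with entirely hypothetical values; conditional independence is assumed purely to keep the arithmetic simple, even though, as noted above, the app's latent class model doesn't require that assumption:

```r
# Treating an imperfect reference test as a gold standard biases the
# apparent accuracy of the index test. Hypothetical values throughout.
prev   <- 0.25                    # disease prevalence
se_idx <- 0.90; sp_idx <- 0.85    # true index test sensitivity/specificity
se_ref <- 0.80; sp_ref <- 0.95    # true reference test sensitivity/specificity

# P(index+ and reference+) and P(reference+), assuming conditional
# independence of the two tests given disease status:
p_both_pos <- prev * se_idx * se_ref +
  (1 - prev) * (1 - sp_idx) * (1 - sp_ref)
p_ref_pos  <- prev * se_ref + (1 - prev) * (1 - sp_ref)

naive_sens <- p_both_pos / p_ref_pos
naive_sens   # ~0.78: the apparent sensitivity measured against the
             # reference is well below the true value of 0.90
```

This is the kind of bias a latent class model addresses by estimating the accuracy of the reference tests alongside the index test.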
The model diagnostics tab looks similar to how it did for the other models, and we can see the sampler diagnostics are all satisfactory. We've also got the model fit here, showing how well the model fits: we've got the deviance, which is 44, and you can use this to compare different models with each other. Then you've got the correlation residual plot; for this plot all the bars should overlap the zero line here (it's just a zero line, not a line of unity), which they do. And you've got the table probability residual plot; again the bars should ideally overlap the zero line, or be as close to it as possible. Then you've got your posterior density plots and trace plots as before; if we make these plots bigger so we can actually see them, we can see these are all okay: they're all unimodal, and the trace plots are also fine, because the chains all overlap, which shows good mixing. On the right, as in the other tabs, we've got an SROC plot. This shows the summary estimates for the index test, the red dot here, and for the reference tests, and we've also got the study-specific estimates for the index test. These aren't observed estimates this time; they're model estimates, because we're not assuming the reference test is 100% perfect, so the model estimates what the sensitivity and specificity should be for the index test in each study. For the reference tests, because they're fixed, we only have summary estimates, with no study-specific points. As before, there are various options you can choose for this plot.

Going back to the model set-up and priors tab, let's try running a model without assuming fixed reference tests. So I've just run a model in which the reference tests and the index test are all assumed to be random effects. If we go to the model diagnostics tab, we can see that although there aren't any divergent transitions or iterations that exceeded the maximum treedepth, and the R-hats are all good, if we go down to the posterior density plots some of them are bimodal: this one here on the right, for example, has two peaks. This isn't okay and we can't really use this model; it means the model is not identifiable. Having more prior information can sometimes resolve this, so more informative priors for the reference tests might help; in this case they still didn't seem to help much, and you can see that in the associated paper.

So, to summarise: we created a web application using R Shiny and Stan which has the benefits of previously proposed applications while addressing several of their limitations. It is accessible to researchers who don't have the programming expertise required to fit complex test accuracy meta-analysis models, but it's also suitable for experienced analysts, and we anticipate that MetaBayesDTA will appeal to a wide variety of researchers due to its user-friendliness. Thanks for listening.