Welcome to this tutorial on the R package diagmeta. I'm Guido Schwarzer, and I'm the maintainer of this package. You can find the R script and the data set that I'm using in this tutorial on Zenodo. There are two versions of the package available: the official CRAN version, as well as the GitHub version, where you can find bug fixes and new features. Let's have a brief look at the overview of the package. As written down here, it implements the method by Steinhauser et al.; I will have a brief look into this method in a minute. The main purpose is to conduct meta-analysis of diagnostic test accuracy studies with multiple cut-offs. There are not too many functions in the package; the main function is the diagmeta function. But let's first have a look at the publication underlying this R package. It is by Susanne Steinhauser, together with Martin Schumacher and Gerta Rücker from our institute in Freiburg. They proposed a new approach for meta-analysis of diagnostic test accuracy studies. What they do is estimate the distribution functions of the underlying biomarker within the non-diseased and the diseased, assuming either a normal or a logistic distribution, and they apply a linear mixed-effects model to the transformed data, so either the probit- or the logit-transformed data. The determination of the optimal cut-off is done by maximizing the Youden index. In fitting, they use a class of weighted linear mixed-effects regression models with fixed effects for the groups, the thresholds, and their interactions, and with various random-effects terms, which are assumed to follow a multivariate normal distribution. What is modelled here is basically the specificity and one minus the sensitivity, and here are the corresponding linear mixed-effects equations for that.
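As a rough sketch of those equations (notation is mine, reconstructed from the description above, so take it as an approximation of the paper's formulas): for study $a$ at cut-off $x$, with group-specific fixed intercepts and slopes, a common random intercept $u_a$ and group-specific random slopes $v_{a0}, v_{a1}$ (the CIDS variant discussed below),

```latex
% h is the probit or logit link; f(x) is the (possibly log-transformed) cutoff.
% Group 0 = non-diseased (specificity side), group 1 = diseased (1 - sensitivity).
\begin{align*}
  h\bigl(\mathrm{Sp}_a(x)\bigr)        &= \alpha_0 + \beta_0\, f(x) + u_a + v_{a0}\, f(x), \\
  h\bigl(1 - \mathrm{Se}_a(x)\bigr)    &= \alpha_1 + \beta_1\, f(x) + u_a + v_{a1}\, f(x),
\end{align*}
```

with $(u_a, v_{a0}, v_{a1})$ multivariate normal. The optimal cut-off is then the $x$ maximizing the Youden index $\mathrm{Se}(x) + \mathrm{Sp}(x) - 1$ computed from the fixed-effects curves.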
And this h here is the transformation, either for the normal distribution (probit) or for the logistic distribution (logit). Estimation is done using the lmer function with REML estimation and inverse-variance weights. Table 1 gives a brief overview of the models that are considered. There are several models, among them the CIDS model, with a common random intercept — that's the CI — and different random slopes — that's the DS part. This is the model that we will have a closer look into. So let's go back to our example, or first to the function. What we see here are the arguments of the function. What we have to provide are the true positives, the false positives, the true negatives and the false negatives — basically the two-by-two table — and also the cut-off that this two-by-two table is based on, and the study labels, because we can have more than one row per study; this is what we do with multiple cut-offs. Then the distribution, either logistic or normal, and the model — the default here is common intercept, common slope, but we will use a different model — and some additional arguments that I do not want to go into detail on now. So let's go back. First, we define that sensitivities, specificities and so on should be printed with three digits. Then we load the data of our example. It's a meta-analysis of diagnostic test accuracy studies that try to differentiate between bacterial and viral meningitis in children. So let's load the data and have a look at some rows of the data set. As you can see, these are the last entries; in total we have 17 rows. The last entry is from a study that only has a single cut-off value, but we can see that the study above it has at least four. Actually there are even more — I think there are seven — and we can check this by using this command here, which gives an overview of all studies and all available cut-offs. Yes, I was right.
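A minimal sketch of this data-loading step (the file name and the column names `studlab`, `cutoff`, `TP`, `FP`, `TN`, `FN` are my assumptions — the actual script on Zenodo may use different names):

```r
# Sketch, assuming the meningitis data set is available as a CSV file
# with one row per study-cutoff combination (column names are assumptions)
options(digits = 3)                        # print estimates with three digits
dat <- read.csv("meningitis.csv")

tail(dat)                                  # look at the last rows (17 rows in total)
table(dat$studlab)                         # number of cutoffs per study
with(dat, tapply(cutoff, studlab, sort))   # all available cutoffs, by study
```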
So there are seven cut-off values for that study. Here are the two studies that only provide a single cut-off, and we see that there are three studies with two cut-offs as well. Overall, I would say this is a data set that is not that big, and as we will see when we run the model, we also have some estimation problems. The Steinhauser method, as I've described before, is available in this diagmeta function. The first six arguments are the ones that you should — or must — provide: the four cell counts of the two-by-two table, the cut-off value that this two-by-two table corresponds to, and also the study label, to know which study this two-by-two table belongs to. Then the data set, and the model — here common random intercept, different random slopes — and the logistic distribution. We also specify that we do not want to log-transform the cut-off values. As we can see, these two arguments, the distribution and the log-transformation of the cut-offs, are the defaults of the function; the only thing we change is the model. If we run this, we get a warning, or note, about a boundary (singular) fit. Nevertheless, let's have a look at the results for this model. Here is the summary, stating that we have in total eight studies, 17 cut-off values, and 10 different cut-offs; this is the model we used, with the corresponding distribution, and so on. And this is the first result: the optimal cut-off value is 0.74. We also get the sensitivity and specificity estimates with confidence intervals for this optimal cut-off, and the area under the curve with a confidence region, either for sensitivity given specificity or the other way around. The AUC value is always the same — it's very high, as we can see here — but the confidence intervals are somewhat different. Next, we use the diagstats function. Let's run it just on our diagmeta object.
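The call described above would look roughly like this (the data-frame and column names are my assumptions, carried over from the loading step; `model`, `distr` and `log.cutoff` are the diagmeta arguments being discussed):

```r
library(diagmeta)

# Fit the Steinhauser et al. model: common random intercept,
# different random slopes (CIDS); the last two arguments are the defaults
diag1 <- diagmeta(TP, FP, TN, FN, cutoff, studlab,
                  data = dat,
                  model = "CIDS",
                  distr = "logistic",
                  log.cutoff = FALSE)
# Note: with this small data set, lme4 may report a
# "boundary (singular) fit" message, as in the tutorial

summary(diag1)   # optimal cutoff, Se/Sp with CIs, AUC with confidence regions
```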
What we see here is a listing of the sensitivity and specificity estimates, standard errors and confidence limits for a given cut-off value. If we look into the function, we see that there are some arguments that we could use to get different estimates. For example, we are not only interested in the optimal cut-off value; we could specify others. We could also provide a prevalence value, and if we do, we additionally get positive and negative predictive values. This is what we are doing in the following examples. I'm using the dplyr package in order to select only certain variables of the printout. Here, in addition to the optimal cut-off, we would like to see the sensitivity and specificity values for some other cut-offs, and they are printed here. As I said when I looked at the theoretical paper, the optimal cut-off is based on maximizing the Youden index, but you might also want to see how the other cut-off values compare. In the next example, I'm looking at prevalence values from 10% to 50% and at how the positive and negative predictive values behave for these values. We can then see the corresponding values and interpret them according to our clinical question. Now, concerning plots, let's first have a look at the default plot for a diagmeta object. We get four figures. This one is labelled survival curves, but actually these are one minus the cumulative distribution functions; the filled dots are the bacterial meningitis cases, and the open circles are the viral ones. Next we see the maximization of the Youden index, plotted here, and the bold line marks the optimal cut-off value. And here are the ROC curves and the SROC curve.
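A hedged sketch of these diagstats calls — the `cutoff` and `prevalence` arguments are as described above, but the chosen cut-off and prevalence values are illustrative, and the output column names used with dplyr are my guess (check `names()` on the result):

```r
# Statistics at the optimal cutoff (default call)
diagstats(diag1)

# Se/Sp at additional cutoffs of interest (illustrative values)
diagstats(diag1, cutoff = c(0.5, 0.74, 1, 2))

# With a prevalence, positive and negative predictive values are returned too;
# here: prevalences from 10% to 50%
res <- diagstats(diag1, cutoff = 0.74,
                 prevalence = seq(0.1, 0.5, by = 0.1))

# Select only certain variables of the printout with dplyr
library(dplyr)
res %>% select(cutoff, prevalence, PPV, NPV)   # column names are my assumption
```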
As I said before, we have rather large values for the AUC, and this is clearly visible here as well. So now, as a final step, let's have a closer look at the SROC curve. With the which argument we select the figure that is of interest for our purposes; we would also like to print all points in black, and so on, and the corresponding result then looks like this. So, that was a quick introduction to the diagmeta package. There are other functionalities, for example plotting confidence regions and so on, but I think that should be enough for the moment. Thank you.
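For reference, the plotting calls from this last part might look roughly like this — argument names are from my recollection of plot.diagmeta, so check the help page before relying on them:

```r
# Default plot: four panels (survival curves, Youden index,
# ROC curves, SROC curve)
plot(diag1)

# Only the SROC curve, with all study points printed in black
# (which and col.points as I recall them; see ?plot.diagmeta)
plot(diag1, which = "sroc", col.points = "black")
```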