Welcome to this presentation. I am Victoria Nyaga, the author and maintainer of CopulaDTA. CopulaDTA is one of the products of my doctoral project, which was about optimizing statistical procedures to assess the diagnostic accuracy of cervical cancer screening tests. Why was there a need for optimization? Often a meta-analysis of DTA studies contains few studies, or the data are sparse, and the recommended model is then not identifiable or has convergence issues.

To illustrate the power of CopulaDTA, I will use a sample dataset from a user, Ying, who emailed me a couple of months ago. His email read as follows: "My 95% confidence intervals appear to begin at zero. I assume it is because of my tiny sample size, two studies, or the zeros in the false negatives. Please refer to the attached script and data files. I attempted to change the covariance, but it did not rectify the problem. It would be wonderful if you could please assist me with your expertise." That was Ying's email.

Here is the dataset from Ying, and here is the recommended model. In R, there is no package dedicated to fitting this model. And even if there were such a package, the model would not be identifiable, because there are five parameters to be estimated from four data points: the two mean logits, their two between-study variances, and the between-study correlation.

If you remember, in his email Ying mentioned attempting to change the covariance. What he meant is changing the structure of this variance-covariance matrix; in principle, what you can do is reduce the number of parameters in the model. We can reduce the number of parameters by assuming that there is no between-study correlation between sensitivity and specificity, so we fix the off-diagonal elements to zero. Here is the simplified model with zeros on the off-diagonal. We can then use the package metafor to fit a logit-normal model for sensitivity and for specificity separately. This is the forest plot for the logit specificity.
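Fitting the two univariate logit-normal models separately can be sketched in metafor as follows. Ying's actual counts are not shown in the talk, so the dataset below is a hypothetical stand-in with the same pattern (two studies, zero false negatives):

```r
library(metafor)

## Hypothetical two-study dataset in the usual DTA layout
## (TP, FN, FP, TN); Ying's real counts are not reproduced here.
dat <- data.frame(TP = c(20, 12), FN = c(0, 0),
                  FP = c(3, 0),   TN = c(25, 18))

## Logit-normal (random-effects logistic) model for specificity,
## fitted on its own: TN events out of TN + FP negatives.
fit.sp <- rma.glmm(measure = "PLO", xi = TN, ni = TN + FP, data = dat)

## Forest plot back-transformed to the probability scale.
forest(fit.sp, transf = transf.ilogit)
```

The sensitivity model is fitted analogously with `xi = TP, ni = TP + FN`, which is where the boundary problem described next appears.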
The study-specific confidence intervals have a 0.5 continuity correction because there was a zero count. And as you can see here, the confidence interval for the median specificity is very wide, from 0.23 to 1. The model for sensitivity was not identifiable because the parameter is on the border: if we go back to the studies, the observed sensitivity is one in both of them. So it is safe to assume there is no between-study heterogeneity in sensitivity. Under that assumption we can reduce the number of parameters further by fixing the between-study variance to zero. We do this in metafor by introducing the argument method = "EE" (equal effects). When we do that, we obtain this forest plot: the median sensitivity is now one, but the confidence interval spans the whole zero-to-one range.

With no more tricks up my sleeve, I went back to look for a better solution. There is actually a package dedicated to meta-analysis of diagnostic test accuracy data, but for well-behaved data. By well-behaved data, I mean a sufficiently large number of studies and parameters that are not on the border. The package is called mada. It implements the normal-normal model: it computes the study-specific logits, models them with a bivariate normal distribution, and plugs the study-specific variances into the variance-covariance matrix. Fitting our data to this model is impossible, because three of the four data points will be undefined because of the zeros: in both the first and the second study the sensitivity is one, so those logits are undefined, and the logit specificity for the first study is undefined as well. We can remove the zeros by adding 0.5 to the counts and then refit the model using the function reitsma with just the name of the dataset. Here is the resulting forest plot. The confidence intervals are certainly shorter, but the point estimates are different compared to metafor.
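The mada step just described can be sketched as below. Again the counts are hypothetical stand-ins for Ying's data; `reitsma()` expects columns named TP, FN, FP, TN and applies a 0.5 continuity correction itself when a study contains a zero cell:

```r
library(mada)

## Hypothetical two-study dataset; same layout as before.
dat <- data.frame(TP = c(20, 12), FN = c(0, 0),
                  FP = c(3, 0),   TN = c(25, 18))

## Bivariate normal-normal (Reitsma) model; the default
## 'correction = 0.5' adds 0.5 to the cell counts to remove
## the zeros before the logits are computed.
fit <- reitsma(dat)
summary(fit)   # pooled (logit) sensitivity and false positive rate
```

The continuity correction is exactly the data disturbance questioned in the next part of the talk.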
The question now is: is this the optimal solution? I would say no, because we have disturbed the data and distorted the mean-variance relationship in binomial data. CopulaDTA is a change in strategy and philosophy. A priori, we assume that the overall sensitivity and specificity are 0.5, that is, that the diagnostic test is useless, and that there is no between-study correlation, which is safe because we do not have enough data anyway. Then we feed the data into the model and update our information regarding these parameters.

To fit the recommended model, we first need to rewrite it in terms of the Gaussian copula. We specify the copula to use by passing the argument copula = "gauss" to the function cdtamodel. Under the hood, the package rstan does the hard work, so using the function fit we pass the job to rstan and wait. When rstan finishes, it gives us back the results, and we can examine the model convergence using the function traceplot. We obtain the summary estimates using the function print, and we request the forest plot with the function plot.

This is the resulting forest plot. These diamonds are the overall sensitivity and specificity. The stars are the posterior study-specific sensitivities and specificities. The big dots under the stars are the observed data points. If you can read this, the overall sensitivity is 1, spanning from 0.97 to 1, and the specificity is 0.86 with a credible interval from 0.41 to 0.99.

Would the FGM copula result in a better fit to the data? To answer this, we refit the model with the FGM copula. Once we have the results, we compare the trace plots and other statistics from the print function. This is the resulting forest plot. The sensitivity is now 0.91, spanning from 0.57 to 0.99, and the credible intervals are shorter.
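The CopulaDTA workflow just walked through can be sketched as follows. The column names and the study identifier "study" are assumptions for illustration, and the sampler settings are arbitrary, not the ones used in the talk:

```r
library(CopulaDTA)

## 'dat' stands for Ying's data, here assumed to carry the usual
## TP/FN/FP/TN counts plus a study identifier column named "study".
gauss.model <- cdtamodel(copula = "gauss")   # Gaussian copula model
gauss.fit   <- fit(gauss.model, data = dat, SID = "study",
                   iter = 10000, warmup = 1000, seed = 3)

traceplot(gauss.fit)   # examine convergence of the MCMC chains
print(gauss.fit)       # summary estimates
plot(gauss.fit)        # forest plot

## Refit with the FGM copula and compare diagnostics and estimates.
fgm.model <- cdtamodel(copula = "fgm")
fgm.fit   <- fit(fgm.model, data = dat, SID = "study",
                 iter = 10000, warmup = 1000, seed = 3)
print(fgm.fit)
```

Swapping the `copula` argument is the only change needed to move between the two fits compared in the forest plots.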
From the model diagnostics, which I do not show here, and from this forest plot, the FGM copula appears to describe the central tendency of the sensitivity and specificity better. Using Sklar's theorem, we write the bivariate distribution of sensitivity and specificity as a product of a copula and the corresponding marginal distributions. The resulting model is a binomial-normal model or a beta-binomial model, depending on the link function and the marginal distribution that you specify. For more flexibility, I implemented five copulas: the Gaussian and the FGM, which we have already seen, plus the Frank, the Clayton 90 and the Clayton 270 copulas. They differ in the nature of the between-study correlation they accommodate. With more data, you can also include covariates in the model. The code and data used in this presentation are in my repository, and here is the link. There is also a paper and a vignette with more details and demos on CopulaDTA. Thank you for listening.