Hello and welcome. This is the tutorial about the R package puniform. My name is Robbie van Aert, and I am the author and developer of the R package puniform. Over here you see the links to the package on CRAN and on GitHub. In this tutorial I will focus on two of the methods that are in the package: p-uniform and p-uniform*. At the end of this tutorial, I will tell you a bit more about the other methods that are also included in the package.

What the p-uniform and p-uniform* methods do is correct for publication bias. Publication bias is the selective publication of studies with a statistically significant outcome. In its most extreme case, this means that studies with a statistically significant effect size get published, and studies with a statistically non-significant effect size do not get published. There is actually quite some evidence for publication bias in the literature. On the right-hand side of this slide, you see a figure that indicates the percentage of papers that report support for the main hypothesis tested in the paper. In the last row, for psychiatry and psychology, you see that about 90% of papers report support for the main hypothesis. What we also know from the literature is that the average statistical power in psychology is between, let's say, 20% and 50%. These two findings, the large percentage of support for the main hypothesis on the one hand and the low average statistical power of 20% to 50% on the other, are just not in line with each other. A possible explanation is publication bias.

The consequences of publication bias are severe, also for meta-analysis. First of all, it yields overestimated effect sizes in the individual studies and also in a meta-analysis that combines these studies. It also yields a false impression of whether an effect exists at all. Imagine a researcher interested in a particular relationship between two variables who keeps seeing statistically significant effects in the literature that are only due to publication bias; this reader gets a false impression about the existence of such an effect.

I am going to apply the p-uniform and p-uniform* methods to correct for publication bias and to show you how these methods work. I will do this using an example about the efficacy of cognitive behavior therapy (CBT) for treating pathological and problem gambling, with data from a meta-analysis by Pallesen and colleagues. There were two groups in the studies that were synthesized in this meta-analysis: the experimental group received CBT, and the control group received no treatment. The meta-analysis consists of seven studies, seven standardized mean differences, where a positive effect size indicates a smaller financial loss for the experimental group.
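As a reference point for the corrected estimates discussed below, the studies can first be combined with a standard random-effects meta-analysis. Here is a minimal sketch of that analysis with the metafor package, assuming the standardized mean differences and their sampling variances are stored in a data frame dat with columns yi and vi (the same layout used for the puniform calls later on):

```r
library(metafor)

# Random-effects model with the REML estimator for the between-study variance
res <- rma(yi, vi, data = dat, method = "REML")
summary(res)  # average effect size, test of no effect, tau^2, and Q-test

forest(res)   # forest plot of the seven studies plus the summary effect
```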
Okay, let's first take a look at the individual studies and also apply the random-effects model, using the metafor package and the restricted maximum likelihood estimator for estimating the between-study variance. If we focus on the individual studies first, we see that two studies overlap with the vertical dotted line at zero, indicating that there is no statistically significant effect in these two studies. All the other studies do not overlap with this vertical line at zero, indicating a statistically significant finding. If we fit the random-effects meta-analysis model, we also observe a statistically significant result, because the confidence interval does not overlap with zero, and we get an estimate of the average effect size of 0.52. This indicates an effect of about medium size, if we use the conventional thresholds for interpreting effect sizes. The between-study variance is estimated as zero, and if we test the null hypothesis of no heterogeneity, the p-value is 0.69, so there is also no evidence for heterogeneity: we cannot reject the null hypothesis of no heterogeneity.

Now I am going to explain how the p-uniform method works, and I will first do that with an example in which I repeatedly simulate data of studies and each time compute a right-tailed p-value. The true effect size is equal to zero, which means that I am computing p-values for data generated under the null hypothesis. So let's generate some studies; it starts like this. Let's pause for a moment. If I had to say something about the distribution of these generated p-values, I would say they seem to follow a uniform distribution. It is not perfectly uniform, but it starts to look like one. Let's add some extra studies, and you see that the more studies are generated, the more this distribution looks like a uniform distribution. This is exactly the property the p-uniform method uses: p-values are uniformly distributed under the null hypothesis.

The p-value distribution looks different for other effect sizes. If the true effect size is, for instance, 0.2, we get a right-skewed distribution. If the effect size is 0.5, we get a right-skewed distribution that is more extreme than the one for 0.2. The opposite holds for negative effect sizes: we get a left-skewed distribution for an effect size of -0.2, and the distribution becomes more left-skewed for an effect size of -0.5. The p-uniform method makes use of this property for estimating the effect size. More technical details can be found in these two papers.
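A small simulation along these lines can be written in a few lines of R. This is only a sketch of the idea demonstrated in the video; the number of studies and the sampling variance are arbitrary choices of mine, not values from the tutorial:

```r
set.seed(123)

k  <- 10000   # number of simulated studies (arbitrary)
vi <- 0.04    # sampling variance of each study (arbitrary)

# Observed effect sizes and right-tailed p-values under the null
# (true effect = 0): the p-values are uniformly distributed
yi <- rnorm(k, mean = 0, sd = sqrt(vi))
hist(pnorm(yi, mean = 0, sd = sqrt(vi), lower.tail = FALSE))

# With a true effect of 0.2 the p-value distribution becomes right-skewed,
# and more extremely so for a true effect of 0.5
yi <- rnorm(k, mean = 0.2, sd = sqrt(vi))
hist(pnorm(yi, mean = 0, sd = sqrt(vi), lower.tail = FALSE))
```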
Let's now turn back to the gambling example, and remember that the effect size estimate in that example was about 0.52. You probably noticed that we first need to install the package, with this line of code, and then load it. Then we can use the puniform function to apply the method. We need to provide input for the argument yi: these are the effect sizes, which are in the data frame dat. We also need to provide the sampling variances, vi, which are also in the data frame dat. And finally, we need to specify the side argument, which can be either right or left and indicates whether the significant effect sizes are in the right or the left tail of the distribution.

Let's have a look at the results. The effect size estimate of the method is about 0.22, which is substantially smaller than the 0.52 that we obtained with the random-effects model. With the random-effects model we had, let's say, an effect size of medium size, and now it is an effect size of small size. Over here we get a confidence interval, and this is a test of the null hypothesis of no effect. In this case we cannot reject the null hypothesis of no effect, with a p-value of 0.25. And this indicates the number of statistically significant effect sizes. You can also get a publication bias test based on the p-uniform method: over here is a test statistic, and this is the p-value for testing whether there is publication bias. Here we cannot reject the null hypothesis of no publication bias. In this case we supplied the puniform function with the effect sizes and the sampling variances, but you could also supply, for example, the correlations together with the sample sizes, or the means of the two groups together with the standard deviations and sample sizes, and then the function computes the effect sizes for you.

There are drawbacks to the p-uniform method, as we realized while working on it. The p-uniform method yields overestimated effect sizes in case of heterogeneity in the true effect size. It is also an inefficient method, because not all available information is used: the method only makes use of the statistically significant effect sizes. For that reason we improved the method, and we call the improved method p-uniform*. It is an improvement because, first of all, it enables estimating and testing for heterogeneity in the true effect size, and because of that it also no longer overestimates the effect size in case of heterogeneity. The second advantage is that it takes both significant and non-significant effect sizes into account, and it is therefore a more efficient method that uses all the available information. You can apply the method with the function puni_star, which is also in the puniform package. If you use this function, you again need to provide the effect sizes and the sampling variances of the effect sizes, and you also need to specify whether the significant effect sizes are in the right or the left tail of the distribution.
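To recap in code, the two analyses of the gambling example look like this. This is a minimal sketch, assuming the effect sizes and sampling variances are again in the data frame dat; the commented-out line shows the alternative input route mentioned above, with hypothetical column names ri and ni:

```r
# install.packages("puniform")  # once, from CRAN; development version on GitHub
library(puniform)

# p-uniform: based only on the statistically significant effect sizes
puniform(yi = dat$yi, vi = dat$vi, side = "right")

# p-uniform*: also uses the non-significant effect sizes and estimates
# the between-study variance
puni_star(yi = dat$yi, vi = dat$vi, side = "right")

# Alternative input: let the function compute the effect sizes itself,
# e.g. from correlations and sample sizes (hypothetical columns):
# puniform(ri = dat$ri, ni = dat$ni, side = "right")
```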
The output of the puni_star function looks like this. We see that seven studies are included, k equals 7, and of course still five significant studies. We now get an effect size estimate of about 0.40, which is still quite a bit lower than the 0.52 of the random-effects model. We again get a confidence interval, and we also get a test of the null hypothesis of no effect, which is now rejected, with a p-value of .02. The between-study variance is also estimated by the method; with p-uniform* it is estimated as zero. This is a confidence interval for the between-study variance estimate, and this is a test of the null hypothesis of no heterogeneity. Remember that in the random-effects model the between-study variance was also estimated as zero; it is the same with the p-uniform* method.

So, to conclude regarding this example: the estimate of the average effect size is considerably smaller when we use the p-uniform and p-uniform* methods, that is, when we correct for publication bias, and the null hypothesis of no effect was rejected with p-uniform* but not with the p-uniform method. Correcting for publication bias clearly matters here, with the corrected effect sizes being quite a bit lower.

We also developed web applications for the p-uniform and p-uniform* methods. The reason for this is that some users may not be familiar with R, and for them it is more straightforward to use these web applications. That is why we have also implemented the methods there. As for future developments, we have in mind to extend the methods such that the p-uniform* method, for example, does not only draw a distinction between significant studies on the one hand and non-significant studies on the other. Currently the method only treats significant and non-significant studies differently; we want to extend this to more intervals. You can, for example, imagine that the probability of publishing a study is also different for, say, a study with a positive effect compared to a study with a negative effect. This is something that could also be included in the method. And we are already working on the inclusion of moderators in the model, so that you can also correct for publication bias if you are interested in the effects of moderators in a meta-analysis.

This was only a very small part of the functionality of the package; there are also other methods included. For example, there are two methods for meta-analyzing an original study and a replication study. What these methods do is take into account that there is likely bias in the original study but not in the replication study, and they then combine the two by means of a meta-analysis. There is a frequentist method, which is called the hybrid method of meta-analysis; that is the hybrid function in the package. And there is a Bayesian method that computes posterior model probabilities for different effect sizes; this method is called the snapshot Bayesian hybrid meta-analysis method, and you can use it via the snapshot function. We have also created a plot called the meta-plot. This plot can be used instead of a funnel plot and is based on cumulative meta-analyses. And finally, there are also some functions in the package to correct for outcome reporting bias, using the CORB method.

So that's it: a small illustration of what you can do with the package. I would like to thank you for your attention. Over here are the links to the package, on CRAN over here and on GitHub over here. And here is an overview of the references that I used during the presentation. Most of this work, especially on the p-uniform and p-uniform* methods, is joint work with my colleague Marcel van Assen, also from Tilburg University. Thank you for your attention.