I'm going to be talking about the R package PublicationBias, which conducts simple sensitivity analyses for publication bias in meta-analyses. The overall goal of this package is to answer questions of the form: how severe would publication bias have to be in order to explain away the results of a meta-analysis?

Here's an example of a meta-analysis where publication bias was a serious concern. Anderson et al. looked at the association of playing violent video games with increased aggressive behavior across 75 studies, and they found a pooled correlation of 0.16 with a confidence interval of 0.14 to 0.17, indicating some positive association. However, the possible impact of publication bias was hotly debated, and actually continues to be debated for this very same paper even 10 years later, with some commenters saying that possibly this whole effect was entirely due to publication bias. So we're going to look today at whether that's actually plausible: could it be that all of this effect is just publication bias?

The approach we're going to show you in this package is called sensitivity analysis. The way it works is that we consider a publication process in which statistically significant positive results are more likely to be published than negative or non-significant results by some unknown ratio, which we call the selection ratio. Going forward, I'll call statistically significant positive results "affirmative" and negative or non-significant results "non-affirmative". The methods then make statements of the following form: in order for publication bias to shift the observed point estimate to the null, significant positive results would need to be at least, say, 30-fold more likely to be published than negative or non-significant results. We can then think about whether that selection ratio of 30-fold is plausible, or whether it's too high to be plausible in practice.
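To make the affirmative/non-affirmative distinction concrete, here is a minimal standalone sketch (in Python rather than R, and not code from the package) that classifies a study estimate, assuming a two-sided z-test at the 0.05 level:

```python
import math

def is_affirmative(estimate, se, alpha=0.05):
    """Classify a study as 'affirmative' (statistically significant AND
    positive) versus 'non-affirmative' (non-significant or negative).
    A sketch of the classification described above, not package code."""
    z = estimate / se
    # two-sided p-value from the standard normal CDF via math.erf
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return estimate > 0 and p < alpha

print(is_affirmative(0.30, 0.10))   # significant and positive -> True
print(is_affirmative(-0.30, 0.10))  # significant but negative -> False
print(is_affirmative(0.10, 0.10))   # positive but not significant -> False
```

Only the combination of significance and the hypothesized direction counts as affirmative; everything else, including significant results in the wrong direction, is non-affirmative.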
To give some visual sense of how this publication process works, this is a funnel plot of some simulated data prior to the introduction of any publication bias, and I'm color-coding the studies' estimates by whether they're affirmative (significant and positive) or non-affirmative (negative or non-significant). What happens when I introduce publication bias? Let's introduce publication bias with a selection ratio of 10. What's happened is that the non-affirmative results each have basically a 10% chance of getting published, and so we see this thinning out of non-affirmative results in the plot.

A brief intuition behind the statistical methods that deal with this is that they use inverse-probability weighting to upweight the observed non-affirmative studies, compensating for their underrepresentation due to publication bias. The theory and precise statistical assumptions behind how this works are available in the statistics paper down here. Another thing you can consider using this approach is the possibility of worst-case publication bias: publication bias that favors affirmative studies essentially infinitely over non-affirmative studies. Obtaining an estimate corrected for this kind of worst-case publication bias turns out to coincide with meta-analyzing only the observed non-affirmative studies. Again, the theory behind why that works is in this paper.

The reason we wanted to add another publication bias method to the literature is to try to deal with a few disadvantages that can be troublesome with existing methods. One is that we tried to assume a fairly realistic, although still simple, model of publication bias in which publication is really selecting for statistical significance.
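Going back to the inverse-probability-weighting intuition: here is a small self-contained simulation sketch (Python, not package code) under a deliberately simplified setup — a common true effect, equal standard errors across studies, and a fixed-effects pooled mean. The package's actual methods handle heterogeneity and clustering; this only illustrates why upweighting each observed non-affirmative study by the selection ratio compensates for the bias.

```python
import random

random.seed(1)

ETA = 10         # assumed selection ratio: affirmative results 10x likelier to publish
TRUE_MEAN = 0.1  # true common effect (fixed-effects simplification)
SE = 0.2         # common standard error for all simulated studies

def affirmative(y, se):
    # statistically significant (two-sided z-test at 0.05) and positive
    return y > 0 and y / se > 1.96

# simulate studies, then apply the selective publication process:
# affirmative studies are always published; non-affirmative ones
# survive with probability 1/ETA
published = []
for _ in range(5000):
    y = random.gauss(TRUE_MEAN, SE)
    if affirmative(y, SE) or random.random() < 1 / ETA:
        published.append(y)

# naive pooled estimate: plain mean (all studies share the same variance)
naive = sum(published) / len(published)

# IPW-corrected estimate: upweight each observed non-affirmative study
# by ETA to compensate for its 1/ETA publication probability
weights = [1 if affirmative(y, SE) else ETA for y in published]
corrected = sum(w * y for w, y in zip(weights, published)) / sum(weights)

print(round(naive, 3), round(corrected, 3))
```

Running this, the naive pooled mean lands well above the true effect of 0.1, while the weighted mean lands close to it — the same thinning-out and re-inflation you can see in the funnel plots.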
We also wanted to avoid assuming that very large studies are immune to publication bias, to have methods that work even for small meta-analyses, and to accommodate heterogeneity without assuming that the effect sizes are normally distributed.

Okay, so here is the R package. First I'm just going to answer two simple questions. Going back to the video games meta-analysis, we might first ask: how much publication bias would it take to completely explain away the effect? What I'm doing here is calling a function called svalue, passing the point estimates and variances from my studies along with a little additional information, such as whether there are clusters, the value to which I want to shift the point estimate, and the fact that I've assumed publication bias favors positive rather than negative correlations. The first two return values, sval.est and sval.ci, are telling me that no amount of publication bias of this assumed type could entirely explain away the point estimate, nor even shift its confidence interval to include the null. So in that sense, this meta-analysis is actually rather robust to publication bias.

The other question we might ask is: what about worst-case publication bias? What would it look like if publication bias were favoring affirmative results to an almost infinite degree? That same call to svalue also returns a worst-case meta-analysis, meta.worst. This is on the Fisher z scale, but transforming back to Pearson's r, it's telling us that under worst-case publication bias the pooled estimate would decrease to a correlation of 0.08, with a confidence interval bounded above 0.05. That's about half as large as the uncorrected estimate, but it's still positive even under this very severe publication bias.

The R package can do other things. You can consider shifting the point estimate to some other value, not zero. You can do bias-corrected estimation for a specific selection ratio.
You can consider other mechanisms of publication bias, and you can make visualizations that help show the difference between the worst-case estimate and the naive estimate. A last point is that interpreting these analyses depends on thinking about how much publication bias is actually plausible in practice; we have an empirical paper where we tried to give some empirical benchmarks for doing that. And here I'll just leave you with links to the papers, the R package, and my contact information.
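To make the S-value and worst-case ideas above concrete, here is a hedged Python sketch under a fixed-effects simplification, with made-up data. This is NOT the package's implementation (which works on the Fisher z scale with robust methods, clustering, and heterogeneity); it only shows the defining logic: the S-value is the selection ratio eta at which the eta-upweighted pooled estimate equals the target value q, and the worst-case estimate pools only the non-affirmative studies.

```python
import math

def svalue_sketch(ys, vs, q=0.0):
    """Selection ratio eta at which the IPW-corrected fixed-effects pooled
    estimate equals q. A simplified illustration of the S-value idea,
    not the PublicationBias package's svalue() implementation."""
    # affirmative = statistically significant (|z| > 1.96) and positive
    aff = [y > 0 and y / math.sqrt(v) > 1.96 for y, v in zip(ys, vs)]
    # inverse-variance weight sums, split by affirmative status
    Aw = sum(1 / v for (y, v), a in zip(zip(ys, vs), aff) if a)
    Ay = sum(y / v for (y, v), a in zip(zip(ys, vs), aff) if a)
    Nw = sum(1 / v for (y, v), a in zip(zip(ys, vs), aff) if not a)
    Ny = sum(y / v for (y, v), a in zip(zip(ys, vs), aff) if not a)
    # solve (Ay + eta*Ny) / (Aw + eta*Nw) = q for eta
    denom = q * Nw - Ny
    if denom <= 0:
        # no finite selection ratio can shift the estimate down to q
        return math.inf
    return (Ay - q * Aw) / denom

def worst_case(ys, vs):
    """Worst-case corrected estimate: pool only the non-affirmative studies."""
    aff = [y > 0 and y / math.sqrt(v) > 1.96 for y, v in zip(ys, vs)]
    Nw = sum(1 / v for (y, v), a in zip(zip(ys, vs), aff) if not a)
    Ny = sum(y / v for (y, v), a in zip(zip(ys, vs), aff) if not a)
    return Ny / Nw

# hypothetical data: three affirmative and two non-affirmative studies
ys = [0.5, 0.6, 0.4, -0.10, -0.05]
vs = [0.01] * 5
print(svalue_sketch(ys, vs))  # selection ratio needed to shift the estimate to 0
print(worst_case(ys, vs))     # estimate under worst-case publication bias

# if the non-affirmative studies alone pool to a positive value, no finite
# selection ratio can explain the estimate away -- the situation the talk
# describes for the video games meta-analysis
print(svalue_sketch([0.5, 0.6, 0.10, 0.05], [0.01] * 4))
```

In the first data set the sketch returns an S-value of 10 and a negative worst-case estimate; in the second, it returns infinity, mirroring the "no amount of publication bias could explain this away" conclusion.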