Now I'm happy to introduce Beth Clark, who will be talking about the prevalence of replications in psychology. Okay, thanks, David. Hi, everyone, I'm Beth. I'm a PhD student at the University of Melbourne. And today I'm gonna be talking about the prevalence of replications in psychology. So where I come from in psychology, we talk a lot about replications. And we really like to talk about replications in our published articles, especially when we talk about our limitations. So I picked out a couple of examples of that. This one says, it's an open question as to whether these results would directly replicate. Additional work is clearly needed to replicate these findings. Future research should replicate the present findings. It would be desirable to replicate the findings of the current study. So you get the point. Of course, I've cherry-picked those examples, but I think they lead us to an interesting question. And that's, if we're making all these calls for our studies to be replicated, is there actually someone out there doing the replicating? Now, a lot has changed over the last decade, particularly on the front of replication. So it's really an open question as to whether things have changed. You might think that they have, but that's an empirical question. And we were motivated by that to examine whether all of this talk about replication has actually had a tangible impact on the psychology literature. And so we did that by investigating how often replications were published in psychology over the last decade or so. Now, people often mean different things by replication. So let me tell you how we defined replication. We used a framework that's pretty widespread in psychology, although there is some debate. And that's the distinction between direct and conceptual replications. So our definition of a direct replication is where you follow the methods of a previous study as closely as you can to see whether you can find the same result.
By contrast, in a conceptual replication, you intentionally change some aspect of the original study to see if the result also holds in a new context. Now, of course, that's an oversimplification, but I'm going to use this distinction for the sake of time. And the important thing that I really want to flag here is that for our study, we only focused on direct replications. And we did that because we wanted to understand how much they're valued by our field. And one way of doing that is to see how much real estate they get in the published literature. All right, so what did we do? We looked at the top 100 highest impact journals in psychology, and after excluding review journals, we were left with 78 journals. And in those 78 journals, there were about 82,000 articles that were published between 2010 and 2021. Now, because 82,000 articles is just way too many to check over, and because direct replications in psychology tend to call themselves replications, what we did is we narrowed down that sample a little bit by identifying articles that used the term stem "replicat" in their title, abstract, or keywords. And that left us with 3,229 articles. And then one person from our team read over all of these articles. They just read the titles and abstracts and identified which of them were direct replications. And so basically, to count as a direct replication in our study, the article had to really sell itself as a direct replication, and that had to be quite a prominent part of the article, at least based on the title and abstract. So what did we find? In our sample of 82,775 articles, 169 were direct replications. And that reflects a prevalence rate of 0.2%. To make that a little bit more intuitive, let me show you what that looks like. So just to emphasize here, that's not 20%, it's not even 2%, the prevalence rate is just one-fifth of a percent. And that means that one in 500 articles was a direct replication.
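As a quick sanity check on the arithmetic above, a minimal sketch, using only the counts reported in the talk (the variable names are my own):

```python
# Counts reported in the talk
total_articles = 82_775      # articles in the 78 journals, 2010-2021
direct_replications = 169    # articles that sold themselves as direct replications

prevalence = direct_replications / total_articles
print(f"{prevalence:.2%}")                        # 0.20%, i.e. one-fifth of a percent
print(f"1 in {round(1 / prevalence)} articles")   # roughly 1 in 490, i.e. about 1 in 500
```

So the "one in 500" figure is a round number; the exact ratio is closer to one in 490.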
Now, to me that seems incredibly low, but it might not be all that surprising, since we were only interested in direct replications, and our field probably values conceptual replications a lot more than direct replications. Now, in terms of where those 169 replications came from, the majority of them, 100 out of 169, were published in just six journals. And that really means that just a handful of journals are publishing the bulk of these direct replications. On the flip side, there were 44 journals in our sample, so a little bit over half, that didn't publish a single direct replication between 2010 and 2021. So basically, there are a few journals that are publishing a lot of direct replications and a lot of journals that aren't publishing any direct replications. But because this was a really tumultuous time, a lot of people are probably wondering whether things have changed over time. So let's break that down by year. So unsurprisingly, we start things off with really low numbers. The highest number of direct replications published in any of the early years was just five, in 2011. And then in later years, things start to look a bit better. You can see here that replications peaked in 2020, with 46 direct replications being published that year. Now, you'll also notice that this change isn't huge. For instance, if you look at the Y-axis, it really doesn't get close to 1%. But this increase was statistically significant at the conventional threshold. It's not a tiny p-value, though. So I think we need to be careful about the conclusions that we draw from that, because I don't necessarily think that this reflects a super robust trend. And that's partly because it's hard to say whether this trend will continue into the future. So based on what I've already told you, this increase seems to hinge on just a few journals.
And it's easy to see how those numbers could have been driven by a few journals, and potentially by a few special issues, in those later years. It's also possible that the bump that we see in 2020 reflects the peak of all the buzz around replication, and that things started to dip a little after that peak in 2021. And then once you account for how long it takes to publish an article, maybe that peak occurred a couple of years earlier. Of course, we'd really need more data from future years to know for sure. But it's still an open question as to whether this replication rate is going to continue to grow, whether it'll stabilize, or maybe even recede to pre-replication-crisis levels. So what does this all mean? To make sense of these results, I think we need to disentangle two critical questions that are at the heart of this. And the first question is an empirical question. How many replications are there in the published literature? I've given you one narrow answer to that question. And I think that that answer gives us some information about how much my field values the practice of direct replications. But there's another big question that I've mostly avoided so far, and it's an evaluative question. How many replications should there be in the published literature? And I've avoided that question because it's a really tricky one to answer. I don't think that there's a magic number. But instead, I think that we in our respective fields need to consider what replication rate would be concerningly high and what replication rate would be embarrassingly low. And the answer is probably going to depend on things like how prevalent false positives are in the literature and how costly it is to replicate studies in the field. And in my opinion, I do think that the current prevalence of replications in psychology is concerningly low, but some people are going to disagree with me on that.
Either way, I think that now that we have an estimate of how we're doing, we're better equipped to have those conversations. So now that I've hedged my way around that tricky second question, I wanted to give you something to take away from this talk. So to summarize, here are two key takeaways. The first is that we seem to be paying a lot of lip service to replication. But replications themselves, or direct replications at least, don't seem to be getting as much space in the published literature. And that may or may not be concerning to you, depending on how many direct replications you think there should be in the published literature. But one thing I do find concerning is that there are only a handful of journals publishing the bulk of these direct replications. So most of these higher impact journals didn't publish any direct replications. And from all of this, I think it's clear that there are just a few journals doing all the heavy lifting. Before I wrap up, I just wanted to note a couple of important limitations. So first, we relied mostly on the title and abstract to determine whether articles were replications, and we wouldn't have counted any articles that didn't have the term "replicat" in them. Second, most articles were checked by only one person; for the subset that was double-coded, percent agreement with the second coder was high, but the reliability between the two coders was on the cusp of what is conventionally considered acceptable. And the last is that our definition of replication was quite narrow. We didn't count conceptual replications, or articles that included a direct replication of a study within that article. But if you include those things, and some people would argue that you should, then the replication rate would be higher than what I've told you today. But to wrap up, the problems that I raised aren't anything new, and we've been talking a lot about them over the last decade.
And through these discussions, I think that we've generated a lot of potential solutions. Today, I've given you another data point to help inform these discussions, but I do think that the critical question now is really, what should we do next? And I'd love to hear your thoughts. Thanks for listening, and thanks to my collaborators for all their work on this project. We have time for about two or three questions. I have a question. You said there was a very small number of journals sort of taking up the slack. Could you categorize those? Did you see any patterns or trends amongst who those were? Yeah. Well, we looked at impact factor as a predictor, and it seemed to be very close to a null effect. So it doesn't seem to be the higher or the lower impact journals that are leading the way. But one of the biggest predictors was journal policy. A replication was about eight times more likely to be published in a journal with a policy stating that it would accept replication submissions than in a journal that had no such policy or that discouraged replications. Very complicated, but yeah. Yeah, thanks. Thank you.