Up next, we have Tom Hardwicke on how journals should handle scientific criticism. Okay, hi everybody. So, we now know from a pretty substantial body of meta-research that poor-quality studies frequently survive the peer review process and make it into the published literature. And this highlights an important role for scientific criticism after research has been published, which I'll call post-publication critique. Post-publication critique is useful because it can highlight errors, limitations, or alternative interpretations that might impact the claims made in a published paper. And this makes it an important mechanism of scientific self-correction, or what Robert Merton called organized skepticism. Currently, post-publication critique is fragmented across the scientific ecosystem. We have it in informal conversations between colleagues, the Q&A of academic presentations like this one, various social media outlets like Twitter and personal blog posts, dedicated commenting platforms like PubPeer, and of course in letters to the editor and commentaries that are published in academic journals. These different modes of discourse all have relative advantages and disadvantages in terms of their formality, their speed, persistence, and discoverability, the amount of oversight, and the career incentives that they offer. Today I'm just going to focus on the role of journals in handling post-publication critique. That's because I think journals are particularly interesting: they're a key leverage point in the scientific ecosystem that we can intervene on if we want to initiate positive changes. There's also plenty of anecdotal evidence suggesting that journals often stifle critique rather than encourage it. I often see tweets like this one. This example is from Marcus Crede.
And he says: I submitted a commentary on a PNAS article published eight months ago, but the journal has refused to consider the commentary because more than six months have passed since the paper was published. Apparently major errors discovered after that time period do not need to be corrected. So, prompted by anecdotal evidence like this, my colleagues and I decided to conduct a systematic investigation of post-publication critique. The goal of the study was to describe how top-ranked journals across scientific disciplines handle post-publication critique. We first examined journal policies: what formats of post-publication critique do they accept, if any, and what limitations do they impose on those formats in terms of length or time to submit? We then looked at how often journals that accept post-publication critique in principle actually publish it in practice. In the interests of tractability, we used a particular operational definition of post-publication critique: any journal-based avenue for sharing peer-initiated critical discourse related to specific research articles previously published in the same journal. A prototypical example would be a letter to the editor, but we also found other formats that met this definition, like longer commentary articles and online web comments. We aimed for a sample of journals that spanned the full breadth of scientific inquiry, so we relied on a Web of Science schema that divides science into 22 high-level disciplines and looked at the top 15 journals ranked by impact factor in each of these disciplines. That's a total of 330 journals. To summarise the methods in brief: to assess journal policy, we extracted information from journal websites; to assess the prevalence of post-publication critique, we checked a random sample of 10 research articles that had been published in each of the journals that accepted post-publication critique in principle.
That was 2,066 articles in total. The extraction and classification were all performed in duplicate, with disagreements resolved through discussion and a third team member arbitrating if necessary. So, as to the question of how many journals actually have a format for submitting post-publication critique: 123 journals, that's 37% of those we examined, didn't have any option for post-publication critique. Of the 207 journals that did have such a format, most of those were letters, followed by commentaries and then web comments. You can see that there was considerable variation across disciplines. Most notably, journals that offered post-publication critique were most common in the health-related and biomedical domains, especially clinical medicine, where all 15 of the journals we examined offered some form of post-publication critique. When journals did have a post-publication critique format, they often imposed limits on the length of those submissions. This graph shows that 82 of the critique formats didn't say whether there were length limits or not. Five of them stated length limits in a qualitative manner, for example saying that submissions needed to be short. And 162 of the critique formats had a specific quantitative length limit. This larger graph is a histogram and a box plot showing those quantitative length limits. The median limit was a thousand words. The strictest limits we encountered were 175 words for letters at the New England Journal of Medicine, 200 words for letters at the Journal of Neurology, and 250 words for letters at the Lancet. Some journals also imposed limits on the time allowed to submit since publication of the target article. This graph shows that 162 critique formats didn't say whether there were time limits or not. 31 stated time limits in a qualitative manner, for example saying that they had to be about recently published articles. And 49 critique formats had a quantitative time limit.
Again, the larger graph here is a histogram and a box plot showing those quantitative time limits. The median limit was just under three months. The strictest limits we encountered were two weeks at the Lancet and three weeks at the New England Journal of Medicine. Now to the question of how prevalent post-publication critique actually is. Of the random sample of 2,066 research articles, we found 39 that were linked to at least one post-publication critique. That's a prevalence estimate of 1.9% overall. Again, there was variation across disciplines, as you can see from the graph, which shows the prevalence for each individual discipline. Clinical medicine published the most by far: 13% of the research articles we examined in that domain were linked to a post-publication critique. In 15 of the disciplines we looked at, none of the research articles we examined were linked to a post-publication critique. So, in summary: a considerable number of journals did not have any format for submitting post-publication critique. Those that did often imposed length and time limits, some of which were very strict. Publication of post-publication critique was rare in most disciplines, and there was considerable variation across disciplines. Notably, medical journals had the most active culture of post-publication critique; however, they also imposed the strictest limits. Our data don't speak to why journals impose such restrictions on post-publication critique, but it's likely that there are competing interests between what seems best for journals and what seems best for science. It's unclear if there are principled justifications for not providing any avenue for post-publication critique, or for imposing strict limits on length and time to submit. In our view, length restrictions arbitrarily limit the scope and substance of post-publication critique. There's very little you can say in a few hundred words.
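As a quick aside for anyone checking the arithmetic, the headline prevalence figure can be reproduced from the counts given in the talk. Here is a minimal Python sketch, assuming the reported counts of 39 critiqued articles out of 2,066 sampled; the Wilson confidence interval is added here purely for context and is not something reported in the talk:

```python
# Reproduce the prevalence estimate from the talk's reported counts,
# with a 95% Wilson score interval added for context.
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

articles = 2066        # random sample of research articles examined
with_critique = 39     # articles linked to at least one post-publication critique

p = with_critique / articles
lo, hi = wilson_interval(with_critique, articles)
print(f"prevalence: {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

The Wilson interval is used rather than the simpler normal approximation because it behaves better for proportions this close to zero.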
And time-to-submit limits seemed especially problematic, because of course an important criticism could arise at any time. Douglas Altman, commenting back in 2002 on a similar finding in the field of medicine, said that in effect there is a statute of limitations by which authors of articles in these journals are immune to disclosure of methodological weaknesses once some arbitrary short period has elapsed, which cannot be right. In conclusion, top-ranked journals often pose serious barriers to the cultivation, documentation, and dissemination of post-publication critique. I'd like to end with a few tentative recommendations for journals. They're tentative because, although they're inspired by our data, our data don't speak to whether they would actually be easy to implement or effective in practice. So these are some ideas to pursue. The first recommendation is that journals should offer at least one option for submitting post-publication critique. They might consider using flexible length targets that can be adjusted on a case-by-case basis, rather than strict universal limits. We don't think journals should impose any time limit on post-publication critique. Journals could also consider hiring independent post-publication editors to mitigate the conflict of interest they have when handling post-publication critique. Those editors could also help with the next two recommendations: to handle post-publication critique as quickly as possible, and to publish yearly transparency reports, which could contain information about, for example, the number of critiques submitted to the journal, the number rejected, and the reasons for those rejections. So that's all from me. Thank you very much for your time. If you'd like to read more about the study, you can find the paper with the QR code or the DOI link there. Thanks to all the colleagues who were involved in this research.
If you have any feedback or criticism, of course you can email me. Thanks very much. Hello, Joel Chan from the University of Maryland. I'm very curious if you saw any justification from the journals for the super-short time limits. I imagine they're not arbitrary, but I wouldn't be surprised if they were. I'm just curious what you saw. No. I can speculate that journal editors maybe want to promote what they would call timely discourse about articles, but I don't see how imposing a time limit on criticism really achieves that goal. I think it would be good to hear from journal editors that have these policies what they think, because I really don't know what the justification is. Hi, Dom Rush. I was wondering if you had looked into journals' links to the PubPeer platform, which has emerged as a really useful medium for post-publication critique. I've noticed that, at least in biology, there are some journals that will post how many comments have been put on PubPeer. Would it be the case that some of the journals that really restrict the content you can put in one of these publications at least allow you to include a link to something posted elsewhere, or would they even go to the full length of advertising PubPeer comments on the article's webpage? So, we didn't encounter any journals that were actively linking to PubPeer. That could, of course, be one way of handling these critiques. To go back to very early in my talk, where I was saying there are these different types of critique with different advantages and disadvantages, there's an interesting question there about the ideal approach in terms of incentivizing critique. For example, PubPeer seems to be underused. I often see great critical comments on Twitter that no one ever goes to the lengths of putting on PubPeer. Perhaps they could get a publication in the journal with the criticism.
That would be more of an incentive, so there are some interesting questions to ask there. But we didn't directly look at PubPeer in this study; that would definitely be interesting to follow up on. Thanks. One more, very quickly. I'm curious whether you have sent the journals in your sample report cards showing their results and detailing your recommendations, so that you could do a follow-up study in about five years? We have not, but that sounds like a really good idea. Would you like to send those emails, Tracy? Maybe with a public dashboard, just saying. Yeah, I think that's a really interesting approach, and actually, thanks to your work, Tracy, and that of others, I've been thinking more generally about my work: how do we move beyond just publishing a paper showing that there are potential problems in a particular area, and how do we make the bridge to actually changing policy? So yeah, I appreciate that; that's a good idea. Thanks. Great.