Good morning, everyone. My name is David Miller. Welcome to today's webinar about registered reports. Chris Chambers from the Cardiff University Brain Research Imaging Centre is going to be presenting. The first minute or so of our webinar recording was cut off, so I'm just providing the first slides here before we transition back to Chris's talk. As for the content of the webinar: Chris will go into what problems registered reports try to solve, how they work, what the workflow is like, and some of the benefits of the registered report format, and then turn to some frequently asked questions that come up when editors are considering implementing registered reports in their journal and their workflow. Chris Chambers is a professor of cognitive neuroscience at Cardiff University. He was one of the founders of the registered report format back in 2012, and he's an editor at Cortex, the European Journal of Neuroscience, and Royal Society Open Science, three journals that are offering the registered report format. He's also chair of the Registered Reports Committee, supported by us here at the Center for Open Science; you can learn more about that at cos.io/rr.

First, what problems do registered reports try to solve? The core issue is an incentive problem: what's best for me as an individual scientist is to produce an awful lot of what we call publishable results, or "good" results. And the trouble with this incentive structure is that if you put individual scientists under pressure to produce publishable results, you get them, but you get them through means which undermine the validity and the credibility of the literature. So here are just six well-known problems that this incentive structure creates. Publication bias: the suppression of negative or complex findings. Significance chasing, otherwise known as p-hacking, or any kind of selective reporting of experiments or studies. Retrofitting the hypothesis onto results that are unexpected, in order to produce a story that editors or reviewers will find more compelling. Refusing to share data, or being unable to do so because it's too hard or we don't have the time. The focus on producing a large quantity of papers, which conflicts with the importance of quality, so that, particularly in my fields of psychology and cognitive neuroscience, studies are endemically underpowered. And on top of all of this, replication is rare and seen as something a bit boring, a bit lacking in the kind of intellectual prowess it's believed you need to demonstrate in order to secure funding, promotion, and so on.

So I think it's useful to show how these practices corrupt and short-circuit, if you will, the deductive scientific method that we've all been taught. Here, of course, is the happy cycle of deductive science: generating and specifying predictions, designing a study, collecting the data, analyzing that data and testing it against our hypotheses, interpreting it, and either writing that study up for a journal or conducting another experiment, or both. Now, the practices I just described target different points in this cycle in quite damaging ways. I'm going to focus on some of the statistics associated with psychology. Lack of replication: we know now that just one in 1,000 papers is likely to report a direct replication of a previous study by an independent group.
Low statistical power: for many years now we've been gathering evidence that psychology is underpowered, with only about a coin-flip's chance of detecting a medium effect size. Various forms of significance chasing, or p-hacking, are quite common, with estimated prevalence of 50 to 100%. And changing hypotheses to fit unexpected results, a practice known as HARKing (hypothesizing after the results are known), is also thought to have a prevalence of around 50 to 90%. On top of this, publication bias and lack of data sharing are quite common. Jelte Wicherts has shown repeatedly that, even when asked, most psychologists refuse to share their data, or in many cases are unable to do so because they don't know where it is. And publication bias is also a big problem in psychology. You can see how these problems conspire to short-circuit the scientific method in different ways.

So I think it's important to reflect briefly on why this is happening. In my opinion, it's because the system we've created for ourselves is so results-based, success depends so much on getting the right kind of results, that we've relatively ignored the processes that go into producing those results. And this is, I think, human nature, because results make our work exciting, they make it interesting, they make it worth doing. But the minute we start judging the quality of the science, and of individual scientists, and the publishability of their work according to results, I believe we succumb to a kind of results-driven science, and that leads us down a path toward low reproducibility and questionable practice.

So I do think we can fix this, and in fact the underlying philosophy is fairly simple. If we accept, at least within the domain of hypothesis testing, a certain set of rules, then what gives that kind of science, deductive science, its value is the question being asked, how important and valuable that question is, and the quality and rigour of the methodology used to assess it, but never the results it produces. And if we accept this philosophy, then what logically follows is that journals should in fact be blind to results when making editorial decisions, because results can bias reviewers and editors; all kinds of motivated reasoning and confirmation bias can step in when results intrude into this process. So what we should be doing is thinking about forms of scientific publishing in which results are not available at the time of the critical peer review process. And this is, I think, a nice reflection of Richard Feynman's famous principle: the first principle is that you must not fool yourself, and you are the easiest person to fool. In a way, that's exactly what we've done in the life sciences and the social sciences particularly: created a system in which we've managed to fool ourselves very easily for a long time.

It was from this premise that registered reports emerged back in 2013 as a way of addressing some of these problems. There are four central pillars of the registered reports model. The first is that researchers decide, according to the deductive scientific method, their hypotheses, their experimental procedures and their main analyses before they start their data collection.
The second is that, before data exist, part of the peer review process takes place, before the experiments are conducted, on the basis of peer review of the experimental protocol. Third, if you pass that stage of peer review, the journal virtually guarantees publication of your paper regardless of how the results turn out. And fourth, the format is explicitly open not just to original studies with novel experiments, but also to high-value replications.

So how does the registered reports process work once you implement it at a journal? I'm now going to describe the workflow in a simplified way. At stage one, the protocol stage, the authors submit what we call a stage one manuscript, which includes an introduction section, the proposed methodology and analysis plan, and any pilot data if applicable, for instance if the authors need to provide proof of principle of a particular effect, or an effect size estimation. Following a triage process with the editorial board, this goes out to stage one peer review, where reviewers assess the importance of the question being asked and the rigour, strength and validity of the methodology being applied to assess that question, and they do so according to specific review criteria. If those reviews are positive, and there's often, of course, a lot of discussion and a lot of revision, but if ultimately the reviews are positive, then the journal offers what we call an in-principle acceptance, or IPA, which virtually guarantees publication of the final outcome regardless of what that outcome may be. Now, at this point, most versions of registered reports that journals have offered do not publish the protocol. Instead, the protocol is, if you like, put on ice while the authors go away and commence their research.

Stage two then begins. The authors conduct the research according to the protocol, and when they've finished their analysis and their manuscript, they resubmit what we call a stage two submission, which includes the introduction and methods from the first stage, the virtually unchanged protocol, except perhaps for slight changes in tense where the text moves from future to past. And of course there's now a results section. The results section differs from conventional publishing in a crucial way: the results are separated into the outcomes of any pre-registered confirmatory analyses on the one hand, and any unregistered exploratory analyses on the other, that is, the kinds of analyses the authors might have thought up after seeing their data, or in any way come up with after data collection started. And of course there's a discussion added as well, where the results are interpreted. In synchrony with this, the data and materials are deposited in a public archive, to the maximum extent allowable by ethics and law. This then goes out to stage two peer review, which is a relatively simple process where reviewers essentially assess compliance with the study protocol, whether any pre-specified quality checks were passed (I'll talk about those in more detail in a little while), and whether the conclusions are evidence-based. If those criteria are met, the manuscript is published.
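To make the two-stage workflow concrete, here is a minimal sketch of how an editorial system might model it as a state machine. The state names and transitions are illustrative assumptions for this summary, not any journal's actual manuscript-handling software.

```python
from enum import Enum, auto

class RRState(Enum):
    """Illustrative states for a registered report submission."""
    STAGE1_SUBMITTED = auto()       # protocol: intro, methods, analysis plan, pilot data
    STAGE1_UNDER_REVIEW = auto()    # reviewers assess question importance and rigour
    IN_PRINCIPLE_ACCEPTED = auto()  # IPA: publication virtually guaranteed
    STAGE2_SUBMITTED = auto()       # unchanged protocol + confirmatory/exploratory results + discussion
    STAGE2_UNDER_REVIEW = auto()    # reviewers check protocol compliance, quality checks, conclusions
    PUBLISHED = auto()
    WITHDRAWN = auto()

# Allowed transitions in this sketch; a real editorial system would also
# add revision loops within each review stage.
TRANSITIONS = {
    RRState.STAGE1_SUBMITTED: {RRState.STAGE1_UNDER_REVIEW, RRState.WITHDRAWN},
    RRState.STAGE1_UNDER_REVIEW: {RRState.IN_PRINCIPLE_ACCEPTED, RRState.WITHDRAWN},
    RRState.IN_PRINCIPLE_ACCEPTED: {RRState.STAGE2_SUBMITTED, RRState.WITHDRAWN},
    RRState.STAGE2_SUBMITTED: {RRState.STAGE2_UNDER_REVIEW},
    RRState.STAGE2_UNDER_REVIEW: {RRState.PUBLISHED, RRState.WITHDRAWN},
}

def advance(current: RRState, target: RRState) -> RRState:
    """Move a submission to the next state, rejecting invalid jumps
    (e.g. straight from protocol submission to publication)."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"Cannot move from {current.name} to {target.name}")
    return target
```

The structural point the sketch captures is that PUBLISHED is reachable only through IN_PRINCIPLE_ACCEPTED: the results never enter the decision to accept.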
And when it does appear in the journal, it looks very much the same as a standard paper, except perhaps a little longer in the methods, because the methods are more detailed; the format explicitly welcomes detailed methodology that can be reproduced. But basically what you get is a standard-looking paper; it's what's under the bonnet that's key, and that's what's different about this process. Now, it's important, I think, to emphasize not just the criteria that are used to select papers but also the criteria that are not. None of these things matter at stage two: whether the hypothesis was supported, whether results are statistically significant, whether results are novel; and subjective judgments about whether results have impact are also completely ignored as part of the review process, again in contrast to standard publishing.

Here are just a few published examples of fully completed registered reports at Cortex. You can read more if you go to our virtual special issue, which is the hyperlink I'm indicating right now. You can also read the special issue of registered reports at Social Psychology, and of course there are the ongoing registered replication reports at Perspectives on Psychological Science. Many other examples are emerging.

So, the key strengths, I think, of this format. First of all, the idea is that the work that is published is going to be more reproducible than average, for one thing because the methods have to be very detailed and repeatable in order to be assessed properly at stage one, and also because setting a very high power requirement at stage one means that studies typically have a sample size at least two to three times above what's typical in each respective field. Accompanying this reproducibility is more transparency: papers are accompanied by open data and materials, and the outcomes of confirmatory and exploratory analyses are clearly distinguished in the results, again in contrast to standard publishing, where they often blur together in an unhappy mix and you can't tell which analyses were preplanned and which were post hoc. And on top of this, I believe the format is also more credible, because it eliminates various forms of bias from the publication process. It aims to eliminate publication bias. It aims to eliminate hindsight bias, by requiring authors to pre-specify their hypotheses and to transparently note any changes, so that it's impossible to reinvent history. And by requiring a detailed analysis plan at stage one, it prevents the kind of selective reporting that also reduces the credibility of published work.

So back in 2013, when we first launched the format at Cortex, we decided to make a push toward expanding it more widely. Together with around 80 colleagues and members of editorial boards, we published an article in The Guardian calling for all journals in the life sciences to offer this format as a new option for authors. Not the only way of publishing, but a new option and a new possibility that all scientists should have the chance to consider in their work. And since then we've seen the format taken up by, at the latest count I think, 43 journals; some of them are permanent adopters, and some have adopted it as part of special issues looking at particular questions in science.
Looking at this list, there are a lot of psychology journals, but one of the things that really struck me is how broad the uptake has been within the life and social sciences, covering areas as wide-ranging as political science and biology; we've even got empirical accounting as one of the areas taking this format up, where it's very popular. Two journals of note just recently: Nature Human Behaviour, which is the first uptake of the format by what we would traditionally, I guess, call a high-impact journal; and Royal Society Open Science, where I'm a section editor, and where we've launched the format across 205 or 206 different scientific subject areas, ranging from physics all the way through to psychology.

So I'm going to talk now about what I see as some of the benefits for journals, editors, authors and ultimately the scientific community. I've already alluded to some of these, but if we reflect back on the way this deductive scientific method is, I think, hijacked by certain questionable practices and biases, we can map these benefits onto those problems. First of all, eliminating publication bias takes out one big problem and hopefully leads to a more balanced literature, where we see a mixture of negative and positive findings. Secondly, because the format requires detailed specification of experimental procedures and analysis plans, it logically eliminates various forms of researcher bias, such as p-hacking and post hoc hypothesizing, or HARKing. So we can eliminate those problems from our scientific method as well. Thirdly, by setting a high statistical power requirement (there's a concrete sketch of this below), we can hopefully improve the reproducibility of findings, both positive and negative. Fourthly, and this is key really, by creating a format where your protocol, and your paper's acceptance, is guaranteed before you start your research, we create an incentive for researchers to consider kinds of research they perhaps wouldn't normally consider. For instance, replication studies, which you might not bother doing without this kind of format, because everyone knows that if you try to do a replication study, at least in my area, without having pre-registered it with a journal, it will be very hard to publish: whatever the results, journals might see it as boring, reviewers might oppose it, and so on. There are all kinds of battles in your future if you go down that path. Whereas a registered report provides the incentive and the means to have your paper accepted in advance. So it provides a way of incentivizing replications, and also, looking beyond replications, any other kind of novel, resource-intensive project where normally it would be a really big deal to commit before you'd lined up your publications, because the outcome might be contingent on the results, and so the return on the investment is not guaranteed. If we create that incentive, then of course we eliminate another major problem from our list. And finally, this is something we bolted on to registered reports in order to improve transparency: by building in public archiving of data and materials to the maximum extent possible, we hopefully gain other knock-on and ancillary benefits for science generally.
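To illustrate the statistical power point from the list above: a minimal sketch using Python's statsmodels, assuming a two-group design, a medium effect size (Cohen's d = 0.5), a 5% alpha, and a 90% power bar of the kind many registered-report journals set at stage one; the n = 30 "typical" group size is an illustrative assumption.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of a "typical" study: n = 30 per group, medium effect (d = 0.5).
# This is the roughly coin-flip detection chance mentioned earlier.
typical_power = analysis.power(effect_size=0.5, nobs1=30, alpha=0.05)
print(f"Power with n = 30 per group: {typical_power:.2f}")  # ~0.48

# Per-group sample size needed to clear a 90% power requirement,
# of the kind a stage one reviewer would check the plan against.
required_n = analysis.solve_power(effect_size=0.5, power=0.90, alpha=0.05)
print(f"n per group for 90% power: {required_n:.0f}")  # ~85, roughly 3x the typical n
```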
So the idea is that, hopefully, registered reports should be free from all, or at least most, of the problems that afflict hypothesis-driven science. In addition, I think there are a number of other benefits. One is the full protection of exploratory analyses and any kind of serendipitous findings. One of the criticisms registered reports sometimes gets is that the format is too rigid, too constrained, and will somehow hinder exploratory analysis, serendipity, creativity and so on. But in fact, the format at all journals that have launched it explicitly protects those aspects of scientific discovery. All it requires is that confirmatory and exploratory analyses and outcomes are distinguished in the results. And if you look at the registered reports published so far at Cortex, in the special issue I pointed to in an earlier slide, you'll see extensive exploratory analysis sections in every case. So there are no barriers whatsoever to exploration; in fact, this format might help protect exploration in a way that conventional publishing doesn't, because conventional publishing often forces scientists to present what is truly exploratory as something it isn't: confirmatory analysis, or hypothesis testing.

The second additional benefit is that the review process is very different. What reviewers do with registered reports is adopt a much more constructive and collaborative style of dialogue. And you can see why: if you're submitting a protocol before you've done your work, there's much more of an opportunity for a reviewer, or a team of reviewers, to help address methodological problems and fix them before it's too late. So rather than trying to shoot a completed study down and show why it shouldn't be published, reviewers can step in and say, you know what, you should consider this condition, or you should consider that analysis. And this is something that authors of registered reports have written about; here's an example written by Dorothy Bishop, explaining how this constructive process actually helps better organise the publication timeline, and helps prevent a lot of the problems involved with, say, sequentially submitting papers to different journals, as we see down the conventional route.

So in this final section I'll talk about some of the frequently asked questions I've had. I'm not going to go through all of them; on the website, the registered reports central repository, which I'll show you in a moment, you can find a much more extensive list of frequently asked questions. Here I'm going to focus in particular on questions I've had from editors and at talks. First of all: are registered reports suitable for my journal? This is perhaps the most common question, because there's such a broad spectrum of science out there, and I think sometimes people are not clear where this fits. Does it really fit my discipline?
And the answer I give is this: the format is suitable in principle for any field engaged in hypothesis-driven research where at least one of the following problems applies: where there's publication bias; or some form of significance chasing, p-hacking, or any kind of selective reporting of results; or any kind of post hoc hypothesizing or hindsight bias, where a hypothesis is retrofitted to the results and presented as though it were a priori; or where there's an endemic problem with statistical power, where studies don't have large enough sample sizes; or where there's a lack of direct replication. If any of those problems apply, and the area your journal publishes in is hypothesis-driven, at least in part, then registered reports stand to benefit the credibility, transparency and reproducibility of the work within that field. That said, it's not a cure-all, and it's not applicable to everything. There are areas where it doesn't fit. Purely exploratory science and methods development are two areas where I think it probably doesn't fit so well, because there's no hypothesis testing. And if there's no hypothesis testing, then pre-registration doesn't add much value, because if there's nothing to pre-specify, why bother pre-specifying anything? You may as well just accept that what you're doing is exploring. And I think that's actually one of the nice advantages of this format: it draws a line between the kind of confirmatory, hypothesis-driven, deductive science that a lot of research at least nominally follows, and more exploratory, observational science, which is just as valuable but not suitable for the registered report format.

Secondly, one of the common questions I get is: what's to stop researchers from pre-registering a study they have already conducted? There are a number of protections against this. The first is that at most journals, authors have to upload their raw data files at stage two, together with a basic laboratory log showing the range of dates over which data collection was undertaken, and a certification from all of the authors that the data were collected after in-principle acceptance was awarded, and not before. Of course, that doesn't apply to pilot data. So if any researcher were to go through this process and somehow submit a stage one manuscript for work they'd already done, that would be essentially fraud. And in any case, it would backfire, because we haven't had a single registered report at any journal where reviewers haven't asked for amendments to the experimental procedures at stage one. Unless you have a time machine, it's impossible to change even the tiniest detail of your experiment if you've already conducted it, so it would be impossible to report the experiment as amended if you'd already done it. But I think there's a bigger issue here, which is that registered reports are not an anti-fraud tool. They're not a fraud prevention device; their aim is not to stop malpractice, and any system can be beaten. The idea is to incentivize good practice and best practice in hypothesis testing, and indeed to eliminate some of the incentives that would encourage researchers to pursue a programme of practices that might eventually lead to fraud.
Thirdly, some editors have asked me: if I'm accepting papers before results are known, or before results even exist, how can I know, without seeing the results, whether the studies will be conducted to a high standard? Will I end up having to accept badly conducted studies and poor quality research? And this is where a particular aspect of the stage one criteria is key: you can require the a priori specification of data quality checks, positive controls, or manipulation checks which, subject to editorial discretion and editorial policy, must be passed at stage two for the final paper to be accepted. For example, you might require the noise level in the data to be below a certain threshold. You might require the absence of floor or ceiling effects. You might require certain positive controls to be passed, certain sanity checks that must succeed in order for the hypothesis to be testable at all. The key, in order to prevent publication bias, is that all such tests must be independent of the main hypothesis, so that there isn't a return to selection based on the outcomes of hypothesis testing. If you build these criteria in, as most registered report formats do, then experiments that are run poorly or sloppily, or that contain major errors, may in fact not make the cut to final publication: they would fail that critical criterion.

It's perhaps useful at this point to reflect on what those criteria are. I won't go through these in detail; I'm just putting them up here so you can see an example. These are the stage one and stage two review criteria at the European Journal of Neuroscience; there are five in each category. The one I want to highlight is the criterion used to assess quality before results exist: at stage one, whether the authors have pre-specified sufficient outcome-neutral tests for ensuring that the results obtained can test the stated hypotheses, which may include positive controls or other quality checks; and then, when the results are in at stage two, whether the data actually passed those quality checks or positive controls. It's up to editorial discretion how strictly this requirement is enforced, because some fields are more amenable to these kinds of strict tests than others. But based on this, you can set a rule that if the quality isn't within a certain range, then you're not going to accept the paper, independently of the actual results of the hypothesis tests.

Another question I often get: what happens if authors need to change something about their procedures after they're provisionally accepted? So they get their stage one approval, they start to run their experiments, and then they need to change something. And this is very doable; registered reports are not so rigid that you can't change anything. The key is always transparent reporting of any deviations. Minor changes, such as cases we've had like replacing equipment, or other small changes, can simply be footnoted in the stage two manuscript as a protocol deviation. As for other changes, again it's up to the editorial team to decide where to draw the line between major and minor, but major changes, for example if someone wanted to change the exclusion criteria for their data, are changes which are very prone to various forms of bias, and these would generally require withdrawal.
At Cortex, for instance, we've not allowed authors to change their exclusion criteria and just footnote it; that would be something where they would either have to withdraw the paper fully, or some kind of further review process would be needed. And as I say, it's up to the editors to decide what qualifies as a major or minor deviation and assess accordingly. The key point is that protocols can be amended along the way in various ways; the only requirement is that those deviations are, at a minimum, reported transparently.

You can also ask: how does pre-registration work when some of the proposed analyses may depend upon the results? For example, authors may not know what kind of statistical analysis they're going to run until they see the distribution of their data, or some other aspect of their data. And it's important to point out that pre-registration and registered reports don't require that every single micro-decision is hardwired and pre-specified. Only the decision tree needs to be specified. This can be a series of contingencies: if my data look like this, then I'll do this; if they look like that, I'll do that. The point is that the rules and contingencies for future decisions are pre-specified to eliminate bias, not that every single step is pre-specified and hardwired (there's a concrete sketch of such a decision tree below).

We can also ask, and in fact this is a very common question I get, because some areas deal with archival data: are registered reports appropriate for secondary analysis of existing data sets? And the answer is yes. Many journals offer this feature, so long as the authors can certify that they have not yet observed the data in question. As always, when you're doing any kind of hypothesis testing on observed data, there is a risk of overfitting in statistical analyses, and there is a risk of bias, and the more observation the researcher has had of that data, the greater that risk becomes. So as a journal, you can set the threshold wherever you desire in terms of the level of stringency you need: what level of distance the authors need to have from the data in question, whether it can be in their possession provided they haven't looked at it, whether it needs to be behind a gatekeeper of some kind. But in general, registered reports are completely suitable for secondary analyses.

Another question I often get is whether registered reports are really only suitable for low-risk research. Is it just a kind of incremental approach that's only suitable for specialist journals looking at small questions? I would answer very emphatically no; this is a misconception. Registered reports are ideal for any kind of hypothesis-driven question where it's important to know the answer either way, yes or no, regardless of study risk. It's not about the prior we place in the hypothesis we're testing; it's simply about how important it is for us to know the answer to the question in the first place, independently of that prior. And I think the recent adoption by Nature Human Behaviour really demonstrates how this format is useful beyond the specialist journals and areas where it originated.
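Here is the promised sketch of a pre-specified decision tree, combining an outcome-neutral quality check with a data-contingent choice of test. Everything in it, the 7-point scale, the 20% ceiling threshold, the normality-test branch, is a hypothetical example of what a stage one analysis plan might pin down, not a rule prescribed by any journal.

```python
import numpy as np
from scipy import stats

def preregistered_analysis(group_a: np.ndarray, group_b: np.ndarray) -> dict:
    """Sketch of a pre-specified decision tree: the *rules* are fixed
    in advance, even though the branch taken depends on the data."""
    # Outcome-neutral quality check, independent of the hypothesis test:
    # e.g. fail the dataset if responses pile up at the scale ceiling.
    ceiling_rate = np.mean(np.concatenate([group_a, group_b]) >= 6.5)  # 7-point scale
    if ceiling_rate > 0.20:  # hypothetical pre-registered threshold
        return {"outcome": "quality check failed: ceiling effect"}

    # Pre-registered contingency: if both samples pass a normality test,
    # run Welch's t-test; otherwise fall back to Mann-Whitney U.
    normal = all(stats.shapiro(g).pvalue > 0.05 for g in (group_a, group_b))
    if normal:
        test, result = "Welch t-test", stats.ttest_ind(group_a, group_b, equal_var=False)
    else:
        test, result = "Mann-Whitney U", stats.mannwhitneyu(group_a, group_b)
    return {"outcome": "confirmatory", "test": test, "p": result.pvalue}
```

The branch taken depends on the data, but every rule was fixed before the data existed, which is all that pre-registration requires.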
Another question I sometimes get, from publishers and editors in particular, less so from scientists, is whether registered reports risk lowering the journal's impact factor, because what you will inevitably see if you offer this format is more publication of negative results. Not negative results that are in any way low quality, but negative results simply because you're now removing the bias which normally filters them out of the process. So far, there's no reason to think that registered reports present any threat to citation indices. I'll leave it to you to decide how much weight and value should be placed in these; I personally don't believe the impact factor is very useful, but I know that many publishers place importance on it, and therefore editors are required to. What I can say is that at Cortex, if we look at the citations of the first six registered reports we've published, they're cited around about 10% above the impact factor of the journal. So there's no real risk at the moment of this format lowering the impact factor; if anything, it might push it up. And that makes sense when you think about it, because these studies are big, they've gone through a very rigorous peer review process, and they're answering important questions. There's every reason to think that, in the long run, these studies should be cited more than regular articles.

Am I suggesting registered reports as a replacement for existing article types? No, absolutely not. Even though there are a couple of journals proposing to do this of their own accord, the way we're promoting this initiative is that it should be offered as an option for authors. We're not trying to say that any other kind of research shouldn't be done, or that any other kind of article type shouldn't be published, simply that this should be added to the journal as a new pathway, a new possibility for authors to consider.

How complicated is the implementation of registered reports? There are, of course, two stages of peer review, stage one and stage two, and there are some changes that need to be made to the manuscript-handling software. But overall it's very straightforward. At Cortex it took us one week, and at Royal Society Open Science about the same, to make the changes we needed to the manuscript-handling software: building in different kinds of decision letters, incorporating the necessary link between the stage one manuscript and the stage two manuscript, and that kind of thing. There are various technical details which need to be addressed, but when broken down, each one is a fairly small and straightforward task. One of the advantages at the moment is that most of the major publishers have already adopted this format at one of their journals, and in doing so they've already changed a lot of the software, laying the path for other journals within the same publishing group to do the same. That can make implementation easier. We've also got a repository at the Center for Open Science dedicated to editors, where you can find template decision letters for authors, reviewer invitation letters, an implementation checklist, and various other materials to help streamline the implementation as much as possible. And David is here, and I'm here, to help as well where needed.
We've helped many journals address bespoke problems over the last three years in introducing the format. Another question I often get: how many submissions have there been in total? At Cortex, since we launched, we've had 33 submissions. Other journals have had more; I know that in some areas it's been very popular, in others less so. It's ultimately for the community to decide what kind of popularity this will have, and I think it will continue to grow over time. But if you're interested in monitoring this more continuously, you can go to our Zotero database, which is linked from our central knowledge base here. There you can find all of the published registered reports so far, across all journals; some of them are protocols only, some are completed studies. You can find every published registered report in that list, so you can monitor the growth as it goes. So I think that's it. I've gone fairly quickly through all of that because I wanted to leave time, if possible, for questions. I hope you found it useful, and if after this you want to ask any specific questions offline, feel free to drop me or David an email and we'll help you as much as we can.

You can't hear the applause, but I'm sure everybody on the line is applauding appropriately. I just want to thank Chris, especially for those frequently asked questions. I think it's great to see a lot of the concerns, or, I might suggest, misconceptions about the format so directly addressed. We want to take this as an opportunity to show how well the format is working and how appropriate it is for a very wide suite of science. I've cited that Royal Society Open Science initiative several times: with adoption by that journal, I really think it's reached the stage where any researcher can use the format, given the suite of disciplines that are covered, and we're actively working to expand it to more journals. That's really what I want to see come out of this. So for everyone who's online now, there's the Q&A bar; I'll leave a couple of minutes for you to submit questions that way, but of course we'll be available afterwards. Our contact information is up on Chris's screen right now. For the time being, I'll just wait a few moments while people type in their questions. Andrew has written in to say thanks.

Yeah, cheers Andrew. It's been a really exciting three years, I must say, working on this project. It started out as a kind of random idea; I know Brian Nosek and others, and Dan Simons, in parallel to what I was doing at Cortex, were developing the format at Perspectives at the same time, and it's really come together in an interesting way. It's great to see so many published examples coming out, and to see the format so well received. It also, by the way, provides excellent training for PhD students: starting off a PhD with a registered report gets them thinking in detail about their methods. Not necessarily for every study they're going to run, because it's always good to spread things out and take different approaches, but I've seen some very positive examples of student training. I think that peer review before conducting the study has demonstrably been very effective at increasing the quality of the work that later gets submitted.

Well, I think that's very interesting.
Yeah, I know Tom Hardwicke is interested in doing some empirical analyses of how registered reports differ from conventional articles. It's challenging to do, but there are ways this can be measured and assessed, and I think there's a really interesting line of research there to see exactly how they differ. They'll differ in very obvious ways: we know they're going to have higher power, and we know they're going to have a broader distribution of p-values because of the lack of bias. But there are other questions surrounding, perhaps, blinded analyses of quality, subjective judgments that people can make about them. All sorts of other interesting questions can be addressed empirically as well.

Two questions have come in. Emily White asks: are you seeing this publishing method in clinical research? Not so much, and it's something we're trying to push quite aggressively. It depends what area of clinical research you mean. In the area of clinical trials, for example, trials are registered already, and I think this format has a lot of traction for clinical trials because it eliminates something known as outcome switching, which is essentially the same thing as p-hacking. We're hopeful that we can convince some medical journals this year to take up this format and use it as a way of publishing clinical trials. Within other areas of clinical science, like clinical psychology, we haven't seen any uptake yet, but we're working toward this gradually by approaching the editors and trying to show the benefits. And in areas which are closer to the public, closer to that translation stage, it's even more important, in my opinion, to eliminate bias from the publication process, which is what registered reports achieve. So hopefully we'll get some more uptake this year, but so far the format has been taken up primarily within basic science.

A couple more questions. Adrian asks: what is the typical time lag between stage one and stage two review? And for stage two, do you recommend using the same reviewers or referees? I'll answer the second question first. Yes; so far, at Cortex and RSOS, we always get the same reviewers back. We've actually never had one say no, I think because reviewers are just naturally curious: they want to know how it went. That continuity is quite important to the process, so we make every effort to ensure that the same reviewers come back at stage two. Now, in terms of the time lag, there are a few different time lags here. (By the way, David, I can't actually see these questions coming up on my feed here.) The gap between stage one and stage two is up to the authors: that's how long it takes them to do their research. It can take as long as they like; it can be three months, or, as with one longer-term study we've had at Cortex, nearly two years so far. That's up to the authors; we don't set any limits on it. But within each stage, at Cortex, not counting the time for authors to revise, it takes around 10 weeks on average to go through the stage one review process, from the moment you submit to the moment you get your in-principle acceptance, or a final decision, at stage one. And that's usually two rounds of review on average, with two to three reviewers each.
At stage two, it's around about the same, or a little bit quicker.

Are registered reports kept private or confidential until the stage two publication is ready? At Cortex they are, yes; we don't publish the protocol. Some journals do. For example, eLife, with its special issue on reproducibility in cancer biology research, publishes the protocols once they're accepted, and then when the results are in, they're added, so you can watch the studies unfold step by step. Most journals in psychology and neuroscience are not publishing the protocols at the moment, because the field doesn't really want that, I think for two reasons: people don't necessarily want to advertise their ideas before they've executed them, and they also don't want a literature full of protocols to read. It's not what we're used to; it's a bit boring. People want to see final results, and papers that look like normal papers. So at Cortex and at Royal Society Open Science, we don't publish the protocol separately. Having said that, if any authors want to publish their protocol, what they can do, and what has happened at Royal Society Open Science in one case, is that after getting their stage one acceptance, the authors took their accepted protocol and published it on the Open Science Framework, because they just wanted to put it out there. So it's up to the authors to control that side of it.

Yeah, and I can give more information about the Open Science Framework. Once you put a file online on osf.io, there are various privacy settings; everything's private by default, and you can make it public when you're ready to. And if you create a permanent copy of it, what we call an OSF registration, a permanent timestamped copy, that can be embargoed for up to four years, so it can be kept private for a reasonable amount of time before you're ready to share it. Oh, and you mentioned the Reproducibility Project: Cancer Biology; the first set of results is coming out this week, so stay tuned for that. We know there's going to be a lot of news coverage. It's an attempt to replicate about 29 studies in cancer biology research, and the first set of five results is coming back this week.

Do you have any data on the timeline from start to finish for registered reports, based on what's been published? Do they get published in the same amount of time, faster, slower? You've answered a few of those questions already. Yeah, so as I said, the timeline is about 10 weeks per stage, and the time in the middle, between getting your IPA, your in-principle acceptance, and resubmitting at stage two, is completely up to the authors; it just depends how long the research takes. My impression is that this is a much faster process on average, because the acceptance rates are so much higher. I'll give you an example. At Cortex, we reject 92% of regular submissions, that is, regular research reports that are not pre-registered; 92% are rejected, about half of those after review and about half before review. With a registered report, that statistic works the other way around, in an interesting way. If you make it past the triage stage and your protocol goes out to in-depth stage one review, then, because the review process is constructive and leads to improvements in design and so on, there are ways to address the concerns that reviewers raise.
Most papers are eventually accepted if the authors go ahead and make those revisions. We've had very few rejections at that stage, and no rejections so far at stage two. What that means is that with a registered report, you're much more likely to have your paper accepted at the first journal you submit to, rather than getting rejected by one journal after another and going down the chain sequentially. So even though there's a delay at the start, between submitting your protocol and starting data collection, by investing around 10 weeks or so up front you save a lot of time later. It's a classic kind of delay discounting: you're spending a small amount of time now to save a lot of time later. So I would argue, and I hope this is what we will eventually see, that studies are actually published a lot quicker through this process than through the standard unregistered pathway.

Chris, I know you'll like this next question. We have two questions that came in. Have funders shown interest in this method? And a related question: funders will typically perform the equivalent of that stage one review before granting an award; have you seen instances where funding agencies are partnering with journals to facilitate this?

Yeah, well, that's exactly what we're working on at the moment. We have a registered reports funding model that we're trying to develop, where the idea is to bring together all of the different kinds of review that happen before a study begins. At the moment, with a stage one registered report, authors already have their funding and ethics approval and so on in place, so stage one review duplicates a type of review that has already happened, admittedly at a different scale, a micro scale rather than the macro scale you see with a grant. But there's a commonality of review there, and it would be nice to bring these together; it would simply be more efficient for science. So one of the things we're trying to do is work with funders, so that a stage one registered report, or a series of stage one registered reports back to back, could be submitted as a grant application. You would submit it simultaneously to a funder and a journal working in partnership; they would review it simultaneously, and if both stakeholders agree that the work is of high quality, you get your funding at the same point as your papers are accepted. It just streamlines the entire process. You could also, of course, bundle in your ethics assessment, and any regulatory assessment; it could all happen at the same time, and it could all be much more efficient than the way it is now. I was in Charlottesville in December and had a really important meeting with funders about this, where we had a lot of interest from a wide variety of funders in pursuing it. So this year we hope to see some definite progress, and we've already seen some uptake and some partnerships being formed. We hope that once this format is demonstrated as a proof of concept, it will take hold.

Yeah, stay tuned for more announcements about that as we get them. Very exciting. How would this model be integrated into an author-pays open access journal? When would an author be expected to pay the APC fee, and how would that not compromise the review?
I don't really see how registered reports would be any different from standard publishing in an open access journal. There would be an APC applied at the very end of the process, at stage two, just as with a standard article, so I think the cost structure would be about the same, from what we know from talking to the open access publishers that are considering doing this. I should point out that Royal Society Open Science, which is of course a fully open access journal, doesn't charge an APC at all so far, so in that sense we don't really know what it looks like in that structure. But, and maybe I'm not answering the question properly, I'm not quite sure why there would be a conflict, or an additional problem introduced by registered reports, for APCs. I guess the time of paying the APC would simply come after the stage two acceptance; I think that'd be pretty straightforward. Perhaps there would be some tension if the authors weren't able to pay even though the protocol was accepted a year earlier, some difficulty that developed after completing the study. But isn't this the same for regular papers? You can still have a situation with a regular paper, if you go to PLOS ONE or something, where your paper is accepted but you don't have the money, so you have to apply for a fee waiver. I don't see why that's specific to registered reports; in my mind it's orthogonal to that process.

Yeah, I think so. All right, stay on the line for another minute or so in case there are any other follow-up questions. In general, I'd like to thank Chris again for taking this time to share his experience and go into a lot of depth about why this is an appropriate format for a pretty wide range of science. As with everything we do, we really want to see this implemented at as many journals as possible, so the contact information is up on the screen. If you're interested in pursuing this at a journal that you edit or publish, please do reach out, because we want to get this format more widely implemented now that the proof-of-concept stage over the past three or four years has been so successful. With that, I think it's time to say adieu. Thank you, Chris, and thanks everybody for joining us. Yeah, I'll go ahead and end the meeting. Thanks guys.