All right, I've started the recording. I'm going to introduce Chris Chambers now, and a few other folks might join us, but we'll welcome them in as they log on. Chris Chambers is a professor of neuroscience at Cardiff University, and he's been leading the registered reports initiative for about the past four years. A registered report is a system of peer review before results are known, designed to address major biases in the scientific literature. He's going to go over the rationale for the format, some of the 50 or so journals that are using it, and a series of questions geared toward early career researchers and how this can benefit your workflow. At the end, I'll give a short demonstration of creating a preregistration on the OSF, define what that means, and show how it can work in a registered report workflow. We'll make sure to leave at least 15 minutes for questions and discussion. At any time, you can send a question to us and we'll see it; if there are questions or clarification needed, make sure to let us know as we're going. Otherwise, we'll have more discussion at the end of the webinar. And so with that, Chris, take it away.

Thanks, David, and welcome, everybody. Thank you for joining us. As David says, I'm going to talk about the registered reports format, and in particular I want to discuss the implications of this format for early career researchers: PhD students, early career postdocs, fellows, junior lecturers, and so on, people who are at that formative stage of their careers. I imagine many of you are watching because this format has been very popular among younger scientists, and I think there are important issues to discuss around that and benefits to highlight. So these are the three points I want to cover today.
First of all, what problem are registered reports trying to solve? I'm going to talk quite briefly about some of the problems in science that we know about in terms of bias and reproducibility. Then I'm going to describe how registered reports actually work and take you through the mechanics of the process as it operates at journals. And I'm going to finish by anticipating some of the questions that I get asked when I give talks on registered reports in various locations and scenarios. Just a little bit about me: David already anticipated some of this, but I'm a neuroscientist at Cardiff University. I founded the registered reports project together with others back in 2013, when we first launched it at the journal Cortex. Since then I've taken up the section editor role for registered reports at the European Journal of Neuroscience and Royal Society Open Science, and I've also helped to implement it at Nature Human Behavior. I'm also chair of the registered reports committee at the Center for Open Science, which is a collection of researchers who are trying to advance this format and advocate for it in different contexts. If you go to this link here, you can find lots and lots of information about registered reports. So just to put the problem into perspective to start with: we know that science has a big incentive problem. On the one hand, we would probably all agree that what's best for science as a whole is publishing high quality research regardless of the particular outcome that was achieved in a study. That would help to generate a knowledge base that we can rely on, one that's unbiased and can lead toward better theory and better applications. But on the other hand, what's best for me as an individual scientist is quite different.
It's producing a lot of publishable results: good results, striking results, results that are seen as novel and attractive, making an important step forward and impressing reviewers and journalists. The problem is that when we put individual researchers under pressure to produce great results in order to advance their careers, achieve publications and get grant funding, they do precisely that, but they do so by short-circuiting the scientific method in various pernicious ways, which I'll quickly summarize. I'm going to take some statistics here from my field of psychology. We know that replication is extremely rare: just one in 1,000 papers reports an independent replication of a previously published experiment by researchers other than those who did the original experiment. We know that statistical power in psychology is quite low, with just a flip-of-a-coin chance to detect medium-sized effects, and that this problem has persisted from the 1960s through to the present. We know that various forms of p-hacking are quite common. By p-hacking, I mean selectively reporting analyses or outcomes that produce results that look better or are easier to publish. Researchers engage in different forms of p-hacking, even unconsciously, that undermine the scientific method at different stages. We have p-hacking of data acquisition, where data are collected until p drops below .05, which of course violates frequentist statistical philosophy. And we have various forms of p-hacking at the analysis stage, where data are analyzed in many different ways and only the most publishable outcomes are reported in the final product. Researchers are also under a great deal of pressure to create narratives from their experiments that prove hypotheses correct and show that the authors were right in their suppositions in the first place.
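The data-acquisition form of p-hacking mentioned above, collecting data until p drops below .05, can be illustrated with a short simulation. This is my own minimal sketch, not something from the talk: it tests a true null hypothesis, peeks at the p-value after every batch of observations, and stops as soon as "significance" is reached. The batch sizes and maximum sample size are arbitrary choices for illustration.

```python
import random
from math import sqrt
from statistics import NormalDist, mean

random.seed(1)
norm = NormalDist()

def p_value(xs):
    # Two-sided one-sample z-test against mu = 0 with known sigma = 1,
    # so z = sample mean / (1 / sqrt(n)).
    z = mean(xs) * sqrt(len(xs))
    return 2 * (1 - norm.cdf(abs(z)))

def optional_stopping(n_start=10, n_max=100, step=10):
    """Sample from the NULL (mean 0), peeking after every batch.
    Returns True if the researcher ever 'finds' p < .05."""
    xs = [random.gauss(0, 1) for _ in range(n_start)]
    while True:
        if p_value(xs) < 0.05:
            return True  # false positive: the null is true by construction
        if len(xs) >= n_max:
            return False
        xs += [random.gauss(0, 1) for _ in range(step)]

trials = 2000
fp = sum(optional_stopping() for _ in range(trials)) / trials
print(fp)  # well above the nominal 5% false positive rate
```

Even though every test nominally uses alpha = .05, repeatedly peeking and stopping on success inflates the false positive rate several-fold, which is exactly why this practice violates frequentist statistical philosophy.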
So we know that changing the hypothesis, or hypothesizing after the results are known, is also a very common strategy, with estimated prevalence rates as high as 90%. At the same time, we know that researchers are not very keen on sharing their data with other colleagues: around seven out of 10 psychologists will refuse to share data with a colleague when directly asked for it. And on top of all this, as if that wasn't bad enough, the published literature exhibits another kind of bias altogether, in which only positive results are really published; only 8% of published results in psychology are negative or inconclusive. That's interesting when you think about it, because it either means we already know what's true, so why bother doing experiments, or we have a massive problem with bias in the evidence base itself. So those problems are pretty bad, and I think it's important to reflect on why this is happening. I've always believed that the real root cause of all of this is that we as scientists place far too much importance on the results of the experiments we conduct, in terms of determining career outcomes, and not enough on the processes that produce those results. And this is just human nature in many respects, because science depends on results. Results make our jobs exciting and worth doing, but judging the quality of the science and the scientists themselves according to those results is the classic definition of a soft science. When I talk to my colleagues and friends who are in physics or chemistry, this is what they say defines a soft science: choosing what should get published based upon the results of the experiment rather than the actual quality of the experiment itself. Now, the good news is that I think we can fix this, and it requires adopting a rather different philosophy to the one that dominates the life and social sciences at the moment.
And the philosophy is simply this: when it comes to hypothesis testing, what gives a study its scientific value is the question that it asks and the quality of the methods that it uses, but never the result that is produced from the combination of those two things. So what's important is: how important is the question I'm asking, and how robust, how rigorous, how well controlled is the method that I'm using? The result that is produced should be completely irrelevant in determining the value of hypothesis testing. Now, if you accept that philosophy, then it makes sense that editorial decisions at journals should be blind to results. There's no possible benefit in a journal editor or a reviewer knowing something that does not inform them about the quality of the experiment they're assessing. All that can do, in fact, is introduce bias, which of course we know is very common. So based on this logic, we've argued that editorial decisions should be as blind as possible to results, in order to prevent ourselves from fooling ourselves into prioritizing the publication of experiments that have more attractive results over those that have unattractive results when the methodological and theoretical quality of the work is the same in each case. And it's from this logic that registered reports really emerged. Back in 2013, we first launched this at the journal Cortex, and there are four central pillars of the publication model. The first is that researchers decide their hypotheses, their experimental procedures and their primary analyses before they commence data collection. The second is that part of the peer review process takes place at this point, before experiments are conducted, based on a peer review of essentially an introduction and a method section. The third is that if you pass this stage of review, publication is virtually guaranteed regardless of the outcomes of the work.
And the fourth is that original studies and high-value replications are both welcome as part of the initiative. So I'll now talk about how this works in practice. Authors submit a stage one manuscript, which includes an introduction, proposed methods and analyses, and pilot data where applicable. This goes out to stage one peer review, where reviewers assess the manuscript according to various criteria. Are the hypotheses well founded? Are they based on a strong and coherent theoretical rationale? Are the methods and the proposed analyses in the protocol feasible, and are they sufficiently detailed to provide a recipe, essentially, that other researchers could replicate without requiring additional contact with the authors, without having to pick up the phone and ask what was the secret sauce, what was the magic potion in getting this result? Is the study well powered? This is a crucial criterion for many of the journals, including Cortex, where we set a minimum statistical power of 90% for all hypothesis tests. And finally, have the authors included sufficient positive controls to confirm that the study will provide a fair test? Have they included sufficient quality checks, robustness checks, positive controls, and various other forms of checks, like the absence of floor and ceiling effects in the data? Have they pre-specified what conditions must be in place for the outcomes of the main hypothesis tests to be interpretable, which is of course a key element of good study design? So the protocol goes out to review at this point, and if the reviews that come back are positive, which usually follows at least one round of revision, then the journal offers what we call in-principle acceptance, or IPA, which holds regardless of study outcome. The journal basically provisionally accepts the paper before the results exist and before the results are known.
Now, at this point, at Cortex the protocol is not published; it's held and reserved by the journal, but authors can of course freely publish it themselves on the Open Science Framework or in any other repository they choose. So now, with the provisional acceptance under their belt, authors can go away and do their research as planned, and when they're finished they resubmit what we call the stage two manuscript. This includes the introduction and the methods from the original stage one submission, which are virtually unchanged except for any necessary changes in the tense of the language and other very minor syntactic or grammatical changes. It also includes a results section, but crucially, unlike in a normal manuscript, the results section is divided into two subsections. The first reports the outcomes of the registered confirmatory analyses: the analyses that were pre-specified and approved as part of the stage one review. The second includes any additional exploratory analyses that the authors might have thought of along the way but which were not pre-registered in the stage one submission. This could be a new analysis technique that was developed in the meantime, or a serendipitous finding in a subgroup that was observed in the data. Anything can be reported, provided of course it passes peer review and is legitimate and well conducted. In theory, any exploratory analysis can be reported; there are no barriers to exploration, simply a requirement that those exploratory analyses are reported in a separate part of the results section. The manuscript also includes a discussion section, of course, for interpreting those results, and as part of the initiative at Cortex, authors are also asked to deposit their data in a public archive so that it can be verified and re-used by other researchers. So this goes out to stage two peer review, where it goes back to the same reviewers, and this time they're answering a different set of questions.
Did the authors follow the protocol that was approved at stage one? This is obviously a crucial element. Did the positive controls succeed? That is, did any pre-specified outcome-neutral tests, positive controls, quality checks, et cetera, pass? And are the conclusions justified by the data as presented? If the answer to all of these questions is yes, then the manuscript is published. I'll just emphasize a few points here: what doesn't matter when assessing a registered report is just as important to notice as what does matter. None of the following determine whether or not a manuscript will be published at stage two. It doesn't matter whether the hypothesis is supported. It doesn't matter whether the results are statistically significant. It doesn't matter whether the results are considered novel or impactful by reviewers or editors. In fact, editors are prevented by policy from allowing even the suggestion of such interpretations to influence the editorial decision. So as an editor, my hands are absolutely tied in this respect. I'll just show you a couple of published examples of registered reports at Cortex. We have here on the left of the screen three examples: a neuropharmacological study using MEG, a multi-site replication of behavioral effects, and an EEG study. One point to note is that, like most of the registered reports we see, these were led, or at least heavily contributed to, by early career researchers. This is an ongoing theme with registered reports: it's very popular with the young scientific community. If you navigate to the link here, you can read all of the first six registered reports at Cortex in our virtual special issue, and you can also find other special issues of registered reports at Social Psychology and many ongoing registered reports at Perspectives on Psychological Science.
So I'll just quickly emphasize what I see as the three main categories of scientific benefit of this format. First of all, in my opinion, these are amongst the most reproducible studies published in science today, for two main reasons. The first is that they contain a highly detailed method section, much more detailed than in a normal paper, which makes the methods themselves more easily repeatable by other researchers. The second is that, because of the 90% power requirement, the statistical power of registered reports is much higher than for typical papers, and sample sizes are usually around two to three times the normal level. These registered reports are also extremely transparent: by necessity they contain open data and materials, and they're transparent in another way I think is very important, in that the outcomes of the confirmatory, hypothesis-driven analyses are clearly and transparently distinguished from the outcomes of any additional exploratory analyses, which really helps strengthen the inferential value of both forms of investigation. And finally, they're in my opinion amongst the most credible publications in science. There's no publication bias, because the main editorial decision is made before the results are known, so it's impossible for the outcomes of the research to influence the decision to publish. There's no hindsight bias, because authors can't change their hypotheses, or even refine their hypotheses, after they see their data. And there's no selective reporting, because authors can't selectively report analyses based on the outcome: all of the main confirmatory analyses are pre-specified in the protocol. So back in 2013, we decided to offer this format in one journal, but we also decided to push for more journals to offer it across science.
So Marcus Munafò and I, together with more than 80 members of journal editorial boards, called for all journals in the life sciences to offer registered reports: not as the only way to publish, and not as some kind of mandatory requirement, but as a universal option that scientists should always be able to consider when choosing how to conduct their research and how to publish it. Since then, we've seen the format taken up by 48 journals at the current count, including a mixture of permanent adopters and journals that have launched the format via a special issue. One of the interesting aspects of all of this is the breadth of the uptake across different fields. We started off in Cortex, and we've seen it go to all kinds of areas: health psychology, developmental psychology, psychophysiology. We've seen it move into political science, and we've seen adoption by areas I didn't even know existed, like financial studies with the Journal of Accounting Research. These are areas where issues of bias have essentially been around for a long time, and the journals are seeing this as an opportunity to incentivize a less biased and perhaps more robust way of doing science. I'll draw your attention to a couple of the adopting journals in particular. First, registered reports at Royal Society Open Science, which is notable because the format here is offered across all STEM subjects, from astrophysics all the way through to biology, the life sciences, and psychology. So this is notable for being launched across over 200 different fields, which is a fascinating way of doing this, because you get to see the uptake within different areas. We're seeing submissions coming in from computer science and plant biology and psychology, and we're seeing interest in all these different areas.
So that's a very interesting test of the model, and it's also a really nice journal because it's completely open access, it has open peer review, and it has no article processing charge. So it's a really ideal venue for maximizing the transparency of the registered reports process itself. The second journal I'll point out, which may be of particular interest to early career researchers, is Nature Human Behavior, which launched registered reports in February. Again, this is across quite a wide range of disciplines, from neuroscience and psychology and psychiatry all the way through to areas that traditionally haven't really been that invested in the reproducibility discussions: humanities and social science subjects like anthropology, sociology and public policy. It's very interesting to see the kind of uptake we get within those individual specializations. And just to draw your attention to some of the other specialist journals that are now offering registered reports: of course Cortex, as I mentioned earlier, but we've also seen recent adoption by Behavioral Neuroscience; Attention, Perception, & Psychophysics, which was one of the earlier adopters; the European Journal of Neuroscience, the official journal of FENS; Comprehensive Results in Social Psychology, which is the only journal at the moment that has been created specially to publish only registered reports, so every article that appears in that journal is a registered report; and Memory. There are many other journals, and if you navigate to our central repository you can find a full list of them; I'll return to that later. So I'll finish by anticipating some of the frequently asked questions that registered reports raise in audiences of early career researchers, in the hope that this addresses some questions on your mind. I'm sure you'll have other questions as well, which we can help answer at the end.
So the first question I often get is whether registered reports are suitable for my field: how do I know whether this format would benefit my area of science? I think the simple answer is that it's applicable to any field engaged in hypothesis-driven research where at least one of the following problems occurs: publication bias; various forms of significance chasing or p-hacking; various forms of hindsight bias, such as retrofitting a hypothesis onto unexpected results; low statistical power; or lack of replication. If any of these problems apply, and your area is one in which people publish hypothesis-driven studies, then this format has the potential to benefit the reproducibility of that field. On the other hand, it's not applicable for everything, and it shouldn't be advertised as a cure-all for science. It's not appropriate, in my opinion, for purely exploratory science, where there's no hypothesis testing and the purpose is perhaps to generate hypotheses from large datasets rather than to test them. And it's generally not appropriate either for methods development, where a technique is being refined without any clear hypothesis moving from one stage to the next. So the value of this format really derives from hypothesis-driven science. Are registered reports suitable for me as an early career researcher? Yes, in my opinion they are, and the evidence from the submissions we're getting suggests that this is indeed the case. They send a signal that you're a scientist who cares about transparency and reproducibility, not just playing the game, as we say, but seeking to make real discoveries. And there's no reason for this quest for understanding nature and truth to trade off against the incentive structure in science: by pursuing this route, you can still publish in prestigious journals offered by the Royal Society, the APA, the Nature group, et cetera.
But you can do so in a way that sends a signal that you care about discovering something true rather than manufacturing stories that reviewers and editors will find attractive and interesting. There's also a perhaps more pragmatic benefit: when you're going for postdoctoral positions, it's worthwhile thinking about how your CV will look as you come to the end of a PhD. It's very common for finishing graduate students to list lots of in-preparation or submitted papers on their CVs. But if you pursue at least some registered reports, then even if you haven't completely finished writing up the final stage of a registered report, you will still have more to say about the work in progress. For example, a paper might be listed as provisionally accepted at a journal rather than in preparation or submitted. Personally speaking, as a PI myself, if I see a CV with a provisionally accepted paper, I know that's a paper that actually exists and has gone through peer review, whereas when I see in preparation or submitted, there's always a question in my mind as to how far that work has really progressed, and in fact whether that work exists at all. What is the acceptance rate for registered reports? At Cortex, our standard rejection rate for unregistered, normal research articles, so not registered reports, is 90%: nine out of 10 standard research reports get rejected for various reasons. But for registered reports, what's very interesting is that that flips around. Only 10% of the submissions that pass the initial editorial triage, and therefore proceed to in-depth peer review at stage one, are rejected. And that's not because we set a lower bar for quality or are more lenient in any respect.
It's simply that before the research is conducted, there's an opportunity for the authors to resolve criticisms and concerns raised by reviewers before they become blocks to publication. It's easier to fix a flaw in a procedure, or to optimize an analysis, before that procedure or analysis has actually been conducted. Because of this, we see a much lower rejection rate for papers that proceed through stage one of the process. At stage two, you might be wondering, what are the chances of getting rejected after my results are in? The answer is: very low, because the reviewers at that point are not assessing papers according to the traditional criteria. They're simply assessing, as I mentioned earlier, some very basic aspects: were the positive controls successful? Was the protocol followed closely? Are the conclusions based on the evidence? For this reason, the rejection rate at stage two is currently 0% across all the journals that offer registered reports. How long does the review process take? At Cortex, it's around nine weeks for each of the stages: nine weeks to complete stage one and then nine weeks to complete stage two. This typically includes at least one, but up to three, rounds of review; the most we've had is four, and the average is about two. This doesn't include the time taken for authors to revise their manuscripts. Sometimes authors revise their manuscripts very quickly and come back within days; other times, authors have taken months to adjust their papers and return them. But if you take away that time, then it's about nine weeks from the moment you submit your stage one manuscript to achieving an initial stage one decision. What happens if I need to change something about my experimental procedures after they are provisionally accepted? Are you forbidden from making even the most trivial change to your protocol? No, you're not. Minor changes are absolutely fine, and they're very common.
This can happen when equipment breaks or other very small changes in procedure are required. All that we require is that these are communicated to the editorial board as soon as they become known to the authors, for example during the research. In those cases, minor deviations are simply footnoted in the eventual stage two manuscript as protocol deviations. Major changes are more serious: for example, authors who want to change their data exclusion criteria, or something very major about their preregistered analyses. If the changes are too significant, that would require withdrawal of the stage one submission and possible re-review. We haven't had a case of that occurring to date, so it's most likely a rare event. But there will be a line that needs to be drawn by the editors between minor and major changes, and that's ultimately for each editorial board to decide. The point I really want to make is that deviations that are minor, that are part of the normal process of doing an experiment, are perfectly fine. Some of my analyses will depend on the results, so how can I preregister each step in detail? For example, whether a parametric or nonparametric test is used in an experiment would depend on the distribution of the data that's obtained; how can you know this in advance? Does preregistration require you to know everything about your data before you preregister your analyses? The answer is that it doesn't. Preregistration doesn't require each decision to be specified, only the decision tree. Authors are welcome to preregister contingencies, or rules for future decisions, essentially if/then statements: if my data are distributed this way, then I'll do this kind of test; if they're distributed that way, I'll do that kind of test. There are various ways that the most likely distributions or contingencies can be anticipated.
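To make the idea of a preregistered decision tree concrete, here is a minimal sketch of my own (not from the talk). It encodes the parametric-versus-nonparametric contingency as an explicit rule: the inputs are hypothetical p-values from a normality check on each group (in practice these might come from something like a Shapiro-Wilk test), and the function simply names which preregistered test to run.

```python
def choose_test(normality_p_a, normality_p_b, alpha=0.05):
    """Preregistered contingency rule (illustrative): if both groups pass
    the normality check, run the parametric test; otherwise fall back to
    the preregistered nonparametric alternative."""
    if normality_p_a > alpha and normality_p_b > alpha:
        return "independent-samples t-test"
    return "Mann-Whitney U test"

# Both groups look normal -> parametric branch of the tree:
print(choose_test(0.40, 0.22))  # independent-samples t-test
# One group fails the check -> nonparametric branch:
print(choose_test(0.40, 0.01))  # Mann-Whitney U test
```

The point is that the rule itself, not the eventual choice, is what gets preregistered: every branch is written down in advance, so no decision is left to be made after seeing the results.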
And then those contingencies themselves are preregistered, rather than researchers being committed to a particular hard-wired approach. I have access to an existing data set that I haven't yet analyzed; can I submit this as a registered report? The answer is that at some journals you can, and at some journals you can't. At Cortex and Nature Human Behavior, for example, secondary registered reports are not accepted, but at other journals they are. At the European Journal of Neuroscience, where I'm one of the section editors, we have decided to offer this format as a way of opening up registered reports and expanding their potential utility across science. The only requirement is that authors haven't previously observed or analyzed the data set before they propose the analysis on it, which is to prevent overfitting and other forms of bias from creeping back into the process. If you want to see which journals currently offer this, amongst other features, you can navigate to our registered reports journal features page, which you can find on the Center for Open Science registered reports hub; column seven there shows which journals offer analysis of existing data sets, or secondary registered reports. How do registered reports support replication studies? This is a very common question that I get. A lot of early career scientists are interested in doing replications, but they find there are a lot of systemic barriers to doing so. And this is true: generally in science, a whole range of circumstances conspire to tell us that it's not worth investing the time and the energy in doing direct or close replications. For one thing, the method sections in original research are often too vague to allow precise replication; there are too many gaps and too much secret knowledge, essentially, which prevents us from just using the paper as a guide.
The chronic lack of power in novel research means that replications often need very large sample sizes, which further reduces their feasibility. On top of that, there's a kind of social problem, which is that in some areas, such as psychology, trying to repeat someone else's experiment can sometimes be perceived as an act of aggression, and people who do replications can face accusations of bullying and so on, as we've seen in recent years. Even if you pass all of those barriers, you'll normally find it very difficult to publish a replication study, because motivated reasoning by the reviewers stops it from happening. If you successfully replicate the target study, then editors and reviewers will say that this adds nothing, that we already knew this. And if you fail to replicate, then at least some of the reviewers, who are likely to be authors of the original work, will try to stop publication: they'll go through your methods with a fine-tooth comb and find some deviation from the original method, which they can use to argue that you did something different and that's why you got different results. So it can be very difficult to overcome this barrier of motivated reasoning. And on top of all of this, there are many journals which not only prioritize novelty but also prohibit replications outright, seeing them as unpublishable and unworthy of attention. Registered reports solve all of these problems. For one thing, replication studies are explicitly invited. And you avoid the problems of motivated reasoning, vague methods and so on by having your replication experiment reviewed before you invest the resources into doing the study.
And you can also potentially involve the original authors in the peer review of your protocol, in cases where the methods that they've provided are insufficiently precise to enable you to just use their paper. And by breaking up the review process into these two stages, we prevent this motivated reasoning. It's not possible for the reviewers to shift the goalposts after results are in. Once they sign off at stage one, they are as committed as you are to the results, regardless of how they turn out. Another question I often get is how well registered reports are cited. It's an early initiative, so we don't yet have a lot of data on this, but we can say so far at Cortex that they're cited at a rate about 10 to 15% above the impact factor per year. So with the impact factor of Cortex being around four, registered reports are cited about 10 to 15% higher than that. I'm not a big fan of impact factors, I must admit; I think they convey very limited value in general. But insofar as these statistics matter, we have evidence so far that registered reports are reasonably well cited, certainly cited consistently with the impact factor of the journal, perhaps a little bit higher. Which perhaps isn't surprising when you think about it, given that they are large studies, they are robust studies and they're very transparent studies. So if science is really this process of seeking truth, then in fact registered reports should be amongst the most highly cited. I have no idea what effect size to expect in my experiments, so how can I do a power analysis as part of stage one? It's a very common question, not just from early career scientists but from all kinds of researchers at all stages. Usually there is some related literature. In fact, I think we have a tendency, in my area at least, to overestimate the novelty of every experiment that we do. We think every experiment is a kind of unique and beautiful snowflake that's not like anything that's ever come before.
But in fact I think there usually is related literature. And even if not, you can always specify a minimal effect size of theoretical interest, or of clinical significance, or whatever that criterion is, and you can power your study to detect an effect of that size. If even a minimal effect size of interest is uncertain, there are some options. You can adopt an orthodox statistical approach using sequential analyses: you collect data sequentially and perform frequentist tests as you go, but you correct the alpha level at each stage. Daniël Lakens has published a really nice method for doing this which preserves high power in study designs. Or you can go Bayesian: you can set a prior distribution of possible effect sizes and sample continuously until a Bayes factor favors one hypothesis over another. And you can also include pilot results within stage one submissions, in order to help inform effect size estimations and to contribute to the evidence base that you're using to motivate your power analysis. Another question I often get is whether reviewers could steal ideas at the preregistration stage and then run those protocols faster, perhaps, than the original authors. Before I answer this, I should just point out that scooping is very rare; there are very few properly documented cases of it happening. But nevertheless, we recognize that this is a real concern, so there are a few safeguards that we've built in. For one thing, it's important to remember that the protocol isn't published when your stage one submission is provisionally accepted. It's held in reserve by the journal, and the authors can choose to self-publish it if they wish on the Open Science Framework, but otherwise the protocol itself is really only seen by a handful of people: the editors and the reviewers.
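To make the sequential-testing idea described above concrete, here is a minimal simulation sketch. This is an illustration of the general group-sequential approach, not Lakens's exact procedure: the batch size, number of looks, and simulated effect size are arbitrary choices, and the constant per-look alpha of 0.0221 is the standard Pocock-style boundary for three looks at an overall alpha of .05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Pocock-style constant nominal alpha for 3 planned looks,
# holding the overall alpha at .05 (value from standard tables)
LOOK_ALPHA = 0.0221
BATCH = 30          # participants added per group at each look (arbitrary)
N_LOOKS = 3
TRUE_EFFECT = 0.5   # simulated standardized group difference (arbitrary)

treat, control = np.empty(0), np.empty(0)
for look in range(1, N_LOOKS + 1):
    # collect another batch of data for each group
    treat = np.concatenate([treat, rng.normal(TRUE_EFFECT, 1, BATCH)])
    control = np.concatenate([control, rng.normal(0, 1, BATCH)])
    # interim frequentist test against the corrected per-look alpha
    t, p = stats.ttest_ind(treat, control)
    print(f"look {look}: n/group={look * BATCH}, p={p:.4f}")
    if p < LOOK_ALPHA:
        print("stop: boundary crossed, effect detected")
        break
else:
    print("stop: maximum sample reached without crossing the boundary")
```

Because the stopping rule and the corrected alpha are fixed before any data are seen, this whole procedure is exactly the kind of contingency that can itself be preregistered in a stage one protocol.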
Even if you had an unscrupulous reviewer who went away and decided you had such an amazing idea that they wanted to run the experiment themselves, it's important to note that they couldn't influence whether or not your paper is eventually accepted, because the novelty of the approach, once you get to the provisional acceptance stage, is irrelevant in determining the final publication decision. And there's another little feature that we built in to help generate confidence in authors about this issue, which is that when the final registered report is published, at the top of the paper, in addition to the manuscript received date, manuscript accepted date and so on, we always publish the protocol received date. That is, the very first date that the journal saw that stage one protocol. And this is so that authors, if necessary, could always show that their idea came before any unscrupulous reviewer went off and ran their experiment faster than they could. And I think it's important not to overstate this concern, because we have so many different forums in science where we present ideas to each other, and openness is something that we treasure. When we write grant applications or give conference presentations, seminars, discussions and so on, we often present ideas for research that has yet to be completed, and seeking feedback is an important part of science. So I don't really see that registered reports raise any additional concerns on top of the existing processes that we have. Registered reports seem limited to single studies. What if I want to publish a sequence of experiments? Well, this is a feature that we offer at many of the registered report journals like Cortex and Nature Human Behavior: sequential registrations. What authors can do is add studies iteratively at stage one. So you preregister your first experiment, get your results, write up your stage two submission, and then you have a choice.
You can either publish the paper at that point, or you can lock it in stone, essentially, and preregister the next experiment. And if it's a very similar design, then we can pursue a fast-track review process, and you can bolt on experiments one by one in this manner. So with each cycle that you go through, the previous accepted version of the paper is guaranteed to be published, and you can securely add experiments as you go. There's another way that you can pursue sequential experiments, and that is to use the registered report as the finale of a series of preliminary experiments. We see this quite often with registered reports, where authors will submit say one, two or three preliminary experiments, and this all goes into their stage one submission, and they propose a protocol for a fourth experiment which kind of seals the deal: a big preregistered study which is usually designed to try to resolve some question or some uncertainty that has been generated by the previous experiments. This is a really nice model, I think, for a registered report, where the confirmatory experiment is used to provide a kind of finale to the original experiments. And finally, perhaps the biggest question I get, and the one that I think is the toughest to answer, is how do I convince my supervisor or principal investigator to even try this? And this is a real challenge. I don't have a complete answer to this. It can be challenging, and it really depends what kind of PI you have.
If your PI is someone who maintains a large file drawer, someone who is promoting a brand, essentially, or has a really strong investment in a particular theory or a particular direction of their work, you may find it difficult to convince them to even attempt registered reports, or to allow you to pursue this. Because in a lab where the information that is published is tightly regulated, a registered report can be a risk to the lab's narrative and brand if the results of that registered report happen to disagree with the brand that the lab is promoting. So I think the first step is really to raise the issue with your PI, and you'll learn something quite informative, I think, about the kind of scientist your PI is from how they react to the suggestion of registered reports. Whether they say, you know, this is an interesting idea and maybe we should give this a try, versus, absolutely not, there's no way we could embrace a method where we commit to publishing a paper regardless of how the results turn out. So you'll learn something useful about your principal investigator from how they react to the suggestion. But I'd say regardless of how they react, you can make a number of quite strong arguments for trying this out, and perhaps persuading them to adopt it, or at least attempt it as an experiment within the lab, to give it a go. First of all, you can explain the wider community benefits of this format for reproducibility in the field, and you can explain the potential benefits for your career in signposting your commitment to transparency and reproducibility and to doing high-powered, rigorous, deductive science. And this is always a strength when you're demonstrating the scientific contribution of your work.
Also, there's a potential argument that can be made, a benefit for PIs, that if they're working in a really competitive or controversial field, the registered report format can be very helpful for providing much-needed clarity and avoiding stonewalling. Sometimes what happens in highly competitive areas is that rival labs or rival researchers will reject papers because the results disagree with their particular theoretical bent. But with a registered report, those rivals no longer have the opportunity to do that. So there's a benefit, potentially, for PIs who are very strategic in their approach, in adopting this format as a way of publishing results which they would otherwise find very difficult to publish because they disagree with other scientists' particular agendas. And you can also argue that the format is being offered by an increasing number of major journals, with the number rising all the time. As we go forward with this initiative, situated as it is within a continuum of various initiatives going on, transparency is becoming increasingly prominent in the eyes of funders, institutions, journals, and science generally. And this is only going to increase in prominence over time. Open science is going to be the future. So you can certainly make an argument with a PI or a supervisor that it's an important part of your training, and perhaps an important part of their own career, that they start thinking about transparency and ways of getting ahead of the curve a little bit. It's better to adopt these sorts of initiatives before they're necessary, or before the bar is raised too much, so that by the time funders are saying, you know what, I would like you to do more pre-registration, or I would like you to do more open data, you will be prepared for that and you'll have the necessary training and experience to do it yourself.
So I'll just finish by pointing out the registered reports information hub. This is a page that David and I maintain at the Center for Open Science. There's lots of great information on here. We've got a list of all the participating journals with links to all of their individual policies and their author guidelines, so you can find all of this in one location. And that list is continually growing, so check back now and again, and consider following us on Twitter, because we always announce when new journals adopt registered reports. You can find more details about the workflow of registered reports, in more detail than I've had time to discuss today. You can also find various other sources of information, including information about new registered reports funding models that we're developing, where we work in partnership with funders to create a registered reports model that doubles as a grant application. And you can find an extensive list of frequently asked questions as well, questions that, again, I haven't had time to address in full. So I'll leave it at that for now, and either we can move on to David's discussion of pre-registration generally, or I can take any questions that come to mind. Thank you very much for joining us today. Hi, thanks, Chris. I'm gonna make a couple of points and then we'll open it up for questions. So let me just steal Chris's screen here. Move you off. Go to my messy desktop here. All right, I just wanted to give one little distinction about how we talk about pre-registration and how it fits into the registered reports workflow. Then I'll give an example workflow where you can include pre-registration by yourself, and give you a link to create that pre-registration on the Open Science Framework. So the pre-registration itself is that time-stamped, read-only version of your research plan.
And if it includes the complete analysis plan, it can start to address all the issues that Chris was talking about: p-hacking, unreported flexibility in data analysis, hypothesizing after results are known. But then, when that research plan is subjected to peer review before results are known, it becomes part of that complete registered report format that Chris has been talking about. And so I wanna give a couple of examples of when you can use a pre-registration, hopefully in a series of studies leading up to submission of a registered report, or by itself for publication eventually in a journal that might not necessarily be implementing registered reports. So let me show two example workflows here, sorry about that. One example of a workflow is when you've got that strong, theory-driven, a priori expectation. You create the pre-registration with the research plans: the specific hypotheses that you're going to test, the variables you're gonna collect, and how you're going to analyze the data to make an inference. You collect the data set after completing the pre-registration and conduct those pre-specified confirmatory tests. And then afterwards, go to town on the data set: look for any sort of unexpected trend, look for the effect of various covariates or differences between populations that weren't necessarily expected ahead of time. Those unexpected trends, that data exploration that can lead to really new discoveries, are what's appropriate to be used in a next round of hypothesis-testing research. So create a new pre-registration with those unexpected trends for your next round of data collection. But maybe you don't have the luxury of doing multiple rounds of data collection with the type of work you're doing.
Another example, a way to have your cake and eat it too, so to speak, is when you're starting work in a much more exploratory way: you have very few pre-specified expectations for how there will be differences or trends or significant results in the data set. So you create a fairly vague pre-registration, saying you're just going to collect this data and look for some trends in it. You collect the data and then you split it in half, or some other ratio if you want. One half of the data set you keep secret, keep safe, don't look at it; the other half you open up and use for discovery, for looking for something that you didn't necessarily expect ahead of time. Then, when you have something that you think is exciting, worth sharing, or really worth putting to a stronger test, you create a new pre-registration with the test that you just conducted to find that tantalizing preliminary result, and use that as the basis for your confirmatory research. You don't have to collect new data at that point. You just open up the half of the data set that you'd had on reserve and hadn't analyzed up to that point, and that can be the basis for confirmatory hypothesis testing. I want to show one example on the Open Science Framework of how to do this. cos.io/prereg is the page where you can learn a little bit about what creating a pre-registration can entail. And that brings you into the Open Science Framework, where after logging in you'll have choices to start a new pre-registration, or, if you have projects already on the OSF, you can continue working on one that you already have.
And the purpose of this form is to lead somebody through the series of questions that need to be answered in order to create a complete pre-registration with a fully specified analysis plan. It'll lead you through the research questions and hypotheses that you need to specify, what you're going to do to collect the data, and the degree to which any data set may or may not already exist. It'll ask you about the variables: if it's an experiment, what variables you're going to manipulate. About a third of the pre-registrations we see coming in are observational studies with no manipulated variables, so this process is appropriate for observational studies or even meta-analyses. If you're going to combine variables into a complex index, you can specify how that will happen. The most detailed part of the pre-registration is the analysis plan, and this is where people tend to spend the most time, because these are the types of questions that do have to be answered in any publication but are often left until later, after data collection. What we're asking here is to specify ahead of time exactly how that analysis will happen: whether any transformations are necessary, any follow-up tests that depend on the result of your omnibus hypothesis tests or models, and how you're going to make an inference. Pre-registration is also a great way to justify a one-tailed test, which is otherwise often almost unpublishable, because reviewers will ask, did you really specify that direction beforehand? Pre-registration is a great way to increase power by specifying your one-tailed directional hypotheses in advance. At the end of this form, you'll have the option to review what it looks like, and you'll have two options. You can just create the pre-registration right away, or you can enter the Preregistration Challenge, which is an education campaign where we're incentivizing you to try this process out.
If you submit it for review, what we're doing is just taking a quick look to make sure that the registrations are fully specified analysis plans, and publications resulting from these registrations are eligible for $1,000 prizes. We have a thousand of these $1,000 prizes to give out, so take a look at that. Any registration on the OSF can be made public immediately, or you can put it under an embargo for up to four years. That's what I did. And then what happens is we'll take a look at it and get back to you within one or two days. For about half of them we ask for slight revisions, and then the pre-registration is created on the OSF. You can then send the work for review in a normal submission to a journal, for peer review after results are known, or, more ideally, use this as part of your submission of a registered report: after getting in-principle acceptance from a journal, include that information on the OSF in a pre-registration. And with that, I'm going to stop talking, finally, and leave it open for questions. So I'm just gonna pull up the Q&A window. Feel free to use the chat window, or there's a Q&A panel, it should be at the bottom of your screen, and you can submit questions either way. Emma just asked, can you recommend some reading on effect sizes and power? Chris, do you have any off the top of your head? Yeah, so there's lots of great stuff on this. There's a great paper by Daniël Lakens on how to calculate effect sizes that he published in Frontiers. Let me just see if I can dig it out for you here. I'll put it through on the chat window, you'll see it here. This is a really great paper for effect sizes. There have been lots of great papers written about power. Perhaps the one that I think is really important to consider is the review of power by Kate Button and colleagues, the "Power failure" paper in Nature Reviews Neuroscience. I'll just pop that link through as well.
It depends on which area you're working in, but the issues that they raise seem to be quite common in many different areas of science. It's worth a read about the consequences of low power, not only for false negatives but also for false positives. And for software for doing power analysis, G*Power is really good and really easy; it's a simple program to use. There are other programs as well. If you do fMRI, there are now, for example, specialized tools like NeuroPower and fmripower.org, which are also quite useful. Another question. I'm new to pre-registering studies, and I'm wondering if there's any benefit to pre-registering a study on OSF if I don't go through a journal's pre-registration process. For example, I'm working on a model that I have a pilot study for. I published my method. How can I leverage that with a submission to a journal? Chris, I'll hand that one to you. So the advantage of pre-registering, even if you don't go through a registered reports process, is that if you find amazing results, you're able to prove to reviewers that you didn't p-hack. This is an argument that I think Leif Nelson first made some years ago. He said, you know, I have a purely selfish motivation to pre-register, which is that I want to be able to demonstrate that I predicted my results, rather than having to navigate the uncertainty of reviewers questioning to what extent your results or your hypotheses were cherry-picked out of unexpected parts of your data. So the transparency that pre-registration gives you also insulates you against various criticisms from reviewers that can make it difficult to publish.
And the second thing I would say is that even if you're not able to use the registered reports route, one of the benefits of pre-registering is that you can earn the preregistered badge that some journals like Psych Science, and an increasing number of others, are offering, where, having demonstrated a time-stamped preregistered plan, you get this little kitemark appearing at the top of your article, which adds a further signpost of credibility and transparency. And of course you're also eligible for the Preregistration Challenge that David mentioned earlier, which is always nice. Everyone needs a thousand dollars, don't they? It's a nice little incentive. I'm just going to show you what those badges look like in Psych Science. They've been issuing these badges in every issue for the past several years, and here are two triple-badged papers right in a row. These are just little visual indicators that these two studies have an open data set, are openly sharing their materials in a persistent repository, and are citing a pre-registration. These two studies did not undergo peer review before results were known, but they are citing a pre-registration showing that their analyses were specified before seeing the data set, probably before collecting the data. Let's see, a question here in the Q&A window. What if the editors of the most common journals in my field do not want to implement registered reports? They might cite several reasons, maybe because they think it is too much work for the editors and reviewers, or because they think it's impossible for researchers to plan methods and analyses in advance, even though the field is hypothesis-driven. That's a good question. The first thing I would say there is, email me or David and let us know which journals you want, because we have a pretty good hit rate now of convincing journals to adopt this.
There are a lot of reasons why it is in the interest of journals to offer registered reports, and in fact if you look on the YouTube page you'll find a previous webinar which was geared toward journal editors, where you can find some of the arguments for why they should adopt it. So we are getting pretty good at convincing journals to move forward on this. There is something fundamentally incompatible about saying that an area is hypothesis-driven but that I am unable to plan my methods and analyses before I see my data. You can't really have it both ways. Either you are doing hypothesis-driven research and your hypothesis came before your analysis, or you are doing exploratory research and you are generating your hypothesis from your analysis. But the mixture of the two, where you claim to be testing hypotheses while somehow generating the hypotheses from the data, is circular and inferentially unsound. So there are strong arguments for editors to adopt this. Issues like workload and so on are always a concern, but one of the arguments we often make to editors is that in the long run this format should reduce the workload on reviewers, because it reduces the rejection rate. The rejection rate is a huge factor in reviewer workload, because so many papers get sequentially submitted to and rejected from multiple journals, which means that a typical paper might go through 12, 15, even 20 reviewers before it gets published. Whereas with a registered report, there is perhaps a slightly more detailed review process initially, but there is a much higher chance of the protocol being accepted at the first journal the authors send it to. So ultimately, as this format gets more popular, there will be less of this submitting down the chain, and the workload issue should be reduced.
I would say, though, that it is more work for editors, because unlike with normal papers, where many editors admittedly don't even read the papers that they edit, as a registered report editor you really do have to put in the time to read the papers properly, and to match the investment that the authors and the reviewers are making. These are all arguments that we use to motivate editors to adopt this format, so do let us know if there is a particular journal where you would like to see registered reports offered, and we will approach them and let you know the outcome. Question here: my work is in nutrition and memory, would registered reports be appropriate? I think the answer is a resounding yes if you are testing hypotheses. Absolutely. This comes back to the question, is it appropriate in my field? If you are testing hypotheses, if you are writing papers which say "we predicted that", then you are testing hypotheses, and if there are various forms of bias in the area, then definitely: I think there is definite traction for the format. There is one nutrition journal so far offering registered reports, we haven't got a huge number of them, and I think it is called NFS Journal. It is an area where we need more journals, so again I would say, if there is a particular journal in your area that you would like to see adopt this format, please let us know and we will approach them. Here is NFS Journal, and here is a link to their guidelines, I will paste it here. They are very similar to the standard guidelines. I would also just say that there are a number of other journals which would publish work on nutrition and memory that aren't nutrition journals, so Royal Society Open Science may well consider it.
If you are doing work where the research question is very important and you are able to use very robust methods, think about Nature Human Behaviour, because they are a generalist journal inviting registered reports across their entire broad scope of human behaviour. If you are doing work where the question itself is very important to address, I would strongly recommend considering them as an outlet. The benefit is so strong for you, the researcher, because it provides that really strong and rigorous peer review and improvement process at a point in time where you can really take that advice and improve the research workflow. And the biggest benefit is that you are guaranteed a publication as long as you follow through with the requirements, making sure that the work is conducted to the level that was agreed upon. That is a huge benefit, as opposed to playing the game, as Chris said. I think that is all the questions that have come through, so I want to thank everybody who is online now. If you Google the Center for Open Science, this recording will show up on our channel, and I will make sure that an email goes out with the link, but feel free to share that link around if you have other colleagues who want to see this webinar. Thank you, Chris, for your time, and thank you everyone who has stayed online. Thanks, everybody. Take care. I am going to stop recording.