Welcome everyone to this webinar. I'm Jason Gerson, a senior program officer at PCORI, the Patient-Centered Outcomes Research Institute, which is a funder of clinical comparative effectiveness research. I co-chair the HRA Open Science Task Force. The goals of the Task Force include learning about current practices and policies regarding data sharing, pre-registration, and registered reports, among other issues, among funders and other parts of the research ecosystem. We also work to identify and disseminate information about best practices in open science and to stimulate discussion among funders about future directions and aspirations for open science. Today's webinar is the second in a series about pre-registration, following up on a webinar held in December 2018. Without further delay, I'll briefly introduce our two speakers. Dr. David Mellor is the Director of Policy Initiatives at the Center for Open Science and supervises the implementation of open science policies including data citation, data and code sharing, pre-registration, replication, and registered reports. And Dr. Stavroula Kousta is the Chief Editor of Nature Human Behaviour, where she has implemented the registered reports format; she has over a decade of experience implementing high standards of research at PLOS Biology and Trends in Cognitive Sciences. One more housekeeping note before I turn it over to David: all attendees should be able to use the chat window if you want to provide a comment. There's also a specific Q&A feature if you have a specific question you want to make sure gets answered. Please use those; we'll be monitoring them and will make sure anything that comes through gets answered at least by the end. So with that, David, please go ahead with the first half.
I'll be talking for about 20 or 25 minutes, giving a general overview of registered reports and how they can be relevant to the research funder community. Stavroula at Nature Human Behaviour will then talk more specifically about what they've been seeing there: what types of submissions and what types of responses have come with implementation so far. You can find this presentation at that URL. These materials will be made available to everyone attending and everyone who was invited, so if you want to use or reuse any of this material, please feel free to do so. First I'm going to give a very basic introduction to the Center for Open Science, just so you know who we are and what fields we work in, followed by a general overview of the basic mechanics of a registered report. I'll draw some distinctions between registered reports and pre-registrations, because they are very similar but have a couple of distinct features. I'll talk about the advantages the registered report format has for individual authors and researchers, but also the benefits it has for the scientific community as a whole, give a couple of pointers about how research funders can use the format in their workflows, and cover a couple of FAQs that we see pop up. Then I'll hand it over to Stavroula. Our mission at the Center for Open Science is to increase the openness, integrity, and reproducibility of scientific research, and we do that through three major activities. We're well known for a lot of the metascience work we do: the reproducibility projects, where we attempt to replicate work that had been previously published. The purpose of that is to identify barriers to successful replication. The barriers we identify are then used to inform our policy, advocacy, and education initiatives to help increase reproducibility through increased transparency.
And finally, most of the folks who work at the Center for Open Science are developers who build infrastructure to enable the actions we advocate for. So when we talk about increased collaboration, data sharing, or registration, we build tools to enable all three of those activities, and we're funded by a diverse range of government and private foundations who support our work in various ways. When judging a scientific study, or any scientific claim really, there are three main things to look at: the importance of the research question, the rigor of the methods used to conduct the study, and finally the credibility of the results. Ideally, the results flow directly from what was proposed in the questions and the methods. But in reality, we know that the results are what matter most for publishing and career advancement. If we accept the premise that the value of hypothesis-testing research is provided by those first two things, the importance of the research questions and the rigor of the methods, and not by the results it produces, then it's important to make as many decisions as possible before we see the outcome of the study. That's the rationale behind the registered report publishing format. It's a two-stage peer review process where much of the peer review occurs before the study is conducted. There are many advantages to that, both for improving the study design and for addressing a lot of the biases that can creep in as results become known. That first stage of peer review can look like a typical research article, formatted with an introduction, proposed methods and analyses, and possibly pilot data, but obviously excluding results and discussion.
That gets submitted to a journal, and the reviewers and editors evaluate whether the hypotheses are justified, either theoretically or because they directly replicate hypotheses proposed in an earlier study. Are the proposed methods and analyses feasible and sufficiently detailed? If you're describing a recipe, is it specific enough for somebody else to follow? Is the study well powered, typically at 90% or higher, although some journals, Nature Human Behaviour being one of them, have their own specific power requirements? And have the authors included sufficient positive controls to confirm the study will provide a fair test? This is an important point that we'll come back to: it puts the reviewers in the frame of mind of not knowing what the results will look like. How can we be sure that the study is going to be conducted in a competent manner? If the answer to all those questions is yes, then the journal has the option of providing an in-principle acceptance, or IPA; I'll use that term several times throughout this webinar. That's a promise by the journal to publish the resulting work regardless of the main outcome of the study. The plan is sent back to the authors with that in-principle acceptance and the work is conducted. After the work is completed and written up, the authors submit a stage two manuscript for peer review. That will include the introduction and methods, which had been previously reviewed, plus a new results section covering all the registered confirmatory analyses that were conducted. Authors are encouraged to include any additional exploratory findings they want, as long as those are clearly indicated as such, and then obviously a discussion of the implications of those findings. Those last two parts of the manuscript are brand new; they had not been reviewed before.
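To make the power requirement mentioned above concrete, per-group sample size for a simple two-group comparison can be sketched with the normal approximation; this is a rough illustration only, and the effect size of d = 0.5 below is an arbitrary example, not a number from the talk.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.90):
    """Approximate per-group sample size for a two-sided, two-sample
    test of means, using the normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A medium effect (d = 0.5) at 90% power and alpha = .05
print(n_per_group(0.5))  # → 85
```

Note how quickly the requirement grows for smaller effects, which is why reviewers scrutinize the power analysis at stage one.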
Many journals implementing this format will also require data and materials deposition, but that's decided journal by journal. Seeing that second submission, the reviewers evaluate whether the positive controls succeeded and whether the conclusions being drawn are justified by the data. And they're explicitly told to ignore whether the hypotheses were supported, whether the results were significant or novel or impactful; those things can't come up at the second stage of review. It's perfectly appropriate at the first stage to ask whether these results will be important enough, no matter how they come out, to warrant publication in this journal, but those considerations cannot come up at the second stage of peer review. Pre-registration was the subject of the first webinar in this series, and we've had a number of education campaigns working with researchers who are pre-registering their analyses, so I wanted to give a little disambiguation of how pre-registration is distinct from this process, because it's one of the frequent areas of discussion. A pre-registration is a research plan with a time stamp: a read-only version of the research plan that is immutable and cannot itself be changed. It can of course be updated as time goes on, but that read-only version is frozen. It's created before the study and submitted to a public registry. There are a couple of nuances to both of those points. There are ways to pre-register if you're blind to existing data, that is, if the data already exist but you haven't looked at them. And some public registries allow an embargo feature, so the plan does eventually have to become public, but not necessarily right away. The research plan will contain hypotheses, data collection procedures, and any variables that are collected or extracted from an existing data set.
And the analysis plan will include the specific statistical models used to address each of those hypotheses and any criteria that are going to be used to make an inference. Those are most often p-value thresholds, but of course they can also be Bayes factors or confidence interval criteria. Clinical research, economics, psychology, and the social sciences all use slightly different terminology, and these aren't perfect synonyms, but for the sake of this conversation it's fair to describe them all as functionally the same thing. A pre-registration is typically referred to simply as a registered trial in the clinical world, or perhaps more specifically a prospective trial registration. In some of the social sciences and economics, the term used most frequently is a pre-analysis plan, which can be attached to a study's registration. So if you hear these terms, they are in many ways synonymous. Both pre-registration and registered reports have a variety of benefits, and they address similar issues. Both address unreported flexibility in conducting statistical analyses. If you've seen some of our background work on the importance of addressing that, the take-home message is that given a single data set and a single research question, there are dozens if not hundreds of ways to analyze it, and by chance alone one or several of them will be statistically significant, simply because of the number of different analyses conducted and the false-positive rate that p-value thresholds allow.
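That last point is easy to make concrete. If each analysis carries a 5% false-positive rate and the analyses were independent, the chance that at least one comes out "significant" on pure-noise data is 1 - (0.95)^k; a minimal sketch, with arbitrary example values of k:

```python
def familywise_rate(k, alpha=0.05):
    """Chance of at least one p < alpha among k independent
    looks at data in which the null hypothesis is true."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 20):
    print(k, round(familywise_rate(k), 2))
# With 20 analyses, the odds of a spurious "significant"
# result are roughly 64%, even with nothing to find.
```

Real analytic choices are correlated rather than independent, so this overstates the inflation somewhat, but the direction of the problem is the same.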
Both registered reports and pre-registrations make a clear distinction between planned research, the confirmatory research being conducted to address a very specific predetermined research question, and unplanned discovery research, the purpose of which is to look for or discover something completely unexpected. Making a clear distinction between those two modes of research is important for reasons I'll reinforce in a moment. Registered reports address publication bias against null results; if you simply pre-register your research and you get null findings, it can still take some work to find an outlet for them. Registered reports include a two-stage peer review process where the methods can be improved prior to conducting the study, and that's one of the primary benefits we see with the format: these research plans, just like any plan or any piece of writing, can be improved through additional feedback and review. Confirmatory versus exploratory analysis is a core concept when discussing pre-registration and registered reports. The context of confirmation is very traditional hypothesis-testing research. There's a very specific way it should be conducted, results are held to high standards of rigor, and you want to minimize false positives. Importantly, in this context p-values mean what they were designed to mean: the probability of obtaining the observed data, or data even more extreme, in a universe in which the null hypothesis is true. That definition is a mouthful and doesn't quite gel with the way we usually think, and that's why some of these methods are appropriate for making a very clear distinction when you're in that mode of research.
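A quick simulation makes that definition concrete: in a universe where the null hypothesis really is true, a correctly computed p-value falls below .05 about 5% of the time. This sketch uses a simple z-test on simulated normal data; the seed, sample size, and number of simulated studies are all arbitrary choices for illustration.

```python
import random
from math import sqrt
from statistics import NormalDist, fmean

random.seed(1)  # arbitrary seed, for reproducibility

def z_test_p(sample):
    """Two-sided z-test of mean = 0 for N(0, 1) data (known variance)."""
    z = fmean(sample) * sqrt(len(sample))
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Simulate many studies in a universe where the null is true.
hits = sum(
    z_test_p([random.gauss(0, 1) for _ in range(30)]) < 0.05
    for _ in range(4000)
)
print(hits / 4000)  # close to the nominal 0.05
```

The 5% rate only holds when the analysis was fixed in advance; picking the analysis after seeing the data breaks the guarantee, which is exactly what pre-registration protects against.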
When you're in discovery research, pushing knowledge into new areas, the result is a testable hypothesis that can be confirmed, a model that can be applied, a theory that can help explain a wide range of phenomena and then be further tested and refined. You want to minimize false negatives in this mode of research: you don't want to miss out on the next great discovery, you don't want to pass up discovering penicillin or anything else by happenstance. But the result of this is not inferential work that can be applied to wider populations; the result is something that deserves to be confirmed. Pre-registration is just a way to make the distinction between these two modes of work a little more clear. And I should reiterate and state very clearly that neither of these two modes of research is superior to the other; we shouldn't impose one on top of the other or say that one is better. But they are very distinct, and problems arise when work that was conducted in an exploratory manner is presented using the tools that were designed for confirmatory research. That can make the work seem more surprising, but it comes at the expense of its later credibility. There are many advantages of registered reports for the research community. They produce reproducible, highly detailed methods; a reviewer is not going to grant an in-principle acceptance, is not going to see the work as worth publishing regardless of outcome, unless those methods are sufficiently precise. They're transparent, they often include open research data or materials, and they make a clear distinction between those two modes of research, which addresses hindsight bias, publication bias, and selective reporting. Those are all aspects of registered reports that the research community sees as a benefit. Individual researchers see individual benefits as well. The early peer review process is
tremendously beneficial because it provides feedback at a point in the research life cycle when that feedback can actually make a difference. One of the most frustrating things in the world is getting a review back pointing out a serious flaw and thinking, aw shucks, if only I had known that a year ago I would have saved all this time. So that early review is a tremendous benefit to the individual author and the individual study. It's also more efficient overall: shopping an article around can be a waste of both author and reviewer time. The time spent upfront on the early registered report submission does add time to the process, but it's more than made up for in the end. And it's simply more ideal: it's focused on what scientists who went into the profession really care about. These are the research questions I want to answer; these are the methods I propose to address them. Those are the things authors have control over and should be rewarded for. There are a couple of ways that research funders can engage with the registered reports format. Option one will actually be the subject of the third webinar in this series: a specific partnership between a journal and a research funder, where proposals can be jointly evaluated through a shared reviewer pool, and if the program officer and the editor both agree on providing funding and in-principle acceptance, the work can go forward. There are a number of these partnerships ongoing, about four or five of them right now between a number of entities; we'll go into more detail about that in the next webinar. Option two for a funder to engage with the registered report format is simply requiring that, now that the work has been funded, part of the work has to be submitted as an RR, a registered report. That's obviously going to vary tremendously
based on the scope of the individual work. If the project being awarded is for one individual study, this could work very well. Even if the submission ends up getting rejected, the feedback provided through the peer review process can help improve the statistical analysis plan, and if it is accepted, then the funder benefits from knowing precisely where the work is going to be published. If the project spans many years and is likely to include many studies, obviously requiring all of it upfront to be a registered report could be impossible; researchers aren't necessarily going to be able to pre-register work that won't be conducted for months or years. So it could be a simpler requirement that some of the work, perhaps after a certain stage or after a certain amount of time, be submitted as a registered report. There are a couple of options even within option two. And finally, the easiest way for the research funder community to engage with the registered report model is simply to incentivize it: mention to grant applicants that the work is going to be scored or partially evaluated based on plans to pre-register the studies or perhaps to submit them as registered reports. These are some more creative ways to think about how to engage with the registered report format as a research funder. As I said, for option two, requiring that a research submission be a registered report will get the work off to a great start. Oftentimes registered reports are submitted after a bit of pilot data have been collected, which can show preliminary or exciting new evidence, or simply feasibility, that a study design is able to be conducted. A key consideration if you're going with option two is to make sure you encourage time in the project timeline for this. It does take additional time
on the front end; there is a round or two, perhaps three, of peer review that occurs early in the study planning process. That's a feature of the registered report publishing format, and adding that additional time, a couple of months, into the project timeline early on will have benefits later, but it is worth considering and making sure to at least allow for it. If you're more directly engaged with the grant submission process, make sure applicants add a couple of additional months to their project timelines to account for this process you want to encourage. Registered reports have been adopted by, I think that number is even out of date now, about 166 journals at the moment, across the spectrum of science: in biomedicine, the social sciences, and even the physical sciences, where a couple of journals are doing it as well. We've seen both the number of journals accepting registered reports and the number of registered reports being published grow nearly exponentially over the past five years as this initiative has grown, and we have resources listing both the participating journals and the completed registered reports; again, these links will be provided to attendees afterwards. The very first analysis shows some preliminary evidence that registered reports are working as intended: a study late last year reported preliminary results showing that some of the outcome measures are as expected, with more null results among the reported findings than in a comparison group of studies. We have an information hub available for editors, funders, and authors who are interested in submitting a registered report on our website, cos.io/rr (for registered reports). There's information there that gives clarity into all the different policy options available for journals or funders to choose,
a detailed explanation of the workflow, suggested language for editors, reviewers, and authors to use, and template recommendations for how authors should engage with the format. A couple of the FAQs that we often see. If we're accepting papers before the results exist, how can we know that the studies will be conducted to a high standard? The stage one review criteria focus on exactly that question: if we don't know the results, how can we make sure the study will be conducted to a high standard? Creative thinking is often applied by the reviewer and the editor to push back on the author; the onus is on you to prove that. So that comes back to positive controls and checks for floor and ceiling effects in the incoming data. And importantly, the journal is not obligated to publish research that does not pass the predefined quality checks; if errors were made, the journal is not obligated to publish work that is not meaningful in that way. Obviously the main outcome of the study can't be a blocker to final publication, but there can be a series of checks put in place earlier in the process to ensure that the work is being conducted, and the data are being collected, in the way that was expected. What happens if authors need to change something about their procedures after they are provisionally accepted? No plan survives first contact with the enemy, as we know. If it's a very minor change, say slight equipment changes, those could be mentioned just as footnotes. But major changes, such as changes to exclusion criteria or procedures, are handled at editorial discretion: reach out to the editor, say this change is happening, and the editor decides whether it's a game changer that requires additional review and input, which is very likely to happen, or just notes that it's fine to go ahead and that it won't affect final
publishability. Are registered reports suitable for all research? No, the format is not suitable for every type of research. It's applicable to fields where there is hypothesis-driven research and where one or more of the following are likely to occur: publication bias, a strong incentive to get statistically significant findings, hindsight bias, low statistical power, or a lack of replication within the field. Those are all reasons to implement registered reports within a particular discipline, but the format is not applicable to all modes of research that answer different types of research questions; again, exploratory work, methods development, and model or theory development don't necessarily benefit from this workflow. Some of my analyses will depend on the results I see early on; how can I pre-register each step? The important thing there is to pre-register the decision tree. Knowing that a decision will come up one or two or three steps down the line, you can make sure those decisions are made in a way that isn't biased by how the incoming data are looking. In some cases pilot data or modeled expectations can be used to justify some of those decisions early on without being biased by the incoming data that will be used to draw inferences. Can I submit a registered report if I'm using an existing data set? It depends; some journals offer secondary registered reports if you've put in safeguards to ensure that bias is prevented. The authors would have to provide a very precise statement about the degree to which they are ignorant of that data set, so that's handled case by case with the journal. And finally, we've known about these problems of hindsight bias and looking to confirm previously held notions for a long time; it's built into the fabric of the way science is produced, and a lot of these
ideas have been pushed around before, and we now have a really clear process for addressing them. So with that, I will say thank you; we're going to switch to Stavroula, and again, these presentation materials will be made available, and if you'd like to follow that link now you can take a screenshot and start looking at it. David, this is Jason, I'm just going to jump in because I noticed a question in the Q&A, but you can decide to defer it until the end. The question is: because registered reports show a sharp increase in null findings, are scientists more cautious to use them? That's a great question. When presented with null findings, authors traditionally don't quite know what to do with them; the impression is given that the study failed and that the work is not going to be interesting or publishable. That's a problem, because null results, if the study was well designed and well conducted, can be as meaningful as positive results, and right now there's a strong bias against those null results. So we have a biased understanding of how the universe works, because we're only seeing half the picture. That's a problem that authors and researchers face, and that editors and reviewers see: null findings don't seem interesting, no matter how credible they are. The intent of the format is to make sure that for research questions that are interesting regardless of outcome, and that are addressed with a high degree of rigor, the resulting information is disseminated. It's a responsibility of the entire research community to showcase these null findings as true, accurate, and credible; since the questions were asked before seeing the results, there was a reason to ask them, and just because the results are null doesn't mean they're not interesting. The time to judge whether or not a null result is meaningful is before you see the null result;
otherwise it'll just bias how we perceive the universe. I'll mark that as answered, but we can come back to it; Stavroula could probably give some insight into that based on what they've seen so far at Nature. Great, thank you, David. It's a pleasure to have the chance to talk about our experience with registered reports at Nature Human Behaviour. Sorry, my screen is on the other side. In this presentation, over the next 20 to 25 minutes, I'll talk a little bit about why Nature Human Behaviour adopted the format, and I'll walk you through our requirements for registered reports at Nature Human Behaviour; as David pointed out, there is some variability in what sorts of research different journals are set up for and what expectations they have for the format. I'll also talk a little bit about our evaluation criteria at the two different stages, both those that are specific to Nature Human Behaviour and dimensions that apply across all journals. There are three things that I want to make sure are entirely clear to everybody listening by the time this presentation is over. Registered reports represent a radically different way of doing research for authors, one that I strongly believe leads to much more robust, credible science. They also represent a radically different approach to peer review for reviewers, one that is far more satisfying and gives them much greater involvement in the work. Finally, David mentioned a list of advantages that registered reports confer on scientists and the community in general, so I won't go through those; I will just focus on the single key benefit that I see for funders, and why funders would be making a very clever investment if they adopted and supported initiatives that promote registered reports. Nature Human Behaviour is a Nature Research journal published by Springer Nature. The flagship journal of our group is Nature, and there are
now 29 primary research Nature-branded journals, including Nature Medicine, Nature Genetics, and Nature Neuroscience. Nature Human Behaviour is a recent addition: we launched at the beginning of 2017, and that's when we adopted registered reports, at launch. We're an online-only journal that currently supports green open access following a six-month embargo. Like all of the Nature journals, we operate with professional editors, a small team currently in three locations, and we're very actively involved in the peer review process. There are three aspects of the journal that are very relevant to the rest of this presentation: it's a very broad-scope journal covering several different disciplines, we're highly selective about the types of research the journal publishes, and one of our priorities when launching the journal was to design its policies in a way that supports what we're talking about today. The journal is a thematic journal. We're very used to disciplinary journals, journals such as Nature Neuroscience that are devoted to a particular discipline. We are instead a thematic journal that encourages the submission of research from across all disciplines, social, biological, health, and physical, as long as it has something important and significant to say about any aspect of individual or collective human behaviour. We ultimately aim to strengthen the reach and the impact of this research in addressing the most pressing societal challenges, ranging from health to sustainability. And as I said, a priority for us is to support robust scientific practices. Usually when you talk about a journal that's highly selective, supporting robust scientific practices has, over the past decades, been somewhat of an opposite norm; however, it doesn't have to be. The way we've approached our policies and
our evaluation and publication criteria is to design them to align what is good for science with what is good for scientists publishing papers that may be highly visible to the one or several communities interested in them. One example of how we support robust, reproducible research is our evaluation criteria. (Sorry to interrupt, there's a little bit of echo, it sounds like a big room; could you speak a little closer to the microphone? Is this better? Brilliant, I think so.) I've put on this slide our editorial evaluation criteria for all research manuscripts submitted to the journal, not just registered reports. There are certain editorial criteria we've put in place to promote research that is credible, robust, and reproducible. We prioritize for peer review research that has been pre-registered. We do not mandate it, but a piece of research that has pre-registered its hypotheses and analysis plan, its protocol, will be prioritized for peer review. We don't look just at significance, at whether a result is significant or not, because we know that significance is meaningless unless a study is sufficiently powered and unless effect sizes are meaningful. We've also redefined what we consider a significant scientific advance. Discovery is extremely important; science would come to a halt if we stopped learning and discovering new things. But that's only half of the story. The other half is making sure that what we think we know is true, and how much faith we can place in a specific finding or discovery. Traditionally, for highly selective journals, there may have been an outsized focus on discovery and on novelty. We believe that's unwarranted, and for that reason we prioritize equally for peer review and publication studies that may say nothing new, absolutely nothing new, but that represent an advance in evidence: for instance, a replication
study that replicates successfully or not highly influential previously published paper or a study that due to its scale and rigor can provide a definitive either confirmatory or disconfirmatory answer for research question we've published a number of papers that are evidence advances rather than saying discoveries or adding something new to the literature these are extremely important so we've tried to i'm sorry my my power point isn't cooperating let me just give it one minute but to bridge to the following slide we have tried to align our editorial criteria with what we believe is good for science so providing the right incentives for scientists to do throughout the publication process just give me a minute there we go sorry it's being a little bit obstinate today so we've redefined what constitutes a significant scientific advance and we've designed our editorial criteria to provide that alignment and we're going to step to adopt the attitude that david started his presentation with we firmly believe that if the question is important and the methods are robust the answer will be important no matter what it is and that's the reason why we publish and can't publish studies reporting non findings and it's a key reason why we have adopted registered reports more broadly registered reports for us represent a key way of addressing issues both of questionable research practices but also publication bias um you may have already heard that i've read that in 1979 rosenthal in an article he coined the the file drawer problem the term the fire file problem to our problem so do you mind the worst case scenario um which he could imagine as potentially being true is that journals are filled with the five percent of studies that represent type one era well then 95 percent of studies are in the file drawer and non significant we know that this extreme case isn't true but we also know the publication bias is a huge problem upwards of 80 percent according to an article by daniel 
Fanelli a few years ago, upwards of 80 percent of research published across the sciences reports significant positive results in support of the hypothesis, and we know that is definitely not what the full spectrum of results, including what's in the file drawer, represents. So we strongly believe that we must shift the focus to the questions and the methods: the answer will be important if the question is important and you have the best possible methods to address it.

As David explained, the registered report format is not suitable for every type of research. It's suitable for confirmatory research, for research that aims to test hypotheses and is driven by earlier research and theorizing. It's not suitable for exploratory research, which is extremely important and which we very much welcome, but the issues the format aims to address are limited to how we conduct confirmatory research rather than exploratory research. Currently we only consider registered reports that intend to collect data, so the data doesn't yet exist, but we are working on revising our policies to extend the types of research we consider to secondary data as well as meta-analyses.

A key concern with registered reports is that they limit creativity: that they limit being able to follow up the serendipitous finding you didn't expect, the one that changes entirely how you theorize and hypothesize about a phenomenon. To address this, we allow incremental registration. If, after in-principle acceptance, the authors go off, do their study, and come up with a result they didn't expect that changes their thinking entirely about the question they're pursuing, they can submit an incremental second registration, which we will do everything we can to fast-track through peer review, and they can then go off, do the additional work, and have a meaningful project.

Currently we also require that authors, at the time of stage one submission, have already received ethical approval and funding for their research project. However, we are very keen, and actually starting, to collaborate with funders, adopting option one in David's presentation, on initiatives that merge grant and journal peer review. More on that a little later, and you also have a third webinar to look forward to on that topic. Also at the time of submission of their stage one protocol, and again if it is accepted in principle, authors must agree to deposit the protocol in a repository so that it's a matter of public record, not just hidden within a journal. It will be accessible publicly, either at the time it is deposited, if the authors have no problem with that, or under embargo until the stage two paper is submitted. We also mandate open data, open code, and open materials. We will consider and discuss cases where privacy or other ethical concerns prevent open sharing; in those cases we will negotiate solutions that allow access even though sharing may not be public.

Now, as I said, we launched in January 2017 as a new journal, and registered reports are a relatively new format. To date we have received 80 stage one registered report submissions. That represents a small fraction of the research articles submitted to us; however, going from year one to year two there has been a steady increase in the number of stage one protocols submitted to us, as well as questions from the community, from researchers who have become aware of the format and would like to learn more about how to go about submitting a proposal. Although at the beginning we received mostly submissions from psychology and neuroscience, we now receive a broader spectrum of protocols, both from the biological sciences, for instance genetics, and from the social sciences, for instance economics and political science.

David walked you through the core criteria of evaluation for stage one and stage two submissions, and I won't add much to that
except to say that these evaluation criteria can be adapted to different journals. Registered reports are suitable for non-selective journals, journals that will publish all research that is robust regardless of whether the question is deemed important, all the way to highly selective journals, where there are additional criteria in terms of scientific advance, relevance, and interest to more than one discipline. We are at that end of the spectrum, and we have modified our evaluation criteria to fit the aims of the journal. So we ask that stage one submissions must ask an important question and also be relevant to a broad, multidisciplinary audience. We are also looking for substantial projects that can provide a comprehensive answer to a research question, and we ask for preliminary evidence; that is an additional criterion we've added that other journals don't have. We also aim for a higher level of evidence than is usually required: the majority of journals that have adopted the format require statistical power for frequentist tests of at least 80 percent, whereas we ask for 90 percent. Other journals have different requirements, so journals can adjust the criteria according to their selectivity, or non-selectivity, beyond robustness.

One principle that is common across all the journals that have adopted the format is that everything methodological, the research design, the conceptual hypotheses, the predictions, and the analysis plan, must be in place, and a decision for acceptance must be made in principle before the data has been collected or accessed. We all agree to accept results-blind.

At Nature Human Behaviour, the outcome for a stage one submission may be either to reject the paper or to request a revision. Usually, stage one submissions to the journal are rejected, either editorially or after peer review, because the research question does not meet our criteria of significance; because the question may be important but of interest only to a specialist audience; because the study or studies cannot address the research question; or because the project is exploratory, and hence the format is simply not suitable.

When we invite a revision, the authors usually have to address virtually all of the points I've put down here. You can't imagine how much nurturing happens during the peer review of stage one registered reports. The work that comes out is sometimes virtually unrecognizable: it's still the research question the authors wanted to address, but the design is optimized, the analysis plan is optimized, the hypotheses become clearer and are linked to specific predictions, and the authors tackle the hard issues of robustness, positive controls, and the need to properly specify their samples depending on what their design is. So the peer review process transforms stage one submissions. Usually submissions go through two to three rounds of review before they receive in-principle acceptance, and our reviewers are subject matter experts; the panel of reviewers for every submission will involve expertise covering these three areas.

As David mentioned, once we accept a protocol in principle, the authors deposit it, go off, do their research, write the whole thing up, and come back to us for evaluation. We as editors take a look at whether the authors did what they said they would do and whether the introduction, the rationale, and the hypotheses are the same as the ones we approved; they have to be verbatim, except for minor changes in tense. The first part of the paper must be what we accepted. If all is good, we send the paper out to review. If we notice that there are deviations, then we discuss the possibility of rejection after examining what deviations happened and why, and we may or may not consult the reviewers, depending on the project. Our reviewers at that
point only check whether the authors did what they said they would do; whether the controls and quality checks all worked as intended, that is, whether the research is valid and does test what the authors intended to test; whether they changed anything; whether they have clearly distinguished their pre-registered analyses from any exploratory analyses; whether the exploratory analyses make sense; and whether the conclusions are meaningful and justified given what the authors found. As David mentioned, it's hard for reviewers to hold back and not say, "do this additional exploratory analysis and chase down the reason why the outcome is null, or why it's negative even though it was hypothesized to be positive." Authors are not obligated to do any of that. They are obligated to report what they said they would, plus any other analysis that is necessary to support their conclusions. We overrule reviewer requests that demand additional sets of analyses, or that raise novelty concerns or other concerns about the interpretability of the results; these do not enter at this stage. So, assuming the stage one protocol was strictly adhered to, meaning the authors did what they said they would do, their positive controls and quality checks confirm the study is valid, their pre-registered analyses were all performed the way they said they would be and are distinguished from exploratory analyses, and their conclusions make sense, then we accept the stage two submission.

We're actually only now at the point of accepting our first two registered reports. It will take a few weeks, but both of these papers encapsulate these hopes. I'm afraid that for reasons of confidentiality I can't say anything more at this stage, because the papers are not yet in the public domain; however, very soon you will be able to see in-print examples of what I've been talking about.

So registered reports represent a radically different approach to research for authors. The mainstream approach now is: you have a rather vague idea of how theory A and theory B yield some contrasting hypotheses; you go off and do an experiment, getting more or less the number of subjects that previous studies did; you data-peek, analyzing your data at a number of points to see whether the results are significant; if they're significant, you stop testing, and if they're not, you scrap the study, go back, and try again. For a very long time it wasn't clear to the research community, and it may still be unclear to many researchers, that all these practices invalidate the research: the results are simply no longer credible. These standard approaches are not necessarily consciously adopted, but they do render the results of research simply fiction.

So researchers need to change their frame of mind, and it's something that they actually love to do. Our experience with authors is that they genuinely appreciate the process that the registered report format takes them through. They focus much more on rigor of design than on stressing about whether the results will be significant; they focus more on issues of power, positive controls, and the interpretability of the research than on what comes out at the end.

It's also a very different approach to peer review for reviewers: a much more collaborative process. Reviewing can be a thankless task, especially for full-length manuscripts. You get a paper as a reviewer, and you accept to review it because you read the abstract, loved the question, and want to read more. But as you read on, you find that, oh, this should have been done differently, and the research doesn't quite answer the question. Of course that's a major flaw, so
all you can do is provide some critical feedback. For registered reports, reviewers have a much more active role in shaping the research project, and their feedback has a central role in actually making a protocol as good as it can be. Given the shortcomings of the peer review process itself, which could be a matter for a totally different series of webinars, but to the best of peer review's abilities, you get it right before the project starts collecting or analyzing data. So reviewers shape the project, and it's something they actually enjoy more, at least judging by the feedback we're getting from our reviewers. As such, reviewers also need to change their frame of mind in terms of how much foresight they need to have, absent the results: what needs to change so that the design is as good and in as strong a place as possible before being executed. A tough aspect is refraining, at the stage two submission, from recommending exploratory analyses that could take the research in any number of directions, or from commenting on whether the results are informative when all the controls demonstrated that the study was sound. It's difficult to move away from focusing on the results rather than on the question and the process. However, the reviewers, the self-selected sample who agreed to review for us, are actually very willing to work on this and enjoy the process of learning how to peer review registered reports.

Now, the benefits. I strongly believe there are substantial benefits to adopting and supporting registered reports for journals, for scientists, for research communities collectively, and for funders. As a funder you have limited funds, and you want those funds to offer the maximum return on your investment; you want to fund a grant that, at the end of it, actually gives an answer to what the grant was designed to address. In the standard publication model, peer review happens too late. What we see, and it's a difficult process, as I said, is that a substantial portion of the manuscripts we reject after peer review are rejected because of fundamental flaws, flaws that could have been avoided if the peer review had happened earlier. This means thousands or hundreds of thousands of dollars, euros, pounds, whatever the currency, wasted on a project that, perhaps even after two or three years of investment from the researchers, turns out to be fundamentally flawed. With the registered reports model, peer review happens when it can make the most difference to the research resource investment: when the researchers are conceptualizing, designing, and making methodological and analytical decisions. All of these aspects can be identified and corrected before going off and collecting data. So if, as funders, you make a single decision in 2019 to support one initiative, I'd say there's very, very good reason to make that registered reports: you get maximum return on investment. As I said, we're very interested in partnering with funders, either on a combined grant and journal peer review process or in promoting initiatives around registered reports and pre-registration, so please do feel free to follow up with me; I've provided my details here. Thank you for listening.

Thank you, Stavroula, for giving that insight and perspective on how Nature Human Behaviour sees us moving forward. I wholeheartedly agree with that ending note: if you can do one thing this year, engaging with the registered report format would be the way to go. We'll leave it open for a few more minutes if there are questions. Can we give a brief teaser for what the third in the series might cover?

Yes, thanks for asking. That's going to cover the roughly five partnerships I'm aware of where a research funder and a journal have partnered to decide jointly whether or not the work
should be granted funds and granted an in-principle acceptance by the journal. We'll be giving an overview of that workflow, the additional efficiency benefits it provides, and a couple of examples, from both the granting agencies' perspective and the journals' perspective, of what they've seen from the process so far. We've also seen some preliminary evidence on the effects of those types of partnerships, so we'll be sharing what we see from that. Expect that later in April or May.

Which journals or agencies, if it's possible to mention them? If you go to the Registered Reports website, cos.io/rr, and I'll share my screen and get you there right now, let me show you how to find that information. So this is the Registered Reports website, cos.io/rr, and here are a couple of links to the existing partnerships; as more are announced we'll put up links to them. Cancer Research UK has worked with the journal Nicotine and Tobacco Research, and the Children's Tumor Foundation is working with PLOS ONE. I think they're in their second round; we've seen three studies from that project get funded, and their registry is here on the OSF, where you can see the currently funded projects. I believe their second round is underway right now; they have a call for it on the Children's Tumor Foundation website. But these are partnerships between the journal and the funding agency.

Thank you everyone for participating. Thank you, Jason and Stavroula, for your time and your help getting this off the ground, and we will follow up with resources from this webinar. Thank you, everyone. Thank you; a pleasure.
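As an aside on the 90 percent power requirement mentioned in the talk: the gap between the common 80 percent convention and a 90 percent bar can be made concrete with a back-of-the-envelope sample size calculation. The sketch below is my own illustration, not the journal's actual procedure; it uses the standard normal-approximation formula for a two-sided, two-sample comparison, and the effect size d = 0.5 is an arbitrary example value.

```python
# Approximate per-group sample size for a two-sided two-sample test,
# using the normal approximation: n ~= 2 * ((z_alpha + z_power) / d)^2.
# Illustrative only; real designs should use an exact power analysis tool.
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Per-group n needed to detect `effect_size` at the given alpha and power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_power = z.inv_cdf(power)          # quantile for the desired power
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

n80 = n_per_group(0.5, power=0.80)
n90 = n_per_group(0.5, power=0.90)
print(n80, n90)  # roughly 63 vs 85 per group, about a third more participants
```

So moving from 80 to 90 percent power is not a cosmetic change: at this effect size it asks for roughly a third more participants per group, which is part of why the journal frames it as a higher evidence bar.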
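The claim earlier in the talk that data peeking "renders the results of research simply fiction" can also be illustrated with a short Monte Carlo simulation. This is my own sketch, not material from the webinar: the null hypothesis is true throughout (all data drawn from N(0, 1)), yet testing after every batch of observations and stopping at the first p < .05 inflates the false positive rate well beyond the nominal 5 percent.

```python
# Simulate "optional stopping": peek at the data after every batch and stop
# as soon as the test is significant. Under a true null, this inflates the
# false positive rate relative to a single fixed-sample-size test.
import random
from math import sqrt
from statistics import NormalDist

def p_value(sample: list) -> float:
    """Two-sided one-sample z-test against mu = 0, known sd = 1."""
    z = (sum(sample) / len(sample)) * sqrt(len(sample))
    return 2 * (1 - NormalDist().cdf(abs(z)))

def false_positive_rate(runs: int, peeks: int, batch: int, seed: int = 0) -> float:
    """Fraction of null-only experiments that ever reach p < .05 while peeking."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(runs):
        sample = []
        for _ in range(peeks):  # collect a batch, then look at the data...
            sample += [rng.gauss(0, 1) for _ in range(batch)]
            if p_value(sample) < 0.05:  # ...and stop as soon as it "works"
                hits += 1
                break
    return hits / runs

print(false_positive_rate(runs=2000, peeks=10, batch=10))   # well above 0.05
print(false_positive_rate(runs=2000, peeks=1, batch=100))   # single fixed-n test: near 0.05
```

With ten peeks the simulated error rate lands several times above the nominal level, even though every dataset is pure noise, which is exactly why registered reports require the full analysis and stopping plan before any data exist.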