Okay, welcome everyone. We'll go ahead and get started. Welcome to the first, hopefully, of a regularly occurring series of Q&As for registered reports, which is of course the publishing model where peer review occurs before results are known and results are published regardless of outcome. With us today: I'm David Miller from the Center for Open Science, and Chris Chambers is Professor of Neuroscience at Cardiff University and chair of the registered reports committee. Chris, can you give a little explanation of why we're doing this?

Very good. So welcome, everybody. This is like registered reports for registered reports nerds. Over the last six years I've given perhaps 200 talks on registered reports, and one of the common features of most of those talks has been that there's never enough time for Q&A at the end. At one particular talk, the Q&A lasted for 90 minutes after the presentation. So what we thought we would try is a format dedicated to just the Q&A session of a talk, which means I won't be giving any introduction to registered reports themselves. This is for people who are already familiar with the format, and indeed, if you go to the COS website or my talks page on the OSF, you can find plenty of slides introducing the format. Instead, today we thought we would, as I say, focus on Q&A, and in preparation for this webinar we sourced questions from you, from the community. Some of them came from Twitter.
Some, I think, were sent directly to David. We're going to work through them one by one and talk about the issues surrounding them. As David says, this is all recorded, so you can come back to it later if you missed anything, and the slides will be going up online as well. So I think that's enough for a very brief intro to why we're doing this. As I say, this is for the nerds: for people who are already in the process, maybe thinking about launching registered reports at their journal, maybe in the middle of writing a registered report right now. I'm always happy to get really specific questions, like "reviewer one asked for this; how do I handle it in a registered report?" I looked at the numbers today: I've edited 196 registered reports so far, across seven journals, in the last six years, so I've seen most variants of the process: the challenges, the opportunities, the things that go right and the things that go wrong. So without further ado, I suggest we move on to the questions. And just to say, if you have a question during the webinar, jump in and ask it. I think David's going to be watching the bar on the side here, so we can catch those questions as they come up, and we'll be keeping a close eye on that as we go along.

So, to our first question, which I think is from a journal editor who is thinking of implementing registered reports. They ask, from the editor's point of view: how do you manage a registered report with OJS (that's Open Journal Systems, an open-source peer review and manuscript handling system) or another stock peer review management platform? Is the second stage a separate submission? How do you count time from submission to decision or publication? The answer to this is: it really depends.
There are lots of different ways of doing this. When we launched registered reports back in 2013 at Cortex, which is an Elsevier journal, we kind of jury-rigged it so that, basically, we didn't change anything about the workflow. We simply added some new guidelines on the website, which created a new article type with the same workflow as a regular research article but different instructions to reviewers and different template decision letters to authors. What we did, and actually we still do this, is process the stage one submission as a regular manuscript, and when it gets to the point of in-principle acceptance, we actually reject it, which might sound weird. Technically we reject it on the system, but of course we tell the authors that they have their IPA, so everything's tickety-boo on that front. Then what the authors do is come back at stage two and select "stage two registered report" from the drop-down list. So the idea there is that there are two separate workflows, one for stage one and one for stage two. The publisher at that time was quite keen on this because they were reluctant to incorporate the time that the authors took to actually do the research, i.e. between IPA and stage two submission, within their publication times; they felt that would make registered reports look extremely slow and would skew all of the various statistics that they use as KPIs. I don't really care about those issues, but this is what publishers care about, so you have to work with them. In that case, as I say, we had two different workflows, and in terms of counting the time from submission to decision, it's simply a matter of doing that within each of those workflows, as you would for a regular article. Now, that's how Elsevier do it.
At least, that's how they do it for Cortex. There are other ways too. At Royal Society Open Science, for example, we use ScholarOne, and with ScholarOne there's a more integrated workflow where there's one pipeline for the entire stage one to stage two process. That's preferable if you have that facility within your management platform, because it's just better to have it all in one audit trail. It makes inviting the reviewers back easier, because you don't have to search for them again on the system; they're just there waiting for you. There are little time savers like that; it's simply a better trail, and I prefer it. So that's the better way of doing it. But really, even though the structure of the review process is very different from regular peer review, in the sense that you have these two stages with different letters, different criteria, and so on, you can shoehorn it into the old management platform if you need to. And because all these management platforms seem to have been designed in about 1989 and never updated since, they're so antiquated that it can often be quite challenging to go into them and try implementing revisions and innovations; they break very easily. So you may decide as an editor that it's easier just to work with the system you've got, get registered reports up and running, and in the meantime perhaps work on perfecting a more integrated workflow.

The next series of questions came from one single lab and their colleagues, so some of them will be pretty similar. What's the average, rough range, or estimated time for a stage one registered report to go from submission to getting the first round of reviews back? Yes, it's a good question. I don't get this one very often; I usually get asked how long it is before I get my final decision, rather than how long before I get my first round of reviews.
So I was looking up some statistics before joining today, and it's a little bit tricky to tell, because when you submit to a journal, it doesn't always go to the editor immediately. Sometimes there's a buffer period in between, where it goes through the journal admin to be checked for very basic things, and it's not always obvious to the editor how long that process has taken; it usually takes about a week. So let's factor that in and allow a week for the submission to go through some kind of pre-editorial smoke test: does this pass muster at a very, very basic level? Then, at many journals, there's an additional editorial triage stage, where the editor looks at the manuscript in relation to the stage one criteria and asks: does this sufficiently meet each of the stage one review criteria that it's not going to catch fire when I send it out for in-depth review? If it's close enough to meeting those criteria, the editor will probably push it through to review. If it's falling significantly short in any one of those areas, for example insufficient methodological detail or insufficient linking between hypotheses and analysis plan, it often gets desk rejected. So the fastest response you can get back is often a desk rejection; that's the one I send at least half the time, when a registered report comes in and I can see that it's not going to do well in peer review until issues X, Y, and Z are addressed. What I do there is typically desk reject within about a week and ask the authors to make various revisions in order for the manuscript to proceed further to in-depth review. So that process overall can take one to two weeks to get past the desk stage. If you go into in-depth review, I ask reviewers to turn around registered reports, at least at the journals I edit for, within 14 days. It rarely takes only 14 days; it usually takes longer.
It takes time to find reviewers, time for reviewers to accept review requests, and then, once they do accept, they're often delayed, and so on. So I would say the average, once you go into in-depth review, is probably about four weeks, maybe five, to get your first set of reviews back, allowing some time for editors to look at it and make a decision. In total, if you go smoothly through the process, read the guidelines very carefully, and avoid the obvious pitfalls (my top 10 tips for how to avoid getting desk rejected, if you follow those closely), you're probably looking at about five to six weeks for your first set of reviews back, on average, at the journals I edit for. Now, it's not always that quick at other journals. I know some journals are slower; some journals like to get more reviews in; different journals have their different processes, and I don't know all of the information at all the journals. I edit at seven, soon to be six, journals out of 208, so there's a lot that I don't cover. But typically, allow yourself about a month to a month and a half to go through that process. The fastest I've ever had was a few days, when the reviewers were just lightning speed and I was able to process it quickly and feel really good about myself. The slowest I've had was 12 weeks, and that was because of reviewers being very slow: saying they'll review, asking for delays, not being contactable, having to find other reviewers, and so on. So there is a range, but on average you'll probably fall within about one to one and a half months.

What's the most time-consuming thing that can be asked for in that first round of reviews? I think the most time-consuming
So this can happen where the reviewers are not convinced about the feasibility of the of the protocol for some reason or maybe they're not convinced that the rationale is all in place They may not be convinced that positive controls or manipulation checks have been sufficiently established There are various weak points which can be addressed often by doing preliminary studies And and that can be the most time-consuming thing because obviously the authors then have a choice They can either you know try somewhere else or abandon doing a registered portal together Or they can go away and actually do the preliminary studies and that can take as long as it takes them to do Aside from that if those issues all check out And often, you know, we get submissions where authors come in with pilot data already So they've already addressed those issues before they come to us The next I suppose most time-consuming element would be just addressing all of those criteria You know so linkages between hypotheses analysis sampling plan making sure it's a really really clear one of my Top recommendations for authors is that you always include a table in your stage one registered report Which includes your hypothesis stated in terms of specific variables your your sampling plan your analysis plan and your contingent interpretation depending on different outcomes of that test and Make sure it's all a really clear chain because that can that can avoid a lot of unnecessary Discussion about what's unclear, you know, I read the reviewers often say I don't Understand clearly the link between this hypothesis and this set of analyses And I think authors if they come in without having thought those things through they can go away and realize that in their own mind They weren't even clear and that can Elongate the process longer than it needs to be So just making sure again It comes back to those top 10 tips that are in my in my guide try and really nail those those issues Make sure you've 
absolutely ticked them off. Then, if you've got your pilot data (if you need it) all in place and all of the other elements are there, it should be relatively quick.

Another similar question: what's the average and rough range of time for a stage one registered report to go from revisions to acceptance? That's usually pretty quick. If you get to the revision stage of a registered report, you're almost certain to get IPA eventually, at least at the journals I edit for. I shouldn't say that too generally, because as I say there are 190-something journals where I don't edit, but for those I edit for, you have a pretty good chance of getting in, because you've already met most of the criteria to even reach that point. There are two ways this can happen. Sometimes a revision comes in and it's so good that I don't need to send it back to the reviewers, and you get an acceptance within days, sometimes hours. More often than not, though, when there are major revisions, it does have to go back to the reviewers, and that round is usually a bit quicker than the first, because they don't have to read the whole thing again. There's also a very high acceptance rate among the reviewers; when I say acceptance rate, I mean acceptance of the invitation to review, because the reviewers have seen the manuscript already and they're invested in the process. I would say typically you're looking at about three to four weeks from the time you resubmit. When you add all of that up, you're looking at around three months to go through the entire stage one process. On top of that, take into account the time you take as an author: you're going to have to go away and think about issues, so add that as well and make sure you factor it into your timeline.
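The "design table" recommended above, linking each hypothesis to its sampling plan, analysis plan, and contingent interpretation, can be sketched in miniature. This is a hypothetical illustration only: the study, measures, and numbers below are placeholders invented for the example, not from any real submission.

```python
# A minimal sketch of a stage one "design table": one row per hypothesis,
# each row forming a clear chain from hypothesis to contingent interpretation.
# All study details here are hypothetical placeholders.

design_table = [
    {
        "hypothesis": "H1: mean reaction time (ms) is lower after caffeine than placebo",
        "sampling_plan": "N = 120 (95% power to detect d = 0.35 at alpha = .02)",
        "analysis_plan": "One-tailed Welch t-test on mean reaction time",
        "interpretation_given_outcomes": (
            "p < .02: support for H1; "
            "p >= .02 with effect inside equivalence bounds: evidence against H1"
        ),
    },
]

def render(table):
    """Render each row as labelled lines so the chain is easy to audit."""
    lines = []
    for i, row in enumerate(table, start=1):
        lines.append(f"Hypothesis {i}")
        for key, value in row.items():
            lines.append(f"  {key.replace('_', ' ')}: {value}")
    return "\n".join(lines)

print(render(design_table))
```

In a real submission this would simply be a table in the manuscript; the point of the structure is that a reviewer can read across one row and see the entire chain from hypothesis to interpretation without hunting through the text.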
I typically ask authors to consider a four-month window as representative of the process.

That gets at one of the frequently asked questions we hear a lot, even though it wasn't specifically submitted: how much is this going to delay my project getting started? Oftentimes you're eager to get started, and this does, by design, take more review, focus, and evaluation right up front, for very good reasons. But that is part of the take-home message.

Remember also, as you say, that time comes back to you in the end. It's like putting money in the bank, because when you get your IPA, you know you're pretty much not going to get rejected at stage two. So you're in a really strong position to get that research published quickly. You're investing time up front, but you're really getting it back at the end, because you're not going to have to go from one journal to the next to the next.

Is there data available on the length of time to go through this process, in general, by journal, or by subfield? This data doesn't exist in any organized form at the moment; it's mostly me talking. Anne Scheel, who works with Daniël Lakens in Eindhoven, has data on this for Cortex that I sent her, and she's analyzing it at the moment, and I suspect there are more such studies in the pipeline looking at how long it takes. Once that data is out there, we can start carving it up by genre, specialty, journal, and so on. I actually think this is really important information, because one of the key deciding factors for an author as to where they submit could very well be: how fast is this journal? And I think journals should be transparent about that process as much as possible. One of the challenges, I should say, from an editorial point of view, is that it's quite tricky to actually calculate these times, again
because the manuscript handling software was all designed in the 90s. It dumps out really uninformative statistics for an editor, which don't capture the information you want at the granularity you want, so we often find ourselves having to do this manually, which is not good and very time-consuming. I think that's one reason this data isn't more available, but I hope it will become so. As I say, the overall time frame you should allow is on the order of three to four months to go through the entire stage one process. We don't really know how that differs by journal or subfield yet.

If you're designing an intervention study where there's no data collected during the intervention, only before and after it, how specified does the intervention itself need to be? This is a good question, and it's interesting because a lot of registered reports actually fall into this category. It's quite common that there's no data collected during the intervention if it's an independent variable; it's just something that is done to somebody, or to an animal, or whatever, and then you measure before and after. This is typical experimental design, and so the answer is: it needs to be fully specified, because it's crucial that every element of a registered report is reproducible. That's one of the fundamental principles of registered reports in general, and I would argue it should be the case for all publishing. But the thing you learn as an editor, a reviewer, or an author within the world of registered reports is that to properly assess a stage one manuscript under all the criteria we assess them by, you really need that level of detail in there. So I would always recommend including more information rather than less. If you're not sure whether something should go in, put it in. If you think it really disrupts the flow of the manuscript to have all that detail, put it in
supplementary information, but still put it in.

A big one right here. They're describing a two-phase project. Phase two is suitable for the registered report, but the measures employed will depend on which measures in phase one (a separate study) are significant predictors of performance. "In order to submit phase two as a registered report and still be within the timeline of the grant, we would need to submit at stage one at a point at which we know what potential measures we will use, and we know the criteria for determining which measures to include, but we don't yet have the phase one data to determine which ones those are. Would we be able to list the potential measures that we will include and state exactly how we will determine which to include and why?" The short answer is yes. In this situation you could do what we call a contingent design, where you would actually submit the registered report now. Phase one you could either build into the protocol for stage one review, or you could keep it separate, not part of the actual protocol but separately pre-registered; maybe it's already underway, which is fine. Either way, you could build that process into the phase two protocol, which forms the core of the registered report. You have options.
So there's presumably a finite number of measures that could go into phase two, and you must already have in your mind a series of decisions that you're going to make in the future: if this, then I will do that; if that, then I'll do this. If you can operationalize it with that level of granularity, then the best approach is simply to build that decision tree into your registered report. When you get your IPA, you can just follow the tree and see what happens, and when you come back with your phase two data, you will have completed one path through that tree. So it's very doable. We don't get a lot of submissions that propose that kind of thing; usually they have a more formulated, hardwired sort of protocol, but there's absolutely nothing stopping you from doing it. Indeed, where we tend to see this more often is with analysis plans, where someone will say: depending on the distribution of the data, we might do this type of analysis or that type of analysis. You tend to see that kind of contingency built more into the analysis side, but it's just as doable at the design stage, provided your phase one is sufficiently developed that you know what you're going to be putting into that contingency. You need some kind of bedrock underneath it which will enable you to build that contingency table, and the reviewers will have to be able to see that it's being done in a rigorous way, to convince the skeptical reviewer that the decision is not going to be made in any sort of biased way. That's right.
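The decision tree described above can be sketched as a pre-specified rule: every branch that could be taken after phase one is written down before any data exist. The candidate measures, the selection rule (significant predictors of performance), and the threshold below are hypothetical placeholders for illustration, not a recommendation of any particular analysis.

```python
# A sketch of a pre-registered contingent design. Each possible phase one
# outcome maps to a pre-specified phase two plan; running the study just
# means following one path through the tree.

CANDIDATE_MEASURES = ["working_memory", "processing_speed", "vocabulary"]
ALPHA = 0.05  # hypothetical pre-registered threshold for "significant predictor"

def select_phase_two_measures(phase_one_p_values):
    """Follow the decision tree once phase one data arrive.

    phase_one_p_values maps each candidate measure to the p-value of its
    regression on the phase one performance outcome.
    """
    selected = [m for m in CANDIDATE_MEASURES
                if phase_one_p_values.get(m, 1.0) < ALPHA]
    if not selected:
        # The awkward outcome (no predictors) must also have a pre-specified branch.
        return {"branch": "no_predictors", "measures": ["working_memory"]}
    return {"branch": "predictors_found", "measures": selected}

# Completing one path through the tree with (hypothetical) phase one results:
outcome = select_phase_two_measures(
    {"working_memory": 0.01, "processing_speed": 0.40, "vocabulary": 0.03}
)
print(outcome)  # {'branch': 'predictors_found', 'measures': ['working_memory', 'vocabulary']}
```

The point reviewers would be checking is exactly what this sketch makes explicit: the rule is exhaustive (every possible phase one outcome lands on some branch) and contains no free choices left to the authors after the data are in.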
And one of the things the reviewers might do is start critiquing the phase one element. So it's almost advantageous in this situation to submit phase one and phase two together as one registered report and revise them at the same time, so that you can get your phase one element approved as you're also updating your contingency. Do them together, because if you do, you avoid the problem of having started your phase one with no turning back, only for some reviewer to say: fatal flaw, the contingency you've put in place doesn't make any sense, therefore this must be rejected. That would be a situation where an editor might be left with no choice but to reject outright, because there's now an element of the design which is fundamentally unrevisable.

All right. This one is about scientific culture and how we disseminate and publish surprising findings. What's the ideal vision for balancing a breaking-news finding that deserves lots of attention but, by its very nature, is less likely to be confirmed in later studies, against very exploratory, preliminary findings? The current system encourages hyping of those findings, and one manifestation of that is high-impact-factor journals. But if that is to be diminished, how will we attract attention to those potentially important findings that deserve it in the research community? Is it simply a matter of resetting our expectations? I think it is. This touches on a lot of issues about incentives, newsworthiness, and journal rank. There's a lot to unpack in this question, and because of that, you're going to find a lot of views on it, and there's no reason why my view should carry any particular weight.
However, one thing I will say first of all is that registered reports do not eliminate journal rank. Journal rank will always be there so long as there are journals, because that's human nature. If we eliminate journals, of course, we eliminate journal rank, but as long as we have journals, people will find some journals more prestigious than others; prestige is something that's baked into academic culture at the moment. But I do also think that resetting expectations is important here, about what justifies calling a piece of scientific research newsworthy in the first place. At the moment, newsworthiness, at least in the vision of this question and more generally as well, is thought to be demonstrated based on outcomes: you meet some minimum standard in terms of methodology and question, you get an amazing outcome, and it's newsworthy. Registered reports shake this up, because they say that what makes a piece of research valuable, important, high quality, even newsworthy, is when you might get an amazing finding but can also show that it's not the product of reporting bias. We don't take that into account enough, in my opinion. What we need to reset expectations about is how newsworthy a finding ought to be when it's proven, or at least suspected, to be the product of reporting bias, and I would suggest that in those situations it shouldn't be that newsworthy. At one point in a talk I gave in London on open science for controversial research, I presented a kind of pyramid, a hierarchy of evidence, that I think makes sense in that context: typical status-quo research, particularly in a controversial field, is in my opinion rarely deserving of newsworthiness. What deserves the newsworthy label is really a result that's been replicated, meta-analyzed, shown to be reliable and demonstrable. That said, I can say what should happen till the cows come home;
it's not necessarily going to happen. I think we have to accept that newsworthiness is still going to be a factor, and some outcomes will always generate more news than others. The best thing we can do with registered reports is treat all such papers the same at the level of the journal. The scientific record is not the newspaper. One of the mistakes we allow in science is conflating the scientific record with news reporting. They are different, and I think registered reports help to separate those two worlds. Finally, registered reports are not designed for the purely exploratory science that often leads to findings that generate a huge amount of impact: "I found this amazing, unexpected thing; I didn't even have a study going, I just saw something." That type of research is always going to be there. It's always important, always valuable, and it will always be newsworthy. The question is how much weight you put on it until it's been verified, and that's the balancing act, the rebalancing, I suppose, of science that registered reports are trying to establish.

I know Cortex and a few others are specifically inviting submissions where authors label results as exploratory, with very little expectation about replicability, specifically designed for completely unanticipated findings. Yes, we have a format at Cortex, as David says, called exploratory reports, which is the kind of yin to the registered reports yang. This format is all about generating hypotheses from data. There's no pre-registration, and we discourage inferential statistics of really any kind, although authors can report them if they want. We try to steer authors away from making conclusions about what they think is real, and toward using the data to generate questions that can be answered in future studies using hypothesis-testing methods, so
completing that deductive cycle.

I also think this question speaks to the potential strength of preprints. Authors, reviewers, even journalists often preface findings reported in a preprint with "this has not yet been peer reviewed," taking peer review as kind of the gold standard of scientific credibility. It would be great to harness that skepticism a little: unconfirmed or unregistered findings can be quickly and easily disseminated via preprints while awaiting the higher level of rigor and evidence provided by the full registered report format. Yes, I think so. Certainly, tagging articles as to whether they've been replicated, going backwards through the scientific record, asking whether a result has been replicated, and then badging that onto previous papers, would be a really neat thing. Unfortunately, and I keep coming back to this, the way journals work, the way the scientific record is accumulated, is again stuck in the 1990s: you get this fixed PDF record, which is very difficult to update. So we need to really think about the future of publishing and of communicating science if we're going to develop those kinds of innovations.

This one is paraphrased from a series of tweeted questions. The questioner submitted a registered report, and in the first round of review, reviewers requested a manipulation check. Fair enough: if you don't know, that's just a way to demonstrate that the proposed manipulation had the intended effect, or that it was done in a competent manner. Now, in the second round of review, interestingly, reviewers are asking for a pre-specified plan as to what they will do if the manipulation checks don't pan out. They're finding this a harder question, obviously, than it seemed at first. I mean, if the manipulation checks don't work out, you have to pretty much go home, right?
There's not much point running the main analyses if you believe you didn't even successfully manipulate the thing you were trying to manipulate. So I guess there has to be a middle way. Perhaps a plan could be: if the manipulation check fails, we'll still run some planned analyses, but purely for exploratory reasons, and we will not consider the results as supporting our theories. What else do you have here? This is a good question, and one that comes up a lot with registered reports, because manipulation checks and positive controls (in clinical research referred to as intervention fidelity) are the only aspects of the results that could theoretically lead to a rejection at stage two. Not that that actually happens much, but it's possible that if a study was run in such a way that the authors couldn't show the intervention was even applied properly, or something failed in an outcome-neutral check such that the study was almost pointless, then that could in theory lead to a stage two rejection. Now, in reality, you've got to ask yourself as an author what a reviewer is trying to get hold of when they ask you to report manipulation checks, and when they ask you for contingencies about what you will do when they don't work out. They're actually throwing you a lifeline, because remember that failure of a manipulation check at stage two can result in rejection. So if you're in a position where you're able to say what you'll do, you're also in a world where the failure of that check won't lead to rejection. Otherwise they might say nothing, you go ahead and run the study, your manipulation check fails, and you get bounced, which is theoretically possible. So what I suggest in this situation is to think of it like a layered insurance policy. Your manipulation check is your highest-level insurance policy; that's the one where, if it passes, you're absolutely rock solid, because
your intervention, whatever it is, has been shown to have the necessary fidelity and was applied in such a way that the hypotheses you propose are testable. But if it fails, all is not lost. What you then want is another layer of insurance below that: if my manipulation check failed, how would I convince a skeptical reviewer that I ran my study to a high quality? So if I was giving some kind of drug to one group and a placebo to the other, and my manipulation check on some known effect of that drug didn't pan out, how would I convince a reviewer that I actually gave the drugs to the correct groups, in the correct dose, etc.? Think about how you would actually demonstrate that. In some studies we get, that might consist of demonstrating that the signal-to-noise ratio in an EEG study is within a specified margin, or some other demonstration of a lack of floor or ceiling effects. So think of it like a series of layered checks, where the lowest-level one is the one where, if it doesn't pan out, you're not sure the study was run at a level you'd be satisfied to publish anyway. We have never had a case, in all my editing, of the lowest-level check not panning out. What we have had, and it's very interesting, is a couple of cases where manipulation checks have failed but the low-level checks have passed, and what that tends to suggest is that the thing you think is real, the thing your manipulation check is tapping into, may not be as real as you think it is. It could be that the reality check your manipulation check relies on is in fact not as reliable as you thought, and that can happen because we work in a field where there's a lot of unreliable research; we don't have key benchmarks in every area that we can always push against. So it's not a disaster to have a manipulation check fail, but ask yourself, in your heart of hearts, how reliable
Do you believe that manipulation check to be how much do you believe that particular result? Will pan out if the study is wrong correctly if you have if you suspect bias in the literature, which has shown That particular effect before or there's some other reason why you think it might be unreliable again build in the lower level insurance And if you do that then you can that can be your outcome neutral check that Determines or could determine whether your manuscript passes stage one and then later stage two review This is the last one that was Pre-submitted and we've got at least four questions in the queue that have been submitted through our question and answer format So do you think there is a risk of predatory or sub optimal register report journals? And if yes, how could these be policed and there's two possible ways to do this not honoring the in principle acceptance or Giving a poor stage one peer review. So let's answer that in two stages. First of all predatory journals I can't stop predatory journals. Can you David? 
I don't know. Yeah, predatory journals, you know, they're just a thing, and I think we have to just avoid them, because to be honest with you they're so easy to spot most of the time that it should be straightforward. One very basic thing we do, and we haven't encountered this situation yet, but if an obvious predatory journal, like one of those ones that opens its emails to you with "greetings of the day", started offering registered reports, we would probably not add it to the COS list, or at least if we did, we would flag it in some way as potentially predatory. We would do something to keep you informed. So every journal on that list at the Center for Open Science, cos.io/rr, is one we believe is not predatory. If you just go by that guide, that's a fairly easy filter you can apply, in addition to your own judgment. I think most scholars who have got past a certain level in their career know how to avoid predatory journals.

The second one is a little bit more tricky: suboptimal journals. This is a greater risk. This is where the journal itself is respectable, but it's just not doing a good job editing registered reports. When we came up with this format six years ago, I was very careful to make sure that the policy was quite detailed, because I knew that it would probably be cannibalized by a lot of other journals, and if you build all that detail in at ground zero, it will be propagated across the area. And so you will see that a lot of the journals offering registered reports use the text from our original policy. That's good: if they follow that policy, then it should be fairly consistent across the board.

But they may not follow the policy. It is always possible that editors will not honor an IPA, or that they will reject a manuscript at stage one or at stage two for bad reasons. You know, there's one journal that comes to mind which has an implicit policy of always rejecting stage one manuscripts that require any revisions whatsoever, which just seems bonkers to me, when the whole purpose of registered reports is to eliminate bias but also to improve the designs through review. What's the point of going through that process and then just bouncing manuscripts if they're not perfect? Technically that's within the rules: a journal can decide at stage one to reject on those criteria; it doesn't have to allow authors to revise. But I would say that's suboptimal.

So the best way of monitoring all of this is for the community to tell us when it happens. We have some very early-stage plans for a website where researchers could rate journals based on their registered report experience, where they could leave specific feedback about what they liked and didn't like about the review process at that journal. Maybe they could even report how long the process took. You could gather various information from the community, and then you could create a league table. You create some competition between the journals, and all of a sudden you've got a marketplace, and then I think things will become more transparent, and we can address some of those suboptimal elements more transparently.

On the other hand, whilst we've got that sort of stick, there's maybe a carrot in this hand: I think we need to be training editors better in how to do this kind of editing. In a way that's depressing, because to me, editing a registered report is really just editing a regular paper in two stages. So if you can't edit a registered report, I'm not quite sure how you're able to edit a paper with results; that's a little frightening to me. However, it also must be accepted that there are some unique challenges in editing registered reports which some editors are not familiar with, like maybe some of the statistical requirements, maybe some of the deeper thinking about theory and rationale. Maybe they're not used to doing that; maybe they're just used to going straight to the results section and making judgments on that basis. So we need better training materials for registered reports. This is also something that we're in the process of developing, and that could include vignettes and possibly even accreditation that an editor could get, to show that they've passed some kind of test, or series of tests, for properly editing registered reports.

Put all of those things together, and I think you can create an environment where you've got some control over the non-predatory yet suboptimal practices that are almost inevitably going to happen, because as this thing scales up it goes out of my control. I only edit, as I say, at seven journals. It's going to go out of my control, and it's going to be something that the community needs to start monitoring. And I'd say for both of those, information gathering is key. So if you see something going on, if you feel like an IPA wasn't honored, we are happy to advise and see what's possible for either of those situations.

Yeah, you know, I get a lot of email like this. I probably get at least an email every week or two from someone. It's not always complaining about a bad experience, but it's often, you know, "I've got this situation, I don't know what to do, this reviewer asked me to do x, y, z, what's your advice?" And so I do a lot of that sort of stuff through back channels. I think it would be nice, actually, if more of that dialogue were public, because then everyone can learn from it and it can become a knowledge base. So I would encourage you to share your registered reports experiences as publicly as you're able to, even if in some anonymized way, because then others will learn and you'll also learn from their experiences.

We have four submitted questions right now in the chat. We have time for plenty of them; if you have others, anyone listening, feel free to submit some more. But let me go down the list. First: please can you give guidelines on searching the literature to find pre-registered studies? Chris and I can. Well, okay, so if you mean registered reports, then one way you can find them is to go to the Zotero database, and I'm sure David will post a link to that in response. The Center for Open Science curates a database of stage two registered reports that have been published. It's not comprehensive and it is manually updated, so you can't rely on it as the definitive guide, but it's got a lot, and it's growing all the time. So that's good.
So that's one place you can go to find them. You can also just set up a Google Scholar alert for the phrase "registered report" or "registered reports". You'll get a lot of false positives, because more and more people talk about registered reports in their articles, but you will also pick up the ones that are actually called registered reports.

And if you also mean stage one registered reports, rather than the completed, full stage two articles, you can go to the OSF Registries page, and I'm sure David can post a link to that as well, where you can filter for registered report protocols that have been accepted at stage one. This is a really good resource. I send my students to it; on the master's course that I teach, I send the students there too, because it shows you what an accepted stage one submission really looks like. I think that's really useful, and a little bit more useful than a completed stage two article, because the stage two article of course also includes the results and discussion, and there can be minor changes to the front end. To really get a grip on what it takes to get a stage one manuscript accepted, it's really good to look at the accepted stage ones that are out there. As I say, you can find them on the OSF Registries page.

And there are links to all of those on the registered reports website, which I'll send out right now also: cos.io/rr. We'll send links to all of these to all attendees too.

Right. Chris, you mentioned including a table detailing the links between the hypotheses, the analyses, and the possible interpretations. Are there any good examples of that that we can look for?

Um, there are. There are some published examples of that. I can't call a particular study to mind, because my brain doesn't work that well, but there are examples out there that have done this, and you'll see more of them coming through, because it's something I regularly ask authors for now, and so most submissions that I edit end up with this in them. But of course there's a delay between that going through the pipeline and coming out the other end.

That said, you don't really need an example of this; think about it logically. You've got a question, there might be a research question here, and then next to that the hypotheses, listed one by one, maybe one per row. Then the analysis plan. Maybe you've got multiple analyses for each hypothesis, which is okay, in which case: what combination of tests will give you what outcome? Then you might have your power analysis, your sampling plan, for that specific test. Always make sure that your sampling plan is linked to the exact test that will test your hypothesis; a big mistake people often make in a registered report is having that too fluffy. Make sure that's there, and then the contingent interpretation at the end: what will different outcomes mean? You can map this out on a whiteboard. It's one of those things I would encourage everyone to do, whether or not you're doing a registered report; it's just a good process to go through when you're designing an experiment. But you will find examples in the literature; we can probably dig some up after this webinar. Fundamentally there's an element of logic to it.

Yeah, one of the big points is simple clarity: using the same basic words, the variable names that are used in the analyses, as closely as possible in the hypotheses, just to avoid ambiguity. You can always have two columns.
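To make that concrete, here is a minimal sketch of how such a design table could be drafted as structured data before writing the manuscript. All of the study details below (the drug, the test, the sample size, the theoretical account) are hypothetical placeholders, not taken from any real submission:

```python
# A sketch of a registered-report "design table": each row chains one
# hypothesis to its analysis, sampling plan, and contingent interpretations.
# All study details are hypothetical placeholders, not a real submission.
design_table = [
    {
        "question": "Does drug X improve recall relative to placebo?",
        "hypothesis": "H1: mean recall (drug) > mean recall (placebo)",
        "analysis": "One-tailed Welch t-test, alpha = .02",
        "sampling_plan": "n = 86 per group: 90% power at d = 0.5",
        "interpretation_if_supported": "Consistent with the consolidation account",
        "interpretation_if_null": "Any effect is smaller than d = 0.5",
    },
]

def render(table):
    """Render the design table as aligned plain-text rows."""
    lines = []
    for i, row in enumerate(table, 1):
        lines.append(f"Row {i}")
        for key, value in row.items():
            lines.append(f"  {key:28s} {value}")
    return "\n".join(lines)

print(render(design_table))
```

Drafting the plan as data like this makes it easy to check, before submission, that every hypothesis has exactly one linked test, sampling plan, and pair of contingent interpretations.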
I've seen this done also, where you have one column which is the prose description of the hypothesis, and the next cell along is the operationalization of that hypothesis in terms of specific variables, in an equation essentially. And that's really nice, because then you've got an equation, that links to an analysis, that links to a sampling plan, that links to an interpretation. If you can build that chain with such precision and granularity that you could just throw data in and the answer comes out the other end, that's kind of what you're looking for.

All right, next question: what is the usual journal policy about news or media coverage of the study before stage two acceptance? Are you familiar with any of those?

So this is interesting. Presumably the question here is about preprints; this essentially falls into the world of a standard preprint, where results exist, so it's obviously potentially newsworthy, but the stage two hasn't been accepted. So let's assume you have published your stage two manuscript as a preprint,
So let's assume you have published your stage two manuscript as a pre print Which you can always do At least in our field I think virtually every journal and indeed every registry report adopting journal That I know of is pre print friendly So you could always submit your stage two manuscript as a pre print and indeed you can do it all the way through So there's just to go off piece for a second I mean there are some researchers who just put their that ben jones does this he puts his stage one protocol up And he keeps revising it and it's basically stage two and you can just track it through version control all the way through to the end um, it's this I the same it'll be the same policy for media coverage and embargoes for that For a stage two registry report as which will apply to a regular research article that journal the journals embargo policy if they have one Will not discriminate between registry reports and regular articles So for example, if you publish a registry report in nature human behavior You're welcome to publish a pre print if um, you get You probably want to talk I would talk some of the more high impact journals might be a bit twitchy So I would talk to the editor about potential news Exposure that you want to generate before acceptance just to be sure that uh, that's not going to violate their embargo policy more generally Because that could endanger acceptance in a very strange way. I don't think it's ever happened, but you know, it could happen but you know Basically treat it like a regular article look at the journals embargo policy All right a couple of questions about Registry reports and grants and timelines Do you know of any cases where a grant was submitted without a register report in mind? But once awarded the applicants realized that it would be ideal for an rr But the timeline of the grant doesn't accommodate doing it Do you know of anyone who has gone back to the funding body to ask who extend the life? 
Of the grant and the associated costs And of giving a rationale about the benefits of Assuming that they were convincing about that That's a follow-up question, but yeah start there That's what a great question. No is the answer It's a really good it's one of those situations when you put all of those contingencies together the answer is no I've heard of different parts of that being placed, you know, I've had I've got a grant myself which I've submitted which Wasn't going to be a registered report, but ended up being one There's also cases I know of where people want to do registered reports, but the timeline doesn't quite work for them I what I don't know of is a situation where that anyone's gone back to a funding agency to say can I have some more money It's usually quite difficult to do that ever My experience like you tend to have cash limited grants and going back to a fund and saying hey Please may I have some more rules? So like all of the twist doesn't tend to really work That said you never know till you ask and some of these funders Particularly the smaller ones that may be more flexible and not so bogged down in pages and pages of policies and procedures They might they might be open to that I think this is what this question highlights for me is that the importance of registered reports funding model themselves Because they solve all of these problems You know if you can if you could submit your stage one registered report to a journal and to a funder at the same time You can tie all of these processes together and get rid of all of this risk of things running at the time and you know different stage one processes not Not being compatible and so on And this is something that Marcus Monafo and me and others are working on at the moment But in the meantime, I if you if you're in a situation where This is a scenario you're facing I'd be very interested to hear more about that because it's possible that with our collective influence of the c os and and 
Being involved in registered reports that we could always Approach a funder as some kind of team and say You should consider at a very general level This a particular change to your policy which would enable more flexibility For researchers to do registered reports that there's always scope to change policies We're actually anywhere if you've got a good enough case for it And the follow-up question, uh, you just answered it. You know how How could grants or registered reports go together better? And that's exactly where the editorial review and the potential funding review can rely on the same set of expert reviewers and If accepted by both you get the decision that the the work will be funded and published regardless of outcome Simultaneously So anonymous attendee at 3 47 p.m. Nailed that so that is absolutely right and that that is why we created the registered reports funding model So if you go to the c os.io forward slash rr and you go to the four funders tab I think it's called the four funders tab. 
Yes, it is. You will find a list of the current registered report funders, or funding models, that are available. It's a very short list at the moment, in very specific fields, but it's worth having a look just at the style of it. I think we need more of these, and that's something we're certainly looking at understanding more and promoting.

And if I could pass on a sort of anecdote, or series of anecdotes, anecdata: there's a lot of enthusiasm in the research funder community about registration in general and registered reports in particular. They very much want to encourage the process and are working to dot their i's and cross their t's. So if they were to receive a question like "I want to add this process to the workflow", listing a couple of the benefits, well, many of them are quite familiar with it but haven't quite figured out the way to require it, or to make it more of a common thing, so I very much expect that those types of questions would be well received.

Absolutely. Asking for more money is tough, but always ask. I totally agree, always ask, because if lots of people ask for a thing, funders start paying attention, and sometimes one question can lead to a change in policy, a change of thinking. It's possible to change almost any element of this with time and with pressure. So yes, definitely.

All right, Chris. What are your experiences with reservations of journal editors and society officials towards adopting the registered report format for their journals? Any insights as to how best to preempt these and win over the skeptics?

Wow, this is a whole other one-hour webinar. Let's go. Right, that's a good question. This is a question I get a lot from editors, actually, who encounter barriers within their own editorial boards or get blocked higher up. They might be on an editorial board and want to do registered reports, they might even be a chief editor, but then they go to some publications committee of essentially academic bureaucrats who say no, we don't like it, or you don't even know why they don't like it, they just say no.

So why do journals say no? I have a bingo board on this somewhere. If we've got a website for these slides, let's put my bingo board up. There are all kinds of reasons I've had from editors and societies over the years for why they don't want to do registered reports, and I think a lot of them are fallacious. They might say, well, registered reports don't fit all of the research in our field, therefore we're not going to offer them as an option for anyone within our field. Which is really a bit disingenuous, because no article type at any journal ever suits everybody in the field all of the time. To limit an article type on that basis is pretty illogical.

I think perhaps the greatest objection I've come across is really just various forms of publication bias.
So, essentially, particularly higher-impact journals build their brands in large part by selecting what gets published based on the results, and obviously the core of the registered reports model is that it takes away the power of the editor to say which results they prefer to publish over others, which results are worthy of their prestigious journal. If you are a journal whose reputation relies on publication bias, and whose impact factor perhaps relies on publication bias, you'll be less disposed toward offering this format than if you're a journal editor who actually tries their best to select what gets published based on quality, who looks at results, because of course regular articles always have them, but tries to make the best judgment scientifically. Those sorts of journal editors tend to be much more open to the idea.

Now, the other thing is that a lot of editors don't like admitting that publication bias is the reason why they are resistant to registered reports, because they know that's an unpopular opinion and that I might talk about it. So what they often do is use smoke and mirrors, and you get a whole lot of bullshit like "our computer system can't handle it" or "our editors are too busy". I'm not the only editor who's come across these issues; there are some wonderful blog posts written by editors of other journals who go to their chief editors and say, "would you like to do registered reports? I'll do all the work, I'll do all of this," and they get a no for reasons that are very obviously publication bias, but are not phrased in such a way.

How do we preempt these and win the skeptics over? Well, step one: show as much as possible that the impact of a registered report is so great that you don't need results to go a certain way in order for it to do well within the journal. And we actually have the first evidence, from citations, that that's the case. Work by Lily Hummer and colleagues at the COS, I don't know if you're even on that paper, David? No, I'm not; it's your colleagues over there. They have shown that registered reports are cited comparably to regular articles, or even possibly slightly higher, within the journals in which they appear. That's the best evidence you can give to a fussy journal editor who cares about their impact factor and is worried that accepting a whole lot of papers with quite possibly negative results will lead to publishing a whole lot of boring studies that nobody will read. That doesn't seem to be the case based on what we know so far. And in fact, if you go to the COS web page, in amongst the FAQs, you can find a two-page pitch to editors which addresses these questions head-on.

Yeah, and I think that gets at one other thing. The proposal of a registered report is essentially asking the journal to agree that this research question is so important that it deserves to be addressed regardless of outcome, and that's kind of a high hurdle to get over. So the type of work being submitted here is not boring work that we all know the answers to already; it's work on questions that are really burning questions. At least in the past couple of years, there hasn't been a tsunami of boring work. It's been pressing research that deserves the registered report treatment, and it's often high-risk research.

Well, it's interesting. Back in the early days, when we launched this format, people thought the only types of articles you'd get submitted as registered reports were really incremental research or replications, because that's the only time, maybe, that people would be prepared to go down this road. In fact, you get everything. I get a lot of submissions from people proposing quite out-there hypotheses, and it makes perfect sense: if you've got a really important question and a really severe test, a high-risk hypothesis which could turn out null but which, if supported, could have major implications, that can get through the registered reports process fine, as long as the question is important enough and the method is robust enough. So we get a lot of submissions like that, testing risky predictions. What would be very interesting to know is whether the predictions in registered reports are fundamentally more or less risky than in regular articles, and that's a meta-scientific question that I know some people are currently studying. Get on that, everyone!

All right, we are just about at time. I will conclude this first, hopefully of a series, webinar of all Q&A for registered reports a success, and be on the lookout for information coming your way. If you registered for the webinar, we'll send you links to slides and a couple of resources mentioned throughout. Chris, thank you as always for your time, and everyone, thank you for logging on. Thanks for joining us.