All right, good morning and good afternoon, everyone. Welcome to the second in a regular series of Registered Reports Q&A webinars, inspired by the fact that the most engaging and most compressed portion of any talk or webinar about Registered Reports is always the Q&A session at the end. So these are planned as monthly or bimonthly events where all we do is Q&A: questions that have come in beforehand and any live questions from you, the audience, as we're going through. You should have the ability on your screens to submit a question; do so at any time and we'll answer it live. Claire, I think there's also a chat function, so we're monitoring both the chat and the Q&A window. If you see that on your screen, feel free to say hi to the group if you want to start cluttering it up, or if you have questions, use the Q&A box and submit those. Joining us again, thank you, is Chris Chambers from Cardiff University, chair of the Registered Reports committee and Registered Reports editor at, I think, seven journals. And without further ado, welcome Chris, and we can get started. Thanks David, and welcome everybody. As David says, this is our second in this series, and the idea is that we're here to just answer all your questions about anything Registered Reports related. That could be a general question about policy or practice; it could be a specific scenario that you'd like feedback on, perhaps because you're going through the process now as an author and you'd like some advice on what to do next. It could be a concern or criticism you have about the format, or an idea for the future, or you might be an editor thinking of implementing it. There are all kinds of interesting questions that we get routinely. And, as I said when introducing the very first of these webinars, we cut straight to the chase with these: there are no summaries about what Registered Reports are, none of my usual preamble. We go straight to the Q&A, straight to the meat.
It's the Registered Reports-for-nerds session. So, as David said, jump in at any point with any questions you might have. Sometimes the questions we answer trigger more questions as we go through, so feel free to raise those in real time and we'll do our very best to answer everything we can in the time available. So, where shall we begin, David? Is this the, yeah, here we go. Yeah, sorry, that was the last one. Okay, now we're ready. We've got all the Q&A in one big slide deck here, so we're going to be appending to this over time. Our first question: supervisors, have you supervised a student or postdoc who conducted a registered report or a preregistration? If yes, what advice would you give to other supervisors considering it? If not, is there something that puts you off, e.g. worries that the student will run out of time? Well, I can answer this from the yes perspective, and anyone who's watching this who can chime in on the no perspective, please do. But from the yes perspective, from a supervisor's point of view, there are a number of points to get clear in your own mind before you decide to go down the Registered Reports track specifically, which is the one I'll focus on here. Much can be said about preregistration, the broad concept of that outside Registered Reports, but given the focus of today's Q&A, I'll just focus on the RR component. So the first thing to think about if you're in the supervisory position is how much time you have. And this is for two reasons. First of all, you need to take into account the stage one review and editing time at the journal. At the journals where I'm an editor, that ranges from about two to four months on average. Most submissions, from the moment they're submitted, have a stage one final decision within four months.
Different journals have a different pace: some journals are faster, some maybe slower. You can always contact the editor of the journal and ask them for some general advice about how long submissions typically take to go through, so that can be a good thing just to get clear in your own mind. Make sure you allow that period of time for the review process, because it can impact the decision: if your student or postdoc is on a short-term contract, or perhaps the study is going to be very large, do you have time to go through this review process? Now, as many have said over the years, you get that time back at the end of the project, because you're much more likely to have your stage one registered report accepted at the first journal you submit to. The acceptance rate for registered reports typically outpaces regular articles because of the ability to change the protocol based on reviewer feedback. So you typically get that time back at the end, but still, from a supervisor's point of view, you need to make sure there is sufficient time within the term of the contract of whoever is actually running the project to actually do the research. Once you've got that nailed down, once you've figured out, okay, I do have enough time, this is achievable, the next thing to really think about is power and sample size, particularly if you're doing human research. Registered reports can often require larger sample sizes than the average or typical sample size in any given field, and the reason for that is the chronic underpowering, or undersampling, in many areas. When you apply an a priori power analysis, or any other kind of inferential sampling plan, at the beginning of a project, and you power or sample to the smallest effect size of interest.
You often find, and we often find as editors, that authors typically come back with sample sizes that are maybe two or three times larger than usual. So that's something also to take into account when you're planning the timeline. After that, it's really a case of anticipating all of the requirements that the student or postdoc will need to meet in order to get accepted at a journal. I have a talks folder on the OSF, and if you look at the most recent talk, one that I gave at the reproducibility workshop last week at Cumberland Lodge, slides 51 and 52 of that presentation, I believe, include a list of the top 10 ways to avoid getting rejected at stage one. I really strongly recommend that anyone considering submitting a registered report pays attention to that list. There's also a very, very useful primer on registered reports that recently came out in Trends in Neurosciences, I believe. Is that right, David? I'm always terrible at remembering author names. We can perhaps post a link to this later on Twitter, but it's a brilliant primer for how to approach the registered reports challenge from the author's perspective, and I think it applies to both the supervisor and, of course, also the student who's running the project. And I'll post links to those in the chat window in just a moment. Next question: many studies aren't deserving of in-principle acceptance, we think, because their results would not be informative. For example, asking to re-do or replicate an experiment to account for a likely confounding variable, where a null result would be uninteresting. So I guess my question to you, Chris, is why do you believe that such knowledge would be meaningful or relevant? I guess you can tackle this question in a number of ways.
I think there's an underlying premise here that replicating an experiment in order to account for a likely confounding variable is not a useful endeavor, because it could produce an uninteresting null result. And I'm not sure that I agree with that premise. I don't necessarily agree that such nulls would be uninteresting and lack meaning in the first place. I can imagine a situation where it's very important to replicate a previous study, maybe one that showed a positive result, to deal with a confounding variable. If, in that case, a null result was obtained, and that null result was different from the original study, that might suggest that the original finding is not particularly reproducible, or that there is some impact of that variable. If you get the same result, then you've controlled for something that is potentially important. You've got to see the stage one process as the screen for the kinds of questions and methods that are important and interesting and useful. I've not really come across many cases as an editor where authors have gone to the trouble of designing and writing down a detailed stage one protocol, only for the reviewers to come back and say, this is pointless because a null result would be uninteresting. I suppose those designs that may fall prey to very asymmetrical value (it's only useful if we get this result, it's not so useful if we get that result) are probably self-filtered out by researchers when they're thinking, do we really want to go to the effort of putting this in as a registered report when only one outcome would tell us anything at all? The best kind of registered report is one where all outcomes are informative in some way because of the virtues of the design: a well-controlled design, statistical power and, you know, rigor.
So, I guess in summary, in that respect, I'd say: think carefully about the underlying assumption that a null result is uninteresting in any context. It really depends entirely on the experimental design. David has just posted the link. Yes, of course, Kiyonaga and Scimeca. This is a really good primer for authors, whether they're supervisors or ECRs; David's posted that in the chat there so you can read it. All of those links are available on the Registered Reports website, and I'll put a link to that as well. There's also a link to the webinar that they both participated in about two months ago, so take a look at that if you're interested. Next question, and we've got a couple of questions coming into the Q&A, so we'll make sure to get to all of those as well. Can you discuss the relative merits of using registered reports for replications versus novel studies, novel confirmations? Novel ones are, of course, more rewarded for researchers. How do we get to the point where we're in a sound confirmatory zone while still testing a new question? It seems like the only solid confirmation would be a replication. Okay, so I'm reminded of Brian Nosek's slide where he talks about the scale of culture change, and you begin by making things possible. And I think that's what registered reports do. Historically, at least in many areas of the life and social sciences, the barrier to doing replication studies has been that it's simply not worth the personal investment of time and resources to do a massive replication of a previous result when, regardless of the outcome, it's going to struggle through the publication pipeline, because if you get the same results as the previous work, reviewers will turn around and tell you that we've learned nothing from this because we already knew it.
And if you get different results from the previous finding, then the reviewer, who is likely to be one of the original authors of that paper, will claim that you changed something in your method, which is why you got the different result, which means you get rejected. All roads, in a way, lead to rejection, probabilistically, going down the standard path with replications, at least in these fields. And I think what registered reports do is lift replication out of these doldrums. They say: here is a track which enables you to get approval for your replication study before you invest all the resources, eliminating all of the bias in the review process which would go against you, regardless of your results, on the typical route. So we begin by making it possible. And this is already having a huge impact: a lot of the registered reports that are published are replications, because the format obviously provides this mechanism for researchers to do replications in the first place. But I guess there's more to this question than just doing replications. It's about reward; it's about incentives. The answer, I suppose, viewed in a short-term way, is simply: publish more of them. Show them having an impact, show them being cited, show them having an influence on the field and on theory development, making people sit up and pay attention that the received wisdom in a particular area may not be entirely correct. Watching the self-correction process unfold in real time is likely to put a lot more pressure on the system to recognize replication, because it is having demonstrable impact. I'm seeing this as well in a kind of personal way, because at Royal Society Open Science I'm editing a format there called Replications, which is a little bit separate from registered reports.
Replications can be pursued through the registered reports track, or researchers can submit replications that they've already done in the past in a results-blind way. They submit a stage one manuscript which just describes the rationale and the methodology and withholds the results until it gets stage one acceptance. The idea of this initiative is not just to encourage more replications to be done through registered reports, but also to unlock the file drawer of all the replications that have been done out there, particularly in my area, in psychology and cognitive neuroscience, and consigned to the dustbin of history. A lot of these papers, I think, have incredible value. We're getting quite a lot of submissions coming in, and they're proceeding really well through review. So I think you take little steps. If the question here is, how do we get to the point where we're in a sound confirmatory zone while still testing a new question, then replication has to be normative, and the way you do that initially is you just make it as widely available as possible, you reduce as many of the barriers that are out there as possible, and then we see what happens. Then we can start to build incentive structures around doing replications, many of which have been proposed already. And I think one other theme that comes up from this question is: how do we know when we're ready to confirm something outside the zone of direct replications? That gets to a theme that I think is going to come up in the next question: when are theories and explanations sound enough that it's worth doing a very highly structured, sound confirmatory study on a question that perhaps hasn't been put to a thorough test before? So it's not a replication, but we're at the point where testing a hypothesis in a very sound confirmatory way is worthwhile.
So that leads us to the next question, touching on a lot of the discussion that's been going on on social media and in the published literature about the need for preregistration, and similarly for registered reports, for a large portion of the research that's being conducted. It seems to me that there's truly not that much confirmatory research that deserves to be registered; however, we continue to try to upsell the importance of every finding as if it were earth-shattering. Chris, what do you think is the correct balance here? So this is an interesting question, because I think underlying it is this idea that in some areas, maybe in psychology especially, theories are not mature enough to really support a program of truly confirmatory testing. And we are perhaps beginning to learn this from the high rate of negative results that are coming through from registered reports in psychology: we're no longer fooling ourselves into seeing what we want to see in the data. The data is simply giving us the answer, and we're finding out that that answer is no. And perhaps one of the lessons that could be learned from that, not the only one, but maybe one of them, is that in some fields confirmatory research is premature, and we need more observation, more exploration, more just charting the landscape before we begin formulating theories that in turn generate specific predictions that can then be subjected to rigorous confirmatory verification or otherwise. Yeah, and it seems to me that this prevalence of null results is the evidence necessary to shake us into that realization; it's hard to imagine getting there without them. Right, so in a funny kind of way, registered reports could be the death of confirmatory research in areas where they reveal an extraordinarily high rate of disconfirmation of hypotheses.
If every time I make a prediction I'm wrong, then maybe I'm making the wrong predictions and I need to go back a step. Now, I don't know whether that's the case, and this is a purely subjective point, because nobody knows what percentage of hypotheses need to be supported in an area (if that question even makes sense) in order for us to decide, hey, we need to do more exploratory research, or we need to develop better theories and invest our resources at that end rather than in testing predictions. And I don't really know how much of this is even specific to psychology, because if you put registered reports into any topic, so far, you get a lot of negative results. You could put them into cancer research and you're going to find a lot of negative results; you could probably put them into plant biology and get a lot of negative results. I think one of the interesting issues here is whether there's just too much confirmatory research in general across science, and maybe it should be reserved for areas that have a much longer, richer tradition of very specific theorizing. I don't know the answer to this question; it's well beyond my pay grade to make such pronouncements, really, but it's something to be thinking about. And it's also perhaps a slightly paradoxical benefit of registered reports, this initiative which champions rigorous, unbiased confirmatory research, that it may prompt us to say: actually, we don't need, or we're not ready for, confirmatory research in an area. I think it's certainly good to be thinking about these things. I have no idea what the correct balance is; I think that's something that a community as a whole has to decide based upon its priorities. All right, we've got a lot of questions coming in, so let's dig right down into everything. Thank you, everyone, for submitting.
How can we decide whether we need to supply pilot data for registered reports? So you need to think about the purpose of pilot or preliminary data in a registered report. Typically, in the submissions I handle, it is to provide a proof of concept for a particular method, perhaps a novel method, that needs to be verified in some way that is independent (this is crucial) of the actual hypotheses that will be tested. So, some kind of independent verification that the method works. If it's an analysis pipeline, pilot data might be useful for confirming that the pipeline does what it says on the tin, that if you put data in, you get a sensible answer out. Again, most likely not in the context of testing the original hypotheses, but instead just confirming that the pipeline passes the smoke test and doesn't catch fire. Another way authors sometimes use pilot data is to provide an effect size estimate for a power analysis. This can be tricky, because it's usually not advisable to use any single point estimate when doing a power analysis; point estimates are biased and imprecise, and have an error bar associated with them. But still, there are cases where this can be useful for deciding on a zone for an effect size estimate for the actual preregistered study. So the overriding point is: you need to supply pilot data if there's some element of your method that you can't really pre-specify without knowing more about the general landscape in which you're going to be collecting the data. Some fields rely on this a lot. Most neuroimaging papers that we get as registered reports, for example, include some pilot data of some kind to verify that the very complex pipelines used in the analysis actually work as intended.
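To make that caution about single point estimates concrete, here is a minimal sketch. It uses the common large-sample approximation for the standard error of a paired-design effect size (Cohen's dz); the pilot values (dz = 0.5 from 20 pairs) are illustrative assumptions, not from the webinar:

```python
import math

def dz_confidence_interval(dz, n_pairs, z=1.96):
    """Approximate 95% CI for a paired-design effect size (Cohen's dz)
    estimated from a pilot of n_pairs paired observations, using the
    standard large-sample approximation for its standard error."""
    se = math.sqrt(1.0 / n_pairs + dz ** 2 / (2 * n_pairs))
    return dz - z * se, dz + z * se

# A pilot of 20 pairs showing dz = 0.5 is consistent with true effects
# ranging from near zero to very large:
low, high = dz_confidence_interval(0.5, 20)
```

With dz = 0.5 from only 20 pairs, the interval runs from roughly 0.03 to 0.97, so a power analysis pinned to the pilot's point estimate alone could badly under- or over-shoot. Sampling to a smallest effect size of interest, as discussed earlier, avoids leaning on that single noisy number.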
In other areas, very well developed parts of, say, cognitive psychology, these kinds of preliminary experiments aren't necessary. And in some other cases it goes the other way entirely, where researchers submit very large, very comprehensive experiments in stage one of their registered reports, the purpose of which is to generate hypotheses for the preregistered protocol. So these aren't really pilots; they're more "we did these experiments, which suggested this hypothesis or these hypotheses, and now we're going to put them to the test." So it can go the other way too. Basically, any time you want to use data to decide something about your method, or about the question you want to ask in your preregistered protocol, that's the point where you probably need to be considering pilot or preliminary data. And I would also point to its importance in field research for demonstrating that you can manipulate the system, or reach the community that you need to reach. I know that in both impact evaluations in economics and in ecology and field research, it's often a point of pride to be able to demonstrate that you can do what you're proposing to do, and pilot data can be quite helpful for demonstrating your competence there, the feasibility. Yeah, demonstrating the ability, particularly if you're doing something out on the edge. Jesse asks: is it acceptable to include a pre-planned analysis without a hypothesis, as well as hypothesis-testing analyses? I've seen some journals, such as Cortex, allowing this, but others, like BMJ Open Science, specify that all analyses must test hypotheses. When testing novel research questions, it's not always possible to have a hypothesis. In theory, yes; in practice, when researchers pre-plan an analysis, or a series of analyses, in great detail in order to answer some question, they kind of end up with hypotheses anyway.
It's just that they're perhaps not specified, or they don't stem from some very clear theory. It might be the case that they're exploring, but they want to explore a series of paths in an analytic chain, and each of them is, in fact, essentially a hypothesis; they just don't have a strong rationale for any individual one, so instead they're going to test all of them. Genome-wide association studies come to mind as one way of doing this: hey, let's just test 15,000 hypotheses and correct our alpha, because we're exploring the landscape and we want to do it in an unbiased way. So I think in principle, yes, it's possible. It's unusual; we don't get many submissions, at least not that I've seen, where researchers are able to articulate in sufficient detail what exact analyses they're going to run and what conclusions they will draw from what outcomes (which is also very important) without that, in a way, just becoming a series of hypothesis tests, even if there are many of them. But I wouldn't rule it out. And if you're doing that kind of work and you think you can meet that condition, where we've got a question or a series of questions, we have no predictions whatsoever, we have a big data set, we're going to run these very specific analyses, and we're going to draw these conclusions based on the outcomes, then provided bias is controlled throughout that chain from question through to interpretation of outcomes, it's almost of secondary value, if any, to the registered reports process that there are explicitly articulated predictions. That's not a requirement, really, provided everything else is locked in and bias is controlled. What is, in your opinion, the best power analysis tool that you can suggest for mixed factorial ANOVA designs, such as a 2×2×2×2 mixed design?
Well, you know what, it's funny: with any kind of 2×2×2×2 design, everything pretty much boils down in the end to a t-test; it's just a difference of difference of difference of differences. Things get tricky when you start dealing with more than two levels of a factor. G*Power can do some of this, but G*Power notoriously struggles with factorial repeated-measures ANOVA designs. Daniël Lakens has published a really nice preprint on this, which David can probably conjure from the internet; give him a few minutes. I've got it in one of my slides and talk about it in some of my workshops. It's simulation-based; I think the preprint is actually called "Simulation-Based Power Analysis for Factorial ANOVA Designs", or something like this, and it's a very nice way of doing this outside G*Power. I think there's even a Shiny app that goes with it. There's also PANGEA, which I think was a tool built by Jake Westfall (I might be wrong about that), but that's also a really nice tool for doing complex factorial ANOVAs, and it's quite flexible as well. Perhaps not quite as user-friendly as G*Power, but then G*Power does have its limitations. The other thing you can always do is simulate, and this is something that Dorothy Bishop advocates quite often: if you can't find an analytical solution for your power analysis, just generate data. Generate some data under your predicted effect size and number of participants, feed it into your analysis, and run the power analysis that way, based on the particular part of the design you're testing hypotheses within, and go from there. The one thing I would also say is that, for most of the registered reports we look at, the key hypothesis tests are usually not the highest-level interactions in an ANOVA.
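The simulation approach described above can be sketched in a few lines. This is a minimal, hedged example using a paired t-test as the simplest case; the Cohen's dz, sample sizes, and normal-data assumption are all illustrative choices:

```python
import numpy as np
from scipy import stats

def simulated_power(dz, n, alpha=0.05, n_sims=4000, seed=42):
    """Monte Carlo power estimate for a paired t-test.

    dz: assumed standardized mean difference (Cohen's dz) under H1.
    n: number of pairs.
    Simulates standardized paired differences ~ N(dz, 1), runs the
    test on each simulated data set, and returns the rejection rate.
    """
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        diffs = rng.normal(loc=dz, scale=1.0, size=n)
        _, p = stats.ttest_1samp(diffs, 0.0)
        rejections += p < alpha
    return rejections / n_sims
```

For a factorial design, the logic is identical: simulate data for every cell of the design, run the full ANOVA (or, better, the focused contrast that actually tests the hypothesis) on each simulated data set, and count rejections. That is why simulation scales to designs that G*Power can't handle.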
Those high-level interactions might be necessary in order to go further, but they're usually not sufficient, because in any kind of high-level design where you've got three or four factors interacting, there are numerous patterns the data could take which would produce a significant interaction, and there might only be a handful of those results which would support your hypothesis. So it's really important to think about what pattern of results would confirm or disconfirm your prediction. And usually, when you drill down to that level in a stage one registered report, you end up with some kind of test that compares one condition with another, or one difference between conditions with another difference. So, whilst I wouldn't say don't go ahead with mixed factorial designs, think carefully about what part of that design is the crucial test of your hypothesis. Yeah, there's probably a nugget in there that's the main focus. All right, a very practical question: a few journals recently adopted registered reports, but they don't yet have clear guidelines for what a manuscript should look like for submission. Can we use the journal's non-registered-report published papers as a guide to the outline of these submissions, or are they different enough that we should assume larger pieces of writing will be accepted? What a good question; I've never had that question before. Okay, if there's nothing written specifically in the policy, there are two possibilities. One is that there are no word limits and that kind of thing, so it's basically just a big zone of freedom. That's how I run things: at the journals I edit for, there are no word limits on registered reports at all, and the guidelines are the guidelines. Follow those guidelines and you'll be okay; you won't get knocked back for departing from the regular article guidelines. But not all journals work the same.
Some guidelines are not as informative as they could be. Some journals do impose word limits, and sometimes those word limits are not stated clearly. So if you're in doubt, and you're considering a journal and you think you might run afoul of some arbitrary formatting guideline or word limit or whatever it might be, I would just drop the editor a pre-submission inquiry and say: we are considering submitting a registered report to your journal, and we have the following questions. You might even use the opportunity to tell them a bit about what your study is about, to make sure it's within remit. I get a lot of these sorts of inquiries; they can be dealt with very quickly. In general, a lot of the guidelines are quite specific. Where they're not, what I wouldn't do is just submit blindly to a journal and hope, because they might come back and say, thank you for your ten-and-a-half-thousand-word stage one registered report; our word limit is 3,000 words, go away and change it all. And you'll be like, God save me. So just get sufficient clarity in your own mind before taking that leap. Yeah, those pre-submission inquiries can be quite helpful; they point you in the right direction every time. And don't be shy about doing that. Maybe some people feel it's a bit inappropriate to email an editor with a question. It is not: pre-submission inquiries are routine at journals across all article types. Use them. All right, referring to the recent Nature editorial from June 2019 where the registered report concept was described: why don't Nature journals themselves accept registered reports yet? And what is the status of registered report adoption in medical journals? Okay, so Nature Human Behaviour does offer registered reports; there is one Nature journal that has adopted them so far.
And without speaking for Nature, or the Nature Publishing Group, I can tell you that they are very keen on the concept generally. They are very supportive of it internally, and they're doing a lot of work to discuss, amongst the different editorial teams, ways of implementing it. I've been involved in a lot of these discussions with them right up until the end of last year, and I can tell you that there are two more journals coming online very soon in the Nature group, which will be very significant adoptions. Nature itself is also in the process of considering adopting the format, and the chief editor, Magdalena Skipper, is very positive about registered reports. But they're also cautious. I think, from their point of view, their main concern isn't publication bias, or having to accept papers regardless of results; it's making sure that they have the editorial expertise on hand to assess these manuscripts at stage one without making mistakes. And I think this is quite prudent for a set of journals which employ professional editors rather than academics: they want to make sure they're getting it right. So they'll be a bit slower to come online, but I think we can look forward to a future where most, if not all, of the journals within the Nature Publishing Group accept registered reports eventually. The status of registered report adoption by medical journals is a separate issue again. BMC Medicine was the first major medical journal to offer registered reports. I think they launched in August 2017 or 2018, and they've already been getting some good submissions coming in. There are a couple of smaller medical journals which are now offering them, but none of the big five medical journals, like BMJ, New England Journal of Medicine, JAMA, etc., are offering registered reports yet.
They do know about the format. They have not, in many if any cases, made positive noises about them yet. We don't entirely know why that is. I suspect that when publishers or journals go silent on registered reports, it can often be because they are concerned about having to accept manuscripts regardless of results; if they say nothing, that is often what fills the silence. But they're reluctant to say that in any public way, because it's kind of unpopular to say that your marketing model depends upon publication bias. So we don't yet know why these big journals aren't offering the format. I suspect some combination of fear of eliminating publication bias, and also perhaps the consequent effects on impact factor and ranking and this kind of stuff, which these big journals care a lot about. But we will keep pushing, and I would suggest anyone who's watching this who wants a medical journal to adopt registered reports: go and ask them. The more people that put pressure on these editorial boards to do the right thing and to offer this as an option, the sooner they'll just fall over. They can't say no forever. I've seen numerous cases over the years of journals shaking their heads and saying no, we couldn't possibly do this, and then after a while changing their minds because they realize which way the wind is blowing. So keep blowing wind. Keep pushing them, and eventually I think we'll succeed on all these fronts.

Yeah, I think that field is a natural fit for the model; they're extremely used to the process of trial registration. There's been a whole lot of work done in that field by Ben Goldacre and many, many other folks, looking at the difference between what's registered and what's reported, and at the different types of outcomes reported on registries versus those reported in the articles.
That field is quite aware of the issues, so keep on pushing if you're in it. As Chris said, ask the editorial board, or even send a pre-submission inquiry saying: I would like to run this study, and I believe the results should be published no matter what. A couple of journals have come on board just from direct inquiries from people asking for their particular study to be submitted as a registered report. So those are all possible.

Dave, this might be a good opportunity to post in the chat the link to Registered Reports Now, which is a crowd initiative to increase pressure on journals to adopt registered reports. It works by providing templates for consortia, groups of researchers at all career levels, to lobby using collective action: to write to journals and say, please offer registered reports; here are the common objections and here is how to handle them. It puts me and David right in the firing line helping these journals set up, which is something we're always happy to do. The more groups of researchers, the more critical mass there is, and as I say, the more likely these journals are to eventually flip. And the nice thing about Registered Reports Now is that there's a public list of every journal that has been approached, so it shifts this entire lobbying initiative from behind closed doors, which is the way we used to do things, right out into the open, where everyone is accountable for the decisions they make in positions of leadership. So I'm sure David will post a link to that; there it is, right there in the chat. Please read and use it, and, Avengers assemble, go ahead and approach these journals.

All right, next question: would you consider a secondary data analysis study, for example on a longitudinal cohort, for a registered report if the lead author has never accessed the data set before?

Yes, a very easy one to answer: absolutely, where the authors have never accessed the data before.
There is no risk of bias, or minimal risk of bias, and so I would personally consider that perfectly fine for a secondary-analysis registered report. That's just me, of course; I edit seven out of the 223 journals, so if you are considering a journal where I'm not an editor, check their policy. If they don't say anything about secondary analysis for registered reports, perhaps again use a pre-submission inquiry to lay out your scenario and what you have in mind. Make sure that you emphasize in your pre-submission inquiry the steps you have taken to prevent or minimize bias and overfitting, which is always a risk when you're analyzing data again, and see what happens. Most of the time, particularly if you haven't even accessed the data, a secondary registered report will proceed in much the same way a primary one would. Yeah, absolutely.

Victoria asks: after about 10 months and two rounds of review, our stage one registered report was rejected because, quote, "the power analysis is likely to be an overestimate of the true effect size; it's not actually the effect estimated in the design, and it doesn't match the actual analysis plan." While we do agree with the reviewers on this point, we felt we'd done the best we possibly could, given the frustrating lack of existing effect size estimates. It seems the reviewers think our design simply wasn't a fit for a registered report, which we interpret to mean that all registered reports must either be exact replications or use Bayesian analysis. Do you think this is true? If not, what room is there for power calculation estimates with registered reports? There's a bit more background, but let's start with that generally.

Okay, that's not so grand; that's a bit disappointing to hear. For a stage one registered report to be rejected after two rounds of review is unfortunate; you would have thought this issue could have been addressed much sooner in the process.
It's also, to my mind, very common for reviewers to raise this issue, that an effect size estimate is overly optimistic and that a much larger sample size is therefore needed; we see this in a lot of registered reports. The best way to address it is simply to ask the authors to increase their sample size, and, as part of this, to align everything: making sure that each link in the chain is exact as you pass from question to hypothesis to sampling plan to analysis plan to interpretation.

In answer to the question, should all registered reports either be exact replications or use Bayesian analysis: I think that is not true. I know it's not true, because most registered reports I handle are not exact replications, and most don't use Bayesian analysis. So we know that is not true generally. It appears that in this particular situation there's been some problem that has led to this conclusion.

In general, when you ask what room there is for power calculation estimates with registered reports, the key thing to really nail is making sure that you are tackling something that everyone would agree is the smallest effect size of interest. And this can be a point of contention, particularly in areas where there isn't a huge evidence base or theoretical base to motivate that effect size estimate. So this is a study on infants, yes, bilingual infants. Usually, reviewers from the same area appreciate the limitations on the feasibility of doing very large studies. So there's usually a natural realization that doing a registered report that is already larger than typical, within an area where these sorts of resource restrictions apply, is better than doing it the old way outside the registered report format, in a small sample, and introducing all this bias.
So this is an unusual situation as far as I can see, where after two rounds of review there's been this massive disagreement. What I would suggest, Victoria, is that you contact me offline and we can talk more about some of the details; it might be worth looking through some of the additional background that you've given. It might be the case that this is worth appealing, if it's not too late, particularly given that you've gone through two rounds of review. For any registered report that goes this far, there is always a solution. The solution might be to recruit more participants or change the analysis, but there's usually a solution that can lead to acceptance, and it might be worth exploring that in more detail; I'd be happy to talk about it.

Yeah, often early rejections are because the study isn't the right fit, or because the answers could be uninformative, but these should be solvable problems, although they are of course, as you describe, challenging ones.

So look, just drop me a line if you want to talk more about it. It's very detailed, so I don't have time to look at all the details here and respond to everything, but do contact me offline and we can discuss this in more detail. And who knows, maybe we can come back in a future Q&A and say: yes, this registered report ended up being appealed successfully at Journal of X. So we'll be curious about this, Victoria; do follow up.

Martin asks: what effective alternatives to registered reports and pre-submission inquiries are there to reduce the risk of manuscript rejection?

What effective alternatives to registered reports... Can you elaborate on this question, Martin, if you're listening? Is the question here about alternative article types, or about alternative approaches to submitting a registered report?
I think maybe focus on this: besides a pre-submission inquiry for a registered report, what should we do to maximize the probability of it being accepted?

So maybe it's worth posting the top 10 recommendations again, if you've got that document to hand. I include these within the registered report guidelines at, I think, most of the journals, because there are lots of common ways that manuscripts fail to meet the criteria sufficiently to get to in-depth peer review. And I should point out, a manuscript doesn't have to meet every single criterion 100% to get to the reviewers, but you have to get probably about 80% of the way there. The editors, who are not going to be specialists in every area, have to be able to see that peer review in this context will be constructive and isn't just going to identify a whole lot of glaring problems; we try to avoid that for everyone's sake. I will not send a registered report submission for in-depth review if I feel that it falls a long way short of meeting the stage one criteria, because that risks wasting everyone's time. If the reviewers see an enormous gap between what they're reading and what they think a registered report needs to be, it's much more likely that you'll get three reviewers recommending outright rejection, and then the editor is in the difficult position of having to decide whether or not to invite a revised manuscript. It's much better if authors get closer to that point when they initially submit. So that's where these top 10 recommendations come in. Actually, what you've put up there is not the top 10 recommendations; that's the checklist for building a registered report. It's also very good; I recommend using it as well.

By the way, coming back to the last question, one of the things you can do, before that disappears, is in that checklist.
Question nine has a table that I've started using in my registered reports workshops. If you really want to nail the linkage between question, hypothesis, sampling plan, analysis plan, and interpretation, to everyone's satisfaction including your own, then this table is really useful, and I would recommend you actually complete it and put it into your stage one registered report, because it'll help everyone understand exactly where you're coming from. But as I say, this is not the top 10; the top 10 is a separate document we've got somewhere. That's really the best way of making sure you don't get a desk rejection, or even worse, that it goes out to in-depth review with a whole lot of omissions and problems, the reviewers are left asking what the hell this is, you get a whole lot of negative reviews back, and then you go through this tortuous process. The best way to avoid all that is to nail it when you submit, and that top 10 reasons for desk rejection should help you do that.

Yeah. By the way, one other thing you could think about, and I've seen a couple of people do this, is to post your stage one draft as a preprint for a few weeks before you even submit it to a journal, and get community feedback on it; that can be quite useful. It's a really good idea: post it online and share it with as many colleagues as you can, and that's a really good way to catch omissions if you can get some good feedback at that point.

All right, we'll find that top 10 recommendations document and share it out, but I think that is all the questions that have come through. Just double-checking the chat window here, because a couple of questions came through the chat and the Q&A. Okay. All right. So I think that's it for now. We will make sure to send a recording of this webinar out to the panelists, and we're looking into ways to provide a transcript as well, as that can be a useful way of disseminating some of this information.
And Chris, thank you very much again for your time. A pleasure, as always. And as I say, if you've got any questions you'd like to follow up with me one to one, you can always drop me an email, and I'll take a look at any individual cases. That's it until next time; we'll do this again, probably next month. Probably, yeah. Super. Thank you, everyone.