Welcome to the webinar today. The webinar is on being a reviewer or an editor for the Registered Report format. We're hosting it this week because, if you're not aware, it is one of the most exciting weeks of the year, Peer Review Week, and the Registered Report format is a major reinvention of the way that peer review works, putting more focus on the parts of the science that really matter: the importance of the research question and the methodology. So we'll give a little bit of a definition of what Registered Reports are in just a moment, but mostly focus on what reviewers and editors are expected to be looking at as they go through the Registered Report format.

Our presenters today: first we'll have Chris Chambers talking from the editor's perspective, answering the question of what we're looking for from reviewers when we send these out, particularly Stage 1 Registered Reports with the proposed questions and proposed methodology. Then Anastasia Kiyonaga and Jason Scimeca will be talking about practical tips for writing up a Registered Report. And then we'll have about 15 minutes for Q&A afterwards; I'll give a little bit of instruction for that in a few minutes. Chris Chambers is the chair of the Registered Reports Committee and a professor of neuroscience at Cardiff University. He's currently an editor at seven journals that accept the Registered Report format and has handled, or is in the process of handling, 171 Registered Report submissions. When I asked him about the funniest thing anyone has ever said to him when talking about this format, it was a senior researcher shouting at him, "You're ruining science." We certainly don't think that's true, and we'll give him an opportunity to respond to that as he's presenting. Anastasia Kiyonaga and Jason Scimeca are postdoctoral researchers and cognitive neuroscientists at the University of California, Berkeley. Their recent article on practical considerations for navigating the Registered Report format was published a few weeks ago in Trends in Neurosciences. And with that, I'm going to pass it off to Chris Chambers to talk about what editors are looking for in the reviews that come back on Registered Reports. I'm going to stop sharing my screen and pass it off to Chris.

Great, thanks David, let me just get this working. Right, that should be working now. So for the next 20 minutes I'm going to give you some advice as a reviewer, and I suppose you can join me in ruining science, if that's how you want to look at it. The idea here is that when reviewers approach Registered Reports they often do so with some trepidation. It's a bit unusual; the peer review process feels different. So what I'd like to do today is give you some very concrete tips for when you're in the position of reviewing a Registered Report: some very specific questions you can address when you're looking at a protocol, and some general advice, in the hope that this makes the process as easy as possible for you. I included these background slides in the webinar simply because, for those in our community who can't watch this webinar live, they provide some background on the Registered Reports initiative and how it works. I'm going to assume today that there's a good degree of knowledge on this already, so I'm not going to spend a lot of time talking about the process of how Registered Reports work.
Suffice to say there is a Stage 1 peer review process where a protocol is assessed by reviewers. Following this successfully, there is an in-principle acceptance, where the journal agrees to publish the final results regardless of how they turn out. At Stage 2, authors come back with a completed study or studies, including of course the results of both the analyses that they registered and any additional exploratory analyses. Reviewers take another look, this time assessing compliance with the protocol and various other checks which I'll go through in a moment, and then the manuscript is published regardless of results. But I don't want to spend much time on that. What I want to spend more time on today is how you should approach reviewing a Registered Report when you get the invitation.

And the first thing to remember is: don't panic. This is something that, in the 170 or so Registered Reports that I've edited, I have found comes very naturally to virtually all reviewers. Of the hundreds of reviewers that have been involved in the process, we found only a very small handful struggle with this. And I think that's because as researchers we're used to judging research without results anyway. We do this all the time when we design our own studies, when we give feedback to colleagues, when we go to conferences, when we review grants or even write grants and assess them ourselves. This process of pre-results review is very much baked into the kind of scientific training that we get throughout our careers. So the first thing is: don't worry, this comes naturally, and you'll most likely find it quite an easy and rewarding process. The second thing to do is to make sure that you check the journal's guidelines on Registered Reports before getting started and familiarize yourself with the criteria that the particular journal is using to assess submissions. As an example, I'm going to show you the Stage 1 criteria that we use at Cortex, which is one of the journals that's been offering Registered Reports for the longest time. You can see there that there are five criteria that reviewers are asked to assess. I'm not going to go through these in detail because they're written in rather formal policy language, but suffice to say you should be judging how important the question is; how robust, feasible and well-controlled the analysis pipeline and study methodology are in general; and the extent to which the authors have included various quality controls and checks in their design. To give this more concrete reality for you, I'd like to propose a number of questions that you can ask of every submission that you get. If the answer to every one of these questions is yes, then there's a good chance the submission is suitable for in-principle acceptance. The first is whether the hypotheses that the authors are proposing are sensible in light of the particular theory or application that they're framing in their protocol. So are the hypotheses defined precisely? Are they falsifiable? Do they hold promise for answering the research question? That's really the first step; for many journals, that's criterion one for a Registered Report.
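By way of illustration, and anticipating the design-table suggestion that comes up below, here is a purely hypothetical mapping for a single hypothesis (the content is invented; the point is only the level of specificity a reviewer should expect):

    Hypothesis:        H1: Cueing attention to a memory item improves recall
                       accuracy relative to neutral cues.
    Sampling plan:     N = 70, giving ~90% power to detect the smallest effect
                       of interest (dz = 0.35) in a one-tailed paired t-test
                       at alpha = .05.
    Statistical test:  One-tailed paired t-test, cued vs. neutral accuracy.
    If supported:      Consistent with attentional-prioritisation accounts.
    If not supported:  Equivalence test against dz = 0.35; if significant,
                       evidence against prioritisation accounts.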
Then you can ask: is the protocol sufficiently detailed to enable an expert in my field to replicate what's being proposed without any additional information, without needing to pick up the phone or send an email to find out what they really did? And in doing so, is there enough detail to close off all of the usual researcher degrees of freedom that Registered Reports are designed to prevent? You'll find this is quite a different challenge from a regular manuscript, where method sections are often very general and don't provide this level of detail. With a Registered Report, the level of detail that is required is much greater, and method sections tend to be much longer.

Point three is very, very important, and I think it's perhaps the greatest stumbling block for Stage 1 Registered Reports in my experience. Is there an exact, precise mapping between the theory, the hypotheses, the sampling plan (for example, the power analysis), the pre-registered statistical tests that will interrogate those hypotheses, and the interpretations that the authors will draw depending on different outcomes? It's really important that this is a clear logical chain that you as a reviewer can follow, with no doubt that this is hypothesis one, these are the tests that will interrogate it, this is the sampling plan, and so on. There has to be an uninterrupted chain of logic between these. This is an area where a lot of authors struggle, because we're not used to writing down protocols with this amount of detail, so it's really important to pay attention to this element. One of the things you can do as a reviewer is ask authors to present these in a table, where column one is the hypothesis, and then there's a cell next to that which is the sampling plan, then the test, and then what interpretation the authors will draw if the hypothesis is supported or not, as in the example above. Point four addresses a basic statistical point: does the power analysis, or whatever sampling plan they're proposing, reach the minimum threshold that's required by the journal? Some journals will set a threshold of 90% power to detect the smallest effect size of interest. Other journals don't set a formal threshold but will ask you to judge whether you believe it is sufficient. So pay close attention to this in light of the journal policy, and in light of your own judgment as a scientist. Point five, related to this: do you believe that the sampling plan for each of those hypotheses proposes a realistic and well-justified estimate of the effect size? This is another common stumbling block that we see in Stage 1 Registered Reports. Authors are forced into a position of having to really justify their smallest effect size of interest, and they often struggle to do this. As a reviewer, one of your jobs is to critically assess that: is the effect size perhaps unrealistically large, in which case perhaps the sample size is insufficient? My sixth recommendation is that you check that the authors have avoided the all too common problem of proposing conventional null hypothesis significance testing and then intending to conclude that there is positive evidence of absence from non-significant results. This is very common.
It's a common fallacy in regular papers, and you see it practiced in Registered Reports as well at Stage 1. I think it reflects a certain gap in statistical training. So pay attention to this, and if you detect that authors really do want to draw the conclusion that there's nothing there from a null result, then suggest that they use Bayes factors instead; or, if they want to remain within a frequentist framework, equivalence testing can also be a useful suggestion in that scenario.

Point seven is a stylistic point but also an important point of content. It's very easy in a Registered Report for authors to propose a whole bunch of exploratory analyses that they might consider doing, and doing so can blur the distinction between the analyses at Stage 2 that were truly pre-planned, preregistered and confirmatory, and those that are data-driven, exploratory and post hoc. So what we generally require as editors is that authors remove all discussion of planned exploratory analyses from their Stage 1 manuscript, to really separate those two types of logic and those two types of analyses very clearly, except for those exploratory analyses that, as a reviewer, you feel are necessary in order to justify certain design features. For example, if the authors include a whole lot of extra measures, questionnaires or extra measurements of some kind like eye tracking or whatever it might be, then in order to justify those additional measures they might have to say: we plan to do extra exploratory analyses looking at eye tracking, or looking at how the main effect is moderated by performance on these scales. In that case an exploratory analysis would be justified, because otherwise the design doesn't make sense. But anything else really should be cut from the Stage 1 manuscript, as I say, to ensure a very clear separation in your mind, in the author's mind and in the editor's mind of which analyses were planned and which at Stage 2 are in fact post hoc. Point eight: have the authors clearly distinguished work that has already been done from work yet to be done? A lot of Stage 1 manuscripts will include preliminary studies and pilot experiments. This can sometimes get a bit confused, particularly when authors are writing a Stage 1 for the first time: they include experiments that have been done, with results, and then they include a protocol for what they haven't done, but write it in past tense, which can leave the reviewer unsure whether they're reading a report of another preliminary study or a protocol for planned work. So it's really important that the authors clearly distinguish these, and that's something you should be trying to assess as well. It's usually very obvious when authors get it wrong. One simple way of achieving this is that everything that hasn't been done should be written in future tense, and anything that has been done should be written in past tense. Another important assessment criterion is whether you feel the authors have put enough positive controls, manipulation checks, reality checks and data quality checks into their design to convince you that, if they got a particular kind of result, whether a null result or a confusing, paradoxical result, there was enough evidence that they'd run the study to a high standard.
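Stepping back to point six for a moment: where authors do want to argue for evidence of absence within a frequentist framework, the equivalence-testing suggestion can be sketched in a few lines. This is a minimal, hypothetical illustration of the two one-sided tests (TOST) procedure; the data and the equivalence bound are invented, and the bound would have to come from the justified smallest effect size of interest:

    # Minimal TOST equivalence test sketch (hypothetical data and bound).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    control = rng.normal(0.0, 1.0, 60)    # hypothetical control scores
    treated = rng.normal(0.05, 1.0, 60)   # hypothetical treatment scores
    bound = 0.5                           # smallest effect of interest (raw units)

    # Test 1: reject H0 that the true difference is at or below -bound.
    _, p_lower = stats.ttest_ind(treated + bound, control, alternative="greater")
    # Test 2: reject H0 that the true difference is at or above +bound.
    _, p_upper = stats.ttest_ind(treated - bound, control, alternative="less")

    # Equivalence is declared only if BOTH one-sided tests are significant.
    alpha = 0.05
    print(f"p_lower={p_lower:.4f}, p_upper={p_upper:.4f}, "
          f"equivalent={max(p_lower, p_upper) < alpha}")

(statsmodels also ships a ready-made ttost_ind, if authors would rather not roll their own.)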
Returning to positive controls and manipulation checks: these can take many different forms, but for a Stage 1 Registered Report we ask whether authors have really thought this through, and whether passing those controls and quality checks can be a determinant of success at Stage 2. And of course, in addition to this, you're also performing all the regular scientific assessment: is the design well controlled in other ways, with negative controls? Is the design sufficiently precise in other ways? And finally, for positive controls that do rely on inferential statistics, one area I've noticed that authors often overlook, and that reviewers therefore need to pay attention to, is whether each one is accompanied by a sampling plan or power analysis, and whether it meets the minimum requirement just as all the other hypotheses do. A positive control is not an afterthought; it should be baked into the design with exactly the same level of clarity and precision as all of the other hypothesis tests.

Now, that's Stage 1. Let's say you go through this process and the answer to every one of those questions is yes. The authors get IPA, they run away and do their study, and a year later they come back. If you're a reviewer at Cortex, you would then be presented with these criteria. So I'm now going to give you five tips on how, as a reviewer, you can test that authors have achieved these criteria. Again, the answer to every one of these questions should be yes in order for you to grant Stage 2 acceptance. The first and most basic one: did the authors follow the protocol that they registered? Often authors need to deviate from their protocol, and that's normal; it's nothing to necessarily warrant any great skepticism that they've done something wrong. Life is unpredictable; experiments often need to be changed, analyses might need to be updated. The important thing is: are those deviations well justified, and are they transparent? Usually there aren't major deviations, and most journal policies will require that if authors do have to deviate during the study, they contact the editor, advise the editor of what they need to change, and get approval for it. But nevertheless, as a reviewer you should be assessing how closely they followed their protocol, and whether any deviations are well justified and transparently reported. The second question to ask is whether the introduction and method in the Stage 2 manuscript, including the predictions, are identical to those in the Stage 1 manuscript. This is related to the first point, but it's very specific. It's often the case with regular papers that, in retreating into one's mind and trying to play back the historical record, we fool ourselves into thinking that we predicted something that we didn't. A Registered Report is designed to prevent this kind of hindsight bias from corrupting the scientific record. As a reviewer, it'll be very rare that this is an issue, but it's very important to check that the hypotheses stated in the Stage 2 manuscript are identical to those in the Stage 1 manuscript. It's very rare that there would be any changes, but if there are, they should of course be transparently flagged. The third check is whether any pre-specified manipulation checks, data quality checks or positive controls succeeded.
So if they did have some kind of data quality test in there, which might be that the signal-to-noise ratio was within a certain range, or that a well-established reality check was confirmed, did those quality checks pass? Now, the failure of these can lead to Stage 2 rejection. It never has, to my knowledge, in the history of Registered Reports, but it is possible. Nevertheless, you should assess this. If those controls did fail, the most likely outcome is that the authors would be required to acknowledge this as a major limitation of the work; it probably wouldn't lead to outright rejection in most cases, and there are reasons for that. One of those reasons is that in a lot of fields there aren't clear positive controls to refer to. It's not always known what exactly a robust positive control would be. What's a definite reality check, when reality is often uncertain? But nevertheless, it's something you should definitely check and comment on. The fourth point: as you will often see, authors add exploratory analyses to their Stage 2 Registered Report, which is completely permissible and indeed encouraged, but have they been performed correctly and appropriately? Have the authors taken care to distinguish the outcomes of their post hoc exploratory analyses, where conducted, from the outcomes of all the pre-registered confirmatory analyses? It is very, very important that these are clearly separated in the results, and the conclusions, really, should be dominated as much as possible by the outcomes of the pre-registered analyses, because they're the ones for which bias is controlled. As editors, we pay a lot of attention to this as well. We don't want to see an abstract where all of the discussion of results and implications is dominated by exploratory analyses, and as a reviewer, you should expect the same. And finally, perhaps the most important criterion of all: are the conclusions that the authors draw based upon the evidence? A very simple, very important element to assess.

In addition, there are a couple of things I wanted to flag. As a reviewer, it's quite likely that you'll look at a Stage 2 manuscript and its results and have ideas for extra analyses that the authors might want to run, analyses that might shed additional light, perhaps on a confusing element of the pre-registered results, or just something that occurred to you in thinking about the study. Now, there's nothing to stop you requesting these, and you should definitely do so. But it's important that you don't expect that the editor will necessarily enforce the request. The reason for this is that the Registered Reports policy explicitly protects authors from the subtle goalpost-shifting that can happen when reviewers send authors away to do endless quantities of post hoc exploratory analyses. This is a way that I have observed reviewers who don't like a particular result trying to gatekeep publication of those inconvenient results. So as a reviewer, it's important to recognise that you should make those recommendations, and most often they will actually be followed through by the authors, because usually they're good ideas, but don't expect that this will be a condition of publication.
And secondly, if you do find a flaw in the actual design of the study that you missed, or that wasn't addressed at Stage 1, then again, by all means mention it; but just as with the post hoc analyses you want to suggest, don't expect that the manuscript will be rejected on that basis. This is another very important protection that's built into Registered Reports: we do not allow the Stage 1 protocol, the design, rationale and hypotheses, to be re-litigated at Stage 2, because to do so would be to let in all of the same confirmation bias and hindsight bias that we see with regular papers. The most that you can expect, if you do uncover a flaw in the protocol that you missed or that you felt wasn't addressed properly, is that the authors will be asked to address it in the discussion. I have adjudicated some interesting cases, actually, where the reviewer has performed the additional analysis themselves. So there might be some kind of flaw that they think is in the protocol; they suggest an additional analysis; the authors disagree; and the way we resolve this as editors is to say: okay, reviewer, why don't you write a short comment that accompanies this Registered Report, where you report the outcomes of this analysis yourself? That can be an interesting way to broaden the focus of the discussion. So I'll leave it there; I think my 20 minutes are up. These slides, as you can see from the bottom right of the screen here, are available online, so feel free to use them and share them with your colleagues. Otherwise, I'll end my time, take any questions later, and we'll move on to the next talk.

Thank you so much, Chris. That was a really great overview; I bet we'll have a lot of questions. Everyone, we will have time for the questions that come in afterwards, so do take a look at the Q&A box; it should be at the bottom of your screens. You can submit those anytime; they'll go into the queue and we'll make sure they're all addressed by the end. But without further ado, Jason and Anastasia will talk about practical considerations for writing up a Registered Report, as soon as they share their screen. Are we good to go? Great.

Hi, thanks, I'm happy to be here. I'm just going to jump right in and give a little bit of background about our experiences with the Registered Report process, just so you understand where we're coming from: first off, as authors who have submitted a Registered Report manuscript that's now been accepted in principle. I'm going to tell you a little bit about the project because we're going to come back to it for examples throughout. As cognitive neuroscientists, we developed a proposal to use fMRI brain imaging and TMS brain stimulation to assess the causal role of certain brain regions in the cognitive function of working memory. After scouring all the available resources that we could find for any guidance on the Registered Report process, we felt like we had a really good handle on it and had developed an airtight proposal. When we submitted it, it ultimately underwent five rounds of review, with several additional experts being brought in along the way. I don't say that at all to complain about the process, but just to highlight the fact that if both we, and the editors and reviewers, had had a better understanding of how to thoroughly address each of the criteria, then the whole thing could have been streamlined from the start.
And now of course we're practitioners, scientists actually conducting a study in the context of a Registered Report. That's revealed a whole new set of challenges beyond the review process, as we've realized the consequences of some of the exclusion criteria and methodological decisions that we made. Again, if we had put in a little more effort on the front end, a lot of those challenges could have been minimized. But we don't think we're alone: now that both of us have evaluated several Registered Report submissions, we see a lot of other authors struggling with the same sorts of challenges that we did. So recently, as David mentioned earlier, we took this experience, as scientists who are excited about this format and really enthusiastic about Registered Reports, but essentially novices at the process, having just gone through Stage 1 review and now carrying out the science as active practitioners, and turned it into the practical advice that we wish we had had access to while navigating the process the first time. This turned into the primer that David has linked from the primary Registered Reports website, and we've also compiled a collection of resources that both define the goals of the Registered Report idea and are a little more practically oriented, in terms of what you might use as a resource when you're navigating the process. A lot of our perspective comes from the mix of excitement, misconceptions and challenges that we've faced and encountered at conferences, talking to other people about our project. There are a lot of people who come up to us at conferences and say: I'm really excited about this Registered Report format, but I'm glad you're the one doing it, because it seems like a lot of work and I'm not sure it's worth it. We've also had people we've described the project to who are unaware of the distinction between a Registered Report and a pre-registration, or even unfamiliar with the Registered Report concept. And then we also have people who will say: well, that's great for you, but I can't imagine that in my field of neuroscience or cognitive science this sort of work would ever be suitable for a Registered Report. So the way that we've been framing the Registered Report process, and our specific practical suggestions, is that it's very distinct from pre-registration, but you can think of it as pre-registration on steroids: you're aiming for the platonic ideal of the hypothetico-deductive scientific method, of what we want our research practices to look like. It takes all the things that are part of the pre-registration and open science movements and amplifies them, because the research has to be valuable regardless of the outcome; or, put more practically, you have to convince reviewers to accept your proposal without seeing any results. We've distilled our practical suggestions into three categories that we think help address the goals Chris laid out already, and also the points that we know reviewers will be evaluating a Stage 1 proposal on. In other words: how do you actually get to those goals? How do you actually accomplish them?
Because they often prove to be much more multifaceted than they look at first glance. So we're going to walk through these categories, with examples of how we think an author, and also a reviewer, can consider the practical steps that go into meeting the criteria for a Registered Report. The theme of our whole experience is anticipation: anticipating what the results will look like, and anticipating the challenges that you could face as both an author and a reviewer. So when we talk about this practical idea of convincing your reviewers, it's about putting yourself in somebody else's shoes and thinking about how to make a compelling case.

First off, Registered Reports are designed for confirmatory, hypothesis-driven research. So you have to make hypotheses. I know that sounds like a really obvious and straightforward requirement, but there's actually much more to it than meets the eye. We think gearing this part towards the goal of ensuring that any findings you get can make a valuable contribution, and that the space of potential post hoc interpretations is limited, can help both authors and reviewers be sure that the hypotheses are thoroughly and well specified. We're going to give some examples from our own experience, just to highlight how some well-meaning researchers could misconstrue the demands of the format. In our first submission, our hypothesis amounted to basically: we expect to see differences in performance as a function of TMS stimulation site. In retrospect, that seems woefully underspecified, because the accepted-in-principle version now entails four pages of detailed hypotheses. And here I'm just pulling a quote from that list of hypotheses to highlight that there's now a tight linkage between the predictions, the specific statistical tests and the theoretical interpretation. I'll say I really like the suggestion to put these in a table: we ended up writing this out in narrative format to try to make that tight link, but just laying them out in a table would also be a really great way to clarify this and make it a little more streamlined. In our primer, we've laid out a subset of pointers that we think are a good guide to hypothesis specification. I'm not going to go into all of them in detail here, because we don't have that much time; I just want to highlight the last one, which is in keeping with this forward-looking idea, because I think it's really important but also really tough to get a handle on. It's about anticipating not just the results you predict to find, but the various patterns of results that you might find, how those could be interpreted, and whether your design allows patterns of results that will clearly inform the predictions. This comes back to the overarching goal of ensuring that whatever you find makes a valuable contribution to the field. And I think it's useful, every step of the way, to revisit whether these goals are being achieved. That's something it sounds like we might do for every conventional study that we plan, but in practice it turns out to be a little more nuanced when you have to write it all out in advance. The second category, or overarching consideration, that we talked about was sample size and power analysis.
Again, a primary, central goal of the Registered Report format is that you want to minimize false positives and false negatives. Looking at our own personal example of this: we started off with a first submission that had just a sentence or two, based off of one closely related previous study, proposing that we wanted 24 participants. There was essentially a single paragraph that looked like a bare-bones participants section of a conventional paper, with one sentence tacked on for the power analysis. In our accepted-in-principle version, based on feedback from reviewers, we ended up with three pages of more in-depth discussion of the relevant literature, covering both the TMS brain stimulation work and some computational modeling. We performed modeling simulations to show that we would have enough trials and enough power to detect the effects that we wanted, as well as several other considerations. And the point here is not just to write more; it's that if you go in saying, okay, I know I need to have X power, that's listed as a clear evaluation point, but the steps to get there are often much more nuanced than you might anticipate going in. So again, in our primer, we've laid out several steps. I'm not going to go into each of these in detail, but just to touch on some of the general themes: the general idea is that there's no one-size-fits-all solution that you should use. People have come to us now and said: oh, I know you did a lot of power analysis, so what's the right way to do this particular power analysis for my grant, or my pre-registration, or my Registered Report? We've laid out several steps that you have to consider, but eventually it comes down to creating a compelling argument for your approach and for your estimated effect sizes, so that you end up with a sample size that ties back to this goal: it's going to minimize false positives and false negatives. We lay out several points to consider in the primer. For example, make sure you match the tests you're drawing your effect sizes from to the actual tests you're going to be doing in your experiments: you don't want to take an effect size from a t-test and then use that to do a power analysis for a three-way ANOVA interaction. Also consider that there might be publication bias in the literature, so you don't want to base it off of just a single study. This can also be quite challenging for research questions that are not similar to a lot of pre-existing research in the literature. Like I said, there's no one size fits all; it comes down to making the argument that what you've identified for your power analysis makes sense. And of course there are alternatives; this primarily applies to a frequentist power analysis and sampling strategy. This is one of the areas where we've learned the most (well, we learned a lot in all of them). Everyone who's familiar with Registered Reports knows you have to provide an estimate of sample size, so that's a clear criterion, but what's not clear is exactly what, and how much, goes into doing that well and rigorously and compellingly. So this is just another case where, even if you're familiar with power analyses and typically include them in your method section, in this context you have to go quite a bit further to make your case.
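As a concrete illustration of the simulation idea (not their actual analysis; the effect size, sample size, and test here are invented placeholders), a bare-bones simulation-based power analysis might look like this:

    # Bare-bones simulation-based power estimate for a paired design.
    # Effect size, sample size, and alpha are hypothetical placeholders.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_subjects, effect_dz, alpha, n_sims = 30, 0.5, 0.05, 10_000

    hits = 0
    for _ in range(n_sims):
        # Simulate per-subject condition differences under the assumed effect.
        diffs = rng.normal(effect_dz, 1.0, n_subjects)
        _, p = stats.ttest_1samp(diffs, 0.0)
        hits += p < alpha

    print(f"Estimated power: {hits / n_sims:.3f}")

The advantage over a closed-form calculation is that the simulated data generation can be made as realistic as you like, with trial-level noise, exclusions, or hierarchical structure, and then the exact pre-registered test is run on each synthetic data set.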
Well, this is something that we're often not trained in; PhD programs and graduate training are only just starting to incorporate it, I would say. And so there are a lot of misconceptions in this realm as well. You might notice we don't talk about pilot data on this slide: pilot data is really great for demonstrating the feasibility of your protocol and demonstrating that your manipulations might work, but in general it can be problematic to base a power analysis on a small pilot sample. Some of the resources that we've collected on the little website we linked to include what we think are the most practical things to read to get a handle on the best way to approach power analysis.

The third area that we want to focus on, and this is really a big one with many elements that go into doing it well, is that a fundamental goal of the Registered Report is to promote rigorous, transparent and reproducible research. So you really have to consider every last detail, going above and beyond what you'd normally think is the right amount to include in a method section. Again, we have a ton of examples of ways that we beefed up our methods throughout the review process, but just to give you one concrete example: we had initially just proposed to do our analyses on accuracy, and that seemed like enough of a specification of our performance measure. But throughout the review process we were asked to define all of our measures, specifically what accuracy meant. We laughed at this at the time; we thought it was absurd. Who doesn't know what accuracy is? But now we read papers and see people using these terms loosely, and realize that there are still a lot of degrees of freedom built into them. Now, for instance, we've defined how we're handling errors of omission, exactly how we're calculating each of our measures, and every condition that would justify exclusion of data for any reason, just so that any reader can be confident that they understand exactly what went into all the analyses. This is just one example of many ways that we had to more explicitly define all of our inclusion and exclusion criteria and our outcome-neutral controls. It comes down to asking yourself throughout the process whether your methods are truly transparent. They should be written so that a conscientious researcher who's unfamiliar with the work, an undergrad even, could recreate the study exactly. You have to think: if somebody were to try to recreate your study and they found a different result, are there any degrees of freedom built into your protocol that could explain that? This is another area where reviewers can help anticipate what the holes are and what needs to be done to be more thorough. And one point that we want to make is about not overdoing it, which we'll come back to in a moment.
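To make the "define accuracy" point concrete, here is the kind of thing a protocol might pin down. The rules below are hypothetical; the point is only that every edge case is decided in advance:

    # Hypothetical, fully specified accuracy definition for a trial list.
    # Every edge case (omissions, timeouts) is decided in advance, so no
    # degrees of freedom are left for the analysis stage.
    def accuracy(trials, rt_limit=3.0):
        """trials: list of dicts with 'response', 'correct_response', 'rt'."""
        scored = []
        for t in trials:
            if t["response"] is None:
                scored.append(0)       # omission counts as an error
            elif t["rt"] > rt_limit:
                continue               # timeouts excluded from the denominator
            else:
                scored.append(int(t["response"] == t["correct_response"]))
        return sum(scored) / len(scored) if scored else float("nan")

    trials = [
        {"response": "left",  "correct_response": "left",  "rt": 0.8},
        {"response": None,    "correct_response": "right", "rt": 3.0},
        {"response": "right", "correct_response": "left",  "rt": 4.2},
    ]
    print(accuracy(trials))  # 0.5: one hit, one omission error, one timeout dropped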
So another overarching theme of looking forward, when you're trying to write out your Stage 1 proposal, is to consider all of the known unknowns: all of the things that you might have to determine, or typically would determine, along the way, like inclusion or exclusion criteria. Some of the pushback that we've gotten when we talk to more senior folks doing neuroscience research is: oh, my research is too complicated, my data is too rich, to be suitable for a Registered Report. We often come back to the idea of including decision trees, or if-then contingencies: if the data falls into category A, you're going to do one thing, and if it falls into category B, you're going to do another, all along the way, whether that means defining the electrodes that you're going to use, or excluding participants after you know something about their preliminary data, things along those lines. One thing we want to highlight is that, of course, sometimes you're going to have to exclude data if it doesn't meet a set of conditions; but you really want to aim to minimize, without compromising your science, the inclusion and exclusion criteria that will hinder your ability to complete the project without actually adding anything to its validity. You don't want to exclude data unnecessarily. As I alluded to at the beginning, one thing we've learned after the review process, while conducting our study, is that in an effort to make a really compelling case we built in a bunch of very specific criteria, and now, in practice, we're finding that some of those may have been overspecified. So it's worth it for both authors and reviewers to think about which criteria are actually essential to supporting the hypotheses and making the research trustworthy, without imposing unnecessary methodological constraints that will turn this into a much more rigid and exhausting process than it needs to be. We think this is one of the points where, as a reviewer, you can really bring a lot to the table, whether through domain expertise or just an outside perspective: saying, well, maybe you haven't considered this unknown, or maybe you need a contingency to deal with this, or maybe this is more restrictive than it needs to be; just bringing a new set of eyes to the manuscript. One of the best things about Stage 1 review is that you have a chance to influence and improve the actual research question and protocol from the beginning. So again, the overarching question here is: is the proposal doable, and if not, what can you change? And the last thing we want to convey is that there are always going to be unknown unknowns as well, and some of these might be as mundane as equipment breaking. Some of them, for us, have been comical, and have involved our equipment breaking right in the middle of a set of data collection, and paying $400 to have an empty box shipped to us so that we could ship the equipment back to the company overnight. These hurdles happen in all sorts of research; everyone who's done a research project knows that.
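Circling back to the if-then contingency idea from a moment ago: a decision rule written out in advance can be as plain as the sketch below. The thresholds and criteria are entirely invented; the point is that nothing is left to decide after seeing the data:

    # A pre-registered if-then contingency, written so there is nothing
    # left to decide after seeing the data. All thresholds are hypothetical.
    def participant_decision(snr, accuracy, n_usable_trials):
        if snr < 2.0:
            return "exclude: data quality below pre-registered SNR floor"
        if accuracy < 0.55:
            return "exclude: at-chance performance (manipulation check failed)"
        if n_usable_trials < 200:
            return "recollect: run the supplementary session defined in protocol"
        return "include"

    print(participant_decision(snr=3.1, accuracy=0.48, n_usable_trials=350))
    # -> exclude: at-chance performance (manipulation check failed)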
The mindset to have going in, though, is just to know that these things are going to happen: your timeline might be a little more complicated than you anticipated, and you will have to continue to make decisions as you go along. So you adhere as best as possible to the actual protocol, and make the research question as valuable as possible. And one final point, to address the question that a lot of people have about the suitability of their work for the format: as long as you're doing hypothesis-driven research, as long as you can say what you're going to do, what you expect to find, and how you're going to test it, then this format can work for you. So the conclusion that we like to draw, both from our experience and our primer, is that yes, this is a challenging new format, but ultimately it's worth it. So thanks for your attention. There are some links to the resources we talked about, and we're happy to take questions at this point.

I'm glad you gave proper credit to the empty box. Yes, very good; a busy and expensive three days. All right, let's jump into questions. There are a lot of them; we'll do our best to get to them all, and I've got my own if there's time at the end. Let's go. You mentioned that pilot data shouldn't be used to inform effect sizes or be added to the main analyses, and that it's mainly about feasibility. Anything else you wanted to add about what role pilot data should play in the Registered Report process?

On me? Okay. I don't have that much to add, except that, now being in the midst, or near the end, of conducting this study, I think it's great to have pilot data just to establish the protocol's feasibility. So that would be a strong recommendation for anyone undertaking this kind of thing: really make sure that you've worked through every part of the protocol, because you can only simulate so much of what might happen in the data collection. It also depends on what sort of procedure you're doing; it may be hard to collect pilot data on a non-human primate if the project involves a whole bunch of training, but using some pilot data was really useful for us. I did just mention, though, that there are a lot of challenges in using it to inform power analyses, which I think is an instinct that a lot of people have from past practice; people who know a lot more about power analyses than we do have explained why that's often really not a good approach. We include some links to that, and there's a great literature out there if you want to go into it. I'd probably fold pilot data into your power section if it corroborates your argument there. Yeah, it can provide nice converging evidence, but you don't want to collect five participants and say: here's our effect.

I just would say I agree with that. All I would add is that pilot data is great in a Registered Report; it's not essential, but it is good if you want to confirm that your protocol won't catch fire. It's the smoke test, really. Particularly if you're doing complex methods, some kind of imaging or any kind of complex analysis pipeline, it's about really making sure the protocol doesn't fall over and that it works as intended, not about testing the hypotheses. As Jason rightly says, we shouldn't ideally be using pilot data to perform power analyses.
It does happen occasionally, but it's not ideal. It's really there to check that the protocol can get off the ground. Right. The next three are close follow-ups to those types of questions. To what degree is any existing data appropriate for a Registered Report submission? And do you have anything more to say about the smallest effect size of interest, a concept that's been floating around, particularly when using non-standardized variables? The question mentions link functions as one example of a non-standardized variable.

I'll just very quickly answer that from my point of view. The smallest effect size of interest is, of course, notoriously difficult to define in many of the fields we work in, because we usually don't have precise enough theory to tell us what it actually is, compared to if you're working in, say, the physical sciences. So often what authors do is power to the lower bound of the published effect sizes, to take into account inflation due to publication bias; or they might simply power to a criterion effect within the classically defined small-to-medium range, which is also acceptable. There are many ways of doing this. The main thing is that it's justified, so it doesn't just come out of nowhere; it's something that you as an author can justify to the reader. In terms of the question from Rad about what should happen if you've got a series of preliminary experiments, there are a number of ways of doing this. One way is to put them all in the Stage 1 proposal. We've had a number of submissions over the years where authors have had one, two, even three preliminary studies in their Stage 1 manuscript. The purpose of these is to test hypotheses, but not the same ones: they're using them to lay the ground for the protocol, which will seal the deal. These preliminary studies might suggest a number of different possible explanations for an important effect, or implications for a theory, and then the proposed protocol is the one that's designed to actually answer that question. At the end of this process, if successful, you end up with an article that has three preliminary studies and one registered protocol, effectively, but the overall article is badged as a Registered Report. I think that basically covers it. We included, as just one of the converging points for our sampling plan and power analysis, the smallest effect size of interest from a theoretical perspective, taking the effect sizes we were working with and saying what they would translate to in our task, inspired by the medical literature, where some of those ideas come from.

Do you see Registered Reports as favoring hypothesis testing over activities such as parameter estimation? Yeah, this is a good question. I think the default model of the Registered Report is indeed hypothesis testing, because that is the dominant form of investigation in many of the fields we work in, and the format is really designed to eliminate bias from that mode of operation. However, reading that question, I can see a scenario in which a very rigorously pre-registered and well-justified study proposing to just measure a parameter very accurately could indeed qualify as a Registered Report. It's not necessarily an essential ingredient to have a hypothesis that's defined in a classical way. Parameter estimation is an option; it's just unusual.
I can probably think of maybe one out of the 150 or so that I've edited. It's unusual, but it's not off the cards. So if you've got an idea like that, step back from all this for a minute and ask: what is the point of this format? It's to eliminate bias, researcher bias and publication bias. Whatever method you're proposing, if there is a risk that the inference could be corrupted by bias, then there is a potential role for this format to play. If it falls a little outside the typical policy, it might just require that you talk to the editor first by sending a pre-submission inquiry.

Yeah, we just want to add something that's somewhat tangentially related, about our experience with hypotheses: you don't necessarily need to favor one outcome over another. We went in pretty agnostic as to the outcome, so we weren't sure at first whether this was appropriate for a Registered Report, because we weren't testing one specific theory, our theory versus some alternative. We were basically just trying to adjudicate between a bunch of different theories and a bunch of alternative viewpoints. And so we thought: okay, we want to see what happens. It's just important that you lay out what the different theories are and which patterns of results would support which theory. So ours comes in the form of: this theory says this, that theory says that; if we find this is greater than that, it supports theory A, but if we find that that is greater than this, it supports theory B. It's a subtle point, but I think a lot of people might believe they have to go in testing some specific hypothesis, when really it's about limiting the ways that the data can be interpreted after the fact.

What about methodological details that belong in the body of the paper versus the supplemental materials? From an editorial perspective, there are no hard rules on this. In general, I favor as much as possible being in the paper, because that's what people read and that's the information they need. It's easy for supplementary information to become a kind of miscellaneous pile, and reviewers may not pay as much attention to it. So my default advice would be: put as much in the paper as you can. Supplementary information might be good for digital materials that you don't want to stick in the method section, like questionnaires or stuff like that. But generally, for any kind of methodological detail that someone would need to replicate the study, I would put it in the method section first. Then, if the editor and reviewers turn around and say, look, this is just too much, we don't need all of this, let's put some of it in the supplementary information, you can have that negotiation at that point.

I'm going to split Alexander's question into two separate questions. Can Registered Reports be applied to observational research, so non-experimental studies? And also, what if the study involves the use of already existing data? So which one do you want me to answer? Take the first one, Chris. So the first one was observational research, non-experimental. Obviously the answer there is yes, because observational research can be deductive and hypothesis-driven. An obvious way this can happen is if there's an existing database out there which the authors haven't looked at.
A good example is the ABCD project, the really large study of children in the United States. We're taking submissions at Cortex for proposals to interrogate that data set at its next release, and also to generate hypotheses from analyses of data that's already been released. So that's purely observational and very, very amenable to the format. In general, there's no requirement that a Registered Report be an experiment. It can be any kind of research study, usually quantitative, although qualitative variants are being proposed; usually it just has to be a quantitative research exercise involving hypothesis testing, and it doesn't matter whether it's an experiment or an observational study.

Yeah, I'm not sure we have that much more to add, because as an editor you have more expertise on that. But I can imagine several different ways that, as long as you haven't seen the outcomes of the tests that you want to do, you can use existing data to ask a new question, as long as you can demonstrate that you're naive to it. Just looking at the different policies, it seems different journals have different rules for whether they will accept already collected data. I think it's going to be challenging, because so much of our field of neuroscience research now involves analyses applied to non-human primate data that was collected years ago, maybe in the same lab by a different person, maybe in a different lab, and also these massive data sets that are being compiled, even connectome projects and the like. So there are challenges, but also I think we can find a lot of benefits in this, right? Yeah, I think that would be a great area to focus on in terms of the evolution of the format: how to guide researchers who are interested in pursuing the Registered Report route for that kind of data. I'm sure a lot of people aren't sure how to use it for that kind of data, so that would be a really useful area to work on expanding.

Yeah, I agree. And also, if you go to the Registered Reports knowledge base, which I think David sent the link around for, there's a link in there to a policy feature spreadsheet which shows every adopting journal. I think it's column seven that shows whether or not the journal offers Registered Reports for existing data. Most of them do, and I get quite a few submissions like this; not the majority, maybe two out of ten, along these lines. There are constraints: usually it's not appropriate where the authors themselves have collected the data and just haven't analyzed it yet, because it's very difficult to establish proper bias control in that situation. It works better when there's a data set they haven't seen yet and they've got some questions they want to interrogate it with. But it can work in many scenarios; it just requires the authors to really lay out their steps for controlling bias.

All right, one last question, and I'm going to combine mine with it because we are close to time here. Can a single Stage 1 protocol be used for multiple Stage 2 reports? And if you have time to answer this too: there's been a lot of discussion about how to report deviations and what the implications are. Can you give a principle for what appropriate justifications are, and what types of deviations change the inferential value of the final results?
Well, to quickly answer the first question: it's pretty much no at the moment. None of the journals will allow multiple Stage 2 Registered Reports from a single accepted Stage 1 protocol; at the moment it's packaged as one article. So if you're in a situation where you do have a lot to say from your results, my advice would be to submit multiple Stage 1 protocols. There could be a lot of overlapping information, they might cross-reference each other, and they might even go to the same journal and be reviewed together. But at the moment, that facility doesn't exist. It is being proposed in ecology: there's a peer community initiative, a platform for reviewing preprints, which has a model of Registered Reports that does allow this. We haven't got that far yet in psychology and neuroscience, the fields I work in, but it's an interesting idea and in theory it's possible. The one danger with it, the reason it doesn't exist, is the risk of selective reporting. If one protocol gets accepted and then yields multiple Stage 2 articles, it could be quite tricky to track whether all of the outcomes are reported transparently across the set of papers, particularly if they're published at different times. So there are some logistical challenges in ensuring adequate bias control in that situation.

And then the second question was deviations: how did you deal with deviations, how do you transparently report them, and what are the implications of major or minor deviations? Well, that's a good question that we're still dealing with, because on a few issues, like the selection of our targets for brain stimulation, we realized part of the way through that we could have done that process differently. But we spoke to our editor and she said: this is one of the areas where there really is no room for deviation; these sorts of decisions are the ones that you really need to stick with throughout the study. To avoid unintended bias. Yeah, exactly, because by then we'd already been collecting data and seeing how it was turning out.

Yeah, it all boils down to risk of bias, right? Any deviation could in theory increase the risk of bias. Some deviations carry virtually zero risk, like changing a bit of equipment; they're completely benign. Other ones genuinely can increase the risk of bias. So it's really up to the editor's judgment, and the author's judgment as well, as to which ones are sufficiently large that they would disturb the process to too great an extent. I've only had one case, in the 170 that I've edited, where we've had to say no to a deviation. That was an EEG study where the authors set a very, very reasonable exclusion criterion on their data; when they started collecting, they realized their environment was noisier, in terms of the signal acquisition, than they expected, and so they were excluding quite a lot of data. After very careful consideration, we decided we couldn't allow them to change that criterion; they just had to collect more data. But what they did have at the end was a really amazing data set that they did all sorts of exploratory analyses on. It was more work for them, but it's a much better paper for it. So there are cases where it is difficult and a judgment has to be made. That actually sounds exactly like our situation. I'll say, just from our perspective, one thing that we hope becomes more incorporated into the process moving forward is
some sort of dialogue that doesn't put too much burden on the editors and the reviewers, but gives us a way to interact and say: what if we make this change moving forward, or how should we handle this deviation when we're only 10% of the way through data collection? Some way to do it incrementally through the submission system, without submitting an entire new manuscript and waiting a month for each exchange.

Thank you everyone. Thank you, attendees, and thank you Chris, Anastasia and Jason for presenting and sharing your knowledge with us. We are just past time, so we're going to go ahead and wrap it up and sign off. Thank you very much. Thanks, David. Thanks, Chris.