Hi everybody. My name is Noah. I'm a Principal Research Scientist at the Center for Open Science, and I am really excited to be talking about this today. This is a long-running project at the Center for Open Science, and it represents all kinds of interesting innovations in journal policy experimentation. One of the really interesting things here is that so much of what we're doing is new, has never been done before, that there really isn't language for it yet. "Registered revisions" is a new policy; a "metatrial" is a fairly new concept for how to actually run some of these things. There isn't a great way to describe a lot of this with our existing vocabulary. So, to describe what we're doing today, there are really three things happening all at once, which is what makes this so interesting to talk about. At the small scale, the immediate thing is a policy: registered revisions, which is an in-peer-review pre-commitment device. We'll talk about what we mean by commitment devices and "in peer review" in just a minute. Second, we're talking about a new evidence framework. The phrase "prospective backward meta-analysis" sounds pretty wild because it's something that hasn't really been done before. And third, within that evidence framework, we're talking about an experimental design, where we're trying something new that we're calling a "study in a kit" for within-journal randomized experiments, which has also never been done before. When you step back, all three of these together lead to what we think of as a path forward for evidence-based journal reform. Journals have been relatively slow to change over time, in part because we haven't had evidence for any of these new policies and ideas, and we're hoping to solve that to some degree, or at least move things forward quite a bit, with this project. There will be a question-and-answer section at the end, but please feel free to post things in the Q&A throughout. We'll probably hold most of them until the end, but if something is immediate, we'll try to answer it live. To talk a little more about the registered revisions policy itself, and pre-commitment devices more broadly, I'm going to turn it over to Macy Dilly, the project coordinator for this project.

Awesome, thank you. So we'll start by going through the typical publication process, and then compare it with the process under pre-commitment devices, and then with registered revisions. The traditional publication model, though long considered a gold standard, has some known negative effects on research. For example, there are strong pressures for authors to achieve preferred results or statistical significance. It can also create situations that encourage publication bias and questionable research practices like p-hacking, HARKing, and more. And the point at which peer review occurs, after authors have done the most costly portions of their work, is perhaps an ill-timed moment to receive important feedback. Next slide, please. Pre-commitment devices attempt to address some of these concerns and improve research by enhancing planning and mitigating the pressure to achieve those preferred results, which in turn can help reduce questionable research practices and maybe even publication bias.
And they can help put the focus on methods before results. To clarify, "pre-commitment device" is a term we've started using to describe methods that require some level of commitment to a research plan. Some common pre-commitment devices include preregistration, where you register your research question and plan, and registered reports, which take that a step further: you submit that registration to a journal, which reviews it, and after receiving in-principle acceptance from the journal, you conduct your analyses, collect your data, write the manuscript, and submit it back to the journal, which then reviews the manuscript based on its adherence to that registration.

Registered revisions is a new kind of pre-commitment device in that it also attempts to address some concerns within the traditional publication model. In this case, it's like a mini registered report that occurs within that traditional model. In the typical model, reviewers can sometimes ask authors to collect more data or conduct new analyses and add them to the manuscript. Usually, when a request like that is made, authors are very close to getting published, and that kind of pressure can create an environment where questionable research practices slip in. On top of that, there's a level of uncertainty that amplifies that environment: even if authors do submit more data or a great additional analysis, they can't tell whether it's going to lead to a publication. Next slide, please.

To mitigate those concerns, authors can register those large revisions before they conduct the analyses or collect additional data, in the same way one would with a registered report. In other words, after a reviewer makes a request for more data or analyses, the authors first submit a revision plan, which undergoes review. After coming to an agreement with the reviewers, the authors then do those analyses or collect that data, and the reviewers assess that specific revision according to its adherence to the revision plan. Next slide.

So, in theory, registered revisions could decrease some of the uncertainty in publication outcomes by giving authors a better understanding of what's expected of them in order to get published. They're likely to have some effect on publication timelines; hopefully they'll shorten them by providing a little more structure to the review process. They could also decrease publication-related biases and questionable research practices, and hopefully focus reviews on the strength and quality of the methods. Registered revisions could be an influential journal policy, but we have no evidence on its effects and no simple way to test for them. So I'm going to pass it back to Noah to go over our trial design and how we're going to try to do just that.

Great, thank you, Macy. So we're going to be talking about two levels of trial design, and we're going to start at the top level, the big one: the meta-study design. We call this a meta trial, or a backwards meta-analysis; we'll talk about what that means here. To walk back just a little, I want to talk about evidence-based policy in journals in general. There's been a pervasive circular logic for how we actually go about journal reform, which is that we need evidence in order to make reforms, but we have this problem of, well, we have no evidence.
So nobody experiments with new policies, so there's no evidence, so there are no new policies, and we're stuck in that circle, which seems pretty perpetual. And in truth, there has never actually been a journal policy experiment, randomized or otherwise, or at least not a formal one, as far as we're aware. One of the key reasons is that experiments, particularly in this area, are really hard. There are a huge number of design challenges on top of all of the social challenges. You need pretty big sample sizes to do this. You need a huge amount of coordination across journals: if you want a bunch of journals participating, which you really do to get that sample size, you need journals that are all working at once, and that's really, really hard. Timelines are tough. It's a huge amount of work for very little reward for the people actually running these things; you might end up the eighteenth author on a paper for effectively running a mini RCT, and that's a tough sell. There are ethical issues with participants, since you're randomizing people within a journal policy, and that can be ethically tricky. And then there's just logistics: it is incredibly complex to run a trial like this. There have been some previous attempts. A previous version of this study, for example, was run a while ago, and it stalled out in large part because these design challenges were really difficult to overcome.

So we have started over and really revised the central idea of how we're going to go about this, and the central framework for evidence. The big idea here is that we're doing a meta-analysis, but we're doing it backwards. We're starting from the universe in which we have the kind of data we would want in order to create useful evidence for registered revisions, and really this applies to any journal policy. Ideally, what we'd want is a ton of different experiments, all done the way they would actually be done and rolled out in the real world, and we'd put all of those together into a really nice meta-analysis. We don't necessarily want one study; we want a lot of studies. That would be way better. If that could happen, it would represent a much more comprehensive view of things and a much more practical approach: a pragmatic approach, if you're familiar with that language, or a more naturalistic approach to evidence. We're not forcing quite as much here; we're not forcing journals to all have the same policies. But we need a way to actually produce those studies. This is a new framework for thinking about things: we're starting from the end and backward inducing. We have this idea that we're going to do a meta-analysis, and now we want to create the studies that make up that meta-analysis and encourage them to exist. So we're calling this a semi-centralized meta-analysis done backwards; again, we're back to that language problem, because there really isn't a good way to describe what we're doing. What this allows us to do is share a lot between all of these different experiments: shared experience, designs, and resources. And it helps align incentives.
That's because the journals that are running these trials now have a good incentive to actually participate: authorship and the other scientific currency that comes with it. We also get much stronger scientific inference out of it, because it's much more useful to have these policies rolled out in a less artificial manner, and that lets us use those data in a much more useful and practical way. We can actually say: our meta-analysis gives you an idea of what might happen in your journal if you were to try this.

The way this works is a partnership between the Center for Open Science, that's us, and partner journals and journal consortia. On the Center for Open Science side, we're providing the boilerplate. We like to call it a "study in a kit," and we'll talk a lot more about what's actually in that kit in just a minute. The idea is that we have a full package, including options: you have the protocol, you have all the materials and infrastructure you need, you have the data analysis design, the IRB pathway is all taken care of, and there's a ton of support along the way. And because it's a kit, you can customize it to exactly how you want to run things in your journal and tweak it to your heart's content. On the journal and journal consortia side, you take that study in a kit, and it's yours. It's not a COS experiment; it is the journal's experiment. It's your experiment, your data, your policy, your publications; you own all of it. So there's every reason to do this, because you are effectively the first author and so on. Each of these randomized trials is specific to each journal's individual needs and preferences. There are a lot of options in how to roll this out: the specifics of the intervention, meaning exactly how you go about a registered-revisions-type policy; the schedule and timelines, so you're on your own timeline; and exactly which outcomes you're interested in. At the end of the day, you have full ownership of your own data and publications, plus authorship on any COS-led publications that come out of this, such as the meta-analysis. So there's actually a lot of authorship involved here, and it's a great opportunity, particularly for any journal editor who's interested in journal policy experimentation.

So that's the broad idea. At the lower level, what does each individual study look like? Overall it's a fairly simple design: a simple two-arm design. You start with eligibility screening. At the point at which a peer review comment comes in, a screener, which could be an editor, a dedicated screening editor, or a third party, checks the trial-specific eligibility criteria. In some journals there's a two-level editorial process, where a handling editor looks at the peer review reports, handles things, makes some decision, and then hands it off to a senior team or something like that; that might be a natural place for a screening editor to quickly screen for any trial-specific eligibility criteria. Or it could be handled by the handling editors themselves, or by a third party. There are many, many options here.
What they're looking for is any peer reviewer request for either new data collection, as in "it would be great if you could run these additional experiments," or a new analysis of existing data, as in "we want you to run this statistical test" or "we want to see this additional analysis," something along those lines. Either of these works: any request for new data or a new analysis is a qualifying event for our eligibility criteria. The participants, who are the manuscript authors themselves, can then be enrolled and randomized. One notable thing here, which we can talk more about, is that informed consent is interesting: it can take place even before eligibility screening. It does not have to be "first determine eligibility, then obtain consent"; you can have people pre-consent into the trial. That actually presents some really interesting options for other experimentation, such as having people pre-consent to journal experiments in general at the point of manuscript submission, but we can talk more about that in the Q&A.

So at this point people are enrolled, and they are randomized into one of two arms. The first arm is the standard-procedures arm, in which the editors just do whatever they would normally do for that journal and that peer review process; it's the standard of care, if you're familiar with that medical trial terminology. The other arm is the registered revisions arm. If a manuscript is randomized to that arm, the authors are asked to write a revision plan: what they plan to do to address those new data or analysis requests. The editors and/or reviewers (how exactly this works is largely up to the journal running the trial) review the plan before implementation. If they agree that the plan sufficiently answers the questions, they agree to in-principle accept the manuscript as long as those tests are performed as described, and then they tell the authors: go ahead, do the work, report back, and it's basically accepted as written. So it removes that uncertainty. It could also remove many of those back-and-forth rounds of review that often happen with new data requests, so hopefully we're going to see some time reductions; but it also adds a step in the middle, so who knows. The follow-up is fairly straightforward; the main pieces, which we'll talk about, are editorial events and milestones and survey data.

So the trial design itself is fairly straightforward, just a simple two-arm design. Data collection covers a couple of things. The really big one, the primary data collection, is about logging key editorial and submission events: when revisions or revision plans are submitted, when the reviews come back to the journal, when they're sent back out to the authors, and acceptance and rejection events, meaning the date and result of editors deciding "yes, we'll accept this," "yes, we'll in-principle accept this," "yes, it gets published," those sorts of things. It's really an event-based data collection effort; we'll talk about the analysis in a moment.
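To make that event-based logging concrete, here is a minimal sketch of what a per-manuscript event record and the two-arm assignment could look like. This is purely illustrative: the field names, event types, and the simple coin-flip randomization shown here are assumptions made for the example, not the actual centralized system.

# Illustrative sketch only: field names, event types, and the assignment scheme
# are assumptions, not the actual trial infrastructure.
import random
from dataclasses import dataclass
from datetime import date

ARMS = ("standard", "registered_revisions")

@dataclass
class EditorialEvent:
    record_id: str    # trial-specific manuscript / participant ID
    event_type: str   # e.g. "qualifying_review_received", "revision_plan_submitted",
                      # "in_principle_acceptance", "final_decision"
    event_date: date
    arm: str          # arm the manuscript was randomized to

def randomize_arm() -> str:
    """Assign an enrolled, consented manuscript to one of the two arms."""
    return random.choice(ARMS)

# Example: a qualifying reviewer request arrives and gets logged.
arm = randomize_arm()
event = EditorialEvent("RR-0042", "qualifying_review_received", date(2024, 5, 1), arm)
print(event)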
On top of that, we want some survey data. One of the really big things there is satisfaction. We actually want to make sure that this is a good policy: are people happy with it? People are generally pretty unhappy with the peer review process, so maybe this reduces uncertainty and makes people happier about it by increasing clarity and improving the overall experience. Or it could be that these additional steps are too hard, or too tough for people to understand. We really want to know how people actually feel about the experience of doing this. And then there are manuscript outcomes; we really want to know some scientific outcomes. We talked a little about how this might reduce questionable research practices or the dropping of null results, so we actually want to see things like: does the interpretation of findings change between the two arms? Do we get more null results in the registered revisions arm than in the other arm? We want to know a few things about the actual scientific results. Do we get a more reliable literature from this?

The analysis is fairly straightforward as well. The primary outcomes are those times. We really want to know whether this changes the process in a way that's measurable in time, particularly the time from the review round to the final decision: how long does it take to get to the end? We have a multi-stage process here, and the way we're going to analyze it is with what is effectively a Kaplan-Meier type analysis; in this particular case, the stage-by-stage design is sometimes called a longitudinal cascade design. That lets us measure the stage-by-stage time between all of these stages and get more nuanced data on how long things take and where the effects are. We also have some secondary outcomes. We're interested in those measures of statistical significance and uncertainty. We want to know the proportion of articles accepted: do we get more articles accepted in the registered revisions arm, or fewer? The raw number is useful by itself, and the character of the articles that are accepted and rejected is also super useful. We want to know the subjective experience with the registered revisions process versus the typical process. And, really importantly, anything the journal itself is interested in, we'd like to know that too. There's a lot of flexibility for adding survey questions or additional qualitative information; everybody wants to know something a little bit different, and we'd really like to provide the opportunity to do that.

And to do all of that, we're going to introduce this new idea of the study in a kit. This has also never really been done, at least not at this level. What we want to do is provide all of the materials and everything that a journal would need to actually perform a good trial, one that is robust, useful, and compatible with the meta-analysis and all of the other good things that come with sharing in big-team science. At the very bottom level, we have a choose-your-own-adventure protocol. If you read any of those Goosebumps-style books when you were a kid, where you choose this and turn to page whatever, that's what we're doing for the protocol. We have a fully written trial protocol for all of these journals, and there are many, many options.
Option one might be how and when the informed consent process takes place; option two, the different ways you can actually implement the registered revisions policy; and so on. It's a full protocol. It is prewritten, and you fill in exactly the pieces that make it work for your journal and the versions you're interested in. Some of those choices are how and when to ask for consent, who does the screening review for the trial and how it happens, the details of the intervention implementation, how data sharing and access work, and, again, the survey questions and data collected. And when we say it's prewritten and preapproved, we really do mean preapproved: we have built-in, preapproved IRB coverage, which is also something that has never really been done before. The University of Virginia IRB has approved a protocol that covers basically every journal that participates in this project. So every journal that is doing its own trial is already IRB approved; they do not need to go and get a new IRB approval for this. It's done. There are some caveats. It does require that you formally partner with COS, and that we handle some things like human subjects ethics training materials. And it's automatically approved as long as the trial fits within the options we have listed; if a journal is interested in options that are not listed, which is going to happen quite a bit, it's relatively easy for us to modify those options in, so we don't have to start a new IRB process for every journal. That said, anybody partnering with us does have their own institutional requirements, so partners are responsible for their own institutional ethics requirements and so on.

We're also sharing everything. We have the prewritten analysis ready, the centralized protocol, and so on, so the data and analysis side is actually relatively easy; again, there's no reason to reinvent the wheel every time. We have a centralized data collection system at COS that handles all of the data collection and makes it all compatible, which is really great for the meta-analysis but also really great for the journals running these trials: it makes data access easy for them, and the centralized dataset will be good to go. We have a common core of analytic code, so we can say, here's the main analysis, and here's the code to actually perform it; it'll be compatible with all the other journals and with the meta-analysis. We'll have pre-made visualizations ready to go; again, we don't own those, they're templates, prompts for things we might suggest be analyzed and included in any publications that come out of this. And interpretation notes: some of these things are genuinely tough to interpret, and it's great to have guidance on how to describe a particular key statistic. All of the data cleaning is handled centrally too. We're really reducing the workload needed to make something good, and because we're doing these things all at once and fairly centrally, it also ensures some regularity and good coding practices.
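As a rough illustration of what one piece of that common analytic core could look like, here is a minimal sketch of the time-to-decision comparison described a moment ago. It assumes the Python lifelines package and a hypothetical per-manuscript table with made-up column names (days_to_decision, decided, arm); the actual shared code and data schema may look quite different.

# Minimal sketch of a Kaplan-Meier style comparison of the two arms.
# Assumes a hypothetical per-manuscript table with columns:
#   days_to_decision  - days from the qualifying review round to the final decision
#   decided           - 1 if a final decision was observed, 0 if still pending (censored)
#   arm               - "registered_revisions" or "standard"
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("trial_events.csv")  # hypothetical export from the central data system

# Estimate and report the median time to decision in each arm.
kmf = KaplanMeierFitter()
for arm, group in df.groupby("arm"):
    kmf.fit(group["days_to_decision"], event_observed=group["decided"], label=arm)
    print(arm, "median days to decision:", kmf.median_survival_time_)

# Compare the two arms with a log-rank test.
rr = df[df["arm"] == "registered_revisions"]
std = df[df["arm"] == "standard"]
result = logrank_test(rr["days_to_decision"], std["days_to_decision"],
                      event_observed_A=rr["decided"], event_observed_B=std["decided"])
print("log-rank p-value:", result.p_value)

The log-rank comparison here is just one reasonable way to compare arms on a time-to-event outcome; the project's own protocol specifies the actual analysis, including the stage-by-stage (longitudinal cascade) breakdown.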
And of course, we're providing a ton of workflow information. The protocol includes things like workflow diagrams and instructions, and we'll have training materials for editors and investigators: webinars, meetings, trainings, really a lot of guidance for how to actually go about doing this trial. On the screen, for example, is the basic workflow for how participants and editors flow through the registered revisions or standard review procedures in the trial. It's a flow diagram that describes more or less all of the possible outcomes, slightly more complicated than the one shown earlier.

And of course, we have partially automated or guided workflows. This is really, really important: we don't want to be creating additional work here, so we automate as much as we possibly can and make things easy for folks. One piece of that is centralized data logging and management. One huge and very reasonable concern we hear from journal editors is that editorial software is pretty terrible in general, and it sounds like this policy would require editing the editorial software itself, since that software usually handles all of these things. We want to avoid that. We want a system that doesn't require any of that, one that is agnostic to whatever editorial system a journal is using. So here's an example of one way we're automating things and making them easy. What you're seeing on the screen is a screenshot of a web app: the notes-field generator. You pick your journal from a dropdown menu, and for each participant or potential participant in the trial, it generates a block of text, which we're calling the registered revisions trial participant administrative portal, that you copy and paste into the notes field of your editorial software. That block tells you the participant or record ID; it gives a link to the informed consent form for that particular manuscript, which you can send along to the manuscript authors; a link for logging the events specific to this manuscript, which I'll show you in just a moment; and a link to the participant status, which gives you an update on exactly where that participant is and their arm assignment. That all just sits in the notes field, and all of those links go out to the centralized portal, which makes it really easy. Every editorial software has a notes field, so we can do it that way: automate this, make it as easy as possible, and not send people off to external systems, at least not in a cumbersome way. Right now we're doing this in Google Forms, so that consent link points to the particular participant's informed consent form for that specific journal, since every journal has a slightly different form, and that's how authors can enroll. There are, by the way, other ways of handling informed consent; we can talk about those later if there are questions.
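As a rough illustration, and not the actual generator, here is a sketch of the kind of block that gets pasted into the notes field. The URL, wording, and field names below are placeholders, purely hypothetical.

# Hypothetical sketch of the kind of block the notes-field generator produces.
# The real portal links and wording will differ; the URL below is a placeholder.
def make_notes_block(journal: str, record_id: str) -> str:
    base = "https://example.org/rr-portal"  # placeholder, not the real portal address
    return (
        f"=== Registered Revisions Trial: Participant Administrative Portal ===\n"
        f"Journal: {journal}\n"
        f"Record ID: {record_id}\n"
        f"Informed consent form (send to authors): {base}/{record_id}/consent\n"
        f"Log an editorial event: {base}/{record_id}/log-event\n"
        f"Participant status and arm assignment: {base}/{record_id}/status\n"
    )

print(make_notes_block("Example Journal", "RR-0042"))

Keeping everything in a plain text block like this is part of what makes the approach agnostic to whichever editorial system a journal happens to use.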
And for the editors themselves, or the handling investigators, there's a form for logging events: journal acceptance or rejection, administrative events, and so on. The big and most important thing here is those event log data, so this provides a centralized, easy-to-access logging process. And of course, all of this logistics, all of these things that have never been done before, are new. So, really importantly, we have a pilot phase that's happening right now, and to talk about that, I'll turn it over to Macy.

Thank you. Yeah. So since the study design is rather intricate, and since there are a lot of potential real-world obstacles, we aim to run multiple test versions of the trial with our partner journals and iterate on our experiences. This pilot is going to be a pathfinding project that will test and develop the management infrastructure, write lots of guidance, explore different variations, and simply go through the full process of the design, and of course document what worked and what didn't. The pilot will result in the more robust infrastructure needed to implement these trials, lots of documented guidance on what went well and what didn't, and at least one manuscript describing the process, the experiments, and maybe even an early look at the results. And at the end of the day, probably the most important facet of this pilot is going to be the pilot journal editors, who are ambassadors of journal policy experimentation and who are really going to pave the way for other journals to follow suit. I'll pass it back to Noah to wrap things up.

All right. So we've talked about a lot here, and I want to gather it all together and talk a little about what we're really doing. We've talked a lot about the policy itself: registered revisions, this in-peer-review pre-commitment device that's similar to registered reports. We think it might be a really nice way to do a little better within the traditional peer review model, and a way of getting used to pre-commitment devices and improving the literature that way. So that's a useful policy we'd like to experiment with. We've talked about an evidence framework. This is a new idea, this prospective backward meta-analysis: starting from the end, working backwards, and creating the evidence from there. There's a really interesting idea in here, which is that at some point or another we decided that, in science, the unit of science is a 3,500-word paper, and that really is an arbitrary distinction. A unit of science can be anything: it could be a collection of things, it could be many individual studies. But we're also taking advantage of the fact that results are traditionally published in those 3,500-word papers, and we want to give people proper credit for running these sorts of experiments. And we've talked about an experimental design: an actual process, procedure, and set of logistics for doing a within-journal randomized experiment, which has never really been done before and is really difficult to do, and the study-in-a-kit idea to help support that.
And we think that all of this together is a path forward in general, not just for registered revisions but for evidence-based policy reform in journals, and we hope it leads to a lot of different things well outside of registered revisions. So where are we right now? We're starting the actual pilot phase at the moment, working through the logistics; we're in the in-between of planning and pilots. We currently have four Pathfinder journals, and we'll announce more very soon. Those four are Evidence-Based Toxicology, Leadership Quarterly, PLOS Biology, and Scientific Reports. We expect to have pilots ongoing in all of those journals. Again, those are our Pathfinders: the logistical pathfinders for this process, for experimenting with all of those little procedures so we can develop the best materials we possibly can and have people who can talk about their experiences with this. That also represents a few different publishers who are supporting this in various capacities. And that rolls into the main phase as we go. We expect all of these Pathfinder journals to roll their projects from pilot into the main phase at some point, and we'll do the same with the main-phase journals: anybody who signs up will have a test period for making sure things work within their journal and then roll into a main-phase project. It's a bit of a nebulous shift, but we're hoping to roll out the main phase pretty soon, hopefully starting this year, and we expect to launch at least one of the pilot journals very soon, we think and hope. We're hoping to get a lot of interesting information out of that.

For more information, by far the best place to get a description of this project, what registered revisions are, and how the whole thing works is our project website at cos.io. We also have an OSF page where we're sharing a lot of the materials: the protocol, the IRB approval, and all of that. And most importantly, we really want to hear from you. We are recruiting journals as we speak, and we would love more pilot journals who are interested in experimenting and pushing boundaries. The most important thing is not necessarily that you're an editor-in-chief. What we're really interested in is journal editors who are interested in experimentation and who would champion this idea of experimenting with change, of actually moving things forward and generating new evidence. So if you are one of those people, and you think this might work in your journal, please get in touch; we would love to talk to you. Email us at noah@cos.io or macy@cos.io.

One thing I'll note here is... oh, actually, there's an incredibly well-timed question from an anonymous attendee about exactly this: the size of the journals. For the pilot itself, the size of the journal doesn't matter very much. The pilots are really a pathfinder mission, so what we want to do is test our systems.
We care much more about the implementation process itself than about getting any actual results at that stage. The main phase is where we're going to learn the answers to those interesting questions about our primary outcomes; the pathfinder missions are really about the implementation. It's also worth noting that at most journals, when you look at your data, you might say, "oh, we never accept new data requests or new analysis requests," and that's maybe not as true as any of us would like to believe. Overt requests for new data or new experiments are probably less common, but there are many, many more requests for new analyses in almost every journal than we often realize. So the eligible sample size is often quite a bit larger than we would assume for any given journal.

So with that, I am going to pop over to Q&A, and we'd love to hear your questions, your thoughts, your criticisms, anything you'd like to know or talk about. You can put that into the Q&A or into the chat; we'll be able to see it. I'll just give that a second. Oh, great. Okay, so there is a question about the process being very different for a request for new data collection versus a new analysis. And yes, those are probably quite different. One of the things that's really interesting about this is that every field is going to be really different. If you're a journal editor, you know your journal best, so you decide what that process might actually look like. There are some general things that will be more or less the same, like the basic structure of a revision plan, but how you actually handle the policy as a journal editor is really up to you. There isn't a universal way of doing this within journals; every field is going to be different, and we want to explore all of those options, so we don't want to restrict things too much. That's one of the pieces where the journals, when they're making their own experiments, are implementing this the way they would implement it in the real world, and we get a little bit better real-world evidence that way. That's one of the advantages of the semi-centralized idea. It's a great question.

Alrighty, sorry, just reading the questions. Great question. Okay, so this is a question about informing the reviewers that this is happening. So yes, there is no such thing as blinding in this. If you're familiar with the phrase "pragmatic trial," there's often no blinding in those either. Part of the effect of interest is in the policy itself, and blinding doesn't really make sense in this context. Everybody needs to know what the policy actually is in order for this to work, and in order to get a more realistic idea of what's happening, people do need to be informed of the policy. So yes, we actually do want that to affect the way reviewers interact with the research itself and how they do their review; that's part of it. I hope that answers your question, I think.
Let me know in the Q&A if that didn't answer your question fully. All right, we've got a long question. Okay, I'll start with the last part because it's a short one: registered revisions don't sound that different from regular publications. Yes, exactly. This is a very incremental change. It's specific to new data requests and new analyses, and it happens within the traditional publication process. This is not like registered reports, which is a whole new format, a whole new type of publication. This is a little step we can take within the existing infrastructure so that we can change things; it's an incremental step. It's also maybe a way for people to get used to these pre-commitment devices, so that journals or authors might become more interested in doing things like registered reports, which are more comprehensive policies. So yes, it's not that different, and in some sense that's exactly the point.

Alrighty, I have a long one; it's going to take me a second to read. Okay, so there's another question about the pragmatics here, which is a great one: how do you actually implement this study in ScholarOne? So let me roll back here. For things like the preliminary acceptance, we're keeping everything in that portal block; that's what it's for. The idea is that you don't need to implement this into ScholarOne or anything like that. All of that information is contained really neatly in the text portal block itself, with links out to all of this stuff. We'll have various ways of editing that as you go, but for the most part, we keep it pretty simple. Now, ideally, obviously, we'd love to have something like a dedicated system for this, and maybe that could happen in the long run, but for right now we're going to try to keep it simple. There is a possibility, and we're getting into the weeds a little bit here, that in some journal systems it might be more feasible, and we would love to experiment with this, to submit the revision that's specific to the data analyses into the editorial system as a registered report. That's specific to the editorial system itself: it's not a full registered report, it just might be able to take advantage of the fact that some of these systems have an existing registered report workflow built in that is quite good and robust. So there are various ways of doing this, and we work with you to figure out how to implement it best within your journal.

Are we worried about missing data? Yes, to some degree. There are different levels to this. Very large journals, as I alluded to earlier, often have a two-stage process, where a handling editor makes decisions and so on and then passes them off to the senior editorial team to confirm, deny, edit, and all of those sorts of things.
In a very large journal, one of the nice things is that you can have an intermediary between the handling editor stage and the senior editor stage: a dedicated editor, or a few people, who screen for new data requests specific to the trial. They can get in there before it gets to the senior team, or it could be somebody who's involved in the senior team process, so that all manuscripts are handled roughly the same way. Having a dedicated person like that would catch pretty much everything at a journal. Smaller journals have fewer handling editors, so it can maybe be easier for everybody to get on board. But there's also the sense that we want a very realistic rollout, and we actually expect that in the real world we're going to miss a lot of these data requests. We don't want to force it too much, because we want relatively realistic data on what this policy might actually look like. So there's a bit of a balance here, and we hope to explore that space with different versions of this, different sizes of journals, and so on. I hope that answers your question; if not, let us know.

I like this question a fair bit. It's a question about, in some sense, the good faith of the editors, and about what actually gets published through these different pathways. I'm only going to be able to partially answer it, so apologies. Part of the point of this policy is to change what is actually published in the end. We do want to change the papers that come out of these different processes; we don't want to just change timelines. It is of scientific interest to us to have a more reliable literature, and that might include things that would normally end up as null results, or that questionable research practices would turn into non-null results. What we're hoping is that these pre-commitment devices change what is published in the end in a way that is useful and more reliable than the conventional process. I know that only partially answers your question, but we can maybe chat about that more later.

Any other questions? These are great questions, by the way; thank you so much. I'll give it just one more minute. Any practical questions? Any concerns? Any other ways you could see this type of meta project working for other policies or other fields, or anything along those lines? I'll give it one more minute. Right, well, I think... oh, here we go. Oh, it's a really great question, and a complicated question, which I like. So the question is: as an editor-in-chief, I often desk reject underpowered studies that could potentially be eligible for this project. However, the inclusion criteria require that the reviewers ask for more data and/or additional analyses. So I suppose editor-in-chief requests for data or analysis are excluded, am I right? Not necessarily. If an editor asks for new data or analysis, that's still a potentially qualifying eligible event. That's really interesting; we have not written that into our protocols, and maybe we should make it explicit. So in theory, that could be an including event.
At that point, the participant could be enrolled and randomized to one of the two arms, and the qualifying event, in theory, could be the editor's request itself. That's a really great question; I actually think we're going to need to write that into our protocols. Again, though, this is journal specific: if a journal does not want that version of the policy, they don't have to have it. We want to keep this journal specific. That's a really great question, thank you so much.

All right, maybe another thirty seconds for any other thoughts or concerns. All right, then maybe we'll call it there. Please, please email us; we would love to talk to you, even if you just have some ideas or some thoughts. If you're not a journal editor and you think this is interesting in any way, whether that's good interesting or terrible interesting, we'd love to know; we'd love to hear from you. This is a very strange project involving a lot of things that nobody has ever done, and we certainly have not gotten everything right at this point, so we would love to hear from you at this stage. And with that, I will go ahead and close this webinar. Thank you so much.