Thanks. And then we are going to have an introduction to the template from Gjalt-Jorn and Olmo, who are representing the team that developed this template. So I will turn it over to Gjalt-Jorn to introduce himself, and then Olmo, Mark and myself — we'll just go around real quick. Okay. I'm Gjalt-Jorn Peters, or everybody mostly calls me Gj. It's great that you make the effort to try to pronounce my name, but most people can't, even in the Netherlands, so most people stick to Gj. And I'm an associate professor of methodology and statistics — or M&T, as we say in Dutch — at the Open University of the Netherlands, in the psychology faculty. And my original field was behavior change, where we work quite post-positivistically, basically using whichever methods we need to solve real-world problems. And partly because of this I've done a lot of systematic reviews, and of course as pre-registration becomes more common, I've been wanting to pre-register those. And that didn't always work out for our projects. So that's kind of where this started and why I'm involved. That's great, thank you. Olmo? Yeah. My name is Olmo van den Akker, and I'm just finishing up my PhD at the Meta-Research Center at Tilburg University in the Netherlands as well. And my PhD is about pre-registration, so it's nicely linked to what we've done today. I'm basically assessing the effectiveness of pre-registration in psychology. So that's my main topic. And I just started a new job as well, at the QUEST Center in Berlin. It's my third day, so I'm now on a work retreat actually. So yeah, exciting times. That's awesome — congratulations! I didn't realize that. That's wonderful. All right. Good deal. Do we have Mark as well from COS? Yeah. Hi. I'm here. I am the real Mark. I don't know about some of you guys, but Zoom is giving us attitude this morning and it's made some of you guys into me. So apparently we're going to take over the world. Anyways. Hi. I am Mark Call.
I'm the product owner for OSF Registries, here at the Center for Open Science. All right. Awesome. And I am Katie Corker from Grand Valley State University in Michigan. I'm an associate professor of psychology, and I'm also doing some work with COS involving community engagement. And I also happen to be involved as a contributor on this template. So I certainly did not do as heavy a lift as the project leaders here that are presenting, but I helped a little bit. So it's exciting to see the template moving forward and getting integrated into OSF, and we're excited to introduce it to you today. So, in case we had a couple more people join, I'll just say the agenda will be in the chat for you. There's a place to submit questions during the presentation, so that we can sort of gather those up for the Q&A portion later. And if you'd like, you can also sign in on the agenda, so that we can, you know, see everyone that attended, and you can also see and learn a little bit about the others who are on the call, so that we can all get to know each other a little bit better. All right. So I will turn it over to you all — Gj and Olmo — to introduce the template. Excellent. So we're going to be doing this together a little bit; when we prepared, Olmo already reminded me of some things that could be worthwhile telling. So basically, Olmo will chip in, and I'll start a bit with where it started. And then we'll move on a bit. Let's see. Excellent. So this is the big group of people that was involved in creating it. Apart from Katie, I also saw Amy briefly, and Richard. And there are a lot more people that you can see on this list that I'm not all going to name, because you're probably reading them right now. So this project started out of — well, frustration is a big word. But basically, I was working with a PhD student who did a systematic review about sports — because they can.
I'm sorry — a bachelor thesis student; they can kind of choose their own topics. And it was about red cards and yellow cards. It wasn't a meta-analysis project, it was just a systematic review. And he couldn't preregister it anywhere. So then I started looking for ways to pre-register it somewhere outside of Prospero, because they only take health-related research, and they don't take scoping reviews either. And as you can see, it didn't get too much interaction, because I'm not very popular, but fortunately this was retweeted by somebody who was considerably more popular. And then there were some responses and some links, where I also met Nicky, who you see at the bottom here. So quite early, you see that we were discussing doing this at the Open Science Framework, because of course the Open Science Framework had a number of these templates. And this would be, you know, great to have, because then everybody could just register systematic reviews there. So we were thinking about maybe starting one, but that specific account of the Open Science Framework — this was like four years ago — wasn't super hyperactive. So we didn't really get so much response. But at some point, Nicky, for some reason, drew Brian's attention, and then he got us in touch with Matthew Spitzer — if I pronounced that correctly — from the Center for Open Science. And then we basically started thinking about it: how can we do this? And that's kind of how we got the ball rolling. Nicky was a veterinarian who also had problems registering on Prospero. And that already opened things up, because I always think from psychology, you know — it's like, apparently there are also other sciences, but from my perspective, it's just psychology. So we have qualitative, we have quantitative, we have systematic reviews, but that's kind of it. Apparently it turns out there's a whole world out there with people working radically differently.
And that's kind of one of the things we tried to do here: to make something that everybody can use. So we got together this team, partly through Twitter and through people we knew — we didn't do a general call through societies, but, yeah, used more of a snowball approach. And then we started working on the form, based on earlier forms that existed, like, for example, the PRISMA guidelines. Within the team there were already some differences of opinion — which is good, of course, if you want to make something generally inclusive — on, for example, the role of pre-registration. Olmo, you alluded to this earlier. Do you remember specifically what we discussed? I mean, I know at least one point where I differ with at least Daniël Lakens and some other people as to the role of pre-registration. Yeah. So some people were wondering: hey, why do you need to register a systematic review? Because you're not collecting primary data. And my point of view, at least, is that whenever you are going to draw a statistical inference, or maybe even a general scientific inference, then it's good to register your design. And that includes systematic reviews, which can be systematic and objective, but still have room for researcher biases in all kinds of selections — for example, in the selection of studies, especially if you're already familiar with the literature. And to, let's say, prevent these biases, I think pre-registration has a big role to play, because you're logging everything beforehand, where all the decisions are made. So that was my main reason for going into this project. Yeah. And as somebody who did a PhD on pre-registration, Olmo is probably one of the most knowledgeable people on the whole topic in this group. Yeah. So basically we started integrating these items, looking also at our own experience and what we did. We also had, for example, some librarians, like Amy.
So we did have these different expertises — like, we have researcher perspectives, a librarian perspective, et cetera. And then, through some rounds and discussions, we worked on the form and tried to streamline it a bit, to have the same types of items within every section and things like that. And then in the end, the form resulted. As a kind of context, it might be good to emphasize the, yeah, the vision or the aims of the form. So the idea was to not make it specific to any discipline. So ideally, even people who do a systematic review of case law, for example, in some country or multiple countries, could use this. That's, of course, a bit harder, because then you have to realize that not all the sources that people extract information from are always articles. So it could be more general — also reports or other things. And also, ideally, it wasn't specific to any review type. I assume everybody here has looked at the form. At the end you see there are quite some fields that are kind of specific to meta-analyses, to quantitative synthesis. But we did try to formulate everything such that it could also apply to, for example, qualitative synthesis as much as possible. The consequence of this was also that there were no obligatory items, because you can always think of some type of review where something wouldn't necessarily make sense. So instead, the approach was much more to list some items that we consider extra relevant — these items got a gold star — but they wouldn't actually be obligatory. So that's, I think, a brief introduction as to the context and the vision and aims. Olmo, is there any other important stuff? Yeah, maybe we can shortly mention the sources we used for drafting the items.
So if I recall correctly, we used the PRISMA statement as a source. At the Society for the Improvement of Psychological Science — a conference, SIPS I think it's called — we also came together to draft a template for meta-analysis. And there are also some people in the group who drafted a registration form for systematic reviews in animal research, which is also quite a different perspective. And there were multiple forms, and quite a big group that designed a template for systematic reviews of non-interventional research in that area. So we basically gathered items from all these sources, we bundled them together, and we thought: hey, what are the overlaps? What are the things we need to add? And in that process we eventually arrived at this template. Perfect. Katie, was that kind of what you had in mind? Yeah, absolutely. Yeah, no, I think one of the things I'm reminded of is how many different perspectives we brought to the table already in developing the form. And we're continuing to solicit that input from people from all different kinds of backgrounds at this point. I think one of the things that we're really eager to do as we deploy the template more broadly in OSF is to make sure that it's in a good position for people from all different kinds of fields and backgrounds. So that's where the community feedback will be so important, to make sure that we get at least a good version 1.0 up, and then it can be iterated later as necessary — but hopefully we can get a very solid version up for the first draft. So the form itself is linked in the agenda. And I think our next plan is to actually go through the form itself and give a little bit of an intro overview to it. And then we'll move on to the time for Q&A. And I'm monitoring the questions — so far, no questions, but please do feel free to drop those in as we're talking, so that we have something to start from when we discuss in a little bit. So we'll turn it back to you.
Oh, sorry, I thought Mark was going to do this bit, but we can do it too — I misremembered who was going to do it. So if you're all right, we'll walk through at least part of it. Sure. And I went ahead and put the link in the chat, so anyone can look at it on their own screen — but also, Gj, if you want to share, you're welcome to walk through it. Yeah. Oh, I think I might need to allow some more. One second. Perfect. So just to set the context for everyone: these are mockups, to show how the template will look within the OSF registration system. When you go through them, everything on the left-hand side and the next and back buttons will be clickable. Everything else is not. It's just enough to be able to provide context about the content of the questions, so we can improve them, revise them, et cetera. Yes. So it starts with the metadata, which is pretty similar to most pre-registration forms: it has a title and a description to give some metadata about this specific registration. You can add a license, you can add subjects. I think these will be familiar to pretty much everybody. The landing page has an explanation about the form. One of the things here is that it links to more specialized forms, because this is such a general-purpose, inclusive form. If you do a meta-analysis, using a dedicated meta-analysis form might have more benefits. As already indicated, it can help prevent biases by alerting you to specific things that you have to take into account for meta-analysis, which are quite niche in the broader landscape of reviews and therefore are not in the general inclusive form. And it allows you to document more accurately. So ironically, even though this is a pretty big form — and you know we worked hard on it, and we think it works very well — the idea is that if there is a more specialized one, it can be beneficial to use that instead.
So these are the instructions for use. And then we get to the actual form. If I'm going too fast, by the way, please just let me know, because I've had a whole day and I'm pretty well caffeinated by now. One of the things you'll notice here is that type of review, review stages, and current review stage are all just open text fields. And we had quite a lot of discussion about this in the team, actually. There are a lot of benefits to using categories, to using closed questions, because then the answers become machine-readable. So it's easy to create overviews of the forms, and you make them more searchable. But on the other hand, that requires an exhaustive and mutually exclusive set of categories to exist. And we couldn't actually find any that we were really sure of — for example, even for types of review — that didn't differ per discipline or per field. So that's why all the fields are just open text fields. So you start off with specifying the type of review, in whichever words are common in your discipline. Then you list the stages that you distinguish within your review, and the stage that you're in at the moment. Then the envisioned start date and end date, because in my experience, at least, those are generally rough guidelines. Then more background information, primary research questions, and potentially secondary research questions — where the idea is that your primary research questions are mostly what shapes the design, and the secondary research questions are things that you can also answer, but that have less influence on your decisions. Then, if you have any hypotheses or expectations — if you don't do confirmatory research, for example — you can list them here. And then you describe the variables that you aim to extract from the studies. So the main variables will generally be outcomes, for example, or predictors — or, sorry, dependent variables — and then you will have independent variables, like interventions or treatments that you're looking at, if you look at RCTs, for example.
Then you can list some additional variables. And then you describe the software that you will use for the review. One of the goals here is to make sure that you remember to cite those in the end, and so that it's clear to people, if they want to reuse resources from your review, whether they can — depending on whether they have the required licenses, for example, or, if they don't have licenses, whether they are familiar with the software. Then funding, conflicts of interest, and overlapping authorships. That last one was new for me when we started this. Sometimes you do a review because you are quite involved in the field, and then you might end up in a somewhat awkward situation where, say, one third of the studies were co-authored by you. And then it's good to have a plan for how you deal with this situation — so you can describe it here. This is the search strategy. Fortunately, on the Open Science Framework, you can save your draft halfway. This one was last saved 11 days ago, as you can see here, but of course in real life we'd save more frequently. Then you list the databases you search. I assume that everybody here is pretty much familiar with how systematic reviews work, so you probably know the difference between databases and interfaces. If not: the interfaces are the services, the websites you usually use to access the data behind them. Then whether you cover any grey literature, the inclusion and exclusion criteria that you use, and then the query strings that you will enter into the databases through the interfaces. Then the procedure you will use to validate the search. Usually you'll already have some target articles that you know you will want to include, so you can, for example, use these to validate your queries. Then any additional search strategies you use in addition to just searching databases. How you plan to contact authors, what you do once you've contacted authors, and whether you are planning to report on this or not.
And if so, what you will report. How long-lived you think your search will be — how long you think it will remain valid. Of course, if you imagine that there's, say, a pandemic that starts, then the expiration date might be a bit shorter than if it's a topic that's relatively stable, like, I imagine, something in history or archaeology. Then you can justify your search strategy, and then you can add any additional miscellaneous details. These last two are in every section from now on, so that you can always justify why you made the choices the way you made them and provide any additional details. Then we move on to the screening. You can describe the stages that you use for the screening. Sometimes you do everything in one go; sometimes you do something quick in the first round and then go more comprehensively later on. Then whether you blind or mask any of the fields — for example, you might not want the screeners to be influenced by the journal, because if it's a prestigious journal they might be more inclined to include it, or whatever. Then the exclusion criteria you use during the screening — because you can't really include: there's no inclusion criterion that can trump an exclusion criterion, so basically you just exclude during screening. And, ideally, the literal screener instructions. There you can also upload a file, because often, of course, there are Word documents or PDFs with instructions about what exactly the screeners should do. Then how you will look at the reliability of the screening: you usually use multiple screeners, and you usually only really start screening when you're happy with the reliability, which means the screener instructions are clear enough that people can work with them. Then how you deal with inconsistencies — with disagreements between screeners, when one person says that something should be included and the other person says that obviously it should be excluded.
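As an aside — this is not part of the template itself — the inter-screener reliability Gj mentions here is often quantified with Cohen's kappa, which corrects the raw agreement rate for chance agreement. A minimal sketch, with hypothetical include/exclude decisions from two screeners:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters coding the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed agreement: proportion of items both raters coded identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal proportions.
    p_expected = sum(
        (rater_a.count(label) / n) * (rater_b.count(label) / n)
        for label in labels
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical screening decisions for six articles:
screener_1 = ["include", "exclude", "exclude", "include", "exclude", "exclude"]
screener_2 = ["include", "exclude", "include", "include", "exclude", "exclude"]
print(round(cohens_kappa(screener_1, screener_2), 2))  # prints 0.67
```

A team might, for instance, pilot the screener instructions on a small batch and only start full screening once kappa clears some pre-specified threshold.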
And, oh yeah, whether you include all the sources. Sometimes you may only want to look at a number of the sources — of the articles, for example — that are included, and if that's the case, how you will choose them and how you will determine your sample size. And finally, how you justify the procedure for the screening. Also here you can explain how you plan to manage the data and share the data, because ideally, of course, if people weren't already planning to, this will prompt them to realize: ah, of course, I want to share all this openly — and then they can explain that here. Then we finally have our hits, so we can start extracting. You start with listing all the entities you want to extract, like author names, or effect sizes, or variable names, whatever else. For legal cases, for case law, it could be whether somebody was — what's the word — well, whether they had to go to prison or whatever. Then the stages in which you will extract; that can be similar to the screening: if you expect that you will get a lot of sources that you want to extract from, you might want to do something efficient at first and then do it more comprehensively later on. Again, the instructions for the extractors — that's probably even more important than the instructions for the screeners, because the entities you extract determine your data and the quality of the data, so it's important that everybody agrees, and that, ideally, others who are not in your team could also extract with similar results. Then any blinding or masking of extractors — for example, you might want the people who extract to not know what your research questions or hypotheses are. Again, how you look at reliability, and again, how you reconcile if there is a disagreement between extractors. Then the justification for why you extract the way you want to extract; again, how you share your results — for example, the forms with extracted data — and any additional details. And then we can finally start with the synthesis part,
with any data transformations you want to do. For example, for quantitative research you will quite often want to convert all the effect sizes to one metric, for example Pearson correlations; for qualitative research you might want to standardize the kinds of results that you extract from articles — like, cluster them — so that it's easier to synthesize them later on. Then how you deal with missing data: even after converting, some stuff is just not there or can't be found anymore. How you validate the data you found, because after all, after you've extracted, there might still be some errors, just like in empirical research — so ideally you have some procedure to look at this. How you look at the quality of studies, for example a risk-of-bias assessment. And then, finally, how you plan to synthesize. If you do more confirmatory research, or if you have some calculations and you are planning to draw certain conclusions about something, then it helps to clarify at an early stage how you will come to these conclusions — so you can explain that here. Again, you can mask or blind the person who does the synthesis, or you can have multiple synthesists, of course, and then you can still mask or blind; if you have multiple people who do the analysis in parallel, for example, you can look at reliability. And again, you may want to have a procedure for reconciling any differences — unless you are doing like a many-analysts study; if that hasn't been done yet, maybe we should start one after this. Then how you look at publication bias, how you do sensitivity analyses, and finally, again, justification and data management. You'll recognize this, probably: at the end you basically see your results, which you can review before you finally submit. So that's the form in its current state, and now of course we are very much looking forward to all the suggestions and questions to improve it into a better first version. Thank you so much, Gj, that was really great. I'm reminded, you know, just
looking at the level of detail and the length and so on — I think every supervisor who suggests that their PhD student do a systematic review for their research project should be forced to review the form first, to remember all of the different steps that go into it before beginning. So, we do have a couple of questions that have come through, and I see more coming in — this is great. I think I'll start with one about mandatory versus optional items. We have actually been discussing this recently — sort of our philosophy or approach to whether or not all of these many items should be required. So do one or both of you want to talk a little bit about our thoughts on that? And maybe we could also have some other input from folks to react to what you're saying. Olmo? Gj will correct me if I'm wrong — yes, so I think for OSF we decided on no obligatory items, basically because that was a technical issue, so maybe Mark can talk about that more. Well, I think there actually need to be at least some obligatory items, because there have to be things like the metadata fields with contributors and so on — those have to be there. So if we have no obligatory items at all, then it creates a situation where someone could register their review and have almost nothing in it, either completely empty or almost empty. So a concern is that if all items are optional, to make the form maximally flexible, then we worry about, yeah, incomplete entries. So I think what we had actually landed on last time was that maybe we would do the opposite — make everything mandatory — which then forces the user, the person completing the form, to at least indicate "not applicable" or something in response to every question. So go ahead, Gj.
Exactly — what Olmo said was what I also explained in the presentation. The plan was to have nothing obligatory; that's why we had the gold stars for the things that we thought were important but not mandatory. But, yeah, it's right that it's a bit lame if people are able to enter completely empty registrations, and making everything obligatory kind of solves that, because of course you can always say "this doesn't apply". And an added benefit is that you then have an explicit statement from the authors, as a result of their reflection on that item. And I think that also helps — well, maybe it's too optimistic to think it directly helps to prevent bias, but at least from a meta-scientific point of view it helps you see how people consider these things, and where they have a more comprehensive explanation or where they just say "N.A." or whatever. A worry we also had with that was that it might be too burdensome to fill out the form with every item — of course, you can just say "N.A.", two letters, that's easy — but maybe that's something that the audience can also provide feedback on. So there were a lot of items that you saw, and they're all mandatory now. Is that too much, or do you agree with our reasoning that this way of going about it makes sense from a meta-scientific standpoint? Yeah, that's perfect. So I would invite anyone who would like to speak up — please do so, or you can respond in the chat as well, since we've got too many people to all talk at once. Very curious if anyone wants to speak on that. Yeah, Alexandra?
I think I just beat Matt there. So, speaking from my Prospero-for-animals perspective: the issue that we have sometimes — because most of our items are mandatory — is that some review authors just copy-paste the guidance. So I think I posted in the Google form asking whether there is some way that OSF could support semi-automated data validation, to just check that a field is not blank, "NA", "none", or whatever — because, yeah, that is a manual process for us to check at the moment. Thank you, yeah, that's super useful. Matt, did you still want to chime in? Yeah, that was my question. I guess my concern is that someone could put a lot of effort into a pre-registration and then claim they pre-registered their study fairly, and then someone else can put in minimal effort and also make the same claim. So I sort of feel there need to be a few more mandatory items, especially if it's not being reviewed. Yeah, so the way it is now, everything's mandatory, right? And the concern that Olmo just voiced was that — well, the concern was that it is too burdensome, and the fear is that people just don't use it because it has like 65 items, so it's quite a lot. The point is that, as I said, it's supposed to be inclusive, right? So you have a form that's created to be useful for people regardless of whether they do a review of government reports about city planning, or case law regarding, I don't know, people breaking into houses or whatever those law people do, or just a normal topic in psychology, like, for example, the determinants of energy-relevant behavior — and ideally it can work for all of them. Because of this, it's framed quite generally, but there are some items there — for example, data transformations — that will just not apply, maybe, to case law; or it might, but in that field people might not do it yet, you know, because these things change over time. And then, of course, "not applicable" would
be an acceptable answer, but they would be facing all these questions. So this is, I think, one of the harder issues to solve here. Yeah, I agree it needs to be a balance. I had used the OSF pre-registration for something, and not all of the fields were applicable, but I guess at least if you make them mandatory and people can put "NA", it sort of gives the impression that the default should be to try and, you know, put something in. Alexandra, for example — that might also be because Prospero has a more narrow use case, so you do have clear ideas about what should be there, which you can't have with a general form. Because you suggested kind of the opposite: to disallow people from saying "this does not apply to me" or "NA". So there is a kind of tension. I mean, I would say that ideally, if people have a health-related systematic review, they would use Prospero, because that's more specialized. Again, some really interesting tensions here, then, between being inclusive — comprehensive, rather — and being more narrow, or potentially more user-friendly. So if folks have thoughts on that, you can continue to add them to the chat or put them in the feedback form. Ricardo? Yeah, I think that, like, at the journal we are looking at a lot of pre-registrations and so on. I mean, you always need to look at how people use the pre-registrations anyway, so I'm not too concerned about the fact that people might say "oh, we pre-registered this" with this template and it's just a bunch of NAs — because, I mean, you could also answer everything elaborately and it might still be very, very poorly worded and everything like that. So the pre-registration is only a way to assess the transparency throughout the process. People are going to try and game that, and I don't think we can avoid that by forcing people. So — Evan? Sorry, Evan, you're muted. Thanks. As I look at this, I'm thinking about the difference between what I think of as a registration and a protocol, in this context and in other
contexts. And one of the things that I think is great about trial registries in medicine, where a lot of this kind of kicked off, is that they have structured data elements that are searchable and ensure that I can go into a database and find all of the trials that have been done about something I'm interested in. And that means that you have a minimum set of items that have to be entered in a certain way — the WHO has a minimum data set that trial registries around the world are working from. There's a little bit of free text in things like ClinicalTrials.gov, but there's not a whole lot of it, or they're short, structured fields. And what I see here are larger text boxes that correspond, in some cases, to very major sections of a protocol. So this, to me, looks less like what I would consider a registration in some ways, and more like what I would think of as a, you know, structured outline for a protocol. So I think there's that tension: if I were to design something for registration, I would probably strip it down and have structured elements rather than so much free text, and if I were developing a protocol, I might go in a different way as well. I see the guidance in the document here, and then I'm comparing it with PRISMA 2020 — and there's actually much more detail in PRISMA 2020 for many of these items than there is in this document. So if I were thinking of this as a registration, I think it's a bit too much; thinking of this as a protocol, I actually think the guidance might not be sufficient to write a PRISMA-compliant protocol, or a protocol that complies with other reporting guidelines. Yeah, that is great feedback, and it's definitely something we discussed among the team. Do you want to react to that, either of you? Anybody? Yeah, I can. Basically, this is indeed one of the tensions, and it also relates to the role you think pre-registrations or registrations have. I more and more think they aren't necessarily valuable only because of an epistemological role, but
also because they serve a meta-scientific purpose. And I think, Evan, that means I would indeed be exactly in between: a protocol, or a bare-bones registration. But I think people can differ quite strongly on this. There are people who think registrations mostly have an epistemological role, and that they mostly come into their strength, their value, for confirmatory research, because they help you evaluate the strength of claims. That's a very different perspective and would create a very different focus for this form; those people would also say, for example, that for qualitative studies preregistration doesn't have much added value. So I think this is very much a function of where you fall on that spectrum, which means the form could go either way.

I can add something from a sort of technical, or meta-meta, point of view, I don't know how we want to label this. This is something we're thinking a lot about at COS generally: the role of human-readable versus machine-readable, or findable, objects. Thinking about data repositories, there are many examples across different fields of data repositories that have elaborate metadata schemes enabling findability of data down to the variable level. In other fields, these kinds of ontology systems for categorizing, these metadata taxonomies, are less developed. I'm seeing a parallel issue here: these fields could be designed to be findable, searchable, machine-readable, and so on, versus what we have designed, which is something more just human-readable, and perhaps not as future-proof in terms of supporting the functions some other registries already have. So that's a really interesting point that I think generalizes beyond this particular issue to a bigger question in general about being able to find and search information across these registries. Richard?
Yeah, I think Evan's point about protocols is very important. Part of how to solve this, because I agree this is a bit of a hybrid, and it's also how preregistration is used a lot in psychology in particular, as something in between a registration and a protocol, is that the landing page needs to explain that in areas where there's a tradition of writing full protocols and registering or, hopefully, publishing them, that is preferable, much in the same way that it's preferable to use prepare and so on. This, again, is a fallback procedure for areas that do not yet have those kinds of things. I also think it's super important that this doesn't out-compete such endeavors. If we could get to a stage where we have full protocol registrations, like in Registered Report format, for psychology systematic reviews, for example, that would of course be preferable, and it would of course be preferable in law or any other field as well. But we are so far away from that in many areas.
I'm going to pull out another question; there are quite a few to choose from, so I'll try to pick one that goes in a slightly different direction. Someone asks: what was the rationale for including the option to indicate that the review is already completed when filling in the form?

That's also a really cool question, and it also relates to how you see preregistration and registration. If you approach this as being mostly about transparency, it can still be useful for people to do this at a later stage. Of course, if everything is done and published and you're not going to do anything anymore, I don't think it's going to be super useful. But for a long time there are stages at which it's useful to take a snapshot of your plans and your ideas at that moment. And in theory you could complete this form in a few moments. Actually, a PhD student of mine completed it beforehand, said in the preregistration form that she would review her methods after she'd done a pilot of 100 studies, and then registered again. That allows you to see how these things change, and once we have enough critical mass, that can of course be super useful for improving our procedures. But that's my perspective, and this is also, I think, something where people differ; not everybody thinks that registration after the pre-stage is still useful. So I think it's a super interesting question.

Yeah, there was another question related to pilots, you brought up pilot testing, that I think is useful to bring in: would it be appropriate to add a section to the form about whether, and if so how, a given stage of the review process would be piloted and how adjustments would be made? I could imagine two solutions to this. One would be adding additional fields that address piloting, though perhaps it's already covered under some of these fields implicitly. Or it could be using the
updating function within the OSF, which would be a nice way to do it: you would have a versioned protocol, with a pilot phase and then a second stage when you actually proceed to the main review. I wonder if any of the speakers, or anyone else, has thoughts or reactions to that idea.

The problem I see with updating registrations all the time is that these updates also have to be reviewed. Currently there isn't really any accounting for checking registrations against the final product; that would be actual additional work for the review as well. And if you have a sequence of six or eight or ten successive reviews or registrations, that might even add to the burden. So maybe the OSF update function can help make the process more structured and also make it easier for reviewers to actually review the differences, but that is something we need to take into account.

Yeah, the questions about the interface with peer review are really interesting to me, actually, and it's not something I've thought about in the context of this particular form. I'm finding myself, more and more as I do peer reviews, that if someone includes a registration, I review the registration along with whatever else they've sent, and different reviewers differ in their approach to this. But I wonder how challenging it might be to use this form in the process of review, because it is so lengthy; I wonder if there are improvements we could make along that line, or if anyone has thoughts about the usefulness of forms like this within the peer review process. I don't know if we have folks from backgrounds outside the social sciences on the call, but we don't really have a Cochrane-like body in psychology, for example. There's the Campbell Collaboration, which takes some psychology-relevant work, but there is not a centralized organization doing peer review of basic social science research. That's one hole in this space that I've often wondered about, and a form like
this could have a sort of natural fit with that kind of peer-reviewing activity. So if anyone has experience with bodies like this, or is from a field outside the social sciences, I'd be curious to hear about reviewing protocols and/or registrations in that context. I'll look for another question in the meantime, in case anyone speaks up.

While you're looking: I saw a question in the chat from Heather about findability, which actually also relates to an earlier question. If findability is a main goal of the template, then we will need categories; then we do need ontologies and searchable elements. I do think that, at least for psychology (I don't know about other fields), this will be important, but also challenging, because it's quite hard even for the methods to create ontologies and standardize them. I tried with Wolfgang for a while, he has this escalc function in metafor, to create a package that would extract basically all the conversion functions, so that they would be easily accessible to other packages as well. We ended up needing some kind of systematic API just for the conversion functions, and that alone was already super complicated; then the pandemic started, and the project kind of stalled. But I think we all agree that there's machine readability and human readability, and ideally you have both; achieving machine readability here is really, really hard, I think. So if anybody has a solution for that, that would be great.

Yeah, this is where my mind is at a lot nowadays, thinking about this problem, and I feel like there's so much more I have to learn in this area, so I'm constantly looking for folks who know about this stuff; if that's you, I'd love to hear from you.

I have a question that is maybe best suited for Mark: someone is asking in what format you can download the input, so
maybe you could talk a little bit about what API capabilities, if any, there are for registrations in OSF?

I don't know if I can cover all of that within six minutes, but essentially the best way to download all those inputs is going through the API. I can give you a link to how to get started with it. One thing about our API is that it only covers public registrations; it does not include any that are private or embargoed. But I'll be happy to send that document along; it does much better than what I can do here at a high level.

Yeah, that's perfect. Maybe we need to follow up on that topic alone; I think folks would find it interesting. The API is more powerful than we all realize, and it's quite interesting once you get into it.

All right, we just have a couple of minutes left, so I'll do just one more question: when will the template go live, and are further developments or adjustments planned after that? Our general process is that we are currently developing a process for templates like this to be incorporated into OSF, and that process is going to be iterated through 2023. So if you are actively working in this area, or on related registration projects in other areas, you can have those templates considered for inclusion in OSF. This template has already moved past that stage: it's been selected, and now it's being piloted. We are planning a phase of a few weeks to months, somewhere in that region, to continue gathering feedback from the community at this piloting stage before we officially launch it. So I expect that some time within maybe the first quarter of 2023, or thereabouts, this will be live and functional. In the meantime, the template itself is in the preprint, which is linked in the agenda, and folks can of course begin using it at any time in its static form. There's also an R
package, preregr, basically "preregister with R" (minus the last letters, because otherwise you would have to type that R anyway). It was created to allow anybody to specify preregistration forms that are not on OSF, which you can then use in the R package to create R Markdown templates or to fill out from the console, and this form is preloaded there as well. So if you want to use it, you can create an R Markdown template, render it to PDF, for example, and then upload it as a file using the OSF generic form. So at this point it could be an open-ended registration, or an attachment to a general registration; but it will soon be available once we have the final version.

As for further adjustments and developments: really, any of these forms are open to editing at any time, but of course they quickly become incorporated into the broader infrastructure of OSF, so we don't push changes lightly. You can always email Mark or anyone at OSF Help, which is another good place, if you have suggestions or feedback on any of the templates that are up; that feedback gets reviewed and potentially incorporated when we push changes, and this template will then be in the same corpus. Another thing we're planning for 2023 is reviewing the existing templates and aligning them, to potentially make an incremental step toward this idea of making metascientific research, and searchability, findability, and so on, easier across templates. If all registration forms use the same labeling and scheme for, say, the primary research question, then it becomes much easier to look across different forms at how that field is completed or filled in. It's a relatively large project, but syncing up the existing forms is on the agenda.

I see Evan has his hand up, so go ahead please, Evan. I'm not sure we can do this in one minute; I'll send you a note. Okay.
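For the API route Mark mentioned, a minimal sketch of paging through public registrations might look like the following. This is a generic illustration, not an official example: it assumes the public OSF v2 endpoint and standard JSON:API response fields (`data`, `attributes`, `links`), and the helper function names are mine; check the official OSF API documentation for the exact parameters.

```python
# Sketch: list public OSF registrations via the v2 API.
# Assumptions: JSON:API response shape and page[size] pagination
# parameter, per the public OSF API docs; verify before relying on them.
import json
import urllib.request
from urllib.parse import urlencode

OSF_API = "https://api.osf.io/v2/registrations/"

def registrations_url(page: int = 1, page_size: int = 10) -> str:
    """Build the URL for one page of public registrations."""
    return OSF_API + "?" + urlencode({"page": page, "page[size]": page_size})

def fetch_page(url: str) -> dict:
    """Fetch one page; the API only returns public registrations."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    payload = fetch_page(registrations_url())
    for item in payload.get("data", []):
        print(item.get("id"), item.get("attributes", {}).get("title"))
```

As Mark notes, private or embargoed registrations are not exposed this way; for bulk metascience use you would follow the `links.next` URL in each response to walk all pages.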
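The effect-size conversion problem raised earlier (the kind of thing metafor's escalc handles in R) can be illustrated with two textbook conversions. This is a generic Python sketch using standard formulas from the meta-analysis literature; the function names are mine, and it is not code from any of the packages discussed.

```python
# Sketch of two standard effect-size conversions (illustrative only).
import math

def d_to_r(d: float, n1: int, n2: int) -> float:
    """Convert a standardized mean difference (Cohen's d) to a
    correlation, with a correction term based on group sizes."""
    a = (n1 + n2) ** 2 / (n1 * n2)  # equals 4 when groups are equal
    return d / math.sqrt(d ** 2 + a)

def log_odds_to_d(log_or: float) -> float:
    """Convert a log odds ratio to Cohen's d (logistic approximation)."""
    return log_or * math.sqrt(3) / math.pi

# With equal groups, a = 4, so d = 0.5 gives r of about 0.243.
```

Every such conversion carries its own assumptions (equal variances, the logistic approximation, and so on), which is part of why a systematic API for the full set of conversions, as described above, turned out to be so complicated.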
Great. Does anyone else have any last thoughts or questions before we disband? I really appreciate you all attending, and I hope you'll share this back with others in your community. We hope this will become a really useful addition to OSF, and we're excited to share this work with all of you. So thank you, G.J. and Olmo, and Mark as well, for joining us, and I hope everyone has a great day.

And thank you and Mark for organizing this. I think this was super useful, with all the feedback; I hope we got through the form now as well. So thank you. And thank Olmo for actually joining us from Berlin. Are you in Berlin now? You are there already, right? Yeah. Wonderful, enjoy. Bye, everyone.