Okay, I think we'll go ahead and get started. Welcome, everyone, and thanks for attending this webinar on issuing open science badges. We're very happy to have the outgoing editor in chief of Psychological Science, Steve Lindsay. He's a professor of psychology at the University of Victoria, and he's been the EIC there since 2015, shortly after the badging program was initiated, so he has a wealth of experience with getting the program up and running. We're, again, very happy to have him here. So Steve, would you be willing to take it away?

I'm happy to take it away and I...

Let me interrupt you. I'm sorry. Let me just give one more housekeeping note before we get started. Sorry for interrupting. For folks attending, please feel free to use the Q&A, the question submission — there should be an option that says Q&A at the bottom of your screen. If it's a clarifying point, I'll rudely interrupt, as I just did to Steve, sort of in the middle, but most of the questions we'll leave till the end. So feel free to submit those over the course of the webinar. And a recording of this will be available, and a blog post with a transcript of the webinar will be available in about a week. So back to you, Steve.

Okay, thanks very much, David. And thanks to those of you who are logging in. I hope you'll find this useful. I'll just flag that my title slide here has my email address on it: slindsay@uvic.ca. If you have questions that come to mind tomorrow or next week or something, I'm really keen on supporting these efforts, so please feel free to reach out to me.

So I thought I'd begin with a little bit of background on this. My predecessor as editor in chief of Psych Science was Eric Eich, and he really got the ball rolling in an important way. He worked together with Alan Kraut, who was for a long time the executive director of the Association for Psychological Science, and with Bobbie Spellman, who at that time was the editor of Perspectives on Psychological Science, to make changes that enhanced the transparency of the work published in Psychological Science. They contributed to the development of the Transparency and Openness Promotion guidelines, along with leadership from people at the Center for Open Science. And in 2014 — right at the end of that year, rather — they began awarding these badges for data, materials, and pre-registration. Eric Eich wrote a very nice editorial announcing the badges and some other changes, such as removing Method and Results sections from word-count limits so that authors have space for full disclosure. Anyway, he announced that in an editorial; I recommend it to your attention, and it's free to the world.

So, you know, does this work? Well, fortunately, some nice folks did a study where they looked at Psychological Science, shown here in this graph as the black line. The measurement here is the percentage of articles that claim that data are available, and the dashed red line indicates when the badges were introduced. What you can see — I trust everybody can see my slides — is that all the other journals stayed pretty flat, but Psych Science shot up in a huge way. But you might say: so what, that's just articles saying that the data are available, and we basically asked people to make the data available, so maybe all that's changing is what the articles report.
But Kidwell et al. made an amazing effort to gauge that by taking all of the articles from their sample that said the data were available. So we start up here in the upper left-hand corner of this graph — that's 100%, because they all said the data were available. Then these are various measures, like: were the data actually available when you went looking or wrote to the authors and asked for them? Were they in fact the correct data, and were they usable, and were they complete? And you can see that for most of the journals, only a minority came through with complete and usable data, and Psych Science did pretty darned well — not perfect, but pretty darned well. And this is from just the first year of the badges, so it'd be really fun to see an updated version of this study, but I'm pretty confident it would look good.

Here is one indicator that, at least in terms of apparent compliance, we're doing very well. This graph shows the percentage of empirical papers — that is, papers that report new data — published in Psychological Science in various years from 2014 to 2019 that earned the various badges. The blue line is the data badge, the orange line is the materials badge, and the gray line, the lowest line, is the preregistration badge. So I'm pretty darned thrilled with this rather dramatic increase in badge earning. And it should be mentioned that the denominator here includes articles that are not eligible for a badge. In some cases the work uses proprietary data, so they can't make it available. In other cases the data are already freely available — you don't get a badge for using, say, Canada census data; that's already freely available to the world. And in some cases there are ethical or practical constraints on sharing data. Likewise with materials. So I don't know what the ceiling is, but it might be something like 80%, which means we're getting reasonably close, I think, to ceiling-level performance in terms of meeting criteria for the data and materials badges as they're currently stated.

It's not too surprising that the preregistration line is flatter. To earn a preregistration badge for work you submit in 2019, you probably had to do your preregistration a long time ago, so it's going to be a lagging indicator. And it's a harder ask: you can decide at the time of submission, oh, they want us to make our data available, well, we'll do that — but you can't decide at that point to preregister. So it's not surprising that uptake is a bit slower. And in fact, I'm really delighted that more than a quarter of our empirical articles published this year met criteria for the badge. It's also worth noting that most articles that receive a preregistration badge also get the data and materials badges; it's fairly unusual to be preregistered but say "I can't share my data" or that kind of thing. We call those triple badges, and a lot of these have all three, okay? Sort of funny to be lecturing out into the void — hope everybody's okay.

So the next part of the presentation is sort of the main feature content: a bit about our workflow. How do we do this? Don't worry too much about the fine print on these slides; I'm showing you screenshots that David actually helped me get from the Psych Science submission portal.
So if you were an author who had decided to submit a manuscript to Psych Science, as you went through, eventually you would be asked questions having to do with the transparency of the work you're reporting. For example, inspired by Etienne LeBel's article from a number of years ago, we ask for a research disclosure statement: that all dependent variables or measures that were analyzed for this article's target research question have been reported in the Method sections; that all levels of all independent variables — all predictors or manipulations, whether successful or failed — have been reported; and that the total number of excluded observations, and the reasons for making those exclusions, if any, have been reported. So everybody goes through those, and if they can't check them, they're supposed to explain why.

We also ask people — I think maybe there's another slide here — we ask people to explain why they believe their sample size was appropriate. And we have fairly extensive instructions here that say things like "precedent is typically a weak basis for answering," and that if you're using an effect size estimate, you should say where you got that estimate, and so forth. It's surprising, and a little bit discouraging, how large a proportion of authors have difficulty answering this question in a high-quality way. I mean, to be fair, it's often a difficult question to answer, but very, very frequently the responses that people submitting to Psych Science make when they're asked this question indicate to me that they don't really understand the issue. Despite these instructions, people quite often say that they ran 20 per group because so-and-so ran 20 per group, or because Uri Simonsohn told them that 20 per condition is a good number, or things like that which suggest that people are still not fully understanding this issue. And again, I'm not saying that there's necessarily a single right answer to this question; what we're looking for here is some indication of understanding of the issues.

People are also asked if they stopped and analyzed their data partway through data collection and then decided, based on the outcome, whether or not to continue collecting data. If I were restarting this, or if I weren't leaving right away, I would probably drop this item, because they almost always say no, and on the rare occasions when they say yes, they have a plausible explanation. What we're trying to do here is detect optional stopping, but I don't think this item works very well for that.

People are also asked about supplemental online material. We're really keen on encouraging authors to provide material that goes above and beyond the article and gives a richer description — for example, videotapes of procedures and so forth. So we ask them about that. And then there's a question about pre-registration that begins by saying that we understand that sometimes ethical or practical constraints limit authors' ability to pre-register their plans or to make data or materials available. So we acknowledge that this isn't gonna work for everybody. But then we just ask: did you pre-register the work? If they say yes, we ask them to clarify — was that before you began data collection, or after you began data collection but before you began the analysis? — and we ask them to provide a URL for the pre-registration.
Then there is a series of questions that ask about access to data and access to materials. This includes questions about after publication, but also about how reviewers can access the materials and the data. For example, here it says: how can reviewers access any novel, unusual stimulus materials or measures used in the research reported in your article? And there's a similar question for data. You'll notice they've got options here, like: all of them are widely available; or they're fully described in the manuscript, i.e., people already have access to them; or such materials are provided in the supplemental online materials; or they're at the following URL (not sure if this is gonna show); or reviewers can email a request to the editor, who will relay the request to me, in which case I'll send the materials to the editor, who will then send them to the reviewer. That's a little bit convoluted, I know, but it's a way to maintain anonymity while enabling the reviewers to ask for the data. Or finally, they can say: I won't be sharing the materials, for the following reasons. For example, there might be a copyrighted proprietary test that they're not allowed to share, and they'll just say that. And typically they would also say: you can buy the test at this source, or I got these data from this source.

Let me pause and think here — I think there was something I was gonna add. I think it'll come up here. Right: invitations to reviewers. When I send a request to people to review a manuscript, it tells them what the authors said in response to those questions about data and materials and pre-registration. So before reviewers accept an invitation, they know: they can get the data this way, or they can get the data that way, or they can't get the data — and likewise with materials and pre-registration. And when they submit their reviews, which obviously is usually several weeks later, reviewers are asked whether the data were available, whether they looked at them, and whether looking at them affected their judgments; and similarly they're asked about the materials and the pre-registration. So we've been collecting information about reviewers' perceptions for several years now.

One thing that is a bit troubling — I don't have quantitative data on this — is that I've noticed quite often reviewers say that materials or data were not available when in fact they were. I think what happens is that the invitation letter gives them that information, but some authors do a better job than others of highlighting the information about the data or materials in the manuscript itself. So if the author hasn't really made it clear, the reviewer may just miss it, and that seems a shame. We need some better way of standardizing and highlighting the information about the availability of data, materials, and pre-registrations to reviewers at the time that they're actually doing the review.

So when an editor accepts a paper, the letter to the corresponding author includes instructions for completing and attaching an open practices disclosure form. I think I have that form — yeah, the next couple of slides show that form. So everybody is asked to complete this form and send it in. And then, you know, it asks them: are you applying for the open data badge?
And if so, confirm that an independent researcher would be able to reproduce all of the reported results, including a code book if one is needed, and confirm that you have registered the uploaded files so they're time-stamped and can't be changed. We have a similar set of questions for materials, and similar ones for pre-registration. The pre-registration section also asks them to affirm that any departures from the pre-registration were made clear in the manuscript. All right, are people doing okay? No big questions yet.

So then, when we finally publish things, we have an open practices statement in the article — after the end of the article but before the reference section, with all those other things like the author note and so forth. It displays zero, one, two, or three badges, depending on what badges were awarded. But even if no badges were awarded, there will be an open practices statement. It might just say: the studies reported in this article were not pre-registered, and neither the data nor the materials are available. It's rare for that to happen, but we're allowing authors to be that limited in their response. Here's more of what we're trying to encourage, where the author says: all the data and the materials are available at this Open Science Framework site, and here's where the pre-registration is, and here's our open practices disclosure form, and so forth. So every article has this open practices statement in it. And I don't have a slide for this...

Can I ask a clarifying question right at this point? Is that disclosure statement created by the author based on the responses, or is it created by you or production staff or somebody on your end based on the disclosure form that they filled out?

It's a bit of a mixed case. Some authors do it themselves and include it, and others don't, in which case our production team — the managing editor — puts it in. This would be better if it were more standardized, I think, but at this point it's still, I think, a little bit too new, and authors don't always know to do it or know how to do it.

So if you're an editor for a journal that's doing this, at least the way we do it at Psych Science: when you've accepted something, you send the author a letter accepting it, and it has this open practices disclosure form, which they're supposed to complete and then return to the submission portal, submitting the completed form that way. Once that form has been submitted, the action editor on the manuscript will get a ping saying the form has been submitted.

As many of you probably know, there are journals that have signed on to the Transparency and Openness Promotion guidelines, and there are several different levels. Level zero just means that you publicly say you think all of this stuff is good, and level one means that you actually have policies in place in an effort to, I don't know, sort of reward these practices — but you are relying essentially on authors' self-reports rather than doing a thorough vetting of claims, say, for data or materials. So in practice, the way this works is that the editor does perform a review and makes a judgment, right?
So when we get that open practices disclosure form for an accepted article, we go and look at the websites, and we check to make sure that there appear to be data files, and they appear to have been registered, and they appear to have variable names that a person could possibly understand, and so forth. So we're doing that, but we're not, for example, attempting to reproduce the analyses. It would be better if we did, but it's just too big an ask. Likewise, if the authors claim a preregistration, the editor is supposed to read through that pre-registration — and really, you would hope they would have done this during the review process, much earlier — and assess how complete and thorough it is. But to really do a point-by-point assessment, in the case of a complicated study, of every detail in the pre-registration against every detail in the manuscript is a quite time-consuming and daunting task. So we're doing a cursory assessment of: does this look like a reasonably detailed preregistration that maps on reasonably well to what people reported they actually did? It's somewhere between nothing and everything.

And in some cases we do ask authors for changes — actually, I would say quite often. In only a minority of cases do we just get the disclosure statement, look at it, and say everything is fine. When people apply for badges, quite often there's a need for something additional, like the addition of a code book or some clarification and so forth. And indeed, sometimes people apply for badges and, after deliberation, we decide: I'm sorry, you just don't meet the criteria for them. That's fairly rare, I would say, but it does happen. So I hope that's clear: we're doing a cursory analysis, trying to do quality control to the extent that we can within our resources, but technically we're a level one TOP guidelines operation, and that means we're not really vetting the badges in a completely detailed way.

Mostly, I think this has been a tremendously successful effort, and I would put your ability to get data from a Psych Science article up against that of any other journal in psychology. But there are problems, and I think these are mostly addressable, but we do need some work. One of them is that very often authors think they have registered files, but they haven't. I think a lot of people believe that what it means to register a file is that you upload it to the Open Science Framework, and that means it's registered. So somehow we have to work on better educating people about what that term "registered" means. Just in the last week I've been going back and forth with one of my associate editors, who's been on board for a while, because he wasn't understanding this point himself.

So how do we handle it if they say they registered their data, but then you look and see that the files are not in fact registered? We just ask them: okay, please register them now. When you look at the project, you can see the dates that the files were last edited, and if that's consistent with the claim — the authors had meant to register the files, they put them there, and they haven't edited them since such-and-such a date — then it looks legit, and we ask them to register the files and accept them as meeting the criteria. And we'll do that even with pre-registrations.
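[To make that date-consistency heuristic concrete, here is a minimal sketch in Python, added for this write-up; the helper function and its inputs are hypothetical illustrations of the editors' reasoning, not an OSF API.

from datetime import datetime, timezone

def files_edited_after_claim(claimed_frozen_date, files):
    # Return names of files whose last-modified date is later than the date
    # the authors claim the files have been sitting untouched.
    return [name for name, modified in files if modified > claimed_frozen_date]

# Hypothetical example: authors say nothing has been edited since June 1, 2019.
claimed = datetime(2019, 6, 1, tzinfo=timezone.utc)
files = [
    ("data_exp1.csv", datetime(2019, 5, 28, tzinfo=timezone.utc)),  # fine
    ("codebook.txt", datetime(2019, 7, 15, tzinfo=timezone.utc)),   # edited later
]
print(files_edited_after_claim(claimed, files))  # ['codebook.txt'] -- warrants a follow-up question

If the list comes back empty, the claim looks legitimate and the files can simply be registered now.]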
We're having fewer of these problems now — partly, I think, because of the good work of David Mellor and others on his team — but sometimes people will think they've pre-registered when all they've done is create a Word document that has their plans in it; they never froze it or made a date-stamped, immutable version of it. But as long as it looks like it functionally, and sort of morally, meets the spirit of the law of a pre-registration, we say: go ahead and register it now, and we'll treat it as having been pre-registered, because we can see from the dates that it plausibly was.

Another common problem is that although people have put a bunch of stuff on the OSF or some other site, and perhaps have made a registered record of it, there's no wiki or general information that explains the relationship between that information and the in-press paper. That's one of the things where we'll just ask people to go back and please add such a thing, because it will make it easier for people who end up at your OSF page. Quite often there's no code book or index or guide, and the file names and/or the variable names are hard to understand. Very often people are using various kinds of proprietary file types; the ideal is that the shared information doesn't require you to own software made by Microsoft or anybody else, but often that's not the case.

Another common problem is that we may get the data and the materials, but often we don't get the analysis scripts, and sometimes there's also uncertainty about whether analysis scripts should be treated as materials or as data. There's an opportunity for discussion or future work along these lines — we could maybe talk about this. I think it might be better to have analysis scripts be their own separate badge, so that you could have up to four badges. And I'm sure David knows much more about this than I do.

Yeah, analysis code often is sort of an awkward thing in terms of how it should be addressed. I think it probably will be its own badge in the near future; I think there's enough rationale for that distinction. There's no voting mechanism here — I apologize to the webinar attendees — but if you have an opinion on that, feel free to chime in, either in the chat or the Q&A.

And the main reason that I'm keen on having an analysis badge is that right now authors often don't even think about it, and if the editor fails to ask for it, then it's not there. I really think that it's almost as important as the data — especially with more complex analyses — to make those scripts available. So I would like to promote that.

As for the preregistration documents themselves: often they're not very good, though I believe they're getting better. We started with very lax criteria, where we were giving people preregistration badges even if they had fairly vague and incomplete preregistered plans, and we've been gradually cranking that up — at least, I've been making an effort to inspire my editors to raise the criteria for that. And quite often, when you do look at them, you'll find unreported deviations from the preregistered research plans. So even when the preregistration is sufficiently detailed, authors are not always following it, and they're not always being transparent about deviations from it. My impression is that the frequency of all of these problems has declined over the last four years, but there's still lots of room for improvement.

I thought maybe it'd be good to talk a little bit about badge pushback, because there certainly is some.
Some people think that badges are puerile — you know, that they hearken back to Boy Scout good-behavior badges or gold stars and so forth. And maybe they are a little bit that way, but boy, do they ever seem to work. Again, I would point to those graphs at the beginning of the webinar, which suggest to me that Psych Science is doing pretty well in terms of encouraging people to provide their data. We don't know to what extent that's the badges versus other messaging efforts we've made, but I really think that a lot of people want those badges.

Another concern some people have expressed is that the badging system may unfairly penalize articles that cannot qualify for badges. Your work might look worse because you're using a proprietary data set, and that's kind of unfair — your work is not worse because you're using a proprietary data set, or because you're using proprietary measures or something. But I don't know whether there's a punishing effect that happens to articles published without badges. David might know about this; I'm not aware of any empirical work on the effect of the badges on readership, impact, perceptions, and so forth, and I hope that some people are looking at it. And I do hope that the badges are just a transitory kind of thing — that in not too many years we won't need badges anymore, because it will just be normative that, when it's feasible and appropriate and ethical, people make their data and analysis scripts available, they make their materials available, and they preregister, and we won't have to be doing this kind of flagging exercise. But until that's the case, I think we get more good than bad out of it.

And finally: are badges sometimes awarded when they shouldn't have been? Well, absolutely, for sure, because as I explained, at least at Psych Science, the level of scrutiny we're doing is quite lax, and clearly some authors, at least, are motivated to want to get these badges. So I really hope that professional societies — the Association for Psychological Science, the Psychonomic Society, maybe even the APA — will invest money in providing support for authors who are applying for badges. Instead of asking editors to add this onto their workload, I think there should be an in-house methods and stats expert, analogous to the managing editor who does all the copy editing to make sure your manuscript has all the right semicolons and grammar and so forth, but who instead looks at things like the quality of your preregistration and the clarity and usability of your data set. And I think it's useful to think of this not so much as a matter of vetting as a matter of working with authors: in the same way that copy editors work with authors to try to make their work more clear and effective, so too you could have an in-house methods and stats person whose job is to work with authors to make sure that other scientists can quickly and easily access and understand their analyses and data, reproduce the analyses, and replicate their procedures. That's what we want: professional societies should be doing a lot to increase the likelihood that other scientists can really understand and evaluate and replicate the work that they publish.

Is it a pain for editors to administer badges? Yes, and some of my editors don't like to do it.
They were already editors when I added — when we added — this chore, and some of them do a better job of it than others, and so forth. I would like to see this, again, move to dedicated professionals who are into doing it, who care about it, and who know how to do it really well. In my fantasy, this would happen at some point in the review process for Psych Science articles — when a submission was judged worthy of external review, went out for review, came back, and was not immediately rejected. It's very rare for a paper to be immediately accepted, so usually there's a period there where the editor is going to ask for at least one more round of revision, and that's when I would like the paper to go to the stats and methods expert. That might delay the action letter for a little while, while the stats and methods expert looked at things.

Okay. With the last slide here, I wanted to give a boost to a new tool called the transparency checklist. This has just come out in an article in Nature Human Behaviour, and it's a consensus-based way of sort of summarizing a preregistration and reporting departures from it. There's a Shiny app that steps the researcher through the various kinds of questions. Nature Human Behaviour made the paper open access, so you can get to it directly. Maybe I'll do that just to show it off. So this is a little blurb that explains what the transparency checklist is and what it's to be used for. And then this is the beginning — the first frame — of the checklist itself. You just complete it as an online form, and at the end it generates a report that you could submit along with an article. Kind of cool.

I don't think they can see that screen, but I just shared the Shiny app link with all the attendees.

Okay, sorry — I guess I wasn't sharing that, was I?

Right. So you've shared the Shiny app. Thank you. And that's what I've got. So now we have time for discussion questions.

Steve, thank you so much. I have a ton of questions, and several have come in, so I want to jump right into it. I want to talk about a couple of interrelated questions that came in about the data badge — if you go back one slide, I think it was your point that badges may unfairly penalize articles that cannot qualify for badges. Two potential examples of that — and Rachel shared a question in the chat along these lines too: data sets that rely on large publicly available data sets, which you mentioned, and data sets that rely on very sensitive information that there's no way to safely anonymize. And there was another question that came into the Q&A about repositories that have that sort of vetting process, where the data sets aren't publicly available. There is a modification of the badge that some journals are adopting: the so-called protected access data badge. If the data are in a repository that's not publicly available — not an author's website, of course, but somewhere with professional staff who will vet those ethical requirements — there is a modification of the open data badge, which the badge community approved, to cover that. It does of course require an additional step, and most of those repositories do need to charge fees in order to support that workflow, but it's something journals can adopt if they choose to use that criterion. Have you had any folks ask about that?

Yes, yeah.
Yeah, in fact, Psych Science has at least formally adopted the protected access data badge, although I'm not offhand aware of any uses of it yet — I think it's fairly recent. But I think that's a super exciting step forward. My own lab just recently started collecting some data where we have videotapes of pairs of undergraduates who work together on a memory task. We always used to just audio-record that and then score it in a very crude way, because we were only interested in certain things. But now I have ethical permission to put those recordings on the Dataverse under protected access, provided that both members of the pair consent. And it's protected access, so anybody in the world can see that such videos are there, but they can't see the videos unless they meet certain criteria, and access is mediated by the UVic Dataverse librarian. So that's pretty cool.

Cool. All right, I'll go down the list of these other questions too. Jill Addison asked: how did you get the research disclosure statement into Manuscript Central? We're trying to get similar questions incorporated for our journal, with a goal of TOP level one, and we've been told we should collect the information in a Word doc or Google form outside of the system. So can you describe how that happened?

Yeah, I mean, I don't know the details of the implementation, but I can infer that it was not straightforward, because the way it works is that the disclosure form is an attachment to the acceptance letter, and when authors fill it out and send it in, the editor gets an email with it as an attachment. So it's my belief that the open practices disclosure statement is in fact not part of Manuscript Central — I could be wrong about this, but I don't think it is. If you're the editor, you'll get a ping that says: an accepted manuscript for which you served as action editor has a completed open practices disclosure statement, and it is attached; please assess the attached application and then go to Manuscript Central and complete the form. And that form is just push buttons: did they apply for any badges; if so, for each kind of badge, do they get it or not; and so on. But the disclosure form itself is not in there.

Which is close. But there is that front end — the set of questions inspired by Etienne LeBel that ask, have you reported all of your measures, for example?

Sure, so that's in the submission portal. And yeah, they've been pretty flexible about letting me put questions like that in. On a number of occasions I've worked through the APS people, who have the contacts with people at SAGE, to make changes in the submission form.

Here's a good question from an anonymous attendee: given the overestimation of effect sizes in the previous literature, what do you see as the best way to answer that sample size question?

I think the answer is, as with most psychology questions, it depends. I don't think there is one best answer, but there are appropriate answers. One way to answer it is by specifying the smallest effect size of interest in the context in which you're working — you can point to some rationale for saying, "if it's less than 0.25, I don't care," that sort of thing. So that would be one way, but of course, if you take that route and it's a between-subjects design, you kind of need an awful lot of subjects. So you might choose not to take that route.
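[A minimal sketch of that arithmetic, added for this write-up and not shown in the webinar, assuming Python with the statsmodels package:

from statsmodels.stats.power import TTestIndPower

# Smallest effect size of interest: Cohen's d = 0.25, two-sided alpha = .05,
# 80% power, two independent groups of equal size.
n_per_group = TTestIndPower().solve_power(
    effect_size=0.25, alpha=0.05, power=0.80, alternative="two-sided"
)
print(round(n_per_group))  # about 252 per group, i.e., over 500 subjects in total

In other words, even a modest smallest effect size of interest in a between-subjects design translates into sample sizes far beyond the "20 per group" heuristics mentioned earlier.]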
Yeah, there are a lot of complications there.

Yeah. I mean, I do have sympathy with authors, to a certain extent, in that it's hard to know exactly how to answer that — but there are some pretty clearly wrong ways to answer it, and what's worrisome is how often those are what people use. It's like: I'm using this number because that's the number we always use, or because that was what was in the literature. Or: I did a little pilot study and I got a huge effect, so I'm gonna use that — sometimes out to the fifth decimal place — my effect size estimate from testing 18 people, I'm gonna plug that into G*Power 3.

Megan asks: is the open practices disclosure template available? That's an easy question — I can answer that one. Yes, there's a template available, and it's CC0, so it's free to use or modify as you see fit. And I believe, Steve, what you use is a modification of the template at the link I just shared?

Sort of — very similar. There are a few superficial differences, but I think it's basically the same.

She also asked: would an open data badge be awarded if the data are in a repository, but you have to make a request to access them? Oh — this is similar to that other question; that's the protected access case.

Yeah. And I shared a link to the explanation of that criterion.

Yeah, as long as the guardian of the data is not the researcher, basically, and you have reason to believe that the repository will be stable for a long time.

Let me mention something else that I meant to put in the slides but forgot to make explicit here. Some people have criticized us for this, but we have taken the position that a paper with multiple experiments should get a badge if at least one of the experiments in that paper met criteria for the badge. So quite often, papers with a pre-registration badge might be papers that have, say, three experiments in them: the first two were not pre-registered, the third is. And that came about because the action editor said, well, this looks like an interesting line of work, but you haven't made the case sufficiently compelling; if you run a pre-registered follow-up study along these lines, then I think you'll have a good chance of the paper being accepted. So we've taken the position that those papers should be badged. Part of my rationale is: consider an article that has three experiments, only one of which was pre-registered. Should that be considered weaker than an article that reports only one study, namely the third one? So I'm taking it that way. On the other hand, I can see the case for some way of designating "one out of three" or something like that, so that readers don't get the mistaken impression that all of the studies in an article were pre-registered because that article has the pre-registration badge.

Yeah, that can be tricky, because the situation you described is in some ways an ideal use of pre-registration: two or three preliminary studies leading up to a, hopefully, more conclusive, large, confirmatory final study. But on the other hand, how do you standardize things so that you don't just have a tiny little side project pre-registered and getting the same kind of recognition as the more ideal practice? It's just tricky to make a blanket rule, because you can't really have unambiguous criteria to define the difference between those two cases, and it can, again, send mixed signals.

Yeah.
But a lot of the answers to these sorts of questions fall back on the importance of transparency and clarity — pointing out what was and what was not pre-registered — and that's true for an individual study and for a paper with a collection of several studies.

Yeah. Ian asks — you mentioned a little bit of this — what has been the impact on the time it takes authors to submit a manuscript, with the additional questions related to data availability and transparency? And, a related question, the additional time for editors as well?

Yeah. So I think both of those are non-trivial. And, you know, there's a new editor in chief coming in — Patricia Bauer will be handling new submissions as of the first of the year — and I know that she's working with APS staff on changing the submission portal, and that part of her aim in doing that is to make it more streamlined. I have received messages from folks saying that submitting a paper to Psych Science is like crossing into the United States: it's just a lot of questions, and people sometimes have the feeling that they're being policed and so forth. So if I could do it again, I think I would make some revisions to lighten the load on authors and make it more streamlined. And likewise with the editors: I think it is an added burden, and how much of a burden depends on the particulars of the studies in the particular manuscripts. Also, for some editors it hasn't taken very much extra time, because they do a not-very-good job; others do a really, really good job, so they are putting in more time.

Sorry to have a little bit of dialogue about this, but I just wanna go back one step and make sure that one point is clear. You showed those two different sets of questions that authors respond to — the ones that have been integrated into Manuscript Central, and then the badge disclosure form. For the ones that have been integrated into Manuscript Central, do the answers to those get used in the disclosure statement, or find their way into the template or into the manuscript in some way, or what?

I don't think so, David. I could be wrong, but I don't think so. I think when I put those in, they were intended to signal to authors that I want them to be doing these things. The primary intent was to tell authors at the time of submission — when their motivation to meet criteria is very high — that we're valuing these things. And you can tell: when people start the submission process, it assigns a manuscript number to them, and then sometime later they hit the submission button. Not infrequently, you'll see numbers that were assigned three weeks ago or even two months ago — somebody had started the submission and then taken quite a bit of time — and I suspect that at least in some of those cases it's because they've gone: oh gosh, if we're gonna share these data, we're gonna have to create a much clearer, more up-to-date, transparent version of our data files; or, if we're gonna share our materials, we're gonna have to do some work. And those things take time. I don't have any measures on that, but I speculate that it does happen.

Second-to-last question — there could be a whole webinar on this great question. There's been a lot of talk about pre-registration for the past five years, or ten years, or more.
What's the strongest argument you have for, and against, pre-registration — and, I guess, other open science practices generally?

Yeah, that's a very, very big question; it would be hard to answer even in an entire webinar. I think that doing a pre-registration is just good practice. It's like keeping records: it's a way of making a note to yourself, for the future, about what you had in mind before you saw the data. So I think it's just a good and helpful practice. As for arguments against, I think a good critique is that pre-registering does not, by itself, make work good. You can pre-register stupid research ideas just as easily — more easily, in fact — than you can pre-register brilliant ones. So it's by no means a cure-all. I just think that, especially if the nature of the work you're doing is hypothesis-testing research, where you're planning to use inferential statistics to make generalizations from a sample to a population, having documentation of what your a priori plans were for analyzing the data is very helpful.

Yeah, I definitely second that, specifically for hypothesis-testing research — I would almost go further: it's close to critical for hypothesis-testing research. Not all research is or should be hypothesis-testing work, and when it's not, the fallback importance is just clear, documented workflows. But I suspect a lot of the heated debate around it is really a debate about how much research is hypothesis testing, how much should be, how much is presented as if it were hypothesis testing, and whether there are incentives to present work as hypothesis testing. And again, I think that will be the topic of many future discussions.

Very last question, because we're at time — but it's a good question, or maybe just a point. Ian says that the metadata you're collecting from those questions upon submission could be very useful for meta-research groups, or to help others construct data availability statements.

Yeah, I've often thought that myself. It would be just fascinating — especially that sample size planning question. It's a gold mine of information about people's understanding of this issue, how it's changed over time, and how it might differ in different areas of psychology and so forth. So if anybody wants to get in there, I would certainly be supportive. I'm not sure exactly how the ethical issues would be handled, but I think they probably are handleable.

All right, we're one minute over. So Steve, I just wanna say thank you very much for participating — and to all the attendees, thank you for attending. We'll send out emails with a link to the recording, and hopefully within a week, or not too much longer after that, we'll have the written record — a transcript or a summary — because we want these lessons learned to be widely disseminated. So thank you, Steve, and thank you, everyone.

Thank you — thanks a lot. I hope it was useful. Ciao.