This is all about advocating for change and promoting open science policies in journals and institutions. We've got guests here today to talk about two different initiatives that we've been working on with them. Timo Roettger and Xenia Schmalz are both cognitive scientists and linguists. Timo is at Northwestern University, and Xenia Schmalz is at Ludwig-Maximilians-Universität in Munich. Together they have been coordinating and advocating for the use of the registered report publishing format at journals. Matthew Bagg and Aidan Cashin are both at Neuroscience Research Australia, the NeuRA institute. They're both clinical scientists in the field of pain research and have been working to advance open science policies, such as data sharing, within their field, guided by the TOP guidelines, which include data sharing and registered report policies. So with no further ado, I'm going to hand it off to Timo and Xenia to talk about the Registered Reports Now advocacy initiative.

Thank you, David. Just give me a second to share my screen here with everyone. It's just not working. Do you see it right now? We don't. Okay, one second, having some technical issues here trying to share my screen, which I tried yesterday, actually. And there we go. Yeah, I got it, one second. And now we should be good. Is that right? Do we see this? Yes. Perfect.

Okay, hey everyone. Early morning here for me in Chicago. Xenia and I will be talking about the Registered Reports Now initiative, which is an initiative to advocate for this new article type that many of you have probably heard about before. I won't go into a lot of detail about that particular type of article, but I'll give you the necessary information. In a nutshell, here is why we are here: Xenia and myself and a bunch of other colleagues across different disciplines wrote letters to the editorial boards of journals in our specific disciplines, signed by a large number of our colleagues. In those letters, we explained why we think implementing registered reports is important, and we requested that the journal editors at least consider implementing them in their workflow. As a result of these efforts, a number of journals have already implemented registered reports or are in the process of implementing them, and many more are considering implementation at the moment. So we have, to some extent, successfully changed some of the publication landscape, and I think that's a very promising way forward.

Our goal today is really to walk you through what we have done, how we've done it, and how you can do it yourself. That's very important: we want to give you a blueprint for how you can help us do this. Most of the information that I will give you is also available on our OSF page, Registered Reports Now, and I think Xenia is going to pop the link into the chat right now. I will try to refer back to this page as often as I can, but feel free to go and explore it while I'm talking.

To anticipate the major steps involved in the workflow: first, you need to decide that you want to do something, that you want to be active here. Every journey starts with a decision, and with that comes a clear idea of which journals you want to approach in this endeavor. In a second step, you check online whether the journals have already been approached, by some of us for example, and update your list accordingly.
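To make that cross-checking step concrete, here is a minimal sketch in Python, assuming you have exported both lists as CSV files with a "journal" column. The file and column names are hypothetical placeholders; Registered Reports Now tracks journals in a wiki table and Google Sheets, so adapt the loading to whatever export you actually use.

```python
# A minimal sketch of the "check which journals were already approached"
# step. Assumes both lists have been exported as CSV files with a
# "journal" column; the file names are hypothetical placeholders.
import csv

def load_journals(path, column="journal"):
    """Read journal names from one CSV column, normalized for comparison."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[column].strip().lower() for row in csv.DictReader(f)}

my_list = load_journals("my_target_journals.csv")
already_approached = load_journals("rr_now_responses.csv")

# Keep only the journals nobody has contacted yet.
to_approach = sorted(my_list - already_approached)
print(f"{len(to_approach)} journals still need a letter:")
for name in to_approach:
    print(" -", name)
```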
And then, in a third step, and this is in my opinion the most important step, you broadcast your plan to your colleagues and ask them to co-sign the letter to the editors and to add potential journals to the list. And fourth, and that is probably the most iffy part, you send the letter to the editorial boards and engage in conversation with them if necessary. The letter explains what the problem is and why registered reports are a good way to tackle it, and ultimately you ask the editors to consider implementing registered reports as an article format.

Now let's back up for a second and go through this step by step. This is you, okay? You are a grad student, or you're a postdoc just like us, or you're tenured or non-tenured faculty. What's important is that you have a clear sense that things are not okay in science at the moment, that there are certain issues that need to be resolved. Among many, many other things, there's a massive problem of publication bias. There is a strong tendency to publish results that confirm our hypotheses, while results that do not confirm our hypotheses are rarely published. So there's a strong incentive to find a confirming result, and that can lead to very harmful dynamics and questionable research practices such as selective reporting, p-hacking, and HARKing.

Registered reports are a new article format that can tackle some of these issues to some extent, by publishing results based on their theoretical merit and their methodological soundness, not based on the nature of the results. The format also restricts many questionable research practices and draws a very clear line between confirmatory and exploratory research, between prediction and postdiction. That's great, right? You want that. Unfortunately, the journals within your field, in your scientific discipline, may not offer this article format. You want to change that, and you want it to change as fast as possible. Well, it might just happen that you need to be that change.

So first, you sit down and you make a preliminary list of journals that you want to approach. In my case, it was 10 journals within the language sciences. You then go to our website and check the journal responses page to see which of these journals have already been approached, are considering registered reports, or have even implemented them already. You look at that list, compare it to your list, and then update your list, so that you end up with a list of journals that have not yet been approached and that you want to approach.

Now, changing things alone is obviously difficult, so you need to find colleagues within your discipline who support your cause and can help you. What we did was write an email to a bunch of colleagues that we thought might support our endeavor. In that email, you talk about the problem and why we should be concerned about the credibility of the publication record in our field, and you introduce one possible solution: registered reports. You tell your colleagues what you want to do, what your game plan is, and how they can support the endeavor: you want to approach as many journals in your field as possible and advocate for adopting registered reports. You also attach a first draft of the letter that you want to send to the editorial boards. Now, that sounds scary, but it's actually quite easy, because you don't even have to write the letter yourself.
On the Registered Reports Now page, there's a section of the wiki called journal requests. There you find a workflow and two generic letters. One letter asks a journal to adopt the registered report article format. The other is a more specific template that requests the submission of a registered report for a particular study you plan to run: with that one, you don't ask the journal to systematically integrate this article type, you just ask them to allow this workflow at least once, for your own planned study. You can just use these generic templates and tweak them a little. It's really not a big deal; it's very straightforward and definitely not a time-consuming process.

Once you have that letter, great, you send it to your colleagues. You ask your colleagues to do three things, mostly. First, and very importantly, you ask them to spread the word. A lot of our signatories forwarded our email to their mailing lists, and a bunch of people shared the call for action on social media such as Facebook and, most effectively, Twitter. And when sharing, you ask everyone to help with the following two points. On the one hand, you ask them to add to your list of target journals. They don't necessarily need to cross-check with our Registered Reports Now page; you just ask them to add journals they would like to see approached as well. On the other hand, you ask whether they are willing to stand behind this endeavor with their name, that is, whether they want to co-sign these letters. Here you want them to note down their name, their affiliation, and ideally their contact details, so you can reach them in the future.

For this, you should set up a shareable spreadsheet; we used Google Sheets, which is easy to share and manage. Here's one of my lists. My list had something like 140 signatories, I think. And again, it's very important to ask them right away to include their email addresses, because you might want to approach that list again: keeping them in the loop, updating them on progress, involving them in a potential second wave of emails, et cetera. And here's a screenshot of my journal list, again in Google Sheets. I believe I personally started with 10 journals in my field, and I ended up with over 80 journals, a lot of which I was not even aware of. Some of them had already been approached, but the majority had not, so we really covered the majority of scientific outlets in my field. That was pretty awesome. Once you have the list of signatories and the list of journals, you can move on to the next step, approaching the editors, and that's what Xenia is going to talk about now.

All right, can everyone hear me? Good. So you've got your list of signatories, you've got your list of journals, and now the next thing to do is to approach the editors. This is kind of the most time-consuming part. You need to find the email addresses of the editors, which are on the journal websites, but sometimes they are quite well hidden, so it can take a while to find everything. You take the template that Timo has already shown, which you can find on the Registered Reports Now website that I've linked in the discussion, and you adjust it. It's basically enough to replace the name of the journal and the names of the journal editors, but be aware that there are two mentions of the journal name, so you need to remember to fix that twice.
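As an aside, here is a minimal sketch of that adjustment step, assuming you keep the generic letter with placeholders rather than editing it by hand. The letter text and the placeholder names ($editors, $journal, $signatories) are hypothetical stand-ins for the real templates on the wiki; the point is that substitution fills every mention of the journal name, so the second occurrence can't be forgotten.

```python
# A sketch of filling in the generic letter via placeholders instead of
# hand-editing. Letter text and placeholder names are hypothetical.
from string import Template

LETTER = Template(
    "Dear $editors,\n\n"
    "We, the undersigned, write to ask $journal to consider adopting the "
    "registered report article format. [...]\n\n"
    "We hope the editorial board of $journal will discuss this proposal.\n\n"
    "Sincerely,\n$signatories"
)

body = LETTER.substitute(
    editors="the editors of Acta Psychologica",
    journal="Acta Psychologica",
    signatories="(names pasted from the signatory spreadsheet)",
)
print(body)  # every $journal occurrence is filled consistently
```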
You do that for every journal that you approach, and then you just send the email, the template with all of the signatories at the bottom. Then you wait; that's the next slide. In most cases, you will never hear anything back. We will present some numbers later on, but just to give you an idea, the most common response is no response. Next slide.

The second most common response is a maybe, but the maybes come in different tones, so to say. When people reply maybe, they normally write something like, yes, we will discuss it with the editorial board at the next meeting. Sometimes they sound really enthusiastic, and sometimes they really don't and it's more of a "just leave me alone" kind of thing. It happens occasionally that they reply straight away and say thanks, but no; that's actually not that frequent. And then the response that we're all hoping for is a yes. Sorry, before that: editors might have a lot of questions. In that case, it's important to respond, and you can CC Chris Chambers, which is what I've always done, and he's always been very helpful. As an editor, he of course has the experience to answer most of these questions. So the final response, the one we're all hoping for, is that they will actually do it. Quite often this happens when they have already been considering it, and the final push they needed was an additional set of people saying, yes, we want this format.

What you should do after you get people's responses is update the list. This is quite important, because editors can get a bit annoyed if different people approach them with the same request. If you first check the list on our Registered Reports Now page, you will see whether a journal has already been approached and, importantly, what its response has been. As soon as you have approached a journal, you should add the name of the journal and the fact that you have contacted it. As you can see, for example, Acta Psychologica was contacted about a year ago now and has apparently not responded. And if you hear back from a journal, then you also update that entry: awaiting response, under consideration, or adopted.

So now we will show you the numbers; these are the responses that we get. The most common response by far is no response, which is a little disconcerting, because it means that when you approach journal editors with an issue or a problem, even though they are kind of the gatekeepers of the scientific literature, quite often they don't even bother to respond to somebody expressing a concern. What is quite encouraging, I think, is the percentage of journals that have agreed to implement the format or have already started accepting it. That does not seem like a lot if you look at it in terms of percentages, but we already consider it a success. Personally, I started being involved in this initiative because I wanted to submit some registered reports, and when I had a look at the journals offering registered reports, there were just none that I could realistically submit to: they were either quite generic high-impact-factor journals or from different fields. For me personally, this has already changed. Now, if I wanted to do a registered report, I know some journals in my field where I can submit it. So that's quite an encouraging thing.
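Since these response numbers drive the whole argument, here is a small sketch of how you might tally them from the shared tracking sheet, assuming it is exported as a CSV with a "status" column whose labels mirror the categories on the Registered Reports Now page. The file and column names are hypothetical.

```python
# A sketch for tallying editor responses from the shared tracking sheet,
# exported as CSV. File and column names are hypothetical; status labels
# mirror the Registered Reports Now categories.
import csv
from collections import Counter

with open("journal_outreach.csv", newline="", encoding="utf-8") as f:
    statuses = Counter(
        row["status"].strip().lower() for row in csv.DictReader(f)
    )

total = sum(statuses.values())
for status, count in statuses.most_common():
    print(f"{status:<20} {count:>4}  ({100 * count / total:.0f}%)")
```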
The next slide is a summary of the whole process, and we've already gone through all of these steps. Basically, after you write to the editors, you wait for the responses, and at some stage you repeat the whole process: you write to the editors again to send them a reminder or to ask how that board meeting went. That is something we haven't done yet, but hopefully we will get a greater response rate if we turn this into a more iterative process. For the next part, I think, it's back over to you, Timo.

Right. What we want to end on here is a little outlook on what we want to see happening in the next six to twelve months. We have a few goals. First, we have approached a lot of discipline-specific journals, so we want to shift gears and approach the larger, more prestigious journals, maybe even to signal to the smaller journals that this is becoming mainstream. For example, we approached the Proceedings of the National Academy of Sciences, PNAS, with a letter that was co-signed by over 200 people, and we asked them, again, to implement registered reports. The good news is that they responded to us very quickly; the bad news is that they politely turned us down and said they wanted to wait and see how other fields fare with the implementation of registered reports. So that was a reject. In the future, we want to approach similar journals like that and see whether we have some success.

Second, we've written an open letter addressed to all editors and funders out there that calls for considering registered reports part of our standard toolkit in science. Again, we found over 200 people who co-signed that open letter, and we are currently trying to publish it. It's kind of a part two of a letter written by Chris Chambers and colleagues in, I believe, 2013, published in The Guardian, in which they explained the problems, offered the solution of registered reports, and called on all editors and journals to consider this article format. In our letter, we summarize what has happened since then, and it's very encouraging, because over 200 journals, as of now, have already implemented the article format. But there's still lots to do, so we reiterate the arguments for it, and we also address some of the concerns that people have raised with regard to implementing registered reports. I think Xenia is going to talk about some other plans we have.

Yes. So, the next point: as I mentioned earlier, there are about five of us who have been writing the letters so far, and we wrote a lot of these letters pretty much exactly one year ago now. What we are planning to do next, as I already mentioned, is to approach the editors again and just ask if there are any updates, or maybe give them a second nudge to remember to mention it at the next editorial board meeting. Another thing we are planning to do is to write up an experience report, mainly to report the numbers we presented on the slide: what actually happens when you approach the journals. Hopefully this will raise more of a discussion about what kind of responses we should expect, and maybe nudge journal editors to be more likely to respond to people expressing concerns such as the lack of registered reports.
Here are the names and the Twitter handles of all of the people who have been involved in this initiative and who have been approaching journals, so please follow them. They regularly tweet about registered reports and about this initiative, so if you want to see how it turns out and what happens next, you are likely to get updates. What we would like to end with is really a request to the audience. I see there are about 30 participants in the workshop. Basically, the five of us have been approaching a large number of journals. It would be more efficient if each of you picked maybe two or three journals that you like to publish in, preferably ones where you know the editor, and approached those journals. The more people we have approaching journals, the broader this movement is going to be, and if each person approaches just two or three journals, it's not as time-consuming. I think each of us has approached something like 50 journals, and just finding the email addresses of all of the editors is pretty much one working day. So that's where we would like to end, unless you have anything to add. Thank you very much for attending, and we hope you are now inspired to go out and write some emails to editors.

Thank you, Timo, and thank you, Xenia. I think that's really a fabulous initiative. The fact that 15 journals are accepting the format now that weren't before is a real, concrete impact from a passionate movement that's showing success. So thank you for the work that you two are doing. It's been really neat to see this initiative grow some legs, and I'm pretty excited for the second round of outreach, especially if some publicity can arise from the statistics that you've shown: both the number of journals that have decided to take it up and the large non-response. As you mentioned, it's disheartening that the gatekeepers of scientific information generally don't respond to these pretty legitimate concerns. So I think that would be a great story to tell.

All right. With that, Aidan and Matt are going to discuss a philosophically similar initiative that's just getting off the ground now, expanding the outreach to other related policies covered under the Transparency and Openness Promotion guidelines: Take Up Top. So Aidan and Matt, would you like to take it away?

All righty, good morning and good evening, we're speaking across different time zones. Thank you for the introduction, Dave, and thanks, everyone, for tuning in. We're really quite excited to be a part of this and to give you a taste of a new initiative that we're putting together, currently in its testing stages. Here's the plan for us: I'll give you a bit of background as to where this initiative and the thinking behind it came from, and I'll talk a bit about our first evaluation of journal policies within our field of pain. Then Matt will introduce, as Dave so nicely said, Take Up Top, which is the new initiative, and talk about the next steps involved with it. So let's get into it.

A little bit of background. We were quite encouraged to see that other fields were reviewing and promoting transparent, open research practices: the use of data sharing, the sharing of materials, or the use of reporting guidelines. And we were interested to see to what extent these practices were being encouraged, or even advocated for, in our field of pain science.
So as a group, we got together to evaluate this, and at the very least we wanted to draw our field's attention to the importance of these practices and what they involve. To do this, we wanted to look at journals; as has been said already tonight, journals are key stakeholders in the production of transparent and open research. So we set out to evaluate journal policies and how they encourage such practices. For this, we used what was already out there: the TOP guidelines are a really nice starting point for encouraging these research practices among the authors whose work journals publish. Using these guidelines, we built our own evaluation tool, which incorporated some of the modular standards shown here, as well as a few others.

What we chose to do was pick 10 of the leading pain journals and see how they adhered to these practices. As you can see in the table, we evaluated the top 10 pain journals based on impact factor and how they adhered to each of these eight items. What we found, to our surprise, was an overall low level of engagement with these research practices: adherence was not required for publication, nor was it actively encouraged. We were quite surprised, but also disheartened, by this, and we really wanted to make it apparent to other pain scientists, including those who publish this research. So we chose to publish our report in the leading pain journal. When we compiled the study, put it together, and sent it out to review, we were quite excited that it was accepted with some revisions. But in the lead-up to the publication of our paper, we found that one of the journals we had appraised, Pain, which had accepted the paper, actually released an editorial outlining some policy changes to encourage the uptake of more transparent and open research practices. This was quite an interesting development for us, but also quite exciting: it showed that the evaluation of these practices was able to potentially encourage journals to change their practice.

So where we're at now: from the conclusion of our review, we thought there seems to be a need for more research to understand these specific practices, as well as potentially some room for trials of interventions to see how we can better encourage change, particularly within journals, because they seem to be a key part of this story. At the moment, we've built a team of researchers from across the globe who are interested in pursuing these ideas. We have Matt here, our colleague Elaine Toomey from Ireland, Georgia Richards from Oxford, originally from Queensland, Hopin Lee, who's also at Oxford, originally from New Zealand, and then myself. Together we are now re-evaluating the top 10 pain journals, but also planning to broaden the scope, and I guess that's where the initiative Take Up Top was born. I'll pass it to my colleague Matt to fill in the next stage.

Excellent. Well, thanks very much, Aidan, and David, Timo, Xenia, and everyone else who's here. Following on from Aidan: over the course of the last year since our evaluation, we've been thinking, well, what are we going to do next? Particularly in the pain science field, of course, because that's where we work, but also: how can we expand, take this work global, connect with other people, and make more of an impact?
As Aidan mentioned, part of the initial plan was to continue the evaluation of pain journals, and we're just finishing up the submission, I think, of the paper for the evaluation we did this past May. What we really want to talk about today is taking our evaluation, merging it with some work that was going on at the Center for Open Science, and the evolution of that into the initiative currently labelled Take Up Top. We were very fortunate to have Brian Nosek, the director of COS, visit UNSW, the University of New South Wales, which is a mother institution to Aidan and me. Through Brian, we were able to connect with David, and recently we visited Virginia as part of some work we were doing in North America. Our OPERA team, which stands for Open Pain Research, Appraisal and Advocacy, is using a tool based on the TOP guidelines, as Aidan so nicely articulated. And we were really encouraged and excited to see that the Center for Open Science, through David and some other team members there, had already been going through a process of evaluating psychology journals using a similar tool, also based on the TOP guidelines. Given that our approaches seemed to parallel one another, we thought, well, can we come together here and make this into something bigger?

On that view, we have this initiative, Take Up Top. As you can see there on the top of the screen, it has quite a dense aim, one might say; there's a lot in there, and I'm going to unpack it for you now. Our aim with this initiative is to increase the uptake by journals of policies that promote transparent and open research practices, particularly the policies espoused in the TOP guidelines, and we want to see that happen as a result of ongoing, crowdsourced audit and feedback. The consistent theme this evening, or this morning, wherever you are in the world, is that journals are key stakeholders in the production of research. And we've seen really quite nicely, I think, with the Registered Reports Now initiative, that journals have a key role not just in the publication of research but across the entire research life cycle, if you will. So we think that if we can get journals to champion principles of transparent and open science, then that might be a way of encouraging the practitioners of science to change as well.

As mentioned, this is based on the TOP guidelines, which I don't need to go into further. The idea is that we have an established mechanism for facilitating behavior change by practitioners, called audit and feedback. In the medical sciences, we know that this is effective; it does result in behavior change. There's a Cochrane review, going back to about the year 2000 in fact, that gives us confidence that this sort of intervention results in meaningful changes. So in essence, that's the interventional component of Take Up Top: we provide an evaluation without finger-pointing, just a provision of data, feed those data back to journals, and hopefully changes occur. I'll talk a little later about the mechanisms for that. As well, we're really excited to partner with COS because of their global profile and resources.
And we see it as a key, exciting part of this initiative, one that will hopefully bear fruit, that it can be ongoing through a cloud-based platform, which we'll look at in a moment, and ongoing through all of you, the people interested in transparency, openness, sharing, and the like. We've built this initiative to be crowdsourced. Similar to Registered Reports Now, Take Up Top is coordinated through an OSF project page. The idea is that we can coordinate the project there, provide you with links to the evaluation tool with which you conduct the evaluations of journals, and also provide you with resources for giving feedback to journals. We're not going to cover that in too much detail today; in fact, if we have the time, David has mentioned that he'll give us a quick tour of what the project page currently looks like.

What we will focus on is the tool for the evaluation of policies. We're currently calling this the TOP policy evaluation form. There was some talk of a catchier name, the transparency factor, but we're open to suggestions in that space. In essence, the tool that we built in our OPERA team has come together with the tool that COS was using, and we've been working with David and Alex at COS to build this. The central idea is that it's quite simple; there's nothing too complicated here. We have a Google form that is accessible via the Open Science Framework project, and that form has four components. I'll show you an example in a moment, but the key points, we think, for facilitating ease of use are that the form fully guides you through conducting an appraisal of a journal, and that automatic scoring is built in, so you don't need to worry about any of that. The form outputs to a Google sheet that is also available via the Open Science Framework project. The hope is that we can marry that infrastructure up with the provision of resources for feedback to journals, to enable you and other interested practitioners to easily extract and use the data you want to communicate to your journals of interest.

On that view, let's have a little look at the form. This is a screenshot of the current beta version; we're in closed beta at the moment, so none of this is public yet. You can see that our OSF project page is always visible, to provide further guidance if required, and that there are four sections. The first section is very basic: we need to know a little bit about you as the rater and about the journal, so basic bibliographic details. Then the real body of the form, if you will, focuses on the key domains in the TOP guidelines, as Aidan spoke about before. It's around these issues: do the journals support the citation of data, code, and materials, and also the sharing of data, code, and materials, as part of the research process? Then the second section is more to do with the reporting of research, and also whether preregistration of key elements of studies is required in the planning phase. Whether the journal publishes replication studies is an important component too.
And the third, I'm sorry, the fourth section overall, but the third section in this body, covers what you might consider three other interventions currently available to help journals improve their championing, or their advocacy, of transparency and openness. Those are: becoming a signatory to the TOP guidelines; the registered reports initiative, and whether the journal offers that format; and whether the journal uses open science badges.

So this is the header for part of the form. As I mentioned before, it's quite simple. We just ask that the user follow the instructions, and for any of you familiar with clinical trials and systematic reviews, part of the design inspiration here comes from risk-of-bias tools: the idea is that we have set criteria with some construct and content validity for what they're meant to be assessing, and then some justification is required for the rating one provides on each of those criteria. This here is an example of the form for the first item, data citation. This is an extract from the TOP guidelines: data citation refers to the citation of data, code, or other research materials as original intellectual contributions. And here we see the rating scale for the criterion. Under the hood, or in the back end if you will, automatic scoring and calculation of a score based on the criterion you select occurs, so you don't need to worry about that. Rather, the user's job is just to select whichever option most closely mirrors the journal's guidelines to authors, and then copy and paste, in essence, text as a justification for that.

That is actually a relevant point that we haven't spoken about much: the target material for conducting these evaluations of journals is intended to be their online or downloadable guidance for authors. We are currently doing some work within our team to establish whether we would also consider editorial material published in the journal itself to form part of official journal policy, but the work we've done to this point, and how Take Up Top is currently built, focuses on guidance-to-authors documentation.

So, in summary: we are currently piloting the tool, as I mentioned, and the project page that coordinates everything is undergoing development. We do hope to launch soon. I can't give you a date, but please do contact us, either via email or through Twitter, if you'd like to be kept aware of our progress. At the least, when the project page goes live and public, you'll be able to check in there from time to time, and hopefully, if you wish to get involved, you can do so. I'm going to end there and hand back over to David. Thank you very much.

Thanks so much, Matt, for giving us that overview. I particularly like that when you were evaluating the 10 or 15 journals in pain research, your process was so effective that the journal where the report was going to be published (the name of the journal was just Pain, correct? Yeah.) delayed the publication until it could come out with an editorial making sure it was in line with the norms of the other journals in the field. It's a great demonstration that editors care what their peers at neighboring journals are doing and will do what they can to make sure they're not lagging too far behind.
So it's a good demonstration of an effective way to ratchet up expectations within a field. I went ahead and made the project page public so that folks on the webinar can take a look at it. As Matt said, it's in beta right now, so if you use it, that's great; nothing bad will happen. I'm going to share my screen now to give a little tour of it, and you're welcome to take a look yourself. I think anybody on this webinar can be considered an advocate for these types of policies, or at least very curious about them. I'll share the link through the chat window, and we'll share it again in an email follow-up, probably in the next day or two, for folks who RSVP'd for the webinar. Any trial run you do with it, any journals you evaluate using it, any language you see that could be updated, or workflow processes that could be changed: we'll happily take that into account so we can make this effective and efficient. With that, let me go ahead and share my screen and give a mini tour of what it looks like. Let me share this and get rid of a couple of things so I can see. Okay.

So this is the project page for Take Up Top, a crowdsourced project to evaluate journal policies and advocate for open science practices. The home wiki here has a description of what the project is all about and a little background on what the Transparency and Openness Promotion guidelines are. You can read more about those on the TOP home page; I won't go into too much detail right now. The best way to summarize the TOP guidelines is to show the summary table. There are eight different policy standards: citation of existing data sets; sharing of the data, code, or research materials generated during the course of a study; design and analysis, about how details are reported in the study; whether or not the study was pre-registered, with or without an analysis plan; and how the journal encourages the submission of replication studies. Each of these eight policies can be implemented at one of three levels of increasing rigor. Most journals don't implement standards that are compliant with the TOP guidelines; they might merely encourage data sharing, which is not very effective at getting sharing to happen. (I see something in the chat window here; I'll take a look at that right after this.) Level one is more of a disclosure standard: state whether or not something happened. Level two is a requirement, and level three is a reach goal: verify that it happened, for instance by having a peer reviewer try to use the author's code to reproduce the results. We know of about a dozen or so journals that do that; it's not yet mainstream, but it is a great demonstration that those steps are possible.

So that's a very short elevator pitch for the TOP guidelines. About 5,000 journals and publishers have shown support for them, and about 1,000 or so journals have implemented one or more of the policies to one degree or another. But we want to make it a standard expectation that either disclosing data availability or making the underlying data available is the baseline for publishing most scientific evidence. This page describes what to do to evaluate a set of journal policies and then how to reach out to those journal editors to encourage better implementation of these types of policies.
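To make the three-level scheme just described concrete, here is a minimal sketch of a scoring scheme over the eight standards, assuming each standard is rated 0 (not implemented) through 3 (verification). The real Take Up Top form computes its score automatically in the Google form back end, and its exact weighting may differ; this only illustrates the idea of summing per-standard levels into one total.

```python
# A minimal sketch of level-based scoring across the eight TOP standards.
# Ratings: 0 = not implemented, 1 = disclosure, 2 = requirement,
# 3 = verification. The real form's weighting may differ.
TOP_STANDARDS = [
    "data citation", "data transparency", "code transparency",
    "materials transparency", "design and analysis reporting",
    "study preregistration", "analysis plan preregistration",
    "replication",
]

def top_score(ratings):
    """Sum per-standard levels (0-3) into a total out of 24."""
    assert set(ratings) == set(TOP_STANDARDS), "rate all eight standards"
    assert all(0 <= level <= 3 for level in ratings.values())
    return sum(ratings.values())

# Hypothetical journal: encourages data citation (level 1) and requires
# data sharing (level 2), with nothing else implemented.
example = dict.fromkeys(TOP_STANDARDS, 0)
example["data citation"] = 1
example["data transparency"] = 2
print(top_score(example), "out of", 3 * len(TOP_STANDARDS))  # 3 out of 24
```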
As you can probably imagine, it's a pretty similar workflow to what we've seen with the Registered Reports Now initiative. First, identify the journals you want to look at. It could be two or three that you're very familiar with, or a dozen or more within a specific discipline; whatever area of focus you want to take on, you can decide for yourself. The next step is to evaluate that set of journal policies using the evaluation form that Matt showed. It's fairly straightforward, so I won't belabor it too much; there's a link to the form on the website. Matt demonstrated selecting among the different policy-language options based on what the journal says, so let me just give one demonstration here. I'm going to do Collabra, because they're a great journal and they have easy-to-read policies. The publisher is UC Press, I believe, and they're a partner journal of SIPS, the Society for the Improvement of Psychological Science. And importantly, you provide a URL to the policies, so let me put that in there.

Then the form asks you to read through and answer a couple of questions. I'm going to skip to the data transparency standard, because it's part of the core of the TOP guidelines. For sharing of data, it asks whether data sharing is not mentioned or merely encouraged; whether articles must describe data availability; whether all data must be made available to the greatest extent possible; whether data must be available and results verified prior to publication; or whether you're not sure. So we read through the editorial guidelines. Here we go, here are the citation standards; I'm going to copy the policy language into the justification field right below: this information must be "clearly and precisely documented" and made "maximally available". And here: authors using original data must make the data available in a trusted digital repository. As you can see, I Control-F for "data", and you can find the relevant parts of the author guidelines that way. Then you paste the justification in, and repeat the process with the other policies and standards.

At the end of that process, the responses are written into a publicly viewable evaluation sheet. Here it is with Collabra again; you can see the responses come in right here from the form. And importantly, the next tab over has the scoring mechanism, so you can see the final score based on the various policies espoused by that particular journal. Here's a group of journals that went through this process in the aging research community: you can see links to the author guidelines for these 27 or 30 or so journals. Many of them mention citation of data sets. A few of them require disclosure of data availability, but very few require data availability to the maximum extent possible. A few mention materials and code transparency, and several mention reporting guidelines. Most of them don't say much about study preregistration, except for what's required for clinical trials; we've revamped the scoring to make that a little clearer as well. And a few say that replication studies can be submitted, but most do not. Again, all of that results in a total number of points that a journal can earn. Now, going back to the project.
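To illustrate the kind of cross-journal summary just shown for the aging journals, here is a sketch that counts how many journals in an exported batch reach each level on each standard. The CSV layout assumed here (one row per journal, one level column per standard) is a hypothetical convenience, not the actual export format of the beta scoring sheet.

```python
# A sketch of a cross-journal summary. Assumes a CSV with one row per
# journal and one column per standard holding a level from 0 to 3;
# the file layout and names are hypothetical.
import csv
from collections import Counter

with open("evaluations.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

standards = [col for col in rows[0] if col != "journal"]
for standard in standards:
    counts = Counter(int(row[standard]) for row in rows)
    summary = ", ".join(
        f"level {lvl}: {counts.get(lvl, 0)}" for lvl in range(4)
    )
    print(f"{standard}: {summary}")
```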
Once you've evaluated those journal policies, whether at two, three, or a dozen journals, simply organize the results in a way that's easy to disseminate, such as what I just showed with the aging journals. The dissemination page right here has recommended language that you can use, again, to help you get started reaching out to those editorial boards. It's language we've used before, with just a couple of areas for you to fill in, and of course you can modify it as much as you want. And then the last step, as Xenia and Timo mentioned, is to make sure to record that outreach: first, so you don't have several people reaching out to the same journal, and second, so that if you want to follow up in six months or a year or two years, there will be a record of the original outreach, and you can say you're following up on outreach that happened back in 2019 or whenever it was. Again, this is in beta right now as we try it out with a couple of different journals, but there's no harm in trying it. Just make sure, if you do use it, to be in contact with us so we can learn from any feedback, and record any outreach so we don't unnecessarily pester anybody.

With that, I'm going to stop sharing, take a look at the chat window and the Q&A, and in the last five minutes we'll discuss any questions that have arisen. I think I saw a chat here. Oh, here's a mention of a boycott. Okay, I like your passion. "Thanks everyone for the work on Registered Reports Now. Can I suggest a fifth step? Ask signatories to boycott journals that don't adopt the registered report format, with the support of their peers. We're building a platform that measures support for particular practices like registered reports and then coordinates collective action when a critical mass of support is met. Chris Chambers and I have drafted an example registered reports campaign for the platform here: freeourknowledge.org." I think that went to everybody, all panelists and attendees, good, so you all have that link. "Would be great to get your thoughts on using the Registered Reports Now list to find people willing to sign the campaign when it goes live."

Yeah. So, I just tallied up a few minutes ago, and Xenia and Timo, please chime in with your opinions on that. I think that's a great idea. I know there have been several dozen folks who are co-signatories or letter-writers in the Registered Reports Now project, and I think it would be great to contact everybody who's been involved to sign that pledge as well.

Yeah, absolutely. Thank you, Cooper, for that question and suggestion. I'm obviously always in for a boycott, but I think several things have to be considered with regard to how realistic it is that people will actually sign it. One relevant number: in my case, I've approached 80 journals, and out of these, I think three or four have implemented registered reports. Boycotting journals that do not offer registered reports would leave people with basically no journals to publish their papers in; I don't think that is something people would consider at this point. If we reach a critical mass of journals, it becomes a more realistic goal and people might be more inclined to sign. The other number that is potentially relevant here is the number of signatories.
In my case, I actually had quite a few signatories, 140 people, but that is a very small number within the entire field, and I wonder how much pressure that actually puts on journal editors at the moment. Getting an email from 140 people in the field, including a number of influential and prestigious people, was apparently not enough for 72% of the journals to even write a response email, to be decent and respond in a polite way. So I'm all for it, I think it's a great idea, but I think we need to reach critical numbers, both in the pool of people willing to sign these letters and in the pool of journals within the field that actually offer registered reports. But I will look into it for sure, and we'll definitely talk to you about it.

I think that's a great idea too, Timo. And I think one feature of that platform, if I'm not mistaken, is that the boycott only goes into effect after some predetermined threshold of support is met, so that could be one way to address that logistical concern. If you just have a radical tiny minority boycotting, it's not very effective, but if you get 10% or 50% of a journal's potential author pool signing up before the boycott takes effect, that could be the critical ingredient to making it work. So we'll make sure to share this link so folks can look into it and see if that's what they want to do. That's really cool; thanks for working on that with Chris, Cooper. Thank you.

We are at time. We will reach out in a day or two with a recording and links to all this information, plus contact information. We very much appreciate the panelists who participated and who are leading all these efforts, and we appreciate all the attendees who are here. We hope you'll take some of these steps, or tell others about them, so that we can keep advocating effectively for these practices. Thank you, everyone. Thank you, David. Thank you, everyone.