So, first of all, I just want to let you know my name's Matt Page, I'm based in Melbourne, Australia, and I'll be hosting the next three and a half hours of Metascience 2021, so it's great to have you all here. I wish to begin by acknowledging the Wurundjeri people of the Kulin Nation, the traditional owners of the land on which I am broadcasting this session. I'd also like to pay my respects to their elders past and present, and to Aboriginal elders of other communities who may be here today. We've got a really exciting session coming up on incorporating open science into evidence-based practice, the TRUST Initiative, and I will now hand it over to the moderator of this session, Evan Mayo-Wilson.

Thanks very much for joining us on this Sunday morning or Friday afternoon or whatever it is in the time zone that you're in. We appreciate you making the time for it. Today we're going to talk about the TRUST Initiative. Sean, I think, is going to control our slides here. I'd like to acknowledge that this work has been funded for the last several years by Arnold Ventures, formerly the Laura and John Arnold Foundation. You can find more about the project on the website there. I'll briefly introduce each of the speakers, but let them say a little bit more about themselves as their turns come around as well. Can we have the next slide? So the motivation behind this whole project is trying to align the open science movement with the evidence-based practice movement. We know that there are a lot of people who are working on standards to try to improve transparency and openness throughout the scientific ecosystem. And there are a lot of people who are trying to improve the use of evidence-based policy and practice: trying to identify interventions that are known to be effective, things that are known to be less effective, and things that might be promising for future evaluations. Some of you probably work in both of those areas and bridge some of those topics in the same way that we do. What we're interested in doing is bringing those two things into better alignment and working with partners in both of those areas. And the next slide. So today we've got a number of presentations. Lauren Supplee is going to begin by talking about evidence-informed decision making and open science, particularly some of the work that she and others have done to improve the use of evidence in policy over the last several decades. Sean Grant is going to introduce the conceptual framework behind the TRUST Initiative and the evidence clearinghouse project, which is something that we've been working on together for a number of years now. Sina Kianersi is then going to discuss the process that we've used for expanding the TRUST project to include journals that produce a lot of the evidence that is used to inform evidence-based policy. And Kevin Naaman is going to introduce some of the work that we've done to try to improve the use of open science practices in journal policies and procedures, particularly thinking about what some of the facilitators and barriers to those different objectives might be. And then finally, I'll wrap up with a couple of comments and welcome your questions at that point. So with that, let me hand it over to Lauren. Can we have the next slide, Sean? And let's have an introduction to evidence-informed decision making.

Great. Thank you so much, Evan. And hello, everyone. So I'm Lauren Supplee.
I'm going to give a very high-level overview of evidence-based policy, how systematic reviews and evidence clearinghouses fit into that landscape, and then specifically how open science supports those efforts. I should say that I came to this project as one of the developers of a U.S. federal government clearinghouse, and clearinghouses are very interested in ensuring that the information they share is seen as trusted by the public and by their key stakeholders. So they're very open to these kinds of dialogues and discussions, and to seeing how this work can fit within their overall mission. Next slide, please. So in the U.S., starting around the early 2000s, and with a similar push in other countries such as the U.K., Australia, and Canada, the idea of evidence-based policy has persisted even through some very challenging political environments for science. It is really the sense that if we invest in things that have evidence of effectiveness, we have a greater chance of changing outcomes for the constituents that we care about. And while evidence-based programs are just one part of this overall conversation, they are a large part of it, and so that's the primary focus for today's comments. Next slide, please. So in the U.S., one of the ways that the Obama administration put forward the evidence-based policy movement was something called tiered evidence initiatives, which would give increasing amounts of funding to programs that had stronger evidence of effectiveness. As you can see here in the slide, programs with preliminary evidence would get a small amount, scaling all the way up to large amounts of money to replicate programs and often test them at scale. These efforts focused heavily on causal evidence, looking across research syntheses and randomized trials, and often also including quasi-experimental designs. Next slide, please. And so to inform the stakeholders that are making decisions about these evidence-based programs, these evidence clearinghouses, these research syntheses, really came about to support those constituents. What they do is follow a strict set of published standards specifying exactly how they're going to identify the literature, how they're going to rate that literature and assess its quality. And then they are there to disseminate information about these programs. They often rate outcomes ranging from social, physical, and mental health to academic outcomes, and they really focus on building trust in the research so that decision makers can focus on which programs are best for their communities and their outcomes, and can trust that the information being put forward is something they can believe in. In the U.S., these evidence clearinghouses and these tiered evidence initiatives were very high stakes, because they could affect billions of dollars in investment. So the decisions made in these evidence clearinghouses had very large consequences. Next slide, please. So there were a few concerns, given that high-stakes nature and given the importance of trust at the center. One is that there were some difficulties replicating results when scaling evidence-based programs. And if we wanted to fulfill the promise of evidence-based policy, we had to show that there was a good chance that, when scaled up, these programs would actually improve outcomes for constituents.
In addition, it takes a lot of time and money for a community to select a program, implement it, and get political buy-in, and it's equally challenging to then de-scale a program if it's not meeting the community's needs, because, again, there are a lot of political challenges in doing that. So reproducibility and replicability were really essential to these clearinghouses and to the evidence-based program and evidence-based policy movements. Next slide. In addition, there were real concerns around the credibility and the utility of the evidence. In particular, there were under-specified studies: we found a lot of missing details about methods, the intervention designs, and the populations and settings they were implemented in. That made it very hard for communities to pick these programs up, implement them, and then potentially have the chance of showing impacts. There were problems of reporting bias when we were running our systematic reviews. Oftentimes we weren't sure whether scientists had chosen not to publish a specific analysis. Pre-registration was very limited, so we weren't sure whether tests were pre-specified or post hoc analyses, which raises issues of potential data dredging, and without pre-registration it was very hard for us to know at the systematic review stage. And then finally, human error is always out there, but without pre-analysis plans or similar documentation, it was hard for us to detect. Next slide, please. And this continues to this day: there are perverse structural incentives within the scientific endeavor, and this slide just highlights a few of them. First, the sponsors or funders are making decisions about what research is conducted, how that research is conducted, and what actually feeds into the evidence ecosystem that systematic reviews can use. They determine whether funding is available and whether funded projects engage in open science. The research institutions, the universities, both through tenure and promotion policies and through discipline-specific incentive and reward structures, can support or discourage many of the practices that are critical for building the field and supporting open science. And then finally, as we all know, journals are more likely to publish a paper that's exciting and innovative. Scholars have reported that if they anticipate a lack of interest in a paper with null findings, they may just not publish it, which worsens the file drawer problem. And journals are still inconsistent in requiring open science practices, and this varies dramatically by discipline. Next slide. And so finally, open science is an opportunity, really, to align scientific practices with scientific ideals, accelerate scientific discovery, and broaden access to scientific knowledge. Transparency, openness, and reproducibility are inherent in scientific ideals, and they're critical for the evidence-based policy movement to move forward, and for the systematic reviews to continue to fulfill their mission of putting out trusted scientific evidence for stakeholders to use. And with that, I will turn it over to Sean to discuss the TRUST clearinghouse project.

Awesome. Thank you, Lauren, so much for that introduction. So I'm Sean Grant.
I've been leading the TRUST Initiative with Evan and Lauren from the outset. As Lauren mentioned, it's a collaboration that includes meta-scientists and government partners, and the objective is to increase the transparency of the intervention research used to support evidence-based policy making. So what I'd like to do in this section is review the conceptual framework underpinning our entire project and then provide an overview of findings from the first study in this project, which looked at the degree to which the clearinghouses that Lauren mentioned in her presentation have policies and procedures related to open science practices. To start, the constructs of our framework are based on the Transparency and Openness Promotion guidelines, more commonly known to the meta-scientists here as the TOP guidelines. Published in 2015, if you're not familiar, the TOP guidelines introduced modular standards, initially for journals but now for other organizations in the research ecosystem, to promote transparency and openness. These are things like getting researchers to cite and share their data, code, and materials; promoting reporting guidelines that aim to improve the reporting of study design and analysis in journal articles and other manuscripts; registering studies and registering analysis plans, respectively, particularly for confirmatory research like a randomized trial testing the effects of a social intervention; and encouraging replication studies and publishing studies that have null or negative findings, to counteract the publication biases that result from the structural incentives that Lauren reviewed. We also added two additional constructs from the clinical trials literature on identifying evidence-based medical interventions. These are public availability of summary results in structured repositories, like ClinicalTrials.gov for sharing summary results of clinical trials, and declarations of interest of investigator teams, like whether someone is the developer of a program whose effects they are evaluating. How we structured those constructs is based on something called the Donabedian model from health services research. This model provides a framework for those working in public health, like us, for evaluating the quality of health care services from an organization. According to this model, information about the quality of an organization can be drawn from three primary categories: structure, process, and outcomes. Structure involves the characteristics of the organization that provides the care, like policies that they've published about standards of care. Process involves the behaviors of the actors within the institution, namely the providers who are implementing those policies as part of patient care. And outcome involves the effects of those processes, so the resultant effect on the health status of the patient populations of a given health care organization. Putting this model together with our constructs, our TRUST conceptual framework is a structure-process-outcome model for evaluating the extent to which institutions in the scientific ecosystem, like evidence clearinghouses or the journals that provide clearinghouses with research, promote transparent and open research: the quality of their policies on open science. And as a mnemonic, a kind of memory device, we use four P's to talk about our framework.
So first there are the principles of open science, which for us are the aforementioned standards from TOP and the clinical trials area. Then there are the policies of organizations; for clearinghouses, these would be the handbooks they publish that explicitly codify their standards of evidence, for example standards on open science. Then the procedures of organizations, like the methods and tools that clearinghouses use to evaluate whether studies use those open science practices. And then the practices of organizations, the information that clearinghouses report on their websites about the use of open science in studies that they've reviewed. So in our clearinghouse project, we applied this conceptual framework to 10 clearinghouses that are sponsored by the U.S. Departments of Education, Health and Human Services, Justice, and Labor. As Lauren mentioned, these clearinghouses' ratings of program effectiveness, of what they designate as evidence-based practices, are highly consequential, because they're used to inform policy decisions through things like tiered evidence grant making that lead to the awarding of literally billions of federal dollars to improve health, social, educational, and other kinds of outcomes. To evaluate the degree to which these clearinghouses consider open science in their reviews of programs, we downloaded their handbooks and other documents from their websites. We explored structured fields of intervention entries on their clearinghouse websites, so what they report about an intervention that they've reviewed. And we also worked in collaboration with clearinghouse staff; part of that was having them share any relevant information about their policies, procedures, and practices that we did not identify through our review of publicly available information on their websites. Below, you can see a citation and a QR code for the paper with all this information, if you want to read it later. In sum, we found that seven of these clearinghouses consider at least one open science practice in at least one of their policies, procedures, or practices. In order of frequency, the open science practices considered by clearinghouses are most frequently replication, then public availability of results, then conflicts of interest, then reporting guidelines, and then lastly registering studies and sharing protocols. Phrased differently, there were three clearinghouses that did not make any mention of any open science practice at any level of our structure-process-outcome framework. And we found that none of the 10 clearinghouses consider analysis plan registration, sharing data, code, or materials, or standards for citing data, code, or materials. Of the practices that they do consider, replication is the only one that actually influences whether an intervention is rated as evidence-based by clearinghouses: for five clearinghouses, a replication is required to receive their highest rating for strength of evidence, the greatest support that they think something is an evidence-based practice. But a concern we had is that clearinghouses do not synthesize the cumulative body of evidence on programs using meta-analyses, like the kind shown here in a forest plot, but rather they do vote counting: they count the number of studies with a p-value less than 0.05 for a given outcome.
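To make that contrast concrete, here is a minimal sketch in Python with made-up numbers; it is an editorial illustration of vote counting versus fixed-effect inverse-variance pooling, not any clearinghouse's actual procedure, and every figure in it is hypothetical.

```python
# Editorial sketch: vote counting versus meta-analytic pooling of the same
# (made-up) evidence base. Numbers are hypothetical, for illustration only.
import math

# Hypothetical studies of one outcome: (effect estimate, standard error, p-value)
studies = [
    (0.30, 0.12, 0.012),
    (0.10, 0.15, 0.505),
    (0.25, 0.20, 0.211),
]

# Vote counting: tally studies with p < 0.05, ignoring magnitude and precision.
votes = sum(1 for _, _, p in studies if p < 0.05)
print(f"Vote count: {votes} of {len(studies)} studies significant")

# Fixed-effect inverse-variance pooling: weight each estimate by 1 / SE^2.
weights = [1 / se ** 2 for _, se, _ in studies]
pooled = sum(w * est for (est, _, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
print(f"Pooled estimate: {pooled:.3f} (SE {pooled_se:.3f})")
```

Under vote counting, this hypothetical body of evidence looks weak (one of three studies "significant"), while the pooled estimate uses all of the information, including effect sizes and precision, and is considerably more precise than any single study.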
As such, we recommend clearinghouses consider ways in which current standards on open science practices, while commendable, could actually encourage questionable research practices, like multiple hypothesis testing and selective non-reporting of studies and results, in order to get on these lists that are tied to funding and prestige for scientists' careers. Aside from replication, clearinghouses do address five other practices. Five make their outcome-level results publicly available on their websites in a standardized, tabular format. Four clearinghouses articulated reporting standards for study design and analysis. One clearinghouse reports study registration numbers on program entries on its website; another prioritizes reviewing studies that have either a study registration or a publicly available protocol. And three report conflicts of interest on program entries on their websites. Again, while the above are commendable, it's worth noting that none of these are actually required for an intervention to be designated as evidence-based. It's information that's collected and reported out for consumers of the evidence to consider. So in conclusion, we see the above results really as an epidemiological baseline, a kind of meta-epidemiology, of where clearinghouse policies, procedures, and practices stood as of October 2020, which is when we stopped collecting documents from websites. And we provided our findings as feedback to the several clearinghouses we worked with as part of this project. I really want to emphasize what Lauren said: they deserve a great amount of credit for pushing the methodological rigor of intervention research forward in the social sciences over the last 20 years. That's in part due to their leadership in updating standards of evidence in light of feedback like ours and in light of changing scientific norms like the open science movement. So in addition to our findings, we shared a draft of TOP guidelines for clearinghouses, adapting TOP to the clearinghouse context, which at least one clearinghouse, the Home Visiting Evidence of Effectiveness clearinghouse, or HomVEE, has used to explicitly articulate reporting standards on open science practices for program evaluators. This is in their new guidelines for authors, asking authors to report each of these things in their studies. But one key piece of feedback we received from clearinghouses is that these open science practices need to become more common in the research literature that they review before they can make them requirements for evidence-based practices. So they recommended that we start working with the journals that supply them with evidence on intervention effects. And now I'm delighted to hand this over to two PhD students at Indiana University who are taking leading roles in our current project on promoting open science at journals that publish evidence on intervention effects. So with that, Sina, it's all yours.

Thank you, Dr. Grant. Hello, everyone. I'm Sina Kianersi, an epidemiology PhD candidate at Indiana University. In the next few slides, I will summarize the TRUST process for rating journal policies, procedures, and practices. Next slide, please. In a nutshell, in the TRUST project we identified journals of interest and determined which journal documents were eligible for rating. Next, we captured the relevant documents and rated them using structured instruments. These instruments were developed as part of the TRUST study.
We then analyzed the data and assessed reliability and agreement measures for these instruments. Lastly, we provided feedback to journals about the transparency of their policies, procedures, and practices. We further plan to publish our findings on the OSF website and in peer-reviewed journals. Next slide, please. To identify eligible journals, the principal investigators searched the federal evidence clearinghouses' reports from the previous study. In the current study, we included all journals that published at least one report of an evaluation used by a federal clearinghouse to support the highest possible rating for an intervention. Next slide, please. Trained graduate research assistants independently searched each journal's website for its instructions to authors and other policy documents. For each journal, research assistants downloaded and saved dated copies of the policy documents found on the journal website. For procedure documents, a trained graduate research assistant initiated a manuscript submission through the journal's electronic submission system, took a screenshot of each step, and then saved the screenshots for assessment. And lastly, for journal practices, research assistants screened the titles and abstracts of potentially eligible articles using an online form. A principal investigator then reviewed full texts and identified one eligible article per journal. Next slide, please. As mentioned, in the TRUST project we developed journal rating instruments. To develop each rating instrument, a principal investigator drafted a list of questions organized by the standards in the TOP guidelines, to promote the instruments' reproducibility and scalability. Each instrument includes factual yes/no questions and detailed instructions, to promote efficiency and ensure consistency of the data, and the instruments use skip logic. Research assistants used these structured instruments to rate journal policies, procedures, and practices. Next slide, please. We assessed the stringency of open science standards. There are eight modular standards in the TOP guidelines, plus two additional ones added later. Each standard can receive a level from zero to three, with three indicating maximum stringency. TOP Factor is a quantitative metric that assesses the extent to which journals have adopted the TOP guidelines in their policies. It is calculated as the sum across the modular standards, and it ranges from zero to a hypothetical maximum of 29, with higher values indicating greater adoption of the TOP guidelines. Next slide, please. We calculated the TOP Factor for the 341 journals in the TRUST study. This figure shows the distribution of TOP Factor for rated journal policies: the y-axis is the proportion of journals, the x-axis is the TOP Factor score, and the red line is the median score. The most common TOP Factor score was zero, most journals had a TOP Factor score of zero or one, and no journal had a TOP Factor score above 16. Next slide, please. Unpacking the journals that do have policies, here we see the TOP level for each of the 10 TOP standards. Among journals with policies, requirements for open science practices, that is, levels two and three, are rare. In addition, there is a lot of variability across policies: some standards are taken up to level three while others sit at level zero. Additionally, individual journals are not taking up the same levels across all standards. Next slide, please. We are also currently assessing the reliability of the rating instruments, and we have some preliminary results of these analyses that I will share shortly.
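The TOP Factor scoring just described reduces to a small computation. Below is a minimal sketch in Python, using hypothetical standard names and a made-up journal rating; it is an editorial illustration, not the Center for Open Science's or TRUST's actual code, and it does not encode the per-standard caps implied by the stated maximum of 29.

```python
# Editorial sketch of TOP Factor scoring: each of the ten standards gets a
# level (0-3), and the TOP Factor is the sum of those levels. The standard
# names and the example ratings below are hypothetical, not real data.

TOP_STANDARDS = [
    "data citation", "data transparency", "code transparency",
    "materials transparency", "design and analysis reporting",
    "study preregistration", "analysis plan preregistration",
    "replication", "registered reports", "open science badges",
]

def top_factor(levels: dict[str, int]) -> int:
    """Sum per-standard levels; standards without a policy default to 0,
    a simple stand-in for the instrument's skip logic (no policy means
    later questions are skipped and the standard scores zero).
    Note: the talk cites a maximum of 29, so not every standard spans the
    full 0-3 range; those per-standard caps are not encoded here."""
    for name, level in levels.items():
        assert name in TOP_STANDARDS and 0 <= level <= 3
    return sum(levels.get(name, 0) for name in TOP_STANDARDS)

# A hypothetical journal with a few low-level policies:
print(top_factor({"data transparency": 2,
                  "study preregistration": 1,
                  "replication": 1}))  # -> 4
```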
Here, the subjects were the TRUST journals, and the raters were graduate research assistants. All ratings were conducted independently. For each rating instrument, we assessed inter-rater agreement and inter-rater reliability for the items that needed to be rated by all raters. Agreement indicates the interchangeability among raters. We assessed this using the overall agreement and the specific agreement of the yes and no rating categories: a yes response indicated that the journal was following the transparency guideline, and a no response indicated that the journal was not following that guideline, for the item in the structured instrument. Next slide, please. Overall, our raters agreed when a journal did not have a policy. It looked like it was easy for them to detect when a policy was absent, and no policy was the norm. But we ran into trouble when we had to decide the stringency of a policy, leading to less agreement when there was a policy for an open science practice. Potential reasons for less agreement in these instances are the quality of our process, so a need to improve training and the language in the tool; the quality of the wording of journal policies, so a need for journals to write clearer policies; and the quality of TOP Factor itself, so a need for the Center for Open Science to revise the language and levels of TOP. Next, Kevin Naaman is going to talk about TRUST's theory-informed survey of journal editors.

Thank you, Sina. Hello, everyone. As Sina said, my name is Kevin, and I'm pursuing a double major at Indiana University in epidemiology and inquiry methodology. I'm very excited to be with you all today and have this chance to talk about the theory-informed survey that we used to understand the barriers and facilitators journal editors experience when it comes to adopting the TOP guidelines at their respective journals. Next slide, please. To start, I just want to touch on how we designed the survey to meet this objective. We did that by first referring to the theoretical domains framework, which is represented by the green circle in the figure to the right. By implementing this framework in our survey, we're using a behavior change approach to increase the uptake of TOP. This was accomplished by first selecting our targeted behavior and the associated factors that influence it, our targeted behavior being increasing the uptake of TOP among journal editors. In the near future, we're going to use the results from our survey to move to the outer rings of this framework, so that we end up delivering targeted interventions to increase the uptake of TOP. Next slide, please. We invited 340 journal editors to participate in our survey. In the end, we had 88 editors included in our analysis, so a response rate of 26%. The reference at the bottom of this slide is to our Open Science Framework page. As of now, we have a copy of our survey up and publicly available there, and in the near future we're also going to post our manuscript, which is currently in progress. On the next slide, I'm going to discuss the first section of our survey. In this first section, we asked two questions to assess editor support for the adoption of the TOP guidelines. We measured and defined support as an editor who would implement at least one TOP guideline at level one or higher at their respective journal. And on the next slide, we're going to visualize the results.
So here we can see clearly that most of the editors who participated in our survey, that 26%, do support the adoption of TOP, and they also believe that other editors in their discipline support it as well. For actual support, we phrased the question as: as editor, I support the adoption of the TOP guidelines at [journal name], and we used logic to insert the editor's journal name so they knew it was about their journal. For perceived support, we phrased the question as: other editors in my discipline support adoption of the TOP guidelines at their respective journals. The key takeaway from this is that actual support is pretty high, but perceived support is lower. Most notably, we can see that 36% weren't really sure one way or the other whether other editors in their discipline support adoption of the TOP guidelines. Now we're going to move on to the second section of our survey. Here we assessed the factors that influence editors' adoption of TOP. Again, as mentioned on my first slide, we used the theoretical domains framework to design a survey that would inform an evidence-based approach to changing journal editors' behaviors. We did this by creating 14 survey questions, one for each factor, that is, one for each TDF domain. On the next slide, we have the results visualized for these 14 different items. Just to quickly orient you to this graph, it's sorted in descending order of agreement, and we can see that over 50% of editors agreed with 11 of our 14 survey questions. I'm not going to go into much detail on each and every question, so I'll just focus on the largest facilitator and the largest barrier from our survey results. The largest facilitator was professional identity, a statement we phrased as: it is part of my role as editor to maintain instructions for authors that reflect current best practices. Here we can see that an overwhelming 93% agreed with that statement. As for the largest barrier, that was journal goals, at the bottom of this graph. That question was phrased as: compared with other editorial tasks, adopting the TOP guidelines is a higher priority on my agenda. Here you can see only 26% agreed with that statement, a substantial proportion neither agreed nor disagreed, and 37% just outright disagreed. On the next slide, I'm going to give a high-level overview of the major takeaways from our survey results. We found that most editors have the capability and opportunity to adopt at least one TOP guideline at level one or higher, but many lack the motivation to do so. And although over 50% of editors agreed with most of the associated factors in the opportunity and capability sources of behavior, there were definitely instances where a substantial proportion of editors didn't agree. So, for example, some editors did not agree that they were knowledgeable about the content and objectives of the TOP guidelines, or that they had behavioral regulation. Behavioral regulation is just a fancy term for the TDF domain that represents whether or not editors are able to create a clear plan to promote changes to the instructions for authors at their journal. The same logic also applies to the opportunity source of behavior, talking about social influences in particular. Social influences can simply be thought of as editors having colleagues who would approve of them adopting the TOP guidelines.
So here we can see many didn't really feel that they have those social influences. The same can be said for resources; resources refers to editors having editorial systems and tools to adopt the TOP guidelines. And lastly, our results also showed that the motivation source of behavior had three factors where the majority of respondents did not agree. Most editors did not agree with the statement that adopting the TOP guidelines at their journal is a high priority, as mentioned on the last slide. And almost half of the editors weren't really sure whether they would receive positive recognition from their colleagues if they promoted changes to the instructions for authors; that's the factor I'm referring to as reinforcement. On the next slide, I'm going to end my portion of our panel discussion by highlighting what I personally found to be the most fun part of this project. I was fortunate enough to have the opportunity to lead the creation of a Shiny application that provided individualized feedback for each of the 300-plus journals included in our sample. For instance, if we look at this graph over here on the left, we can see an example of a journal that received a score of zero on the total TOP Factor, the aggregate measure that Sina talked about earlier. We didn't want to just tell them where they were relative to the other journals in our sample; we also wanted to provide them specific text feedback, so that they knew where they stood but also how they could improve, or level up, so to speak. And we didn't stop there. We didn't just provide the aggregate TOP Factor score; we also had separate histograms and feedback for each of the 10 different TOP Factor items, because we don't assume that every journal thinks each and every one of these items is the most relevant for them. So we wanted to show them where they scored, and if they had an area that they wanted to target, we provided them a score and feedback so that they could improve. I encourage people to visit the link to our Shiny application so that you can see the type of feedback we provided for each of the TOP Factor items. And I just want to acknowledge that we realize other researchers in the health sciences, social sciences, and beyond are doing similar work, so we welcome collaborators to reach out to us, and we're also happy to share our research materials and the code for our Shiny application if others are interested in conducting similar work with us. And with that being said, I will now turn our discussion over to Evan so that he can discuss our next steps with the TRUST Initiative.

So thanks, all, for a great summary of the work to date. I want to highlight a few of the key points. Sean, can we have the next slide? As Sean mentioned, we've shared the results that we have so far with several clearinghouses. We've seen some uptake already by HomVEE. We also have a paper, now online in Prevention Science, in which we describe some of the options that clearinghouses have as they take up TOP. Next slide. Here's an example of the options that you might have if you're considering study registration as an issue at an evidence clearinghouse. At level one, you might think about reporting whether the research that a clearinghouse has rated was registered completely and prospectively.
You could just include that on the website, for instance. At level two, your rating might be influenced by the study registration status, so you might give a higher value to studies that have been registered prospectively. Now, depending on the clearinghouse's legal and policy contexts, that may be more or less difficult to do, so it may be that some clearinghouses, without great changes, are only able to implement at level one. At level three, we have verifying that complete and prospective study registration was done, that it conforms to all of the requirements from ICMJE, from WHO, et cetera. Now, importantly, clearinghouses might have to do some of this work themselves. But if this work is being done by other stakeholders in the ecosystem, for example if it is being done by journals, and clearinghouses could rely on journals to have verified that these things were done in advance, these sorts of things could eventually be incorporated into clearinghouse standards of evidence without lots of additional work for the clearinghouses per se. Can we have the next slide? We've also started working with some international groups. The What Works Network is a group in the UK that leads a major program of educational research, and they are considering some of these issues in their funding and review criteria. We're looking for other partners, both domestically and internationally. So if you work internationally in groups that do the sort of work that clearinghouses do, or you work in an ecosystem like this one, please do reach out to us. We're always looking for new collaborations and new partners. Can we have the next slide? We're currently working on some feedback for the Center for Open Science and the TOP coordinating committee. One of the things that we learned from breaking the TOP guidelines into a series of objective yes/no questions is that it's really hard to answer some of these questions. We had to define exactly what constituted a yes answer and what constituted a no answer, and we were able to do that reliably for some of these items, but not for all of them. As Sina noted, we found journal policy language that was confusing. We also found that some TOP items were not sufficiently clear, and some levels were not sufficiently distinct from each other for us to rate them reliably. We hope that that can inform future development and improvement of the TOP guidelines. We also hope that greater clarity in the TOP coordinating committee itself and in the TOP guidelines will help us give better feedback to journal editors about how they can incorporate these principles in the future. Can we have the next slide? So like many initiatives, the TOP guidelines initially focused on policy. But as Sean and Kevin explained, best practices in implementation science tell us that policies are only one level at which we need to evaluate behavior change interventions. It's theoretically possible to have very stringent transparency policies that aren't followed in practice, or to have no transparency policy at all but a community in which transparency is the norm and we see a lot of transparent and open behavior. So as we think about improving transparency and openness in the future, we want to encourage others to think about these other levels: the procedures and the practices themselves.
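To make the three study-registration levels Evan walked through a moment ago concrete, here is a minimal sketch from a clearinghouse's perspective. The field names and the `handle_registration` helper are hypothetical, introduced only for illustration; this is an editorial sketch under those assumptions, not any clearinghouse's actual standard.

```python
# Editorial sketch of the three levels described above for the study
# registration standard, as a clearinghouse might operationalize them.
# All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Study:
    registered: bool               # was the study registered at all?
    prospective: bool              # registered before data collection?
    registry_entry_verified: bool  # was the entry's completeness verified
                                   # (e.g. by a journal, per ICMJE/WHO)?

def handle_registration(study: Study, level: int) -> str:
    if level == 1:
        # Level 1: merely disclose registration status on the website.
        return f"Report: registered={study.registered}, prospective={study.prospective}"
    if level == 2:
        # Level 2: registration status influences the evidence rating,
        # e.g. prospectively registered studies receive a higher value.
        return "Upweight rating" if study.registered and study.prospective else "Standard rating"
    if level == 3:
        # Level 3: verify complete, prospective registration, possibly
        # relying on journals that already checked it in advance.
        ok = study.registered and study.prospective and study.registry_entry_verified
        return "Meets standard" if ok else "Does not meet standard"
    raise ValueError("level must be 1, 2, or 3")

print(handle_registration(Study(True, True, False), 2))  # -> Upweight rating
```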
As we think about the implementation of TOP, we want to examine whether procedures like online submission systems encourage transparency and openness. For example, do they have fields in which people can enter study registration numbers for the trials that they've done, or DOIs for the data and code that they want to share? Do the online submission systems generate warnings if those fields are empty, or are those fields required to complete the submission process? All of these things can help us implement policies in practice, or encourage good behaviors even when there aren't policies that require them. Next slide. So based on these results, and using behavior change theory as Kevin described, we're now identifying opportunities to change stakeholder behavior. Perhaps better policies will be needed, but perhaps we need to target other things, like people's motivation to engage in transparent and open science. We see that people support these guidelines in principle, but we also find that most editors tell us they have absolutely no plans to implement the TOP guidelines, and that's something that we need to figure out how to change. Perhaps we need to develop better systems that help authors understand what's needed to adhere to journal policies, and that might be something that we do alongside not only journal editors but also publishers and other stakeholders in this ecosystem. We need to help journal staff and editors confirm quickly whether policies have been followed, for example. And the next slide. So to ensure that social interventions and social policies are based on reliable evidence of effectiveness, we want to promote transparency and openness throughout the evidence ecosystem, as Lauren described. The TRUST Initiative has so far focused on evidence clearinghouses and on the journals that produce the evidence that clearinghouses are using, but there are many more stakeholders involved, and we're interested in expanding the project to work with more of them. So as we go forward, we welcome opportunities to work with other stakeholder groups. If you're a researcher, a policymaker, an editor, or a funder who's interested in this sort of work, please do reach out to us. We'd love to work with you and collaborate on future projects. And can we have the last slide, Sean? Here are various ways to contact us. Please feel free to email any one of us, or all of us if you like. You can also reach out to us on Twitter. I see that we've got a couple of comments here that we'll respond to in a moment, and if you have any questions, we're very happy to take them at this time. So I'll start. I see one question in the Q&A, and I might throw this to Sean to kick us off: journals face a lot of complaints from authors if they implement too many requirements on initial submission. How can journals implement the TOP guidelines without losing authors to more lenient journals?

Yeah, thanks. It's a fantastic question. And thanks, everyone who's come today and is engaging in this Q&A. I don't know if I have the answer, but as an example, this year I started a position as the methodological transparency editor for a journal, JREE, that publishes quite a lot of trials of educational interventions. And a big part of that role has been turning this framework and the tools that we created for rating policies, procedures, and practices into tools that the journal uses to implement policies, procedures, and practices.
A lot of the debate has been, on the one hand, trying to respect methodological rigor: for randomized trials, where the point is a confirmatory test of the effects of an intervention, often in these kinds of high-stakes environments, the journal wants to publish trustworthy evidence. But then on the other hand, not immediately implementing everything at the highest level, where you get pushback from authors; folks might not have the capabilities right now to do things like share their data with full metadata in a computationally reproducible way, and sometimes there are legal or proprietary concerns about sharing data. So we started with, I think, what was the intention of the TOP guidelines, dipping the toes in the water: saying, here's the wider framework, here are the practices, here's the most stringent level at which we could implement them, and we're going to start at level one for all of them. That does not require folks to implement these practices, but it does require folks to be transparent and explicit about whether or not they registered their study, shared their data, shared their code, and it then provides things like open science badges for those who have done that well. And we have an eye towards using the changes we're making at the same time to the submission systems and our article templates to make this easier for authors in the submission process, and then evaluating that data, as if this were the implementation of an intervention that we want to do a formative assessment of: get feedback from authors, see what kind of data we get from submission systems, and use that in a continuous quality improvement process, informed by data from our authors and submission experiences, to see whether we could implement at a higher level going forward, or whether there are things we need to rethink at the level we're already implementing. So I'm happy to chat with anybody about trying out and piloting that kind of continuous quality improvement process at a journal that you're an editor for.

Thanks, Sean. Anybody else want to chime in on that one before we go to the next question? Lauren, I might ask you to take a stab at the next one. This question says: there's a network, the International Committee of Medical Journal Editors, that many of us are familiar with. Is there a network of editors of social intervention journals that could help implement some of this?

Yeah, that's a good question. I don't know; maybe others on the call here know. I actually don't know of a coordinated network. There are professional associations that often have journals associated with them, so that might be an avenue to think about for reaching journals more broadly, and many of those associations have multiple journals under them. But I don't know, Sean, if you're aware of any similar body.

No, it's come up in conversations throughout the project, and I think we've left it at: man, it would be great if there were one. So, anonymous caller in radio land, if you want to continue that conversation, please get in touch with us. I also see that we have at least some journal editors on the call, so if anybody who's joining us as an attendee wants to chime in in the chat or in the Q&A, please do. Matt, I'm not sure that I can see any hands if they're up.

I don't see any hands, but have you answered the question from Richard Nguyen?
Journals face a lot of complaints from authors if they implement too many requirements on initial submission. How can journals implement the TOP guidelines without losing authors to more lenient journals? Yeah, I think Sean took a stab at it. I might add that I know people worry about a race to the bottom. I think for leading journals this is less of a concern, and that is often a place for us to start: with the journals that are seen as the leaders in the field and that are not worried about losing submissions if they make things more challenging. If you submit a clinical trial to the New England Journal of Medicine, you'll be submitting quite a lot of material along with it, and you're willing to do that because it might be published in the New England Journal of Medicine. I think that's true in lots of disciplines: there are journals in which people want to see their work, and those journals can do this sort of thing without the threat of losing submissions. But I do understand that that's a real concern for smaller journals.

What about collecting additional data post-publication? I'm not sure exactly what you mean. Can you clarify that question? I can allow you to talk, Richard, if that's what interests you. Yeah, if you're able to talk, please do put your hand up. Just giving you permission to speak.

Hi, can you hear me? Yeah. Oh, hi. I was just wondering whether... The way a number of journals handle this is they make initial submission pretty easy and then they collect more requirements on revision or final acceptance. And I was suggesting that maybe even after publication you could go back to the author and say, if you want this badge or this recognition for your manuscript, maybe you could do X, Y, and Z. See it as a continuous process rather than a sort of gated process.

I can respond to that, Richard. Thank you, yeah. So for the journal where I'm the open science editor, the Journal of Research on Educational Effectiveness, it's published quarterly and has, I think, about five or so articles per issue. We have taken that approach because we feel like it's something we can handle within our workflow. If something makes it to the revise-and-resubmit stage, particularly if it seems like it's going to make it to publication, we've created forms as part of that process to ask for some of the information on these open science practices, as well as to help folks get open science badges. A recent example: someone went for a materials badge and provided a Mendeley repository with that information. I noticed they also had data in there, so I went back to them and said, we could give you the open data badge as well, but there's some metadata missing; here are some standards; let's work together to get that open. And that led to an additional badge. For a larger journal with more submissions, or, to Evan's point, a journal that perhaps does worry about steering folks away because it's not one of the ones that gets more submissions than it can handle in a given year, I could see real implementation questions there about feasibility for the journal as well as acceptability for the authorship. But I love that thinking through that stage of the publication life cycle.
And Sean, if I could just add: I think this is where there are synergies between these government evidence clearinghouses and the journals. The evidence clearinghouses could set a standard saying that, in order to get to a certain threshold, you must have met one or more of these standards. But even the encouragement of it, given the dollars attached to it, actually starts to nudge the field in that direction, which may then push people to say to their journals, we want recognition, we want to acknowledge that these practices have been followed, so that our program can be eligible for these government dollars. So I do think that there is a nice balance back and forth between these. And as the presentation mentioned, the government clearinghouses often can't set a hard threshold until enough of the field is there, because these communities do need to have something to implement. It's hard for a government agency to say, oh, we're going to require evidence, but there's nothing for you to actually implement. But I think we can nudge towards that tipping point through these synergies.

Can I acknowledge that Jeannie Barber has commented that she's going to talk about COPE later today. COPE, the Committee on Publication Ethics, is a big international group of journals that covers lots of different disciplines. I'm actually not sure, Jeannie would know, whether there's a subgroup within that of social-intervention-oriented journals, but there are certainly journals that focus on social interventions within that group. There's also a comment here: without overloading the field with more badges, is there a need for badges related to the TOP guidelines? I might take that one to start, and then let others jump in. One of the things we've focused on throughout this project is the importance of having metadata, of having structured data. Journals, we're seeing, are collecting lots of bits of information about transparency and openness, but sometimes it's in unstructured text boxes during the submission process, where you could enter all sorts of things. Then, if you were trying to assess the transparency of that, a machine or a person would have trouble doing it; we had trouble assessing what those fields were meant to contain and what they did contain when we looked at the publications. So whether we call them badges or something else, I think having metadata about these practices is very important, and that would help clearinghouses and other end users assess the trustworthiness and the validity and the generalizability of the results. Whether it's a badge, a field in an article, or some other way of tagging things that have been registered, that have open data, that have statistical analysis plans, I think that's very valuable. I know that badges are meant to serve lots of different purposes, partly to encourage these practices and to give people credit for doing them, but structured data of that kind has lots of different applications, and I'm sure that people will find more applications if we have it.

OK, I can't see any other questions or hands raised, and we've got just about five minutes left. So unless there are any final questions or comments (I can't see any), I want to thank the TRUST Initiative team for their fantastic session this morning; it's given us a lot of food for thought.
We are now moving into the next session, which will be some lightning talks. But if you want to talk more about these issues, I encourage everybody to head over to Remo; hopefully the panelists will be heading over there too. I'll put a link in the chat to the Remo space where you can speak with the speakers and get some more insight into the fantastic work they're doing. So thanks a lot, team, for a great job. We'll take a quick five-minute break, so maybe go grab a little tea, and we'll start again here in about four minutes with four amazing lightning talks. So thanks, everyone, for joining. Thank you, Matt. Thank you, Whitney, and thank you to everybody for your great questions and for joining us today.

Okay, so welcome, everyone, to this session of four lightning talks. And welcome wherever you are in the world; it's probably quite late at night for some of you in America or Europe, but it's a nice bright early morning for us down here in Melbourne, Australia. In these lightning talk sessions, each speaker will talk for maybe five to ten minutes, and what we'll do is wait until the end of all four speakers and then open it up for any questions. So if you do have any questions, you can put them in the Q&A or in the chat, or just raise your hand, and we'll get to them at the end. First off, I'll introduce Cooper Smout, who can go ahead and present.

Thanks, Matthew. Just going to share my screen. Is that working? You should be good; let me know if there are any problems. Okay, cool. Hi, everyone. Thanks for joining, particularly those of you who are staying up late for this. I'll be giving a quick introduction to project Free Our Knowledge, which is a collective action platform for researchers. This project is based on the premise that we're trapped in a giant collective action problem in academia. There's some idealistic future, we can call it open science land, that we all want to get to, but under the current system, where people are primarily rewarded for publications rather than other open practices, it's difficult for us to make progress towards that future. And we know that there's not really a technological barrier anymore. Brian Nosek has made this really cool slide that I'm sure many of you have seen before, and the main point is that we've already achieved the infrastructure and the user interface; there are really no technological barriers to the adoption of open science practices. Where we're stuck is at this cultural barrier, at the community level. We also know through research that there's a high level of support for open science practices: these studies showed over 80% support for open data and open access. But when it comes to actually doing the practice, there are much lower rates of adoption. The psychologists in the room might recognize this as being a bit similar to a prisoner's dilemma, where everybody acts in their own interests and ultimately this hurts the collective, but also hurts the individual. But the main point I want to make here is that this is actually very different from a prisoner's dilemma, because in that paradigm we don't let people talk to each other, so they don't have the capacity to communicate their intentions to act. We can do that. And we have the internet, which could facilitate this on a global scale.
In recent years there has been a precedent for this type of platform, which is called a conditional pledge platform. Probably the most well-known example is Kickstarter. The way this platform works is by taking conditional pledges: pledges by people to act in a certain way if and when you reach a critical mass of support. Kickstarter has funded thousands of projects and raised billions of dollars of capital to get projects off the ground. A probably less known example is CollAction, which takes the same process but applies it to behavioral actions; their focus is on environmental issues, social issues, and that sort of thing. What project Free Our Knowledge is trying to do is tailor the same solution that has proven successful in other spheres for the research community. The way it works is that anyone can propose a campaign using our GitHub repository. This is basically just asking people to adopt a particular action if and when some critical mass of support is met. The action could be something simple like posting a preprint; it could be sharing some data; it could be posting an open review to a platform; basically any open science or cultural practice that you would like to see your community adopt. Then we go through a bit of a process of developing it on the GitHub repository, and once it's ready, we put it on the website to put it out to the crowd. This is the point at which anyone in the world can pledge to adopt that action if and when the threshold is met. At this point, people can remain anonymous, which means that they're protected from any kind of risk or potential repercussions to their career. And then finally, if we reach that threshold, everybody is listed on the website and directed to carry out the action together. We spent a good chunk of last year developing open processes so that now anyone can propose and develop a campaign using our GitHub repository. Using this process, we've got probably around 15 proposals at the moment that are desperately in need of development, and some of those have recently gained a lot more momentum. One of these proposals is to share your journal-commissioned reviews. The basic idea here is that we spend a lot of time collectively reviewing articles, and a lot of the time these reviews just get wasted and locked behind closed doors. So what this campaign asks people to do is: any time you review an article that is also available as a preprint, you go along and attach that review to the preprint itself. This is a campaign that Professor Waltman proposed and that we're currently developing on GitHub. Another campaign, which evolved out of the recent OHBM brainhack, is to share your code in a citable repository. This basically just asks you to make all of the code that you use for any upcoming publications publicly available, and to put it in a repository that has a DOI. For this second campaign, what we're going to do is let people pledge to take action immediately, or they can wait until some larger critical mass of pledges has been made. So the main point I want to make here is that if you're interested in developing campaigns, if you think there are some actions that your community could and should be adopting, then jump on to our GitHub repository and check out what campaigns have been proposed.
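The conditional-pledge mechanism described above is simple enough to sketch. The snippet below is an editorial illustration with hypothetical names (`Campaign`, `pledge`, the example threshold), not Free Our Knowledge's actual platform code: pledges accumulate anonymously, and the campaign activates only once the critical mass is reached.

```python
# Editorial sketch of a conditional pledge campaign: pledges stay private
# until a critical mass is reached, at which point the campaign activates,
# pledgers are listed, and everyone is directed to act together.

class Campaign:
    def __init__(self, action: str, threshold: int):
        self.action = action
        self.threshold = threshold
        self.pledgers: list[str] = []

    def pledge(self, name: str) -> None:
        self.pledgers.append(name)  # names kept private until activation

    @property
    def active(self) -> bool:
        return len(self.pledgers) >= self.threshold

    def status(self) -> str:
        if self.active:
            # Threshold met: pledgers are revealed and act in solidarity.
            return f"ACTIVE: {len(self.pledgers)} pledgers now '{self.action}' together"
        return f"Pending: {len(self.pledgers)}/{self.threshold} anonymous pledges"

# Hypothetical example mirroring the preregistration pledge described below:
c = Campaign("preregister one study", threshold=100)
for i in range(75):
    c.pledge(f"researcher_{i}")
print(c.status())  # -> Pending: 75/100 anonymous pledges
```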
You can also propose a new idea if you think there's something we should be adopting. And we've got a few campaigns live at the moment. We started the platform with some open access campaigns, and more recently we've posted a preregistration pledge, which basically asks you to preregister a single study along with a hundred of your peers. We're currently at around 75 pledges for the field of psychology, so we could really use a few more pledges to get that over the line. Of course, a hundred pledges is not a huge number, but the idea is to demonstrate the concept in action and then scale up over time. That brings me to this slide, a figure I recently made, which tries to capture the grand vision for how this project could evolve over time. The main point is that this is not just a single campaign or a single pledge initiative like those that have come before; the idea is to scale up over time and build on the momentum we create with each campaign. Here's where we're at right now: we've got an ambassador network of around ten people who have agreed to support the open code campaign when we launch in a month or two. The idea is that they will reach out to the community and connect with researchers who would be willing to share their code but might not feel comfortable doing so on their own right now. At that point we're collecting these conditional pledges, and if and when we reach the activation threshold, everybody starts sharing their code together. That means we're all supporting each other: we can chat about best practices, help each other through the process, and act in solidarity to make this the norm in our field. Now, this first campaign is only asking for a few hundred pledges, so it's not a big deal; it won't yet be normative throughout the community. But the idea is that the few hundred pledges we capture for this campaign can then be leveraged to increase the size of the next campaign we run. In some future campaign, we might collect enough pledges to actually make the practice the norm, and if the majority of people are then sharing their code, or doing whatever open science practice the campaign targets, those who are not sharing become the outliers. Instead of seeing potential risks to their careers, people would actually see a benefit, because otherwise they would be frowned upon for not sharing their code, or whatever practice we're targeting. And, particularly relevant to this community, I want to highlight that what we're trying to do here is develop replicable processes that can be used to enhance and improve future campaigns. A lot of previous initiatives, like The Cost of Knowledge and the Peer Reviewers' Openness Initiative, have had great success in motivating change in various domains, but unfortunately, over time, the momentum they created got lost.
So the idea is that if we can capture these processes and learn from each campaign, then at the end of each campaign we can analyze what happened, find out what made the most impact and which strategies were successful, and use that information to inform future campaigns, so that we don't lose the momentum we've created. That's the grand vision. I would love for people to get involved; here are some ways you can get in touch with the project. You can email us or follow us on social media, and of course the main thing right now is to pledge, and to develop campaigns if you're interested in that. With that, I'd like to thank everybody who's been involved in the project and all of our partners who are helping us increase our reach throughout the community. Thank you.

Thanks so much, Cooper. As I say, we'll take questions at the end, but it's really inspiring work you're doing there, and we'll chat about it more at the end. So thanks so much for your talk. Next up we have Alex Holcombe, so you can share your slides with us; we'll just wait until Cooper's screen share ends.

Okay. Thanks, Matt, and thanks to all the organizers. Can you see that and hear me? Yeah, perfect. So I'm talking about authorship versus contributorship. Everybody knows the term authorship; it goes way back to the 1600s. Back then a scientist really, at least ostensibly, worked on their own: these were aristocrats, or people who had time on their hands, and they more or less did all the work themselves. For all those early publications in the Royal Society journals, you'll see that they tend to have just one author. But today, of course, science is quite different, and I think we need to shift our norms for how we attach names to papers in order to accommodate the reality of today's science. Over time, as you can see in this graph, the number of authors per paper has increased dramatically, and that's natural when a field advances: you need specialists contributing together in order to get something done. But unfortunately, this is not really the ethos of science, I would say. When I went to graduate school and got my PhD in this ivory tower here, at one point I was chatting with a professor about the one skill I had as a first-year graduate student, which was being able to do some computer programming, because I'd done that before. A professor in the school needed someone to program an experiment, so I was excited that I could already contribute in some fashion, but he quickly told me, "Oh, but I don't give authorship to people who do the programming." It was through experiences like that that it was signaled to me that what's really valued in academia is something called intellectual contribution, not skills. So I didn't contribute to the programming of that experiment. Instead, I went away and focused on learning every skill so I could do everything myself, becoming a jack of all trades, which is what it seemed I needed to do to set myself up as a principal investigator.
Now, this is not a good way to run a system that's going to advance, and it was recognized all the way back in the 18th century by Immanuel Kant, just as the Industrial Revolution was getting started. He pointed out that if you don't have division of labor, if you don't have specialization, "where work is not thus differentiated and divided, where everyone is a jack-of-all-trades, the crafts remain at an utterly primitive level." I think this resistance to specialization has been holding back many of the sciences. We really need to be able to give credit to many different roles, but if you look at authorship criteria, they're stuck with a who-did-the-writing conception of which names should be attached to papers. For example, the International Committee of Medical Journal Editors, who set the authorship guidelines for hundreds if not thousands of journals, say that you have to contribute to drafting the work or revising it critically for important intellectual content. So it's writing-based: you can't have your name formally attached to a paper as an author unless you contribute to the writing. They also always tend to throw in this "intellectual contribution" requirement, which gets used to exclude people who do certain tasks. Maybe if those tasks were menial enough, that would be fair, but I've seen it happen that people say, "Well, yeah, he was the only one who knew how to use that machine, but that's not an intellectual contribution, so that person shouldn't be an author on the paper." That might be okay by some pure, idealistic view of authorship, but it means we're not giving credit to those technicians, and funders are then not able to see the full range of roles that is needed to get modern science done. Now, fortunately, this has been changing. I call it contributorship: you indicate who did what, rather than just having a list of names attached to a paper without any differentiation. This goes quite far back in terms of plain text: for 15 or 20 years, many journals have invited authors to indicate who did what in a little author note. But that falls short of what we need in a modern era where everybody wants to tally up people's papers and their impact factors and all those sorts of things. Whether we like it or not, we live in a world of bean counters, so we need something machine readable, something that can be aggregated across multiple papers. One example is CRediT, the Contributor Roles Taxonomy, which PLOS and other journals have adopted. You can see it in action here: each author has a number of roles drawn from a standardized list, and when you submit to a PLOS journal, you indicate for each author what they did. This was developed back in 2014, and it's just one particular taxonomy for contributorship, for signaling the different roles that people play, but I think it helps encourage recognition of the broader set of roles.
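As an illustration of what machine-readable contributor data of this kind might look like, here is a small sketch. The role names are genuine CRediT categories, but the authors, the ORCID iDs, the layout and the helper function are all invented for illustration.

from collections import defaultdict

# Hypothetical contributor records for one paper.
contributors = [
    {"name": "A. Author", "orcid": "0000-0000-0000-0001",
     "roles": ["Conceptualization", "Writing - original draft"]},
    {"name": "B. Builder", "orcid": "0000-0000-0000-0002",
     "roles": ["Software", "Methodology"]},
    {"name": "C. Scanner", "orcid": "0000-0000-0000-0003",
     "roles": ["Investigation", "Resources"]},
]

def contributorship_statement(people):
    # Group names under each role to produce the human-readable
    # statement journals print; the structured records themselves
    # remain aggregatable across papers and publishers.
    by_role = defaultdict(list)
    for person in people:
        for role in person["roles"]:
            by_role[role].append(person["name"])
    return "; ".join(f"{role}: {', '.join(names)}"
                     for role, names in sorted(by_role.items()))

print(contributorship_statement(contributors))
# Conceptualization: A. Author; Investigation: C. Scanner; ...

Keying each record to an ORCID iD rather than to bare initials is what makes the roles countable across journals, a point that comes up again in the discussion later in this session.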
In my own case, yeah, I became a jack of all trades, and I turned out fine; I've got a good job. But what I hate is talking to people like a technician associated with my department, a neuroimaging specialist who consults on lots of different neuroimaging projects, setting up the equipment but also advising on various aspects of the design. I asked him during his annual review process whether he had seen many cases where he would go in and help other researchers at the beginning, and then see a paper come out two years later without his name on it. And he said, yes, that happens pretty often. Fortunately, though, it's not just PLOS. A very long list of publishers and journals, and this is only an outdated list, have rapidly adopted this Contributor Roles Taxonomy, so that we can formalize giving credit where it's due. It's been rolled out at thousands of journals by some of the largest publishers, even some that I'm typically not on the same side as, though in this case I am. So I encourage all of you to go to the journals you're affiliated with and talk about adopting a policy that moves beyond the more antiquated authorship guidelines. That will both give more credit where it's due and, as a result, lead to better resource allocation, because funders and universities will be able to see, for successful scientific projects, the range of people that was required to make them happen. That's the only way we're going to see money come in to better fund the infrastructure and teams that we need. And here's a tool I've been involved with to make this easier for authors; I think Richard Wynne of Rescognito is also here, and he's got another tool. Ours, which we call tenzing, is named after Tenzing Norgay, one of the people who may not have gotten the recognition they deserved. It's a tool that helps researchers plan, within their team and their project, what roles the different researchers will take, and then report that when it comes time to submit to a journal. The idea is that you circulate a Google Doc to everybody on your team and everybody checks off what they're expecting to do; that way there's less likelihood of misunderstandings later on. Then, when it comes time to submit your paper, the tool we programmed produces outputs that you can paste into your manuscript, which will hopefully reduce the burden, which seems to be constantly increasing, of submitting articles to journals. So, in summary, traditional authorship has a number of problems, only some of which I've talked about today, but one of them is that it doesn't reveal who did what. I hope we can all shift towards contributorship to remedy that. Thank you.

Thanks so much, Alex, great presentation and really interesting stuff. I hadn't actually seen this tenzing app he created, so it looks like it would be very helpful. As I say, we'll have some time for questions at the end, but for now we'll move on to the next speaker, who is Alyssa Mikytuck. I think it's quite late for her in the US, so we really appreciate you staying up to give your talk today. I think you can now share your screen.

I'm struggling to turn off my screen sharing; all the windows have gone crazy. It might be better if I just leave rather than spend time finding the windows, and I'll come back. Okay. Great. Are you able to share now? Yes. Great, give me one second; I'm going to try to use my slides as my video background. Okay, can you all see me? Oh, cool. I'm very excited; I've never seen that before and wanted to try it out, since it was a beta feature. Hi, everyone.
So I'm going to be discussing scientific culture and how it can influence researchers' motivation to share reusable research data. This is work I conducted with Sarah Nusser, who has joint appointments at Iowa State and UVA, as well as Gizem Korkmaz, an associate professor at the University of Virginia, and it was supported by the National Science Foundation. Sharing data is a necessary but not sufficient condition for enabling reuse by new researchers: documenting and processing data for others requires additional time and effort from the original researcher, and the culture in the researcher's specific field or discipline can facilitate or impede their motivation to share data that is both publicly accessible and reusable. I'm going to talk about three cultural factors that we found through qualitative interviews with 20 researchers from various scientific backgrounds, including biology and astronomy as well as psychology and sociology. We used a grounded theory approach to analyze the data; I'm not going to go into detail about that, but if you have questions about it, feel free to ask and I'm happy to answer. What I want to talk about are the cultural factors and some of the findings. The first influential factor is the practices and attitudes of notable researchers in the field. A researcher in linguistics, who was a director of one of the main repositories in that field, noted that they had commitments from some of the biggest names in the field, the most notable researchers, with really big projects. Getting that data into their repository gave it a lot of credibility, and that then motivated other people to share their data there. It's important to note that the repository had high standards and expectations for the shared data, but because notable researchers with big projects were sharing there, it motivated others to take that extra time and effort as well, so their data would actually be reused. On the other hand, notable researchers can also impede motivation to share reusable data, or any data at all. A sociologist noted that many notable people with incredible careers have really valuable data, but they just sit on it and publish paper after paper for themselves. That's the model of success they've demonstrated, but it's not a model of success for sharing reusable data. The second cultural factor was the ability to receive credit and recognition for sharing. One researcher who was able to receive credit and recognition was not a faculty member, unlike most of our participants; they were a data science director, so they were in a somewhat unique position. Their position was designed so that they got credit for the number of people who used the data they shared and the number of data management plans they contributed to, in addition to the typical or standard academic credit. That motivated them to make research data reusable, because it was something they got credit for. Let me see if I can move myself so I'm not blocking this. On the other side of the spectrum, we had a researcher in bioinformatics and genomics who noted that, similarly, people do what gets them credit, and that hadn't included sharing data.
So they focused their effort on publications, grants and other ways to support their lab; being a good member of the scientific community by sharing data for others didn't give them immediate benefits, and that lowered their motivation to take the time and effort to share reusable data. The final factor was the field's norms or expectations around data sharing, which can be communicated in a variety of ways and by a variety of sources. Another researcher in bioinformatics noted that journals in that field mandated sharing in certain repositories and mandated releasing software under open source licenses, and that the funding agencies were pretty strict. All of that communicated to them that sharing was expected, that it was a norm, and it motivated them to take the time and effort to share reusable data. But we again had a sociologist who said that the norm is to hoard your data: collect it and sit on it. And I thought this part was crucial: the expectation was that you own your data, that it doesn't belong to the broader scientific community. Now, I want to highlight that the same field can have aspects of its culture that facilitate sharing and other aspects that impede it. Of course, fields with more aspects that impede sharing, as we saw with sociology, typically have less robust sharing practices than fields where several aspects of the culture facilitate it. But most of the researchers we talked to felt they were in a field that was still developing robust sharing practices, so they were more likely to report mixed cultural findings: some aspects supported sharing, some hindered it. So, to wrap up: we looked at just a few of the cultural factors in scientific fields that can influence motivation to share reusable data, and that in turn affects the meta-scientific studies that reuse those data. Federal mandates can ensure that researchers reach a minimum level of sharing, but to really foster reusable data, you have to foster a culture that motivates the extra time and effort it takes. And that's all I've got.

Great, thanks so much, Alyssa. We can now move on to the next speaker, who will be Bob Reed. If you can share your slides, Bob.

Great, well, thank you very much for having me here. Let me get my screen up. I can't do the talking-head thing that Alyssa did, which, boy, I really wish I could; that was pretty cool. Okay, so my name is Bob Reed. I'm at the University of Canterbury, affiliated with a research group called UC Meta. This talk is a little different from most: most presenters talk about research they've done, whereas this talk is really pitching an idea in the hope that other people will take it up. My talk is entitled "Why aren't replications cited more? Why don't we just ask?" Let me start with a couple of facts. The first is that replications don't get published very much, certainly in my field, and I think that's been demonstrated for a number of other fields as well. This is a bar chart of replications in economics, counting only published replications in Web of Science economics journals. And don't let the increasing trend deceive you: these numbers are minuscule.
In any given year, you'd be unlikely to see more than 30 or 40 replications published in a Web of Science economics journal, out of about 40,000 papers published in those journals every year. So this is a tiny, minuscule number of replications, and of course that raises the question: why? Why aren't more replications published? There are many answers, and it's still an unsettled question, but an answer you hear a lot is that replications aren't cited, and since journals want to maintain and improve their impact factors, their editors are not keen to publish papers that are in general unlikely to get cited. I've actually got some data on that from one of my colleagues, Tom Coupé, who is also at Canterbury. He took 300 replications from a set that we maintain and matched them with the corresponding original papers. He then followed the citations of those replications after each replication was published, and compared them to the citations the original study was receiving over the same period. I won't go through all the numbers, but original studies are cited about nine to ten times more than replications, even after the replication is published. So that's a problem. What's the reason for it? Why aren't replications being cited? That's really the idea I'm throwing out there and hoping somebody picks up. There are a lot of possible reasons. One is that people who cite the original study are simply unaware that the replication exists: they want to mention the influential papers in their discipline, they're unaware of the replication, so they just don't mention it. Another possibility is that people only want to cite papers that appear in top-ranked journals. You are the company that you keep: if you're writing a paper and all your references are in low-ranked journals, you're guilty by association; if this topic doesn't get published in top journals and all your references are in low-ranked journals, well, that's probably where you belong. So people tend to cite papers from highly ranked journals, and sadly, while there are exceptions, replications generally get published in lower-ranked journals, at least in economics. That could be one reason authors don't cite the replication. Another possibility: you cite a paper because it has something to do with your topic, which means the author of the original study may well be a potential reviewer of your paper. And perhaps, as replications frequently do, the replication did not confirm the original study. So you're an author, you cite the original study, you're aware the replication was done, but it was a negative replication. Are you going to put that in your paper, knowing the author of the original study might be a reviewer of it? I can see why you might not want to; you don't want to get offside with a potential reviewer, so you stay away from that sensitive topic and don't mention the replication. And of course there are other reasons as well. So how would you do a study like this? Well, we have a really nice archive at the Replication Network, which is publicly listed at replicationnetwork.com, and we update it pretty regularly. Currently we've got 509 replication studies there, all identified and easily located. The idea is that you would take these replication studies and match them to the original studies. Then you would find people who cited the original after the replication had been published, but cited the original and not the replication, and survey them. Maybe you'd want to survey the people who cited both as well, but we're most interested in the ones who did not cite the replication. That's a very doable project: all the replications are out there; somebody just has to go and find the citations of those original studies. A rough sketch of that selection step follows.
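Here is one way that survey pool could be assembled, assuming you already have the (original, replication) pairs and citation records pulled from a database like Web of Science or Scopus. The data structures and the function name are hypothetical, purely to illustrate the selection logic Bob describes.

# Find papers that cited the original study after the replication
# appeared, but never cited the replication itself.

def survey_pool(pairs, cites):
    """pairs: list of (original_id, replication_id, replication_year).
    cites: dict mapping a paper id to a set of (citing_paper_id, year)."""
    pool = []
    for original, replication, rep_year in pairs:
        cited_original_late = {p for p, y in cites.get(original, set())
                               if y > rep_year}
        cited_replication = {p for p, _ in cites.get(replication, set())}
        # Candidates cited the original post-replication, not the replication:
        pool.extend(sorted(cited_original_late - cited_replication))
    return pool

pairs = [("original_A", "replication_A", 2015)]
cites = {
    "original_A": {("citer_1", 2017), ("citer_2", 2013)},
    "replication_A": {("citer_3", 2016)},
}
print(survey_pool(pairs, cites))  # ['citer_1'] is a candidate for the survey

The hard part, as Bob goes on to say, is not this mechanical step but designing a survey nuanced enough that academics will honestly explain why the replication went uncited.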
And so, if this is such a great idea, why don't we do it ourselves? There are two explanations. One is money and funding, but probably the bigger one is that my little group just doesn't have the expertise to run the kind of survey that would contact academic researchers. Obviously you don't just come out and say, "Hey, why didn't you cite this paper?" You'd want to be nuanced and sophisticated in how you got that information, and we don't feel we have the expertise to do that. But I think it's a really important question, and we'd be really happy to help. We've got the data: we've got these replication studies, all matched to originals, and we'd be glad to give that free of charge to whoever is interested in working on this, and to help in any other way we can. And why would a person want to do this? Well, I personally believe that replications are the single most effective way of addressing the reproducibility problem and scientific integrity issues in the literature. If somebody writes a paper knowing there's a good chance somebody will go out, replicate their study and publish the result, that creates a very strong incentive to make sure your work is reproducible. So I think replications are really important, and the fact that they're not being published or cited is a problem. We want to fix that problem, if you agree that replications are important, but we can't really fix it until we know what the problem is: once we know why people aren't citing replications, perhaps we can come up with some solutions for how to improve things. And that's it; thank you very much.

Thanks so much, Bob, a great presentation and, I guess, an interesting advertisement for linking up with other people who might embark on this survey with you; hopefully we do find some people who can help out. I want to thank all the speakers who've talked in this session so far; it's really exciting and interesting work that's been presented. I will just point out that Alyssa did have to run out, so if anyone has questions for her, put them in the chat and I'll make sure I email them to her and connect you if that would be of interest. Anyone else who has questions, you can raise your hand and I'll let you speak, but I know we do have some in the Q&A, so I might pick out some of those to start. I'll start with a question from Jenny Byrne to Cooper, which says: "Hi Cooper, thank you for a great talk. You mentioned previous initiatives that have experienced real-world issues in sustaining momentum and activity over time. Can you comment on why this has happened and what can be learned?"

Thanks for the great question, Jennifer.
It's a great question because the short answer is: we don't know. And I guess that's kind of the goal of Free Our Knowledge, to try to find out and to use that information to inform future campaigns. But I can take some guesses as to why momentum doesn't tend to get maintained in these initiatives. Probably the most obvious example is The Cost of Knowledge boycott, which has around 17,000 pledges right now and started, I think, in 2012. One potential reason these campaigns don't maintain momentum is a lack of incentive or reward, and potentially there are ways we could highlight people's pledges and get them to keep pledging in the future. One idea we're kicking around at the moment is publishing pledges in a journal, so that people can actually get recognition for the pledges they take, which would help incentivize those behaviors. I think another main problem is simply that everybody's busy. Academics are busy people, and typically these initiatives are started by one big figure in the field, or a few, and over time they get distracted or have other priorities and move on from promoting the campaign. A solution to that is to build a community around these ideas, rather than relying on one or two people to drive the campaigns. And I think another problem is that all of these initiatives tend to get started as their own thing: you have to build up a community, set up a website, establish a mailing list, and ultimately that means all the momentum is siloed within that individual initiative. I guarantee that a lot of the people who signed The Cost of Knowledge would be interested in signing other pledges that already exist, like the Peer Reviewers' Openness Initiative and so on. So a solution to that problem is to bring all of these pledges under some kind of common banner, in a common format, so we can capitalize on the momentum each campaign creates and feed it into future campaigns down the track. I don't know if that answered your question, but I tried to take a stab at it.

Cool. Okay, we've got a hand raised from Alex; over to you.

Just to build on what Cooper was saying about communities: a natural community forms around some journals, or in the scholarly societies that actually publish some of those journals. If we reflect on where reforms have been most successful, sometimes it's because of institutional support from a scholarly society that forms an open science committee, as the APA and the APS both have, which then results in reforms at the associated journals. So I think we should build on the communities we already have. Although many of us are unhappy with our scholarly societies, they seem to be the only way to get traction with certain journals, and of course we've also got these larger institutions forming, like the UK Reproducibility Network, which we've tried to start in Australia too, and things like the DORA declaration. So I think we should try to latch onto these existing institutions and grow them. Yeah, 100%.
I mean, imagine how powerful we could be if even just a single society's entire membership agreed to act in a certain way; that would be incredibly powerful. But all of these communities are currently not coordinating their actions in an effective manner, and I would absolutely support and love for societies to get involved.

So I see a couple of questions directed at Alex that you've answered in the Q&A, which I assume people are able to see; if not, please let me know. But one thing I was wondering, Alex: what do you think it will take to get to essentially uniform adoption of this contributorship model? People in the past have advocated for turning papers into something akin to film credits rolling at the end, and that idea has been raised several times: it raises its head and then disappears. So what sort of momentum is needed? What do you think can actually make it happen?

After a long interval in which, as you say, those ideas were raised without much progress, this movement has for some reason gathered a lot of steam. In that list of publishers I presented, there are hundreds if not thousands of journals adopting the CRediT taxonomy, and NISO, the US National Information Standards Organization, is turning it into an official standard. So it really seems to be happening. But I would say that, unlike almost all the other science reforms I've been associated with, the danger here is more that it may be adopted too quickly, in the sense that we need to go to our scholarly societies and our journal editorial boards, make sure it fits what should be happening in our discipline, and shape its future. NISO is working on policy, and I encourage you to check out their website. And you mentioned uniform adoption: I don't think it should be uniformly adopted, because the CRediT taxonomy, at least, isn't best suited to certain disciplines, so we need a lot more development work on that. It's difficult, because we need a taxonomy that allows all the bean counting, the tallying across papers, which is really key to this even if you're not into counting the beans, and at the same time we need flexibility, because scientific roles are going to change as disciplines evolve; bioinformatics, for example, has totally changed how people work in biology. So it's a constantly evolving beast, and we need to make sure it keeps evolving.

I just want to give Jennifer Byrne the floor; I'm going to allow you to talk, since you're raising lots of good questions and comments, rather than me reading out your words.

Oh, gee, thank you. I hope you can all hear me. Look, Alex, as a laboratory scientist I've always really tried hard to include everybody who's involved in laboratory research, where there are often a lot of unsung heroes. And the issue my question pertains to is this: there's one problem where the people who were involved don't get acknowledged, but then there's the other tricky issue of people who actually weren't involved getting acknowledged. And that can be a really hard thing, particularly for ECRs, to navigate: trying to make sure that the people who did the work get credited and the people who weren't involved don't.
That's kind of what my comments have been about, so anything you can add to that would be really great. Thank you.

It sounds like you're talking about what's sometimes called honorary authorship, or, even worse, ghost authorship. That's very common in certain contexts and disciplines. For example, there's the practice of the big lab head or institute head who says his name needs to be on every paper. I've been associated, not directly, but with labs like that, where a postdoc is hired and told that every single paper that comes out of the lab is going to have the lab director's name on it. And they actually have a reason for saying that: in one lab I'm thinking of, there were something like 14 postdocs, and the only way to be successful, and it was a great lab, was for the head of the lab to be constantly writing grants all the time. The only way his lab was going to get the next years of funding was if his name was on those papers. Because of system pressures like that, I think we can't win against honorary authorship; if we can't beat them, we have to join them. And I think CRediT actually provides an outlet for that, because it has a supervision category and it also has a funding category. So it provides a way for the many researchers around the world who have, in a sense, been lying for a long time, having their names on papers even though they don't fulfill the authorship criteria of the journals they publish in, to attach their names to those papers in a way that doesn't misrepresent their contribution. And maybe then funders will realize they should value this role of bringing together an incredible environment and continually writing the grant applications that funders expect; that is a role in modern science. Authorship criteria like the ICMJE's keep adding paragraphs saying you shouldn't do honorary authorship, trying to combat the behavior, but I think that's a losing battle. We have to incorporate and recognize these roles in our systems.

Great, thanks. Still on this topic, there's another comment from Richard; I'll allow you to talk, Richard, if you want to make that point too. Richard Wynne.

Oh, hi, thank you. Well, it's Friday night in Boston and I've had a beer and a half already, so I hope I'm coherent. A key point Alex made was that for CRediT to be useful, it needs to be aggregated across journals. In fact, the way a lot of journals are implementing it today actually destroys the data: they just print initials with the credit roles associated with those initials, so from a data science point of view it's useless. Associating CRediT with ORCID iDs, I think, is the minimum needed for a useful implementation. So that's a comment rather than a question. Thank you.

Great. I want to bring Bob into this, and I notice there's a question here for Bob from Michael. Michael, I'm going to allow you to talk so you can ask your question yourself. Are you still there, Michael? Well, I'll read it out for you.
So Michael asks: since replications always come after, sometimes long after, the original publication, couldn't that explain the high citation rate of original studies compared to replications? And is there a way to control for that time-of-publication factor in the work you plan to do?

Well, I don't know, Michael, but I want to thank you for planting that question, because it's a great lead-in to a talk I'm going to give next weekend, where we do exactly that. It's a slightly different topic, but this trick of matching replications to control papers is not a simple exercise. Again, in work with Tom Coupé, that's what we do: I think the final list is some 400,000 candidate papers that we match against the replications and the originals, and we follow them. The point Michael is making is that how long a paper has been out is an important factor in how often it gets cited, and we control for that with some very careful matching techniques. I'm not going to go through the whole process here, but the point is well taken. That's actually why some of these questions, like whether replications really are cited less than original studies, are hard to answer: to answer them, you have to know the counterfactual. What's the paper a journal would publish that's not a replication, that's a fair comparison for the replication study, to see whether it would get more or fewer citations? Those are not easy questions to answer, but they can be answered: we've got Scopus, we've got Web of Science, you can cast a huge net, pull in lots of papers, and with some hopefully halfway intelligent, sophisticated matching procedures, really identify the counterfactuals pretty well. We'll go through more details next weekend, but that's a great question and a really important thing to consider.

Do come along to the presentation next weekend. Yeah, Cooper, a question.

Hi, I have a question that's probably best for Whitney and Alex, as it's a combination of both of your talks. It seems to me that a lot of the disincentives Whitney highlighted in her talk had to do with inadequate recognition for, say, an open dataset that you provide. One of the examples was someone who squirrels away their data and then gets multiple publications out of it, so that's a disincentive to sharing the dataset openly. And Alex's talk was about trying to improve the way we give credit to previous work. So I'm wondering, is there a crossover there? If we were to start rewarding not just people who write code and so on, but also people who have contributed data to a study that we run, might that be a way to overcome that obstacle and give recognition to people who actually make their datasets open? Do either of you have comments on how those mesh?

Just to interject: it was actually Alyssa who was speaking, and she's had to leave early. Oh, sorry. Whitney's our amazing co-host, the lead here, making this all happen. Oh, sorry. I've noted it in the chat; Alex can take a stab at that.

Yeah, well, in my mind your comments bring up two roles of contributing data.
One is collecting the data, and the other is actually participating in a study. For the first one, there is, in CRediT, an investigation category as well as, I think, a data management category; I can't remember exactly what it's called. So that helps recognize people who focus on the data collection, though of course that's within a larger paper that would probably only be accepted if it has a lot of other components. But there has also been this rise of data papers. I'm associated with a journal called the Journal of Open Psychology Data, whose idea is to publish datasets. It hasn't been all that successful, but I know that in some fields this has become much more part of the culture, maybe with trials and so on. And then, Cooper, I don't know if you were also thinking of the other side: actually participating in studies and contributing to science that way. In citizen science, for example, some fields have harnessed that pretty effectively, say ornithology, with Cornell University in the lead, and they do name participants in their scientific publications or on their websites. But this could go so much further. You can imagine that really we ought to be naming everybody, but the hard part, maybe going back to Jenny's question, is deciding on the threshold, and naming people in a way that is actually going to help them.

Yeah, just to respond to your first point about publishing data: it seems to me the key problem right now is that if you publish a dataset, the most you can get in the future is a citation. One citation is not a big deal, but an authorship down the track is obviously a huge deal. So there's a disconnect between the reward you can get for publishing data and what you could get by keeping it to yourself, potentially authorships down the track. Obviously the systems we have were developed in pre-internet eras, but there must be a system, moving forward, where we can give appropriate recognition for a dataset, so that even if someone doesn't contribute to the writing of a paper, if their data proves integral to a future study, they get some kind of authorship, or credit under a contributorship system.

Well, do you feel that the citation to the original paper doesn't handle it? I agree that currently citing the first paper that published the data, which should be cited, doesn't do the trick, because, as you say, these datasets aren't valued enough. So I feel we need to boost how much our institutions, our promotion systems and our grant funding systems recognize such citations: a citation to an original dataset that leads to many other papers should count as a more valuable citation, a more valuable paper. The Hubble Space Telescope, for example: the people who first collected data from it produced a dataset that has led to hundreds or maybe thousands of papers, but our citation counts don't reflect that very well. I think having a typology of citations is one way to address that. I agree.

I think what I'll do is stop us there, because we're just coming up to the end of the session.
So I'm glad we did it this way, rather than directing everybody to Remo, but I will also give that link again for anyone who wants to continue the chat with the speakers: there's a link in the chat to the Remo space for Metascience. I want to thank all the speakers again for a great session and a really informative and interesting discussion. We're now going to have a half-hour networking and coffee break, but we'll be back here in half an hour for the final session, which is one of the highlights of Metascience 2021: a symposium titled "Reasonable, Questionable or Inexcusable: do we need to do more to protect academic publishing against editorial misbehavior?", with speakers Daniel Hamilton, Rink Hoekstra, Ginny Barbour and Simine Vazire, moderated by Fiona Fidler. So I hope to see you all back here in half an hour, and thanks again for everyone's great participation.