Should I start? Hi everybody, welcome to chapter six. I'm not a teacher like George, so I can't captivate you all. And unfortunately, if you saw the old agenda, Emily White is not able to make it. We were going to give Kristin 10 extra minutes, but now you blew that, so she has to stay within her time. I am, as you see, Mary Rose Franco, and our speakers are up there. We heard in the last session that changing norms is an essential component of sustainable culture change, but change doesn't just happen because the cool kids are doing it, according to George. Even the cool kids eventually have to be rewarded for their behavior. So we need to acknowledge when there's a misalignment between norms and the reward system. If the reward system itself is not aligned with the norms, then that reward system is the counterweight, the unrelenting counterweight, that will defeat all the good work accomplished by the communities setting those norms. So, following up on the last session, "making it normative," in this session you're going to hear examples of making it rewarding. Our first speaker is Leslie Markham. She's a program manager at COS. She oversees grant-funded projects that support researchers with training and community-building opportunities that advance open scholarship practices such as preregistration, registered reports, and data management. Leslie will first discuss how registered reports make a specific and critical change to the journal peer review process. This fundamentally changes the reward system for researchers, and she'll present evidence for the effectiveness of the registered reports model. Our second speaker is Kristin Eldon Wiley. She's a senior program manager and change management leader, what a fabulous title, at Templeton World Charity Foundation.
As change management lead, Kristin coordinates the foundation's ongoing effort to improve processes through its continuous improvement program, including adoption of best practices to promote open research. Kristin will describe how the foundation, TWCF, I call it, is taking proactive steps to change the norms and reward systems for the TWCF research community. With that, I'm going to turn it over to Leslie.

Thank you, Mary Rose, and good afternoon, everybody. We'd like to believe that when the good actions are clear and consensually held, good people will do them. But people are shaped by the social structures they inhabit, and reward systems are a powerful influence. In their paper "The Natural Selection of Bad Science," Paul Smaldino and Richard McElreath illustrated a drift towards a literature filled with false discoveries when either of two basic conditions is met: negative results are harder to publish than positive results, or a lab that decreases its effort to distinguish signal from noise increases its likelihood of producing publishable results. As a simplified illustration of Smaldino's model, this slide demonstrates career advancement in a single generation of researchers, from graduate students to professors. Assume a starting point in which most researchers prioritise values-driven, high-quality research and a minority prioritise rewards-driven research. When there's a conflict between what is rewarded and what is valued, those who prioritise what is rewarded advantage their career advancement prospects. Depending on the size of the advantage and the level of competition, after each selection process the proportion of rewards-driven researchers becomes larger. Smaldino simulated these processes across successive generations of researchers under a variety of conditions, but the implication is persistent and fundamental.
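The selection dynamic described above can be sketched as a toy simulation. This is a minimal illustration of the idea, not Smaldino and McElreath's actual model; the population size, publication-rate advantage, and selection rule are all invented for the example:

```python
import random

def simulate(generations=20, seed=1):
    """Toy illustration of selection on reward-chasing in science.

    Each lab is 'values' (high effort, fewer publishable positives) or
    'rewards' (low effort, more publishable positives). Each generation,
    the most-published half of labs "reproduces" into the next cohort.
    All parameter values here are invented for illustration.
    """
    rng = random.Random(seed)
    labs = ['values'] * 90 + ['rewards'] * 10  # values-driven majority

    for _ in range(generations):
        # Rewards-driven labs draw from a higher publication-count mean.
        pubs = [(rng.gauss(1.5 if lab == 'rewards' else 1.0, 0.5), lab)
                for lab in labs]
        pubs.sort(reverse=True)
        survivors = [lab for _, lab in pubs[:len(labs) // 2]]
        labs = survivors * 2  # survivors fill the next generation

    return labs.count('rewards') / len(labs)

# Starting from a 10% rewards-driven minority, selection on publication
# counts alone grows that share across generations.
print(simulate())
```

Even a small publication advantage compounds across selection rounds, which is why the speaker calls the implication "persistent and fundamental."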
Failing to align the reward system with scholarly values will inevitably lead to a research culture chasing the rewards and producing many false discoveries. Registered reports aim to change the reward system for earning publication. In the standard model, researchers design their study, collect and analyse their data, write a paper, and send it to a journal for peer review. Authors decide whether and where to send their papers, and editors and reviewers decide on the value of the research based on the results. Publication bias produces a published literature that looks more beautiful than reality, particularly with an inflation of false discoveries and an exaggeration of the strength of evidence for true discoveries, as Tim showed this morning. With registered reports, the initial phase of peer review moves to before the results are known. Authors submit a Stage 1 registered report in which they provide the motivation for the research question, preliminary evidence, and the proposed methodology. The editor and reviewers evaluate the submission with two considerations in mind. First, is the question important? Second, is the proposed methodology an effective investigation of that question? If yes, then the journal commits to publishing the paper regardless of the results. The authors conduct the research, add the results, and resubmit a Stage 2 registered report. During Stage 2 peer review, the editor and reviewers only assess whether the authors followed the planned methodology and interpreted the results responsibly. The results themselves are not a basis for deciding whether to publish the paper. Registered reports should eliminate the publication bias that favors positive results over negative results. Anne Scheel and her colleagues compared registered reports and standard articles. For each paper, they identified the first hypothesis tested and coded whether the results supported the hypothesis or not.
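The two-stage decision logic just described can be made concrete in a short sketch. The type and function names are hypothetical, chosen only to mirror the talk's description; the point is that the Stage 2 decision never consults whether the results were positive:

```python
from dataclasses import dataclass

@dataclass
class Stage1Submission:
    question_important: bool   # is the research question worth asking?
    methods_sound: bool        # does the design effectively test it?

@dataclass
class Stage2Submission:
    followed_protocol: bool          # did authors follow the approved plan?
    interpretation_responsible: bool # are conclusions warranted?
    results_positive: bool           # deliberately IGNORED below

def stage1_in_principle_acceptance(s: Stage1Submission) -> bool:
    """In-principle acceptance depends only on question and methods."""
    return s.question_important and s.methods_sound

def stage2_publish(s: Stage2Submission) -> bool:
    """Publication decision ignores whether the results came out positive."""
    return s.followed_protocol and s.interpretation_responsible
```

Contrast this with the standard model, where the analogue of `stage2_publish` would condition on `results_positive`, producing exactly the publication bias the format is designed to remove.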
Standard reports almost always reported positive results for their hypotheses: 96%. If the authors were nearly always right, one wonders whether the research actually needed to be done in the first place. Registered reports revealed that a 96% success rate is a mirage. When the authors and the journal pre-committed to the hypothesis test and to reporting the result, just 44% of hypotheses were supported. Making the decision to publish prior to knowing the results reduced publication bias and ensured that the published results reflected what occurred, rather than what emerged through selective reporting or questionable research practices, whether during the research or during the review process. Registered reports might also be higher quality on average than research published in the standard model, because of the earlier peer review and the opportunity to revise research methodologies. We studied that in a collaboration with several researchers, including Sarah Schiavone and Simine Vazire, who are here today. We gathered a sample of published registered reports that tested novel research questions and then identified two matched standard model papers for each registered report: one published by the same lead authors, and the other published in the same journal or on a similar topic at about the same time. 353 researchers participated in a structured peer review process. We matched them by their expertise to a pair of papers, one registered report and one of the matched control papers. They evaluated each paper in three stages. First, they read the introduction, preliminary studies, and the proposed methods, and evaluated the paper on the criteria shown on the slide in pink. They continued reading the results and discussion of the final experiment and evaluated it on the criteria shown in blue. Finally, they read the title and abstract and evaluated the paper as a whole on the criteria shown in green.
The figure here shows the Bayesian credible intervals around each estimate, the black dot. The vertical dotted line indicates the point at which registered reports and standard model papers were evaluated equivalently on a criterion. Intervals to the right of the dotted line indicate that the registered reports performed better than the standard reports on average, and intervals to the left of the dotted line indicate that the standard reports performed better on average. Registered reports outperformed comparison papers on all 19 criteria, with effects ranging from little improvement in novelty and creativity to larger improvements in rigor of methodology and analysis and in overall paper quality. This suggests that registered reports were associated with higher research quality than standard reports. This is promising, but more research is needed, such as better causal evidence and more investigation of the model's strengths and limitations as it is used in a wider variety of domains. Today, more than 300 journals have adopted registered reports. Chris Chambers' figure here shows wide adoption, from the social sciences through the biological and medical sciences and economics. Since 2020, some notable advances include adoption by large multidisciplinary journals such as PLOS ONE, and this year, Nature has adopted the format. An open question is how far the registered reports model can scale. It's currently being used in experimental research, observational studies, qualitative research, and for some planned releases of large data sets. It's most obviously a fit for hypothesis-driven research, and it's been successfully adapted for work that begins as exploratory and then has some capstone investigation to test what researchers believe they have found. An innovation with registered reports is merging the rewards from journals and funders into a single process.
Authors submit a registered report proposal, and if it's approved, it's given in-principle acceptance by the journal and resources from the funder. Everyone wins. Authors submit once and get both rewards. Publishers get high-quality, funded research coming to their journal. Funders get a greater return on their research investments: a guarantee that the work will be published rather than languishing in a file drawer. These partnerships provide a community-building opportunity to engage researchers with the concept and potential benefits of preregistration and registered reports. Publication is a key reward, but it's not the only component of the reward system that needs to change. There's a complex network of what gets published, who and what gets resources from funders, who gets hired and promoted at universities, and who is rewarded with accolades by societies. Each of these stakeholder groups has opportunities to align how they reward researchers with the scholarly values and practices they promote. Some examples include CoARA and HELIOS, which are collective efforts to improve the diversification and realignment of reward systems and research assessment at institutions. Societies and organizations are elevating the visibility of awards for open scholarship, such as the Einstein Foundation Award for Promoting Quality in Research and the Research Parasite Awards for rigorous secondary data analysis. Among funders, there's a substantial increase in experimentation with how funding is allocated. For example: lotteries, to decrease the administrative burden of proposal writing. Proposals receiving polarized reviews get a boost in funding opportunity, as potentially high-risk, high-reward efforts that elicit both excited optimism and extreme skepticism. Reviewers get a golden ticket to select one proposal regardless of others' reviews.
Blinding reviewers to the identities of the proposers, to eliminate status bias. Or, funding the person rather than the project, to encourage risk and exploration. Little is known about the best ways to select and fund research to maximize progress, but there's a wide-open landscape for imaginative design and experimentation. This is an exciting area for meta-science research. When we asked whether open scholarship practices such as sharing data, materials, and code matter for getting a job, a promotion, funding, or getting published, few researchers suggested they're a competitive disadvantage. You can see that in the bars in yellow. Many perceive them as a slight advantage, the bars in blue. But many people perceive the practices as irrelevant for obtaining the rewards. Altering that perception with tangible, visible evidence of stakeholders rewarding open scholarship will play a profound role in accelerating adoption. The field of meta-science has emerged alongside open scholarship to promote experimentation and evaluation of research practices. We need experimentation with new types of institutions, new types of reward systems, new types of publishing models, new types of peer review, new types of crediting systems, new ways of funding programs, new ways of conducting annual reviews, new formats for hiring, promotion, and tenure, and basically everything else. Next, we'll hear from Kristin Eldon Wiley from the Templeton World Charity Foundation about how they're taking proactive steps to change the norms and reward system in their organization.

All right, good afternoon. So, Kristin Eldon Wiley, I'm from the Templeton World Charity Foundation. Actually, it's probably the third introduction you've had for me, so let's end it there. Today, I'm gonna talk about one of the initiatives that TWCF funds: Accelerating Research on Consciousness.
It was started in 2019, and it's a $30 million initiative, ARC for short. Some of the grant-making techniques I'll be talking about in the next 10 minutes are the brainchild of Dawid Potgieter, who is no longer at TWCF but is in the audience. And Virginia Cooper, who is the principal advisor for ARC, is also here. So any really difficult questions, please ask them, not me. I'm gonna skip this slide. I'll also say, anytime I say "we," I'm probably referring to Dawid and Virginia and maybe a little bit of me. Back in 2017 and 2018, we spoke to a lot of scholars in the consciousness field and held a meeting. The conclusion we came to is that the consciousness field is stuck in slow motion. This slide shows the whole bunch of different scientific theories on our radar at that point, and the different theory leaders are also listed there. What we were finding was that there was so little discussion between the different theory leaders, and different data was leading to different conclusions, just because they couldn't agree on how consciousness actually worked. So the goal that came from that is to reduce the number of possible, scientifically testable theories of consciousness by 50%. This, we thought, would accelerate progress in the consciousness field. We knew that this needed a direct intervention in the reward structure, as Leslie has very pointedly presented already. And we knew that our support had to be designed around cross-disciplinary, cutting-edge practice. Our thought process ended up in two different bubbles, which I'll talk through very quickly: structured adversarial collaboration, and a grant-making program with registered reports. I'm gonna call the first one SAC going forward, because on a good day I mumble my words; try saying "structured adversarial collaboration" five times very quickly.
So SAC is basically adversarial collaboration, but a very key factor is that it's a partnership with a funder, who obviously provides the funds but, crucially, also enforces best practices in open science to ensure that there is a fair contest. In the typical grant-making approach, a funder issues a call for proposals, and two different theory leaders submit two different applications. They both get awarded. They both do their independent research. They both publish their results. And then, at most, there's criticism between the adversaries after publication. The SAC approach is a little bit different, or quite a lot different, in that it brings the theory leaders in earlier, with a workshop. They talk about the core predictions of each of their theories and brainstorm different experiments they could do to test whose core prediction is right. So they come up with a collaborative research design together to test these incompatible predictions. There's a collaborative research study, and then finally there's a publication of the coherent results. I'm gonna skip this and go here. And I'm really sad that I did not pick up on your new replication badge, I think it might be new too, but I do wanna put this approach into perspective with the open science tools that we really called upon to make it work. Essentially, if you go back to what I was saying earlier, we wanted to reduce the theories by 50%. That really meant that our goal was to kill off a theory or two, or more. And so these open science best practices were extremely critical for that to work. Now, the workshop does not always succeed.
It's two or three days of discussions, and some of those discussions can be quite intense, but we don't expect that in every case the workshop will result in a research design that everyone can agree to and that would test the incompatible predictions. In the several months after the workshop, the theory leaders would come up with a collaborative research design. If they were able to sign off on it, they pre-registered that research design along with the possible outcomes of the experiments and what each outcome meant for each of the theories. TWCF would do the large funding at that point, but not earlier, not until they came up with that research design and it was pre-registered. Then they would do the collaborative research study. They would ideally register their study protocol at that point too, and they would also replicate; the grant covers the cost for them to replicate these studies. And then they would publish the findings and the data as registered reports. The data would be open and FAIR. In that publication, we would expect to see the actual results of the different experiments and what they meant for each of the theories going head to head. So, where we are now: we have hosted or funded six different workshops. One workshop was not successful in coming up with a research design, but five were, which is really exciting. Four have been awarded and are active, and we awarded our last one earlier this year, but its contract is still under negotiation. Those are all the different theories that are going head to head in each of these research grants. I'm gonna dive deeper into our oldest one. This is a $6 million grant led by Dr. Lucia Melloni. It started in 2019, and it's ending at the end of this year.
COVID put a bit of a dent in the timeline, as it did for a lot of our research projects. But they're at the stage of the collaborative studies where they're replicating the research, and we're really excited that we expect to have results a little later this summer on which of those consciousness theories withstand the tests. This is really exciting for us, and I'm kind of nerding out about it, but stay tuned. Also, I think this is really important to highlight, because we've heard a lot in the last few presentations that science is iterative and self-correcting, and that's what they've done here as well. They pre-registered the study back in January 2019, before the grant was even funded. At that point, the pre-registration was embargoed, but they revised the pre-registration document after their pilot studies, and then again after their original data collection and before their replication. I also wanna point out that Lucia Melloni is obviously the contributor named there, but the people who are underlined are the theory leaders from the earlier slides. They're listed as contributors; they've signed off on this research design and they've signed off on these outcomes, and that's extremely critical to the success of this approach. They also published a study protocol earlier this year. And a really critical part of this figure, which appears both in the pre-registration and in the study protocol: those are the two competing theories, those are the core predictions of the competing theories, and then we see exactly what the possible outcome of each experiment was and what it meant for each of the theories. So either an outcome was positive and they said this theory predicted it correctly, or it was negative, or negative but inconclusive.
And this is really core to making sure that the outcome can't be dismissed, because this is what each theory projected, and the theory leaders signed off on it. Sorry, I know I really rushed through all of that. The second aspect is a grant-making program using registered reports to improve the rigor and credibility of consciousness research. This is another approach we've taken to change the reward system and the structure. It's a $1.5 million grant to the Center for Open Science. TWCF is a small team, and we don't have the capacity to run a program like this, and the Center for Open Science is uniquely placed, with the expertise to run a registered reports program. It's also in collaboration with the ASSC. Essentially, we expect $1.125 million of awards going out to about 50 different consciousness researchers, about 10 to 20K per researcher, and they have to go through the registered report model in order to receive the funding. This isn't complicated, but there's a lot going on, and I'm over time by 18 seconds. Basically, the researchers submit, and the Center for Open Science reviews the submission and says whether the budget aligns and whether it's good for the consciousness field. If that's okay, it goes to the Stage 1 journal review, and if the journal accepts it in principle, then the Center for Open Science gives the researcher a portion of the funds. The researcher conducts the research, and then comes the Stage 2 journal review after the research is done. If it's accepted at that point, that's when they get the remaining funds. I'm sorry, I really wish I had those extra 10 minutes. You were late from lunch, but that's me and us. Thank you.
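The staged payout just described can be sketched as a small function. This is a minimal illustration only; the split between the Stage 1 and Stage 2 disbursements is an assumption for the example, not TWCF's or COS's actual terms:

```python
def disburse(award_total: float, stage1_accepted: bool,
             stage2_accepted: bool, initial_fraction: float = 0.5) -> float:
    """Return funds paid out under a staged registered-report award.

    A portion is released after Stage 1 in-principle acceptance; the
    remainder is released only after Stage 2 acceptance. The default
    50/50 split is an assumption made for illustration.
    """
    paid = 0.0
    if stage1_accepted:
        paid += award_total * initial_fraction        # partial payout
        if stage2_accepted:
            paid += award_total * (1 - initial_fraction)  # remainder
    return paid
```

The design point is that the researcher's funding, like their publication, is tied to the quality of the question and methods and to following through on the plan, never to whether the results came out positive.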