Thanks for having me. Today I'm going to be speaking about how misaligned incentives hurt science, and how we can fix them. We know that human beings are sensitive to the rewards and punishments in their environments, and researchers are no exception. In academia, the currency of success is publication, but in order to publish, research results need to fit a particular profile. For publication in top outlets, results must be innovative, exciting, and, most importantly, statistically significant. Papers that don't fit the mold are doomed to less prestigious outlets or consigned to the file drawer. Importantly, the ability to get published affects the other key motivators of researcher behavior: the chance to get funding and, ultimately, jobs, tenure, promotion, and a strong professional reputation. Without publication, researchers are cut off from these other modes of professional advancement, so we can see that publications loom very large in the incentive structure. Now, of course, as careers progress, the causal arrow begins to point in the other direction as well, with secure funding and prestigious positions smoothing the road toward additional publications. But publication plays an outsized role in researcher evaluation, and this fact creates incentives to get work published at all costs, even sacrificing quality and accuracy in the process. So researchers are incentivized to get results published, but not to really probe whether or not those results are right. Much of this would not be a problem, of course, if published were synonymous with true. But it is not: the current reward system prioritizes quantity of output over accuracy of output. And so the big question for us is, how do we realign incentives to reward and encourage accuracy, and not just quantity of output?

So today I'm going to discuss five distinct ways that we can try to realign incentives, focusing on the steps that funders can take to support rigorous and transparent research. This will help improve the quality of scientific evidence and give funders a higher return on investment. Let me also emphasize that I recognize that many of the funders at this meeting are already doing some, or perhaps even all, of these recommended practices; others are at an earlier phase of revising their practices. One of our goals here today is to encourage communication among the funders about what is working well for them in this space, along with where they hope to do more. Also implicit here is the idea that universities and journals exert significant power in the incentive system. We are working from the assumption that funders are seeking to make changes to their own practices while applying strategic pressure to other parts of the system to encourage and support change there as well.

So first, recognizing that publications are currently over-weighted in the incentive structure, we need to find ways to amplify other desirable outputs. Incentivizing and rewarding other kinds of outputs that are more directly aligned with research quality would help researchers get formal credit for these behaviors and help the behaviors be seen as necessary for success on the job, rather than as extra bureaucracy that gets in the way of progress. So what are some of these currently undervalued outputs? On the screen we see six possible things that funders could attempt to incentivize. First, we have FAIR data: data that's findable, accessible, interoperable, and reusable.
Researchers publish the results of their studies, but they don't always publish or share their data. Next, we have open materials and reproducible analysis code. These also represent crucial parts of the research process that are not always shared and are currently undervalued relative to publications. We have preregistrations, both of study protocols and of analysis plans. And, as we've heard in other talks, we have replication results and the documentation of null or negative results. These are all extremely valuable scientific outputs that go unrewarded in the current incentive system, which values only publications. So what can funders do to encourage these alternative outputs? Well, funders can encourage researcher behavior directly by issuing requests for proposals that are targeted at the desired behaviors: requests for proposals for replications, for preregistrations, or for emptying the file drawer. Such calls can help foster a culture around these practices and provide training opportunities for junior scholars. A second thing funders can do is condition funding on past practices, or perhaps on university policies. So funders can condition awards on past researcher behavior: Is this a person with a track record of working rigorously? Is this a person with a track record of sharing data, materials, and code? Funders could even condition awards on university compliance with particular policies; for example, they could require that the departments they fund have a written policy requiring open data, materials, and analysis code. In short: support organizations that employ rigorous practices.

Okay, our second recommendation: it's important that we decouple publication decisions from study outcomes. One of the things that makes the current publication system misaligned is that studies that are statistically significant and exciting are more likely to be published than studies with null results, or studies that are seen as boring or not innovative. Ideally, the studies that are accurate, that are closer to the truth, would be the more publishable ones, not simply the flashier or more exciting ones. If we embrace an alternative model of publication, we can decouple the decision to publish a study from the outcome of that study. The registered reports model that we've heard a little about at this meeting is one such model that attempts to separate these two features of publication. A few things make the registered reports model unique. First, study proposals are reviewed prior to data collection, and this improves study quality on its own. After a study clears this hurdle, it is deemed accepted in principle, which means that if the study is carried out as proposed, the results will be accepted for publication regardless of whether or not they are statistically significant. So this model refocuses the publication process on the rigor and robustness of study designs and methods, and it de-emphasizes results. Here, publication hinges on doing the most informative study to answer a question, rather than on what the answer to that question turns out to be. There are already over 250 participating journals that have adopted this model, and what funders can do is two-fold.
Number one, funders can partner with journals that offer registered reports to create a streamlined funding-and-publication pathway. One way this could work is to partner with a journal to develop a request for proposals; the journal's editorial process would then serve as the grant evaluation mechanism. Accepted proposals above a funder's threshold would be awarded funding, and once the study was completed, the partner journal would publish the results regardless of the outcome. Number two, a lighter-touch option: funders could develop guidance for grant reviewers encouraging them to give weight to previous studies done under the registered reports model. As I've just discussed, these studies provide a more rigorous base on which to rest future work, so researchers working in domains built on registered reports are more likely to make strong progress. It makes sense, then, to give funding and support to projects that build on a literature founded on these rigorous practices. For those considering the first option, a journal-funder partnership, I'll add very quickly that the Center for Open Science already has a mechanism that gives funders an opportunity to evaluate the impact of these partnerships on disciplinary culture change. Brian Nosek and David Mellor can provide more details, but this would be an opportunity for a funder considering such a partnership to have it formally evaluated, to see whether it produces the desired outcomes.

Okay, our third recommendation. Perhaps most directly, funders can support rigorous practices with funding, and they can also make rigorous practices required. What does this mean specifically? Well, all of these practices, sharing data, sharing materials, sharing code, take time, and funders can choose to pay for the time required to do them. So we can pay for the time to do data archival and pay for the labor to make analyses reproducible. A really innovative mechanism is to pay for so-called red teams. A red team is a team of researchers who are not invested in the same theory or predictions as the original team and who work to find flaws and opportunities for improvement in the research itself. The red team is incentivized to find the weak points in a given study's design and results, so funders could fund red teams to increase the rigor of studies. And to make these practices actually required, it's necessary to enforce them by withholding future funds. So here are the two take-homes for funders. One, allow researchers to budget for open science practices, giving them an incentive to do things like data archival, open sharing of materials, open access publication, and so on. Two, use mandates. For a mandate to be effective, there needs to be enforcement, in the form of holding back a portion of funds until compliance is achieved, or perhaps holding back funding for subsequent grants until the practices are completed for previous projects.

All right. Fourth, how else can we realign incentives? One thing we know for sure about human behavior is that even the most sensible plans can be stymied by habit, inertia, and resistance to change. A clear way to smooth the path to change is to make certain that new behaviors fit relatively seamlessly with old habits.
So a goal to run a 5K is within reach for someone who is already a daily walker, but not for someone who lives a sedentary lifestyle: the new behavior fits in more easily with the old. Additionally, people are not likely to adopt a new practice if they don't feel confident that they have the skills to do so, and training is needed to provide those skills and that confidence. In short, we want to make changes easy to do and teach people how to do them. So what can funders do to support these goals? Well, it's no surprise that I think tools like the Open Science Framework make rigorous practices like data sharing easy and put them within reach of the typical practicing researcher. Learning software like this is relatively simple and accessible for someone who's adopting a new practice. Importantly, the OSF is openly developed, free for end users, and openly licensed for adaptation and reuse. Of course, there are other tools and infrastructure besides the OSF that are also worth supporting; collectively, these services are absolutely critical for growing and supporting open science practices. So we want to support open infrastructure, the kind of infrastructure with the features the OSF has: openly developed, openly licensed, extensible, and durable. We know that these tools are not free to develop and sustain, and without that support, the community has little choice but to use locked-in commercial substitutes. Companies like Elsevier are actively working to monetize this infrastructure, and they're doing so under terms that will ultimately stifle scientific progress, costing universities and funding agencies far more in the long run. Now, although new tools and technology simplify the transition immensely, early-career researchers must be trained, and to some extent later-career researchers must be retrained, to use these new tools and practices. So a simple thing that funders can do is provide funding for workshops and conferences that can help these practices really take root.

All right, finally, our fifth recommendation, and this one might sound a little silly at first: we want to focus on process, not people. Now that might sound cold or distant from human connection, but that's not exactly what I mean. What do I mean by focus on process, not people? Well, let's return to our diagram of the status quo from earlier. We know that journal publications exert an outsized role in the system, perhaps especially early in people's careers. But recall that I mentioned that as careers progress, a mutually reinforcing relationship develops between the different elements in the diagram: access to funding leads to more journal publications, admission to a tenure-track or tenured role opens the door to new funding opportunities, and so on. This is known as the Matthew effect, or the Matilda effect: essentially, the rich get richer, and early success begets later success. What we need to recognize is that these feedback loops are likely producing bias. Suddenly it's not the researcher with the best idea or the most rigorous methods who's getting funded; it's the person with the longest publication list, the highest h-index, or the fanciest institution. This pattern can also keep rigorous research practices from taking hold: if a quick-and-dirty publication strategy yields the currency needed to get the next grant, then researchers with more rigorous practices may be left without funding.
And without funding, the chances to advance in one's career or to get the next publication become more limited, perpetuating the problem. So what can funders do to avoid this biasing influence of earlier success? First, funders can amend review criteria to focus on rigor and to de-emphasize personal characteristics. Funders should carefully examine their review processes to ensure that reviewers are carefully considering the methods of proposed research and rewarding the rigorous practices we discussed earlier. Researchers who work transparently, who expose their work to scrutiny, who share valuable data, and who create tools for the community are the ones who should be supported. Second, funders should put processes in place to reduce or eliminate emphasis on traditional metrics of research success, like publication count and h-index. Using blind review when possible, for example, can remove many of the potentially biasing factors from consideration: things like institution, gender, and the length of someone's CV. When blinding is not possible or desirable, explicit reviewer instructions and training are likely needed to emphasize the funder's commitment to focusing on the project at hand and not on these other biasing characteristics. In addition to supporting rigorous research practices, these steps can have the additional benefit of supporting early-career researchers, researchers from underserved institutions, and members of demographic groups whose work may be overlooked in traditional review processes.

So, taken together, I hope that these five potential ways to realign incentives give us all some ideas for how we might change our practices in order to support rigorous research. Practicing researchers recognize that there is a disconnect between what is rewarded in academic positions and what is needed to do high-quality research: the academy values quantity of output far more highly than quality. I hope that our discussion of these five domains has given the funders in the group some ideas for changes you could make in the very short term which, taken together, could have very large effects on realigning incentives so that accuracy and rigor are rewarded rather than simply volume of output. Thank you very much for your time. A copy of this presentation is available at the link in the slides.