Thank you, Tim. When methodologists see all those results that Tim described, they all say, ah, I knew that. We knew that. We knew that. We knew that. And the reason they keep saying that they knew that is because they have known it for a long time. It's hard to see from the back, but look at these papers: 1959, 1962, 1968. They raise all these challenges in how the scientific process works. We have low power and precision: we rarely have sample sizes sufficient to investigate the questions that we're investigating. People are employing questionable research practices: they're making changes to their analytic strategy as they go, and that's inflating the likelihood of false positives. They're not applying rigor: they're using measures that have already been invalidated, they're not validating their existing antibodies, whatever else is happening. There's publication bias: we're ignoring the negative results and inflating the likelihood of observing findings by selectively reporting what gets into the literature. People aren't replicating studies, so we don't even have an occasion to verify whether those findings are repeatable or not. And people are terrible at using null hypothesis significance testing. They always use it wrong. Why do we keep using it? How about this other thing? No, let's go back to this thing. And around and around and around. So we've known about those problems for decades.

And then those same papers say, here are all these solutions we can apply to fix this. We can increase sample sizes. We can promote transparency. We can be clear about the distinction between what was planned in advance and what happened after the fact. Let's report all the outcomes. Let's do replication studies. Let's work on aggregating evidence. And let's make sure NHST is used for what it's supposed to be used for, not other things.

So when I got to grad school in the late 90s, working in Mahzarin's lab, we were reading these papers and saying, okay, wait, they knew what the problems were. They described the solutions. And some of these authors, like Jacob Cohen, were now, in the late 90s, writing reflective pieces saying, I've been talking about this for 35 years and nothing's changed. It's ridiculous. We pointed out what the problem was. Why hasn't it changed?

Well, to me, that raises an important question: do we know that these solutions work? And there's a growing body of evidence suggesting that, in fact, those things do improve credibility. Here's one example, where my lab and three other labs, at Stanford, UC Santa Barbara, and Berkeley, were funded to do our discovery research, and when we thought we'd discovered something that was eligible, we would enter it into this pipeline. Each of the four labs produced four novel findings over this five-year period. And the pipeline was: as soon as you think you've got something, you submit it to a large-scale confirmation study. Large sample, precise estimate, pre-registered so that you know exactly what your planned design is, et cetera. Then once you have those findings, you share all of your methodology and write it up so that the next team can do an independent replication. And then each of the four labs did independent replications of each other's findings, trying to adopt all of these best practices that the methodologists had been crowing about for years.
And so in the end, we produced 16 self-replications: we reproduced our own findings. And then the other three labs did independent replications with unique samples. What we find in that case is that the average effect size on the left, across those 16 findings, is very close to the average effect size of the replication studies. When you adopt all those practices, you get replicable results, at least in this example of 16.

So why was Jacob Cohen complaining? Look, we now have the evidence. We know this is gonna work. So why aren't people doing it? To me, it recalls the example from South Park of the underwear gnomes, who had this amazing three-part plan. The first part was they're gonna collect lots and lots of underpants, and they've been collecting underpants for years. And the third part is they're gonna make a massive profit from all of these underpants. The only thing they had not figured out yet was phase two: the business plan for how you take the underpants and turn them into profit. Everything else was worked out. The methodologists really are adopting the same strategy. Look, we figured out the problem. We provided the solution. We put it in a paper. So why isn't science better? There is a big gap between writing a paper, which is the academic hammer for every problem (we'll just write a paper about it and then it's solved), and actually implementing that solution in practice.

And that gap has two parts. One part is understanding why those practices exist in the first place. Jacob Cohen was thinking it's just that people don't know, and if they knew, they would do the right thing in their practices. Of course that's not it. Everybody knows. Talk to any graduate student in any department and they know that those things are occurring. It's not about knowing. It's about the structure of the system. The second part is: how do you develop a realistic, behaviorally informed implementation strategy to get the culture to change? How do you actually restructure the system and enable people to live and practice according to the values that brought them into science in the first place?

For us, the lion's share of the challenge is focused on the incentives. The incentives for my success are focused on me getting it published, not on me getting it right. Of course I want to get it right. I didn't get into science to write papers. Nobody gets into science to write papers. We get into science because we're curious, trying to discover things, trying to figure things out. But there are certain ways that we are rewarded and advanced in our careers, and publishing as frequently as possible in the highest-prestige places that we can is a sure bet for advancing our career interests. And we know that not everything gets published. I'm more likely to get published with a positive result rather than a negative result. I'm more likely to get published finding something new rather than increasing confidence in something that somebody else had done previously. And I'm more likely to get published if everything in my evidence fits together nice and neat, rather than having exceptions, things that don't quite fit, parts that I can't explain. The novel, positive, tidy story is rewarded because it is the best kind of story in science. When we discover something new and have a comprehensive explanation for it, that's amazing. It's an incredible contribution. But it doesn't happen at the rate of five million papers a year. It happens at the rate of every once in a while.
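To get a feel for how strong that filter is, here is a minimal simulation sketch. Every parameter is an illustrative assumption, not anything from the talk: a true effect of d = 0.2, 30 participants per group, and a "journal" that only accepts significant positive results. The unfiltered studies estimate the effect without bias; the published subset overstates it badly.

```python
import numpy as np
from scipy import stats

# Sketch: many small studies of the same modest true effect, where only
# significant positive results survive the publication filter.
rng = np.random.default_rng(1)
true_d, n, studies = 0.2, 30, 20_000   # arbitrary illustrative parameters

a = rng.normal(0.0, 1.0, size=(studies, n))      # control groups
b = rng.normal(true_d, 1.0, size=(studies, n))   # treatment groups
pooled_sd = np.sqrt((a.var(axis=1, ddof=1) + b.var(axis=1, ddof=1)) / 2)
d = (b.mean(axis=1) - a.mean(axis=1)) / pooled_sd  # per-study Cohen's d
p = stats.ttest_ind(b, a, axis=1).pvalue           # per-study two-sided p

published = d[(p < 0.05) & (d > 0)]  # the "novel positive" filter
print(f"true effect:            {true_d:.2f}")
print(f"mean d, all studies:    {d.mean():.2f}")         # ~0.20, unbiased
print(f"mean d, published only: {published.mean():.2f}")  # well above 0.20
```

Publishing confirmations and replications regardless of outcome, as the pipeline above did, removes that filter, which is consistent with the original and replication effect sizes lining up.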
And the reward system isn't prepared to reconcile those two rates, the rate of publication and the rate of genuine discovery. The consequences of that are what lead to the challenges that Tim described in the opening. We have these incentives for novel, positive, and tidy outcomes. So it is no surprise that, when I'm confronted with the dozens of experiments that I did in that semester, the ones that showed more publishable results are the ones that seem to make more sense to fit together and put into a paper. It's not surprising that the system itself then reinforces that by selecting those with more interesting results for publication. It's not surprising that when I'm confronted with multiple ways to analyze my data, the ways that look better for publication are the ones I can easily rationalize as the right way to analyze that data, rather than the ones that look less good for publication. I'm not trying to be deceptive. I'm not trying to do it wrong. I am in a context where my reward system puts a conflict of interest between me and my evidence: I need certain kinds of things to work, and to work well. And so I will mistake rationalization for reason and just proceed as if I'm developing an understanding of the phenomenon, rather than conducting a selective search in order to advance my career interests.

There's little reward for being transparent or sharing. In fact, there's a disincentive for me to be transparent about how I got to my findings, because if you can interrogate my data, you're more likely to find errors, and I need you not to find the errors for me to get the reward in the current system. Likewise, there's no reason for me to do a replication of your work, because replications aren't rewarded the way novel findings are. And there's no incentive for me to replicate my own work, because all I can do is lose a finding that I already have. So why would I do that?

The consequence of all this is a decline in the credibility of the published literature and the loss of self-corrective processes, the vaunted self-correction of science. We're gonna get things wrong all the time. Science is hard, as Tim was saying. It's okay to have hundreds, thousands, millions of false starts; we're pushing the boundaries of knowledge. The point is that we need a system that helps to sort out the things that are promising new directions from those that are dead ends. And if we don't have transparency and sharing, if we're not doing replications of findings that are starting to change practice, then we're not giving ourselves the occasion to implement those self-corrective processes. And all of that just produces waste: waste in the system, friction in the pace of discovery that isn't necessary. If we nudge the system and how it's rewarded, we might be able to accelerate that process more effectively.

So there are many different solutions that we might conceptualize. Our focus has been on these as direct mechanisms for dealing with the model that I just described. For example, if we have more sharing of data, code, and materials, then we create occasions for that self-corrective process to occur. You can see how I arrived at those findings. You can reproduce my results to see if I got it right. You can apply different analytic strategies to check their robustness, just as Tim described. And if there's some reward for replication studies, there's additional occasion for that self-corrective process to work.
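The "multiple ways to analyze my data" problem can be sketched the same way. Assuming two groups with no true difference and three plausible-looking analysis variants (the specific variants here, trimming outliers and collecting ten more participants per group, are arbitrary illustrations), reporting whichever variant comes out significant pushes the false-positive rate well beyond the nominal 5%:

```python
import numpy as np
from scipy import stats

# Sketch: two groups with NO true difference, analyzed several plausible ways;
# count a false positive if any variant reaches p < .05.
rng = np.random.default_rng(0)
n, trials, alpha = 30, 5_000, 0.05
fixed_hits = flexible_hits = 0

for _ in range(trials):
    a, b = rng.normal(size=n), rng.normal(size=n)
    p_plan = stats.ttest_ind(a, b).pvalue                 # the pre-planned test
    a_t, b_t = a[np.abs(a) < 2], b[np.abs(b) < 2]         # variant: "trim outliers"
    p_trim = stats.ttest_ind(a_t, b_t).pvalue
    a_x = np.concatenate([a, rng.normal(size=10)])        # variant: "run 10 more"
    b_x = np.concatenate([b, rng.normal(size=10)])
    p_more = stats.ttest_ind(a_x, b_x).pvalue
    fixed_hits += p_plan < alpha
    flexible_hits += min(p_plan, p_trim, p_more) < alpha

print(f"planned analysis only:  {fixed_hits / trials:.3f}")    # ~ .05
print(f"best of three variants: {flexible_hits / trials:.3f}")  # noticeably higher
```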
And if we involve pre-registration, where it's now clear to me and to you what I planned in advance versus what happened after the fact, then we can address questionable research practices. Maybe not stop them, but at least expose them. And by registering my studies, we know that they exist. So even if I did 100 studies and only wrote up 20 of them for publication, or five, you can still find the other 80 or 95 and assess the credibility of the literature as a whole. And then finally, the model of registered reports goes after that core incentive at its root: what does it take to get a publication? We don't try to remove publication as the incentive; that would be a very hard thing to change in the way that science is structured right now. But if we change what it takes to get a publication, making it about asking important questions and applying rigorous methods to test those questions, and not about the outcomes, then we fundamentally change the incentives for researchers. And Leslie will talk more about that this afternoon.

But all of that is well and good; it's not a theory of change. We may say, oh yeah, okay, that whole model, it's all there and we have the answers now, so everybody just do it. Of course that's not gonna happen, because the system is self-sustaining and it's decentralized. And in a decentralized system, even if a few actors change, the resilience that comes with decentralization will bring everybody back to the system as it currently exists. So we have to have an implementation strategy that brings together all of the different stakeholders in the research process, aligns how they set up the norms, the incentives, the policies and rewards, and brings the research community along with that, in order to have a change that can scale and can sustain. And to get us there, Lisa will tell us how. Lisa. Thank you.