Jen, do you think that the psychological literature in the highest-impact journals is scientific and trustworthy?

Well, I think we've had a number of provocative examples where, on closer look, what was originally reported as a strong effect turned out to be at least exaggerated, if not outright false.

Exaggerated and false. Is that due to fraud?

Well, I think there are some instances of fraud, but they're relatively rare. What is much more common is that there are so many accepted, flexible rules for reporting studies, for picking outcomes after the results of the study are already known, for increasing or decreasing the number of subjects in the study, that these kinds of manipulations are sufficient to produce exaggerated effects. You don't need fraud.

But wouldn't this just be due to publication bias, you know, the tendency of editors and reviewers to want positive outcomes and significant findings, and to reject papers that say, well, we didn't find it?

I think editors and reviewers certainly play a role. If you look at some journals, like Psychological Science, you'd think that an experiment never, ever failed to work out. But I think sometimes authors anticipate that they'll be required to have a positive finding, and they chase significance until they have one. It can't fall just on the journals or the authors, though. It's the expectations of the field that single findings are newsworthy. What we need to do is suspend judgment about how exciting a study is until it's been replicated. Right now the premium is placed on the first person to find a particular result; I think it should be placed on the first person to replicate it, and on numerous replications. And we should have the expectation, from the existing literature, that most breakthrough findings will probably prove exaggerated, or maybe false.
So in other words, you're saying that it's kind of ridiculous to expect a single study to be conclusive?

Absolutely. What science does best is knock down hypotheses. But if you look at the current psychological literature, the null hypothesis is almost always proven false, and the investigators' preferred effect is what's found.

Playing devil's advocate for a moment, what about the proposition, advanced by statistically sophisticated people, that the null hypothesis really is always false?

It is, but then so, very often, is the investigators' expected result. The null hypothesis may always be false in a strict sense, but the effect may very well lie in the opposite direction. And I think a lot of the phenomena that we study with great confidence are a lot more unstable and less clear-cut than we'd assume from the psychological literature.

Okay. So what can we do to reform?

I think we should start with skepticism, and we should enforce the idea that people declare their hypotheses ahead of time, and that when they submit a paper to a journal, they submit their data as well, to allow independent re-analysis and replication. There's a movement afoot to do replication studies, and that's fine. But I'd be satisfied if we could even replicate people's results using the existing data sets, and I think we'll find that's a lot harder than we'd expect.

Okay. So by replicating the results using the existing data sets, you mean re-analyzing the data?

Exactly, using alternative analyses.

Sure. Maybe re-sampling techniques, so that you...

All of those things. But even for a start: a lot of the time people use very complicated statistical techniques. They use imputation, they control for covariates, and they never get around to reporting their simple results. I'd be much more comfortable if something done with sophisticated analyses could be replicated with the simple effects.
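The "simple effects" check described above can be sketched in a few lines. Everything here is illustrative: the two groups of scores are made-up numbers standing in for a shared data set, and the "simple effect" is just the raw difference in group means, checked for stability with a bootstrap rather than any complicated model.

```python
import random
import statistics

# Hypothetical data: outcome scores for two groups (invented numbers,
# standing in for a data set deposited alongside a published paper).
treatment = [5.1, 6.3, 4.8, 7.0, 5.9, 6.4, 5.5, 6.8, 5.2, 6.1]
control   = [4.9, 5.4, 5.1, 5.8, 4.7, 5.6, 5.0, 5.3, 5.7, 4.8]

# Simple effect: raw difference in means, no imputation, no covariates.
observed = statistics.mean(treatment) - statistics.mean(control)

# Bootstrap resampling: how stable is that simple effect?
random.seed(0)
boot = []
for _ in range(10_000):
    t = [random.choice(treatment) for _ in treatment]
    c = [random.choice(control) for _ in control]
    boot.append(statistics.mean(t) - statistics.mean(c))
boot.sort()
lo, hi = boot[250], boot[9750]   # rough 95% percentile interval

print(f"simple mean difference: {observed:.2f}")
print(f"bootstrap 95% interval: [{lo:.2f}, {hi:.2f}]")
```

If a paper's sophisticated analysis and a transparent check like this disagree, that discrepancy is exactly what the speaker suggests is worth investigating.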
If simple statistics produce the same effects, that's very reassuring; when there's a discrepancy, it's worth looking at why. The problem is that when I go and try to examine the simple effects, so often I can't find them in the paper. And when I write to the authors, they've lost the data, the dog ate it, whatever, or they simply don't produce the data. One author replied the other day that his university doesn't allow him to share data.

I highly doubt that. I thought the APA still requires maintaining data and making it available for five years after publication.

It does. But when you ask the journals to assist you in retrieving the data, they'll say that's between you and the author.

I think we can teach consumers of this research to adopt certain rules based on the weight of past studies, so that when we hear that some new treatment changes the immune system, whether by breathing or by meditation or by cognitive behavioral therapy, they know that the chances are most past claims of that kind have not worked out. It's unusual for one treatment to be better than another treatment that has a different but credible rationale and that provides support to the patient and positive expectations, so a new claim is unlikely to be anything earth-shattering or all that different. Give them some basic Bayesian prior probabilities. And when psychological gurus give advice to patients, they should recognize that the advice is often based on studies where the effect sizes are small; the findings don't generalize to everybody. When in doubt, they should trust their own sense of self-efficacy and autonomy, and be skeptical of people selling them advice and undermining their sense that they can run their own lives.

So a healthy dose of skepticism, a fair amount of vigilance, and not falling for the idea that a single study is going to be conclusive?

That's right. And I think we constantly have to remind people, to keep them on track, because they would like clarity. They'd like certainty.
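The "basic Bayesian prior probabilities" idea can be made concrete with a toy calculation. The prior, power, and false-positive rate below are assumed numbers, not figures from the conversation; they simply show why a single significant study should move a low prior only part of the way toward belief, while an independent replication moves it much further.

```python
def posterior_real_effect(prior, power=0.8, alpha=0.05):
    """P(effect is real | a study found a significant result), by Bayes' rule.

    prior: assumed probability the effect is real before the study.
    power: assumed P(significant result | effect is real).
    alpha: assumed P(significant result | effect is not real).
    All three numbers are illustrative assumptions.
    """
    p_significant = power * prior + alpha * (1 - prior)
    return power * prior / p_significant

# A "breakthrough" claim: low prior that the effect is real.
one_study = posterior_real_effect(0.10)
# Independent replication: feed the posterior back in as the new prior.
replicated = posterior_real_effect(one_study)
print(f"after one significant study: {one_study:.2f}")
print(f"after a replication:        {replicated:.2f}")
```

Under these assumptions, one significant finding leaves the claim far from certain; it is the replication that does most of the convincing, which is the interviewee's point about where the premium should be placed.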
And I think it's the job of us skeptics to keep finding examples where trust in the psychological literature isn't justified. Maybe at some point we'll reach a balance, and the literature will become more trustworthy; it certainly isn't now.

Let's hope we reach that balance sometime soon.

I hope so too. Thank you.