And with that, we're going to go ahead and get started. So first I want to welcome our first speaker, John Protzko, to talk about how researchers update their beliefs. So thanks for being here and take it away, John. All right. Thank you very much. So I'm John Protzko of Central Connecticut. And this study was done at a unique time in the metascience process: big team science in psychology was just getting started, so team sizes were still a little on the smaller side. So scientists are supposed to change their beliefs when faced with evidence. That's the stereotype non-scientists have about us. But once you actually get into science, start working with colleagues, and interact with reviewers, you start to realize that might not actually be how scientists behave. For example, and this is a fun one, the evidence that an asteroid caused the extinction of the dinosaurs was discovered around 1980. I didn't realize it was that recent; I'd thought it was decades earlier. But there was some early metascientific work on how that research was received, and in general most scientists just mocked it and completely ignored it. It took about a decade before the field came around to: okay, now we see it was an asteroid that killed the dinosaurs. Similarly, the Apollo moon landings. Their purpose, from a scientific perspective, was to understand where the moon came from, by going and bringing back rocks. We didn't know where the moon came from. One theory held that it was a passing body that got caught in the Earth's orbit. Other theories held, for example, that an early massive impact with the proto-Earth spewed a bunch of debris into space, which got caught and coalesced into the moon.
And bringing back these moon rocks allowed scientists to actually figure out where the moon came from. In similarly early metascientific work, a researcher, Mitroff, spent three and a half years over the course of numerous Apollo moon landings following about 35 astronomers who held different theories about where the moon came from. What he discovered was that, at first, when the moon rocks came back and it seemed like the moon was made of roughly the same stuff the Earth was made of, the astronomers who believed the moon was a passing body that got caught in orbit changed their beliefs a little bit. They said, okay, maybe we're wrong. But he found that within that three-and-a-half-year span, all of those people had gone back to believing what they originally believed. Even though nowadays, I'm a psychologist so don't quote me, the scientific consensus does seem to be that the moon did come from an early impact. So these are fun little stories about scientists not acting very scientifically, not really changing their beliefs. But what we really need is systematic work on scientists as they make discoveries themselves. And that's what I hoped to do some years ago. Just as an aside, this work was done with Dan Simons and Alex Holcombe as well, but all the mistakes are entirely mine. So we got this opportunity to ask: do scientists change their beliefs, and if so, how? The problem is that when you study a whole bunch of scientists, there are a lot of idiosyncrasies in the individual people and the studies they're running; it's too heterogeneous. I'm an experimental psychologist, and I don't like having that many things I can't control. So what you really need to study this is a bunch of researchers studying the same thing at the same time.
So what we did is work with the registered replication reports, the early big team science projects. These were teams of about 15 to 25 different researchers who would all get together and run the same study at the same time, and we surveyed 214 of them across these different studies. We did this at three different times. First at pre-test, before they had collected any data at all: we surveyed them ourselves and said, hey, you're about to undertake this study, what do you think is going to happen? What do you think the effect size is going to be? In this early stage of big team science, people were still analyzing their data themselves. They collected their data and analyzed it personally; there was no centralized data collection. So there's this brief moment in time where each scientist knows the results of their own study, but nobody else knows their results and they don't know the results of anybody else's study. Then it all gets put together in a meta-analysis, and after all of the researchers learned what everyone else found on this topic, we got to ask them again: okay, now, seeing all of this evidence, what do you believe? We asked them the classic question, how much do you believe in this hypothesis, and we also asked them to estimate the effect size, at all three time points. With this type of data, we can ask certain questions. Do your prior beliefs influence the results that you get? It might be that people who believe strongly in a theory get good results, while people with low prior belief somehow subconsciously sabotage the study. We can also ask how these scientists change their beliefs over the life cycle of running the same study: from their own results, at that moment when they have just gotten their own results, but with 15 teams each getting their own results, and then again at the end.
One question with something like this is that these are researchers, psychologists, who engage in systematic replication work. So it may be that most of the people conducting this research have a very low initial belief; they're running these studies because they think, this research is crap, I don't believe it whatsoever. That would create a problem if there wasn't sufficient variance in pre-test beliefs. But it turns out there's a lot. And not only is there a lot, the majority of researchers in these studies actually believed in the hypotheses under investigation before they conducted their analyses. So the distribution is negatively skewed, if you think of the negative end as holding a low belief. So we have sufficient variance, and now we can ask: is prior belief related at all to the results they got from their study? And we see it's absolutely unrelated, which is great, because one of the central assumptions of the statistics we use is that when you run a study, you get a random, exogenous draw from a distribution of effects. If this were biased in some way, it would create problems. And my excitement for doing this study at the beginning, as an experimentalist, was that people were now effectively being randomly assigned to get statistically significant or statistically non-significant results, because that's the assumption: an exogenous draw from a distribution of effect sizes. What we didn't expect is what happened over the course of these six registered replication reports. The very first one ever done, which I wasn't a part of, was a successful replication. But in the first replication that we investigated, no team got statistically significant results. So obviously the meta-analysis was non-significant; no team got any significant results.
In the second registered replication report, not a single team got significant results, and again the meta-analytic result was zero. In the third registered replication report, however, this team down here did get statistically significant results, but in the wrong direction from what the hypothesis actually predicted. In the fourth, same thing: this team got significant results, no other lab that ran the study did, but it's also backwards, in the wrong direction. In the fifth registered replication report, similarly, this one team right here got statistically significant results while every other team running their own study got non-significant results, but again in the wrong direction from the hypothesis. And in the sixth registered replication report, this wonderful team at the very top got statistically significant results in the direction of the hypothesis. They successfully replicated the result, and that team was not in my study; they didn't answer the call. So an important thing to keep in mind when you look at this data, this change over time, is that every single researcher in this study got non-significant results in their own work, and then later, when it was all combined meta-analytically, they similarly saw non-significant results. So this is the general change over time. Each of these lines is a person. We have the pre-test: how much do they believe in the hypothesis? (We have this data for the effect size estimates as well, but 10 minutes is not long enough.) Then they get their personal results, and we see this change. And then we get to see their change again after they learn the results of the meta-analysis. So how are researchers changing their beliefs over time?
One way we can start to dig into this data, and it's very descriptive in its approach, is that we have their prior belief and we have their results. From their results we can compute a Bayes factor, and with their prior and a Bayes factor we can construct an idealized posterior: here's how much you should believe in this hypothesis, here's what you should think the effect size is, given your prior and the results you actually obtained. Then we can look for deviations. We know what the posterior should be under one version of ideal updating, and we know what they actually said after they saw their own results. And what you see here is: this point is people being perfectly accurate, and everything over here on this side is an under-correction, meaning their belief should have changed to a certain extent, or shifted to a certain new effect size, and it isn't going all the way. They're under-correcting. Their priors are exerting too much influence. We can look at the second stage too, where they go from knowing only their own results to having learned the meta-analytic results from every other team that did this research. We can construct the same comparison: now what should your posteriors be, and what do you say they are? And we see something really interesting. First, there are actually pretty big spikes close to, I would say within error of, being accurate. But then there are these people right here who are refusing to change their beliefs. They are deeply under-correcting, even though they personally got results that failed to replicate, and then they saw everybody else's study and saw that everyone else failed to replicate too.
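The idealized-posterior comparison described above can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual analysis code: the prior, the Bayes factor, and the reported posterior below are made-up example values, and the helper name `ideal_posterior` is hypothetical.

```python
# Sketch of the "idealized posterior" logic: combine a researcher's prior
# with the Bayes factor from their own results via Bayes' rule, then compare
# to the belief they actually reported. All numbers are hypothetical.

def ideal_posterior(prior_prob, bayes_factor):
    """Posterior probability implied by Bayes' rule:
    posterior odds = prior odds * Bayes factor (for H1 over H0)."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1.0 + posterior_odds)

# A researcher starts out believing strongly in the hypothesis (0.80),
# but their own replication yields evidence against it (BF10 = 0.1).
ideal = ideal_posterior(0.80, 0.1)   # Bayes' rule says belief should drop to ~0.29
reported = 0.70                      # what they actually say afterwards

# A positive deviation is the under-correction pattern in the talk:
# the stated belief did not move as far as the evidence implies.
under_correction = reported - ideal
```

The same comparison applies at the second stage, with a Bayes factor computed from the meta-analytic result instead of the team's own data.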
And these people here, that's this clump of people right here, start off with a really high degree of belief in the hypothesis and then run the study themselves. This isn't hearing about somebody else's research and then changing your opinion; you ran the study, you collected the participants yourself, you analyzed the data yourself, you saw that you didn't replicate, and you didn't change your views. And then you saw that 14 other teams, 20 other teams, did the same thing and all failed to get the result, and you went: no, I'm still going to believe in this. So what's the overall message here? The first is that people's prior beliefs, at least in this context, are unrelated to their outcomes, which is really important. In these registered replication reports things are strict, everything is preregistered; there's really no room for p-hacking. The second is that scientists under-correct their beliefs when faced with contradictory evidence. I would have loved to have people who got significant results as part of my study, but that didn't happen; we had no idea we were going to see six failed replications in a row. And the priors are just too sticky. People seem to be unwilling to give them up. Thank you. Thank you, we probably have time for one quick question. Hey, I'm wondering if some of the individual differences in scientists changing their beliefs might be related to how much they either self-identify as a Bayesian, have training in Bayesian statistics, or have hobbies in games of chance. Yeah, I had all these Bayesian jokes queued up, and then when I was practicing last night I realized I didn't have time to crack them. We didn't ask anything like that. However, we are currently doing this study again right now.
And we've teamed up with some Bayesian statisticians, and we are actually having researchers construct their own prior distributions, credible intervals, and so on. So I think we might be able to ask that. The problem is that a lot of big team science, as I said, has definitely moved to centralized data collection and analysis, which from a project management standpoint is wonderful, it's so much better than letting people deal with their own data and then trying to aggregate it later, but it removes that brief window where a team knows only its own results. Still, I think that might be something we can ask. And I'm always keeping an eye out for other new places where we can extend this work as well. Thank you. Thank you. Thank you, John.