Thanks, everyone, for joining this conference. These are among the most important events that the Center for Open Science is associated with, because our mission is really to support communities in catalyzing change themselves, and your participation as a collective body is a critical part of how we can engage the broader community that isn't present on ways to continue to improve and enhance the quality of the research that we do, support, and try to translate into practice. What I'd like to spend my time on today is really stage setting for what might be conceived of as the ultimate goal of this community's action to reform and improve research practice: to make the research that we produce, that we share, that we translate into policy or practice, itself trustworthy. That means research we can count on, where we understand and can calibrate our confidence, represent the uncertainty, remove the bias, maximize the generalizability, or at least understand the boundaries of where things generalize and don't, and where the research community, the researchers, and the research suppliers are all accountable to that evidence to the extent possible. So in talking about that big-picture, long-term objective of improvement and reform, I'd like to give a couple of examples of public trust of researchers in different scenarios and argue that these small-scale examples provide an object lesson for advancing the trustworthiness of research more generally. A common perception of the end goal of achieving public trust is for researchers to ultimately be confident, be certain, and be correct in the claims that they make. Thereby, if they achieve that, they've earned the public's trust in the research findings they produce by demonstrating their authoritativeness: we know what we're talking about, trust us. That, I'm going to argue, is the wrong way for us to think about the end goal of achieving trustworthiness.
Instead, I'm going to argue that making research trustworthy, and ultimately earning that public trust, is best achieved if we think of the end goal as helping researchers be humble in their claims, calibrated to the quality of the evidence, and consistently truth seeking. So let me illustrate the implications of public perceptions of researchers under the first framing versus this alternative way we might conceptualize trust and the trustworthiness of researchers. I'll do that by summarizing the results of a survey of almost 5,000 US adults that we did in my lab several years ago; Charlie Ebersole and Jordan Axt were co-authors on this work. We presented very simple scenarios: here is a researcher that did this thing. How competent do you think they are, their ability? How ethical do you think they are as researchers? And how true do you think their findings are? So we had those three outcomes, the first two being about the trustworthiness and credibility of the researcher, the last one about the actual truth value of the findings they produce. We wanted to see what happens when different situations of researcher behavior emerge: what drives perceptions of researchers' ability and ethics, and how is that associated with whether they are correct, whether their findings are true, at least in the context of reliability or replicability? I'm going to summarize the results in this graph, and what I'm showing here is the baseline question, the first question that we asked: researcher X, Brian, found an interesting result and published it. People then rated, given that this is all you know: what do you think of Brian's ability? What do you think of his ethics? And how true do you think the finding is? So if all you know is that I found this result and published it, we get a baseline rating, which is anchored here at zero.
So whatever the individual response was, it was put at that midpoint. Then we provided additional information and asked the same questions, ability, ethics, how true is the finding, to look at how responses change against that baseline. So for example: I found an interesting result and published it, and David succeeded in doing an independent replication. Now rate my ability, my ethics, and whether my finding is true. What you see here is that if what you now also know is that David succeeded in replicating my finding, respondents saw me as having more ability and stronger ethics, and their belief that it's a real finding was stronger, compared to just knowing that I found an interesting result and published it. So they are responsive to evidence of replication, and that has implications both for their perceptions of truth and for their perceptions of my ability and ethics. What if instead David fails to replicate my finding? Well, what occurs is perhaps what you would expect: they perceive a decline, compared to baseline, in the truth of my finding. That makes sense; it failed to replicate, so if that's all I know, then I think it's less likely to be true. But they also perceive me as slightly less ethical and as having less ability than if they had only known that I had found something and published it. With just these two scenarios, we might conclude that our reputations, ability and ethics, are fundamentally tied to whether we're right or not, whether we find true things or false things. If we find false things, we have poor reputations. If we find true things, we have strong reputations. But let's look at a few wrinkles, a few other kinds of scenarios that might occur. So: I found an interesting result and published it. David fails to replicate it, and I say, well, David did it wrong. His result is not valid.
He doesn't know what he's doing. If that is the scenario that occurs, then respondents are similarly responsive to the truth value being lower, just as if it had simply failed to replicate. My criticism does not make them say, oh, well, he must be right and David must be wrong. But I also lose some reputation on both perceived ability and perceived ethics for leveling that criticism. Of course, criticizing isn't the only option. I could have instead said: David failed to replicate my finding, and then I looked at his methodology and I agree. Oh, geez, maybe I was wrong; that result might not be correct. Respondents are responsive to that. The perceived truth of the finding goes down even a little bit more. Wow, they're both saying that maybe that original finding isn't reliable. But the perceptions of my ability and my ethics go the opposite direction. Compared to the baseline of just knowing I found something and published it, "David fails to replicate it and I agree" is actually associated with people perceiving my ability and my ethics as even higher than baseline. So even though the only additional thing they learned is evidence that I might be wrong, perceptions of my ability and ethics actually go up because of the way that I responded to that new evidence. Now, that doesn't only happen if I just give in and agree any time someone fails to replicate my finding, conceding that they must be right. I can productively examine it. So: David fails to replicate my finding, and I say, huh, that's weird, and I start some new research to try to figure out why I found it this way and David found it that way. If that's my response in this scenario, the perceived truth of the finding actually goes up a little bit, even though the only new evidence is that it failed to replicate.
And that might be because the perception of my actions in response leads to perceptions that I have even higher ability than in any of the other scenarios observed, and that my ethics are even higher. So respondents are very responsive to that productive engagement with findings as they are observed, and they adjust their perceptions of the reputation and credibility of researchers accordingly. Now, we may not even have David involved at all. I might have just done additional follow-up research and failed to replicate my own finding, and then published that too, saying, well, I got it that one time, I didn't get it this time. In that scenario, the effect on the perceived truth of the finding changes basically just as it would when David fails to replicate my finding. But because I went ahead and published the failure, there is a positive impact on perceptions of my ability and my ethics. And of course, that's not the only way I might respond when I fail to replicate my own result. I might say, oh, well, I failed to replicate it, but I did that second study terribly, so I'm not going to publish it; I'll file-drawer it. If I do that, the reaction is that the truth value is even lower. Boy, he's trying to hide stuff or something, right? I'm really not going to trust that finding. And perceptions of my ability and ethics tank in comparison to just publishing my failure to replicate. A final type of reaction might be: I'm not going to follow up at all. I got my publication; I'm going to move on and do other stuff. In that scenario too, the perceived truth of the finding and my perceived ability and ethics decline as a consequence. So when we look at this in aggregate, what we can see is that US adults appreciate that findings are uncertain. They recognize that those results could be wrong, and they do not judge researchers' ability and ethics solely on whether they were right or wrong.
But rather on how they pursued the truth. Look especially at these three results, where belief in the finding and perceptions of the researcher's ability and ethics do not track each other when the researcher demonstrates a commitment to finding the truth even in the face of contradictory evidence. What's remarkable is that in all three of those situations, compared to the baseline at zero, there is now less belief that the finding is true but more perception of the original researcher as having more ability and more ethics, even though the only additional thing known about the evidence is that the finding might be, or probably is, wrong. That's remarkable responsiveness to style, to how researchers approach their work. So the narrow implication, I think, is that researchers are trusted more for a commitment to getting it right than for being right. And from those examples, which are just about replicability, there is something broader that I think we can conclude about trustworthiness in general: the trustworthiness of research is more about the process of doing research than about the outcomes. The outcomes, the findings of research, will always be a basis for distrust, because science sometimes presents us with unwelcome evidence, evidence that challenges what we want to be true because of our ideology, our financial interests, our personal commitments, or otherwise. That's always going to happen, and it will always be the basis for accusations of a lack of trustworthiness: I don't like your findings, David, so therefore I don't think you are likely to be trustworthy. If our aim is to end that type of accusation in the broader public, we're in a fight that's unwinnable, and it's the wrong fight. When someone receives unwelcome evidence from science, what do they try to criticize? They try to criticize the process.
Respect for science is high because people know that science has been successful in the past, but they also know that it isn't perfect. It's subject to biases like everything else. It's subject to the personal interests that we as researchers bring to it. It's subject to all the kinds of things that, in real life, produce evidence that isn't quite aligned with the truth. So when I get unwelcome evidence from the world of science, evidence I don't like, I'm going to be motivated to find flaws in the process of the research behind that evidence so that I can dismiss it. Our broader project is not to cater to ideology, but rather to recognize that the outcomes are going to be contentious, and that the best opportunity we have as a research community is to enact the behaviors that address those areas of process that could justify a critic's response of "I shouldn't trust this because the process was bad." So what are the different aspects of behavior that we as individual researchers and as a community can take on to make research more trustworthy? A first dimension is accountability. Are the researchers accountable to being trustworthy? Did I go through IRB approval? Did I acknowledge any conflicts of interest that I might have? Did I provide a positionality statement for how I am interacting and engaged with this work? Did I disclose my funders? Did I credit all of the contributors to the work? Another dimension is whether the research can be assessed: is it evaluable? Did I share my research plans? Did I share my data, my code, my materials? Did I show all of the outcomes of what I did? That doesn't mean that it's true. It doesn't mean that it's replicable. But it does mean that you can check, and that evaluability is an indicator of trustworthiness on its own, even if it's not an indicator of validity or generalizability or anything else. Has the research been evaluated? Has it been assessed? Has it been reproduced? Has it been shown to be robust?
Has someone replicated it? Has it gone through peer review? Has it been engaged in the scholarly discourse, debated and critiqued, and everything else that occurs? Is it well formulated? Does the research take relevant knowledge and perspectives into account? Is it based in the existing literature? Is it informed by theory, providing a sequence, a logic, a rationale for the expectations of the work given the understanding currently present in theory or otherwise? Is it representative of, and generalizable to, the population about which one tries to make inferences? Did it include the stakeholders that are affected by the research? Does it engage in participatory research with the communities that might be impacted, to help inform and shape what the questions are? Does the research control bias? Does it promote accuracy and validity? Does it use validated measures, blinding, randomization, preregistration, all of the types of actions that are ways to address bias in the research process itself? And does it work to reduce error and promote precision and reliability: attention to sample size, using reliable measures, power analysis, et cetera? And finally, do the claims match the evidence? Is it calibrated? Does it represent the uncertainty of where the evidence stands compared to the conclusions we're interested in drawing versus what the evidence justifies? Does it identify limitations? Does it pursue alternative explanations? Does it identify the constraints on generality that are known or potentially unknown? And are the conclusions aligned with the evidence? So that's a lot of things, but it is the lot of things we are constantly engaged in, in order to ask: is the work we're doing work that we think can earn trustworthiness and then, ultimately, the public trust?
One of the challenges that we face, and a reason that we have meetings like this, is not just to improve our own skills, abilities, and ideas for how we can do those various behaviors, but also to recognize that this is a systems challenge: it's not just us individually that have to enact these behaviors, but a question of how we reform the reward system and the training models, and how people engage with this as a collective effort in our field and in science more generally. The general conclusion that I take from a lot of this work is that a core basis for science being trustworthy is that, in its ideal instantiation, it has mechanisms that lead it not to trust itself: to constantly be questioning, with that openness to potentially being wrong, that humility, with that calibration of really interrogating where claims are aligned and misaligned with the evidence, and always looking to find the truth in the best way that we can, so that science serves the public interest in the way that we all aspire it to do. So thanks again for coming to this conference, for being part of this effort to improve ourselves and our communities, and I'm excited to see what will come out of it over the next couple of days. Back to you, David.

Brian, there was a great question in the Q&A feature. I think I know the answer to it, but I want to see if I'm wrong. You're using me as a straw man for the competent or the incompetent replicator of your work. In the research itself, I believe you used researcher X or researcher Y. Monica asks: if you use a particular person's name like David, it's commonly associated with white males and all the assumptions that go along with that. So just a little clarifying question on that study there.

Yes, thank you for that. You are my favorite straw man, so I appreciate your willingness to step in in that regard.
You are correct that in the actual research we avoided any names at all. We used variable labels, X and Y and so on, to decontextualize those factors. And I think Monica is quite right that if we had varied names, there might have been different impacts depending on the assumptions about trustworthiness that people bring based on what gets inferred from a name, in terms of social identity and otherwise. So thanks for that.

I'm going to open it up to any other questions that come in from the audience right now. As folks are typing those in, I'll take the moderator's prerogative and start the Q&A session. Brian, given the trustworthiness implications of everything you've described today: I think a lot of folks are going to chime in on this in the particular context of education research, where there are unique challenges with direct replications, and a lot of it comes down to questions of how much the different contexts can account for the different results, and what the implications of that are. Given that different contexts are always going to occur, what are the implications for how you see replications being potentially useful in education research? I'll ask a similar question tomorrow when we have a special guest from the National Center for Education Research, which is tackling this question directly in the context of education, but what's your perspective from what you've seen in conversations across different disciplines?

Yeah, thanks for that. It's a really important question. And for me, the starting point in thinking about the role of replication for establishing trustworthiness and understanding of phenomena is rooted in a phrase that you said: given that there isn't an exact replication. There is no such thing as exact replication.
Some things feel closer to exact because, oh, that's a paradigm we can do again; it feels very mechanical, it might even be automated, or it might not be obvious why it would vary across different samples or settings. But there is no such thing. Every study we conduct is different from the prior study we conducted. If it's done on the same people, those people have changed. If it's done on different people, the people are different. History has inevitably changed; time has passed. So no matter the circumstances, the replication is not the same event as the original study. Given that, what is the role of replication in helping us understand phenomena? What it doesn't do is tell us precisely why or how the original findings were observed. That's a historical event, and we can't go back and make it happen again. We can question its credibility. We can question its certainty as an event that occurred in the past. But we're not ever revisiting it in any formal way. The replication is new evidence, additional evidence, about the phenomenon. So what I think the role of replication really is, is to confront our current understanding with additional evidence, to either challenge that interpretation or create occasions for theoretical innovation. Oh, we observed it there. I did a replication where, given our understanding of the phenomenon today, we thought we would observe similar evidence. We didn't. Now what? Either we've found a boundary condition unwittingly, something is different that matters, even though I didn't know there was something different that matters. Or the original finding is less robust than we had originally thought, so maybe it doesn't actually occur ever, or occurs only under the most constrained circumstances.
Or there may be something in the implementation of that replication study that we didn't realize was suboptimal, something that could have been noticed by somebody else, or that nobody noticed but that we now have to identify as suboptimal. All of those possibilities are opportunities for theoretical innovation, meaning better understanding of the phenomenon that we're pursuing. So when we set up the conditions of "I expect to observe similar findings as before," whatever the domain, it provides a chance, first, to generalize if we do see similar findings: oh, these are different occasions, and we now have a broader evidence base that this occurs under different circumstances, with changes in sample, setting, history, and otherwise. Or it provides an opportunity to question, and to start that iterative search for potential boundary conditions or for a weakening of the understanding such that the finding wasn't as promising as we initially thought. So thanks for that.

Yeah, that's great. The floodgates have opened for questions, so I'll read a few as they come in and we'll answer them live. Monica asked a follow-up about your research: did you ask participants at the end to visualize what the researcher in the examples looked like?

Oh, there's some classic draw-a-scientist research on that. I love that original work; it shows a lot of stereotypes manifesting very quickly. We did not, but it would be a great follow-up to give these scenarios, especially to see if some of those stereotypes end up manifesting differently depending on the researcher's behavior, before you know anything about their social identities. Somebody should do that. Monica, if you don't, somebody else, do it, please. That'd be really cool. So thanks for that.

I can just imagine what the least trustworthy scientist looks like. There could be some fun stereotypes in there.

He's not wearing a badges shirt. I guarantee it.

Yeah. Here's one about the ability to publish replications.
Shannon asks: did you find that publishers are more or less willing to publish replication research? It seems like that could be another barrier to having opportunities to establish trust. Also, is there any room for researchers to conduct their own replications?

Yeah, great question. And of course, this is part of that systems challenge: if the ways in which we get the rewards we need to advance our careers, publications, don't actually encourage or accept the things that would produce trustworthy science, then we're never going to solve it individually, because it has to be solved as a system. The devaluing of replication research has been a longstanding problem and is still a challenge in many domains. Attitudes among publishers are shifting to some degree, not comprehensively, but there is more openness to replication research in some pockets now than there was before. There are also opportunities to lower the barrier to entry for trying replication. From our perspective, the best opportunity to lower the risk for individuals pursuing replication is to propose the studies as registered reports at journals. The reason we advocate for this is that if the work required is just to do the design and submit that to a journal that accepts registered reports, then you don't have to do all the work of conducting the study only to then find out that journals are not interested in it. Instead, you present: this is what we want to do, this is why we think it's important to replicate this finding, and this is how we're going to do it. We have found, just in anecdotal data, a lot more willingness among editors to consider replication studies in that context, because without the evidence present, without knowing whether it succeeded or failed, it's a lot easier to see that, oh, this is something that is uncertain and we really should have some more evidence about it.
The other advantage of using registered reports for replication is that people have skin in the game. If you're replicating my findings, I want my findings to be true; I'm ego-involved in my findings. So if I know the results and they're not looking good for me, I'm going to find problems in your methodology and say, oh, these are all the reasons it was terrible. If I see the results and they are favorable to me, then I'll say, oh yeah, it's a great study, you should publish it. With registered reports, with me being pro that finding and David being against it, neither of us knows what the outcomes are going to be. So we will peer review to maximize the quality of the test, because we both believe that it's true or not true, depending on our preexisting points of view. So registered reports, I think, both ease the path to publication and improve the ultimate rigor and diagnosticity of the findings, because people have to engage with the design prior to knowing the outcomes. That went a little bit off what you were asking, but thank you for that question. It's a great issue to tackle.

Johnny asks: do you think that any of these findings on trust in science and in scientists and researchers might have changed over the past few years, since the COVID pandemic?

I don't want to think about it; it's a good question. No, it's a great issue to raise. In fact, I'm on a National Academy of Sciences strategic council about trust. I can't remember the exact name, but trust is in the title. Skip Lupia is also on it, and he finished a review documenting the actual changes in trust in science over time. Pew data and other data show remarkable resilience in public trust in science over the past 70 years, up until maybe about 10 years ago. And trust in science is among the most trusted of all institutions.
So even as trust in government, Congress, and other institutions declined, trust in science was maintained. Trust in science has started to decline over the last several years, but the surprising thing is that it's not declining at a rate different from the decline in trust in other institutions: the Supreme Court, Congress, the military, et cetera. Trust in all of them is declining at about the same rate, suggesting that rather than COVID, or lack of trust in science being something unique to science, it's more of a pervasive challenge of declining trust in expertise and in the institutions that have responsibility for whatever area of human practice they're engaged in. And that's a huge problem, but the point is that it's not a problem unique to science. It's about institutions and expertise more generally.

I'm just putting some links to the relevant content in the chat, including that Pew study. Brian Cook asks: what are your thoughts about the future impact of the OSTP memo? And this is a great qualifier: game changer, or just rearranging the deck chairs?

Thank you, Brian. It is a game changer. For context, for those not familiar with the memo: it came out while Alondra Nelson was serving as interim director of OSTP and had been in the works for a few years. The basics of the memo are to direct all federal agencies that fund research to develop policies that require open access to research articles with no embargo, so open upon publication, and to require the agencies to develop policies for open data, whether the data support a published article or were funded by federal research but never ended up in a publication. So on those dimensions, open data and open access, the OSTP memo is a game changer. It will instantiate those policies for the 30 billion, or whatever huge amount of money it is, spent yearly by US federal agencies on research.
It will also increase the likelihood that other national governments will adopt similar policies, and there are others that are even further ahead than the US; a lot of action in Europe has actually made it easier for action in the US to occur. UNESCO is also now advancing and advocating for these types of actions globally, and lots of nations have signed on to that. So this is super important, but of course it doesn't solve some of the problems of rigor and reproducibility, because sharing the papers and sharing the data are retrospective. If all of the research had low rigor from the outset, all that sharing the paper and sharing the data is going to do is expose how bad the research is. It isn't going to solve the actual credibility and trustworthiness problems, other than by improving transparency. So it is a part of the solution, and there's still a long way to go to get to the full scale of the solution. But even at the national level, there are steps occurring that really address rigor and reproducibility. In fact, the Government Accountability Office in the US issued a report, not a directive, more of an analysis with recommendations, but it really leaned into: we need to improve rigor, we need to improve replicability, and here are the solutions worth looking at at scale, preregistration, registered reports, all kinds of stuff. It is a really impressive report for keeping the bar moving up on open science practice at a national scale.

And I should note, if you're interested in these questions, Laura Naomi is here as an attendee, and she'll be presenting from the National Center for Education Research tomorrow on their perspective and their findings for how the OSTP memo will affect the Department of Education. So stay tuned for more insight there. Let's see. Matt Makel asks: some fields have made more systematic change than others to become more trustworthy.
Do you feel there's a generalizable secret sauce for bringing about this type of systematic change? Or do you think there are big field-specific problems or cultural differences between disciplines that reduce the likelihood of a general formula?

Great question. Thanks, Matt. I think there is a general strategy that works, but how it's translated into tactics and implemented is field dependent. The general strategy is what we propose and promote as our theory of change, which lots of others use in similar forms or variations for trying to reform the cultures they're in. The base idea is that you have to take a systems-level approach and appreciate that these are not just individual challenges: there are lots of agents in the distributed, decentralized system of science that all need to shift in order for actual behavior change to scale and be sustainable. The baseline that I think is widely applicable comes out of the long history of social-behavioral research on how you change behavior and how you change cultures. At base, you need to have technologies that make it possible to do the new behaviors, to do open science. You have to integrate those behaviors with people's daily workflows, to make it easy and to encourage adoption. You have to, like this meeting is doing, work on the norms within communities: make visible the champions of open science and their activities, to indicate to others that these are things one can do, that people are doing them, that they are actually becoming more popular in our field and are valued by our field. You have to work on the rewards. What gets published? The earlier question: how can you get a replication published? If journals aren't open to that, then it's going to be very hard to promote replication as an individual practice, because the reward systems are misaligned. I can't get funded, I can't get published; why would I do that thing?
And you have to work on the policies, like the OSTP memo is directing. But you can't do just one of those things; all five of them (make it possible, make it easy, make it normative, make it rewarded, make it required) are interdependently required for successful, sustained culture change. Because if you just implement a policy and you don't do the normative work in the community to get buy-in, train people how to do it and how to do it well, and get them to see the value of the behavior, then they'll treat the policy like a bureaucratic burden: this is just something that NIH tells me I have to do. Okay, I'm gonna do that. In fact, I'll assign it to somebody else just to get it done and fill out that dumb paperwork to post the data. Who cares if it's usable or useful? It's not what I'm here for; it's just a thing that the government makes me do. Whereas if the normative work has been done, and the technology is there that makes it easy and guides you through doing it, then I'm much more likely to say, oh, this is good practice, and it just so happens that my funder also agrees it's good practice and is certifying that. And so I'll do it well.

So as a general principle, I think that model of behavior change is widely applicable. How to instantiate it in a given context requires a lot of on-the-ground tactical thinking. Who are the agents of change? Where is a beachhead available where we can start with change and then expand across the community? Where are there areas of resistance versus just complacency? Because let's not start with the areas of resistance. Let's start where people are ready to champion it and try it out, and see whether all of those objections, which are very normal, ordinary, reasonable objections, actually occur, so that we can adapt the solution for this particular context and then bring it to scale. So thanks for that, Matt.

Sandra asks a follow-up question on the trustworthiness study: who was the sample population? U.S.
adults, I believe, but did you see any variation between the lay public and the scientific community within that? Yeah, great question. Yes, it was a general sample of U.S. adults. The sampling companies claim that they're representative, but I don't call them representative, because I don't trust the sampling companies enough to say that. But they were a diverse cross-section of U.S. adults. We have done the studies with members of the scientific community, with a much smaller sample, so the estimates are much more uncertain, but the findings are quite consistent. The one place where they differ in a substantial way is in data that I didn't show in those slides, which is about rewards: who's more valued, a researcher that is boring but reproducible, or a researcher that produces exciting findings that aren't reproducible? We had U.S. respondents and scientific researchers answer questions about which one is valued more: boring but reproducible findings, or creative, exciting findings that are not reproducible. And the general public thought that science was more sensible, and valued reproducibility more, than scientists themselves, who say, no, no, the person that's doing exciting stuff gets rewarded more, even if none of their stuff is true. So that's one area of difference. We can provide a link in the chat to the paper and the slides, and you can look at that data. So thanks for that.

Shelby asked: are there negative trustworthiness effects on the journals or on the platforms that shared the original studies, after publishing a study that failed to replicate? Not that I know of systematically, but I certainly have seen individual instances where someone is so mad about a replication of their result that didn't work that they blame anybody associated with it. The original researchers, they're terrible. The journal and the editor, why the heck did they publish this?
What's this funder doing questioning my research? Don't you know who I am? Those kinds of reactions. So that has occurred, but it's so unusual that it's memorable. It's like, oh yeah, I remember that crazy guy who reacted that way. So I think the baseline is no; everybody recognizes that replicability is just part of how science works. And yes, there is contentiousness around the edges, especially when we see replication as an unusual event. But in areas of science where replication has been done more and become more common, that heat declines very rapidly, because it's easier to recognize that this is part of ordinary science, not an unusual thing that is done to target someone, although some people do have a bone to pick and do target things. So that does occur. But if it's done well and done broadly, then it is normal and even a compliment to researchers for someone to try to replicate their findings. Of course, it doesn't feel great to not have a finding replicate. We're not gonna get rid of that; we like our findings.

I think we have time for one more question. There are a couple more in the chat that we'll document, but I'm going to go to John Whitmer's question: can you speak about some of the underlying cultural values required for this type of behavior change? It strikes me that there are some deeper values or commitments that come before we can move to tools and technologies. That is a great point. In fact, in the theory of change that I described, there's a presumption that there are idealists and champions that want to do these things, that do value these behaviors. And the positive story there comes from the Open Scholarship Survey; I'm sure we can share some slides on the education research community.
The positive story is that most of the behaviors that you would associate with open science, and the underlying philosophical commitments, are much more widely endorsed by research communities, including the education research community, than people perceive them to be. There is a challenge of pluralistic ignorance: we think that our communities are less supportive of open science practices than they are, because we don't see others doing those practices. But when we survey as representatively as possible, we find broad endorsement of almost all of the behaviors, and at very high levels, with very little active resistance. Where there is resistance, it's more in the neutral zone. So that suggests that the opportunity for change is real, in the sense that people have values that are aligned with the behaviors, and the barriers to adopting them are practical: I don't know how to do it, I don't know where to do it, I'm not rewarded for doing it, I don't see others doing it, I'm not sure why I would do it. If you can address those practical barriers, here's where you can do it, here's how you can do it, here's evidence that others in your community are doing it, and in fact, now here's a funder and a journal and even your institution starting to reward it, then you're gonna get quick adoption, because those values are already there. It's an opportunity for me to say, oh, I've been wanting to do this; I just haven't had any of these things in place. So great, I'm on board. It's aligned with my values. In fact, I feel better about the culture of science now that it's rewarding me for the values that I brought to science in the first place.

Brian, thank you for that. We do have more questions, and I took note of them, so we'll share those around. But for everybody who participated, thank you. Thank you, Brian, for sharing your thoughts.
As folks leave this plenary session, you'll go back into the virtual lobby, and the next round of sessions will be starting in just a few minutes. So please enjoy, and I'll see you later today and tomorrow as well. Thank you, everyone. Thanks, everybody. Love the questions, really appreciate it.