Hi, everyone. Thank you for joining us this Open Access Week. I'm Dr. Gayatri Parke, managing editor at Ananko Academy. With me today is Dr. Brian Nosek. He is the founder and executive director of the Center for Open Science, a non-profit organization that focuses on increasing transparency in the academic research sector. They develop a variety of tools to share data and to conduct and report research. Aside from this, he's also a professor of psychology at the University of Virginia, where his research interests focus on individual and system behavior, solutions to align behaviors with values, and how to improve and accelerate progress in science. So Brian, thank you so much for joining us today. I know it's the 10th anniversary of the Center for Open Science, so I'm really glad that we could have you this year, particularly given the theme for Open Access Week this year, which is all about community over commercialization. I would like to use that to jump right into my first question for you. The Center for Open Science is somewhat of an odd organization to start; you know, when most people innovate in the research space, very often it's geared towards a for-profit product. But I think your vision was a lot broader, and perhaps a little ahead of its time when you began. So maybe you could tell us a little bit about what your motivation was in starting the Center for Open Science.

Yes, and thank you for having me. I'm delighted to have a chance to chat. The motivation underlying the founding of the Center for Open Science was the recognition that the values we have and aspire to for how science should operate (transparency, reproducibility, high integrity) are not the same as the daily practice of science, and especially not things that are necessarily rewarded in the social system of science.
I am rewarded for getting things published, and to get things published, I need to produce exciting, novel, interesting findings. Of course I want them to be correct, as accurate as possible, and as rigorous as possible, but I'm not actually rewarded for being accurate or rigorous or reproducible. I'm rewarded for exciting results. And that creates a dysfunctional reward system that we think ends up inhibiting progress in science. So the goal in founding the Center was really to tackle that systems problem: how can we shift the reward system and provide solutions so that the values we have for doing science openly, transparently, and reproducibly are things that we're rewarded for as scientists doing our work? It didn't make any sense to those of us involved in the early years of COS to form it as a for-profit, because the mission is really a public-goods mission: how do we help science as a social system become better aligned with its aspirations? We didn't want the organization to have that internal conflict between profit and mission. So we're mission focused.

Absolutely. That makes so much sense. I believe you started with OSF first, which is sort of a pre-print server that has now grown to span multiple domains. However, I just wanted to get a little bit of insight into what is guiding the rest of the products that COS has released. Is there a roadmap you're following for which parts of research activities you focus on?

Yeah, great question. With our approach, we are trying to tackle the challenge of the reward system by leaning into the fact that it is a system, and you can't inject just a single solution to shift an entire system. The base of our strategy is infrastructure: the Open Science Framework, OSF, whose goal, as you indicated, is to support the entire life cycle of research.
So it helps researchers do better planning, document their work as they're doing it, share the data, materials, and code underlying their papers, and then make their papers openly accessible as well, as pre-prints or otherwise. That whole life cycle is intended to increase overall transparency, but just providing the tool doesn't change anything about the reward system. Why would researchers use it? How would they integrate it into their daily work? So the next set of activities we have is focused on training: how do we integrate these tools into the daily workflow that researchers have? How can we make it easy for researchers to be more transparent, to be more reproducible? The next layer of our work is about creating new social norms for why we would be more open and transparent. This is a really important part in science, because most of how you and I decide how we should do research is by looking at our peers. How do people in my field do research? How do people in my program do research? And that informs, well, I guess that's how I should do research. So a lot of the change movement is about community building: how do we identify what our values are collectively, and how do we start to demonstrate that these are new things we can do that are actually aligned with what we're trying to do in science? There are two more layers, and they are focused on the reward system. All of that is sort of the bottom-up change: infrastructure, training, norms. But if we're really going to fix the system, we have to change how funders decide who gets the money, how publishers decide who gets published, and how institutions decide who gets a job and who gets to keep their job doing research.
And so that part is about providing a policy framework to those different stakeholders so that they can make the actual doing of open, rigorous research incentivized, and even required, for people to succeed in the system.

Very interesting. I definitely see the impact of something like a pre-registered report, right? Transparency or openness can begin from the time you actually conceive of a research project. That absolutely makes sense. My question here is, you know, I feel it could fit the clinical field very well, because we are typically planning out protocols much in advance of patient recruitment and whatever other experiments you might be doing. But perhaps walk us through how that could work in research fields where protocol publishing is not the norm, or where research evolves as you go along in doing the experiments.

That's an excellent point. The concept of pre-registration, committing in advance to what the plans are to help distinguish them from the things that were discovered after the fact, is a very natural fit for some kinds of research. Like, oh, it's obvious: I'm going to test a hypothesis, so I should write down beforehand what my hypothesis is, so that when I get the data I have some accountability to say, well, I said I was going to test this, and I said I was going to analyze it this way; do I see evidence consistent with that? It's also a natural fit for areas that have very strong theory, where the theoretical expectation is that when we do this, this is what happens. That constrains us from over-interpreting noise as signal. But the concept of pre-registration does have broader application, because its goal is to do two different things.
One goal of pre-registration is to make very clear the distinction between what was planned before and what was discovered after the fact. In confirmatory domains, that is very clear. But even in more exploratory domains, if we have plans, for example, for how we're going to analyze our data, providing those plans in advance can increase the strength of the statistical inferences that we make after the fact. Compare that to, for example, exploring by analyzing the data 5,000 different ways and showing you the 10 interesting ways that came out of that. You would feel less confident knowing that I did 5,000 analyses without being able to see them than if I had laid out those 10 ways and said, I have no idea what's going to happen, I'm just exploring, but here are the 10 ways I'm going to analyze it, and I'll show you all 10. The second purpose of pre-registration is to make sure that all the research that I do is ultimately discoverable. Most of my lab's work might be exploratory, but we do lots of things, and not all of them end up in papers. So if I did a hundred projects and only reported two in papers after the fact, and you don't know about those other 98, you would say, well, great, that's an amazing finding. But if you learn about those other 98, you'd say, well, wait a second, you all tested kind of the same hypothesis, or you said you were exploring, but you were investigating similar questions; maybe these two just popped out by chance. So in that context, just knowing that those other studies exist is very useful to address publication bias, the overvaluing of positive, exciting results while ignoring all those things that just didn't work out.

Right. Okay. That makes sense. I guess my follow-up question is, do you see value in utilizing these pre-registered reports to establish research collaborations, keeping in line with community over commercialization for this year?

Yeah.
The act of pre-registration is an opportunity to announce early: here's something I'm thinking about doing. And that could spawn, exactly as you're suggesting, opportunities for others to say, oh, I've been thinking about that too. The best example of what you're describing that I know of is the journal AMPPS, Advances in Methods and Practices in Psychological Science. What they have done with the registered report model is this: a small team says, we're really interested in replicating this particular earlier finding. It's really important, but we don't know how it operates, or whether there's variation in when it's observed and when it's not observed. So we want to pre-register a design and commit to reporting based on that design. Then, once the protocol is accepted, AMPPS puts out a call for anybody to join the collaboration: we want this to be run in every nation in the world, and we want this to be collected in a variety of different social contexts. As long as you can follow the protocol, you are invited to participate in this project, and we will publish all the results from all of the different sources. I love this model because it really provides opportunity, especially in domains where there's going to be high variability, like psychology or ecology or other places where the context really, really matters. It provides an opportunity for many different communities to say, I don't know, but let's see how the evidence from our community compares to yours.

Yeah, that sounds wonderful. Do you think it's a model that could fit well with peer review of protocols as well? And if so, I mean, we all know about the problems with peer review, right? Just the length of time, the availability of peer reviewers. Do you think there's perhaps a lesser burden? Why do you think this model could work for peer review of protocols when the standard publication model does not?

Right.
So pre-registration is a general concept that you can do independent of a journal or peer review. If you go to OSF and pre-register your studies, you can do it on your own at any time, no restrictions. The registered report model is a subset of pre-registration that engages peer review exactly as you're describing. The idea of the registered report model is that the authors develop their concept, the question they're going to investigate, their methodology, and their analysis plan, and then they send it to the journal for peer review before conducting the research. The journal commits to publish it or not based on: is it an important question, and is it a good methodology to test that question? Then you go collect the data, add the data, and submit the final report: this is what we found. What's great about that, in addition to changing the incentives for publication so they're no longer about the outcomes (now it's about the methodology and the question; the outcomes are just the outcomes), is that it also brings in expert reviewers at a point in time where they can actually help improve the research. When it's all reviewed afterwards, and I send you all my studies for review, your job is to say, oh, well, you should have thought of this, and you really should have done this, and boy, you screwed this up, Brian, what were you thinking? And then I get all those reviews and I say, oh, God, I should have thought of that. But it's too late. In registered reports, when I send that to you, you say, oh, how about you think about this? Here's an alternative way you could do that. Here is a potential alternative explanation, and here's a fix for addressing it. So the dialogue between author and reviewer becomes more collaborative, and your expertise makes my work better.
In theory I just love this, and in practice I think it is actually working out that way: in many cases of registered reports, it's actually helping the research get better, and reviewers feel like they're more valuable in the process. They're actually contributing.

It's actually quite coincidental that we're talking about this. There was a recently released article about how peer review doesn't lead to any significant changes in the final outcome, in, I believe, ecology. So clearly it's a big positive to step in when the research is still raw. Great efforts there. I wonder, though: I do see that there are about 300 journals that have signed up for this model. Where do you think it's heading in the future? And how important is publishing this data? Is it leading to more publication of negative results, perhaps, or is it improving timelines for peer review once the data is collected? Do you have those types of measurements in place?

Yeah, great question. You're right that more than 300 journals have adopted the registered reports model, so you can submit your plans to them and they will review them beforehand. One of the most recent adopters is the journal Nature. So it's not just journals that people don't know about, or that are very specialized; some of the most recognized, visible journals have made this an option. The model was first introduced in 2013, and the first published registered reports appeared in 2014. In the intervening time, a meta-science community has emerged to evaluate lots of things about how to improve science, including many studies about registered reports specifically: is it meeting the promise that we hope for, or are there unintended consequences, does it actually create some problems? There are many studies, but I'll just mention two.
One is by Anne Scheel and her colleagues, who looked at whether registered reports, with their commitment made in advance, make it more likely that unexpected, negative, or null results are published, things that are different from the hypothesis. And they found exactly that. In standard reports, they found that 97% of hypotheses are supported, which is like, we're right all the time; we don't even need to do the research, we're always right. With registered reports, where that commitment was made in advance, it was more like 40% of the first hypotheses were supported. That's more realistic for anybody that does the work in the lab; we know we're wrong a lot.

Yeah. In fact, 40%? I'd love to be right 40% of the time. I am not.

So that's pretty good evidence. Another piece of evidence comes from a study led by our team at COS, where we took registered reports and comparison articles, in the same journal or by the same authors, that had gone through the standard process, put them through a structured peer review process again, and had them rated on a variety of different criteria. What we observed in that observational study was that the registered reports were rated higher in rigor and quality across a variety of criteria compared to the standard articles. So that's good initial evidence that, perhaps as you were saying earlier, the peer review process might actually help improve the work; or, when I know I'm going to be evaluated based on the rigor of my methodology, I make my methodology more rigorous. So the initial evidence is very positive. The last point you raised was about the peer review process itself. Does it add burden? Does it shorten time? What's the impact? We don't have good evidence about that yet.
We only have sort of anecdotal evidence, and the bits we have suggest, one, that it may actually lower the overall burden of peer review, in an interesting way. Most of the time, when I have a new paper, I end up having to send it to three or four or five different journals until it gets accepted, because reviewers find all these problems that I wish I had thought of. And so the total burden is five editors, fifteen reviewers, et cetera. With registered reports, the acceptance rate so far appears to be much higher, which suggests that lots of research has good ideas and is improvable, and that it could lower the overall burden by only needing to go to a couple of journals at most before it gets into the pipeline. That's an interesting observation. The second bit, from reports of what reviewers have experienced, is that the burden is not much higher than doing a revise-and-resubmit: you read it once and you read it again. It might even be a bit lower, because you review the first part the first time and the second part the second time, and the second time, when you get it back after the results are in, you're not evaluating whether the results are interesting. You're just evaluating: did they do what they said they were going to do, and did they interpret their results responsibly? That's it. And reviewers tend to really like it because they're invested too: oh, I wonder what's going to happen.

Oh yes, definitely. I can imagine that being a motivating factor to do the second review as well. Absolutely. So, keeping the same theme in mind, we often see that a lot of these open access practices are taken up in the US and perhaps in Europe. Keeping in mind audiences across the world, what would you say a researcher should try and do if they're just trying to pick up open access practices?
Maybe their institution doesn't quite support them, or a change at the broader policy level isn't, you know, within their reach as a route to promote open access in those situations. What's your advice for them?

It's such an important question, because this kind of reform movement relies intensively on community. People can't do it alone, right? You need to have others that you can connect with, that you can share insights with, that you can grow interest with, among the researchers in one's department, one's field, one's region. And that's hard to get started. So for anyone who feels disconnected from the reform movement as it's happening, because of where they live or whatever the circumstances of their context are: there are lots of points of connection to make with the existing community. There are a number of grassroots open science communities. For example, in psychology there's the Society for the Improvement of Psychological Science, SIPS. In ecology and evolutionary biology, there's a group called SORTEE that is trying to connect people and provide opportunities. Our own organization, the Center for Open Science, has an ambassador program where people can join, learn about these issues, and learn how to advance them in their communities. So my main advice would be: don't try to go it alone. Seek out these points of connection with the existing communities, and then educate those communities about how they need to adapt to be relevant in your local context, right? COS is a US-based institution, and so we have our blinders on, seeing mainly what the reality is like for researchers in the US. So we really rely on our ambassadors and other collaborators to help us and say, well, that's great and fine, but that just does not work in the Brazilian context, or in the, you know, Slovakian context, wherever, and this is how we need to think about adapting it for this context.
That is hugely important for us, and hopefully likewise a benefit for those that we collaborate with.

Sounds like community development is a success factor; you're right. Absolutely. And then what would be your advice for universities? It sounds like open access practices are a lot more domain-specific, or let's say openness towards change is higher in certain research fields, whereas universities are perhaps catering to multiple researchers with different viewpoints and, you know, different practices in their fields. So what's your advice there?

Two recommendations. The first is that UNESCO has coordinated an open science movement with a lot of national signatories, and from that UNESCO has developed lots of resources to help institutions start to think about how they can reward and support open science, open access, and open scholarship practices. So I strongly recommend connecting with that community; they're amazing. The second recommendation is to consider having the institution sign on to the CoARA agreement, that's C-O-A-R-A, at coara.eu. This is a commitment that institutions can make to re-evaluate how they assess research at their institution, and it's a fabulous movement with, again, lots of community-building resources and collaboration across national and regional boundaries for institutions to decide how they want to align the values of their institution with how they reward their researchers. That is a really important step to making all of this work.

That sounds like a fantastic set of resources, and I hope that this Open Access Week a lot of universities actually explore them, given the theme. So before we leave this conversation and move on to, you know, whatever is happening in our day, just a couple of thoughts: where do you see the future of open scholarship? And what do you think is the biggest challenge right now that we need to fix to really ensure inclusivity in open scholarship?
Yeah, I would say the future of open scholarship is that it's happening. It has passed that point of, oh, that's an interesting idea. Now there is sufficient investment; UNESCO is one example. In the EU, there is substantial investment and commitment to moving towards open scholarship. The US has made very big strides in the last year in starting to make this into policy that federal institutions will have to support. So all of that says: this is coming. If it's not in one's current view, it will be. In that context, the opportunity, especially through the equity lens, is to make sure that communities do not get left behind, right? With all of these kinds of innovations, the ideals are big and grand, the implementation moves fast for places that have lots of resources, and everybody else is sort of left saying, oh, well, now what do we do? Ironically, even though the goals of open scholarship are themselves to increase inclusivity and equity, the enactment of it can create more disparity. And so the key thing for success, I think, in the next few years is: how do we make sure that the tools being developed, the services aiming to support it, and the community building happening around it are available to all and responsive to all the diversity of needs that occur? If our solution, for example the OSF, is tuned for the US context and makes all of these assumptions just in how the tools work, that's going to be a really big barrier to how someone in Kenya or Indonesia or Australia tries to engage in that work.

Absolutely. So I guess the takeaway is that no one solution fits all, and we have to adapt and be responsive.

Yeah, exactly.

Well, that was a great insight, Brian, and thank you so much for joining us again today. I wish you all the best for whatever lies in the future of COS, and I'm sure it will be a busy year for you. So thank you so much.

Thank you for having me.
I was delighted to have the conversation.