So I want to welcome you all. The Center for Open Science is a nonprofit organization with a mission of advancing open scholarship. As part of that mission, we hope this conversation today will give you some insight into how some of the infrastructure tools that we build help research communities advance open scholarship. With that, I will turn the conversation over to Brian Nosek, who's the executive director at the Center for Open Science, and Jan-Ole Hesselberg, who's the program director at one of our member organizations, Stiftelsen Dam, which utilizes the OSF registry toolkit. So I'll pass it over to you both.

Thank you very much, Nadia. And thanks, Jan-Ole, for joining us for this conversation. And thanks, everybody, for being part of this. We have a shared interest in making the quality and credibility of research the best that it can be, because if research is a worthy investment to try to advance the causes of humanity, then it's also worth doing well. So how is it that we can take the best of innovations and insights about how research can be advanced to increase knowledge, to find solutions, to offer treatments? And what are the things that are emerging as ways in which the system of science and the practice of science can be improved?

We didn't get from Sir Francis Bacon in the late 16th and early 17th century the full recipe of a scientific method that has remained static. We know that there is no singular scientific method; there are many pathways to building knowledge. One of the areas of emphasis over the last several years has been a deeper recognition of the humanity that's involved in conducting science: the roles that the humans in the process play in trying to think and reason and understand the things that they are pursuing, the process of discovery, and the ways that the limitations of our own minds, our biases, and the constraints of trying to discover things could perhaps be addressed by innovations and solutions like preregistration and the registry service, which is the topic of emphasis for today, but can bridge into broader topics.

So with that general context, what would be great, Jan-Ole, is to give us a first sense of your organization in the big-picture scheme: what kind of work you fund, how you fund it, et cetera. So thank you for being here.

Sure, thank you, Brian. So yeah, my name is Jan-Ole Hesselberg, and I'm the chief program officer at Stiftelsen Dam, Foundation Dam in English. It doesn't sound that good, but that's the name. We all live like this in Norway; this is the middle of Oslo's center. Just kidding.

So my journey at the foundation started in 2016. Stiftelsen Dam is one of Norway's biggest foundations. We grant about $50 million a year to health research and health projects in Norway. I am a psychologist by trade, a clinical psychologist, and started out researching judgment and decision-making. And there were two very important papers that affected how I thought about developing the foundation. The first was a paper that came out in the Lancet in 2009 by Iain Chalmers, one of the founders of the Cochrane Collaboration, and Paul Glasziou. They claimed that 85% of health research is avoidably wasted, and they had some really sound arguments as to why that number is so high. And then your paper, actually, Brian, the Reproducibility Project, which showed that a lot of social psychology and personality psychology research is hard to replicate.
So as a research funder, those two papers, I would say, are nightmares. We really want our money to be well spent, and it's obvious that although a lot of research is of very high quality, a lot of it is of poor quality as well and is avoidably wasted. So that's how we started to focus on what we as a funder can do to limit research waste and, more particularly, to limit the questionable research practices, like p-hacking and HARKing, that contribute to the problem of wasted research. That's the short version of how we started, and I think we've made a lot of headway here in Norway. We were small enough to be able to really put this into focus and be unpopular sometimes. And we see now that some of the big funders are starting to follow some of our practices.

Great, thank you for that context, and for connecting the notion of research quality and reproducibility of findings with the mission of the organization in the big picture. That notion of waste is clear: if we're going to spend this money, we want to make sure we get some return on that investment, so we might as well try to find ways to improve it. So can you unpack that a little? At the end of your comments you referred to terms that may or may not be familiar to everyone: p-hacking, questionable research practices. Can you unpack the core challenges that lead to waste, and the opportunities to improve, where you see a role for a funder in trying to redirect and shape?

Yeah, so from my perspective at least, the way I read the research, there are two main problems. The biggest problem is just publication bias: there is a lot of research that's initiated and that is never published. I recently took part in a study by a Stanford group that looked at Nordic clinical trials from 2000 onward that were registered as ended between 2014 and 2019, I believe. And we found that about 22% of these clinical trials have never reported any results. So they're just gone, probably for eternity. And that's about 80,000 people, patients, included in those trials.

And the problem is really not just the missing trials. The problem is that these missing trials are not randomly selected. They have some attributes that are different from the published trials: there are more side effects in those unpublished trials, there are more dropouts among patients, there are more negative results. So the missing trials contribute to a really bad distortion of the published evidence. That's the main problem, I would say. And it's still very prevalent, even though we have known about it for a long time. It's prevalent in the most rigorous studies we have, the clinical trials, and I suspect it's even worse in other kinds of research.

The other problem is similar, and that's missing or changed parts of the results. So you do 10 tests and you report on six of them; you have 10 outcomes and you report six of them. And it's not random which outcomes are reported and which are not. I believe those two are the main problems, and there are a lot of technical details as well. You can do selective reporting within a single outcome, too: you can remove some data points and so on. So we have too little control. And I believe researchers sometimes do this deliberately.
Sometimes they have really good arguments as to why they shouldn't publish the outcomes, why they should remove data points, and things like that. But the problem is we have too little control, and we know that the research and the evidence are biased because of it. So that's what we have to fix, and we need a digital infrastructure that can help us avoid it.

Yeah, that was a great summary of two real, deep challenges: publication bias, especially the ignoring of inconvenient or negative information, and then the selective reporting of what researchers are doing interactively with their data, where only a few of the things get reported at the end. The consequence is that the things that get reported end up being an exaggeration of the actual evidence, because if only the positive results get reported, then everything works. Look, we're solving every problem! And the actual negative studies that say, well, maybe not, are missing or gone, as you described. And the other one...

Please, I just want to add that I've really seen the consequences of this, because I'm in the field of judgment and decision-making, and I've given a lot of talks about the results from those kinds of studies. I used to present social priming as one of the things I really reported on: small tweaks you can make in the environment around people, and then they behave in dramatically different ways. Really popular findings. But now we know that those findings are likely to be very exaggerated, at least. So it's hit us like a sledgehammer, I would say.

Right. So then we have false confidence in the findings as they are. We start to invest in extensions of those findings without really understanding their reliability, and so that investment may be totally genuine and well done, but it's based on information that isn't solid. So it ends up being wasted as well. These are compounding challenges if we don't get the core of that evidence base right. You've also signaled something. Please, go ahead.

No, and that cascade effect explains why the numbers are so dramatic. That 85% is avoidably wasted is hard to believe when you hear it, but it's a result of what you're describing there: building on previous non-solid research.

Right. Because if a preclinical finding is not actually reproducible but it ends up prompting an investment in a clinical trials pathway, then all of that, no matter how rigorously it's conducted, ends up building on a false lead that could have been caught earlier.

Right.

You mentioned something at the end, too: the fact that we don't know about it. People are making decisions, maybe justified, maybe not, but the key, it seems, is that we know what those decisions are, that it's available to the reader to understand how I as the author made these decisions, so that you can say, well, that's sensible, or that's not, and I want to be able to see some of these others. And so the solution, having some public infrastructure to make all of that more available and accessible for scrutiny, is a key factor in advancing those solutions. And so you have adopted the OSF registry service. Can you tell us how that fits into addressing the challenges and opportunities as you've described them?

Yeah. So I believe we started in 2018: we demanded that all the trials we fund have to be preregistered. We started with randomized trials and observational trials.
And we said, you can register where you want: in the EU database, or on clinicaltrials.gov, or on the OSF. And then there's a value to us as a funder in gathering our projects in one place, so that we get an overview of our projects and a workflow where we can check their registrations. That's where the OSF came in. We saw it was starting to get some traction, and it had some good templates for different kinds of studies and trials. So we decided to make preregistration a requirement for all our trials, even the qualitative projects that we fund, and to say that they should all register at the OSF. And then we moved to the registry, where we can screen the registrations before they become public and make sure that all our projects register there.

Okay, and can you...

I just want to add to that. We have our own application system, and I believe we spend maybe $200,000 a year maintaining that system. But it's isolated from the rest of the world. It's not somewhere where other researchers can come and comment on the ongoing projects and scrutinize the projects that we fund. So that's one of the benefits of the OSF as well. And we realized that we're not able to be the ones developing a system that really should be part of the international scientific infrastructure. That's where you come in; someone has to do that for us.

Yeah, that's great. Thank you for explaining how it fits in. And maybe just to make the connection explicit, we can describe the role of preregistration for those two key challenges you mentioned: publication bias, and questionable research practices or selective reporting in the work. What preregistration requires of the author is pre-commitment: I am going to do this study, and here I describe the things that are going to be in my study. I am going to analyze the data, and here I identify my primary outcomes and the ways that I am going to analyze them. The way that the preregistration process then addresses publication bias is that it creates a record: this was my plan, this is what I was going to do. Now that's discoverable. Even if it doesn't end up in the published literature, it is there. And the second part is that it makes very clear what things were planned beforehand and what was discovered after the fact, to address that selective reporting: yeah, report what you're going to report, but let's make it possible to see what was there in advance.

So when you're using the registry, you mentioned that you're checking these registrations. How does it fit into the workflow of grant management, of giving grants, of monitoring compliance? What resources do you need to operate this effectively for the organization?

Okay, I would say that depends on where you put the bar. Our process is: we have an application system, people submit proposals to us, the proposals are reviewed by our experts, and then they're granted or rejected. When you get the grant, we say, now you have to register your project at the OSF, in our registry. They fill out a registration template suitable for their project: if they have a qualitative research project they use one template, and another if they are doing a randomized controlled trial. And then we have a person screening those registrations, and what she does is not really diving into the details of the study. She's just making sure that, okay, is there a hypothesis there? Are there some outcomes there? How are they formulated? The outcomes, the analyses, the plan for the analyses. She really just makes sure everything is in place. What she doesn't do is look at the proposal that we got and match it up to the preregistration. So I would say that's a fairly low bar, and she does not spend a lot of time doing it.
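To make that screening step concrete, here is a minimal sketch of the kind of completeness check being described: a deliberately low bar that verifies the required sections exist, without judging their scientific quality. This is an illustration only, not Stiftelsen Dam's actual tooling, and the section names are hypothetical stand-ins for whatever fields a given OSF template defines.

```python
# Hypothetical illustration of the screening step: check that each required
# section of a registration template is present and non-empty. It does not
# assess scientific quality, matching the "fairly low bar" described above.

REQUIRED_SECTIONS = {
    "randomized_trial": ["hypotheses", "outcomes", "sample_size", "analysis_plan"],
    "qualitative": ["research_questions", "data_collection", "analysis_approach"],
}

def screen_registration(registration: dict, template: str) -> list[str]:
    """Return the names of required sections that are missing or empty."""
    required = REQUIRED_SECTIONS.get(template, [])
    return [s for s in required if not str(registration.get(s, "")).strip()]

# Example: a trial registration that forgot its analysis plan.
issues = screen_registration(
    {"hypotheses": "H1: ...", "outcomes": "Primary: pain at 12 weeks",
     "sample_size": "n = 120"},
    template="randomized_trial",
)
print(issues)  # -> ['analysis_plan']
```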
So then we leave it up to the scientific community to make sure that that's enough, that the preregistration is then accepted by the journal that assesses the publications coming out of the project. What we ideally should do, if you put the bar high, is have the reviewers from the proposal process, the application process, come in and check that the researchers are doing what they said they would do when applying for funds. Because I suspect there are quite a few changes in the projects.

Yeah, and sometimes that kind of change is natural. When you go to implement, you realize, oh, we proposed to do this, but there's no way we can get that done. But it would be nice to have some substantive dialogue with other experts about whether the choices that I now make are effective ones.

So the best thing for us, the absolute best thing, would be to have an application side of the registry as well, so that you submit your registration as, really, your proposal to us.

Yeah, structured in precisely how it's done, so that you would engage reviewers right at the outset with the preregistration.

Yeah, I would say that's probably ideal, but it's hard to get to as well, because there are so many different needs across funders.

Right. But it sounds like, from what you've constructed to date, you're not trying to bring the substantive critique in-house. You're still using external reviewers for that, and then, once it's registered, the community to assess it. The core is: are they actually following through with developing a plan and transparently reporting on that plan? Is that right?

Yeah, yeah, absolutely.

And so how are grantees reacting to this?

If I should pick just one reaction, I would say they are really delighted; they are very positive toward the project. They say they see the problems within research and that the OSF is a good solution, one of the solutions, but it doesn't quite fit my project. So we've gotten some pushback. Usually it's fine, they just do what we expect them to do, but we have had some pushback; sometimes the arguments are good, sometimes they are not. We've gotten a lot of pushback on demanding this from our qualitative projects.

Sure, yeah.

The argument goes that, yes, but the qualitative project develops; it is a process more than a planned study, and it can't be compared to a randomized controlled trial. And of course they are two very different ways of gathering information. But I would say, if your plan is, if you say to the world, I plan to interview 20 people and then do this with the interviews, and then in the finished paper there are five interviews, it's relevant for the reader to know what happened along the way. Did you do more interviews than five? Or didn't you have time to do the remaining 15 interviews? Where are those 15 interviews, if they were done? So I believe it's just valuable information for interpreting the results anyway.

Yeah, that's a great example of a general concept: preregistration as making pre-commitments in advance and then discovering what changes after the fact.
How does it translate into an actual benefit in different areas of scholarly research? Qualitative research is really a leading-edge area for trying to understand how these concepts of planning in advance and making commitments may or may not be applicable to different parts of the scholarly process. There's great work by Tamarinde Haven, who developed the qualitative template that's on the OSF, working with many different qualitative researchers across fields to figure out what should be reported. Exactly as you're saying: there are things that can be planned. In every area of research, not everything can be planned, because it's research; we find out once we get in there, oh my God, this is so different from what I could see. But the ideal, I think, is exactly as you're describing: as you get into the work, the things that change become transparent. We had these expectations, and this is what actually happened. It doesn't mean that those changes are wrong in any way; sometimes they're the best thing in the world. But the fact that they changed is relevant for the reader.

Yeah. And I really believe it would help researchers as well just to keep track of those changes, because it's really, really hard to remember what you planned, and whether the changes you make along the way are a deviation from your plan or not. So having a tool that helps researchers track those changes would, I believe, be a great help.

And also, I've done some research, not a lot, but some, and I've been sitting in discussions with my supervisors, for example, where they suggest changes that I'm not comfortable with. And we discuss; maybe they are right, maybe they are wrong. But I think it's a lot better for the candidates doing the research, or at least the students, PhD students, for example, to be part of a system where those changes are tracked. So you really don't have to...

Yeah, so you know that if that decision is made, it will be visible.

So I would think it eases some of the burden on students in discussions with more senior researchers as well.

Right, yeah, that's a general truth, right? Explicitness, having things written down, helps to navigate relationships, especially power relationships. Look, what can I tell you, we wrote it down. Yes, we can make the change, but let's be clear that it is a change. And yes, you have more power than me, but we're still going to have to be accountable to what we said at the outset.

Exactly, yeah.

You mentioned the different types of pushback, and I have such great empathy for the researcher experience of wading into preregistration, because when we started doing it in my lab, one of the experiences in trying to create a plan was the recognition that this isn't actually how I thought about the work at all prior to doing this, or how the work played out. Like, how do I figure out how to analyze my data if I don't already have my data? And then it hits me: oh, that's the problem. I was figuring out how to analyze my data as I was analyzing it. That's where all of these questionable practices come from. But it is disruptive, right? Having to work through this stuff at the outset is not easy. So what kinds of support do researchers need to develop the capacity for this?
I really think it's about how you meet them. As you said, with empathy, because this is really a new way to think about research, and it's really hard, and the infrastructure is not fully in place, and a lot of the senior researchers in different fields have not done research this way in their careers. They're used to doing it another way. And then you have junior researchers meeting with funders who are trying to push the field forward, sometimes. And it can be really hard. So I think you should meet everyone with empathy and say, I know it's hard, we have to figure this out together, and give them some leeway at the start, and try to feed the information back to you, the ones providing the infrastructure, and just have an open dialogue about these things.

I'm doing some peer review research myself, gathering huge data sets from the Norwegian Research Council and different other Norwegian funders to see what factors are associated with disagreement among reviewers looking at the same proposals. It's a huge data set, maybe 300,000 reviews, and I'm so frustrated trying to write the preregistration, because it never seems to end. Where does it end? Where do I describe how the analyses are done and how the data is transformed? Where should I start? Should I provide the R script in detail? What happens if it doesn't run with the finished data set? It feels like it never ends. There are always more details it's possible to add; it's always possible to do it better.

And so, yeah, my mantra for this has been incrementalism: let's do a little bit better today than we did yesterday. And preregistration, because it's unusual and disruptive to the everyday workflow of many researchers, does take time. It's a practice, developing a skill. We wrote a paper a few years ago called "Preregistration Is Hard, and Worthwhile," and it really tries to articulate: this is going to be a challenge. The first time you do it, you're going to have fits and starts, and you'll get through to the end, and then maybe you'll look back and say, oh, why didn't I do X? Or, I can't believe we didn't think of doing Y, because of course those are decisions that we made later. But it's the practice of doing it that ultimately yields that realization: oh, I see the benefits. Oh my gosh, doing more planning makes my studies better, makes it more likely that they're publishable regardless of how they turn out. All of those things that are surprising after having done it.

I remember one conversation with someone who came up after a lecture, and he said, I was not on board with preregistration a couple of years ago, but I saw you give a talk, and so I said, all right, I'll try it once. So I preregistered my study, and I forgot about it, and I went and did my work, and then I had analyzed my data, and then I remembered that I had preregistered it. So I went back and looked at my preregistration, and my analysis plan was totally different from how I had analyzed my data, and I couldn't believe it. I didn't remember that that was my original plan; I thought it was this other thing. And so he was a total convert just because of the experience of doing it. Those cases are fascinating: how we can lose so much of the context in the everyday rough and tumble of work.

Yeah, yeah. And this kind of process can actually help with that.

Yeah. So the other part that you've signaled a couple of different times is the value of reviewers engaging with the substance.
And we mentioned, before the call, when we were organizing our thoughts about what to talk about, that you're also doing things with registered reports, to try to bring this review process and the expert engagement in line with preregistration. So maybe you can give the quick version of what registered reports are, as distinct from preregistration in the registry, and then what you're doing with that.

Yeah, and you correct me, Brian, if I get this wrong; I know you have heard about registered reports. It's really a publishing format where the journal receives the protocol, or you could call it a preregistration: a description of what will be done, the methods of the study. And when they review the methods and decide it's worth publishing, they commit to publishing the final article when it's done, if it's done according to plan or if you have good reasons to deviate. So you get an in-principle acceptance before you start doing your data collection and analysis. It's really just connecting the principle of preregistration to the peer review process at the journals. And we are...

Just to connect, before you continue: that addresses the publication bias that happens at the journal level, where they say, well, we don't want negative results or things that are messy or inconvenient, even if that's the reality. By getting the journal to pre-commit to publishing regardless of outcome, it solves that problem with someone separate from you, the journal, in their role in the process. And the other thing you're highlighting is that it engages expert review at a point in time where it can actually improve the research. In the standard journal model, all the research is done, you've submitted it to the journal for review, and what the reviewers do is say, here are all the ways you screwed up the research. Too bad, it's too late now. Whereas in the registered report model, they say, oh, your plan is interesting, but you could really improve it at X, Y, and Z. So it actually makes the research better by engaging experts early. Sorry, yes, those are great. So now, how do you use this? Because this is something that journals do.

Yeah. So, we've been doing this for a while in our foundation: we have a two-stage process when you apply to us for funds. You submit a very short pre-proposal, and then we have five different reviewers looking at the proposal independently. Based on that score, the best ones are invited to a second round, where they submit full proposals. That saves a lot of time; we know the average applicant uses half the time now compared to before, when we just had full proposals.

Right.

What we've decided this year is that when you start on the pre-proposal, you get a choice. You can either choose our traditional process, which I just described, or you can choose a registered report. And if you choose a registered report, we guarantee you a higher success rate than if you use the other route. And also, this pre-proposal is the only thing that you submit to us. So if you are among the highest-ranked pre-proposals that chose registered report, you will get funding for your project, provided that you find a journal that will publish your project as a registered report.

So from the author's perspective, this is saving me some time, because, yeah, the pre-proposal is great independent of this, right?
I get a quick response for low effort. But then, if I choose the registered report route, I don't have to go through your peer review process for the full proposal and then the journal's peer review process after I do the research. That's now essentially combined into one review.

Yes, yes. As it should be, really, because the functions are really similar: assessing the quality of the research.

Yeah. So how does budgeting work, then? Because they can't go to the journal and say, I want a $10 million thing.

No, so they tell us in the pre-proposal form how much money they need. And if they are granted, we say, we'll guarantee you that sum, and we'll also give you $15,000 to write up the, what's it called, the stage one article: the description of the methodology.

That's great.

And after that, they will get the rest of the funding if the paper is accepted. The problem will be if the reviewers at the journal say, no, you need double the participants. That would be a problem, because we have no more money to give that project.

Right, right. And have scenarios like that come up yet? Or do you have a feedback loop planned for how to manage that when it comes?

Yeah, we'll do an evaluation now. This is the first time we've done it; the application deadline is the 15th of February. So we're really excited to see. As of now, 15% of the applicants have chosen registered reports.

Wow.

So that's a good sign.

Yeah. And are these at the...

I'm just going to interject.

Please.

Sorry, one second, Brian, because we only have a couple of minutes left, and we do have one question I want to make sure we get to, from Nicholas Gibson, who has asked: have you encountered any downsides or challenges to locating all of the Dam-funded projects within a specific registry? For example, how do you handle research that's co-funded with another funder, if that's an issue at all?

That's not a huge issue at our end, because we tend to be one of the largest funders, and it has to be a new project. So I don't believe that has been a problem. One related thing I can say is that getting data out of the registry, to get an overview of the ongoing projects connected to one funder, is probably an area for development for the OSF.

Yeah, so better reporting mechanisms, so you can see it all in the big picture.

Yeah, and getting data out in Excel files and different formats.

Great, great. So maybe we can wrap this part so that we can go to any other Q&A. Maybe you can elaborate a bit on this: what are the asks that you have for service providers, or for the research community, that would help to evaluate, improve, and advance the efforts you're pursuing?

This is just a dream, and I know it's hard to get to, but the absolute best thing, at least for the clinical trials and observational trials, would be a feature that really captures the outcomes in a good way: a way where the outcomes are categorized so that it's really easy to track whether this outcome is the same as the outcome reported. So that you can just get an overview: okay, these are the 10 outcomes, and here is what happened to those 10 outcomes later in the process. Right now it's really hard, and that goes for all the different registries. It's really hard to see, okay, is this published outcome the same outcome or not? I've done this a couple of hundred times in the trial I mentioned. It's so confusing; it's so hard to really get to whether it is the same outcome or not. So that would be of great help.

Yeah, because the format of a paper doesn't necessarily correspond at all to the structure of a preregistration, and so the mapping becomes very difficult. That's great.

And it's also easy to write the same outcome in different ways in the preregistration template; you can do that in a number of different ways.

Yeah. Well, hopefully we'll have a solution for you in the relatively near future, because on our roadmap is an outcome reporting format that would mirror the preregistration format. What users would do is just say, this was what was planned and this is what happened; this was planned, this is what happened. And so that should be easy to export for that kind of discovery.
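To sketch the idea behind that roadmap item: if each planned outcome gets a stable identifier at registration time, and the report maps each identifier to what was actually published, then "these are the 10 outcomes and what happened to them" becomes a mechanical query instead of detective work. This is a hypothetical illustration of the concept, not the OSF's planned format; the field names and status labels are invented.

```python
# Hypothetical data model: each preregistered outcome carries a stable ID,
# and the outcome report records what happened to it, so dropped or changed
# outcomes become machine-checkable.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Outcome:
    outcome_id: str                   # stable ID assigned at registration
    planned: str                      # the outcome as preregistered
    status: str                       # "as_planned" | "modified" | "unreported"
    reported: Optional[str] = None    # the outcome as published, if any
    note: Optional[str] = None        # author's explanation for a change

def flag_problems(outcomes: list[Outcome]) -> list[Outcome]:
    """Outcomes that were dropped, or changed without an explanation."""
    return [o for o in outcomes
            if o.status == "unreported"
            or (o.status == "modified" and not o.note)]

registered = [
    Outcome("O1", "Pain at 12 weeks (VAS)", "as_planned",
            reported="Pain at 12 weeks (VAS)"),
    Outcome("O2", "Depression (BDI-II) at 12 weeks", "unreported"),
]
for o in flag_problems(registered):
    print(o.outcome_id, o.status)  # -> O2 unreported
```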
That actually relates to a question I see in the chat, from Crystal: do you have any experience yet with whether grantees are taking advantage of updating their preregistration as the reality of the research appears? Are they submitting revisions: here's what I planned, but here's an update, and here's an update, even before they know the outcome?

Yeah, I don't have the complete overview, but I've checked a number of times, and there are definitely updates to the preregistrations. And you can see very clearly whether they have updated or not. So that's a really good thing. And I believe it's useful for us as a funder, and it's also very useful for researchers out there.

That's great.

And, just to say, we demand that all our projects register openly. So it's not an option to keep it...

Embargoed. Yeah. So, if people aren't familiar with the service, there's the option to embargo the registration for up to four years, so that if there are concerns about others seeing the ideas, or otherwise, that doesn't happen. But most users, even those not required to register openly, do; something like 70% of registrations are public as they're registered.

Nick Gibson has a follow-up question, asking whether you've found any advantages in terms of monitoring and evaluation from having all of the funded research in a single registry, rather than allowing researchers to register wherever they would like.

Yeah, it's a lot better. It's just easier to get an overview of the ongoing projects. That was a nightmare before; it was hard to just get the overview. So I would really recommend it, and it's a small cost to pay, both to be able to gather the projects and to have the option to screen the preregistrations before they go public. So that's a really good option. And you also get messages when they start registering.

Yeah, okay. Yeah, that's great. And the other potential benefits, I would imagine, though I don't know if you've experienced them, are consistency and training. Researchers who don't know what to do don't have to be trained on 17 different ways of doing it: this is the approach we use, and this is how we can give you feedback. And likewise for the standards for evaluation: you have an expected structure that the information will come in, for the person you mentioned who checks all of these. I suspect that's another benefit.

That's a great help as well, yeah, absolutely. And we have a limited number of templates that they can use, so that also makes it easier.
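On the overview-and-export side mentioned above, the OSF does have a public read API, so a funder can pull basic metadata for its registrations programmatically. A minimal sketch, assuming the funder keeps the registration IDs in its own grant records (the public API doesn't offer an obvious "all registrations for funder X" query); the endpoint and attribute names follow the OSF v2 API's JSON:API responses, but treat the details as something to verify against the current API documentation.

```python
# Rough sketch: export a funder's registration overview to CSV via the public
# OSF v2 API. Registration IDs are assumed to come from the funder's own
# grant records; verify endpoint and attribute names against current docs.
import csv
import requests

OSF_REGISTRATION = "https://api.osf.io/v2/registrations/{}/"

def fetch_attributes(reg_id: str) -> dict:
    """Fetch one registration's attributes (JSON:API format)."""
    resp = requests.get(OSF_REGISTRATION.format(reg_id), timeout=30)
    resp.raise_for_status()
    return resp.json()["data"]["attributes"]

def export_overview(reg_ids: list[str], path: str = "registrations.csv") -> None:
    """Write one row per registration: enough for a funder's status overview."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "title", "date_registered", "public"])
        for reg_id in reg_ids:
            attrs = fetch_attributes(reg_id)
            writer.writerow([reg_id, attrs.get("title"),
                             attrs.get("date_registered"), attrs.get("public")])

# export_overview(["abcd1", "efgh2"])  # placeholder IDs, not real registrations
```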
Okay, yeah, that's great. A question from Blaine: for the researchers you've seen so far who have now gotten into the registry, are they also using other features of the OSF that are attached to the registry, such as adding data or materials to their projects, or connecting a preprint of their paper once they finish their work? Or maybe it's too early for that.

No, they do. I think that's one of the things we'll have to make sure happens more often, but some of them do use it like that. And we also demand sharing data, and of course we prefer that they share the data on the OSF. When a lot of the projects start finishing, we will demand that. We also hold back 10% of our funding until they submit the end report, and at the end report we will demand that they share data and do all those kinds of things. But none of our projects have gotten to the end report yet.

Okay. So keeping 10% of the money is a great motivator for doing things like that. And have you worked through, I know you're not there yet, but have you worked through those scenarios where they say, well, we haven't published it yet; do we really need to share data now, while we're still working on the papers? What is the requirement going to be?

So, we haven't decided on that yet, but we will demand sharing the results within 12 months after ending data collection, regardless of whether they've published or not.

Okay.

We haven't done that yet, but that's part of our new open science policy.

Right. Nadia, are there other questions or other context that you want to make sure we cover before we wrap today?

I actually have a question, which is maybe a bigger-picture question, about other funders in Norway. I know you're quite a large one, but are you seeing other funders, smaller funders, other research organizations look to this work that you're doing with registration as part of your workflow, and change their workflows? What has the conversation been like in response to these requirements that you have?

So we've been able to move quickly. We're fairly big, but we're still fairly small; we're just 13 people, and we don't have huge, cumbersome decision-making processes. So we decide on things very fast, and trying out a two-stage process is one example. We also have two programs with a running process, where you can submit your application at any time and get it reviewed within 20 days; you get your answer within 20 days, and the funds are never emptied, so it runs over the whole year. And we have the registered report and preregistration processes at the OSF. Other funders are looking at what we do, and they've started copying the two-stage process and the running process. And the Norwegian Research Council has said, we're looking at what happens with the registered report project to see if that's something we should do as well. So they're definitely looking at what we do and are interested, but they want to see some results coming in as well.

Yeah, yeah, they're all... but sorry, please, go ahead.

Yeah, the project that you're doing, looking at the benefit of preregistration, I can't remember the name of the project.

There are a few different ones, but yeah, we have one where we're doing a trial looking at registration and registered reports as well. That may be the one.

So I think projects like that will really move things forward.
Yeah, and there's a burgeoning meta-science research community that's really interrogating these topics with a variety of different approaches, a real global community now. So I think there's a healthy dialogue to be had between the fundamental research on whether these ideas are meeting their promise and those, like you, who are pushing the boundaries of what we might try experimentally. And then, for those things that work, other funders, other journals, other communities may say, ah, we have some confidence that we can adopt these and gain the benefits.

Absolutely. So yeah, we're trying to just test something and see how it lands, and build our processes on research done by others, like you.

Great. Well, let me try to get one more question in from the chat before we close. Nick Gibson asks another one, which is a fundamental one for registered reports: if a journal is going to commit in advance to publishing these regardless of outcomes, it creates an interesting challenge for types of research where the research process is sequential, where what I do in my second study is contingent on what happened in the first study. So is there a mechanism yet, or have you had any experience with this, to provide some way of enabling sequential types of research through that registered report model?

No, we get a lot of questions about that, so I don't have an answer other than that, right now, we're just giving the choice to the applicants. We have said, these are the requirements: your project has to end up in one publication, and you have to publish it in one of the 300-plus journals that you have listed on your website. And they have to choose whether they can do it or not, and then we will see what kinds of questions we get and what obstacles they face along the way.

Yeah, so this will be an interesting thing for the evolution of registered reports. There are several papers using the registered report model that have multiple experiments. Most commonly, it's the capstone experiment that is the registered report, and the others are preliminary, exploratory work that builds the case for doing it. There are some with several experiments where they were all put through the registered report process. My anecdotal observation of those is that they are independent enough experiments, all converging on a similar hypothesis and each testing it in a different way, so it's fine to propose them together as cumulative evidence. So it'll be interesting, in your process, to see how many of those who are planning in that way choose the route of registered reports.

Yeah. I just want to add that we've previously funded a lot of PhD projects, so we get a lot of questions about whether this is suitable for PhD projects. Our answer is yes, it's suitable, but it's probably not suitable to choose it at the pre-proposal level right now. You can still publish your research that way, but it's hard for us, because the way we've set it up now, the project has to end up in one publication, and a PhD has to end up in three different publications; that's a requirement by Norwegian law now. And we believe that invites slicing your data and pulling the project apart. So that's something we're working on, politically as well.
Yeah, well, that is a great illustration to end on: science is a social system, and there are many different factors influencing the actions of any given researcher. The national context, policies that may not even have considered the potential implications: if you require three publications and researchers do one big project, they're going to find a way to turn it into three, even if that's at the expense of the quality and the reporting of the work. So all of this effort that you and others are committed to, to improve the quality of research, has to be in constant dialogue with those other actors, other funders, other publishers, the researchers themselves, so that we can keep converging on what's actually working here and what isn't, improving along the way through that iterative experience. So we really appreciate the work that you're doing, and your willingness to talk with us about it today, to give us a sense of what the progressive end of these reforms looks like, to maybe inspire others to try out some things, or even to reach out to you and ask, what about this, what about that, what do you think would work or not? So thank you, and thank you, everybody, for the questions that you offered today.