My name is Belinda Orlin. I'm the Senior Manager of Research Operations at the American Heart Association, and this webinar is cosponsored by the Health Research Alliance and the Center for Open Science. The Health Research Alliance is a membership organization of almost 80 non-profit and non-governmental funders whose mission is to maximize the impact of biomedical research to improve human health. The Center for Open Science is a non-profit technology and advocacy organization that's dedicated to improving research practices, increasing openness, integrity, and reproducibility in research, and accelerating discovery. I think we have some slides that they're going to put up that talk a little bit more about the HRA, if someone could advance those. Does that look good? That looks great. So this slide is showing the diversity of our HRA membership in both size and mission. One of the strategies that HRA uses to enhance the impact of our research funding is eight working groups. One of those eight working groups is the Open Science Task Force, and I, along with Jason Gerson from PCORI, am currently co-chair of this Open Science Task Force. In our recent HRA meeting we had a breakout session for this group, and there was a lot of interest around pre-registration, so that was a topic we really wanted to get some more information out about for our group. HRA has partnered with the Center for Open Science many times before, including co-hosting a forum for funders to discuss how to maximize research impact by promoting open and reproducible research, and offering a webinar on the TOP Guidelines to the funder community. And today we're looking forward to hearing from Brian Nosek, who I believe is snowbound in Virginia, so hopefully we'll have a good connection here.
Brian is the Center for Open Science's executive director, and he's going to talk to us a little bit about how pre-registering research can improve research rigor and reduce bias. So with that I'm going to turn it over to you, Brian. Great, thank you Belinda, and thanks everyone for making time for this today. A couple of general notes before we get going into the presentation. The first is that there is a Q&A feature that you should be able to see in your Zoom browser. If you click that, you can ask questions at any time during the presentation. Most likely those will be addressed as we get to the end; we should have plenty of time for Q&A back and forth, but they will be addressed either by myself or another member of the team. A second note, as Belinda mentioned, is that if you are, like me, enjoying the winter that was foisted upon us over the last two days, you are stuck at home. So there are two disasters that could occur. One is that I will lose my internet connection, but my voice is coming in by phone, so I should be able to continue, and we have backup at the office of folks who can manage the slides. The second possibility is that one of my two children will come rushing in behind me declaring that I need to exact retribution on the other one for some misbehavior. They know they're not supposed to come in, but who knows what will happen. Avoiding those two disasters, if we can get through this with a plain discussion, great. So I will jump in. What I hope to do here is give 35-40 minutes of overview of what pre-registration is, why it's important, and how it can be implemented in effective ways to facilitate improving the rigor and reproducibility of research. Pre-registration is a very popular topic these days, and increasingly so, as it's gaining awareness and traction outside of its most prominent application, and that is in clinical trials, where it has been required by law since 2000.
So the context for this is really in trying to think about how it is that we can maximize the quality of our research and advance knowledge, cures, and solutions as quickly and effectively as possible. In the big picture, the goal in general, for the kind of work we are trying to do and the research community is trying to do, is to advance transparency, rigor, and reproducibility in order to maximize the return on our research investments. There are only so many dollars that can be dedicated to trying to solve the problems that we're trying to solve, and we want those dollars and that time to be as effective and efficient as possible in making discoveries, advancing knowledge, and creating the cures and solutions that we seek. There are many different things that one can do: having open data, sharing materials, open review, many different practices for promoting greater transparency and rigor. Pre-registration is probably the most important among them, particularly for advancing rigor and the credibility of the findings that we observe. And so what I want to do today is make the case for that. Why is it, how is it, that pre-registration can play such a fundamental role in improving the rigor of our research? That's what I will try to review. I'll start with 10 to 15 minutes of just trying to outline what the problem is that pre-registration is relevant for solving, and then how it is that pre-registration helps to address it. A nice, simplistic, almost caricature version to characterize the problem was advanced by Simine Vazire, and I provide a variation of the way that she spelled out these challenges that pre-registration addresses. So you can think of the research process in this simplistic way, right? I come up with some kind of idea that I want to test to investigate an interesting question. There's some domain, some phenomenon, some area of interest that I am trying to learn something about.
And so I generate a study design. I have some question that I want to test. Maybe I have a clear hypothesis. Maybe I just have a general set of questions. But maybe I have some predictions of what I think we'll observe based on our current understanding of the world. So the first thing that I do once I have the data is observe the outcome. Did I obtain the predicted effect? In a simplistic sense, if I did, if I found what I was looking for, then what do I do? Go to publication, right? Let's share that result with the world, because I discovered something that I think is important enough to have studied, and I want to share it with others. If I didn't obtain the effects that I predicted, my interest in that study doesn't disappear instantly. I still may have things that I could learn from the data that I have now observed. And so I can then look at the data and say, did I find anything that's interesting? Sure, it wasn't exactly what I anticipated up front, or maybe I didn't have strong expectations. Did I find something? And if I do find something, then one possible path for sharing that information is to then construct the story. Well, I didn't anticipate this in advance, but now that I've observed the data, I come up with an explanation. Oh, I can understand how it is that this works, or why it might have come out this way. I might reference old theories. I might fashion a narrative that puts the story points together. And then I publish that. Here's what I think I've learned based on the discoveries that I didn't anticipate, but that emerged from the data. Now, the challenging part of that is what is commonly known as HARKing: hypothesizing after the results are known. If I generate my answers for why I observed that post hoc, then I am not doing a prediction. I'm doing a postdiction. I'm explaining it after the fact.
But it's also possible that even after looking around at the different things that emerged, I don't see anything right away, and I still may not be done with this data set. I still think that there's more to do. But first, to point out another way that people have analogized HARKing: the Texas sharpshooter fallacy. The basic fallacy is that there's a sharpshooter, he points his gun at the wall, he takes some shots, and then afterwards he goes and draws the bull's eyes around where he hit the wall and says, see, I'm a great shot. The obvious point is, if you put the target up in advance, that's a prediction. That's where he has to aim, and if he hits there, then I will treat that shot as a credible result of his skills. But if he constructs the targets after the fact and says, well, that's what I was aiming at, that's the explanation, then the results are not that credible. But if it is the case that I want to go even further, that I don't find anything interesting just observing the outcomes, I may then look into the data set and start to pull it apart in different ways. Maybe this is a place where there are a number of outliers. So what happens if I change my criterion for including observations in my analysis versus excluding them? What happens if I put a different functional form on the analysis? What happens if I include some covariates? Maybe if I collect a little bit more data, just a little bit more, the effects would be clearer. That kind of process is commonly known as p-hacking, or taking advantage of researcher degrees of freedom, or more generally questionable research practices, where I may engage in behaviors that help me to see signal, but it may be signal that's leveraging chance, taking advantage of noise, by looking for significant positive results when there aren't actually any there.
And if I publish those, they may actually be non-credible results, because I have leveraged chance in trying to observe things. But nevertheless, because I can construct interesting findings, they may be publishable, because people say, oh, that is interesting. And they don't know the origin, the history of how I got to those claims. If I don't find anything even in that, then this is a dead study. It's all negative results, nothing interesting here. So it drops into the file drawer. And the main problem that emerges there is this potential for selective reporting. As a practicing researcher, I am incentivized to publish; I try to publish as much as I can of the things that I'm learning. I also am more interested in finding positive results and novel findings that haven't been observed before, because those things are more interesting. But as a consequence, I may be more likely to ignore the null results, the things that come out negative, the things that didn't quite work. And if I'm taking advantage of chance in this decision process along the way, if I'm searching for findings among what I observed, if I'm re-analyzing the data and findings are popping out, I may be obtaining positive results that are an exaggeration of reality compared to what actually happens. And so if I practice selective reporting, I promote positive results into publication, and reviewers are perhaps more likely to accept positive results as more interesting and innovative and useful for the literature. And the file drawer becomes more replete with negative results. As a consequence, the published literature appears more positive, more significant, more innovative than the reality in the data. Okay, so that's the basic matrix of what the challenges are: HARKing, p-hacking, and selective reporting, which are ultimately particularly relevant for the role of pre-registration.
So let me just give a couple of very quick examples of findings that suggest that some of these issues are real and significant challenges in the published literature. One of them is about the power of research designs. There are a number of studies that have investigated the power of studies to detect the findings that they're investigating. And across these many different studies, across many domains, the average power of studies tends to fall between 20 and 50 percent. What does that mean? The power of a study indicates: if the size of this effect is what we think it is, and the sample size that we have for investigating the question is what we define, then the power is the likelihood that we would find a significant result, a positive effect of that effect size. To put this in more concrete terms: if I am doing studies that have an average power of 50 percent, but I'm studying all true things (the effect size I think is going to occur really is the true effect size), and my power is about 50 percent, then what I would expect is that about half of my studies would find positive evidence for that effect, and half of the studies would find negative evidence, even though it's a true effect. Because of that, the average power of studies is a ceiling on what we would expect the positive result rate in the literature to be. So if we assume that the entire literature is true findings, then what we would expect, given that the average power is 20 to 50 percent, is that about 20 to 50 percent of the studies would find positive results. But in fact, when we look at the positive result rate in the published literature across a variety of different disciplines, it tends to be 90 percent or higher showing positive results. These two numbers do not line up.
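To make the arithmetic behind this concrete, here is a small simulation. This is my own hypothetical sketch, not anything presented in the talk; the specific numbers (effect size d = 0.4, n = 48 per group) are chosen purely so that power lands at about 50 percent. With a perfectly true effect studied at 50 percent power, only about half of the studies come out statistically significant, which is why average power caps the honest positive-result rate.

```python
import math
import random

random.seed(1)

def simulate_power(d, n, sims=2000, alpha_z=1.96):
    """Fraction of simulated two-group studies of a TRUE effect of size d
    (n per group, unit variance) whose two-sided z-test is significant."""
    se = math.sqrt(2.0 / n)  # standard error of the difference in group means
    hits = 0
    for _ in range(sims):
        g0 = [random.gauss(0.0, 1.0) for _ in range(n)]  # control group
        g1 = [random.gauss(d, 1.0) for _ in range(n)]    # treatment group
        diff = sum(g1) / n - sum(g0) / n
        if abs(diff / se) > alpha_z:
            hits += 1
    return hits / sims

# d = 0.4 with n = 48 per group gives almost exactly 50% power, so about
# half of these studies of a perfectly real effect come out "negative":
print(simulate_power(0.4, 48))
```

If most published studies are run at power like this, a literature showing 90-percent-plus positive results cannot be an unfiltered record of what was found.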
It's not possible that we could have a 90 percent positive result rate, almost every published study finding positive results, while simultaneously those studies have an average power of 50 percent at best. And that's the most generous possibility, because it surely isn't the case that every single thing that we have studied is a true effect, and at the effect size that we believe it is. So what this implies very strongly is that something, something big, is getting left out: we are getting into the published literature those studies that happen to meet the significance threshold and so show positive results, and a lot of negative results or underpowered studies are getting left aside, are not getting into the literature. Okay, so that's one example. A second example comes from Annie Franco and her colleagues, who wanted to compare what a study was when it was actually done with what was reported in the paper when it was finally reported. What I'm showing you here is a plot where they compare what was in a study, from a sample of studies that they were able to get, versus what was reported in the paper about that study. On the x-axis is the number of outcomes administered, that is, in the actual experiment, how many outcome measures did they have, from zero up to 40. On the y-axis is the number of outcomes reported, meaning, in the paper, how many outcomes did they report were in the experiment, again from zero to 40. So what you would expect, if the researchers reported every outcome that they measured in the study, is that all of the dots, which indicate individual studies, would be on the diagonal line, indicating that if they had five outcome measures, they reported five outcome measures.
But what you see is that many of those dots, in fact 70% or so of them, fall below the line, indicating that the paper reported fewer outcome measures than were included in the study. That itself is selective reporting. For some reason, the authors or the reviewers and editors decided to exclude some of the information that was in the study. Some outcome measures weren't reported. But there's an additional thing that they did in follow-up, and that is to look at the outcomes themselves, because they had access to the data in this case. They looked at the outcomes that were reported versus the outcomes that were not reported. And what they observed is that for the outcomes that were reported, there were 122 total outcomes, and the median p-value was .02. Now, if you know about p-values, a significant p-value is p less than .05. So you want smaller p-values, indicating that this data is unusual in some way: if there were no effect to observe, then this result was unlikely to have occurred, and a smaller value indicates greater unlikelihood. So something might be getting detected here. The median effect size, about .29, is about a third of a standard deviation between what the treatment group showed versus the control group in an experimental design. And the proportion of the findings that achieved p less than .05, the traditional marker for a significant effect, was 63%. Compare that to the unreported tests. There were 147, and the median p-value for these, the ones that didn't get into the paper, was .35. So at the median, these were null effects. The average effect size, .13, was less than half that of the reported tests. And only 23% of the unreported tests were significant effects. So this is demonstrating that bias in selective reporting: some of the tests, even in the same experiment, weren't getting mentioned at all, and those that weren't getting mentioned were more likely to be ones that didn't find positive results.
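As an illustration of the mechanism, here is a hypothetical simulation of my own (it is not Franco et al.'s data or method; every number in it is invented for the sketch): if a study measures many outcomes that all share the same small true effect, but only the outcomes reaching p < .05 get written up, the reported effects look several times larger than the truth, echoing the reported-versus-unreported gap described above.

```python
import math
import random
import statistics

random.seed(3)

# Hypothetical setup: each study measures 20 outcomes, each with the same
# small true effect (d = 0.1), n = 50 per group, and only outcomes that
# reach p < .05 are "reported".
def run_study(n=50, d=0.1, outcomes=20):
    se = math.sqrt(2.0 / n)          # standard error of the mean difference
    reported, unreported = [], []
    for _ in range(outcomes):
        diff = random.gauss(d, se)   # observed effect for this outcome
        z = diff / se
        p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        (reported if p < 0.05 else unreported).append(diff)
    return reported, unreported

rep, unrep = [], []
for _ in range(500):
    r, u = run_study()
    rep += r
    unrep += u

# The reported (significant) effects average several times the true d = 0.1,
# while the unreported ones look close to null:
print(round(statistics.mean(rep), 2), round(statistics.mean(unrep), 2))
```

The filter alone, with no bad intent anywhere, manufactures the pattern of big reported effects and near-null unreported ones.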
The negative results were the ones that were left out, for whatever reason and at whatever part of the pipeline that occurred. So this is part of the challenge: there is a history from what the study was to what's ultimately reported, and as readers of those reports, we don't know that history. We can't credibly evaluate the research as reported in the paper, because we don't necessarily know the entire history of what the experiment actually was, what the decision process was for how to analyze the data, and which outcomes to report and which not to report. And so that provides a significant unknown in evaluating the credibility of the ultimate finding. Okay. That's as much as I'm going to say about the problems, though there are many other areas where we could discuss challenges with the research process. What I want to do now is transition into talking about the context in which pre-registration emerges as a solution. And the context for that is thinking about the two modes of the research process as it occurs in practice. Philosophers of science have talked about these two modes of research in a variety of different ways. One of them is the context of justification versus the context of discovery. Let me talk about each of these in turn. When we're in the context of justification, what I am doing when I start a research project is looking for data that I can use to confront my current understanding of the world. I have some hypothesis about how this phenomenon works, and so I want to acquire data in order to test that hypothesis. Can I confirm or disconfirm whether this hypothesis holds when I obtain data to evaluate it? So this is hypothesis testing, right? I have existing models of the world that I know are incomplete in some ways, but I want to confront them to see which parts survive and which don't, so that I can advance to the next set of questions.
So the decisions that I make about how to evaluate that hypothesis are independent of the data. I haven't observed the data yet when I'm in the context of justification, because the data are the test against which my hypothesis, my beliefs about what will happen in the world, will be confronted. So this is a true prediction context for doing research. The other mode of research is the context of discovery. What I'm doing in that context is actually interacting with the data in order to generate ideas about how the world works. I don't necessarily have a model, or I already recognize the deficiencies of my model of the world. And so I'm generating hypotheses by looking at the data and finding different ways that I might understand it, discovering new things that I hadn't anticipated at all. So this is exploratory research. I am making decisions that are contingent on the data. As I observe the data, it influences how I then look at the data next, what other analyses I do. This kind of process, exploration and discovery, is a very productive part of research, because our existing models of the world are very incomplete, especially when we're in preclinical or basic domains. We don't yet have, in many cases, real clear expectations about how a particular phenomenon works. We're really starting with a very open-ended mindset: let's look at many different possibilities, and the data will inform how we might think about the phenomenon, so that we can start to put together some kind of theoretical framework to understand what it is and how it works. So this is really important, but there is a very important difference between exploratory analysis and how we treat it versus confirmatory analysis and how we treat it. And a key part of that is that the ways we ordinarily use statistical inference, for example those p-values that we discussed earlier, are for the context of justification.
We design studies and we evaluate those findings with statistical inference when we are hypothesis testing. It's called null hypothesis significance testing. This is the domain in which p-values are interpretable. And the reason that they're interpretable in the context of confirmation is that p-values are estimates of unlikelihood. But in order for them to be diagnostic estimates of unlikelihood, we need to know how many tests were done and how those tests were planned, for those p-values to retain their diagnosticity. So if I know that this is the one test that I'm going to do, and I evaluate that test, then a p-value of .02 provides a probabilistic indication of how likely it was that this data, this extreme or more, would have been observed by chance under the null. But if there were many possible analyses that I could have done, then I don't know how to adjust that p-value's diagnosticity to make an accurate inference, because now there are multiple tests undermining the unlikelihood estimate. And so in data exploration mode, when I'm generating hypotheses and my decisions are influenced by the data as I look at it, the p-values lose their diagnosticity. A small p-value no longer carries much information that this result was unlikely, because I've looked at many different p-values, and the choices of how I interacted with the data were influenced by what I observed in the p-values and the effect sizes and the distributions, etc. And so I'm much more likely to leverage chance, to leverage noise, and inadvertently interpret it as signal. The consequence is that findings that are outcomes of exploratory analysis are necessarily more uncertain, in terms of the confidence or credibility that we can claim for them, compared to ones that are done in a strong confirmatory framework. And that's totally okay, because that's part of what exploration is: we discover what's possible, and then we design studies to evaluate what's credible with confirmatory tests.
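One way to see why unplanned multiplicity destroys the meaning of p < .05 is a quick simulation. This is a hypothetical sketch of my own, not an example from the talk: if I run 20 tests of effects that are all truly null and keep the best-looking one, the chance of at least one "significant" result is about 1 - 0.95^20, roughly 64 percent, whereas a pre-committed Bonferroni threshold of .05/20 (one common correction for a planned number of tests) brings the false-alarm rate back to about 5 percent.

```python
import math
import random

random.seed(2)

def min_p(k):
    """Smallest two-sided z-test p-value among k tests of TRUE null effects."""
    ps = []
    for _ in range(k):
        z = random.gauss(0.0, 1.0)  # under the null the test statistic is N(0, 1)
        ps.append(2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))))
    return min(ps)

sims, k = 4000, 20
best = [min_p(k) for _ in range(sims)]

naive = sum(p < 0.05 for p in best) / sims          # cherry-pick the best of 20 looks
corrected = sum(p < 0.05 / k for p in best) / sims  # Bonferroni: plan for 20 tests
print(round(naive, 2), round(corrected, 2))
```

The correction only works because the number of tests was fixed in advance, which is exactly the commitment a pre-registered analysis plan records.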
And the real consequence that we confront with these two both very important modes of analysis is that if we present exploratory analyses as confirmatory, we may increase their publishability, because we're more likely to find positive effects or novel findings that look like they have low p-values if we're still using those statistical inferences. But that comes at the cost of the credibility of those results: they're less likely to be reproducible, because we're more likely to be leveraging chance and generating false positives and exaggerations of credible findings. So this is where pre-registration comes in, because pre-registration has a singular function, and that is to make it clear when one is in the context of justification versus the context of discovery. We need both of these modes of analysis and investigation, but pre-registration helps to clarify when we're doing one versus the other, because with the pre-registration we commit in advance, before we've observed the outcomes in the data, to how we're going to analyze that data and what we're going to report. And so we can count the number of statistical analyses that we're going to do, and we can adjust our p-values so that they retain strong diagnosticity for interpretation at the end. And then everything else we do once we observe the data crosses that confirmatory barrier and is, to some degree, exploration. I'm treating this as dichotomous here; it is more complicated than that, there is some continuous nature to this, and we can talk about that more later. But once we are past what we registered and committed to before observing the data, the p-values start to lose diagnosticity, to the point that they're not very useful at all. Okay, so that is the context of what the role of pre-registration is in terms of distinguishing confirmatory and exploratory analysis.
Let me return to this little matrix we started with, that Simine had generated, and clarify how those modes of analysis map onto it. Those first initial steps are where we are in confirmatory research. We have a question, we have a reason that we're doing the study, and we have some kind of plan. It may be a minimal plan, but we almost always have some kind of plan for how we're going to look at that data and interpret it. It really can be minimal, because many times we enter a study with a very exploratory mindset already at the outset. We have very few expectations, but if we had no plan at all, it's unlikely that we would do the study. So we will often at least have some kind of plan that includes a minimal confirmatory test. But everything else that we do, after we observe those things that we predicted in advance, is exploratory analysis. What we call HARKing and p-hacking are problematic only to the extent that they are treated as confirmatory tests, that we treat the statistical inferences that we make from them as credible indicators for interpretation, as if they came from hypothesis testing. If instead we treat it as exploratory, we don't bother reporting p-values, we talk about the uncertainty, we surface the fact that these came out of exploratory analysis that needs to be followed up with a confirmatory analysis for greater credibility; then we are representing the data, and the uncertainty in it, more responsibly. To point out an example of how important it is to make this distinction very clear, here is an example study from Bob Kaplan and Veronica Irvin. This is a study where they examined the outcomes of clinical trials from the National Heart, Lung, and Blood Institute, where they were interested in part in what happened as a consequence of requiring pre-registration for clinical trials.
And so the key outcome to point out in this study: prior to clinical trials being required to be registered, you did not have to commit to what your primary outcome variable was for a particular trial. The positive result rate in the sample of studies that they examined was 57 percent. 57 percent of studies in this particular sample showed positive results when they didn't have to commit in advance to what the primary outcome was. Once pre-registration was required and they had to commit to the primary outcome, only 8 percent of the studies showed a positive result in this sample. That is a dramatic change associated with the change to pre-registration. Now, there are a couple of cautions with this particular example. One is that it's a small sample, so it's possible that we observed this by chance in this small sample, even though it was a highly unusual change; like any other finding, it needs to be replicated, and I know that they are working on this further. A second caution is that it's not an actual experiment. People weren't randomized to do their study before or after the registration of clinical trials was imposed. So it's certainly possible, even in this study, that, for example, this area of research just ran out of things to discover after 2000, and so you didn't see positive results afterwards because there weren't more things to find. Now, that may not be super plausible, but it is possible, and so we have to identify those kinds of cautions. But this is one example of many that find that the constraint provided by pre-registration can increase the likelihood of obtaining negative results where we would otherwise have observed positive results, and, theoretically at least, we presume that those are the more credible outcomes.
Okay, so one question that I often use in presenting this to researchers, as people look at the evidence, even in clinical trials, of outcome switching and these reductions in positive effects, is to ask: would I be okay with receiving treatment based on clinical trials that were not pre-registered? Is it okay that researchers might switch outcomes after the fact, that they might publish the positive results and not publish the negative results? A lot of that can happen without them even intending it, right? I am not intending to find false results, but I'm human. I have lots of reasoning biases that might influence how I interpret the data, and because I have skin in the game, right, I need to get positive results in order to advance my career, I may be more likely to promote the positive results, to reason and rationalize that those were the right ways to analyze the data, that those are the right findings, and to not report as much of the negative results, because they're not so interesting; I reason that they might have been the wrong way to do it. Most people, when they're asked this question, in my experience, say no, I'm really glad that clinical trials are pre-registered. That's an important constraint to impose, because those trials have significant implications for my health and the community's health. So an obvious question is, why wouldn't we then have that same expectation of rigor for pre-clinical or basic research? An argument that sometimes comes out, before people think about it, is, well, clinical trials are important. But if we're willing to go down that road, that clinical trials are more important and our basic questions are less important, then we shouldn't complain when funders decide not to fund our research as much anymore, because it's not so important. I don't think that's the right answer. I think the harder answers are ones that we can unpack, which is, pragmatically, how do we think about the role of pre-registration in basic and
pre-clinical sciences, particularly where there is a lot more exploration happening alongside confirmatory testing, and it's not always clear when we're in one mode or the other. That's where I think the interesting challenges are for how we think about adoption of pre-registration in research. Okay, so let me summarize what answers I think pre-registration offers for the kinds of problems that we're trying to address in science, and then get into some of these pragmatic issues. Pre-registration solves two things. The first is selective reporting, via the registering of studies. So one action is the registration of the study, that the study exists. By registering that the study exists, I make it possible for you to know whether I'm reporting in my papers all the studies that I've conducted, or only a subset, because you can look at what studies my laboratory has registered and compare that to what studies are appearing in the papers. Just that action, thinking nothing of the analyses in them, just knowing what research was done, provides an ability to address selective reporting. So, you know, I do lots of research on this particular question, and I am going to report only a subset of those studies, because there's a lot of research that goes nowhere, that isn't very productive, or there are errors; there are lots of things that happen. But it should be possible for you to discover what those are, so that you can evaluate whether you would have made the same decisions I did about ignoring some of that research. And so the registration of the study, that it exists, helps to make sure that all the research is discoverable, and then the peer review and decision process for what gets published is a filtering process that we can evaluate based on what's registered versus what's reported. So that's problem one that it solves. Problem two that it solves is addressing the confusion between confirmatory and exploratory analysis.
So: by pre-registering the analysis plan, I commit in advance to how I'm going to analyze the data. Registering that the study exists addresses selective reporting of studies; registering the analysis plan addresses all of the flexibility in how I treat the data and ultimately report it in the paper itself. Those two features of pre-registration have a profound impact on the three challenges we started with: HARKing, p-hacking, and selective reporting, or the file-drawer effect. Ultimately, pre-registration is important because it helps to distinguish confirmatory from exploratory research, and both of them are critical for research progress.

Okay, that's the big picture of what the problem is and how pre-registration conceptually tries to solve it. What I want to do now is describe what pre-registration looks like in practice. If we say, okay, I'm ready to require or encourage my grantees to register their studies, what am I actually asking them to do? And then, what are the barriers, the common objections that come up, and how do we start to think and talk about those?

Concretely, pre-registration works like this. Step one: before observing the outcomes of the research, write down the study design, how I will conduct the study, what the methodology and materials are, all the features of the design, and how I plan to analyze the data, what I'm going to do with the data in order to draw inferences about what I learned from the study. Step two: post that information to an independent repository. That registry then provides a date and time stamp. It might ask me to confirm: have you observed the outcomes yet or not? Presuming I'm not willing to commit fraud, I'll answer honestly: no, I haven't, or yes, I did, but here's what I have observed and here's what I haven't. Then the plan is in the registry, time and date stamped, for what is going to happen, so now it
has an independent check on me. Step three: once I have the data, or if I already had it, once I unseal it, I analyze the data following the plan I laid out in advance, and I report all of those planned analyses. If I proposed a hundred different analyses of the data set and I report only the four or five that look most interesting, then I'm not actually following through with the pre-registration; to follow through, I have to report all hundred that I planned. Maybe, had I thought about it in advance, I would have said I'll report the five main analyses and do other things as exploratory analysis; but once I've committed, I make sure I report all of the analyses I planned. Step four: then I explore the data set in lots of different ways. I look at it to see what I can extract from it, what new things I might learn, what might inspire the next thing I study. If I want to talk about those findings, I certainly can, and I often do, but I make clear that these were not part of the analysis plan, the confirmatory tests, but rather exploratory analyses I did after the fact.

Okay, so that's what we are asking of people when they pre-register a study. The first thing that happens, in many cases where this is new for researchers, is that they identify practical challenges that are real and consequential for importing pre-registration into their own research process. These matter, because pre-registration is not only perhaps the most important thing we can do to increase research credibility; it is also the biggest ask. It is a change in the researcher's workflow. The idea of documenting beforehand, of formalizing the planning process, is unusual for many researcher
workflows, and asking researchers to take this on requires a real shift in mindset about how we think about the design and execution of our experiments. It also confronts a lot of realities. In the simple case, where I run studies quickly, data are easy to collect, and I can describe simply how I set up and analyze my studies, it's easy to imagine pre-registration. But that is only a limited slice of how experiments and research get done. Sometimes research is done on very large data sets; sometimes those data sets emerge over a long period of time; sometimes there are planning issues that are difficult to address in advance. So for pre-registration to be adopted broadly, it's very important to consider how we can apply its concepts to the challenges that are inevitably confronted in complicated, complex research paradigms.

Here are some of the big ones that come up as researchers start to think about how to use pre-registration to improve the rigor of their own research. The first: "But what if I don't have predictions? I know this is exploratory. I'm engaging in exploratory research, so I don't have a strong sense of a model I'm going to test or an analysis plan I'm going to follow." Not a problem: that's an exploratory study, and the implication is to call it that. Right now, the culture of science incentivizes us to generate narratives after the fact that wrap what are actually exploratory results in a narrative flow conveying more confidence than the data actually indicate. If we really are embracing that a particular study is truly exploratory, then the most obvious solution is not to use hypothesis testing in the analysis. I don't know how diagnostic my p-values are going to be; in fact, I know that they are losing their diagnosticity
because I don't have an analysis plan committed in advance. So don't report p-values; they aren't meaningful information in exploratory analysis anyway. I can address this by being very upfront: this is exploratory, and here are my descriptive findings, which should inspire the next study, the next plan, an actual confirmatory test in the future. But there are also other solutions, because often, as soon as you say drop the p-values, people ask: how then can I make a strong case for my exploratory results? Well, you can't; it's an exploratory finding. This is where the mindset researchers bring confronts reality. They embrace the general concept, I'm doing exploratory research, I'm at the discovery end, but they want to use the tools of strong inference through confirmatory testing, and those two things don't fit together easily. So either we embrace that it is exploratory, or we find solutions that allow the transition from exploration to confirmation.

One of those solutions, if the research context allows it, is a holdout sample. We've done this a number of times in my lab when entering a new area of research without much idea of what we're going to find, or even of how to understand the phenomenon. For this kind of work we collect a large sample and take a portion of it for exploratory analysis up front. We randomly split the data, say 50-50, and on that initial 50% we just jump in and explore, analyze in many different ways, and discover what we think are a bunch of interesting things. Then we constrain to the things we really want to test now: we pick out the findings we think are important, the ones we would draw conclusions about, and we pre-register them. We have a holdout sample that we haven't looked at yet; we've
explored the first half to our heart's content, and we've created a model, or a set of tests, that we want to see whether they hold up as credible findings. If we are willing to commit to those after exploration, we can apply them to the holdout sample and use the statistical inferences from the holdout sample to draw conclusions about what we observe. Now, that's limited if data acquisition is hard; if it's very hard to get data, I may not have enough for a holdout sample, and that's just the familiar challenge of having good enough power when doing research with small samples. Okay, so that's number one.

A second: "But what if the data already exist? How can I possibly use pre-registration if we already have the data?" There are many, many research applications across disciplines where the researcher conducting the analysis isn't the one who generated the data, or where they generated the data over a long period of time, so it may feel like: how can I possibly pre-register? Not a problem. Of course there are challenges; I'm using "not a problem" to say we can jump in here, we can do something about this. The opportunity is to think about how to implement as many of the parts of pre-registration as we can. What's the goal of pre-registration? To provide constraints. I want to minimize the flexibility I have in decision-making about how I analyze the data and how I report the outcome. So how do I constrain that, given the context of the data we have? If the data already exist, the things to consider are: who has observed them, what have they observed, and how? How much is known, and how much of that influences the choices the analysts will make in designing the analysis plan and then interpreting results? As long as I can transparently report what we know and what we
don't know about that data set, then I am maximizing what we can say about the constraints we were under in drawing inferences, and about the potential sources of bias or influence that may reduce their credibility. The key, from my point of view, is the comparison. When we talk about pre-registration we tend to think about the ideal: the data don't exist yet, you make the commitment, then you get the data and test the commitment. That ideal doesn't work for many research applications. The correct point of comparison is the opposite: no pre-registration at all. Is it better to pre-register when the data exist than not to pre-register? If you want to get to credible inferences, then yes, it is better; and how much better depends on how much constraint you can actually apply, and on reporting that, so that the reader, the consumer of what you've done, can adjust and calibrate their inferences just as you are trying to do in the research.

Okay. "But what if I need to analyze the data to know how to analyze the data?" This also happens all the time; data sets are often messy, and we do a lot of our figuring out about how to analyze the data when we're in it. Not a problem: there are techniques we can apply here. One is blinding the data. Take a couple of columns, the key variables; say each row is an observation, a person in human-subjects research, and we have two primary outcome variables. For a blinded data set, we shuffle those two columns so that each person is no longer associated with their own data. That blinded data set still has the properties of the original, but I can't evaluate the conclusions. So I can set up my models, look at outliers, do all the steps to clean up the data set without sacrificing the quality of the inferences, because my outcome data are randomized. Then, once
my models are set up and I'm ready, I unblind, impose the original structure back, and then test my inferences. A second option is called multiverse analysis. There are lots of potential choices about how I analyze my data, and instead of selecting one and committing to it in advance, I commit to the set of different ways I could choose to analyze the data: here are different covariates I could include, here are different exclusion rules I could apply, and so on. If I lay those out, I can analyze the data all of those ways. Instead of committing to a single analysis strategy, I run a multiverse of all the potential analysis strategies and then look at the robustness of my findings across the many different choices I could have made. The final strategy available is stepwise pre-registration. I might be in a modeling context where I need to create the variables, extract them from the raw data, before they can enter the main analysis. I can't pre-register those main-analysis variables because I don't yet know what they will be; I need to extract them from the raw data first. A stepwise pre-registration commits to that first step: here's how I'm going to analyze the raw data to get my extracted variables. Then, once I have those, I pre-register the next step, what I'm going to do with the extracted variables for inference. This incremental pre-registration along the research process tries to identify points where my earlier analyses wouldn't be influenced by choices I make later, so that I can draw credible inferences along the way.

Okay. "But what if things change during the study or the analysis?" I make a study plan, I say this is how many people I'm going to have in my study, this is how I'm going to analyze my data, and then once I get into data collection I realize, oh my gosh,
we can't collect people that way. We went into these schools thinking we'd get 30 classrooms; it turns out we can get 24. Am I sunk? And then, once I got into the data: we thought we'd analyze it this way, but we have this crazy distribution; we can't use that analysis, we have to use a different one. Am I sunk? No, it's not a problem. The goal of pre-registration is to document what the plans were along the way, not to bind you to decisions that are no longer the right ones when circumstances on the ground change. In almost all pre-registrations, it's probably fair to say, things change from what was planned to what was ultimately done. The key to effective pre-registration is to document and report what the changes were, and to justify why you made each change. It could be that you say: well, we planned to do this, then we analyzed the data and it didn't come out the way we wanted, so we analyzed it a different way and that's what we reported. That's not going to be a compelling change to the reader. But it could be compelling if you say: we tried to analyze it this way, but it turns out we have these outliers because a participant in the magnet fell asleep and didn't respond at all, and we hadn't pre-registered removing people who fell asleep. Okay, that's a reasonable change to make; as a reader, I don't think it does anything to damage your statistical inference. You can still analyze the data with the sleeping participant included and report that too, and then I as the reader can see whether the effect changed as a function of whether the sleeping participant was in or out, and whether that changes my confidence in the finding. As long as all of that is laid out and available, particularly in supplementary materials, where you can report additional analyses that were done but are not the main analysis, you allow the reader, the consumer
of that information, to evaluate your decisions transparently, compared to never knowing that you made those decisions at all.

All right, one last one: "But what if my ideas are so important that I can't register them, because everybody would be out there waiting to see my amazing ideas so they could steal them and do the studies first?" That's also not a problem. Registries like the OSF (not all of them, but some) offer the ability to embargo. If you use the OSF for registration, you can set up the registration with an embargo period of up to four years, so you get a full head start: your great ideas are registered, date and time stamped, but no one gets to see them, and they only become public after you've had sufficient time to actually execute the research.

All right, so those are some of the "what ifs". There are also reactions of the form: okay, I get it, this seems useful, but it still feels hard. There are a number of additional things we can think about that might improve the likelihood that researchers will proceed with registration with some confidence, or at least some willingness to try it out. The first: it feels like a big lift. And it does feel like a big lift to researchers who are newly exposed to it: I've looked at the form, it seems like a lot, and I don't know how to anticipate all the choices we're going to make in advance, because there aren't many examples of pre-registration in my area. The answer here is to think about registration incrementally. We don't need to anchor against the ideal of pre-registration and feel we must meet every single part of that ideal for it to be productive. Instead we should anchor against current practice: how can
we do what we do today a little bit better? The way my lab started with pre-registration in 2012 was to take the notes from the final planning meeting for a study in our lab session and just post them to the OSF and say: that's going to be our registration. That was something we did anyway; we were already writing down some plans, so we just posted them. Then, once we were analyzing the data, we realized: oh geez, we never even talked about how we would exclude data; let's do that next time. And the next time: oh, we didn't think about this factor; why don't we include that for the next one? The on-ramp to providing some constraint and additional rigor through pre-registration in a research domain can happen more easily with experience. As people step into the process, they learn more about their own research process: how they make decisions, which ones can be made effectively in advance, and how to make them. An incremental approach just says: like everything else in trying to do the best research we can, there's always room to be better, so how can we incrementally advance?

The second is the earlier point that things change. Things do change, and that shouldn't be a barrier to trying it, the worry that oh my gosh, maybe something will change here. Instead, write down what your plan was in advance, because you do have some kind of plan, and then document the changes. Doing that will increase the credibility of the claims you make, and help calibrate those claims for yourself, far more than not doing it at all.

Another concern people raise is that pre-registration will hamper them: I will not be able to explore, I'll have no flexibility if I pre-register my research. The simple answer: nope, it doesn't. Pre-registration
simply requires that you register in advance what you plan to do, the things you want to draw strong inferences about with your statistical analysis or otherwise; then you can do all the exploration and flexible work you want after the fact, and just call it that. That's all it does. This is a fundamental but pervasive, continuing misunderstanding of the role of pre-registration: it isn't to value confirmatory research over exploratory research; it's to show when you're doing one versus the other.

And the fourth: if you do pre-registration badly, it won't solve the problem. That is absolutely true. Bad pre-registration will not deliver the improvements we want to see in rigor and reproducibility, though in many cases bad pre-registration is still better than no pre-registration at all. Failure to identify exclusion rules when you did identify the primary outcome? Well, at least you identified the primary outcome in advance. But simultaneously, it's possible that some kinds of pre-registration will not improve at all the way the research process identifies true findings or makes progress. So it is critical that we have a culture that incorporates evaluation and continuous improvement into the practice of pre-registration, in its application to particular research domains and in its improvement on both an individual and a cultural basis for any research process.

Okay, I want to close with a few minutes on promoting adoption, and then we'll switch over to questions; I see a number of questions have been coming in, and we'll try to address those. Promoting adoption: how do we get people to do this? The good news is that people are starting to do it, and in increasing numbers. The Open Science Framework (OSF) offers the ability to register any kind of research, including basic and pre-clinical work, and you can see on this plot the number of
registrations by year since we launched the service in 2012, and it's non-linear. What's particularly encouraging about this growth rate is that it's occurring without strong, broadly adopted policies saying you have to pre-register. By and large, the adoption of pre-registration in the communities that have adopted it most is a function of its emergence as good practice: a recognition that this is a way for me as a researcher to improve the rigor of my research, so I'm going to test it out. And once people test it out, they rarely turn back. My more anecdotal observation is that once they've done it, they see: oh, this is actually how I imagined research to be when I was planning to get into science in the first place. Oh, it's actually exciting to have that moment of, here was our plan, and here's what happened. Oh, this helped me feel free to actually explore my data without feeling like I was doing something wrong in exploration; I can explore my data with abandon, I'm just calling it that, and now I can do so with additional confidence. So the emergence of registration as a normative practice has been occurring in advance of stakeholders actively trying to promote, incentivize, or require it. That is a very encouraging sign for the adoption of pre-registration as good practice: in the communities adopting it, it isn't being treated as a bureaucratic burden, but as an opportunity to do better research.

The second thing to raise is that NIH has been taking steps to increase the use of pre-registration in research. In particular, you may be familiar with their moves on clinical trials, where registration is required by law, and those requirements are in discussion for extension to basic experimental work done with humans. That is still a moving target, as it were; they have requested information from the research community on
how this could be best implemented and made most useful for those domains. There's a lot of debate about the suitability of clinicaltrials.gov as a registry for the kinds of basic and pre-clinical questions involved, but it is very clear that NIH has commitments to improving rigor and reproducibility, in part through promoting pre-registration in more domains of research than it has been applied to in the past. Through the Center we are working with NIH on possibilities such as having the OSF provide a pipeline, so that people can register there and meet some of the registration requirements, and I expect there will be a lot of work by a variety of communities trying to align pre-registration with the NIH standards and technologies.

A last example of promoting adoption is linking pre-registration with publication, and the mechanism for that is the concept of registered reports. You may have heard the term; you may be very familiar with it. But sometimes people think that registered reports and pre-registration are the same thing, and they are two different things. Everything I've talked about so far through this entire session is pre-registration as part of the research process, regardless of publication, regardless of journals, with no link to journals at all. Registered reports combine pre-registration with the publication peer review process. The way they do so is shown with this cartoon version of how research gets done: you design a study, collect and analyze the data, write the report, and publish it. In the traditional model, peer review happens after the report, when all the research is done, so the peer reviewers evaluate the research that was done and then criticize it for what you should have done to make it publishable.
Registered reports make one change to that process: they move peer review to after the design phase. As an author, I submit to the journal my pre-registered plan: this is my design, how I'm going to do the study, how I'm going to analyze the data. The reviewers and editor evaluate whether the question I'm investigating is important and whether my methodology is an effective test of that question. If they agree it's an important thing to study, that we need to know the answer, and that the methodology meets the journal's threshold of quality, then it gets an in-principle acceptance from the journal: go ahead and do the research, and as long as you follow through with everything you said you were going to do, and do it with quality, we will publish it, regardless of outcome. The goal of registered reports is to reduce publication bias by removing the influence of whether the results are positive or negative on decisions for publication, because at this stage the reviewers don't know whether the results are positive or negative. And it builds pre-registration into the process: I submit my proposal for what I'm going to do, and the reviewers provide feedback that improves my registration and analysis plan before I actually complete the research. So it gets the best of both worlds: expert review and advice that can actually be incorporated into improving the research, rather than expert review that just tells me everything I did wrong after the fact. And once that mutual commitment is made between me and the journal, the work is pre-registered and will ultimately be published. This offers lots of new ways to think about the incentive system for researchers, and reasons for them to adopt pre-registration as a mechanism toward publication. So registered reports combine those two things.

Okay, let me close the prepared remarks, and then we can go to
questions, with some specific things that funders can do if they're interested in promoting or evaluating pre-registration in their communities. The first is simply raising awareness. It is very clear that in a number of research domains the concept of pre-registration is a novelty. I gave a presentation to a group of earth science journals and researchers, talked about pre-registration for part of it, and the reaction was: that sounds very interesting, I've never heard of it before. Going from never having heard of it to adopting it as standard practice involves a lot of steps in between. So raising awareness, by sharing academic papers like the one referenced here and many others that explore pre-registration, its implications, and how it might be translated into practice, or by being part of brown-bag discussion groups, can be an excellent first step toward gaining traction in communities. Funders can also point researchers to guided workflows that have frequently asked questions and support for pre-registration along the way, so that rather than coming up with the concepts whole cloth, researchers can start with a workflow that guides them step by step through what they could pre-register and how to do it effectively and efficiently. There are also plenty of opportunities for training: webinars like this one, and services offered by members of our team and others gaining expertise in pre-registration, so that researchers can learn and talk through, in real time, how to translate some of these issues into practice. We could spend a whole two-hour session on this; we're working with a group that works with large existing data sets, thinking through how to translate the principles of pre-registration most effectively for that kind of data in their domain of
research.

There are also opportunities to address pre-registration in policies for grantees. The TOP Guidelines provide that framework: pre-registration of studies, to address selective reporting, and pre-registration of analysis plans, to address questionable research practices, are separate items in the TOP Guidelines. A low bar for policy adoption, especially in domains where pre-registration is new, is Level 1 for pre-registration of studies and analysis plans. All that Level 1 requires is disclosure: researchers have to say whether they've pre-registered the study or not, and if they have, they have to provide a link to the pre-registration. This is an easy way to raise awareness about pre-registration without imposing something new on researchers who aren't yet prepared, through training or understanding, to see how it's relevant to their work. They just need to say whether they've done it; and if they have to say that, they may ask: oh, that's interesting, what is that, how might I think about it, does it apply to my work? There are also organizations, like the Laura and John Arnold Foundation, that now require pre-registration for, I believe, all of the research they fund. That is a much stronger stance on the role of pre-registration in improving rigor and reproducibility, and it may be beyond what some foundations are prepared to do yet, but there are models people can talk to to find out how that has worked in practice. I also want to mention the Templeton World Charity Foundation as another funder pushing innovation in how to think about pre-registration and related practices to promote rigor and reproducibility. A just-announced funding program in consciousness research by TWCF incorporates the idea of adversarial collaboration at the front end, in designing the studies. There are lots of competing theories of consciousness, but the researchers tend to talk
past each other: they design their own experiments, they confirm their own point of view, but they rarely find ways to confront the theories against one another. So in this process they brought the adversarial groups together and had them fight it out for a couple of days in a room, to come up with studies where their theories make genuinely different predictions, design experiments that would test those differences, and then pre-commit to those research designs, so that there are actual tests meeting the aspirational standard of confronting multiple theories with an experimental test. It's really interesting and innovative, and worth exploring as funders in areas where there are contentious debates about particular claims.

Another area to consider is piloting registered report partnerships with journals. A couple of communities of funders and journals are piloting a single process in which the proposal for the research is reviewed for funding and for publication simultaneously. The great outcome of this is that it potentially simplifies the process for authors: I get both my funding and my commitment for publication in a single review process. It also increases the likelihood that funded research gets published; so many studies are funded and conducted but never reported at all, with no return on investment. And journals are excited about the potential for getting funded, high-quality research into their pages. If you want to explore some of these, there are a couple of examples here that are underway, and there's a lot of interest in testing more.

And finally, there are opportunities to partner with existing registries to promote registration in one's community, or even to launch registries for a particular research community. On our own service, the OSF, we are launching soon, in early 2019, OSF Registries, where
communities can run their own registry service and define their own metadata for what should be registered for their kind of research, to really maximize the opportunity to fine-tune how registration gets done and to maximize its relevance and efficiency for particular research domains. So that's an opportunity as well. I'll close with a slide showing links to some of the main things I raised during this discussion, and also point out the many members of our team who are highly involved in pre-registration in their different roles: implementing it as policy, evaluating it as a research process, providing training for it, and maximizing its quality in execution. Across the team, that's David, Tim, Ian, Courtney, and Alex. So those are the prepared remarks I have. We have a number of questions that have come in, and I will transition to reading those and providing answers as best I can. I'll mark the time here: it is 12:13 East Coast time, so we have 17 minutes for the Q&A portion. I'm happy to go through as many of these as I can; you can continue to submit them, and you can follow up with me or any other person whose address is on this page with questions about how we can help with pre-registration. So let me go to the Q&A items and start at the top. Brent asked: do you have any data on uptake and acceptance of pre-registration by certain fields? I showed that graph of overall adoption of pre-registration and how it's accelerating, but this is a very good question, because we don't have great insight into how it is being adopted across different disciplines. We do know in general where adoption is strongest: in basic and pre-clinical work, the strongest adoption so far is in psychology, particularly in the subfield of social psychology, though it is expanding to other areas of psychology. There's also very nice uptake in
economics, particularly in development economics and the randomized trials occurring in the economics community. But by and large, most of the adoption so far, moving past early adopters and starting to reach the mainstream, has occurred in the social and behavioral sciences across their different subfields, and that's where most of the registries have emerged as well: in politics, economics, and psychology. Some of it is now moving out into the life sciences, with ecology and evolutionary biology as the most prominent transition disciplines, as well as neuroscience. Adoption rates in those fields aren't yet as high, but those are the very interesting growth areas that are emerging. There's a lot more to study about where it is happening and how best to facilitate it in new domains. Another question: do you think there's room for pre-registration in student projects? Is there existing infrastructure for pre-registration for master's projects, PhDs, and dissertations? That's a great question. I think pre-registration actually feels quite natural for students as they're getting into the research process, because we tend to describe the scientific process, as people get into it, in its idealized form: you generate an idea, you come up with a way to test that idea, you collect some data, you evaluate it against what you thought was going to happen, and then you share what you learn with others. That idealized model of the scientific process, as we describe it even to fourth graders (my daughter just brought that home in her assignment on how science works), essentially builds in pre-registration in how it's described. So I think there's lots of opportunity, in terms of instructional focus, for building pre-registration into student research. In practice, it's very easy to build in, because there are a number of registries that
welcome any kind of research, whether it's student research or otherwise. For example, on the OSF you can select from different registration metadata formats, ranging from very simple (easy to do but not very comprehensive) to very comprehensive but more complex. So one can select from these different ways of registering research to match up with student projects or otherwise, and just identify them as such. That's a great question. Next question: do ClinicalTrials.gov or COS allow pre-registration of observational studies? The answer is yes to both. ClinicalTrials.gov has existed since 2000, and over its history it has had around 10,000 studies registered that are observational or basic research, so it has the facility to do that. While the rates aren't high at ClinicalTrials.gov, because it is perceived by some as burdensome for that kind of research, it nevertheless has been used for it. And the OSF is wide open for observational studies; that's a perfectly reasonable use case there. Another question, from Salvatore: what is the appetite of high-impact journals to publish experimental studies in a pre-registered research model? And what about the confidentiality of labs that do not want to share their plan with everyone? Okay, that was probably asked right before I mentioned that you can embargo registrations on some registries, so a registration stays private in advance. However, it may also be asking about the review process: the reviewers will inevitably know what the researchers are planning, because the plan is being peer reviewed. That particular issue is the same one that occurs in the grant review process. Reviewers of grants have the opportunity to see the plans of other researchers in their community, and there are strong norms and standards about the confidentiality of that
process, and those norms should apply equally to the review of pre-registrations, which in many cases are very similar to what's actually proposed in a grant proposal. Nevertheless, this is something to take very seriously, particularly if you as a funder are incorporating this into a review process or into a partnership with journals: make very clear what the normative expectations are, so that reviewers have no illusion that if they try to steal an idea from someone else's work that they observe, they are acting counter to norms and even to policy, and will be dealt with accordingly if they are caught doing so. In terms of high-impact journals' interest in publishing this work: there certainly is high willingness to publish pre-registered research among the editors of high-impact journals that we engage with. What is still an area of reluctance among journals, as it is among funders, is the decision to require pre-registration for publication. That is a very reasonable concern at this stage. Across the wide range of research types that journals, especially high-impact journals, publish, requiring registration at this stage would be a big lift and would introduce a lot of chaos, with people saying: I don't even know what that is; how can you require it of me at the onset of my research? And because pre-registration is new in a variety of research domains, it isn't yet perfectly clear how it translates effectively into practice in some of them. For example, I did a session a number of months ago with the American Chemical Society, and the editors there were quite interested in the concept of pre-registration. What we puzzled over in a lot of discussion is how it would translate into practice for some of the kinds of research they publish, particularly in domains
where the research is really descriptive: they are showing, can we actually create this kind of compound? Does pre-registration actually apply to that? There are certain domains of research that are knowledge-advancing but aren't hypothesis-driven, even in an exploratory sense, where pre-registration may not be a useful tool, at least as currently conceived. So there's a lot still to explore in the research community about how this gets translated into practice. Imposing it as a requirement demands pretty clear knowledge about how registration works in that kind of research, so I think adoption by high-impact journals and by funders will mostly start with pilots ("let's try this out"), with domains where it is pretty clearly laid out, and with incentives ("it'd be great if you tried this in these new domains") to see how far it can go. Okay, thanks for that question. Another question: is there a listing of all the registration sites that can be used, and which are best for different disciplines? I don't think there is a formal listing; that would be a good thing for us to post on our website. More generally, there aren't very many registries outside of clinical trials. There is ClinicalTrials.gov, which applies to any research but particularly in the US context, and there are sister clinical trials registries in a number of different global regions; that's the main collection of registries focused on clinical trials work. For basic and pre-clinical work, the most popular registries right now are the OSF; the AEA (American Economic Association) registry, called the Social Science Registry, which is particularly useful for RCTs in economics, though I think they accept other kinds of research; EGAP, which offers a registry related to research in politics; and RIDIE, which offers a registry for international development impact evaluations. Afterwards we'll send
some links to all the registries that we are aware of that can be used for research. Oh, and I should mention that SREE, in education research, just launched a registry at the end of October for randomized trials in education and for case study research. Another question: can you provide a link to your pre-registration in your application, if you would like to? Yes, that certainly is the case. It's very easy to provide a link, just as it's easy to provide links to other supplementary materials, and to the extent that researchers are already adopting pre-registration, they often want to say so, because they believe it is good practice, and to share that in their application materials. The last question I have here: do you see a growing expectation from publishers or journals for pre-registration? We covered that in general terms in the answer to one of the last questions, but the specific answer is yes, and incrementally so. Imposing a strong requirement across the board is unusual unless the funder or the journal is in a domain where it's clear how that could be done. There are two examples where it has been imposed as a requirement. The Laura and John Arnold Foundation requires it for the research that it funds, and the kinds of research questions it funds (education, social policy evaluation, and social and behavioral science applications) are very amenable to pre-registration in those domains. So they have the ability to present it as an expectation, and even though lots of researchers in those domains aren't yet familiar with it, there are tools and facilities to become relatively familiar in order to do the research, and that facilitates overall adoption. On the journal side, there is one journal that I know of outside of clinical
research that requires pre-registration, and that is CRSP, Comprehensive Results in Social Psychology. That journal is a registered-reports-only journal, so they do not accept research that has already been conducted. You have to send a proposal of the experiment or experiments you're going to conduct, that proposal goes through peer review at the journal, and once they agree to publish it, it is a pre-registered commitment: this is what you're going to do, and this is how you're going to analyze the data. They then publish the outcomes, and they only accept that kind of research. It's a very exciting experiment, and they've had nice success in gaining adoption and submissions for that process, though they also have the advantage of serving the area that has been most active in pre-registration, so there's already a community of interest. I think most of what will happen in the near future is incremental adoption through pilots, tests, and disclosure. Okay, I think that takes care of all of the questions that have come up, and we are just about at time, so we'll end right on schedule. Let me close by thanking you for making time for this session, and let me thank the lords of the internet for not having this break, and whatever it is that allowed my kids not to come in (of course, I now have to go check that they're alive). Many thanks to you, and please follow up with us if you have any follow-up questions or things you would like to do in collaboration. We're delighted to have this interest in pre-registration, and we'll be delighted to support anything you would like to do to advance it in your research communities. Thank you very much.