All right, let's go ahead and get going. Can everyone hear me and see the slides okay? Looks like it, great. Thank you so much for your time today. Macy and I are extremely excited to be talking about this. This has been a long time coming, and we are finally getting to the point of launching it. We're going to be talking about the Global Flourishing Study embedded randomized trial, which explores the impact of preregistration and registered reports. This is a really interesting project for a lot of reasons, principal among them being that we can produce some pretty interesting evidence in a way that has not really been done before, primarily through a highly collaborative process. The Global Flourishing Study itself is a huge collaboration between lots of folks, funders, researchers, and organizations of all kinds. The trial itself is a collaboration with the Global Flourishing Study, and within the trial we have collaboration with our participants. This is a very collaborative project. Before I really get going, though, please feel free to drop questions and comments in the Q&A at any time. We might be able to get to some of them live as we go, but there's a big Q&A section at the end, and this is a relatively short presentation, so we should be able to get to all of your questions. That being said, let's go ahead and get started with a little bit of scene setting. We're doing a couple of things at once in this presentation and in this project. The first thing, in a kind of narrow sense, is that we're looking at the impact of an intervention on research policy and practice. In particular, we're looking at the design of the Global Flourishing Study trial, which measures the impact of some of the pre-commitment devices we talk about all the time: things like preregistration and registered reports. So that's the narrow sense.
We're looking at what registered reports and preregistration actually do. But if we zoom out a little, we're also going to be talking about the idea of experimental evidence development: how do we actually produce evidence in these unusual situations that are really difficult to study, and severely understudied in part because they're difficult to study? So we're talking about evidence generation, in a way. And ultimately what we're talking about is impact. We would like to change the research environment by developing evidence. We want to see what works and what doesn't, and if we can see that, we can produce a better system through this kind of evidence generation. But to review a little bit: some of you may be familiar with the idea of the Garden of Forking Paths. The idea here is that for any given research project, when I start out with my research question, there are just unbelievable numbers of directions I could go in. I could choose to report certain things, clean my data in certain ways, control for things in certain ways, frame my question, choose models. There's an effectively infinite number of directions any given research pathway can take. We usually think of these things as following a straight line, but it's really messy. When we see a research project, we tend to assume it followed one direction, but we don't know what was eliminated along the way. And all of that messiness, in a bit of a questionable research environment, provides opportunity for some incentivized messiness, we'll say. So: questionable research practices like p-hacking. What if I selected things just to get that precious p less than 0.05?
Hypothesizing after the results are known, also known as HARKing: the idea being that I searched for the thing that was significant, then hypothesized why that thing was significant, and then presented that hypothesis as if I had started with it and was confirmatorily testing it. Selective reporting, opacity in process: these are the sort of questionable research practices that result from this garden of forking paths, this issue of incentivized questionable research practices. This is an incomplete list, but that's the researcher side of things. Now, we don't always know that we're doing these things, or that these things are maybe bad; they're not always necessarily bad, right? But it's a tough thing to manage as a researcher. Then, on the publication side, we also have publication biases. I typically call these publication-related biases because there are all kinds of problems here. We can have selection on results: the most common version is that you're much more likely to get published if your results are statistically significant, whatever that might mean in a given project's specific case. That means we don't publish as many null results, even when the null results are good. Those are some classical problems in publication. And then there's peer and editor review. We don't really know what happens under the hood of peer review, or on what criteria reviewers are actually judging things. Are people looking mostly at the methods? Are they selecting on the results? Largely on the language? It's a really opaque and messy process in there, and we're worried that people are selecting on results over methods, even though methods, for the most part, drive evidentiary strength more than the results do.
And so we would hope that peer review focuses on methods more than it probably does in reality. What we have is a research environment that favors results over rigor, reliability, and transparency. We have these researcher-related problems, these questionable research practices; we have these publication-related biases; and we probably have weak methods that we could fix with much better ones. And there's this idea out there, and the Center for Open Science talks about it a lot, that there are devices and processes we can use, which we're going to call pre-commitment devices today, that might help here. The two we're going to focus on in particular are preregistration and registered reports. All right, backing up a little bit. In the traditional publication model, the way we think about it anyway, there's a step-by-step process: we start with a question or a hypothesis, we design the methods to test it, we do our data collection and analysis, and then we write up our results. Only after that point does somebody finally look at it. We send it to a journal, where it gets, in theory, peer reviewed, reviewed for who knows what criteria. At that point, the journal editors decide whether or not it is worthy of publication in that particular venue, and then we publish it, right? So that's the traditional step-by-step process. Preregistration adds a step in the middle. We develop our question or hypothesis, we design our methods, and then we say: okay, here is what we are going to do, or what we are planning to do. Now, you can certainly register something that is exploratory; this does not have to be classical confirmatory work. That's great and awesome. What a preregistration is doing at this point is saying: here is what we are planning to do, and here is some documentation of it.
And then, if you're doing something more confirmatory, you go ahead and collect your data, you write your manuscript, and it's the standard journal peer review process from that point on. It's just inserting this little step in the middle that says: okay, here's our intent. The idea is that you can plan the whole study from the very beginning; you can see the big picture before you actually do things. If you do a public preregistration, say through OSF, you also improve searchability quite a bit, so people can find what you are doing, and anyone interested in riffing on your idea, or waiting on it so they can run their own tests afterwards, can much more easily find what's going on. It provides an opportunity for discussion of methods, which is really key, because going back to look at the methods after your study has already been done and the data are already collected is a really backwards, really expensive way of doing critique, whereas the best opportunity for criticism is when you are designing your method. So it gives us a really nice, tangible thing to talk about. And you can identify to what degree your study is strictly planned, in a hypothesis-testing regime, or unplanned, where you're looking to do something exploratory. Both are really important, but it's important to clarify what you are actually doing and intending. And importantly, since you've preregistered, in a lot of cases you are limiting, to some degree, the opportunity for questionable research practices. You've already committed to a path, and then you're presumably going to follow that path to the best of your ability, right?
These are not prisons; this is not a stone tablet; things change. But it is an opportunity to say, okay, this is what I plan to do, and then when you do it, to say: look, this is exactly what I planned to do, I wasn't messing around very much in that garden of forking paths. Registered reports are kind of a preregistration-plus. Again, you register your intended methods, but rather than go do your experiment and submit it to a journal later, you submit what you intend to do, your protocol, your registration, to a journal first. The journal runs a peer review process on the plan itself, on your protocol, and makes the main publication decision at that point. Critically, that focuses things on the actual methods. We are not selecting on results at this point, because the results don't exist, and the journal is pre-committing to publish the results, whatever they are. That really binds against the selection on results that typically happens in a normal peer review process. So when you submit your design and it passes, that's called an in-principle acceptance: an acceptance before you actually collect or analyze your data, focused on the design rather than the results. And in theory, though we don't really know, it can help address publication-related biases. So registered reports are basically a preregistration, helping limit researcher-related biases, plus an extra bit that helps reduce publication-related biases. And what do we actually know about the impact of these things? Unfortunately, not a whole lot, at least in terms of strong evidence. It's very difficult to know much about this. We have some evidence from non-randomized observational studies, but they run into the fundamental problem of causal inference, which is that causal inference is hard.
So we have a lot of problems with selection on outcomes. We have a multi-level problem: at what level are we looking at this? Are we looking at the impact on systems? At research projects, and from what point? It's also really hard to imagine designing an experiment in this particular setting; we'll get back to that, since that's why we're all here today. And we have a whole lot of levels of outcomes: process outcomes, and research outcomes like p-values and statistical significance and so on. One of the big problems here is that when we look at the data that's out there, we only really have data from the point of publication, which means all of the selection that happens beforehand is invisible to us. We really want to start from the idea phase and go outward, and that's tough to do in a lot of circumstances. If we want strong causal data on the impact of these things, we really have to start from the beginning. And what sorts of impact are we looking at? Process outcomes: which parts of preregistration and registered reports impact timelines and transparency? How far did you get through the research process before you dropped out or switched to another pathway? That has to happen even before you get to the research reliability outcomes: replicability, strength of evidence, and "biasedness," which I'm putting in quotes because we like to think about things as unbiased, but that's a difficult word to use in this particular case. So we have a way of addressing all of these things in a pretty unique way, and to talk about that, I'm going to turn it over to Macy. Awesome, thank you. So I'm just going to provide a little bit of background on what the Global Flourishing Study is and how exactly we identified this opportunity to embed our trial to study these mechanisms.
So the Global Flourishing Study is a collaboration between Harvard, Baylor, and Gallup, who are conducting a five-year longitudinal survey of 200,000 participants in 22 countries about what makes humans flourish. Some specific aspects included in the survey are things such as happiness and life satisfaction, mental and physical health, meaning and purpose, character and virtue, close social relationships, and material and financial stability. This entire dataset is going to be hosted on the OSF, which is a product of COS. One of the major outcomes of the Global Flourishing Study is that the entire dataset eventually becomes completely accessible to the public. Data is going to be released in five waves, one per year, and each wave will become entirely accessible to the public one full year after its initial release date. So the first wave of data, which will be released next year, will become completely available in 2025. However, GFS on its own has also created a workflow for those who want early access to the data: they can submit either a registered report or a preregistration describing how they plan to analyze the data, and submit it to the GFS registry, which is also hosted by COS on the OSF. Next slide, please. By piggybacking off of that existing workflow, we decided to introduce a third option for early access, where researchers can opt in to be randomly assigned to submit either a preregistration or a registered report to the registry to receive the data. Additionally, we plan to survey them on their experiences and opinions with either mechanism. I'll go into the workflow and the specifics of how that actually looks shortly, but this kind of trial design allows us to examine some key aspects, such as research timelines: how long does it take to analyze, write, submit, and publish a registered report versus a manuscript that was preregistered?
What is the publication rate for papers that are registered reports versus preregistrations? We're also going to look at the impact on research outcomes: what is the rate of papers with statistically significant main findings? And of course, we're going to look at subjective experiences and beliefs. Do researchers like either process? Would they continue to use them in the future, and what would they want changed, et cetera? So how does this trial work on a practical level? As I've mentioned, the main comparison is between individuals who are randomly assigned to submit a registered report versus a preregistration. I know it can seem like quite a big leap to let someone else decide the process by which you do your research project, but let me break it down and tell you how this would actually look. Say you have a general idea of what you would like to do with the GFS data, but it's not really developed yet. Those who are interested in this trial would enroll before developing their idea or making any kind of submission to the registry, and receive their random assignment. Along with the random assignment, we will provide a list of resources, so those who are less familiar with either preregistration or registered reports can successfully create one. From there, the practical workflow between the two arms is pretty similar. Starting with preregistration: after they get their assignment, they'll create their preregistration and upload it to the GFS registry. From there, they'll receive access to the dataset and perform their analyses. Essentially the same thing happens for registered reports, the main difference being that after receiving their assignment, they'll write the registered report and, before making a submission to the registry, they will submit their Stage 1 registered report to a journal to undergo peer review. After receiving in-principle acceptance, they can then make their submission to the registry.
And of course, the reason for that difference is that peer review of the protocol is part of the registered report process. Along the way, participants in both pathways will receive periodic surveys that ask about the status of their projects and their opinions of and experiences with either registered reports or preregistration. To sum that up: for participants in this trial, the process looks like having a general idea of what you want to do with the data, enrolling in the trial, taking the baseline survey on your previous experiences and beliefs with either mechanism, receiving your random assignment, and then, for preregistration, creating that and submitting it to the registry; for the Stage 1 registered report, creating that, submitting it to a journal, and then to the registry. In order to expand the reach of this trial, we are also inviting those who choose for themselves what kind of submission to make to opt in to those same surveys. These could be people who learn about the trial after they've already made a submission to the GFS registry, or people who would like to participate in some capacity but would prefer to choose what kind of submission they make to the registry. These individuals can sign up at any time, before or after their submission. Altogether, this creates four arms within the trial. We have our primary comparison between those who are randomly assigned to preregistration and those who are randomly assigned to registered reports, and then we have a secondary comparison between those who choose for themselves to submit a registered report or a preregistration. Again, all participants in all arms will receive quarterly surveys that ask about the status of their projects and their experiences and beliefs with either mechanism, and they will receive these surveys up until they mark their project as complete.
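To make the four-arm structure concrete, here is a minimal sketch in Python. All names and the assignment logic are hypothetical, purely to illustrate the design described above: randomized enrollees get a mechanism by coin flip (that coin flip is the intervention), while self-selected participants keep their chosen mechanism, and everyone gets the same survey schedule.

```python
import random

MECHANISMS = ("preregistration", "registered_report")

def assign_arm(enrollee_type, chosen=None, rng=None):
    """Place one participant into one of the four trial arms.

    enrollee_type: "randomized" (opted in to random assignment) or
                   "self_selected" (chose their own submission type).
    chosen: the mechanism a self-selected participant already picked.
    """
    rng = rng or random.Random()
    if enrollee_type == "randomized":
        # The trial's intervention: a coin flip between the two mechanisms.
        mechanism = rng.choice(MECHANISMS)
    elif enrollee_type == "self_selected":
        # Observational arms: the participant's own choice is recorded as-is.
        assert chosen in MECHANISMS
        mechanism = chosen
    else:
        raise ValueError(enrollee_type)
    # All four arms get the same baseline + quarterly survey schedule.
    return {"arm": f"{enrollee_type}:{mechanism}",
            "surveys": "quarterly until marked complete"}

# One participant per enrollment route:
print(assign_arm("randomized", rng=random.Random(0))["arm"])
print(assign_arm("self_selected", chosen="registered_report")["arm"])
```

The design choice this sketch highlights is that the two self-selected arms reuse the exact same data collection as the randomized arms; only the presence or absence of the coin flip differs, which is what makes the randomized pair the primary comparison.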
And complete does not necessarily mean that the paper has been published; we are interested in all outcomes here. That could be deciding not to publish, throwing away the project, submitting a preprint, withdrawing from the study, what have you. And of course, all participants in all four arms can withdraw from this study at any time with no consequence to their access to the GFS dataset. I briefly mentioned that people would receive a list of resources to help them successfully create either a preregistration or a registered report, and I wanted to expand on that a little so you know what you can expect. COS provides a pretty robust set of resources for both preregistration and registered reports, including help guides that walk you step by step through starting a preregistration. We have a list of all journals that accept registered reports, and additionally a curated list of journals specific to the kind of data GFS presents. We have prerecorded webinars, checklists, examples of both preregistrations and registered reports, FAQs for both, and more. The last practical thing to go over here is the timeline. As you can see, we are pretty much at the very beginning of an exciting few years. The sample data is out to help you formulate your preregistration or registered report, and we are expecting Wave 1 data to come out in early 2024. Just as anyone can apply for early access to any one of these five waves of data, anyone can enroll in this study for any of those five waves. So technically, the last year to enroll in this trial would be 2029. Now I'm going to pass it back to Noah so he can go a little more in depth on the trial design and the analysis plan behind it. Yeah, thank you.
So the really important thing to understand about how we're analyzing this is that it is a staged process: from randomization to, say, preregistration, to getting a complete analysis, writing a manuscript, and publishing. This is often called a longitudinal cascade: a series of steps that take time to move between. So naturally, one of the things we are really interested in is how much time it takes to get between stages. This is one of the key questions people have about preregistration and registered reports, and it is a prerequisite to pretty much everything else, including our research outcomes. So timelines, how long it takes for people to reach different stages, are really the key here. And behind that, we have two competing theories about what's going to take longer. A registered report puts the peer review part at the beginning of the research process, and that is a very time-consuming process: you have to wait for peer reviewers to submit and get back to you, and editors, and so on. So we might expect that, compared to a preregistration pathway, registered reports will take much longer to get from starting the idea, or randomizing, to actually having data and analysis in hand, because you have to do all those initial steps beforehand. But on the back end, once you have your Stage 1 peer review, your in-principle acceptance, you don't have to go through that long, arduous peer review process to get published in a traditional peer-reviewed journal. So things might get much faster from that point. We have these two competing time-to-event questions, and as for the net time from beginning to end? Well, we don't know, because there are time costs at the beginning and time savings at the end.
We don't know what the net result is going to be, so these three questions are among our key questions. As Macy mentioned, there are a variety of surveys. In addition to the baseline survey, which is mostly about personal demographics, we have quarterly surveys about project tracking, and this is where our key data comes from. We'll be asking people: what stage are you at now, and when did you complete certain stages? We're going to approximate this by weeks, as in, about what week did you start your analysis or data collection, so that we can track these projects for years and have pretty good data for these time-to-event, event-staging analyses. In addition, we can use OSF and the other resources available to us to triangulate some of this, because a lot of that information is public: it will be available on OSF or through publications, we can search for things, and so on. So we have a pretty rich dataset. Every so often, we're also going to ask about personal experiences with the process, sort of consumer-experience surveys. The idea of this trial is that we are in a naturalistic, or pragmatic, framing. Once we randomize, once we signal, okay, it is your intent to do a preregistration or a registered report, that signal of intent is our inducement, our intervention. Whatever happens from that point counts as outcomes, and we're interested in all of it: whether or not you complete the project as assigned, how long that takes, what happens along the way. That's all data, and all of it is what we are interested in.
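To make the competing-timelines question concrete, here is a toy sketch of the kind of stage-completion data the quarterly surveys would yield, and how per-stage and net durations fall out of it. The stage names and every week number below are invented purely for illustration; the trial's actual stages and data will differ.

```python
# Hypothetical stage-completion weeks (week 0 = randomization) for one
# illustrative project per arm. All numbers are made up.
prereg_weeks = {
    "registration_submitted": 6,   # write and file the preregistration
    "analysis_complete":      30,
    "manuscript_submitted":   40,
    "published":              90,  # full peer review happens at the end
}
regreport_weeks = {
    "registration_submitted": 35,  # Stage 1 peer review happens up front
    "analysis_complete":      60,
    "manuscript_submitted":   70,
    "published":              85,  # only Stage 2 review remains
}

def stage_durations(weeks):
    """Weeks spent reaching each stage: gaps between consecutive events."""
    events = [0] + list(weeks.values())
    return {stage: b - a for stage, a, b in zip(weeks, events, events[1:])}

for name, weeks in [("preregistration", prereg_weeks),
                    ("registered report", regreport_weeks)]:
    print(name, stage_durations(weeks), "net:", weeks["published"], "weeks")
```

In this made-up example, the registered-report path is slower to its first milestone but faster from manuscript submission to publication; which effect dominates the net time is exactly what the trial is designed to estimate, not something these numbers answer.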
And most of what happens will come through these stage-completion, event-date surveys: that self-report through the surveys, plus our manual searches for research outcomes. As Macy mentioned, we're primarily interested in the differences between people randomized to preregistration versus registered reports. That's our big comparison, our bread and butter in this particular study. There is also that secondary set of data: we are including data from people who self-selected into preregistration or registered reports and then voluntarily gave us access to their project tracking data and surveys as a secondary comparison, because the difference between being randomly assigned to preregistration and choosing preregistration is really important, interesting, and useful for the research environment to know. Our primary outcome is research-stage time to event, so we're really talking about pretty standard Kaplan-Meier-type statistics here. This is embedded in a longitudinal cascade framing, which basically just means there's some imposition of ordering on these events that keeps things nice and clean and easily interpretable, but it's all pretty much basic Kaplan-Meier time-to-event statistics. As a secondary outcome, and there's a reason this is secondary, we are also looking at the impact of preregistration and registered reports on research outcomes. Statistical significance of the main outcome of the research project is the big one, right? We want to know: do people doing registered reports publish more null results, or different effect sizes, and so on?
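Since the primary analysis is standard Kaplan-Meier time-to-event estimation, here is a minimal, self-contained sketch of the estimator on invented data. In practice one would use an established survival-analysis library; all the times below are hypothetical. Each observation is a time-to-milestone in weeks plus a flag for whether the milestone was actually observed or the project was censored (still in progress, withdrawn, and so on).

```python
def kaplan_meier(times, observed):
    """Kaplan-Meier survival curve S(t).

    times:    time (weeks) to the milestone or to censoring.
    observed: 1 if the milestone occurred, 0 if the project was censored.
    Returns a list of (event_time, survival_probability) steps.
    """
    data = sorted(zip(times, observed))
    at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        # Events and total observations tied at this time point.
        events = sum(e for (tt, e) in data if tt == t)
        n_at_t = sum(1 for (tt, _) in data if tt == t)
        if events:
            # Standard KM step: multiply by the conditional survival fraction.
            surv *= 1 - events / at_risk
            curve.append((t, surv))
        at_risk -= n_at_t
        i += n_at_t
    return curve

# Invented example: weeks until "manuscript submitted" for 6 projects,
# two of which are censored (still ongoing when last surveyed).
times = [10, 14, 14, 20, 25, 30]
observed = [1, 1, 0, 1, 0, 1]
print(kaplan_meier(times, observed))
```

Running one curve per arm and comparing them (in practice via a log-rank test or a regression-based equivalent) is what "pretty standard Kaplan-Meier-type statistics" amounts to here; the censoring handling is the part that matters for a trial where many projects will still be mid-pipeline at any given survey wave.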
That is really the end game people think about when they think about registered reports and preregistration, at least a lot of the time, and we're really interested in it. There's a trick, though, because it's heavily conditional on what happens in the earlier stages. We have lots and lots of projects, and we expect that not all of them will get to the final publication stage. Getting to the stage at which we can analyze these things is conditional on the earlier parts, which creates an interesting selection problem. We have some ways of dealing with that, and we can talk about them if there are questions. But very importantly, it's also a pretty large hit to our sample size, and we really don't have much control over the sample size. So we don't yet know whether we're going to be able to reach a meaningful result here. We might, but it's going to take years. We're hoping to get these data, and we have a pretty clean identification strategy, if you're familiar with those terms, but we don't really know if we'll have the sample for it quite yet. So, in summary: we have a project measuring the impact of the two pre-commitment devices we're studying by randomizing GFS researchers who, in order to get data access, would already be submitting a preregistration or a registered report. The trial basically just asks: instead of choosing a preregistration or registered report, let us randomize you to that path. If you're interested in exploring these things, sign up, we'll randomize you, and then you proceed as normal. It's a really simple intervention that we're imposing; the infrastructure is already there through this collaboration, and we're just adding a little bit more to it. But this is also a pretty interesting way of thinking about evidence development, because there are all kinds of opportunities for this.
I mean, we could theoretically do the same thing for pretty much any other dataset that you have to sign up for, or submit an abstract for, something along those lines. We can do this elsewhere. So we can start thinking of this as a pilot for doing the same thing with many other datasets, maybe in a semi-centralized way, to develop a sort of prospective meta-analysis. We want to know what is going on with preregistration and registered reports not just among GFS researchers, but maybe among all kinds of large-dataset researchers. That would help power future studies: by expanding to other areas, we might get at those research outcomes that are tougher to reach and pretty much need larger sample sizes. So this is in some sense a pilot. Most importantly, we're looking at impacting science policy and practice through evidence generation. The point of this is to experiment with preregistration and registered reports by developing the evidence base. You often hear folks say, well, I don't necessarily agree with registered reports because there's no evidence for registered reports. Well, now we're developing that evidence base for these sorts of things. And to close out, I'm going to hand it back to Macy one more time. Thank you. So the last major practical point to go over, for those who are potentially interested in this trial, is the opportunity to be listed as a co-author on the final manuscript of the study. The specifics can be found on the website referenced in this slide, but it really just boils down to being a consistent and active participant and then dedicating a couple of hours to coding the data afterwards. This study is simple, yet so important for gathering evidence on these novel research practices.
If you've been looking for a reason to explore preregistration or registered reports, or just want to participate in expanding the knowledge base on how these pre-commitment devices affect research, we invite you to join the study. Enrollment is open now, and again, details can be found on the website linked in the slide. So now that we've gone through the background, the structure, and the analysis plan of this trial, we want to open the floor to any questions or comments you might have. Feel free to jump in the chat or the Q&A; there are a lot of really interesting details we'd love to chat about. Just give it a second. "What is the deadline for enrolling in the RCT?" That's a great question. In theory, you can enroll from now until pretty much the end of the GFS waves. There are going to be many waves of data along the way; right now we have the first wave, so we're not even at the longitudinal part of the study yet. As long as you have a project you are interested in that relates to at least the next wave of data coming out, you can enroll. So there is no deadline until the last wave of data is released, effectively four-plus years from now. There's a big long tail to this: research takes a long time, so this trial is also going to take a long time. Great question. We have also started enrolling folks already. Actually, I have a question for the crowd, for everybody else: are there people here who are interested in maybe joining the trial, or just generally interested in the idea of how this trial works? Any potential participants, anyone who wants to volunteer, that's great, excellent. We'd love to hear from you. This is a pretty open trial, right?
So most of the information here is going to be shared on OSF and so on. Things are not super secret, the way you might think of a medical trial as being. So we're always, always, always happy to chat about participation, whether here or elsewhere. Feel free to get in touch. Any questions on the design of the study, or lingering thoughts about the inferential side of things, how we're designing things, how the analysis actually works? Feel free to get as nerdy or as general as you like. Ooh. So I think Eva, I hope I'm pronouncing it correctly, asks: is there any way students can get involved beyond being a participant, at this point or in the future? That is an amazing question. This is not fully publicly announced yet, but we are working on a student project; hopefully we'll be able to talk a little more about it in the next couple of years. It's separate from the GFS project, or it could be a GFS project, but it's a different trial. So yes, there will be lots and lots of ways, in general, to be involved in many, many projects. But of course, any student can join the GFS trial itself at any time. You can help by looking at our preregistration and critiquing it, you can get involved by riffing off of the idea and spinning it off into your own project, and you could help also just by chatting. We'd love to hear from students in particular. Great, hopefully that answers the question. Then I think IU asks: as you said, it's a pragmatic trial. Why does the design not involve a control or usual-practice arm? Could you please elaborate on the arms of preregistration versus registered reports? Yeah, that's a great question. There's actually a lot in this question.
Maybe we can break it down a bit. Let's start with the end: why choose the arms of preregistration versus registered reports? There are two answers. First, what we are doing is adding an option to randomize on top of the existing GFS infrastructure. Setting the trial aside for a moment, there are two pathways by which you can get early access to GFS data under normal circumstances: submitting a preregistration or submitting a registered report. If the trial did not exist, those are the two ways you would get access. We are adding another pathway on top of them so that people can be randomized into one of those two. So those two arms existed before, regardless of the trial; they were pre-selected in some sense. Then there's also an existential question: if there's no registration for your idea, how do you enroll in a pathway at all? If there's no existing project, which would be the normal way of going through things, it's tough to think about how a person would enroll in that sense. But that is in part why the self-selection arms exist. If you go through the regular pathway and don't pre-commit to anything, your data can still be included in our trial that way. So there are multiple levels to that question. To get into a bit more detail: we're really comparing two arms, and we're also getting data on those arms in a descriptive sense. Part of that is a limitation, but part of it is that these are the comparisons of greatest interest. Does that answer the question? I know there's a lot more there; we could talk about it all day, but happy to follow up.
It looks like there are a couple of questions in the Q&A section too. Okay, quick recap. So Yana asks: will it be possible to get access to the survey forms you use? Not only will it be possible, it is possible. At the Center for Open Science, we like to be really, really open about things. We should double-check that these are currently publicly available, but on our OSF site, which we can link in the chat and which is also linked on the GFS trial website, you can get access to all of our forms and so on. Everything that we have is publicly available. So yes, please, we would love it if you had access to that. And yes, this recording will be on the COS website. I'm actually not sure, I'm not sure if Amanda is still here, what the schedule for that is. Yeah, it'll just be a couple of days. It'll be on COS.io slash events; I'll send that link through, and you'll also get something directly to the email you registered with for this event. Yeah, and we'll post it on the trial website as well. Everything will be easily accessible, right? Part of being the Center for Open Science. Another minute. Any last questions? Excellent. Well, Macy and I are available to chat anytime. We'd love to hear your concerns and questions, and if you're a potential participant, please, please get in touch. We'd love to chat with you. And there are many, many more projects along this line being planned right now, so expect to hear more from us very soon. We're really excited about this. Thank you very much. We'll close it out from here. Have a good day.