Hello, everyone. We'll get started here as folks continue to log in. Welcome — good morning or good afternoon, wherever you're logging in from, and thank you for participating. I'm very pleased today to have our two guests talking about the initiative for funding consciousness research through the registered report mechanism. We have Dr. Marzia De Lucia, a senior researcher and lecturer at Lausanne University, and Zoltan Dienes, professor of experimental psychology at Sussex and registered reports editor at Neuroscience of Consciousness. Marzia is a researcher who has submitted to the program, and she'll be talking about her experience with the registered report program and how she'll be proceeding. Zoltan is highly experienced with reviewing registered reports at a variety of journals, and the main journal partnering with us on this initiative is, as I mentioned, Neuroscience of Consciousness. It's the society journal of the Association for the Scientific Study of Consciousness, and we're very happy to be working with these great teams to advance open science and consciousness-related research. In a moment, I will go through a short introduction to what the initiative is and a couple of key requirements and deadlines coming up. The question and chat features should be enabled: if you have a question, please use the Q&A box. We'll be monitoring it throughout; if it's a clarifying question, I'll interrupt one of our esteemed guests, and if it's a content question that can wait, we'll make sure to get to it at the end. With that, I'm going to share my screen, so please give me one moment. Welcome again, everyone. This initiative is supported by the Templeton World Charity Foundation, and it's a partnership between the Association for the Scientific Study of Consciousness and the Center for Open Science.
I just realized I didn't introduce myself. I am David Mellor, Director of Policy here at the Center for Open Science. We are a nonprofit organization located in the U.S. whose mission is to advance transparency, trust, and reproducibility in scientific research, and one of the main mechanisms we support is registered reports — as a publishing and, in this case, funding mechanism. This particular initiative is designed to fund confirmatory research that advances consciousness studies through that two-stage peer-review publishing model. We'll go into more detail later, but to make the definitions clear: in the registered report publishing and funding model, ideas are prepared ahead of time, submitted to us at COS for preliminary approval of their appropriateness for the initiative, and then submitted to the journal for stage one peer review before any key data collection has started. That stage one peer review covers the introduction, the proposed methods, and the proposed analyses. Once in principle acceptance is given by the journal — its promise to publish regardless of the main outcome of the study — the study commences, and the final stage of peer review occurs after the results are known and submitted at stage two to the journal. The key feature of tying funding to these proposed studies is to make the process more efficient, so that you know the money to support the work will come once the work has been accepted in principle for publication at the journal. So the first key step is to send a pre-submission application to us at the Center for Open Science.
I'll share this link in a moment after I stop sharing my screen, but it's at cos.io/consciousness. Those applications include a structured abstract and a budget with a maximum of 50,000 US dollars. The work has to be relevant to advancing consciousness research and be a confirmatory study with very clear hypotheses. Those that are appropriate for inclusion in the initiative then submit a full registered report — the full manuscript through proposed methods and analyses — to one of the eligible journals that publishes consciousness research, or, via a route that Zoltan will describe in more detail, post it on a preprint server and submit it to a community called Peer Community In Registered Reports, which acts equivalently to review and approve proposed research methodologies. The expectation for the work being conducted is that it must be as open and transparent as possible: as much open data as is ethically feasible; as many open materials as can be shared; if analytical code is used, posting that as well; once the journal provides in principle acceptance, registering it in an approved registry; and making manuscript versions openly available, ideally through preprints, though other venues may be possible. That's key to making the research as available, open, and impactful as possible for others to use and reuse. The key deadline we want to emphasize today is December 16, when we want pre-submission applications sent to us. We are getting close to that date, and we're getting close to the limit of funds available, so please do submit those applications by then. After that, we'll no longer be accepting pre-submission inquiries. There are no specific deadlines for the later stages of submitting to the journal or commencing work, but the program ends in summer of 2024.
So that's when we would expect results to be shared through publication, preprinting, or whatever is available at that time. With that, I'm going to stop sharing my screen. I see we have a couple of questions; I'll address those and then pass it off to our esteemed guests. Let me check the Q&A. "Is it okay if you're applying for related grants simultaneously?" Yes — we have seen several cases so far where people are putting together a couple of different funding sources, so that is feasible. I see something in the chat also; give me one more moment. Okay. And for that, I will pass on to Marzia. Would you be willing to describe the work you're conducting and how you heard about this? — Thanks a lot, David. Yes, hello, everyone. I applied to this grant scheme earlier this year for a project that has to do with preserved neural functions in the absence of consciousness. I heard about it through some advertisement from the ASSC, if I'm not mistaken, earlier this year, and I decided to apply because I had an idea that I thought could fit this type of scheme very well. It's a project we started a few years ago, about assessing whether in unconscious states — specifically in comatose patients — we can find evidence of regularity encoding using EEG during the very first hours after coma onset. A few years ago, we published results showing that, even in unconscious states, the brain is capable of encoding some complex types of regularity. Nevertheless, we thought it would be important to first replicate these results, then find ways to make them more evident, more striking, more convincing by using higher-density EEG systems, and to generalize what we found at the time in about 24 patients to a bigger cohort and across different types of clinical management.
So in this new project, we have multiple sites involved. Meanwhile, the clinical management of these patients has been updated: there are different sedative agents administered, and patients are treated with what is called targeted temperature management, with body temperatures lowered to different target temperatures. Since the EEG signal is affected by all these factors, by having a bigger cohort across multiple sites we're also trying to disentangle the different factors that can influence our EEG results. So I applied earlier this year, and the application is fairly straightforward — as David said, it's an extended abstract — and I received positive feedback, I think, not even one month later. In comparison to many other funding schemes, I found this application not that painful. At the moment, I'm at the stage of writing the registered report; this first stage is my first experience with registered reports. There are good and bad sides I can tell you about. The good side is that, not knowing the new results in advance, I could really focus on the background, what we already know, and the methods I want to apply, without being influenced by trying to fit the results to the story I want to propose — it's really only introduction and methods. There is also quite a lot of emphasis on effect size and on justifying the number of included participants given the effect size we expect, so it really makes you think about this more explicitly than a more standard type of publication does. The downside, I would say, is that this type of scheme is maybe not ideal for more exploratory analyses, although, as far as I have understood so far, both planned data analyses and exploratory investigations can be accommodated in the final publication.
Of course, if you have a very high-risk project and the stage one registered report goes through, you also have to accept that if you don't get the desired results, you have to publish them anyway — which might not be psychologically acceptable for everybody. So I'm trying to accept beforehand that these results might not replicate, or that there might be other reasons why we found the effect in the first instance but not in this new project. That's all I wanted to say, but please let me know if there are questions or clarifications you would like; I'm available for that. — Great. If anybody has questions about Marzia's work, or why I thought it was especially appropriate for the registered report mechanism, please put them in the chat. I believe you also have the raise-hand feature and we can allow you to talk. Until questions come in, I'll pass it to Zoltan to talk about his experience editing, reviewing, and shepherding along many registered reports. Zoltan? — Well, I edit registered reports at the journal David mentioned, the journal of the society, the ASSC, but I'm also involved with another route you can go, which is Peer Community In, and I'll say a little about that. And just to say, you're not limited to those options — if there's some other journal you're interested in, you could talk to David, right? — Yes: Neuroscience of Consciousness and Peer Community In Registered Reports are the two main outlets we're encouraging folks to apply to, but other journals that accept registered reports may be acceptable if the disciplinary fit is appropriate. Our contact information is on the website, so if you have questions about that, let us know and we can double-check that those would be appropriate.
So Peer Community In is meant to be an alternative to traditional publishing. What happens is you upload your preprint as a stage one. Once you've got the acceptance from COS that your idea is basically sound, you write up your stage one — your introduction, your methods, and your planned statistics — and put that on an archive like PsyArXiv, the OSF, and so on. Then you enter the Peer Community In Registered Reports system, and that archived preprint will be edited and reviewed as per normal, and we take you through at Peer Community In all the way to stage two. In a registered report, stage one is when you propose your methods and analytic methods; you then get in principle acceptance saying that's fine, you can go ahead; you collect the data following those methods; and then you submit your stage two, which is the complete manuscript with the analysis of the results. At Peer Community In, we get to the final stage of, hopefully, accepting that. And bear in mind, at this point it has all been on preprint archives, so it hasn't been published in any technical sense — you've published it on the preprint archive, but it hasn't been republished by Peer Community In. That means you are free to have it published in any journal. We have a couple of dozen Peer Community In Registered Reports-friendly journals, which guarantee they'll publish anything we've accepted, given it fits the journal remit, and you pay the article processing charges if you want to go to those traditional journals. I did say I wouldn't share a screen, but let me just share it to show you that list. So this is the list; you can get it by going to the Peer Community In Registered Reports website — if you type "Peer Community In Registered Reports" or "PCI RR", you'll find it.
And this list of journals includes Cortex, for example, which publishes consciousness-relevant research, and Royal Society Open Science, and there are others you may be interested in, so have a look. Psychology of Consciousness is obviously a clearly consciousness-related journal. So that is a process you can go through, which gives you some power of choice among these available journals in terms of which journal you end up in. It also, incidentally, shows that we as academics do all the work — so if you want to pay article processing charges to a journal, fine, go ahead, but you might want to ask yourself in the end what is gained by that. Anyway, that's the process. Now, one thing to consider when you pick a journal here is that they have different requirements for registered reports, so if you're interested in a particular journal on this list, you need to look at what it asks for. For example, Cortex wants a 2% significance level with 90% power, or, correspondingly, a Bayes factor threshold of more than six or less than one-sixth. Most journals have 5% significance, but you need to check the power requirements, and most would correspondingly have Bayes factor thresholds of three and one-third. Royal Society Open Science doesn't have such particular requirements, and that's because, you can argue, there are some situations — I was wondering, Marzia, how you got your power up if you're dealing with patients; that can be a tough thing — where it's not always possible, with select patient populations, to get the high power that might be required, for example, by Cortex. But the data are still worth having, because they can be built on. That's the philosophy of Royal Society Open Science.
You just discuss it with the editor and reviewers — you make your case for what you can do and why it's valuable. But in most of the journals, what they're looking for in a registered report is that you can give an answer to a question, because registered reports are normally confirmatory research; that's the typical situation. So there must be grounds for saying you have support for H0 or H1 — for the hypothesis of no difference or of a difference. Put another way, what you would like to do is test the theory, because it's confirmatory research, and you'd like to severely test it. But you can only severely test a theory if you can have strong grounds for supporting H0 or supporting H1 — good evidence for H0 or good evidence for H1. So you need to have worked out in advance that you can collect enough subjects, items, or whatever is required to get the sensitivity needed for that sort of support for the null or the alternative hypothesis. And as Marzia was saying, this is the point where people who haven't done registered reports before first come unstuck, or first find they need to think in a way they haven't thought before. Because if you want to falsify a theory, or have evidence count against a theory that predicts an effect, and you're dealing with frequentist statistics, you need to be powered to pick up the sorts of effects that are relevant to that theory. If you don't want to miss effects that could be relevant to and support the theory, you need the power to pick up all of them — which means, by this logic, you need the power to pick up the minimal interesting effect size for that theory.
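To make that arithmetic concrete, here is a minimal sketch — not part of the initiative's requirements, and the minimal interesting effect size of d = 0.3 is a purely hypothetical stand-in — of how the required sample size follows from the thresholds just mentioned, using the standard normal-approximation formula for a two-sample comparison:

```python
# Normal-approximation sample size for a two-sample t-test:
#   n per group ~= 2 * (z_{1-alpha/2} + z_{power})^2 / d^2
# d = 0.3 below is a hypothetical "minimal interesting effect size";
# in a real stage one you would justify this value from the theory.
import math
from statistics import NormalDist

def n_per_group(d: float, alpha: float, power: float) -> int:
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided criterion
    z_power = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_alpha + z_power) ** 2 / d ** 2)

# A common default threshold vs. the stricter Cortex-style threshold:
print(n_per_group(0.3, alpha=0.05, power=0.90))  # 234 per group
print(n_per_group(0.3, alpha=0.02, power=0.90))  # 290 per group
```

Note the jump from 234 to 290 per group just from tightening alpha — and both figures dwarf the "we usually run 20 subjects" habit Zoltan warns against below. (The normal approximation slightly understates the exact t-based requirement.)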
So again, you scientifically justify what that effect size is, so that the statistics are connected to the theory testing. That can be a difficult thing, but it's something you need to think about. In terms of a Bayes factor, it's slightly easier, but you do need to say what sort of effect is being predicted by the theory. A default Bayes factor is one where you say, here's a default effect size I'll pull off the shelf and test my theory with — but why is that relevant to your theory in the context of this particular experiment? You've got to justify it. So that can be the first sticking point for people: when you're pushed to justify why the effect sizes you're coming up with are relevant to your theory in this particular scientific context, that takes a bit of thought. Then you work out how many subjects would be required to have the sensitivity to get support for H0 or H1. It's typically a bigger N than is routine for experiments and studies in the field, so when you do a registered report, you're likely looking at a rather greater N than you're used to. In other words, you can't just say, "in my field we typically run 20 subjects, so that's what I'm going to do" — that isn't going to fly. And that will affect your budget as well, so bear that in mind. The budget you put in when you first submit your proposal is a tentative budget, and it will be updated if needed as you go through the process and reviewers and editors suggest you might need more subjects. — Can I ask you a question? — Yes. — Do you think this registered report format will eventually be accepted by every type of journal? Or is there a reason why only some journals accept it so far — is it just a matter of time, or are there other considerations? — Yeah.
I mean, registered reports started in 2013 in Cortex, when Chris Chambers set them up, with a committee of which I was a member. There was something like it beforehand, but it was when Chris set it up in 2013 that he made sure we would make this work and nail it down so that everything is tied together — so the first real registered reports date to 2013. When you look at the reactions on the web at that time, a lot of people said, oh, this will never work, it's bad for science — it's quite funny looking back on it. And then the uptake has been phenomenal, really. You'd imagine a system like science has a large amount of inertia, yet less than ten years on, over 300 journals now offer this, in all disciplines — sciences and humanities, even qualitative methods now. So from that point of view, the uptake has been really quite impressive, and it's growing all the time; there are always more journals taking up registered reports. Peer Community In now gives any journal a way to take registered reports on board, because they can become a PCI RR-friendly journal and we do all the work for them. Because it is hard work for a journal doing registered reports: they need an editor who knows what they're doing, and they need to change the workflow system — what happens under the bonnet — which needs updating for registered reports, and that can be a slow and difficult process. So hopefully more journals will take it up now with Peer Community In: they just need to say they'll be PCI RR friendly, we say that's great, and they can use our expertise. — One question came through about good examples of Bayesian sampling plans, if you happen to know of any; I'm also looking through our resources now. — So, yeah, what we do with Bayesian sampling plans is, first of all, decide on a threshold, which might be decided for you by the journal, or which you decide on yourself.
So you need your Bayes factor, say, to be more than three or less than one-third. Now, often with a registered report — in fact, I would recommend it — you run a sort of pilot if you haven't dealt with this sort of data before. If it's a paradigm your lab is used to running, that's fine; but if it's a new paradigm and you don't know what the error variances are, I would suggest running a pilot. You need an estimate of the error variance so you can do your power calculations, but also your sensitivity analysis for the Bayes factor, and so you know the way these data behave — if anything surprising is thrown up, you already know about it and can plan your analysis with full information about the sort of data this paradigm produces. So either base it on your past experiments or run a little pilot, which doesn't have to be the full experiment — it could be just one condition, for example — just so you see what the data are like and the sort of error variance you have. Now, once you know a standard error, you can estimate how many subjects would be needed to have a 50% or 80% chance, or whatever, of reaching the Bayes factor threshold — getting more than three, or, what will take more subjects, less than one-third to get support for the null hypothesis. Get an estimate of that, and then you can say: we will run subjects until the Bayes factor is more than three or less than one-third, or until we reach a maximum — because you'll need some maximum, so that there's a definite stopping point set by the limit of your resources. And for that maximum, I don't think any journal has a particular rule about its relationship to your estimate, other than that it's got to be a bit more than the estimated N. There's a bit of leeway there, because the estimated N needed to reach a certain Bayes factor is just an estimate.
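The stopping rule just described — run until BF10 > 3 or BF10 < 1/3, or until a resource-limited maximum N — can be sketched in a few lines. This is an illustrative simulation, not a recommended analysis: the Bayes factor here uses a rough BIC approximation for a one-sample test, whereas a real stage one would specify a default or informed Bayes factor and justify it.

```python
# Sequential Bayesian sampling plan (sketch): test after each batch,
# stop at BF10 > 3 (evidence for H1), BF10 < 1/3 (evidence for H0),
# or at a maximum N set by the limit of your resources.
import math
import random

def bf10_bic(xs):
    """Rough BIC-approximated Bayes factor: mean != 0 vs. mean == 0."""
    n = len(xs)
    mean = sum(xs) / n
    rss1 = sum((x - mean) ** 2 for x in xs)  # H1: mean free
    rss0 = sum(x * x for x in xs)            # H0: mean fixed at 0
    bic1 = n * math.log(rss1 / n) + 2 * math.log(n)  # mean + variance
    bic0 = n * math.log(rss0 / n) + 1 * math.log(n)  # variance only
    return math.exp((bic0 - bic1) / 2)

def sequential_test(sample_one, n_min=10, n_max=200, batch=5):
    xs = [sample_one() for _ in range(n_min)]
    while True:
        bf = bf10_bic(xs)
        if bf > 3 or bf < 1 / 3 or len(xs) >= n_max:
            return bf, len(xs)
        xs.extend(sample_one() for _ in range(batch))

# Simulated scenario informed by a pilot: true effect of 0.8 SD.
random.seed(1)
bf, n = sequential_test(lambda: random.gauss(0.8, 1.0))
print(f"stopped at n={n}, BF10={bf:.1f}")
```

The n_max cap is what gives the plan a definite stopping point; in a stage one you would report the estimated N for each threshold plus that maximum, as Zoltan describes.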
So you don't really know what your Bayes factor is going to do once you actually run a certain number of subjects, so you can put in some leeway — an unspecified amount, just somewhat more than the estimate you need to reach your Bayes factor threshold. — Any more questions? — Yeah, one came through; I'll type the answer and also answer live. "Are there chances" — I think this one is for me — "for cost sharing or cost reduction in registered reports through collaborative efforts, since a lot of these research costs come from MRI, EEG, and PET?" To answer the first part of the question: yes. If you're collaborating with others who are pursuing slightly different research questions — I was about to say kill two birds with one stone, but when we're talking about human subjects research, that might not be a good joke — you can collect multiple data sets at one time point. That would be absolutely fine, assuming it can work out logistically. The second part of the question was: is it permitted to access historical records from relevant institutions? That's a maybe, and what I mean is that this falls under the banner of using existing data. It might be absolutely fine, especially if you don't have insight into what trends are already in that existing data set. Some journals will allow it as long as you state how ignorant or not you are of those data sets and of any insights you have gleaned from them. The problem we try to avoid is generating hypotheses while knowing how much variation there is on the key factors within a data set, which biases what questions you're going to ask — you end up building hypotheses and testing hypotheses on the same existing data set. To prevent that, there are journal-specific guidelines: some journals permit it, some do not, and even where they do permit it, you can't have done any previous analysis or be aware of any trends in that existing data set.
So that's a big maybe. Anybody want to add anything? — Yeah. At Peer Community In, we came up with a systematic way of dealing with that problem. The classic registered reports case is where the data don't yet exist — that's what I've been talking about. But we decided to loosen that, in a way in which the risk of bias is explicitly flagged. The degree of bias control is strongest when the data do not exist; that's what we call level six. Level five is that the data exist, but there's no way you could have accessed them yet — they're hidden in some way. The level down from that: the data exist and you could have accessed them, but you haven't, and you need to give some assurance about that. And we go down the levels to level one, where you might have accessed the data and even done some analyses on them, but you haven't analyzed the key thing you're investigating. Clearly, there's a lot of risk there. So what we do is highlight that risk, and we ask for more controls, more reassurances, more finalized error-rate-type corrections and that sort of thing. At Peer Community In, we will consider registered reports at all levels of risk, but bear in mind that the level will be explicitly stated — that this is an example of this type — so we don't guarantee it's as bias-free as the greatest level of bias control. If you then want to take it from Peer Community In to a journal, as David was saying, you'd have to see what level of bias control that journal would be happy with; but Peer Community In itself deals with all levels. — And I just put a link to those PCI guidelines in the chat, so people should be able to take a look. Let me share my screen for one second, just to show what to look for. So these are the different levels of risk that Zoltan was just describing.
Something else we do at Peer Community In, which is yet another innovation: one thing to bear in mind with a registered report is that there's a certain time delay, because you need to go through the review process before you start collecting your data. And that review process takes as long as it does for a typical article, from submitting to finally getting your acceptance — about the same number of rounds back and forth — so it could be six months or so from when you submit to when you get in principle acceptance, IPA, which means your stage one has been accepted and you can start collecting the data. So there's a six-month lag before you can even start. What we have instead is a scheduled track submission, where you write up a brief summary of what you're going to do — pretty similar to what you would submit to COS in the first place about your idea — and we send that out to reviewers and say: in six weeks' time, you'll get a manuscript; do you guarantee to review it within a five-day window? And reviewers will say yes, on specific dates: between Monday and Friday of this week, when I get that manuscript, I'll do the review. So you as an author are taking on an obligation: you have to write it up in whatever time is agreed — say six weeks. In satisfying that obligation, what you gain is that you get the reviews back in five days, and the system works as advertised; it's now our most popular form of submission. So that dramatically improves the initial review stage, and it's a facility not offered by any journal, only by Peer Community In. — Can I ask about your experience with final acceptance by traditional journals of papers that went through this process? Are journals ultimately happy to take them? — That's right.
I mean, the friendly journals I listed have guaranteed it, so there's no question about it — they will take it; it's your choice. For all the friendly journals, given you satisfy the remit of the journal, which is decided by Peer Community In, and given you've satisfied the explicit requirements — for example, power of 90% at 2% significance, or the Bayes factor threshold, whatever it is — and you pay the APCs, it's guaranteed. There'll be no further reviewing process; there'll be reformatting into the journal's format, but no further review. — There are a lot of advantages to doing it that way — you can tell our enthusiasm about it — and I would say I hope this is the future of scientific publishing as well. We've paid billions to for-profit journals for a while now. You asked earlier whether all journals will do it this way in the future, and we don't know, but it does highlight the benefits of how science and reviewing should work. It also, as Zoltan mentioned, highlights who's doing the real heavy lifting in a lot of these practices. So there are a lot of key benefits the system provides through that Peer Community In versioning of preprints. Do you mind talking a little about the logistics of that? Once you post it on a preprint server — pretty much any preprint server would be appropriate; bioRxiv and PsyArXiv are probably the two most likely ones, I would imagine — comments come in, and then you stick with the same preprint but update the version as you incorporate the comments through a couple of rounds of review. Is that correct? — Yes, exactly. But bear in mind — sorry, go ahead. — So once the work is finally completed, you might have four or five different versions of the preprint, going from preliminary, to revised, to re-revised, to finally having some data, to having cleaned-up and commented-on data.
But the history of all that is retained within the versions of the preprint. — Yes, the author has control in that they put it up; but what we do, once we've accepted a stage one, is put up a version ourselves that we control. That means the version we put on the OSF is safe, in the sense that it's guaranteed to be there: the stage one is always searchable, always available, and will always be available, and it's guaranteed to be the stage one we accepted, because we put it there. — Gotcha. So you retain, essentially, editorial control over the accepted version once it's accepted. — Yes, that's right. — Cool. I'm sorry, Marzia. — No, I was asking whether there is some form of invitation of these comments, or moderation. Because if you post a preprint on any of these archives, comments will not come naturally, right? So do you invite them? — Yes, exactly. What you do is go onto the Peer Community In website — you can think of us as not a journal, in the sense that we don't publish — and then it's as if you're dealing with a journal, because you deal with an editor; we call them a recommender. — I see. — And the recommender solicits reviewers, just as an editor normally would, until they get a good number of reviews, and then it goes back and forth. So it's not just an open invitation for anyone to make comments that we'll bear in mind — the authors could do that themselves, but that isn't part of the Peer Community In process. — Yeah. And if one wants to become part of it, as a reviewer or a recommender? — If you go to the Peer Community In Registered Reports site, you can have a look at our content there and have a read, and you could put in an expression of interest in being a recommender, which is our term for an editor.
And one thing I'd also suggest: I personally send my registered reports, which I'm working on, through Peer Community In. But think of Peer Community In as a set of communities. And each community is what you could think of as like a journal. So Registered Reports is one community. The other communities tend to be biology-based, neuroscience and things, but it can be anything. So what I'd like is for people to maybe set up a community that could be, for example, consciousness science. Then my work on consciousness could go to Peer Community In Consciousness. And to be honest with you, frankly, I'd be happy to keep it there and not go to the journals. Because the work has already been done. And as long as the editorial team is made up of, you know, very good people, then it has the quality assurance that any journal would give. So if anyone does want to set up their own community in a discipline related to consciousness, or consciousness itself, I would happily recommend that. A very related question just came in, and we've kind of been answering it in bits and pieces: what is the point of the journals at that point? If all proposals go through the same review pipeline, what distinguishes a study published in one versus another journal? Yeah, what is the point of the journals? I think that's a good question. And Peer Community In sort of highlights that. I mean, at the moment, what you gain is the prestige, because when you go to promotion committees and so on, they say, oh, you know, you published there. And, you know, that's all fantastic. But what is that all about? That's just show really, isn't it? So once Peer Community In has its own prestige, which I hope it will, then you don't need that. And I hope things are changing. I mean, shifting. Within the UK, for example, I don't know what's going to happen to the Research Excellence Framework, by which universities are assessed.
But there will be a movement within that, we're told, toward open science practices, a greater emphasis on them. Yeah, and as reputation and awareness grow, as it's demonstrated that this work is held to the same standards of importance and credibility, then what you're describing will become less and less relevant. Today, we often use journal prestige as a kind of indicator or shorthand for that reputation. It'll just take a little time for that to percolate out. Several studies have now been done, including a recent one in Royal Society Open Science, showing that methodological quality is not correlated with journal impact factor once you're above a minimal amount. In fact, there are even negative relationships. So what that means is that when you pay, say, $10,000 to publish in Nature, you're just joining a rich person's club, which does not, in fact, indicate better quality. As soon as that is realized, and what you're rewarded for is openness and rigor, then hopefully that whole system dies away. We are the revolution. Okay. Maybe I missed this, but the deadline for abstracts is December 16th, correct? At which point do you have to have the registered report ready? So that's correct. The first deadline is December 16th for submitting, and I'll put a link in just a second to the structured abstract through our website. There is no specific deadline after that for submitting a complete registered report to a journal, but the end of summer 2024 is when the program will be wrapped up. And so there is a moderate amount of time pressure to submit that registered report early on. So I would encourage you to be relatively speedy, without putting too much pressure on yourself. Let me just put a couple of links into the chat right now. I believe questions are dying down, and we've gone through a lot of information, a lot of basic information, and a lot of talk about the future revolution of scientific credibility, transparency, and overall awesomeness.
Did either of you have any closing remarks or thoughts about what you'd like to see happen in the next couple of years with your work, or anything else you'd like to share before we wrap up? Yes, maybe I can say something about this new experience that I had with registered reports. I think that in the end it might be a scheme that can accommodate any kind of project. So potentially it's something that can really become a standard, maybe echoing what Zoltan said. So maybe it would be really great if the Center for Open Science would also open new calls for different topics. And I think this could be a success. For consciousness, I believe there has been quite a lot of buzz around studies that could not be replicated. So potentially this is one main target. But in psychology and cognitive neuroscience research, I think there are lots of studies that could benefit from this type of scheme. So well done with this initiative. And let's see the results in the end. If I could just follow up on this. I mean, I phrased things, and we sort of phrased things, as confirmatory research; you think of theory testing, that's how I put it. But it doesn't have to be strictly that. So during the pandemic we had a registered report that was estimating a certain COVID-relevant parameter. And the idea was just to estimate it in an unbiased way and get the benefits of registered reports in that way. So if you wanted to just estimate the amount of something that's relevant to consciousness research, it doesn't have to be strictly theory testing. You can still benefit from the registered report process in doing that. Because, for example, at the beginning stage, in forming the stage one, you haven't done anything yet.
So that means the discussion between reviewers, editor, and author takes on a completely different dynamic than a standard paper, where everything is already done, because you're working together to find the best way of solving this problem, which could be estimating a parameter or could be testing a theory, whatever it is. So you all gain, and you can change things. But with the standard paper, you've already done it. So if some reviewer says it should be done some other way, what do you do? You dig your heels in and become sort of defensive and argumentative. Of course, that's not to say you might not have disagreements in a registered report, but it does feel different, working together to find the best solution and using everyone's knowledge to find the best methods, in terms of experimental paradigms, and also the best statistics to answer the question. Yeah, I think that's a really good point. When you're still developing what's going to be done, it's collaborative, trying to solve a problem, as opposed to trying to convince someone that what did happen was the best way it should have been done and they're just wrong for thinking otherwise. And by the way, we can't rewrite history, so what are you going to do about it? All right, I greatly appreciate your time, sharing your experiences, your recommendations, and your processes. Thank you to all the participants for your great questions. We look forward to your applications. Thank you very much, and I hope you have a great day.