So welcome, everybody. This is one of the sessions in the block that was organized by the Federation of American Scientists, in response to their call for policy recommendations on what the federal government should be doing to promote open science. I am getting a couple of echo notes, so I will do my best to fix that, but hopefully it's working okay for most folks. It is my pleasure to introduce the co-hosts and the panelists for this session. This session is about promoting transparency and rigor in federally supported research through open science practices, and we've got four great folks here to share their perspectives on that question. Going first will be Jessica Polka. She's the open science program director at the Astera Institute. Thank you, Jessica, for participating. Next, of course, is Brendan Nyhan. He's the James O. Freedman Presidential Professor in the Department of Government at Dartmouth College. Brendan, wave. And then we've got Sean Grant, who's a professor in the HEDCO Institute for Evidence-Based Educational Practice at the University of Oregon. Sean is also the chair of the TOP Guidelines (Transparency and Openness Promotion Guidelines) Advisory Board, which we support and encourage at the Center for Open Science. And then finally, yours truly. Welcome, everybody. My name is David Mellor. I am the director of policy at the Center for Open Science, and I'll be sharing my submission to the FAS Open Science Policy Sprint last. But without any further ado, I'm going to stop sharing my screen and pass it over to Jessica.

Thank you so much, David. I really appreciate the introduction. It's great to be here with all of you. While I am now at Astera, I'm going to talk today about some work that I did while I was executive director of ASAPbio. The title of the memo that I'll tell you about is Make Publishing More Efficient and Equitable by Supporting a Publish, Then Review Model. As I'm sure everyone on the call is very familiar, the Nelson Memo came out in August of 2022 and called for all federally funded research to be openly available without embargo, removing the 12-month embargo imposed by the Holdren Memo and also allowing for reasonable publication costs and costs associated with submission, curation, data management, and article publishing. So the Nelson Memo was a huge step forward for open access in the United States and, by extension, globally. But I think it raises the question of what "reasonable costs" are. This is not a term that is clearly defined within the memo, although they do note that American taxpayers are already paying an estimated $390 million to almost $800 million annually for the cost of publishing federally funded research. Article processing charges, or APCs, the charges levied by a journal for publishing an article, have been found to be increasing; one study found a 50% increase over the decade from 2010 to 2019. And APCs vary widely, but they can cost over $12,000 for the publication of a single article. But APCs are not the only barrier, and not the only cost, of publishing. The traditional publishing system also costs time. Papers are kept in the dark for months or perhaps even years before they're actually published as a final peer-reviewed paper.
This process, which sometimes involves repeated submission to more than one journal, repeated rounds of review, repeated rejection, and repeated requests for review, has been estimated to cause up to 15 million hours of wasted researcher time in duplicated peer review. But there is a way to help make research immediately open. Preprints, which are manuscripts posted online without formal peer review, can make research immediately openly available. In many disciplines, for example physics and economics, preprints or working papers have been used extensively to share research prior to peer review, and they allow for community feedback, ideas, and discussion while the paper is under review. But there's another way to use preprints, which is as the foundation to reimagine a publishing workflow where, instead of reviewing and then publishing, you publish and then you review. After posting on a preprint server, there are services that will conduct peer review and allow the author to update and revise the paper in the open. This results in a publicly available, peer-reviewed paper, with all of that work happening in the open.

So what are the benefits of this? Why would anyone want to do this? The first is really that the research, not only the original version of the manuscript but also the peer-reviewed version and the reviewers' opinions and thoughts, is available sooner. And this makes peer review more efficient and rigorous. It is more open to scrutiny; it's easier to determine what actually happened in the peer review process. Experts are able to weigh in even if they haven't been invited to participate directly in the formal peer review. And it's more equitable, because people who might not be invited to peer review are also able to bring their expertise to the table. Also, taxpayer research dollars are used more efficiently, because I think the very high APCs that I mentioned are driven by the prestige signaling that journals provide. By unbundling the functions of coordinating peer review, archiving, and publishing, and distributing these tasks to open infrastructures, researchers are better able to make decisions based on the services rendered.

Preprint usage is growing in the life sciences, for example, and in many other disciplines as well. This plot from Europe PMC shows the cumulative number of preprints as well as the number of preprints posted on a monthly basis. Preprint review is growing too, although at a much smaller scale. It is really in its infancy, but there are already several services that provide open review of preprints. So where do we go from here? The recommendations in the memo are, first, that Congress should commission a National Academies report on the benefits, risks, and projected costs to American taxpayers of supporting alternative scholarly publishing approaches, including publish-then-review infrastructure. OSTP should coordinate support for key disciplinary infrastructures and peer review service providers, and also draft a policy that federal agencies can consider adopting. And individual funding agencies should follow the lead of the NIH, which in 2017 put out a guide notice indicating that preprints could be cited anywhere journal articles are cited in grant applications and reports. They should also extend these provisions to include publicly accessible reviews. So thank you for your time.
I've put a link up here to the original memo, and these slides can be found on the OSF project for the conference. Thank you.

Thank you, Jessica, so much. Most questions are going to be held until the end unless there are any clarifying questions folks would like to ask at this time. If there are not, we'll move on to the next speaker and then start some discussion and Q&A after the last speaker. Seeing none, I'm going to introduce Brendan to talk about promoting reproducible research in federal funding. Brendan?

Thanks, David. I do not have slides; this is a pretty short talk, I'm sorry. It's refreshing. I've got slides, so I can make that joke. Oh, got it. Okay. All right. So I'm going to be talking about the white paper that I produced as part of this open science policy sprint, which is intended to help describe a pathway toward the federal government better incentivizing reproducible research practices. That turns out to be a complicated task. So I'm going to lay out the motivation for this effort, the evidence that I think justifies moving in this direction, and then what steps might be taken. Obviously, I think everyone here shares the notion that science is generating important, actionable insights that improve the quality of life for people in this country and around the world, from new treatments for diseases like melanoma to the behavioral science that's helping inform the response to COVID-19. But we're still seeing challenges with promoting reproducible, rigorous, credible research given the set of incentives that scientists face. The evidence for this, I think, can be found in many different sources. The Center for Open Science obviously is one of the key players in this space and has contributed a lot of knowledge. Without recapitulating that entire debate, I think it's fair to say we're not getting as much value for our scientific research dollar as we could. The current funding models do not adequately incentivize, in my view, the use of reproducible research practices, most notably but not limited to pre-registration, and that has in many cases led to research that is not reproducible, not just in the social sciences where I do my research, but in the natural sciences and clinical medicine as well. And so to the extent that what I just stated is true, we're losing valuable, important scientific insights that might be gained for the same amount of money that we're currently spending. Better incentivizing these kinds of practices would in turn help us generate more rigorous, replicable results that other researchers could build on, and generate all the kinds of auxiliary benefits that we hope science will create. Now, the good news is, as I think people on this call are well aware, that the federal government has started to take more seriously the notion of promoting open science; of course, the Nelson Memo was referenced. There are many agencies and other folks in the government who are thinking about these issues, and there are a number of reports and studies from different parts of the government or associated institutions like the National Academies, the Government Accountability Office, etc., getting into these issues. But we haven't yet changed how we administer and deliver scientific research funding to encourage these practices as much as we might.
The most notable change in this direction is the establishment of clinicaltrials.gov and then the requirement for registration in it, but we've still found downstream problems even in that area. So there's clearly a lot more to do to extend those kinds of models across more of the scientific research funding that's being done at the federal level, and to think, within each field, about the best way to promote these kinds of practices. What the white paper lays out is a set of steps that could be taken to start a set of pilot initiatives, which could begin the process of thinking about how scientific research funding could better encourage the use of reproducible, rigorous research practices, in whatever way makes the most sense given the topic at hand, the research methodologies in use, etc. That's obviously going to vary, and it's going to be important not to use a one-size-fits-all model. But I think we shouldn't underestimate the role the federal government could play in changing the norms and incentives on this issue. It's very hard to be successful in an area when other people aren't using these kinds of practices and you are; it makes the publishing and grant funding landscape more difficult in an asymmetric way. Whereas in fields where the federal government is a key, or the key, funder, you're now creating a more level playing field where everyone is encouraged or incentivized to use these kinds of practices. So I think it could be really powerful.

So what are these steps? First, we propose creating discipline-specific offices that would develop initiatives to promote rigorous and reproducible science within specific NIH institutes or NSF directorates. Again, there's not going to be a single model that's the right one here, but this would be a kind of starting point within a particular area of research for thinking about how to promote these kinds of practices. The next step would be to learn, in that area, about what kinds of practices are being used, how often people receiving federal funds are employing them, and what approaches, models, or incentives might better encourage funded researchers to use them going forward. That could be in the criteria for the funding solicitation. It could be in the questions asked in funding applications. It could be in the evaluation process. And one final thing is that it could even be downstream, in terms of post-grant reproducibility studies; that's something I'll talk about in a moment. So there are a lot of different steps at which the office or institute in question could be thinking about how to encourage these practices. What I hope they could do is develop some of these ideas about what might be tested and then conduct some studies, prospectively and retrospectively, within their area about how to incentivize and reward these open science practices in their field. The final idea I alluded to before would be to do a pilot study or set of studies in which the federal government actually partnered with an independent third-party organization like the Institute for Replication to commission replication studies. This could serve as a measurement tool, to learn what percentage of the research being funded is reproducible, but it could also be implemented prospectively in a way that was communicated to the grant funding recipients.
And that would in turn hopefully encourage them to use reproducible workflows and approaches, knowing that their work would be replicated, at least with some probability, by some third party. So I'll stop there. There's a lot to dig into, and I hope we can talk about it in the Q&A. And of course I defer to the expertise of the folks in the room who know more than I do about the particulars of how to implement this. But I think there's a lot we could do, and I hope we start moving in that direction. Thanks.

Thanks, Brendan. I was taking some notes and I've got several questions that I'm going to hold off until the end, because I think they'll open the door for a lot of good discussion, and I want to make sure we've got time for everybody. But before I pass the baton to Sean, I'm going to unpin you and just ask if there are any clarifying questions that folks have before we move to the next speaker. Seeing none, I'm going to give it over to Sean Grant.

Great. Thanks so much, David. And thank you, everyone, for taking the time to attend this presentation today. It really means a lot; I know we're all very busy folks. So, I put in the chat... actually, if everyone could mute, that would help; the participant list shows who's unmuted. I think that's it, yeah. Perfect. Well, thanks, everyone, for attending today. I've put in the chat the link to the piece that I wrote for the Federation of American Scientists series on open science policy. Mine focuses on incorporating open science standards into the identification of evidence-based social programs. And essentially the motivating question for this presentation is whether closed science makes it too easy for social programs to be considered evidence-based by federal, state, and local governments and nonprofit agencies. The background of this, in short, is that evidence-based policy uses peer-reviewed research to identify programs that effectively address important societal issues across various areas of domestic social policy. There are several agencies in the federal government in particular that run groups called evidence clearinghouses, which review and assess the quality of research published in peer-reviewed scientific journals. They use those assessments to identify programs that they think have evidence of effectiveness and are therefore eligible for literally billions of dollars of funding every year. But the replication crisis in the social and behavioral sciences raises concerns that research publications may contain a high, if not alarming, rate of false positives rather than true effects, in part due to the selective reporting of positive results that closed science makes possible. So even in studies that meet standards of evidence related to internal validity, like randomized experiments with low participant attrition, the ability to conduct undisclosed multiple hypothesis testing and then selectively report positive results remains a plausible alternative explanation for the results in a study, rather than true intervention effectiveness, which is the assumption underlying these evidence-based funding models. The use of open science practices, like study registration and availability of data and code, can, I think, engender trust that studies provide valid information to decision makers, but these practices aren't currently collected or incorporated into these assessments of research evidence.
So the recommendation from this presentation, to rectify this issue, is for clearinghouses to incorporate these open science practices into the standards and procedures that they use to identify evidence-based social programs. Let's dive into all of that a little bit deeper. To start, as I mentioned, evidence-based funding of social programs is becoming increasingly prominent. The federal government is increasingly prioritizing the curation of research evidence and the use of that curated evidence to make decisions about which policies and which social programs to support. A key component of this evidence-based policymaking is what is sometimes called evidence-based funding, which either requires or incentivizes research evidence of the effectiveness of social programs in order for those programs to receive government funds in the real world. These evidence-based funding mechanisms typically prioritize interventions with evidence of effectiveness that are supported by randomized controlled trials or other research studies using causal inference methods. And the major source for finding these randomized trials is the peer-reviewed journal literature. These evidence-based funding mechanisms rely on, again, what I've called evidence clearinghouses, which review the literature. In this effort, federal evidence clearinghouses, these influential repositories that curate evidence on the effectiveness of programs, are widely relied upon to assess whether programs across various policy sectors are truly evidence-based. These clearinghouses generally follow a set of explicit standards and procedures, codified in manuals, to assess whether published studies both use rigorous methods to assess effectiveness and, a key thing for this presentation, report statistically significant, positive results on outcomes of interest, with no statistically significant negative results reported.

As one key example, I work in a college of education. The Every Student Succeeds Act, also known as ESSA, is what authorizes the federal funding of our K through 12 primary and secondary education systems. That act directs states, districts, and schools to implement programs that have research evidence of effectiveness when using federal funds for K through 12 public education. The What Works Clearinghouse, which is under the Department of Education, is the clearinghouse that identifies programs that meet these evidence-based funding requirements of ESSA. Consequently, the ratings from the What Works Clearinghouse on whether a study or a program meets the higher or lower tiers of ESSA, or doesn't, have the potential to influence the allocation of billions of dollars appropriated by the federal government for educational programs. Similar mechanisms exist in the Department of Health and Human Services, the Department of Justice, and the Department of Labor. This really is something that cuts across any federal agency that provides funding to states for social programs. And the thing I want to focus on today is that this approach rests on the assumption that peer-reviewed research is sufficiently credible to inform these important decisions about resource allocation, and in particular that it is reported accurately, transparently, and comprehensively enough for these clearinghouses, these systematic review groups, to distinguish which reported results represent true effects that are likely to replicate at scale.
But anyone familiar with the metascience work over the last decade knows that there are concerns that published research can contain results that are wrong, exaggerated, or not replicable. One of the main concerns is that this is due to closed scientific workflows, which hinder reviewers' and evaluators' attempts to detect issues that negatively impact the validity of reported research findings, sometimes called questionable or detrimental research practices. The main culprits are selective reporting of results and the ability to try different ways of mining and analyzing the data, p-hacking the data, and then selectively reporting results. I'm not saying that that's what everybody does. What I'm saying is that it's possible, we know it happens, and there are no safeguards against it right now in the system. So it becomes a plausible alternative explanation for the results that you see in the journal article. And so we contend in our line of work that research transparency and openness can mitigate this risk of informing policy decisions based on false positives rather than the true effectiveness of these programs. These practices include things like registering your study, prospectively sharing your protocols and analysis plans, and releasing the data and code required to reproduce results, which would allow third parties like journals and these evidence clearinghouses to fully assess the credibility and replicability of research evidence. But in work that we've done looking at the standards and procedures of both clearinghouses and the peer-reviewed journals they use to find these trials, over 300 journals, by and large, standards related to these practices are nonexistent in the journals themselves, and really no clearinghouse requires any of these practices to meet even the lowest level of a promising practice, let alone an evidence-based practice. So as a result, one concern we have is, not that it necessarily always happens, but that it's unacceptably easy for interventions to seem evidence-based if someone has the ability to selectively report positive results and that's never checked. If anything, it's rewarded by a system that asks for only statistically significant, positive results. There are even some great tongue-in-cheek papers on how to make a program seem effective even if it's not, essentially how one can game the system. As I mentioned, a lot of funding and prestige and career opportunities are tied to these clearinghouse ratings and publications, so there are structural incentives in place that I think make this a very real concern.

So our recommendations are for policymakers to enable clearinghouses to incorporate open science into the standards and procedures that they use to identify evidence-based social programs that are eligible for federal funding. To do so, they need to increase the funds appropriated to clearinghouse budgets to allow them to take on this extra work. Clearinghouses should be applauded for the work they've done over the last two decades to raise rigor and standards. They operate in a socio-political context and are constrained by the resources they have, so this isn't a lack of desire on their part; it's an issue of opportunity and capability to do this kind of work. So one thing that could happen is dedicated funding, appropriated by Congress and allocated by federal agencies to clearinghouse budgets, so they can better incorporate the assessment of open science practices into their evaluation of published research.
Funding should facilitate the hiring of additional personnel dedicated to collecting these data: folks who have their FTE covered to assess the comprehensiveness of reporting, for example by checking publications against prospectively shared protocols where open science practices were used, and to do computational reproducibility checks, rerunning analyses using the study data and code. We also think it would be helpful for a group like the Office of Management and Budget to establish a formal mechanism for federal agencies to come together and develop shared standards and procedures for their respective clearinghouses on reviewing open science practices. As one example, one could create an interagency working group, as has been done before for standards on internal validity, to develop and implement updated standards of evidence that include the assessment of open science practices. We published an example of this in the journal Prevention Science, and we think it would be great for a federal group to take this on board and create official TOP Guidelines for evidence clearinghouses, or something of the sort. And then, as funding, standards, and procedures are put in place, federal agencies that sponsor clearinghouses can create a roadmap where perhaps folks just start with reporting whether or not these practices are done. As the practices become more normative over time, in part thanks to the several amazing initiatives being discussed at this conference, clearinghouses can look to increasing the requirements for higher tiers of evidence. So, for example, the thing that we say merits the highest tier of evidence is something that is comprehensively reported according to reporting guidelines for results manuscripts, is consistent with prospectively shared protocols and analysis plans, and has results that are computationally reproducible. We think that we will have greater trust in evidence-based practices if we know that they have computationally reproducible results and comprehensively reported details of study findings.

So we really appreciate the time you've taken today to listen to this. We think the momentum from the 2023 Year of Open Science and the White House's 2022 Year of Evidence for Action creates an unmatched opportunity for connecting these federal efforts to bolster the infrastructure for evidence-based decision making with federal efforts to advance open science. If you want to chat more, my contact details are on the left, and this is a QR link to the piece from the Federation of American Scientists. I want to thank Arnold Ventures for funding the project and my co-principal investigator Evan Mayo-Wilson, and all of our findings and materials for doing this kind of work are available on our website, trustinitiative.org. So thanks so much, and I look forward to your questions.

Sean, thanks so much for sharing that. Likewise, I've got some discussion questions that I'm eager to start on, but I'm going to jump right into my own presentation now. We're running perfectly on time, so thank you to the speakers so far. If you'll bear with me for one moment, I will share my screen. Right. So the gist of what I'd like to say today is that there are mechanisms to improve research through existing infrastructure that the federal government uses in the grant application process, and those are the data management and sharing plans that are integral to the upcoming policy changes that OSTP has mandated.
I'll go into a couple of details about all of that and what those mechanisms are. I'm going to start with the precise way the Office of Science and Technology Policy defines data and what the implications of that are, then describe a set of policies and practices that will be useful for making sure that the work that goes into those plans is as high quality and as verifiable as possible. And I'll wrap things up by giving a preliminary snapshot of what federal agencies are planning to do around the guidance coming out of the White House, and some recommendations for improvement.

As Jessica mentioned at the beginning, and as I suspect many of you know, the US federal government is moving towards open science, led by the Office of Science and Technology Policy, which states that all work conducted or funded by federal agencies needs to have openly available outputs: open access papers, and open access to the data and research outputs that result from that work. The timing is going to be either the end of the grant period or the publication of results from the work, whichever comes first, in a strong statement to address well-known problems with non-sharing of results, the file drawer problem. A couple of key details I'd like to go into concern how the Office of Science and Technology Policy defines scientific data and what the implications of that are. They define scientific data as the objects that need to be shared and made freely and publicly accessible, to the extent allowable by ethics, law, and intellectual property rights. And when they state that data need to be made available, what they're talking about is an expansion of prior guidance from OSTP; in particular, I'll emphasize the need to replicate and validate research findings. For the purposes of this policy guidance, scientific data includes recorded factual material that the scientific community accepts as being of sufficient quality to validate and replicate research findings. And just to clarify what that does not mean: it's not lab notebooks, preliminary analyses, drafts of papers, or peer reviews, and it doesn't include physical objects. But it does include all the other materials needed to, I'll say it again, validate and replicate research findings. I'm going to extrapolate from that and advocate for the items that are included under that strong definition and for what federal agencies should do to make sure the definition is applied uniformly.

So what do they mean, or what do I think they should mean, when they're talking about items that are needed to validate and replicate research findings? Those are all the digital research objects used to create the study or to conduct the data analysis. This is the code that's used to analyze the results. This is the information that's collected or accessed during the research lifecycle, which also includes metadata and descriptors that describe the context under which the data set was collected. And it includes research protocols and assertions of when a study is going to be conducted. As others have mentioned, registration is the process of specifying that a study is going to take place. It is common in the clinical sciences, and it provides an opportunity to state when a study is going to take place and that the protocol is going to be applied in a certain context.
And I'm going to ask, please, if you're not talking, to keep your microphone on mute, just because there is sound that does come through sometimes. The TOP Guidelines, as has been mentioned once or twice, are also a policy framework that articulates the items used to conduct research that can be shared, archived, and disseminated for the purposes of reproducing analyses or replicating previously published work. The TOP Guidelines include all of the items that I mentioned before and a couple of others. The key thing the framework provides is that it is applicable across many different disciplines; it is designed to apply to the life sciences, the social sciences, and other fields. It's modular, in the sense that these different practices are distinct elements that are used or conducted throughout the scientific process. And the TOP Guidelines also provide a structure for improving practices over time. So it starts with recommendations for items such as data availability statements, disclosures of whether or not an open science practice took place, and it moves up into more stringent requirements, or verification steps, that these practices were done well.

So how and what should the federal government do to improve data management and sharing plans? I'm glad you asked. Those DMSPs need to be part of the scored criteria of grant applications. Oftentimes they're required elements, but they're specifically excluded from any sort of peer review or evaluation of the grant. And as we might surmise, that does not provide a strong incentive to put a lot of thought or effort into those DMSPs. They need to be considered part of the core scientific process and treated and evaluated as such. If they are part of the scored criteria for funding applications, researchers who are creating those DMSPs will put more appropriate thought behind them. Those DMSPs also need to be easily accessible and publicly available, along with the other information that's shared on federal websites about federally funded work. That information already includes the title of the research, abstracts, affiliations, and the primary investigators conducting the work, but it doesn't include the DMSPs. Now, there are lots of good reasons that many of the details of grant applications are not publicly shared on websites, because they do include proprietary information and cutting-edge research that it is important to keep confidential for a certain period of time. But data management and sharing plans don't include such intellectual property. They include information about how and when research outputs are going to be made available, and that information itself deserves to be publicly shared, so that readers, or anyone who has a stake in how the research is being conducted, knows where to go to find the outputs: the data, the preprints, the code resulting from that work. Those DMSPs should have structured templates to make them easy to read, easy to follow, and, probably even more importantly, easy to create. These are new practices for a lot of primary investigators, and especially when they're not scored, not a lot of thought goes into them, and people will often just reuse information that might be a little bit dated. There are better ways to go about it, by providing structured fields and dropdown menus with selections for when and where data are going to be made available.
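To make the structured-template idea concrete, here is a minimal sketch of what a machine-readable DMSP record could look like, including an updatable field for the dataset DOI. The field names, values, and helper method are hypothetical illustrations for this talk, not any agency's actual form.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DataManagementSharingPlan:
    """Hypothetical structured DMSP record; every field name here is illustrative only."""
    project_title: str
    data_types: List[str]              # what will be shared: data, code, metadata, protocols, etc.
    repository: str                    # chosen from a dropdown of recognized repositories
    sharing_date: str                  # e.g. "at publication or end of award, whichever comes first"
    license: str = "CC0"               # default license for deposited outputs
    dataset_doi: Optional[str] = None  # empty at application time; filled in once the data are deposited

    def register_doi(self, doi: str) -> None:
        # Update the plan after deposit so readers can locate the actual dataset.
        self.dataset_doi = doi

# Example: a PI drafts the plan at application time, then updates it after depositing the data.
plan = DataManagementSharingPlan(
    project_title="Example federally funded study",
    data_types=["de-identified survey data", "analysis code", "codebook and metadata", "protocol"],
    repository="OSF",
    sharing_date="at publication or end of award, whichever comes first",
)
plan.register_doi("10.XXXX/placeholder")  # placeholder DOI added once the dataset is deposited

The point is simply that structured fields like these are easier to create, to review, and to update than free-text prose.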
Those DMSPs should be updatable, so that once data sets are available, the DOI can be put in there. Putting in a generic name of a data repository where the data will be shared might be all that's possible at the grant application stage, but of course those repositories can be large, and it can be difficult to find where the data specifically are within them. If the plans are updatable, they can be filled in as appropriate. And as I mentioned earlier, it should be very clear, especially in a structured DMSP, that data means more than just a single CSV file. It means all the information needed to validate and replicate research findings. So everything that I mentioned before, about materials, code, data and metadata, and protocols, needs to be made available through those DMSPs.

We've taken a preliminary look at what agencies are doing so far, and there is room for improvement. The first nine plans that came out are available on our website, and we evaluated them based on the content that I just went over. Unfortunately, none of them stated that the DMSPs would be publicly posted, so I think there's the most room for improvement there. Three of those nine did state that DMSPs would be reviewed as part of the regular grant review process, and that represents a great opportunity for interagency cooperation, for learning from agency to agency. It's obviously possible for DMSPs to be part of the scored and reviewed grant criteria, because some agencies are doing that, but a majority of them so far are not. And thankfully, several of them, but not all, did define data in a fairly expansive way, taking guidance from that OSTP memo and stating that data is more than just the precise bits of information collected; it includes everything needed to, again, validate and replicate research findings. So the other agencies can learn from those definitions to make sure they are applied in the spirit that OSTP is guiding. Those responses are being collected, curated, and disseminated online. On our website, we have responses to requests for information from agencies as they come out with their preliminary plans, and alongside that, there's complementary information on what publishers, funders, and universities can do along these same lines. And with that, I will finally stop talking.

I'm going to open up the floor for some discussion and for some Q&A. I'm going to stop sharing right now and check to see if there's anything in the queue. Okay. So what I'm going to do right now is explain the Q&A process, just to make sure it's clear that there are a couple of different channels available to us, and then I'll take the prerogative of the moderator and get things going with a couple of questions. As you have questions, please use the Q&A feature; it's under the menu at the bottom of the page. Or, if you'd like to speak, please use the raise hand feature. And let me just find my notes right here. I'm going to bring this back to the beginning and give a question to Jessica, if that's okay. So Jessica, get ready for the hot seat. Jessica, you talked about the benefits that the preprint process can have for the scientific ecosystem, and of course the benefits of pulling apart publication from the review process. This is something that I have a strong opinion on and fully support.
But one of the most frequent questions that comes up with this type of process is how to compare preprints to traditionally published, peer-reviewed papers, especially for members of the public or journalists, or, since we're talking policy, policymakers. How should they view preprints and the reviews that come onto them? How should they evaluate those as compared to traditionally peer-reviewed publications?

Yeah, it's a very good question. I think that the COVID pandemic really gave us a case study of how we're going to deal with information that's coming out that hasn't been traditionally peer reviewed. I do think that there is a danger of potentially misleading information being shared, but preprints are certainly not alone in that. The best defense against information that the community disputes is to put up public reviews of that material. Just to provide an example, there were certainly some perhaps questionable studies shared early in the pandemic, but the application of comments and open reviews helped journalists put them into context and become aware of other experts who had different opinions on the paper, and really, I think, helped curtail the spread of that misinformation. There have been a number of studies about how preprints tend to change prior to formal peer review. Most papers, I will say the vast majority of papers, do not have major changes to their conclusions between the first preprint and the final peer-reviewed version. But that's not, I think, universally true. That's why I think it's very important, and we've recommended in this memo, that preprints be updated with each subsequent version, so that there is an opportunity for researchers to share publicly what they're revising in response to reviewer comments. I hope that answers the question.

Yeah, I think that's one of the real key benefits that preprints have. They're simply less static; as better evidence comes out, they are quickly updateable. It's not a static image that becomes almost impossible to change, revise, or update. And I also look to the evidence from folks who compare preprints to published articles and how credible or reproducible the findings are from one to the other, and I think there's a lot less difference than folks might assume, given how much more credence is given to the peer review process than it is perhaps worth.

And then can I ask you a question? Please. Thank you for your presentation. You were mentioning a lot of the open science practices that ORQ, the National Institute of Neurological Disorders and Stroke's Office of Research Quality, advocates for or promotes. Basically, in your memo you talk about registration and replication as two practices. Are those the two main practices that you think they should be promoting and advocating, or were you thinking about all the open science practices? How would you rank these different best practices and what they should be spending their time promoting or creating policies around?

Yeah, it's a good question. I value all the different practices that are on the table. But if we're trying to think about getting the greatest bang for our buck in terms of discovery, I think we need to think about the integrity of the research itself. So making the outputs, the papers, more accessible, and accessible sooner via preprints, etc., is valuable, but it doesn't necessarily change the scientific output.
Similarly, having the data be available helps other people build on it, but if that data is generated through some non-credible process, again, we're building on a foundation of sand in a way that I think is best avoided. And so the idea that I'm promoting is that, in areas where it's applicable, pre-registration can help us ensure that researchers are transparent about which hypotheses were truly tested in a confirmatory way and which results being reported are exploratory or in some way unexpected. For maintaining that separation, pre-registration is the best tool we have, and it will also help us see when there are lots of file-drawered nulls on particular topics. So in both those ways we can get a lot of value. The replication idea, again: making data publicly available allows for replication in the sense that you can download the data, push the button, and make the results come out. That's a kind of baseline expectation. But I'd like to see replication in the sense I described, where people know that with some probability their work will be reproduced downstream, and that in turn creates a kind of positive reputational incentive to be more careful upstream. So I'd really like to encourage the best incentives for researchers, because right now they're often quite the opposite.

I really agree with that. I'm going to ask one more question, and I encourage everybody listening, both our panelists and listeners, to use the raise hand function or the Q&A box for any discussion you'd like to have on any of these points. But Sean, you've been putting a lot of thought and effort and push behind bringing higher standards to those evidence clearinghouses, essentially to the evidence used prior to making policy. Can you talk a little bit about the trade-off between, say, the need to improve education versus the need for better evidence behind initiatives that are going to have money put toward them? I often think of the clinical trial world, where there's an urgency to get patients to the best available promising interventions, but sometimes there's a balance between false hope and getting to the most cutting-edge treatments available. What's the status quo for how that's going right now, and how do you see this balance evolving with higher expectations and standards?

Yeah, thanks, David. I think it's a terribly important question, and it's one of the things that keep me up at night working in this space, given the on-its-face positive mood toward evidence-based policymaking. One concern is the weaponization of this to justify gutting the social safety net: saying we have a paucity of things that meet high standards, and therefore why are we even funding this space. That betrays a bad-faith effort and a lack of understanding of the kind of relative evidence that trials provide on whatever's been tested, what makes it to the point of having the funding to be tested, and so on. So that's one thing I think is always important to be mindful of. Walking back from that extreme, I think another concern is that some of these are set up where they really mandate the use of practices from these clearinghouses, to the point where you might be asking communities to abandon things that weren't the comparison condition in these trials, with populations and settings and outcomes that weren't represented in this body of evidence.
And then on top of that, my concern is with evidence that isn't really that strong. I think if you're going to tell people to abandon what they're doing, it really needs to be rigorous and applicable evidence. I think the best ones out there, and this is what I'd love to explore more, really see these as evidence-building mechanisms. So by curating the evidence, and then just being honest about what the high standards can or should be in the field and what meets those standards, we can see where we're at, and then we can incorporate into these curated evidence mechanisms funding for promising practices. So not just funding for delivering the service, but building up the evaluation capacity for things that seem like they have the potential to replicate at scale, and giving money to groups who deliver those interventions and to those who have the evaluation capacity to make the evidence base stronger over time. That's what excites me, but I think it's really important to be mindful of unintended negative consequences, of which there are several in this space.

Yeah, I think there's ample opportunity to rebalance toward finding better evidence and taking the sort of time and energy it takes to follow up with those kinds of promising leads. Would any of the panelists like to ask questions of their fellow panelists or any of the audience members? And I should double check: I think we've got six minutes, but we could also be running up against the next session. So if there are any follow-up questions or final thoughts that anyone would like to give, please do so.

I have one for you, David. Consider an NIH-funded study on a social program, so someone's gotten an R01 to do a randomized trial of an educational intervention or a health prevention intervention, and as part of the grant they've developed intervention implementation materials, like training manuals or scales to assess fidelity of intervention implementation. Do you think that falls under the Nelson Memo as things needed to carry out the study and replicate results, or is that more in the list of things that they explicitly say don't count?

I'm pretty sure it counts, because it has a pretty expansive sort of definition about work that's supported by or conducted by the federal government, so my understanding is that it will. I also don't want to speak on their behalf and spread any misinformation, so if anybody else has an opinion on that, I'd be happy to update my priors, as they say. Hearing none, that's now the law of the land; it's good to have that on the record, and that opinion has power. As I was getting at, it has quite an expansive definition, not just of the type of grants that are traditionally thought of in the academic research space, but of work conducted by federally employed scientists and work that's conducted as part of evaluations, as long as there's not a compelling justification to keep the work confidential, and many times there are going to be exceptions to that. But the default expectation is that it falls under those policy guidelines. I could imagine carve-outs coming for ongoing assessments or evaluations on a case-by-case basis.

I want to say one more time, thank you to our esteemed panelists for responding to the FAS Open Science Policy Sprint and putting your thoughts and suggestions into writing. I think getting those opinions in front of the audience that FAS has available to it is a step in the right direction.
I think there's a lot of work to be done to make sure that these opinions are heard and echoed and shared, and this is obviously a part of that process. So thank you for being a part of this session and working toward that mission. And to all of our audience, thank you for coming and participating. You've got about three or four minutes, I think, until the next session starts, so go into the lobby, mingle with your friends and colleagues, have a drink, and go to the next session.