We'd like to spend a little bit of time sharing the SEER principles with you and how we envision them as supporting open science. We're going to talk specifically about pre-registration, data sharing, and replication, and the things we've been supporting, and a little bit about where things are going in relation to that particular line of inquiry. And then we'll talk about our current and future public access requirements for IES-funded projects. You're going to hear from me for a little while, and then I'm going to turn the floor over to Laura, and hopefully this will be understandable and enjoyable for everyone who's here. All right, the SEER principles. For those of you who haven't been part of the SEER principles' evolution, this was one of Director Schneider's initial attempts to pull together best practices for making sure that the work we've been supporting in the education space, and particularly at IES, is transparent, actionable, and focused on the outcomes that matter most, so that we can improve outcomes, particularly achievement and attainment. We have been supporting rigorous evidence building since 2002. What the SEER principles are intended to do is complement the What Works Clearinghouse's strong focus on internal validity with a set of standards that we can all try to implement in our work as appropriate, across the two research centers, across the evaluation centers, and hopefully beyond just IES. I'll put the link in there; if you've been to the pages, you know there are lots of resources there, and I'm going to highlight just a few as we go through the deck. So, the SEER principles: we have nine of them. The two we're going to spend most of our time on today are, first, the requirement, or expectation, that we pre-register studies, and second, the expectation that we should be making findings, methods, and data open.
I don't think I have to convince anyone on this call that that's incredibly important. But I do want to draw everyone's attention to the third bullet in the list, which is our newest SEER principle, focused on addressing inequities in learners' opportunities, access to resources, and outcomes. That's a brand new principle that we're continuing to develop and learn about, so stay tuned for more on that in a later conversation. So, pre-registration. Hopefully this is not new to anyone on the call. In the IES universe, one of the things we do in our RFAs is require causal impact studies to be pre-registered in a recognized study registry, so that we have information documenting confirmatory research questions and their planned analytic activities. One of the nice things about the study registry process, as you all know, is that it should continue to be updated as and if things change. This is a time of COVID, and this has been an incredibly important opportunity, I think, for us to talk about how things have been changing in response to an external event that we had no way to plan or prepare for. We've been happy to see that people are pre-registering; people are doing what we've asked them to do. And just as a point of information, I did a little bit of research trying to figure out where our studies are being pre-registered. Most of them today are in REES, the Registry of Efficacy and Effectiveness Studies, which is run through the Society for Research on Educational Effectiveness. There are 166 IES-funded studies there. Just to be clear, that includes both studies funded through the two research centers as well as studies funded through the National Center for Education Evaluation. I was only able to find 14 IES-funded studies in OSF, but I have a feeling that's probably an underestimate.
And we'll talk a little bit about some of the challenges we're encountering that we would love to think through together with you all. And in clinicaltrials.gov, I was able to locate seven IES-funded studies, because we do fund work, particularly in the social, emotional, and mental health space, where there's overlap between the NIH-type studies you find in clinicaltrials.gov and the work that we support. The next piece is sharing findings and methods. You'll notice that we frame openness as findings, methods, and data. Findings and methods is actually where we first started. With the initial public access memo that was released in 2013 — sorry, I'm getting ahead of myself — we put a policy together to make sure that all of the research your tax dollars were paying for would be available to the public, who don't necessarily have access through libraries and repositories to find the manuscripts. So now all IES grantees are to submit electronic versions of their final accepted manuscripts to ERIC; we've specified a repository for our grantees. What we have seen is an increasing number of IES-funded publications in ERIC, which is fabulous. And we're working right now with our journal community to try to renegotiate our agreements to reduce the burden on folks like yourselves, on the librarians who support scholars, and on the scholars themselves, to see if we can get journals to automatically deposit content. But if journals are not doing that, then grantees need to actually make a grantee submission to ERIC. And what we have seen is, like I said, increasing compliance. I checked this morning: we have 2,704 grantee submission records in ERIC. Not all of those are publicly available yet, because there is still a 12-month embargo for many journals. But we're happy to see this. I think the first time I presented this slide, it was something like 200.
So we're seeing lots and lots of improvement in compliance, for which we are very excited. Data sharing. This is a challenge, particularly for those of us in the education research sector. We want to enforce this, right? You need to share your final research data. But there are concerns about privacy, and there are concerns that come from where you're getting the data. If you're doing original data collection, you can provide access to your research data relatively straightforwardly. However, if you're using data from a state or local administrative system, you may not actually be permitted to share that data. You can, however, share code. So when we're talking about data sharing and trying to figure out the degree to which our grantees are actually able to do what we're asking them to do, we look both for the presence of code as well as for the presence of the data itself. How are folks doing? Oops, sorry, I skipped ahead. We do provide lots of resources, and I believe there was a session that Ruth Neild led earlier in the unconference about this publication that came out of the National Center for Education Evaluation about sharing study data. Ruth Neild and her colleagues prepared this in service of our funded community, but really in service of the broader community. I hope that session went well; we would love feedback about what you're missing and what more you need. And there's an upcoming IES/SREE webinar on sharing study data on March 28th, so if you all are interested in continuing to talk about this, please sign up and join that meeting. So, where is IES-funded data being stored? The data sharing component of our public access policy was instantiated several years after the initial policy. As I was pulling this information together, I was looking for a needle-in-a-haystack image; I couldn't find one, but this is almost the same, right? Finding the data is actually not easy.
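One practical pattern when restricted administrative records can't be shared is to publish the analysis code with a synthetic-data fallback, so others can at least run the pipeline end to end. Here is a minimal Python sketch of that idea; the file name, column names, and the simple mean-difference analysis are all invented for illustration, not from any actual IES-funded study.

```python
# Hedged sketch: analysis code shared in place of restricted administrative data.
# All file names, column names, and the analysis below are hypothetical examples.
import csv
import os
import random
import statistics

DATA_PATH = "restricted/state_admin_records.csv"  # hypothetical restricted file


def load_records(path):
    """Load the real records if available; otherwise build synthetic rows
    with the same schema so reviewers can still run the full pipeline."""
    if os.path.exists(path):
        with open(path, newline="") as f:
            return [
                {"treated": int(r["treated"]), "score": float(r["score"])}
                for r in csv.DictReader(f)
            ]
    random.seed(0)  # deterministic synthetic fallback
    return [
        {"treated": t, "score": random.gauss(50 + 5 * t, 10)}
        for t in (random.randint(0, 1) for _ in range(200))
    ]


def mean_difference(records):
    """Simple treated-vs-control mean difference on the outcome."""
    treated = [r["score"] for r in records if r["treated"] == 1]
    control = [r["score"] for r in records if r["treated"] == 0]
    return statistics.mean(treated) - statistics.mean(control)


if __name__ == "__main__":
    diff = mean_difference(load_records(DATA_PATH))
    print(f"Treated - control mean difference: {diff:.2f}")
```

The point of the fallback is that the shared repository stays runnable even when the real data file is absent, which is exactly the situation a reader without a data use agreement would be in.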
ERIC has created a place in the publication record — when you put your publication in, there's a field you can complete to say where your data is being stored — but that is relatively new. So right now I was spending time going, well, I know ICPSR has data, I know OSF has data, and a lot of other data is living in university-managed repositories across the nation. So discoverability is going to be something that preoccupies us as we think about how to actually make the promise of open science truly enacted. In terms of direct support for replication: why do we want to share data? In part, we want to share data so that we can replicate study findings, so that we have the opportunity to build knowledge together, and to build knowledge collectively across many studies. For those of you who aren't familiar with the work we've been doing in replication: in 2018, at I think the last unconference, Christina Chhin and Katie Taylor presented the findings from this paper, which is the first link there. We went back and reanalyzed all of our grants that had been funded to do efficacy work, and one of the things we learned is that there was a ton of replication work happening, but people just weren't calling it replication. So we were like, okay, what's happening here? We worked with our colleagues at the National Science Foundation to come up with shared guidelines on how to do replication and reproducibility in education research, since we realized this was happening but not being discussed and framed. We also supported Brian Nosek and other colleagues to develop SERA, the Special Education Research Accelerator, which is again an attempt to support accelerated work and possible future replication. Then Mark Schneider came, we had a conversation, and — replication work had always been supported under our broad open competitions.
We decided to pull out systematic replication as a separate replication competition. In 2020, we launched that notice, where we were trying to provide some real guidance to the field about where we thought replication could best serve the needs of the education sciences community. Then in 2021, we were like, okay, let's continue to think about other ways we could do replication. Not simply replicating a study — taking a design and replicating it — but could we do replications internal to the digital learning platforms where kids are doing their studies, where they're reading, where administrators are learning about student outcomes? What could we do with a digital learning platform? So we established SEERNet, which is the network of the platforms themselves. And this year we will hopefully soon be announcing some research grantees who are going to try this out and see whether replication could actually work in the context of these digital learning platforms. And we're continuing to work with SERA and support the work of the Special Education Research Accelerator. So what happened when we pulled out systematic replication? What's been interesting is that we have 15 funded projects across the two IES research centers. In the initial set of awards we made, more were made in special education than in the Education Research Center: individuals were trying to see whether high-quality interventions with evidence of efficacy in general populations could be replicated when used with students who had identified special education needs. As the years have gone by, it's become more balanced. I think we currently have eight of those projects funded in NCSER and seven in NCER.
So we're continuing to build on replication, and I'm really interested and excited to see what we learn from this, and to see how the data that come from these studies can be archived and used for other purposes as we continue to build our knowledge about how best to support students in classrooms. What are some of the problems we've noticed? Discoverability is a huge one, and I think this is a really interesting conversation for the open science community, because in many ways the premise of open science is that it will enable new work to happen more quickly. That rests, at least in part, on the assumption that you can find the data you're looking for — and yet maybe you can't, right? So I want to throw that out there. I'd love to have a conversation about suggestions you all might have for how we at the federal government can support discoverability and work in the discoverability space. I think the other piece we've been thinking a lot about is how to incentivize routine compliance: how do we make open science a normal part of all science? I will say that when I was looking at the ICPSR data, in terms of where the data is and who was there, I was really gratified to see that many of our training fellows are actually archiving their data in ICPSR. These are individuals who are completing pre- and postdoctoral training through funding that IES provides. So that's one pathway. But what else could we be doing to make sure that everyone is sharing their data and their studies and their methods as just part of the process? How do we change that and make open scholarship the norm? I talked a little bit already about the training programs. I think the other thing we're thinking a lot about is how we talk about "impact."
And impact is in scare quotes because at IES, if I talk about impact and I'm not talking about a causal study, I would get myself in trouble. But this is the common parlance: this is a change we made, we've been putting it in place, it's been in place for about 10 years. How do we make sure that what we're intending is actually what's happening? So I'm going to talk really quickly about past and current public access requirements — Laura, unless you were going to do this? This is me, right? Sorry, I can't see either way. Do you want to jump in, or do you want me to do it? I can jump in. So I'm going to let Laura talk here; she can take these slides, because she's going to go into the future work, which is where we're going. Yeah, so we're going to try to create a narrative thread from where we've come to where we're going. And if you want to go to the next slide — this is a little bit redundant with what Liz has already shared, but it gets us to where we are now. IES established publication and data sharing requirements for IES grants starting in fiscal year 2012, though those policies were actually established in 2011 — so ahead of any of the cross-government requirements. In 2013, the White House Office of Science and Technology Policy, OSTP for short, published the Holdren memo, so named for the then head of OSTP, John Holdren, to provide guidance on the need for federally funded researchers to start systematically sharing publications and developing plans for sharing data. That was a starting point for all agencies that provide federal funding for research, across disciplines and across different programs, to start to have some consistent standards. In 2016, the Department of Education established a public access guide and policy to formally implement the recommendations and requirements within that Holdren memo.
That was not a huge upheaval for folks funded by IES, because our requirements had been in place for quite some time, but it laid things out in more formal terms from a policy perspective — as opposed to the way we usually signal requirements, which is through our RFAs — as a more cross-cutting policy guidance. So, moving to the next slide. It's not advancing. Sorry. There we go, thank you. This is a review and an expansion of what Liz just shared. The current public access requirements that exist within IES for publications are, as Liz shared, that IES requires all peer-reviewed scholarly publications produced with IES funding to be made available in ERIC — either after a 12-month publisher embargo, or upon publication for those publishing open access. The idea is that you get it in there when it's accepted or published, and there is a 12-month embargo for many articles that are not published using article processing charges. The other piece of our current policy is that grantees are allowed to use funds from their awards to pay article processing charges in order to support open access publishing. And then with respect to data sharing, exploratory and causal impact projects are required to submit a data management plan as part of their grant application, with specific plans for making data accessible post-publication. So that's where we are now. So, where we're going. Brian Nosek answered some questions yesterday about the game-changer question of the new OSTP public access memo, so we did want to take a little time to talk about what's in there and what the implications are in a bit more detail. We're sharing the memo itself, rather than IES's or the Department of Education's specific plan, because the memo was released in August of this past year.
We've been given a certain amount of time to get a plan in place, and we'll share those plans out once we have them. It's the federal government; we don't turn on a dime here, so there's going to be some time to develop these plans and implement them. That said, IES is motivated to embrace a lot of these principles, and we'll likely implement some of these things in advance of an official policy across the federal government and the Department of Education. We will signal those things in our RFAs, and we'll certainly also share supporting information as we have it. But we thought it would be useful to walk through the memo. Some of these things are bigger game changers with respect to how they're going to influence your day-to-day work; others are more devil-in-the-details kinds of things, but they will still have implications for how you do business moving forward. So we wanted to walk through them one by one. We've identified some things that we anticipate — and Liz has already signaled some of them — as pain points or challenges for the community in embracing these. And we definitely, as Liz said, would like to hear from you: what are some unanticipated challenges to navigate? What ideas do you have for ways to help equip the field? We're a little bit preaching to the choir here, with respect to people who are already embracing open science, but we have a lot of folks who are struggling to figure this out. Brian talked yesterday about how people are ready — they just don't know how, they don't know where, they don't know when. What are some ways we can help to message and simplify that process? So, to walk through the memo.
The main focus of this memo, in keeping with White House strategy in general right now, is fostering equity in science. We can talk a lot about how open science helps facilitate this, as Brian discussed yesterday, but we can also think about it as leveling the playing field with respect to access: instead of having to track down somebody you met at a conference to try to finagle your way into getting access to their data, the data are just available. So that's the broad framework. Oh, and I was going to share the link to the memo. It's called the Nelson memo, because Alondra Nelson was the acting head of OSTP at the time the memo was issued, as distinct from the Holdren memo from 2013. It gets into a lot of detail — it's really guidance for federal agencies — but you can certainly read the tea leaves about how it's going to get translated into practice for researchers. Go ahead. Yeah, so the guidance is not required to be implemented officially until the end of 2026, so we've got time to figure these things out. But we're working on them, and as I mentioned, IES is likely to start implementing some of these things in advance of that deadline. The piece that's attracted the most attention is that the 12-month embargo is going away. Right now, if you publish an article, there's a 12-month embargo that gives publishers a chance to make some money off the article through subscriptions and downloads before you have to make it open and available — the exception, of course, being if you are paying APCs to publish open access. Effective as of the end of 2026, all articles must be freely available to all on the day they are published.
The consequential implication is that articles should be uploaded into the appropriate archives that the funding agencies have specified by the date of publication. That's the publishing change. The devil is in the details with respect to implementation, and all agencies are required to do this and are developing these policies. So if you have NSF funding as well as IES funding, you'll want to look at the agency-specific guidelines for how that's going to get implemented. The other big piece — which for many of you may not be too traumatic, but is something to note — is that data sharing will be required at the time of publication for all data generated from federally funded sources. Or, if the data are unpublished after a certain time interval following the data collection, you will be required to share your unpublished data — whichever comes first. Each agency is currently figuring out what that time window is going to be: whether it's the date at which the grant closes, a certain amount of time after that, or a certain amount of time after you finish the collection. The agencies will be providing those kinds of specifications, but this is new. What it means is that if you are collecting data off of a federal grant, and you publish some of it but are hanging on to some of it for subsequent analysis, at some point you are going to be expected to share those data, regardless of whether you publish them or not. So, something to be aware of.
Another piece to be aware of — this one is not mandated in the memo — is that many agencies, signaling their support for sharing and not just data management practices, are shifting what they call their plan from a data management plan to a data sharing and management plan, or sometimes a data management and sharing plan. So expect to see a different acronym forthcoming. Next — and I've got a second slide to unpack this a little more — there are requirements in the memo for consistent use of unique digital persistent identifiers. We already have some of those in play that we are all familiar with, but there's going to be some evolution in what's expected, so I'll go on to that in just a second. And then I think I have one more bullet on this slide. There we go. This is also a devil-in-the-details kind of piece, but it will make uploading or sharing things a little more finicky with respect to fields to fill in: there's a requirement for the inclusion of rich metadata when sharing either publications or data, and there are some specifications about what those metadata should include, which also covers those unique persistent identifiers. So let's flip to the next slide. Unique digital persistent identifiers are very important for tracking and reporting — for the federal government trying to keep tabs on what's happening with the funding it's giving, and on what happens with respect to trainees and the placements they get after going through training. They facilitate discoverability from the funder side, from the publisher side, and, importantly, from the researcher side. And, you know, I always say: if a data set falls in the woods and nobody knows it's there, it never really got shared.
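As a concrete illustration of what "rich metadata" for a shared data set can look like, here is a hedged sketch loosely modeled on the DataCite metadata schema, which ties together the persistent identifiers discussed here: a DOI for the data set, an ORCID iD for the creator, and a funder award number. The title, names, DOI, ORCID iD, and award number are all invented placeholders, not a real record.

```python
# Hedged sketch of a "rich metadata" record for a shared data set, loosely
# modeled on the DataCite metadata schema. Every value below is a placeholder.
import json

dataset_metadata = {
    "identifier": {"identifierType": "DOI", "identifier": "10.0000/example-dataset"},
    "titles": [{"title": "Example Student Outcomes Data Set"}],
    "creators": [
        {
            "name": "Researcher, Example",
            # a persistent identifier for the person, per the Nelson memo
            "nameIdentifiers": [
                {
                    "nameIdentifierScheme": "ORCID",
                    "nameIdentifier": "https://orcid.org/0000-0000-0000-0000",
                }
            ],
        }
    ],
    "publicationYear": 2023,
    "resourceType": {"resourceTypeGeneral": "Dataset"},
    "fundingReferences": [
        {
            "funderName": "Institute of Education Sciences",
            "awardNumber": "R305A000000",  # placeholder award number
        }
    ],
}

print(json.dumps(dataset_metadata, indent=2))
```

The key point is that the funder, the person, and the object all carry machine-readable identifiers in dedicated fields, rather than living only in free text.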
And as Liz was highlighting, there are not yet consistent patterns with respect to knowing exactly where to look to find things; hopefully we can use digital tools and metadata to start to map this out. The Nelson memo requires that publications have unique digital persistent identifiers, and we're all already familiar with DOIs, digital object identifiers. That's the gold standard at this point for publications: a unique identifier that persists over time, with a digital address so you can always get to the object. Researchers — including PIs and co-PIs on grants and any additional authors — are also going to be expected to have unique persistent identifiers associated with them. ORCID is probably the best known of these. As many of you probably know — and for those of you who haven't encountered ORCID yet, we certainly encourage you to check it out — ORCID is watching you: even if you don't know it, they're keeping track of your publications. If you create a record, they will ingest your publications into your official record, and then all of your scholarly work is findable. You can also add information about your career placement, your training background, grants you've gotten, grant panels you've served on, et cetera. So there will be an expectation that each individual has a digital identifier that is used consistently. Award numbers are also going to require more universal unique digital persistent identifiers. If you have an NSF or an IES or an NIH or a Department of Justice grant, you have a unique identifier within that agency, but a system needs to be developed such that every grant within the US federal government has a consistent coding pattern. So that's going to be coming along at some point.
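For the curious, an ORCID iD carries a built-in integrity check: the final character is an ISO 7064 mod 11-2 check digit, which is how systems catch typos in an iD before it ever hits a database. A small Python sketch of that validation — for example, 0000-0002-1825-0097, the sample iD ORCID uses in its own documentation, validates:

```python
def orcid_checksum_ok(orcid: str) -> bool:
    """Check the ISO 7064 mod 11-2 check digit of an ORCID iD string
    like '0000-0002-1825-0097' (hyphens optional)."""
    digits = orcid.replace("-", "")
    if len(digits) != 16 or not digits[:-1].isdigit():
        return False
    total = 0
    for ch in digits[:-1]:
        total = (total + int(ch)) * 2
    remainder = total % 11
    check = (12 - remainder) % 11
    # a computed value of 10 is written as the letter 'X'
    expected = "X" if check == 10 else str(check)
    return digits[-1] == expected


print(orcid_checksum_ok("0000-0002-1825-0097"))  # valid sample iD -> True
```

This only verifies that an iD is well formed, not that it belongs to a real person; for that you would query the ORCID registry itself.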
And then we also need to develop and identify a set of digital identifiers for data sets. Having all of these things carry digital identifiers that are universal across federal funding sources is going to increase the likelihood that we can track information, report consistent information, and find information. You can imagine that if you go to the IES award database and look at a particular award, and you see the unique identifiers for the PI and co-PIs, you can go to their ORCID pages; you can see where the data sets are and what publications have been issued. If you go to the ORCID record, you can find which grants they've received. Once you've got these identifiers, and the metadata fields for them are being consistently reported, it's a much richer source, and it's much more likely to solve the discoverability problem. I've got down there that the digital persistent identifiers are necessary but not sufficient: we need much richer interconnectivity among these identifiers in order to facilitate this. Okay, so here are some potential concerns that we have identified as potential pain points that we're happy to talk more about or get your feedback on — and we also want to hear if there are other issues. One, of course, is anxiety about managing this zero-day embargo. How does this connect up with publisher policies? For many journals, publishers are offering to get things into the right databases for you, but that's going to be a different kind of challenge if it has to happen on the day of publication. It may not surprise some of you that some publishers are embracing this and working really hard to be collaborative about implementing it, while others are more reluctant.
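To make the identifier-driven discoverability idea above concrete: once funder names and award numbers appear consistently in data set metadata, public registries become queryable by funder. Here is a hedged sketch that only constructs such a query URL against the public DataCite REST API; the Lucene-style query string is an assumption based on that API's documented search parameter, so treat it as illustrative rather than guaranteed syntax.

```python
# Hedged sketch: building a funder-based search URL for the public DataCite
# REST API (https://api.datacite.org/dois). No network request is made here;
# the query syntax is an assumption to be checked against DataCite's docs.
from urllib.parse import urlencode


def funder_query_url(funder_name: str, page_size: int = 25) -> str:
    """Build a DataCite DOI search URL filtering on the funder name
    recorded in each data set's fundingReferences metadata."""
    params = {
        "query": f'fundingReferences.funderName:"{funder_name}"',
        "page[size]": page_size,
    }
    return "https://api.datacite.org/dois?" + urlencode(params)


print(funder_query_url("Institute of Education Sciences"))
```

A query like this only works to the extent that depositors filled in the funder fields consistently, which is exactly the "necessary but not sufficient" point about identifiers and metadata interconnectivity.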
There's also the question of which version we share: do we share the accepted version, or the version that appears in print — but what if there's no print anymore? And what if you upload it but it doesn't appear right when you uploaded it — are you in violation? We will certainly be developing guidance on an agency-by-agency level about that. One concern that has come up — we've heard this from individuals, from societies, from university libraries — is that APC-based models, which are sort of the assumption if we're going to make everything accessible, are not always the most equitable. Remember that I said the Nelson memo was really intended to take equity as its focus. It's leveling the playing field with respect to access: everything's open, everybody can find it, and if they can find it, they can download it and use it. But it requires that there be access to the funds to be able to publish. So shifting the ecosystem in a way that creates the expectation that things must be actual open access through APCs creates a barrier for those who have access to fewer funds, who are at less-resourced universities, et cetera. That's certainly a concern being talked about right now. As Liz mentioned earlier — and this is something Brian also liked to hit on yesterday — there's the question of cultivating norms, particularly with respect to data sharing. I think there are different pockets of the field that are more versus less receptive to these ideas. Certainly those who work with administrative data are capitalizing on accessibility of data, and many of you are routinely sharing data. But what are some of the barriers to shifting to a model where you are routinely sharing your data at the time of publication, or even perhaps before?
There are obviously concerns with respect to protecting proprietary and/or sensitive information with data sharing. There's lots of guidance and support out there, but in terms of the policies that funding agencies are going to be instituting: where are those protections? How does one document or make a case for not sharing particular data that might be of concern? That's all something we will obviously need to address and make concrete in our new policies. And then, adapting to this new emphasis on metadata, and on persistent identifiers in particular, is something we will all need to get used to and adapt to. So, I will pause there. We certainly want to hear from you about other issues to consider. Liz, I'll kick it back to you to round out the last couple of slides. You have to unmute yourself — you're muted. Sorry, I was pushing the wrong button; you would think after three years in pandemic mode I wouldn't be sitting here pushing this button going, why won't it unmute me? Okay, so the other thing is that as a funding agency, it is incumbent upon us to share with you a QR code that takes you to our funding opportunities page. Many of you on the phone are well aware of it. You'll notice that there's nothing new up there yet, but there will be things coming very soon and very quickly. So I just wanted to make sure everyone had access to this page. Please bookmark it if you're interested in applying for funding, and sign up for our Newsflash, because the Newsflash will tell you when we publish a notice inviting applications or when there's something in the Federal Register, and it'll give you a sense of our timeline. As you all know, we got our budget very late, in December, and so we were waiting to figure out what our budget was going to look like before we made decisions about which competitions to move forward with.
We now have that information, and we are working as quickly as we can to get requests for applications prepped and out to the community. I also want to say that some of you may have noticed we sent out a request for information on topics for R&D centers. This is a different kind of openness that I'm hopeful folks will appreciate: we're really trying to think about what the next centers are that we're going to be competing. These are our big $5 to $10 million investments. We received over 80 responses, including responses from individuals as well as from aggregations of societies. We're working through those, and we will share back with the field what we've learned and let you know where we decide to go in terms of our next investments. That's pretty exciting, at least from my perspective, so I was really glad to see engagement from the community around that. And again, I've got to move my thing here: our emails are on this slide. Laura and I will be happy to answer any questions offline and connect you with our program officers who might be subject-matter experts in the area where you want to do future research. If you're following us on Twitter or Facebook, please do, and explore our Inside IES Research blog. It's one of my favorite things, not only because I pushed really hard to make sure we had it, but also because it gives us a way to showcase not only the research but also the researchers we are developing who are doing the work, so we can recognize the awards everyone is getting. And honestly, it's a super fun part of my job, because I get to read every single one of the blogs before they go up. I think that's it. Oops, I didn't mean to go to black; I'll leave this up. We are open for questions; we want to talk. I know there's been chat going on. David, do you want me to stop sharing so we can put everybody's pictures up and people can talk? Okay, all right. Yeah, go ahead and do that.
Yeah. All right. I will stop sharing; sorry, let me push the right button. Ha, I did it. As questions come in, we'll start popping them your way. The first one I saw is from Stacy, and then if there's a blank moment I'll take the moderator's prerogative. Stacy is asking whether there's a way to, or whether there have been talks about how to, better standardize data documentation, with things like codebooks that describe exactly what's in the data sets, because so much open data is not shared very well without that. So, Stacy, that's a really good question. I know that the data-sharing guide Ruth Neild talked about has a little tiny bit of that information, but it sounds to me like that's a resource that would be really, really helpful if we could prepare it as a field and, at IES, share it out with the field. I know that for the NIH data management and sharing plans there is a lot of information about this. But you're right: standardization is absolutely critical for us to be able to find data and to pull data together in a consistent and systematic way. Just as a sidebar, we're doing a digital modernization project at IES, trying to get our website into the 21st century as opposed to the 19th century, which is where it feels like it is. It's really interesting, because questions like the one you just posed are coming up in terms of standardizing how we use terms across the different centers and different functions, and how we build a system where we can be really well integrated. So thank you for the suggestion; I don't have a resource at hand, but I will definitely put it on the to-do list. So, good questions; I'll get to both of these, Jesse and Allison. Allison asked: how did you do the searches for what IES has funded, particularly on OSF?
And before you answer, I'm going to put in a little plug for a new feature on OSF, which is tagging the funder on OSF projects. So, if you'd be willing, could you describe how you did those searches for IES-funded work on different repositories? So, I'm super old school: I put "Institute of Education Sciences" in quotes and saw what came up. So it's entirely dependent on people actually spelling our name right, saying "of" and not "for," all of those variations. Sometimes I could search "Education" or "Department of Education," and sometimes it's really challenging; it just doesn't give you anything, right? I get stuff from Norway, or whomever in the international community is doing that work as well. I think having a place to put a funder tag will be great. If I could put a request out: it would be even better if you could add a place for a grant number, and it could be an open text field. Because I did try; like on Google, or Google Scholar, I often go in and search the initial digits and letters of our grant number, so our R305 with an asterisk at the end, and that pulls up all of my pubs. But that didn't work here, and I was looking to see if I could do something simple like that. ICPSR does have a funder tab, so I used that as well. The other thing that's interesting about ICPSR is that it became apparent to me that people are uploading data in two separate ways. One is through the traditional ICPSR process, where people upload their data sets and get a registry number (again, going back to the digital persistent identifiers), and other folks are uploading data as a function of a requirement for publishing in AERA Open, and those don't appear to have an ICPSR number, at least not that I could find. So, back to this question of: let's all use the same conventions, so that we can actually find things. That would be super, super helpful. Absolutely.
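The brittleness of that kind of free-text searching can be sketched as a small matching routine. This is purely a hypothetical illustration, not any repository's actual search API: the function name, the pattern list, and the handling of the award-number prefix (shown here as R305, per the "305 with an asterisk" search described above) are all assumptions.

```python
import re

# Hypothetical spelling variants under which the funder might appear in
# free-text metadata (illustrative; real records show many more).
NAME_PATTERNS = [
    r"institute\s+(of|for)\s+education\s+sciences",  # "of" vs. the common "for" variant
    r"\bIES\b",
]

# Assumes IES award numbers share a common prefix, e.g. "R305A200001";
# this mimics a prefix search like "R305*" on an open text field.
GRANT_PATTERN = r"\bR305[A-Z]\d+\b"


def looks_ies_funded(metadata_text: str) -> bool:
    """Flag a record whose funder/acknowledgment text appears to reference IES."""
    for pattern in NAME_PATTERNS:
        if re.search(pattern, metadata_text, flags=re.IGNORECASE):
            return True
    return bool(re.search(GRANT_PATTERN, metadata_text))
```

A structured funder tag plus an open-text grant-number field, as requested above, would make this sort of fragile string matching unnecessary: exact identifiers can be filtered directly instead of guessed at with patterns.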
Jesse asked this question, which I want to ask also. You mentioned earlier in your presentation that you had noticed several of the trained IES fellows later showed up fulfilling the training goals, sharing data sets and so forth. A similar issue, of course, exists with pre-registration. There's a huge learning curve to what it is, why to do it, how to do it, how to do it appropriately, and how to report the results. Have you seen any trainings there, or anything that strikes your fancy or comes to mind as a really good example of pre-registration, what's working well in that field, or, more likely, where there's room for improvement? Yeah, so pre-registration is really, really interesting, and it's interesting on a couple of dimensions. The REES Registry, which is through the Society for Research on Educational Effectiveness, is essentially a highly structured data set, and it takes substantial time to actually input all of your information. But because it's highly structured, for someone who's learning how to do this, it's actually probably pretty helpful, because it tells you what all the pieces are that you need to be thinking about pre-registering. It's not just your hypotheses and your planned sample; it's also your analytic plans and all the measures you're going to use. There's all of this stuff. We have, over the years, done trainings about pre-registration at the ISPI meeting, in terms of saying here are the things you need to be thinking about. I'm not aware of things done out in public, and certainly in our, if you will, methods training, pre-registration has not been a topic that's come up, but it's definitely something worth thinking about. The other piece I'd like to put out there is that the SEAR standards right now talk about pre-registration for confirmatory research questions.
We're pushing, I'm pushing, to help people think about pre-registering even your exploratory questions, right? So yes, Brian, yeah. Because look, the whole point of this is that it's a process. We start with questions; those questions change. We need to understand how science evolves, and if we don't have a place to do that, then we lose the opportunity. I think OSF is one of the few places where you can pre-register anything; there aren't constraints on the kind of study you're doing. And so this is something I've been thinking a lot about, trying to figure out what role IES can play in encouraging, and sometimes requiring, everyone to pre-register to the degree that it makes sense. But thoughts about that would be super welcome, and I'd be really happy to talk further with you all as you're thinking about this. I've got lots of thoughts, but I have a feeling my email is going to be filled with "Can I talk with you? Can we have a meeting?" I'm like, yes, of course you can; let's make it happen. Let's see: Brian Kirk recommends a national center for generating some of these standards, with everybody included. Did you respond to the RFI, Brian? I mean, come on. I'm ashamed to say no, but maybe I can put in a late word. We're always happy to take suggestions. Let me jump in right on that point about the benefits of pre-registering all work, anytime you're about to commence something. It's one of the topics that generates the most conversation amongst the research community. Have you received pushback on those ideas? How do you start conversations when folks might not be familiar with pre-registration, or might fear that it's too constraining? Or, with that question, knowing that IES has really pushed for pre-registration for a long time, but a lot of the work that gets published still doesn't necessarily link back to what was registered.
Where are you seeing opportunities for education, more training, or pushing the boundaries of pre-registration for any type of empirical research? You know, that's a super good question. I don't know that I've thought carefully enough about it to give you a coherent response. Laura, have you had any thoughts around this? I know we've not talked a lot about pre-registration. Yeah, it's a big question. Yeah, no, I'll just echo my support for the whole process: the act of doing it creates different ways of thinking about the science, and so it's really important for us to think about ways to structure and scaffold it. Yeah, I guess what I would say, the other thing I've been thinking a lot about, is how we work as a community. Federal funders are only one piece of this whole community. I have had the opportunity to participate as an ex officio member on the roundtable for open scholarship (the part of its title after the colon is about aligning incentives). I'm sure many of you are quite familiar with that work. And so part of it is thinking about how we get the universities, the research firms, the scholars, the training programs, the federal and non-federal funders, and the systems that support all of this to be in conversation with one another, so that we can make sure we're not sending contradictory messages or teaching people to do things that run into each other. We don't have an overall body that governs all of this, except perhaps the White House and the Office of Science and Technology Policy, but they put out guidance, and then we implement it. And we all work together at the federal level to try to make sure that we're being consistent. But again, we're only one part of this conversation.
So, to figure out how you actually make sure that pre-registration happens for all of the studies that it should, that it's used appropriately, that we're not (I saw Colleen's comment come through about hypothesis hacking, right) falling into that, like how do we make sure that that's working, and then feed it all the way through? Everybody gets an ORCID; we're now requiring that for our training fellows. How do we incentivize every single piece of it, from every single part of the system? Because that's the only way we're all going to move together. Sorry, that was a very generic answer, but that's where my brain went, David. That's very good. A question I know you won't be able to answer fully, but maybe you could talk a little bit about the process you're going through in deciding that embargo question for unpublished data, at the end of the grant period or some period after that. What's the process for listening to folks who might be advocating for shorter or longer periods? Presumably the public comment period will result in a lot of feedback pushing for both shorter and longer; how are you considering that right now? Yeah, super good question. You're right, there's not a lot I can say, other than that there are lots of listening sessions happening across government, and we're all sharing what we're learning and what we're hearing and trying to dovetail it. You know, the reality is that the astrophysics community has one set of responses, the biomedical investigators have a different set of responses, and publishers have a third set of responses. So we're really in listening mode right now, not in solutioning mode. Fortunately, we have time to do this.
For IES specifically, we presented (sorry, I can't talk today) our draft plan to the Office of Science and Technology Policy a couple weeks ago, and we're waiting for feedback from them. Once we get their feedback, we will be in a place where we can actually start to map out our public comment period, in terms of what we're going to do and how we're going to engage the public around this. The Department of Education is a super interesting place, because there are sort of two parallel tracks happening. At IES, we can make changes through our requests for applications, and we don't have to go through a listening period or a public comment period. That's not to say we won't, but just so you guys are aware, we can make those changes. The rest of the Department is what's called a rulemaking agency, and any changes it makes will have to go through a formal rulemaking process, which means the changes will be proposed in a Federal Register notice, and there will be an opportunity for comments to be received and then responded to. So these are two things happening simultaneously. If you guys have thoughts now, don't be shy; send them to me. Just turning the question back: does anybody have thoughts right now that they want to share while we're talking, about the pros and cons of a shorter term versus a longer term? I was about to ask what you would like us to do as advocates, but I'll open the floor. I'll stop talking for a moment and encourage folks to chime in with your opinions. Sarah. I wonder if having a shorter period might have the unintended consequence of PIs potentially requesting larger grant periods than they would have otherwise, to make sure they have enough time to publish what they need to publish on the data they've collected. That's a really interesting question.
We currently work within a five-year timeframe, which we can't extend past, but certainly for shorter grants, right, we love the grants where we're getting findings in two years. But yeah, interesting. Okay. Thanks. Yes, Brian. This isn't really a concrete answer so much as a thought, similar to the previous comment: doing it too short may really shortchange the process, especially early on, since this is still relatively new and people are figuring it out. I think there are some potential dangers in requiring too much too early; it's just going to scare people, and maybe result in corners being cut, and in data being shared that is difficult to use or even find. But balancing that, obviously you don't want it five years down the line or something, where it doesn't do any good. So I don't know what the Goldilocks zone is of getting it quickly, but not too quickly. If you guys haven't read the Nelson memo, folks, it's actually worth reading, in part because it uses the example of what happened during the pandemic, early on, when we were trying to understand what was going on, and how critical open science was to coming up with solutions. So that, in many ways, is an insight into the hopes of what can happen if we are more quickly able to share both data and findings. But it doesn't address the second part of your comment, Brian, which is that there may well be unintended consequences if we push too fast to get everything out there. And hopefully there's some lead time to this, so maybe we'll be in a better position in terms of resources and educating the awardees so that they're able to do this more quickly. And then I think a quicker turnaround is more justifiable.
And I think Stacy's initial question is actually completely relevant to this, right, in terms of training people about how to create codebooks that are sensible, that actually record everything that needs to be recorded, so that others can do a good job at reusing your data. Other questions or comments? This is Laura; we're just about out of time. I want to extend a very warm and heartfelt thank you for taking time out of your days to speak to us, to share what's going on at NCER, and for all the work that goes into the process you're going through right now. This has been great. And may I ask a question of you all? There was lots of chat happening. Is there any way I can see the chat? I was trying to pay attention, so can you guys save that for us? Because I know when I log off it's going to go away, and I'd love to know who I need to follow up with. That'd be fabulous. I'll save the chat as a text file and email it to both of you. Thank you all so much, and congrats on a good meeting. I've heard good comments from folks who've been able to attend, so congrats to the COS team for pulling it together. Thanks, everybody. Yeah. All right, have a great Friday, have a great weekend. Take care.