All right, we have a couple of people logged on now. Hello, everyone. My name is David Mellor; I'm from the Center for Open Science. Jessaca Spybrook is at Western Michigan University, and Bryan Cook is from UVA here in Charlottesville. I'm going to give a couple of logistics and then pass it over to Jessaca to get us going. This is, of course, the virtual equivalent of the SREE conference. You should see our faces and the presentation in front of you: meeting the requirements of funders around open science, with resources and processes for researchers. You should be able to ask questions using the Q&A box. If you don't see it, there should be a little toolbar, either at the top or at the bottom of your screen. Feel free to submit a test question now to make sure that works. We have three presentations, 15 minutes each maximum, and we'll be here for an hour total. After each presentation we'll have a short amount of time for a few questions, and we'll save most of the questions for the end. I'll be monitoring the Q&A panel, so I will interrupt folks if something needs to be clarified on the fly. Other than that, feel free to take a moment and try out the chat window or the Q&A, just so I know it's working. We'll get started in a few seconds. Yes, unfortunately, the attendees' mics are muted. If you would like to speak, there's a little button to raise your hand, and I should be able to give you permission to speak. Sorry, that's authoritarian, but such is this virtual environment. So for now, I'll pass it off to Jessaca.

Great, thanks, David. I know you all can see me; I wish I could see you, because I love to present to an audience. Instead I'll be looking at David and Bryan, which helps me feel like this is a little bit more live. Thanks for joining the webinar today.
We're really excited to share with you some of the work we've been doing to coordinate around funder requirements, or guidelines, I should say, around open science, and how we can support the field by providing resources and processes for researchers who are aiming to meet these guidelines. David, if you could go forward. As for the plan for the webinar: I'm going to start by reviewing some of the funder guidelines, and then I'll talk about one of the specific open science practices we focus on, which is preregistration. After that I'll pass it off to David, who is going to talk about open data, open code, and codebooks. Then Bryan is going to talk about open access. As David mentioned, if there are brief clarifying questions after each of these sections, we're happy to answer those, and then hopefully we'll have a little time for some virtual discussion at the end. Go ahead, David. Thank you.

OK, so I want to start by introducing the three funders we focused on for this presentation. Specifically, we are looking at guidelines put forth by the Institute of Education Sciences and by the National Science Foundation, to represent federal agencies, and we also have Arnold Ventures to give a sense of foundations. Obviously there are ranges across foundations and across federal agencies, but we selected these because we feel that, particularly for the SREE audience, these are very relevant funders in our field. What I've done here, go back one second, is put up a collection of links to different resources: things like the RFAs and solicitations, guidelines around data management plans, and data sharing.
For IES, I found three documents that I think will be most relevant to folks as they think about these open practices, and the same for NSF and for Arnold. In addition, I also consulted with some IES and NSF staff, because these things are not exactly crystal clear as you're going through the guidelines, and I was trying to work through some of the nuances around what's required, what's strongly encouraged, and so on. So I'm going to give my best attempt at breaking this down, with support from these guidelines and from the staff as well. But I would also encourage you to seek out clarification as you're trying to meet what these different agencies have set forth. Go ahead, David. Thank you.

What we have here is a table: along the top I've listed our funders, and each row identifies a different practice we're interested in considering. We start with preregistration, and I'll click through so we can see what's happening across these different funders. It's important to recognize that an X here does not necessarily indicate a requirement of a particular funder. For example, starting with preregistration: at IES, for studies seeking to make causal claims, what we typically think of as impact studies, preregistration is strongly encouraged at this point. For Arnold, when you look through their materials, they note that preregistration should be done for any empirical study that involves statistical inference. Where I didn't add an X for a particular funder, it's most likely because I found there wasn't guidance around that particular practice. So again, I encourage folks to reach out if they have questions in these different areas. Next we consider open data, and when we look at open data, we can see that we have Xs across the board.
I would say open data is one that has been pushed for a bit longer at this point. IES requires that impact studies make their data available to the public, again ensuring privacy and legal considerations are taken into account, and the same holds true across NSF and Arnold. It's also important when we come to some of these issues around data, because they can be very sensitive: in the resources on the prior slide there is a lot of advice about what these different agencies are looking for in their data management plans, which for IES and NSF is often where they want details around open data.

The third practice I consider here is data documentation, which goes hand in hand with data. Here we're asking: are we providing codebooks to folks? Are we explaining what the variables are? Are we documenting these things so that others could understand what's in the data set? You'll see that, going along with open data, there's an expectation that we have documentation to correspond to it.

The next practice we considered is open code. Unlike open data, where we've been seeing a push in the field for a while, open code is relatively newer, particularly in education research. The idea is that we actually share our analytic code so that somebody could take our data, merge it together if need be, and replicate the analyses. At this point, of the three funders we have here, Arnold is the only one that speaks to this in its guidelines, asking that code be shared along with the data and the codebooks.

The final practice we're going to consider today is open access, and you'll see that I have two Xs here, so I want to clarify this a little bit.
In IES's case, it requires that grantees whose projects were awarded in fiscal year 2012 or after make their peer-reviewed scholarly publications available on ERIC, and there are specific guidelines that go with this in terms of embargoes and working with journals. On the Arnold side, their guidelines require that results from the research be open and publicly available for free. As I mentioned, it's important to note that these practices, and the guidelines around them, are still relatively new and emerging, so it's really important to keep checking updated solicitations from the different funders, as this is an area that is going to be constantly moving.

My role here is to take us a little deeper into preregistration. If you can move us one more, that would be great. OK, so first a basic idea to make sure we're all on the same page, because a lot of times terms get thrown around that mean different things in different fields. When I talk about preregistration here, we can be talking about two different things. One is just basic study information: putting out the study name, authors, funders, abstract, general study details. The second type of preregistration is that basic study information, the essential piece we need to know, plus a pre-analysis plan. By pre-analysis plan, in essence, what we're trying to get folks to think about is really delineating the details of their design and analysis plan: for example, a plan that distinguishes confirmatory analyses from exploratory analyses, and that identifies the primary outcome measures associated with the different research questions.
So it's really a comprehensive, ideally upfront, though it's not always upfront, set of information about what the study looks like and how it's going to be carried out. Go ahead, David, thank you.

Preregistration is definitely taking off across the social sciences, and one of the things that's happened is that several different registries have come into play. What I wanted to do here is introduce some of these registries, because one of the things that's important for folks to consider is: what's the registry that's going to work best for my research? I've identified five different places for folks to go: the AEA RCT Registry; the Registry for International Development Impact Evaluations (RIDIE); the Evidence in Governance and Politics (EGAP) registry; the Open Science Framework, which hosts a variety of registries; and the Registry of Efficacy and Effectiveness Studies (REES). A question you might ask is: OK, there are lots of options, how do I go about assessing what's going to be the best fit for me? If we go forward one slide, David. Thank you. I've laid out a few things for you to think about as you're considering where you might want to go with your study. The first thing you'll see is the sponsoring group.

We have one question; I'm going to let Theodore ask. That's okay. Sorry, I'm not even looking at the chat. No, it's okay; we're all trying this out on the fly. Theodore, you should be able to speak. Yeah, I just wanted to check if the slides are going to be available. Oh yeah. Great, thanks. We can email them; we're also recording this. Yes, this is being recorded, and we can email our participants with links to the slides as well.
Yes, we'd better, because otherwise the links within the slides wouldn't do any good, right? Okay, so I have the sponsoring groups listed up there. In most cases the group basically goes with the title of the registry, so not a whole lot of mystery there. The next thing I look at, go ahead, David, is the substantive focus of the different registries, and I think this is really important to consider as you're thinking about the best place for a particular study. Oftentimes the idea is that if we can create a hub for a particular topic area, we'll invite people to come there and it becomes a nice central source. Some of these registries are quite narrow in their substantive focus: EGAP, for example, with governance and politics, is going to host a pretty tight set of studies. Others are much more varied: the OSF, for example, welcomes a variety of topic areas across the different registries within it. So it really depends on finding a home that matches your study.

The other thing to consider, and David, if you can go one forward, is the type of study and design you're trying to register. Much like the substantive area, this can also range from very narrow to very broad. For example, the AEA RCT Registry specifically focuses on impact studies, and RCTs only. That is, if you're doing an impact study in econ and it's an RCT, a very logical home would be the AEA registry. EGAP, by contrast, takes all types of studies and designs. So again, use this as a guide for where your study might best fit as you plan.

I'm going to take a few more minutes to tell folks a little bit about REES, partly because I was part of the development team for REES.
I'll put that out there to begin with, but also because REES at this point is really focused on education studies, and the idea is to get folks who are doing education impact studies to use this central resource so that it's easier for others to find their work as well. A little bit about a REES entry: it includes eight different sections. To connect that back to what I was talking about before, basic study information comes in sections one and two, and sections three through seven are the details related to the pre-analysis plan. Section eight is an opportunity for folks to provide additional materials. For example, we see folks supplying documents with very detailed information about their proposed models; we see a lot of folks providing information about their logic models, which they might include as a separate attachment, or about their fidelity measures. It's really a place for links or documents with additional information that you think other people might be interested in.

Okay, a bit more about REES. It's a web-based interface, and the underlying design was built with the idea of making it a very easy-to-navigate system and a quick way to enter your study. And when I say quick, I mean for a study that has already thought through all of these details, and I think that's an important piece. Imagine, let's say, that you are funded by IES and you're strongly encouraged to preregister. You have a funded proposal, and a REES entry is not asking for anything that's not likely already in that funded proposal. So in many ways it's a translation of information into a different form. A little bit about that form.
One of the things REES has tried to do, which is a little different from some of the other registries in the field, is to elicit information via categorical responses. The idea is to increase the consistency of language across studies. For example, folks might say, I'm doing a randomized trial, and if I were writing this in narrative form, I might say, well, I have a multi-site design with students randomized within schools, while somebody else might say, I'm doing a randomized block design and schools are my blocks. We might be talking about the same design, but the fact that folks use different language can make it challenging to recognize that. So much of the data here is collected via categorical responses.

Another important feature I wanted to share, and this is something you'll see across all registries, is that they're set up to chronicle changes. The idea is a recognition that studies often change when they meet the field, and registries make it possible to share those changes and why they occurred. In the example on the screen, the first version of this particular study was published on February 6th. The authors went back in on August 28th and updated the regression model and description to include randomization-block fixed effects, and you can see what those changes are. You can go back to the original version, to the second version, or to the current version, which had changes logged on November 25th. The idea is to make this a detailed map of what occurred in the study over time. Next slide, please.

Another feature I wanted to share, because I think it's very useful for folks, is the search capacity. What we're hoping is that folks are registering their studies.
That makes these studies, and the information about them, available much earlier than if we were waiting for publications to come out. You can search the registry by a variety of different terms, study designs, and so on. Next slide, please. Then there are various ways to get the output from your search. If you're looking for an individual study, it's common that folks just want the PDF of that particular study so they can understand: what were the confirmatory questions? What is the analysis plan? What are the primary measures? Next slide. If you're looking across multiple studies, another potential use of this database is for those conducting meta-analyses and trying to get information on a set of studies. This is a sample of Excel output you might see across multiple studies. I can't remember what this particular search was on, but you might have searched on RCTs, or on science outcomes, and gotten back a collection of studies.

The last note I want to end on, as far as preregistration and REES, is that we're excited because REES has a new home on the ICPSR website at the University of Michigan. This has been a collaborative effort between the SREE team and the ICPSR team. The updated version has an improved user interface and functionality; we also updated the single-case design protocol, and we hope to share more details via a future webinar. It also gives folks access to some of ICPSR's other data and features. As of a couple of days ago there were 155 published entries, and we're hoping to see that number continue to grow, not only as more funders strongly encourage preregistration, but as folks really want to get their work out there and increase transparency. And with that, I'm going to turn it over to David. That's great.
If there are any clarifying questions, feel free to ask them now as I switch screens. We'll probably limit that to one or two questions and save the rest for the end. I'm going to go ahead and jump in, and then we'll have more time for questions at the end.

I'm going to pick up right where Jessaca left off, with a format called registered reports, which is a way to get reviewer input on these preregistrations and engage journals early in the process. Then I'll go into the bigger portion of my focus here: how open science practices, in particular data sharing, are helpful, and how you can satisfy these funder requirements, encouragements, or guidelines using a variety of tools. A little bit of disclosure first: I'm David Mellor from the Center for Open Science, director of our policy initiatives here. We build and maintain the OSF, a platform I'll talk about that supports a lot of the open science practices we'll get to. Our mission is to increase the openness, integrity, and reproducibility of scientific research. We do that through replication studies and metascience investigating the barriers to transparency and reproducibility; we advocate and educate on open science practices, behaviors, and policies; and, as I mentioned, we build infrastructure to help enable the types of activities we want to see happen. We're supported by a number of private family foundations and government research projects.

Registered reports are a two-stage study design and publication format in which the first stage of peer review occurs before results are known, before the study is conducted. A second stage of peer review then takes place just prior to final publication. You can think of that first stage as peer review of the preregistration document.
The reviewers are evaluating only whether the hypotheses are well founded, whether there's justification and importance behind the research questions, whether the proposed methodologies are feasible and sufficiently detailed, and whether the study is well powered. Typically they're looking for a well-justified effect size and a sample size powered, usually at 90%, to detect it. Often quality controls and positive controls are also evaluated at that first stage of peer review, particularly in recognition that null results are very possible when giving a promise to publish before the outcomes are known: how will the authors and the reviewers know that the study was conducted as expected? Those questions are considered, again, before anyone is biased by what the actual results are. If the answer to those questions is yes, if it's an important research question that the journal wants to publish, the study can be given in-principle acceptance: a promise to publish the results regardless of the main outcome. This is the point at which it would be appropriate to formally register the research design. We have seen that it's actually fine to do it a little earlier, but through that review process there is an expectation that the study design or some details are going to change. So if you registered on the SREE registry or on the OSF before submitting to the journal, and then got feedback through one or two rounds of revision, make sure those changes are updated on the preregistration. If you hadn't registered yet, go ahead and register for the first time once you have that in-principle acceptance from the stage one review. At that point the study is conducted and written up, and the second stage of peer review evaluates only whether the authors followed the approved protocol, whether the positive controls succeeded, and whether the conclusions are justified by the data.
Very importantly, questions about whether the hypotheses are supported, significant, novel, or impactful are specifically prohibited from consideration at that second stage of peer review under the registered report model. We know of 230 journals that include this two-stage peer review process known as registered reports, and several of them publish education research, Gifted Child Quarterly and Exceptional Children being two of the most prominent. Other journals, Learning and Instruction, Scientific Studies of Reading, Language Learning, and two large multidisciplinary journals, PLOS ONE and Royal Society Open Science, would also potentially be relevant to education researchers. You can find a complete list of details and other journals accepting this format at cos.io/rr. I want to give a plug here because it's a very popular format for authors: it really is helpful to get that assurance ahead of time that the results will be published no matter the outcome of the study. Reviewers tend to like this format better because their comments aren't doing a post-mortem on an already-conducted study; they have the opportunity to give recommendations and suggest improvements at the point in time when those can actually affect the upcoming study. So it's very popular with editors, authors, and reviewers, and I'd encourage you to take a look.

Next I'll make some points about open data and the open, reproducible research pipeline. Jessaca talked about how these funders are sometimes mandating, encouraging, or asking about plans to share data upfront, so addressing that is of course in everybody's interest when you're submitting that funding request. We also see a couple of other reasons. We know studies that have openly available data are cited more than comparable studies.
There's a competitive edge these days: a lot more mandates are coming down the line about various open science practices, and now is a good time to get going with this, to stand out in a crowded field and show what steps toward reproducibility are being taken. With data sharing in particular, there are a lot of questions, and it's not always easy the first time around. So trying it once, trying it on a subset of data that's easy to anonymize, or doing it with analytic code or research materials rather than individual participant data when there are ethical concerns or other justifiable reasons not to share data, those are all skills that require practice: deciding what to share, how to share it, and when to share it. Now is a good time to get on that bandwagon. And we see that folks who adopt these practices really are the primary beneficiaries themselves, especially their future selves, as they try to work through and build upon previous work.

I'll give a couple of examples of both of these. There have been several studies showing increased citation of articles that have openly available data. All these citations are at the end of the slides, which will be made available to everyone later. I wanted to point out a couple of these studies that looked into the relative citation rate of articles with openly available data compared to similar articles in comparable journals; about five studies have gone down that route so far and found similar results.

Beyond the mandates we know are there, the following funders encourage, incentivize, or ask about open data during the submission process. Jessaca talked about NSF, IES, and Arnold Ventures, obviously major funders of education research, and we also see a large number of other funders in related fields.
These likewise require, incentivize, or somehow encourage the use of open data practices, or at least the inclusion of data availability statements in the grant submission. You can see a more comprehensive list of funders engaging with this on our website.

The most straightforward reason data availability is helpful is that it's an organizational technique to help your future self. We've all encountered cluttered desktops and versions of files where it's hard to tell which one is the most recent. A lot of these open science practices are really about organization, data management, and reproducible workflows. They have a learning curve, but getting over that curve creates a much more efficient pipeline for your work, and for when you pick that work up six months later, or as the next student comes through the lab. This is a screenshot of a project on the Open Science Framework; the OSF is, again, that open source platform we build and maintain. There are a couple of features here that make it easier to pick up a project than going back into your own laptop or thumb drives looking for old files. This is an example of one study that has its preregistration, data, and materials organized for sharing, but also organized for the researchers themselves as they conducted their own study.

So how do you do open data? Prepare at the beginning: create a data management plan. I'll talk a bit about file naming conventions and point to some other practices, like version control and licensing for reuse, that are helpful to you and to others. A lot of these tips come from a fantastic article, "Practical Tips for Ethical Data Sharing" (Meyer, 2018), about ethical sharing of data from human research participants in particular. There's a link to this at the end as well, and we'll email participants the slides.
The framework she provides includes a couple of practices to avoid or be aware of when writing up informed consent or IRB requests. Be aware of promising to destroy data; that's currently in a lot of IRB language pulled over from previously used IRB protocols, so be on the lookout for it. Be on the lookout for promises not to share: if you make that promise, you should of course abide by it, but watch for it and expunge it from IRB plans submitted going forward. And don't promise that analysis of the collected data will be limited to a certain topic; that really prevents full use of the future data set.

Bryan has a question; I think you can unmute yourself. David, I apologize if you were about to cover this, but in my experience, IRBs are becoming much more aware, some more so than others, about sharing data, and they'll often have template language and recommended procedures. This is something most of us are just starting to get our heads around, including a lot of IRBs. So work closely with your IRB; they have almost certainly dealt with this before and have recommended language and procedures for going about it.

Yeah, IRBs are in the same community that we're all in right now, deciding on and reforming standard operating procedures to take data use and reuse more seriously. And ultimately, I think, this is in response to a lot of research participants who, as a couple of surveys have suggested, participate in studies not necessarily because of the benefit they see to themselves, but because they want to benefit the scientific field. Some of those older practices around data sharing, or promises to limit the analyses to a certain set of topics, really go against where a lot of the field and society expect this to go. So, of course, do get consent to share data, and incorporate these points into discussions with IRBs.
And I think most importantly, be very thoughtful about considering risks of re-identification. It's sometimes easy not to realize that beyond removing somebody's name, social security number, or email, a couple of demographic factors, for example where they go to school, what their income is, what their race is, and one more thing, can basically re-identify somebody. So be thoughtful about re-identification risks, and remove demographic variables if that needs to happen. As Bryan said, there's a lot of work going on in the IRB community about consent language and IRB approvals. This project has a couple of recommended clauses that have been approved by various IRBs, but of course each institutional review board or ethics panel is different, so feel free to use these resources to start a conversation, and continue it with the individual IRB that's relevant to you.

A lot of funders are requiring data management plans early on. Many folks have not created a DMP before, but there is a lot of guidance on what to include. The basic criteria are answering questions such as: how will your data sets be named and referenced, what file formats will be used, who will have access to the data and when, and how do you plan to preserve those data sets? There are more resources available in our help docs, and dmptool.org is very useful for helping you structure a data management plan.

On file naming and organization: this is an example of one project on the OSF, connected to a couple of different services, that shows good recommendations for easily discernible file names. Use a single directory for one project. Separate your raw data from your derived data, and keep that raw data in a read-only format; that's usually the best way to do it.
Separate the code from the data sets; keep them in separate directories next to each other. And every shared data set should have a codebook or README file giving basic information about what's included in the data sets or directory that follows. More information on all of these is provided in those references I mentioned just a few minutes ago. I'm going to mention a couple of other things on the OSF; we're getting close on time here, so I'll finish up in just a moment. Version control should be your friend. It's a good way to track the provenance of how, for example, a code set changes over time, and it's applicable to data sets, code, or any materials that you want to share. Here's one way to do it on OSF: you upload the analysis script, in this example, and if you upload multiple versions with the same name, you don't need to append version two, version three, version final, version final final. Just keep uploading the same file name, and the version control takes place on the back end, so you can go back if you need to. The final point I want to make is about licensing. Licensing is just a simple way to make it clear from the beginning how you grant permission for others to reuse this information. Without a license, it's ambiguous, and a strict interpretation of some laws would say it can't be reused, since all rights reserved is often the default. So encourage others to build upon your work with credit, and get recognition for that; having an openly available license will help you do it. And sometimes it's, again, a requirement by funders or journals to make sure that licensing is clear. I strongly encourage, especially for data that doesn't have an attribution requirement attached to it, simply putting it in the public domain in order to maximize reuse: simply saying that I don't own this data, society owns it, so please let others use it.
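To make that file-organization advice concrete, here's a minimal shell sketch of one way to lay out a project: a single top-level directory, raw data separated from derived data and from code, a short README describing the contents, and the raw data made read-only. The directory and file names here are just illustrative assumptions, not an OSF convention:

```shell
#!/bin/sh
# One top-level directory per project, with raw data kept
# apart from derived data and from the analysis code.
mkdir -p myproject/data/raw myproject/data/derived myproject/code

# A short README explaining what each directory contains.
cat > myproject/README.txt <<'EOF'
myproject/
  data/raw      - original data files, never edited (read-only)
  data/derived  - cleaned/processed data produced by the code
  code          - analysis scripts that turn raw data into derived data
EOF

# Example raw data file (hypothetical name), then make the raw
# directory read-only so the originals can't change by accident.
touch myproject/data/raw/survey_2020.csv
chmod -R a-w myproject/data/raw
```

The same separation works whether the project lives on OSF, in a Git repository, or on a shared drive; the point is that anyone opening the README can tell where the untouched originals are and where the derived products came from.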
And that again still encourages citation and credit, but it removes it from being a legal question, which is not usually how things are handled in the academic world. With that, I think we're just about at time, so I want to wrap this up and pass it over to Brian Cook. If there are any clarifying questions right now, I'd be happy to answer those. I think a couple were answered, so hopefully those are clear; let me check and make sure I didn't screw those up. Okay, let me just double-check... I will say no. We'll send these slides out at the end, and all the citations that I mentioned are available in the slide deck. We have Brian right here; let's see, Brian, this is yours. Yep, all right, timing looks like we're right on, but as is my wont, I've got a lot of information on here, so I'll try to move quickly. I have the good fortune of probably having the least nuanced and complex topic that we're going to talk about today, so I'll move through it somewhat quickly: open access, and one form of open access that I think has some particular advantages and is becoming more and more popular, preprints. This idea of making scholarship freely and immediately available has lots of different advantages for other researchers and research consumers, but it is also increasingly being targeted by funders, which is kind of our entree into all of this. I was just reviewing grants for IES, and it looks good to have established a record of providing open access to your products and to have a plan for doing so. So we'll talk about some different ways to do that here. Hit the next slide. A quick aside: I did a similar presentation on this a little while ago when COVID-19 was just something on the horizon. People were starting to recognize the importance of it, and the scientific community had kind of agreed that, because of the gravity and the timeliness of the topic, we can't wait for the typical peer review process.
We can't have important new research findings hidden behind paywalls from a lot of the research community, and certainly our research consumers can't access them. So there was an agreement: we need to get this stuff out there as soon as it's done, through preprints and open access. I'll talk about each of these very briefly, but you can see it is still the case, and historically has been the case, that most scholarship is behind a paywall, and if you're affiliated with a university that has paid a lot of money to a publisher for a subscription, you can access it. There are other ways to access it, but they involve you paying for them. The other open access options that I'll talk about briefly include bronze, hybrid, gold, and green, which have become much more prevalent in recent years, and I expect they will continue to rise in prevalence. Before we get into those different models of open access, some general advice: check the journal website for policies if you want to know whether and how you can make work open access, whether you plan to submit to that journal, have submitted, or have had work published there. Some of the things to keep your eye out for: article processing charges, or APCs; different options for archiving, essentially posting your work or different versions of your work; and embargoes, whether there are embargoes and how long those embargoes run for certain versions of your work. There's a link here for Sherpa Romeo, which compiles different journal policies. Some smaller journals and subfields may not have policies on Sherpa Romeo, and there is a fair amount of confusion, as Jess was talking about with the funders, where it's hard to sometimes discern what the actual policy is and what's a recommendation versus a requirement.
Sometimes you'll see things posted on a journal website that are contradicted by the publisher's policies, and oftentimes there's just a lack of information, or a lack of clear information, and that gets reflected in clearinghouses like Sherpa Romeo that are pulling that information from the journal and publisher websites. So it's not always crystal clear. When it isn't crystal clear, or there just aren't policies posted, check the publisher policy, and ultimately I recommend contacting the editors. In my experience, you've got a 50-50 shot whether the editor knows the policy, but then you should be able to at least find out what it is. Gold open access is where the whole journal is open access, and we're seeing more and more of these; they are primarily online-only journals, kind of a new breed of journals. There's a directory of open access journals out there. Typically, but not always, these charge an APC, or article processing charge, to publish there. They have bills to pay even though they're purely online, so they're going to charge you to publish there. There are general open access journals; PLOS ONE, from the Public Library of Science, is probably the largest and broadest, and they have an APC of $1,700 for a research article. F1000 has a sliding scale depending on the length that will run you anywhere from $150 to $2,000 to publish there. We're starting to see some in the field of education, AERA Open probably being the most prominent. Right now they're charging $700, though I believe, I forget when exactly, they had some introductory rates, a little special because they're a newer journal, and they also have different rates for students. Open Review of Educational Research is another purely gold open access journal: the electronic versions of the articles, and that's all there is, the electronic version, are open to everyone, and in almost all such journals you pay an APC to do that.
Hybrid open access is now, I don't know if this is strictly true, but I'm going to say, offered by almost all journals published by what I might call mainstream publishers, the big publishing houses. They have converted their journals to have a hybrid option: for the most part, journals in education continue to be traditional journals where work is behind a paywall unless you have a subscription, but there is an option to pay an APC, or article processing charge, for a particular article, so that in a journal that is otherwise behind a paywall, your specific article is open access and anyone can access it on the internet. Because these journals are still oftentimes print journals and have other costs, the APCs tend to be higher. I did a little bit of homework here: Elsevier has a list of different journals, and I just did a quick mean, $2,600, but that varies; the standard deviation is over $700. Sage has a blanket default of $3,000 to do this. For Wiley, the average is a little above $3,000. Taylor and Francis had a very long and varied list that I couldn't quickly get a mean of, but that list is there if you're interested. So with hybrid, you can make your article open, but it's within a traditional closed journal. Bronze is an interesting and increasingly popular option that you'll see, and this is where the authors haven't paid anything; it's a decision, oftentimes made by the journal editors, to make particular articles open within an otherwise closed journal.
So this may be a special issue or a topical feature that they're really interested in promoting; sometimes there seems to be little rhyme or reason to it, but individual articles are made open. The difficult part with bronze open access is you don't know whether and when that open access is going to go away. Journals are recognizing that the citation bump David mentioned, associated with making your data open, also exists for making your work open; I think it stands to reason that if more people are able to access your work, more people are going to cite it. AERA actually had their paywall go down inadvertently for a couple of months last year, and those articles saw quite a bump in downloads during that time. So in some ways it's advantageous for journals to do this and provide greater access to their work, but it's not part of their business model. Bronze open access is where an individual article is open for a time; this isn't something you would do to show a funder that you're going to make your work open, but it is something that happens with some frequency. I believe you can also ask the editor if they'd be willing to make an article open, and I think they usually use the term free to read. The worst they can say is no, and I would imagine there's a lot of no as the response to that, but... I'm 50-50, having asked it twice. Really? Huh, interesting. There are also, and this is often what you do see in the journal and publisher policies, certain publishers that have agreements with certain funders to make work open access, but those tend to be very specific to particular publishers, and I don't know of any, for example, with IES or some of the funders that would be more common in education.
Okay, green open access is what I'll spend my remaining minutes talking about. This allows for some level of self-archiving, and the most common version of this is a preprint: essentially the version of the paper before you submit it. You largely have control, or at least some level of control, over what you do with it, and different publishers, as you might expect, have different policies, so I've linked them here and you can briefly read them. Elsevier's language basically says that the version of a paper that is not formatted by the journal, the PDF or Word document that you have before you submit it to the journal, you can do whatever you want with, largely. There are some journals that don't agree with that, but the Elsevier policy basically is: as long as the journal hasn't added to it or enhanced it, you can post it out there. Go ahead, David. Sage's policy is pretty permissive around this too; they make a distinction between an original submission versus the accepted manuscript in some areas, but broadly you can post those. It's not the formatted version of the article, it's not a PDF of the published article, but it is largely a PDF of your Word document, and they say you can post those. Taylor and Francis says you can share the original manuscript, the version before you submit it to a journal, however you like. Wiley says you can share the accepted version after a 12- to 24-month embargo period. What they're basically talking about is what we usually refer to as preprints, and the language is not always consistent, and I'm running out of time here. Preprints are, generally speaking, the unformatted version before you submit, sometimes up until acceptance.
A postprint is usually the accepted version of the paper, and more and more people are referring to these just as papers or prints, because there are things that we might post that we never submit or that never get accepted and published, so they're not technically a preprint or a postprint. You can post these on repositories, they get a DOI, and there are options for licensing them. Let's skim through these real quick, David. There are lots of different preprint servers out there now, some general ones including preprint.org and figshare. I was involved with developing EdArXiv, which is a new preprint server specific to education. You log on to EdArXiv, and it is, I think, a surprisingly simple process where you just basically upload your paper and it is out there. The march of the preprints: this is just bioRxiv, which is a preprint server specifically for biology, but submissions there are growing dramatically, and I think education will follow suit to some degree; it'll be interesting to see how much it takes off. And here's a little more information; there are lots of resources out there on the internet. The same issues that David talked about regarding licensing your data apply here. Most of the preprint repositories allow for, and actually oftentimes require, a Creative Commons license, so there's some direction about whether and how others can use the preprint. My recommendations I think I've already covered for the most part, but I think it's good for the scientific community to post preprints: it gets everything out there freely accessible, and it gets everything out there much more quickly. Check the policies of your potential outlets; I do know of some journals in my field of special education that will not accept submissions if they've been preprinted. There is a concern, especially in education, because most of our journals follow double-blind peer review.
If you put it out there as a preprint, then people can see it. That's not unique to preprints; it happens with conference presentations, for example. But if that's a concern, you can post your preprint anonymously or wait until peer review has concluded. And think about building APCs into proposal budgets for larger grants. That's a relatively small part of your proposal budget, but it's going to show your commitment to open access to potential funders. And there are a couple of slides that I will let you peruse on your own, but one of the other benefits is that openly sharing our work cuts down the time to publication and brings greater attention on social media and in citations. Great, thanks. So we're just about at time. I don't have to log off immediately; I'm happy to stay and answer any questions, though I'm not going to speak on anybody else's behalf. I will remove the screen sharing so, to you, it'll feel a little bit more intimate. Let's see, hello everyone; just checking our question box here. Yeah, so we're at time. We hope you enjoyed the show. If there are any questions, feel free to submit them now. And I promise to send out those slides to everyone who registered for the webinar. David, will you also send a link to the recording, or will that be posted somewhere? Yeah, for the link to the recording, go to COS.io slash webinars, but I'll include that link also. I'll try to send the slides out in the next hour or two; the webinar recording should be posted probably tomorrow, and I'll send the link to where it'll be. Thanks. I guess as a concluding thought, I would offer the point that, and this is obvious, this is just dipping our toes into the water, putting out some resources, floating some ideas, which I think is a good first step. For some of these things there's a learning curve, and it takes a while, even after you kind of feel up to speed on how to do it.
It changes the workflow a little bit. So I think it's important to recognize that and to approach this maybe one piece at a time, with the idea that some openness is better than none, perhaps directed, especially if you're thinking about funding, by what your funder's requirements are: looking at the requirements, checking them off one at a time, getting good at that, and becoming more and more consistently open in that work over time. Yeah, there are so many options, and it can sometimes feel overwhelming; try not to be overwhelmed by that. Think of it as a smorgasbord. If any one of these activities seems more enticing to you, try that one first, and don't feel that you have to do all of them; just do one of them. But I would encourage everyone to at least try one in the next academic year, the next semester, whatever, for the next project, and build off of that; try to do two for the subsequent project. The first time you share a data set or the first time you preregister, there's a learning curve, but by the second and third time you'll have a system. Yes, all right. All right. Well, with that, I'm going to close the webinar, and I appreciate everyone coming on. I know this was kind of a last-minute thing, but I think it worked out fine. Everyone stay safe out there: social isolation, flatten the curve. Thank you.