Our goal today is to talk a little bit about some of the trends and mandates among research funders that reflect the open science practices that the Center here is a big fan of and likes to advocate for. There's a lot of activity and lots of frequent changes in this arena, so it's difficult to keep track of as researchers — you have a lot going on within your own work, and keeping track of this is one more thing. So David here is going to help us do that. In just a second, I will hand off to David and Nikki to introduce themselves, and then we'll launch into our material for today. David?

Thanks, Eric. I'm David Miller, the Director of Policy Initiatives here at the Center for Open Science. I work on our initiatives to increase rewards for open science practices, and I work with big institutional stakeholders, such as private and public funding agencies, on a lot of these topics.

Thanks, David. And Nikki?

Hi everyone, thanks for joining us. I'm Nikki Pfeiffer, Director of Product here at COS, working on the free tool that we have built and maintain, the OSF, and many of our other interface tools that support communities — researchers, societies, and funders as well — in their goals for open practices.

All right, thank you, Nikki. So, quickly, I'll preview our plan for today. We've just done introductions. David is going to talk a bit about some of the funder mandates that he not only follows and documents but, in many cases, plays a really significant role in forming. We'll take a moment as questions come in — use the Q&A field available to you in the Zoom interface to put your questions in, and we'll take a minute to answer some of those. Then Nikki will tell us a bit about how the OSF tool is built with this changing landscape in mind, and then we'll do a quick tour of some of those features.
So yeah, use that Q&A field so we can see questions coming in as we go. The session is recorded, so if you want to come back and reference this later, or send it to a friend, we'll send this material out to you afterwards. So David, are you able to share your screen? I might need to stop mine first. Yeah, stop yours — and here we go.

All right, thank you so much, Eric. As Eric said, I'm the Director of Policy Initiatives here at the Center for Open Science. We are a mission-driven nonprofit with a focus on increasing the openness, integrity, and reproducibility of scientific research. I always like to give a little introduction to who we are as an organization. If you're on this webinar, you likely know who we are, but just in case, I want to give a little introduction and some context about our organization. We work to achieve our mission of increasing openness through what is basically a three-part strategy. We look for evidence of best practices, and for lessons in the ability to successfully replicate empirical research findings, through our metascience initiatives to replicate and evaluate research practices. We advocate and educate based on those lessons learned through our community, policy, and incentive programs — that involves working with bottom-up organizations seeking to build consensus and communities of practice around these issues of transparency, and working with top-down organizations on policy change, making sure that incentives are aligned with ideals of how science should work. And finally, our biggest focus is building tools to enable the types of practices for which we advocate. The focus of this webinar is, of course, meeting funder expectations around open science practices. I'll be talking about what some of those expectations are, and then we'll transition to how to take the steps needed to meet them.
We are generously supported by private and federal research dollars that help support our mission, so I'd like to acknowledge that. The basis of this webinar is a resource page, cos.io/top-funders. It serves a couple of purposes. It's a resource page for funders themselves, to see examples of what their peers and colleagues in the research funding community are doing. And it's a resource for the research community, designed as a one-stop shop to see what these expectations are for a variety of funding agencies and private philanthropic foundations, what steps they're taking around these open science initiatives, and what the future holds in store. So it's a curated resource hub for both of those audiences. What I'm going to focus on is meeting funder expectations on a variety of topics: sharing open data and materials — I'll also go into a little bit of the how and why of doing that, though most of the how is what Eric and Nikki will cover in explaining what the OSF is capable of. I'll give a bit of a landscape of what funders are doing for preregistration — I'll define that and give some examples of how it can be incorporated into workflows. Likewise with registered reports, a publishing format that incorporates peer review before the study results are known. And finally, how funders are expecting researchers to collaborate and to openly share and disseminate research findings. Those are the topics I'll be covering, so let's get into it. What I want to talk about first — let me minimize part of my screen here to fix what you're seeing — is open data and research materials: what some of those funder mandates, expectations, and incentives are.
And I'll get a little bit into why these are important, both for slightly selfish reasons — how they bump up the impact of one's own work — and for the benefits these practices have on one's own workflow. So the first section, on funder expectations around open data and other research materials, is shown here. I would say most of the activity around open science practices in the funder community is around ways to incentivize, encourage, or increase expectations for data transparency. There are a variety of either mandates for open data or requirements to disclose whether or not data are available. The trend, for all of these organizations and for the funder community at large, is to work toward increasing expectations and eventually mandating that all data underlying research findings be made publicly available to the greatest extent possible given ethical and practical realities. Among the funding agencies listed here: the National Science Foundation has strong guidelines on how to share data. The Institute of Education Sciences will come up several times — they have what are known as the SEER standards, Standards for Excellence in Education Research, which set expectations for data transparency, reproducibility, and replication, and I'll mention later what they are doing for preregistration. Arnold Ventures has a strong mandate on open data, as does the Wellcome Trust, which across a variety of research areas either requires or strongly encourages sharing the data underlying empirical findings from research they support. The American Heart Association has strong disclosure requirements and strong expectations — perhaps changing expectations in the future — about what needs to be transparently shared underlying reported results.
Finally, the NIH just today came out with stronger final policies on their expectations for the data underlying research results, and encourages peer reviewers to look at data management plans so they can evaluate how appropriate, how critical, and how well thought out those plans are. I'll get into the importance of data management plans in a minute, but more and more funders will take them into account when evaluating whether or not work is appropriate for funding. So that is a really critical time at which to make sure that the expectations you're laying out early in the research life cycle are to be as transparent as possible, or to include plans for data transparency later in the research life cycle. Links to all of those policy documents and expectations — and to everything I'll be talking about today — are available at that resource hub, cos.io/top-funders. A little more on why open data in particular is important: there have been several documented pieces of evidence showing a citation advantage for empirical research findings that include links to openly available data, relative to empirical findings that do not. There are a couple of different studies here that saw a relative citation bump — between 1 and 1.6 times, or in some cases a two or three times mean improvement — for studies that have data available versus comparable articles that do not. And it's not only those two studies; we know of several more that are starting to see associations between open data and increasing citations for an article. There are probably a lot of possible reasons why this is true. Some of the journals that get relatively high citation impacts have, of course, good policies for data linking and data sharing. But the explanation that seems most plausible to me is that it's simply a more usable empirical finding.
If data are available, it's easier to include in a meta-analysis, for example — all of those statistics can be verified, or calculated anew if they weren't included in the original paper, and put into that meta-analysis. So there are a lot of potential reasons why, but the early evidence tells a fairly consistent story that there are citation bumps when open data are available. A purely logistical, workflow reason why open data is helpful is simply that the skills required to prepare data for later sharing and reuse are the same skills that help with internal reuse — with your future self, of course, or with collaborators in your research lab. This is not my desktop, but sometimes it feels like it is. We've all been in that situation, I think, where it can be quite overwhelming dealing with multiple streams of files. An improvement on that is the expectation, early in a project, that the workflow you're going through and the data you're collecting are going to be checked under the hood — open for investigation, either by peer reviewers or through post-publication peer review. Preparing a study with that level of transparency in mind at the get-go is helpful for your own internal organizational expectations, and incentivizes you to get your ducks in a row, so to speak — to make sure things are organized enough for your own usability and for others to use. This is a typical OSF project that is well organized by the authors, with code, materials, and data sorted into it. One of the underlying philosophies of the OSF is to create an online collaborative space for conducting research, which becomes the same place for later sharing of that research. That encourages the type of practices that let others take a look through transparency.
We'll get more into the how-tos of open data in a few minutes, but I want to mention a couple of important points at the beginning. As I said, preparation for data sharing means addressing a lot of these issues early on. Sorry — there might be a Q&A question coming through. Eric, I'll trust you: if you're managing the Q&A and there's anything that needs clarification at this point, go ahead and interrupt me; if it's something that can wait until the transition, we'll get it to you then. Yep, will do. Key points: inclusion of a data management plan, preparation for file management and version control, and inclusion of licenses for reuse. I won't go into too much detail about these, but I'll mention that resources are available. One thing I encourage folks to look at are these practical tips for ethical data sharing, with a couple of clear dos and don'ts. A couple of statements we see in IRB informed consent documents are kind of a legacy we've come across: promises to destroy data after a certain period of time — we want to remove as many of those as possible — promises not to share, and promises that research analyses will be limited to a certain topic. A lot of these are holdovers from earlier times, when expectations for transparency weren't so keen and it was thought that limiting any data that was collected was the most ethical way to proceed. We know through surveys that research participants are generally quite encouraging of data sharing; they expect their data to be used to the widest extent possible, as long as their personal, identifying information remains confidential. And these types of statements, of course, limit the applicability of the time and effort that research participants are providing. Again, if you do end up promising not to do these things, you should keep your promises — I don't want to encourage anybody to lie, to say you won't share the data and then go ahead and share it.
That's of course not what we want to do; rather, we want to shift expectations for transparency in the future and put that in the truly informed consent category. Do get informed consent to share — "informed consent" really is a deep phrase that emphasizes how important it is that the research participant truly understands what they're consenting to and what the implications are. Incorporate plans to share and to reuse, both in the informed consent and in the IRB protocols, and be thoughtful in considering the risk of potential re-identification. If you share lots and lots of demographic data, that can often pinpoint down to individuals, so be aware of those limitations. We have examples of informed consent and IRB language that have been used successfully to transparently share research findings, so take a look at those — a link is at the bottom of the screen, and these slides and materials will be made available after the webinar as well. Include in your data management plan how data sets will be named and referenced, which file formats will be used, who will have access, how and when data will be shared, and how data sets will be preserved. These are the kinds of standards that reviewers increasingly expect to see in data management plans, and not including this type of information will be increasingly detrimental to one's funding applications. There are more resources available for both of these at the DMPTool (dmptool.org) and in our help docs.
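As a small, concrete illustration of the one-directory-per-project conventions described above, here is a sketch in Python. All directory names here are illustrative conventions only — not OSF or funder requirements — and the project name is a placeholder:

```python
from pathlib import Path

# Illustrative one-directory-per-project layout: raw data kept separate
# from derived data, code kept separate from data, plus a README that
# tells a reader how to open and analyze everything.
def scaffold_project(root: str) -> Path:
    project = Path(root)
    for sub in ("data/raw", "data/derived", "code", "materials"):
        (project / sub).mkdir(parents=True, exist_ok=True)
    (project / "README.txt").write_text(
        "Project layout:\n"
        "  data/raw/     - untouched source data\n"
        "  data/derived/ - cleaned data produced by scripts in code/\n"
        "  code/         - analysis scripts\n"
        "  materials/    - surveys, stimuli, protocols\n"
    )
    return project

# e.g. scaffold_project("my-study")  # "my-study" is a placeholder name
```

Scaffolding the structure up front makes the "separate raw from derived, separate code from data" habit the default rather than a cleanup task at the end.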
I'm not going to go much more into this, except to show some examples of well-named files and figures — there will be more on data sharing and file uploads on the OSF in a few minutes — but I wanted to give some examples of well-organized file naming formats. A couple of key things to remember: use one directory for one project, separate raw and derived data, separate the code from the data, and include README files, which are good instructions describing what to do to open and analyze the data. Finally, I encourage open licensing for reuse: CC0 public domain for unlimited remix and reuse, or CC-BY, again a very open license that encourages others to use the data sets being shared. This sets expectations early on for how data can be used, and it's perfectly appropriate to do so. Be careful about putting more restrictive licenses on collected data, because that could very well conflict with your funders' expectations. The next topic is preregistration. I'll define it a little and then point to what some funders are doing to incentivize or increase expectations around preregistration. It's the act of specifying in advance how data will be collected and analyzed for a study, and submitting that plan to a study registry, either to make it publicly available immediately or public after an embargo period. It helps to open the file drawer, so that a better understanding can be gained of how much research is conducted every year, regardless of how many results eventually become published. And critically, it makes a clear distinction between planned research — confirmatory hypothesis tests — and unplanned, exploratory discoveries. Both are critically important, but keeping them separate is important.
When one is in the context of confirmation, a lot of the basic statistics that are ubiquitous in empirical findings presume these conditions are true — this is traditional hypothesis testing. The results of this confirmation are held to high standards of rigor, and we want to minimize false positives: we don't want to say there's an effect unless there is a certain amount of confidence in it. Again, p-values and a lot of our traditional statistics really presume this type of context. In comparison, when one is in discovery mode, looking for unexpected trends, you really want to minimize false negatives. The most common example is, of course, the discovery of antibiotics — a purely unplanned discovery, seeing how growth rates were affected. You obviously don't want to miss those unexpected findings; you don't want to say nothing's going on and miss out on a potentially worthwhile research agenda or research question. But the context of discovery presumes that the finding will be confirmed at a later point, so when reporting exploratory findings, that's an important distinction to make — don't make strong inferences based on those types of discoveries. The purpose of preregistration is to make a clear distinction between these two modes — not to value one over the other, but to provide a marker, and a reminder for oneself a year from now, of what the planned analyses were and what the context was at that time, and to free oneself to go into purely discovery mode once the expectations of confirmation are met. They complement each other very well, but mixing the two can be problematic. It's problematic because it's in some ways incentivized to present exploratory results as if they were confirmatory: it makes the work look more surprising, newsworthy, publishable — but it comes at the expense of its credibility. That is really what the focus of preregistration is about.
When one preregisters, the expectations are that one should include a link to the preregistration in any preprint or article that comes from it, and report the results of all the analyses that were included in that pre-analysis plan — if there were 10 hypotheses and 10 analyses, include all 10, not just the one that was most exciting. Any new analyses added to the work must be made transparently clear, ideally in a separate section, and whatever changes occurred to the methodology should be documented and shared. There are examples of that on the website as well — you can take a screenshot, or again, these slides will be made available afterwards. Many funders are taking steps to incentivize preregistration. The National Science Foundation encourages one to link to a preregistration as a preliminary research output. The Institute of Education Sciences — I mentioned their SEER standards before, the Standards for Excellence in Education Research — strongly encourages the use of preregistration. Arnold Ventures requires it when conducting inferential studies: taking a sample and making inferences about a wider population. The Dutch research funder ZonMw requires it in certain contexts, as does the German Federal Ministry of Education and Research. PCORI, a great funder of health-centered research created about eight years ago, does a great job of requiring preregistration not just for clinical research but for pre-clinical health research. And they take an extra step, which we strongly encourage and think of as kind of a level-two approach: verifying that the results were reported consistently with what was in the registration. They have a database of reported results that are internally reviewed by PCORI staff before being presented on their website. That doesn't preclude other publication; it's just a good way to really demonstrate what was planned and what was found.
Finally, the National Institutes of Health (NIH) has been working for a while to determine the best definitions of what should or should not be considered clinical research, and therefore fall under mandates about what needs to be registered, and we are working with them in many regards to help clarify those expectations and make registration as easy as possible, no matter where it's registered. You can see links to all of those documents and policies, again, at the website I've mentioned a couple of times. Registered reports are a step above registration, in that the proposed plan is subjected to peer review. Many funders are taking steps to encourage the use of registered reports in their pipeline by partnering with journals to review proposed studies. After the proposed plan undergoes a few rounds of peer review, there can be both funding for the proposed research and a promise to publish the results regardless of the main outcomes found. These funders — the Children's Tumor Foundation, the Global Research Awards for Nicotine Dependence, Cancer Research UK, the Flu Lab, and the Templeton World Charity Foundation — all work with various journals so that submissions become fundable if they pass that round of peer review before the results are known, and there are more such initiatives in the works, so we encourage researchers to prepare for this. There are about 250 journals that accept registered reports at this point, so we encourage researchers to submit to those, and as more funders award funding for a successful stage-one peer review, these will become even better incentivized. The registered report model is a two-stage peer review model, as I mentioned. The first stage really focuses on whether or not the hypotheses are well founded.
This is all taking place before data collection starts. Are the methods and proposed analyses feasible? Is the study well powered? And, very importantly, is there some test of fairness — have the authors included positive controls to demonstrate that the study will be conducted to a high standard of rigor? When those are only evaluated in a post-hoc manner, all sorts of biases occur depending on whether or not the study quote-unquote worked. So it's best to set those fair criteria up before knowing what the results are. If, after one or two or a few rounds of review, editors and reviewers agree that the answer to those questions is yes, then the study can be given an in-principle acceptance: a promise to publish the final results no matter how they turn out. After the study is conducted, there's a quicker second round of review: did the authors follow the approved protocol? Did those positive quality-control steps succeed? And are the conclusions the authors are drawing justified? Very importantly, reviewers and editors are not looking at whether the results are significant or impactful at this point — those are unscientific reasons for disseminating evidence, and they do not come up in that second stage of peer review. There's a complete resource hub available for authors, reviewers, editors, and funders at cos.io/rr. All right, wrapping up — just two more quick notes from me about funder expectations, first for collaboration. The National Science Foundation has several grant programs that specifically include calls for collaborative research projects, the idea being that teams working together will of course have a bigger impact, and collaboration increases transparency. There are a lot of good anecdotes out there about teams trying to replicate each other's work — it works out several kinks that are often harder to uncover if you don't have a team working with you at the same time.
So we see several agencies with calls for collaborative research grants, and collaboration is one of the main benefits of the Open Science Framework — it's really designed with that in mind, for working across space. Finally, there are increasing expectations that the results of funded research should be made freely available. A lot of these are publicly funded agencies or charitable organizations that want to show their donors and policymakers what the outcomes of the research are, and don't want those outcomes behind a paywall. Many funders are behind what's well known as Plan S — Coalition S, a coalition of funders advocating that work resulting from projects they financially support be made immediately available to everyone. We encourage folks to take a look at what your particular funder requires. And as a complement to that philosophy — quickly disseminating without paywalls via preprints — there are many preprint platforms across several disciplines: a lot of social scientists, behavioral scientists, and psychologists have platforms available on the OSF preprint servers, as do many others. So with that, I will put up just a couple more resources. You've seen these links a few times; feel free to take a screenshot, or just wait for the slides to be shared afterwards so these links become available to you. I'm going to stop sharing my screen, and we'll see if there are any questions that should be answered right now, or we'll transition right away.

Thanks, David. I do have a few here. Diane asked a couple of good questions — probably of special interest for our funder and infrastructure friends in attendance today. There are two, and I think I can tie them together.
How do the funders check that grant recipients have complied with those promises — with the statements they included in their proposals? And related to that, if there is data they've assured they're going to share, where would the version of record be, especially if their data ends up being included in multiple locations?

Right, that's a great question. A lot of funders are taking the approach of reviewing what the proposed data management plan includes — a well-thought-out data management plan has a well-laid-out plan for how data will be collected, organized, and preserved. Some are stopping at that stage, and of course it takes a little bit of trust at that point to see that it will be followed, but it is a good signal to the funder and the reviewers that one is taking this seriously. Some are taking the slightly more active step of looking at the interim progress reports or final reports, asking one to fill out a field: please include links to any research outputs, including data. Some of the strongest are considering the biggest stick available at any given point: future research proposals and evaluations could be affected based on whether one followed the proposed plan. There's a lot of active discussion about how one would enforce that type of expectation, because of course it requires a fair assessment of whether or not one followed the plan. But a demonstration in the research grant — here are past examples of me following my data management plan, or here are past examples of preregistering or sharing materials or code — is a great signal that the promises in one's proposal will be kept.
So there are some elements of trust still in there, which we don't want to get rid of entirely, but it improves the assessments reviewers can make when there's evidence of past acts of transparency.

Thank you, David. We have an interesting question here that maybe both you and I can touch on briefly. Lisa asks whether COS works with large research institutions to develop more tailored solutions that are widely applicable across a broad range of disciplines.

Yeah — I think Eric will get to that in a little bit. We have institutional landing pages that allow all the research being conducted at an institution to be discoverable in a single place. That allows both the administration within that institution and members of the public or colleagues to see, for example, what data sets are being made publicly available from the research going on at that institution. So there are branded places that include more specific guidance and more specific standards for institutions. Another example would be what we call OSF Collections, where an individual journal, society, or funder establishes set standards for what needs to be shared. A good example there is the International Life Sciences Institute, North America — ILSI North America has an OSF collection where they expect their funded researchers to provide some or all of the data or materials associated with their research findings.

Awesome, thank you, David. I see that there are some hands raised and some other questions. I'm going to make sure we come back to those if we have some time, but I do want to make sure we have time to get everything in.

Diana's given me a great question — I like your passion. There are zero journals that really insist on data sharing. Sometimes there's kind of a level-one approach, which in the TOP Guidelines we call statements of whether or not data are available.
And of course, saying no in that case is an acceptable statement. It's really the work of peers to remind others that that's no longer acceptable, so that better statements show up in those data availability and disclosure statements — not just a "no," but here are the precise steps to obtain the data.

Cool, thank you, David. So folks that have their hands up, hang in there; let us continue and we will try to get back around to you, and if that doesn't work, we'll certainly respond to your questions after the session completes today. I'm going to come back to Nikki for a few minutes, to tell us a little bit about how we get from all of that activity in the funder communities to thinking, here at COS, about how the OSF does, will, and should work to align with what those research communities really need.

Great, thanks, Eric. And thanks, David, for that great review of the funder landscape and the requests we're starting to see trends in. It's really great to see so many funders actively seeking these open practices among their grantees. As Eric said, we're going to talk next about how you take it from the concept of "these are things I should be doing or striving for in my research workflows" into a more implementation stance. Eric is going to do a lot of that demonstration, but as he said, I wanted to give a little background on how the tools he's going to show you came to be, and how that continues as part of COS's mission. The OSF is the backbone of this: a free, open-source tool for researchers that provides a collaborative project management space. A lot of words thrown together, but I think you'll understand once you see it.
But it supports those workflows across the research life cycle and all of the things that David mentioned: collaborating with your teams across organizations and across universities; documenting and sharing data, both publicly and privately (we understand that sometimes there's some staging that needs to happen, and part of that is supported on the OSF, even controlled-access scenarios); preregistration of study designs and pre-analysis plans, which is also part of the integrated workflow of OSF; and then finally sharing those findings with the preprints that David mentioned as well. Some of the things that we've developed have obviously been heavily shaped through collaboration with communities of researchers and funders, all aligned with the shared goal of open practices. One example: we're currently working closely to bring preregistration much closer to disciplinary researcher communities through the infrastructure we built that enables customized preregistration templates. Not that every researcher gets to individually decide how to preregister, but we're working through understanding some of the challenges and gaps when you're trying to preregister qualitative, secondary data, longitudinal, or preclinical research, where the specific needs of laying out your preregistered plan are different. So we're working together with those communities and funders alike to figure out what the standard or best practice is, and implementing that on the OSF so it's available. Similarly, we've been working over a number of years to support more data collaboration, documentation, and sharing on OSF. We've done this through integrations: we've actually implemented 11 different storage provider integrations, and this supports all of this just by making it a little bit easier.
It's about meeting researchers where they are in their workflows. You might use a specific storage provider for your data, and having to move data actually starts to break down some of those really good practices David was talking about: versioning, documenting, and keeping track of the historical record. Being able to attach that storage to the OSF, where you'll see with Eric that there's version control and there are activity logs on projects, helps, like David said, future you as you move along, so you don't have a mess at the end to clean up. And it eliminates the need to move data around. I could go on and on about how we've collaborated. This is my favorite part of the work that we do, and it's something we're constantly doing. Happy to hear any questions, concerns, or comments on that, so feel free to reach out to us. But basically we collaborate to understand these gaps in infrastructure and partner to bring solutions, making these open practices possible and easy to implement. And that also supports the funder side. So like David was talking about, registries, collections, preprints, and other mechanisms enable more discovery into what researchers are doing and make it easy for researchers to report out as well. So with that, I'll let Eric take it away with showing the more detailed workflows of what we've been describing. All right, thank you, Nikki. So this will be a whirlwind tour, but many of you that are here are pretty familiar with OSF already. Those of you who are veterans probably aren't going to see things that are brand new, but for those who are still orienting, I think this will be helpful to really connect what David and Nikki have referred to with the practice of creating and sharing material through the OSF.
So today, just to string it all together, we're gonna assume that we had a grant, an NSF grant maybe, and part of that grant was to hold sessions like this, and we need to share our material from these sessions. So I'm gonna create an OSF project for today's webinar using some of the material and titles that we've already generated. One of the things that we had a question about was how this aligns with research institutions and how they can both advocate for practices like this within their communities and also recognize when those communities are really embracing them. Part of that is through a suite of tools built into the OSF called OSF Institutions. I'll show you what the result of this looks like in a moment, but because I am part of two organizations that are members of OSF Institutions, I get to choose from those affiliations that are relevant. A lot of what I decide right here as I'm setting up a project, as well as things that I'm gonna do on my project shortly, I can come back and modify and continue to evolve. The one thing that I need to make a decision about early is where my information is gonna be stored. This is another part of OSF adapting and adjusting to global research policy: recognizing that there are regions where data security is especially important. It's important everywhere, but part of the approach in some regions is to ensure that data is stored regionally. So that is something we have embraced with our Google Cloud storage, having storage locations in some of these regions, including Australia, Canada, Germany, and the US. This you won't be able to change once you've made the decision, so it's one to think through while you're setting up your project. Then you get to start off by including a quick description before we launch into our project. And all I needed to get to this point, where I've now created a project on the OSF, was to sign up for an OSF account.
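As an aside, the project setup walked through here in the browser can also be scripted: OSF exposes a public JSON:API-style REST API (API v2). The sketch below only constructs the request body for creating a project node. The endpoint and attribute names follow OSF API v2 as best understood here, but the region IDs and token handling are assumptions you should verify against the live `/v2/regions/` endpoint before relying on them.

```python
import json

# OSF API v2 endpoint for creating project nodes (would be sent as an
# authenticated POST using a personal access token).
OSF_NODES_URL = "https://api.osf.io/v2/nodes/"

def build_project_payload(title, description, region_id="us"):
    """Build a JSON:API request body for creating an OSF project.

    As noted in the demo, the storage region is fixed once the project
    is created, so it has to be chosen here. The region IDs ("us",
    "ca-1", "de-1", "au-1") are illustrative assumptions; query
    /v2/regions/ for the authoritative list.
    """
    return {
        "data": {
            "type": "nodes",
            "attributes": {
                "title": title,
                "category": "project",
                "description": description,
            },
            "relationships": {
                "region": {"data": {"type": "regions", "id": region_id}}
            },
        }
    }

payload = build_project_payload(
    "Meeting Funder Expectations Webinar",
    "Slides and materials from today's session.",
)
print(json.dumps(payload, indent=2))
```

Sending this body with an `Authorization: Bearer <token>` header would create the project; the sketch stops at constructing it.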
It's completely free. As Nikki has pointed out, there are lots of other things you can do on the OSF, or your institution or your funders or publishers might do on the OSF, but your account is always gonna be free. So we've set up our project here, and right away I know I have some contributors that I need to include as part of my project. There's a couple of ways to add those quickly from the interface. You can see the existing contributors, myself, but I can also add Nikki and David for their contributions to our session today. They should be pretty easy to track down on the OSF. There is Nikki right there, and we'll add her. One of the things that David pointed out was that you have lots of flexibility in terms of access control, and that's not just who you include in your project but also the permissions that those individuals have to make changes within those projects. So for community members, even on something that is gonna remain private, like this project is right this moment, I can give read access; they won't be able to make modifications, but they can come and see all our content. Read-write contributors will be able to make modifications, but they won't be able to delete the project or to register it, which we'll come back to in a moment. And then administrators can do all of the above, just like I can as the creator of the project. So let me add David in here and make him an administrator as well. And then when we return to the front page of our project, we will see our new contributors added. There we go. And because we're all affiliates of COS, we have our affiliation badge here included on our project. Quickly, I'll show you what David was referring to for researchers that are part of one of these institutions that partner with us: all of those institutions have a landing page.
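The three permission levels described above (read, read-write, administrator) map onto OSF's contributor endpoint as the values `read`, `write`, and `admin`. This is a minimal sketch of the request body for adding a contributor to a node; the endpoint shape follows OSF API v2 as best understood here, and the user ID shown is a hypothetical placeholder, not a real account.

```python
# The three OSF access levels, as the API spells them.
VALID_PERMISSIONS = ("read", "write", "admin")

def build_contributor_payload(user_id, permission="write"):
    """Body for POST /v2/nodes/<node_id>/contributors/.

    "read" can view a private project, "write" can also modify it,
    and "admin" can additionally delete or register it.
    """
    if permission not in VALID_PERMISSIONS:
        raise ValueError(f"permission must be one of {VALID_PERMISSIONS}")
    return {
        "data": {
            "type": "contributors",
            "attributes": {"permission": permission},
            "relationships": {
                "users": {"data": {"type": "users", "id": user_id}}
            },
        }
    }

# "abc12" is a placeholder user GUID for illustration only.
print(build_contributor_payload("abc12", permission="admin"))
```

Validating the permission value up front mirrors what the web interface enforces with its three-option menu.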
And I think Nikki shared the link to those institutions. This landing page is all of the public material, the projects and the registrations, that affiliates of those institutions are working on on the OSF, and we have a whole lot here at COS, so this will load all day and I'm not gonna sit here and wait for that. But you can check out a lot of our other institutions and the work that they're doing, and this project that we're working on here will show up once we make it public. So we added a description when we first set up our project; we can come and modify this if we need to. If we need a really detailed breakdown of what's included in our project, or instructions for contributors, particularly if you have lots of contributors that contribute in multiple ways, we have a wiki that uses Markdown, so you can create detailed wikis with lots of instruction. You can even embed files, images, and videos to help orient visitors to your project and to your material. And Nikki mentioned just a moment ago some of our storage solutions. I'm gonna go ahead and connect one of those, because I already have our slides in one of our Google Drive folders. I can connect that, and I don't need to go and find the file again, because it will already be included in my project once I successfully connect and link this folder. So when we look in our Files tab here, we'll see all of the files that have been included. In this case we have files in two of our different storage providers, Google Drive being one and the native OSF Storage being the other. So I didn't have to upload this file anew, because it was already included in my Google Drive folder, but despite that, when we look at the file here in OSF, it's gonna render for us, and we have many, many file types that will render in the OSF so that you're not having to bounce around and change providers.
You also have a version history. Several of our providers will keep the version history feature intact, including OSF Storage and Google Drive, which both of these files are in, and you can view or download those previous versions. So if this presentation were to go through changes for years and years and years, and I kept uploading new versions, you'd always be able to come back and see how this project or this presentation has evolved through the version history. The most recent one will be the one that's publicly visible up front, but visitors could always come and see this material. And if I needed to upload a new version, I could do that as well, and it would be included in that version history. As long as I keep the file name the same, it won't just create multiple copies of the same presentation but will instead become a new version here. One of the elements that David mentioned as a practice that can help with peer review and sharing: when your project is private, like this one still is, I may have a peer reviewer or someone else that I need to look at it for me. If we go back to our Contributors page, we have this section for view-only links. So even for a private project, I can create a link that a visitor would be able to use to see this material without having to be added as a formal contributor, as Nikki and David have been here. I can also anonymize our contributor list with that link, so that if I need to send this for peer review and it needs to have no identities included, then that link will satisfy that. So we'll see a list of our view-only links here, and I'll show you what that looks like quickly. We have the same data, the same files, the same description, but our contributors have been anonymized with our view-only link here. And if someone were to visit this project without such a link, they would normally run into a page indicating that it's a private project.
They can request access if they like, but they won't see this material unless they have a link like this or are added as a contributor. So one of the real key workflows that David discussed was registrations, or preregistering your study before getting into data collection: describing your methods and hypotheses and some of that material before you go and do your experiments. Obviously this project here doesn't quite apply, but I will show you where you get to the point where you can submit a registration. On your project page, whether you've just started or you have all the data and you're ready to take a snapshot of this material, the Registrations tab is the place to start. In the very near future you will be able to start a registration without having to start a project first, which is very exciting, but for right now you will use that Registrations tab to start a new registration. There are multiple templates available that vary based on the communities that utilize them, and in some cases funders request that specific templates be used. The questions and fields within these will vary a little bit, and we try to provide some guidance for you if you're not quite sure which one you should use; there's some material here to help you think through that. I'm not gonna complete a draft here, but just so that you have an idea of what this looks like: it will import material from your project, and it will archive the files and other material that is in your project.
You'll also have the opportunity to clarify other details, including licenses that are separate from your project, if that needs to be the case, and your subject areas. And then we see here where we're gonna include our design plan, sampling plan, variables, and other elements. I'm not gonna submit this and create a registration that doesn't quite belong, but that will give you an idea, and certainly we can share some good examples of preregistrations in our follow-ups so you can actually see a completed registration. Quickly, I'll point out the licenses that David mentioned and showed, I think, in a screenshot. We have lots of licenses available, including the ability to add your own. Of course, we like to make this material as available and shareable as possible, but you can add your own license if needed. And then finally, the preprint. I went ahead and filled out a preprint submission for one of my papers so that we won't spend time filling all the metadata out here, but you get an idea: we're submitting our title, licenses again, and information about what subjects or disciplines our preprint may be related to. And now I've got a preprint here on OSF Preprints; like registries, we have multiple preprint services that are community- or regionally-oriented that you might submit to. But now I have a preprint, a version of my work that I can create a DOI for and share with my community for feedback. So that is the super fast look at the OSF and some of the elements that we talked about today. I'm gonna stop and take the last couple of minutes here to see if we have some new questions. Lisa asks: I read that OSF has or will be implementing data storage limits; for some researchers the cap is not sufficient for even a single experiment. Is there a recourse, and how does one gain access to local storage provided by OSF Institutions? Well, I can pick that one up. So yes, next week actually. Lots of you have seen these communications already.
There will be storage limits for projects that rely on OSF Storage. So if you, like myself here, were to connect Google Drive or Dropbox or one of those services, then these caps don't apply to those at all. They apply only to files that are included in OSF Storage, and those limits apply per project or component. So if my experiment is gonna have lots of one-gigabyte files, perhaps they're audio files or something, instead of putting dozens of those into my project, I can create new components, based on the sequence that those experiments were done in or the individual that managed those experiments, and that storage will be spread across those components instead of all accumulating on the project. So those are a couple of quick approaches that will help researchers avoid those limits. Connecting to local storage is also an option. For an institution that becomes a member of the OSF Institutions service, one of the things we can talk about is integrating their local repository, or the cloud repository service that they subscribe to, so that their researchers have another option on top of these, or so it helps them satisfy some of these other institutional data sharing requirements. That is an option that we can help with, but we would wanna chat about it, so please do write to me so we can have a conversation. Diana also has a question: templates for conditions for using and sharing data? I'm not sure if I'm reading that right. No, you can interpret that one, David. Yeah, so I think there are templates for both data management plans, and there are templates available for informed consent language to show under what conditions data would be shared or not. Some of the older language that we see in old consent forms specified in advance why one would be allowed to use data, and that is or is not appropriate depending on the conditions. But I think we are just about at time, so I wanna say thank you to everyone for listening and participating.
If there are any other questions, we'll follow up with some communications and resources, and we thank you for your time. Thank you, David, and thank you, everybody.