All right, let's go ahead and get started. Thank you all for joining us today. My name is David Shear. I'm representing Mendeley Data today. I'm joined by my fellow colleagues from our metadata subcommittee through our GREI project. And today we're here to give you a GREI Collaborative Webinar on our work regarding metadata and getting community feedback. So we're pleased to have you here today. We have a lot of things we'd like to discuss with you, things we'd like to share. And most importantly, we'd like to get your feedback. We have a jam-packed agenda for today's discussion and presentation. So to start things off, we wanted to give you an introduction to our generalist repository initiative, who our repositories are, and what our objectives are. For those of you that are new to GREI, we'd like to give you just a brief overview. We'll also talk a bit about our collaboration model, which we like to call our co-opetition model. And from there, we'll talk about our metadata recommendations and how these recommendations relate to the use cases that we've established as part of the GREI initiative. We will also talk about how our recommendations align to the DataCite metadata schema, and we'll also talk about the partnership that we have with DataCite as part of this overall initiative. We do have some interactive sections as well. So we have a few poll questions that we'd like to ask you as our attendees. We have three questions, so we'll take time from the presentation to pause, move to our polling period, and then move back to discussing our implementations of the metadata schema and how it aligns to our use cases. And we actually have a couple of our generalist repositories that have started the process of implementing changes to the metadata schemas in their repositories to align to the recommendations. So we'll hear two examples, from OSF and Dryad, and how they're progressing with the recommendations.
We'll then talk about what's next for the initiative as far as the further implementation of the metadata recommendations and how they align to our use cases. And we'll also talk about how you can further participate and provide community feedback to the participants here today. So I'm gonna start now and just give you a brief overview about our Generalist Repository Ecosystem Initiative, or GREI. So GREI is actually an NIH-funded project, and it was designed to make it easier to find and reuse NIH-funded data. The initiative is intended to supplement the domain-specific repositories that are available for data deposits, and these are critical components of the NIH biomedical data ecosystem for data sharing. The initiative builds on the findings from the 2019-2020 NIH Figshare pilot and the NIH workshop on the role of generalist repositories to enhance data discovery and reuse. The initiative includes seven established generalist repositories with a mission to establish a common set of capabilities, services, metrics, and social infrastructure; raise general awareness and facilitate education and training on the FAIR principles and the importance of data sharing; and more. The initiative also aims to improve discoverability of data within and across the participating generalist repositories and to lead to greater reproducibility and reuse of data. The GREI repositories are represented by the names that you see here, which include Dataverse, OSF, Figshare, Dryad, Mendeley Data, Vivli, and Zenodo. And you'll actually hear from many of the presenters today that represent our various generalist repositories. Lastly, along with our generalist repositories, the initiative also includes DataCite. The GREI repositories register DOIs and associate their metadata through the DataCite schema. With DataCite, a global community is focused on ensuring research outputs and resources are openly available and connected so that their reuse can advance knowledge.
Through alignment with DataCite's metadata schema, the GREI repositories register consistent metadata, enabling connectivity of datasets with other digital objects such as articles, researchers, research organizations, grants, and funders. The long-term vision of GREI is to develop a collaborative approach for data management and sharing through inclusion of the generalist repositories in the NIH data ecosystem. GREI also aims to better enable search and discovery of NIH-funded data within the generalist repositories. With this in mind, the initiative is built around 10 primary objectives, which include supporting the discovery of NIH-funded data, adopting consistent metadata models, facilitating quality assurance and control, connecting digital objects, cataloging use cases supported by the initiative's goals and outcomes, implementing open metrics, preparing training materials, and conducting outreach and engagement. Lastly, one of our main objectives is to commit to a unique collaboration model with our generalist repositories, serving as a partnership to support the GREI objectives. This unique engagement model is what we refer to as our co-opetition model. So I'm now actually gonna turn it over to my colleague, Ana, who's going to go into more detail on this model. Great, thanks, David. So I'm Ana Van Gulick. I'm at the data repository Figshare, where I'm our government and funder lead. And I have the privilege this year of being one of the co-chairs for the GREI co-opetition, or the collaboration amongst these seven different generalist repositories with NIH. So I wanna present to you the co-opetition model and why it's particularly relevant here to having common metadata across the generalist repositories. So co-opetition is a portmanteau of cooperation and competition, and it came from that 2020 workshop on generalist repositories for NIH data sharing.
Actually, it predates that as well, from a book some of you may be familiar with on game theory. But the idea here is that all of these different repositories can work together, and we can cooperate on common features and standards. So there's this idea of a value line: below that value line, we want to do things the same. We want them to be interoperable across our platforms and across the data ecosystem. Actually, even beyond generalist repositories, it would be ideal for the entire data ecosystem to adopt interoperable standards. So things like metadata, as well as persistent identifiers, common metrics, which is another objective of GREI, things that support discovery, and other important features. But then there are also unique features of each of these repositories, which we can continue to compete on in a friendly manner. Next slide, please. So this GREI commitment to co-opetition is pretty central to the GREI program and makes it fairly unique. And David outlined the seven different generalist repositories and DataCite that are participating in GREI. And these repositories are definitely all similar. They all support FAIR data sharing across disciplines, are generalist by nature, flexible for sharing many different types of research outputs. They strive to adhere to the NIH repository best practices. And most of us already use community standards like FAIR, using DataCite metadata and adopting persistent identifiers like ORCID and ROR. But it's also important to note that there are differences among these repositories, and prior to GREI, we didn't always work together on building our products. So the benefit of GREI is that we are coming together to do that. There's a mix of nonprofit and for-profit companies, repositories that are built fully open source versus having proprietary infrastructures. And we also offer varying features.
Some of us offer different visualizations, accept different file types, offer different licenses, different curation workflows, controlled access, things like that. Next slide, please. And so the GREI co-opetition is designed to be a collaboration amongst these generalist repositories that allows us to jointly advance our repository functionalities to better support NIH data sharing, discovery, and reuse. And so this co-opetition allows us to have a cohesive and interoperable generalist repository landscape, including getting to work together and communicate on a regular basis. And importantly, to come together to implement the same common best practices and standards, including, very importantly, leveraging those existing community standards. And you'll hear a lot today about the DataCite metadata schema and how we are adopting common parts of that all together. So rather than starting from scratch, we're leveraging those existing standards, as well as for persistent identifiers. And all of this will help support greater, flexible data sharing, enhance data discovery and tracking of its impact, and allow us to have a unified partnership, but of course still keep that individual functionality as we need to as well. So this is a really unique opportunity, and I think metadata is one of the best examples of how the co-opetition model can benefit generalist repositories and the data landscape more broadly. And I'll hand it over to our next speaker. I think back to you, David. Thanks, Ana. So now that we've explained a little bit about what GREI is and our co-opetition model, we wanna let you know where you can find more information as well as documentation about the GREI initiative, looking through some of our project materials. So we've actually deposited all of our project materials into Zenodo, which you can find via our GREI community, which is a one-stop shop for all of our GREI materials. It's very easy to find.
You can actually go to zenodo.org and, from the homepage, click on Communities in the upper menu bar. From there, you can go to the communities page and enter "Generalist Repository Ecosystem Initiative" to search for the GREI community. From there, you'll be able to see our materials and see what's going on within the GREI community. You can browse our recent uploads or search for specific keywords or files. And one of the great things, actually leading to our conversation today, is that this is where you'll be able to find our metadata recommendations in relation to the use cases. So I'm now gonna turn it over to my colleague, Julian, who's going to introduce you to our recommendations. Hi, thanks, Dave. My name is Julian. Along with Dave, I'm a co-chair of the metadata subcommittee. I work on Dataverse and on Harvard Dataverse as a user experience researcher. And I'm gonna be going over an overview of the metadata recommendations and how they relate to the use cases. So you can see on the left the first page of the recommendations, which, as Dave said, are in Zenodo right now in that GREI community, and the DOI is over there. Next slide, please. So we saw this chart already, and I wanted to show it again as a nice way to show what the metadata recommendations mostly relate to, which is the objective of adopting consistent metadata models. And because the goals of connecting digital objects and of implementing open metrics rely on metadata, we've kept those objectives in mind as we were working on the recommendations. Next slide, please.
So we discussed what kind of information each repository should collect and distribute in order to support those GREI objectives and the goals of the metrics subcommittee, which is another subcommittee that was working specifically on open metrics. And, like I said, we also tried to consider the following four use cases from the use cases subcommittee. So on the next slide, I'll go through each of those use cases really quickly, because they really drove our work. We wanted to make sure that in what we were considering and talking about, we kept in mind who the users were. And that included NIH-funded researchers who need to select a repository to share their data so that they comply with data management and sharing plans and the conditions of their grants. We wanted to consider the use case of a researcher who wants to find research data of interest so that they can validate findings, reuse data, and build on work within their discipline. We wanted to consider folks from institutions, like academic institutions, who need to report on research outputs like datasets from their institutions so that they can ensure compliance with research data sharing and management plan commitments by their researchers. And fourth, we wanted to consider the use case of a funder, from a specific NIH institute or in general, who needs to find datasets that they funded so that they can also report on compliance with their policies and track the impact of the research they're funding and the usage of that data. So on the next slide, we'll see just one of those use cases. And when we think about how a funder might want to find datasets, what kind of information might a system need, like a repository or a portal or a search engine? And so we saw that repositories need to collect information about who funded the research that produced the dataset.
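As a rough sketch of what that collected funding information can look like in practice: the property names below follow the DataCite schema's fundingReference, but the award details are invented placeholders, and the ROR identifier is shown only as an example of a persistent funder identifier.

```python
# A sketch of funding metadata in DataCite-style JSON. The property names
# (funderName, funderIdentifier, awardNumber, ...) follow the DataCite
# schema's fundingReference; the award details are invented placeholders.
dataset_metadata = {
    "titles": [{"title": "Example dataset"}],
    "fundingReferences": [
        {
            "funderName": "National Institutes of Health",
            # A persistent funder identifier (e.g. a ROR ID) makes the
            # funder unambiguous for search and compliance reporting.
            "funderIdentifier": "https://ror.org/01cwqze88",
            "funderIdentifierType": "ROR",
            "awardNumber": "R01-EXAMPLE-0001",  # invented award number
        }
    ],
}

# A funder-side reporting tool could match datasets on the identifier
# rather than on free-text funder names.
funders = {f["funderIdentifier"] for f in dataset_metadata["fundingReferences"]}
print("https://ror.org/01cwqze88" in funders)  # → True
```

Matching on the identifier rather than the name is the point of the recommendation: "NIH", "National Institutes of Health", and a specific institute name can all be normalized to one unambiguous record.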
And then on the next slide, there's another piece of that use case: tracking the impact of research funding and usage of the data. So repositories need to collect information about other research objects that cited and used the data. And what's not here, but I forgot to mention, is that systems need a way to get that information from repositories about the data that's being deposited there and how it relates to other research objects, like journal articles and computational workflows and even software. So on the next slide, we'll talk about why we chose the DataCite metadata schema, which Ana already said: we're all already collecting and sending the kinds of metadata that DataCite needs when we're registering DOIs for research objects like datasets. The DataCite metadata schema is also nice because it's domain agnostic, and these seven repositories are working with different types of disciplines. And DataCite already collaborates closely with GREI, so that was also nice. They're a collaborator of the GREI initiative, so we have their ear and their expertise about the metadata schema and about their future plans. We've talked about things that they're reviewing for adjustments to the schema in the future. And finally, other resources and services rely on the metadata expressed in the DataCite schema, including metadata aggregators and DataCite's own event data service, which is what's being used to track the impact of dataset citations and views and downloads and those kinds of things. So the next slide, please. So the GREI metadata recommendations highlight specific properties from the current version of the DataCite metadata schema, beyond just the minimum fields that you need to get a DOI. And repositories are encouraged to incorporate these properties into their metadata or identify a local equivalent field. So, for example, an author identifier field can be mapped to the DataCite nameIdentifier sub-property of creator.
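To make that mapping concrete, here is a small sketch. The DataCite property names are real schema properties, but the person and the ORCID iD are invented placeholders showing how a local author-identifier field might land in the DOI metadata:

```python
# A sketch of DataCite-style JSON attributes for a dataset DOI.
# Property names follow the DataCite metadata schema; all values
# here are invented placeholders for illustration.
datacite_attributes = {
    "titles": [{"title": "Example dataset"}],
    "creators": [
        {
            "name": "Doe, Jane",
            "nameIdentifiers": [
                {
                    # A repository's local "author identifier" field is
                    # mapped to DataCite's nameIdentifier under creator.
                    "nameIdentifier": "https://orcid.org/0000-0000-0000-0000",
                    "nameIdentifierScheme": "ORCID",
                    "schemeUri": "https://orcid.org",
                }
            ],
        }
    ],
}

# The recommended property travels with the rest of the DOI metadata.
creator = datacite_attributes["creators"][0]
print(creator["nameIdentifiers"][0]["nameIdentifierScheme"])  # → ORCID
```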
So basically, when people type in or select an author identifier, or the repository somehow records one, that information is included in the metadata that's sent to DataCite. When registering a DOI with DataCite, the recommended properties should be included in the DataCite DOI metadata. Next slide, please. So we wanna get more specific about one of these use cases and how we use the DataCite schema, and particularly parts of it, to support the use cases. So here we have a funder, from a specific NIH institute or NIH in general, who needs to find datasets they funded. And that image is showing a bit of the DataCite schema and the fields that relate to funding information and how repositories should send information about who funded the research that produced the data that was deposited. And then on the next slide, we'll also see a bit of metadata that's important for tracking the impact of research funding and the usage of data: identifiers, specifically, that are associated with other objects that are related to the dataset, like a journal article. And I forget if this is my last slide. It is, yeah. So I'll hand it back to Dave, who'll be doing the poll questions. Thank you. Thanks, Julian. So yeah, I think we're gonna take a pause now from the presentation, and we have a couple poll questions that we're going to ask of you. Please participate, and we're gonna run the questions one by one. So the very first question you'll see up here is: have you shared data in a repository or assisted others to share their data in a repository? If you could just quickly put an answer to that question, then we'll be able to share the results and we'll move on to the next set of questions. We're about 70% done. So if you haven't yet, please take a moment to look at the poll question and put down a response. We would appreciate it, thank you. There are a few still that are looking at the question; if you haven't answered yet, please do so.
The first poll question here is: have you shared data in a repository or assisted others to share their data in a repository? And we'd like to simply know yes or no. I think we're about there. So we have an overwhelming majority that have shared, so we can show those results. You should be able to see those now. So 81% of those that have answered have said yes: they have either shared data in a repository or they have assisted others to share data. So thank you again for answering that first poll question. We'll now move on to poll question number two, which is: if you have shared or assisted others to share their data in a repository, were the metadata requirements clear? And for this one, there are three options: you can say yes, no, or not applicable. So quickly just take a look at that question and please select the most appropriate answer for you. We're more than halfway done with those that have answered, so thank you. For those that haven't answered yet, please just take a look and give your best answer. We'll give it another minute or so; we're about 75%. So we appreciate those that have answered. So with that, I think we'll go ahead and just share the results so far. So it's still a majority that have said yes, but we do have a fair number that have said no, which is great. We appreciate knowing that, and hopefully our recommendations will assist in that matter. So thank you very much. We'll now move on to poll question number three, which is: if you have searched for or assisted others to search for data in a repository, have you found the metadata useful for discovery or context about the data, methods, or conclusions? And with this poll question, there are three options: you can say yes, no, or not applicable. We'll give it a couple more seconds here. Thank you all very much.
And this is the last poll question, so we do appreciate your participation in the polls during the webinar. Thank you very much. We're at 75%, so I think we can go ahead and share the results for this one. Please feel free to keep answering if you haven't yet. But again, with this question, if you have searched for or assisted others to search for data in a repository, have you found the metadata useful for discovery or context about the data, methods, or conclusions? And we do have a majority that have said yes, but we do see a large percentage that have said no or not applicable. So we do appreciate you sharing your thoughts and your experiences. Thank you very much. So now, with the next slide, I'm gonna turn it over to Gretchen to talk about the experiences of implementation and the use cases with OSF. Great, thank you. Hi, my name is Gretchen Gueguen. I work at the Center for Open Science, and we operate the Open Science Framework, which is more commonly known as OSF. And OSF is actually a platform for diverse research outputs. We're a little different than some of the other repositories in that we do serve repository functions like storing data, but we also have collaborative spaces, we do registrations of research activities, and we run some preprint servers. So it's a really diverse platform with three main kinds of elements. And all three of those services were designed and implemented individually and at different times. So the metadata that was developed and stored for each of these separate kinds of objects was a little bit different: slightly different, but with significant overlap in each of them. So as part of the work on this project, once the metadata working group had begun to zero in on the DataCite schema as a common core to work from, we went back and analyzed metadata across all of our objects to identify overlap, as well as areas that we could enhance with new properties that we weren't already capturing.
So in the color-coded spreadsheet here, you can see in the first column that in some cases we only have one product that is using a particular property; in others we have two or three, and they kind of run the gamut of different situations there. So if you could go to the next slide. Once we finished that analysis, we then compared it to the recommendations from the group and the DataCite schema. And from that, we went back and developed an application profile. An application profile is a metadata model that reuses as much as possible elements from standard metadata sets, and it defines how you, in your repository or your organization, are going to create metadata for your objects. So it's basically sitting back and thinking through and designing the kind of metadata schema or template that you're going to use. So we went back and created a metadata application profile, which we call OSF MAP, to represent everything in OSF. So instead of each of the different objects having slightly different properties, now everything shares the same set of metadata properties. This doesn't necessarily mean that we had to go back into the repository and redo everything. It's really more of a mental model, so that as we move forward and develop these things further, we can tweak and make adjustments so that everything meets the recommendations or specifications in this model. You can also see, though it might be a bit small, in the last column shown on this slide that we can map everything that we are providing into DataCite properties. So we can meet all of the recommendations and requirements for DataCite records. And we do actually export DataCite XML records in this format when we share the records with DataCite. We have a very similar conformant JSON version that's available through our API. And metadata records that meet this metadata application profile, this metadata model, can actually be downloaded from OSF.
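As a toy illustration of the application-profile idea, here is what such a mapping can look like in miniature. The local field names on the left are invented for illustration (they are not OSF's actual internal names); the right-hand side uses DataCite schema property paths:

```python
# Toy application-profile mapping: local repository fields (invented
# names, not any repository's actual internals) mapped onto DataCite
# schema property paths.
application_profile = {
    "title":             "titles/title",
    "description":       "descriptions/description",
    "contributor_name":  "creators/creator/creatorName",
    "contributor_orcid": "creators/creator/nameIdentifier",
    "funder":            "fundingReferences/fundingReference/funderName",
    "language":          "language",
}

def to_datacite_path(local_field):
    """Look up which DataCite property a local field maps to, if any."""
    return application_profile.get(local_field)

print(to_datacite_path("contributor_orcid"))  # → creators/creator/nameIdentifier
```

The value of writing the profile down is exactly what Gretchen describes: every object type in the repository shares one declared mapping, so exports to DataCite XML or JSON can be generated consistently from it.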
The other columns in the spreadsheet just represent the labels, whether or not a field is required, what the requirement is for the value in the property, et cetera. If you move to the next one, then: the profile consisted of that table that I showed you, as well as an introduction. And there's a link there to the OSF project where you can find all of that if you're interested in taking a look. We made it public back in April, although we had actually launched some of these new metadata properties a little bit earlier, in January, and this was a result of this process. So shown here is a metadata record for a project in OSF that has several of the new properties that we added as a part of this project: the names of funders and the specific types of material from the DataCite schema, as well as language. And there were a few other tweaks along the way. And if you go forward one more, to our last slide: just as of last month, we have launched a new search interface for OSF. And it uses this new metadata model and these new fields. So you can see we now have facets for things like resource type and funder, as well as some other things that we were already including in the application profile, or excuse me, in the metadata, but that are now done much more consistently across all OSF objects. You can also see at the top there that the main search brings you back everything in OSF, but you can then tab through and look at just our projects, or just registrations, preprints, or files. So this was all inspired and driven by the recommendations to use this core metadata schema, and it's really helped us out internally by providing consistency across the whole corpus, but it's also helping us to interact and share with others by adhering to that common core of DataCite and being able to share that with others. So I think that's it for me, and I am handing off to Ryan. Thanks, Gretchen.
I'm Ryan Scherle with Dryad. Like OSF, we went through a very similar process of reviewing our metadata model for compliance with the GREI recommendations. Since the DataCite schema is pretty central to our internal metadata, it was straightforward for us to determine how to add the recommendations into our system. And we therefore focused a bit of our efforts on making sure that we could make the most of these new metadata fields in our user interface. So this shows a snapshot of our submission process where users are adding data, highlighting the funder field. In accordance with the guidelines, the funder field is required unless the user doesn't have any funder, in which case they must check this "no funding received" checkbox at the bottom. In the case of a large funder like the National Institutes of Health, we can detect that the funder has sub-organizations, and then we require the user to select which NIH institute or center provided their funding. Next slide. So we can obviously display that information on our pages that describe datasets. Next slide. And like OSF, we've recently added more search filtering for funders, including that sort of hierarchical determination: when we have a funder like NIH that has sub-parts, you can select just NIH and see datasets that are associated with each of the institutes or centers under that organization. Next slide. And then a feature that we recently added to really leverage all this metadata that we're collecting is a dashboard where funders can log in and track compliance of data submission and publication associated with any of their grants. They can narrow it down by dates and by submitting institutions, and we're talking with funders about adding more specific search and browse capabilities to this page. Right now, one of the things that we've found very useful is that we allow exporting of these results into a CSV so that the funders can process them in any way that they like.
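The kind of cross-repository funder query this metadata enables can be sketched against DataCite's public REST API. The `api.datacite.org/dois` endpoint and its `query` parameter are part of DataCite's documented API, but the exact field path used in the query string below is an assumption based on the schema, so check DataCite's API documentation before relying on it. This sketch only builds the request URL; no request is sent:

```python
from urllib.parse import urlencode

# Build a DataCite REST API query URL for DOIs whose metadata lists a
# given funder. The /dois endpoint and "query" parameter exist in
# DataCite's REST API; the metadata field path in the query string is
# an assumption based on the schema, so verify it against DataCite's
# API docs before use.
def funder_query_url(funder_name, page_size=25):
    params = {
        "query": f'fundingReferences.funderName:"{funder_name}"',
        "page[size]": page_size,
    }
    return "https://api.datacite.org/dois?" + urlencode(params)

url = funder_query_url("National Institutes of Health")
print(url)
```

Because every GREI repository registers its DOIs through DataCite, a single query like this can, in principle, reach across all of them at once.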
Of course, all of this information is stored in the DataCite schema, and that really advances the co-opetition model that we talked about earlier: when we publish funder information to DataCite, it goes alongside the funder information that OSF and all of the other GREI repositories send to DataCite. And so people can use the DataCite APIs to search and manage this information across all of our repositories in tandem. All right, so we'll go back to David to wrap up. Thanks, Ryan. So now that we've talked to you about the GREI initiative, our co-opetition model, our recommendations for the metadata, our partnership with DataCite, and how this is being implemented at a couple of our example repositories, what's next? There are a couple of different things we'd like to relay to you all. First, for those of you that want to know more about the GREI community, please follow along and read our blog. Likewise, as we mentioned earlier, you can look through our Zenodo community for our resources and documentation. And you can help us engage with your communities with these resources, the presentations, and the communications that we've made available. One of the best things you can do, which we need at this moment, is give us your feedback. We'd like you to please read the recommendations documentation that we showed you earlier; we've put the link for that in the chat with the Q&A. And we'd like to get your feedback on it. We'd like to gather this using the form that you see here on the slide. We'll make sure the link for the form is also available in the chat here, so you can click on that. Please provide us your feedback. Please let us know what works, what gaps you may be finding, and other details you'd like to share with the metadata subcommittee. We'd like to hear it. Additionally, we will have some additional webinars still forthcoming this year.
You can join us for our next webinar in our GREI Collaborative Webinar Series, which is gonna be held on Friday, October 13th, regarding our work on metrics. You can join from the link there below, and we hope to see you at that next webinar on metrics. As we get to the end of our webinar today, on behalf of Julian and myself, I'd like to thank our metadata subcommittee members for their hard work and dedication. This has been a massive project for many of us, working together through this and working through the updates and technical aspects of these recommendations with our repository platforms and infrastructures. So I'd just like to thank all of my colleagues on the subcommittee for their participation and collaboration. Thank you all very much. With that, we now have a few moments, and we'd like to open the floor and ask for any questions or comments that you, our attendees, may have. I'll ask my colleagues that have presented to come back on camera and participate in the Q&A. So we have turned on the Q&A, and I see we've already been answering questions. So we'll turn to this now and see what questions are still open. If we can, we'll answer these live. And like I said, if you do have other questions or comments you'd like to share with us, please utilize the feedback form that I shared with you a moment ago. All right, so running through our questions: yes, we have shared the link to where you can find the recommendations. Again, that's in Zenodo, and you can find the recommended citation and DOI for that as well. And we can see a question here about registering a DOI for a dataset and receiving, by mistake, potentially more than one DOI when it's submitted to both a generalist repository and a domain-specific repository, thereby leading to hidden redundancy and subsequent issues with proper data citations and linkages.
I'm gonna see if any of my colleagues that were co-presenters would like to answer that. Hi, this is Julian. I don't have the answer, but I imagine this happens. I think each repository, in their outreach and their training and in the guidance that they provide, especially if it's a self-curated repository, tries to make sure that researchers know that the data being deposited and getting a DOI is unique to that repository, so to speak, and doesn't exist anywhere else, or if it does, it doesn't also have a DOI. I will say, before I give the floor to any of my other colleagues, that I'm sure DataCite is also aware of this, and they've done work to try to prevent this from happening, especially since they're very involved in the infrastructure for citations of datasets. So they're very interested in catching duplication when they register DOIs. But I'll leave it to the rest of my colleagues to share anything else. I'll jump in a little bit. Yeah, I think you're totally right, Julian. This is one of the goals of GREI, to reduce duplication. And part of that is through the interoperability across the repositories: with higher-quality metadata, it will become more readily apparent what datasets are where, and so maybe this will be less of an issue. I think it's something we're definitely trying to address through training and outreach and best practices for data sharing, for NIH-funded researchers and all researchers, really. It's something that I think we may get to further as we look at the data QA and QC objectives that are part of GREI, which are a little bit downstream from having common metadata and common metrics. And then we can look to that.
But I think the best practices we're detailing to researchers, having clear linkages to related materials in your metadata and having that be a clear part of the Gray metadata schema, should help quite a lot, because it makes clear that there are related materials that may not even be duplications: people may use multiple repositories for the same dataset, or the same research project may have different outputs that should go to different repositories. They may use both a discipline-specific repository and a generalist repository, and then have other materials in GitHub or elsewhere. We want to make sure those are all well linked and that the relationships between them and related publications are well defined. So the metadata quality hopefully helps with that a bit as well. Great, thank you, Julian; thank you, Anna. I see there's a direct question to Dryad: can anyone browse to see what's been funded by a specific funder, or is it behind a login, accessible only to the funder? Yes, I was just typing an answer for that. Our search system is publicly available, so anybody can use it and filter by funder. Our API is also publicly available, so you can download information about datasets and access the funder information that way. Of course, you can also use the DataCite APIs. We do have some things that are limited to particular users, like the dashboard, largely because the dashboard allows seeing datasets that are still in progress and not fully published yet; that is a feature we reserve for our funders. Thanks, Ryan. And since you've discussed it, there's also a related question here about the APIs. Jonathan Mote asked: is attention being paid to common standards for API access? That is, will the generalist repositories provide expanded API access to this new metadata that's being made available?
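Ryan's point above about Dryad's public search being open to anyone can be sketched roughly as follows. This is a minimal illustration only: the endpoint path and parameter names (`q`, `page`, `per_page`) are assumptions based on Dryad's published v2 API and should be checked against the current API documentation.

```python
from urllib.parse import urlencode

# Base endpoint for Dryad's public dataset search API
# (v2 path assumed; consult Dryad's API docs for the current specification).
DRYAD_SEARCH = "https://datadryad.org/api/v2/search"

def build_search_url(query, page=1, per_page=20):
    """Build a Dryad search URL; the endpoint requires no login.

    Parameter names here are assumptions based on Dryad's v2 API.
    """
    params = urlencode({"q": query, "page": page, "per_page": per_page})
    return f"{DRYAD_SEARCH}?{params}"

# Example: search for datasets mentioning a funder by name. The JSON
# response carries funder details in each record's metadata, so results
# can also be filtered by funder on the client side.
url = build_search_url("National Institutes of Health")
```

The same pattern applies to the DataCite APIs Ryan mentions, which expose the registered metadata independently of any one repository.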
Well, from my point of view, I think common standards for APIs are the next big step I would like to take in coordinating our repositories, but we haven't really taken that step yet. I was involved in a number of initiatives to standardize APIs about 10 years ago, but many of those initiatives fizzled out. Now that we have repositories that have been through a new round of metadata consolidation, I think we're at a place where we can start to tackle that problem again. Great, thanks, Ryan. I see another question that's gotten several votes, about how we build the bridge to explain the metadata concept while working alongside researchers in a non-digital setting; scholars may have power-access issues or lack the technical resources they need to share their data. One thing I can speak to here is our co-operative model: there are many layers of engagement and outreach that all of the repositories participating in the initiative take part in. We have the engagement and outreach we do as part of the initiative, this webinar being one example and the webinar series as a whole, along with activities at various events, and we're trying to do more. The other thing is our own individual use cases, where each repository is also taking the initiative to do its own outreach and engagement about the work it's doing and its relationship to the Gray initiative.
So this is definitely something we are mindful of and trying to spread further, showcasing how, through our collaboration, our co-opetition, we're focusing on the betterment of the entire ecosystem while also better serving each of our individual sets of users. The second aspect, regarding the non-digital setting, comes back to the foundations of metadata: making sure researchers understand why we concern ourselves with metadata, both the reuse metadata and the descriptive metadata, and figuring out when and how we can capture that in a digital form, basically setting the foundation for that when we can. Colleagues, would anybody like to add anything to that? There's a question we get a lot, and it's something we're trying to focus on and make more available: what are the similarities and differences between the Gray repositories? Can I just pick one for my project, and how do I go about that? I can briefly say we have tried to put up some materials that showcase this and walk through the various nuances of uses, needs, and capabilities. I believe we have some of that in the works that we will be sharing; I think there's actually a first iteration somewhere. Colleagues, if somebody can correct me, or if you could share that in the chat, I'd appreciate it. But yes, we will have more documentation comparing the capabilities of each of the co-opetition partners. A few more questions are coming in. If any of my colleagues see a question you'd like to answer, please feel free. Okay, I see a question about what metadata requirements we would recommend to a new repository just setting itself up.
I would say that this common metadata schema we're coming up with for the Gray repositories to apply to ourselves will, we hope, be useful to the larger landscape as well; that's part of the goal of recommending it. We've actually spoken with some discipline-specific repositories funded by NIH or run as part of NCBI, who are really good at their discipline-specific mission and have discipline-specific metadata that works really well for their data types, but maybe haven't implemented things like ORCID identifiers or ROR or some of these other common practices. So we're hoping to extend this schema across the landscape, and that it will be useful. That's what we would give them, and it will be an iterative process; we'll keep learning from different stakeholders. We're going to be talking about this at International Data Week, at SciDataCon and RDA, if you're there, and we're hoping for that panel session to be quite participatory, so we can learn from the other experts in the room. That's another chance for us to keep iterating and getting feedback on this work. There are even more questions, so is there another one people want to jump in on? Gretchen, go ahead. Yeah, I actually answered this in the chat, but I'll answer it live as well. There was a question about distinguishing between metadata at different levels. The DataCite schema describes datasets as a whole, but how do you describe the individual variables, or the population sampled, or things like that? That's really a matter of different schemas. We're recommending, for interchange at this macro level, adhering to our recommendations and the DataCite standard. But there are other schemas out there that you can use for these more granular and discipline-specific purposes.
For example, DDI is a schema used to create a codebook that describes variables and sits alongside a dataset to make it understandable. They actually have a suite of several standards, so you can track changes in datasets across longitudinal studies and things like that. So you should view the DataCite schema and our schema as ones that exist in a world of standards that suit different needs and work together to achieve different purposes. Great, thanks, Gretchen. Ryan, I see a question here about whether it's possible to see more details about the funding, for example a link to the grant number. This is something that is starting to be discussed more in the funding community. We can currently link to some funders; I know NSF and NIH both have an API and landing pages for grant numbers that you can point to. But even those very large funders still do not have persistent URLs for their individual grants. There is discussion among funders about starting to assign DOIs to grants, and we are in some talks with funders about adding those DOIs into our system. We would greatly welcome that, if we can get the funding community to really coordinate on it. Thanks, Ryan. I think that speaks again to this: please, if you can, we'd love your participation, and we'd like to hear from you, so please fill out our feedback form, and if you want to get involved with the initiative, please contact us. With that, I'm going to turn to our very last slide and say thank you from all of us on the metadata subcommittee. Again, as co-chairs, Julian and I thank you all for participating in this webinar today. We'd also like to thank our colleagues from the subcommittee for joining us, and our co-opetition partners for being a part of this overall initiative and project.
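The grant linkage Ryan describes maps onto the `fundingReferences` property of the DataCite metadata schema (version 4.x), which is how funder and award information travels with a registered DOI. A minimal illustrative fragment follows; the award number and award URI are invented placeholders, while the funder identifier shown is NIH's Crossref Funder ID.

```python
# A minimal DataCite-style metadata fragment showing how a grant can be
# linked via the schema's fundingReferences property. The award number
# and awardURI below are hypothetical; a grant DOI would slot into
# awardURI once funders begin assigning them.
funding_reference = {
    "funderName": "National Institutes of Health",
    "funderIdentifier": "https://doi.org/10.13039/100000002",  # Crossref Funder ID for NIH
    "funderIdentifierType": "Crossref Funder ID",
    "awardNumber": "R01-XX-000000",  # hypothetical grant number
    "awardURI": "https://example.org/grants/R01-XX-000000",  # placeholder
}

record = {"fundingReferences": [funding_reference]}
```

With such a fragment in a dataset's registered metadata, anyone querying the repository or DataCite APIs can recover the funder and award behind a dataset without a login.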
It's wonderful to see this coming together and to have some actual things to share and implement. Again, as we said, we'd love to have your feedback and to see what we can do to further improve this and work with our partners and with DataCite. And we'd like to see you at our next several events, so please join us in October for our next webinar on metrics. With that, I'd like to say thank you all for coming today. We will soon be able to share the recording; you'll find information about it through our Zenodo community, along with access to the links. Thank you all very much.