I think it's about time to get started. Welcome, everyone. I'm Cliff Lynch, Director of CNI, and you've arrived at one of the project briefing sessions of our Spring 2020 virtual meeting. That meeting will run through the end of May, so there's plenty more still to come. Thank you for joining us. We're going to have a wonderful presentation with five speakers today; I'll talk about that in just a moment and then turn it over to them. We will take questions at the end. Note that there is a Q&A button at the bottom of your screen; feel free to enter questions as they occur to you at any point during the presentation. At the end of the presentation, Diane Goldenberg-Hart from CNI will materialize and moderate the Q&A, but please do put in questions as they occur to you. Now, let me introduce the session at hand. I'm really delighted to have this session. As we know, research data management is one of the really fundamental problems facing our institutions today. It is both a partnership with and a challenge thrown out by the funding agencies, who are placing greater and greater emphasis on it, and it's also very much a consequence of trends toward open scholarship and open science. Some of you were probably able to participate in the workshop that followed the December CNI meeting; a number of the speakers here were very involved in that, and I think, among other things, they will be reporting on some of the outcomes of that work. Without further ado, I will turn it over to Judy Ruttenberg from ARL, who will speak first and introduce the topic; our other speakers will introduce themselves as they take their turns. So all that I have left to do is say thank you very much to our speakers for this presentation, and thank you for joining us. Over to you, Judy. Thank you. Thank you to CNI for putting on this extraordinary conference in this way, and thanks to all of you for tuning in for our presentation.
As I've said, I'm Judy Ruttenberg, senior director of program strategy at the Association of Research Libraries, and I'm joined today by four amazing colleagues who will introduce themselves as we move through the presentation. In May of 2019, just one year ago, the National Science Foundation issued a Dear Colleague letter encouraging grantees to adopt two data practices: assigning persistent identifiers, or PIDs, to data sets, and creating machine-readable data management plans. This guidance was significant in that it reflected the maturity and acceptance of these practices within the research data community, notably the Research Data Alliance, FORCE11, and the leadership of tool builders and service providers in both of these areas. So ARL partnered with the California Digital Library, who you'll hear from very shortly, as well as the Association of American Universities and the Association of Public and Land-grant Universities, and we received funding from NSF to hold a conference to design implementation guidelines for institutions around this Dear Colleague letter. It was really a dream team to work with, and the partnership made all kinds of sense. From ARL's perspective, advancing open science by design is an objective in our action plan. AAU and APLU have been convening institutions around accelerating public access to research data since 2018. Our colleagues at CDL are tool builders, experts, and community builders in both of these important areas. We also recruited Natalie Meyers, who you'll hear from this afternoon, for her leadership and involvement in RDA, specifically around these issues. As Cliff said, we held the conference adjacent to the winter CNI meeting in DC. I want to acknowledge the full organizing committee for the project, with a shout-out to Joel Cutcher-Gershenfeld for his extraordinary facilitation, workshop design, and live note taking.
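The two practices the Dear Colleague letter names can be made concrete in code. Below is a minimal sketch of a machine-readable plan shaped along the lines of the RDA DMP Common Standard; the field set is trimmed for illustration, and every DOI, name, and email here is a hypothetical placeholder rather than a real identifier.

```python
import json

# A minimal machine-readable DMP, shaped along the lines of the
# RDA DMP Common Standard (fields trimmed for illustration; the
# DOI values below are hypothetical placeholders).
madmp = {
    "dmp": {
        "title": "Coral reef survey data management plan",
        "language": "eng",
        "created": "2020-05-01",
        "modified": "2020-05-15",
        # A PID for the plan itself lets every later assertion
        # (new datasets, new versions) anchor to one identifier.
        "dmp_id": {"identifier": "https://doi.org/10.9999/example-dmp",
                   "type": "doi"},
        "contact": {"name": "A. Researcher", "mbox": "researcher@example.edu"},
        "dataset": [
            {
                "title": "Reef temperature time series",
                "dataset_id": {"identifier": "https://doi.org/10.9999/example-data",
                               "type": "doi"},
                "personal_data": "no",
                "sensitive_data": "no",
            }
        ],
    }
}

def dataset_pids(plan: dict) -> list[str]:
    """Collect the dataset identifiers a plan promises to produce."""
    return [d["dataset_id"]["identifier"] for d in plan["dmp"]["dataset"]]

as_json = json.dumps(madmp, indent=2)  # what systems would actually exchange
print(dataset_pids(madmp))  # the one placeholder dataset DOI
```

The point is not this particular schema but the contrast with a PDF: a structure like this lets a funder's or institution's systems pull out the promised dataset PIDs programmatically.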
We had about 40 people attend the conference, which was by invitation. They were librarians, including deans, research administrators, tool builders, funders, researchers, and association people. We also chose about 15 invitees, some of whom couldn't be in attendance because of other conflicts, to interview beforehand. We asked two simple questions in those interviews: first, what value did they see in adopting PIDs for data sets and machine-readable data management plans; and second, what are the challenges or barriers to their adoption? I think what we learned in the interviews, as well as at the conference itself, is how well established the value proposition for these data practices is. For persistent IDs, we heard they would advance discovery and disambiguation; enable credit for data creation and reuse; support tracking, linking, and connecting pieces of a project or of scholarship; help automate compliance within the institution around funding mandates; and support reproducibility and meta-science. For machine-readable data management plans, we heard about their value in communication and progress reporting to the funder, within the institution, and to program officers; in repository planning, particularly with respect to data type and storage amounts; in campus planning for compute services; and in syncing up with the publication process, so linking data with publications, knowing about data that might carry some risk with respect to personal information or privacy, and transparency and accountability. We also heard about challenges, and we went into the conference knowing some of this. For persistent IDs: a lack of awareness among investigators. Did they have to get them? Did they need them? Who was responsible for that?
Confusion around granularity, particularly for data: at what point are identifiers issued for a growing data set? Again, the communication between data repositories and publishers, and the need for a stronger linkage there. And concern over sustainability, not just of the PIDs themselves but of the organizations behind them as infrastructure for scholarship. For data management plans, the challenges were similar: lack of awareness of tools to create machine-readable data management plans, and lots of concern about investigator and researcher burden; nobody wants to add to that. And centrally, data management plans are part of a grant application, so right off the bat they're PDFs, submitted through grants-management software that is often dated, and workflows within institutions are local by nature and idiosyncratic, which makes it hard to design guidelines that would be universally applicable. And then there's the cultural or policy issue that data management plans are neither public nor typically shared widely, even within the institution. So here are five quick takeaways from the conference before we dive deep into PIDs and DMPs. The first was to center the researcher. There was an absolute consensus, a shared understanding, that researchers need to see whatever guidelines we come out with as worth it for this to have any legitimacy. They need to be at the center of our thinking and our models, and we need to design tools and guidelines around their workflows, not expect the reverse. So the guidelines we issue as a result of this conference will endeavor to really distinguish researcher needs and contributions to, say, the DMP process from the needs and responsibilities of research support services.
Conference participants focused on and articulated the need for greater alignment between disciplinary specialists and the library community, between domain repositories and library stewards. So our guidelines will encourage more conversation between library repositories and domain repositories, particularly at the point at which data management plans are finalized and data transfers stewardship. The implementation guidelines will also encourage support and advocacy for the organizations that sustain persistent identifier registries as essential pieces of scholarly infrastructure, as well as open licensing of the metadata that enables interoperability across systems. Unbundle the DMP: there is a sense that we may be overloading the data management plan with too many expectations, too many roles, in terms of communication, compliance, lab management, and scientific merit. We have taken from this that our guidelines will support versioned, updatable, living data management plans by encouraging multiple stages of DMP creation, sharing, and iteration, with eventual integration into the grant progress report. And finally, PIDs will unlock discovery. We heard a lot of that in the interviews, and a lot of commitment to it within the conference itself, and our final report will provide tangible examples of data integration across repositories through PIDs. But here to talk to you about PIDs is Maria Gould from the California Digital Library. Thanks, Judy. Hi, everyone. I am Maria Gould, and I am at the California Digital Library, where I work on the University of California Curation Center, or UC3, team, which is our program within CDL focused on digital curation; my role within that team is working on CDL's persistent identifier portfolio. So I'm here at this juncture in the talk today to take a step back and contextualize this discussion in the broader landscape of persistent identifiers.
So I have three minutes to give you the 30,000-foot view of everything that's happening in the world of PIDs right now; easy, right? I do a few things with PIDs at CDL that I wanted to mention, because I think they really tie into what I'm going to talk about in terms of what's going on in the PID landscape. I'm the service manager for CDL's EZID service, which provides DOI services for the University of California libraries, as well as DOI and ARK services for the broader community. I'm also the project lead for the Research Organization Registry, a project aimed at developing new open infrastructure for research organization identifiers. And I'm more broadly involved in the larger PID community through initiatives like the PIDapalooza conference, where we bring stakeholders together from all around the world, and from various types of communities, to talk about what's going on with PIDs and how our activities intersect. So, what's going on in the landscape these days? When we talk about the landscape of persistent identifiers, especially in the context of research data management and open, online scholarship more broadly, we're talking about a few different landscapes, or a few different layers, at once, and I'm trying to illustrate that with these images on the slide here. One layer of that landscape, illustrated on the far left, is just this core focus on what exactly we need to identify: what PIDs do we need for all of the things?
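One thing the identifier systems Maria manages have in common is that an ID resolves to open, machine-readable metadata that other systems can pick apart. As a hedged sketch of what that looks like, the dictionary below imitates, in much simplified form, the shape of a DOI record like those served by the DataCite REST API (a real record carries far more metadata, and the identifiers here are placeholders). The function collects the other PIDs embedded in the record, the creator's ORCID and a related DOI, which is the raw material for connecting things together.

```python
# Simplified record imitating the shape of a DataCite REST API DOI
# response (real responses are much richer; all identifiers below
# are illustrative placeholders).
record = {
    "data": {
        "id": "10.9999/example-dataset",
        "attributes": {
            "creators": [
                {
                    "name": "Researcher, A.",
                    "nameIdentifiers": [
                        {"nameIdentifier": "https://orcid.org/0000-0000-0000-0000",
                         "nameIdentifierScheme": "ORCID"}
                    ],
                }
            ],
            "relatedIdentifiers": [
                {"relatedIdentifier": "10.9999/example-article",
                 "relatedIdentifierType": "DOI",
                 "relationType": "IsSupplementTo"}
            ],
        },
    }
}

def linked_pids(rec: dict) -> dict[str, list[str]]:
    """Walk one metadata record and collect the PIDs it points at."""
    attrs = rec["data"]["attributes"]
    orcids = [
        ni["nameIdentifier"]
        for creator in attrs.get("creators", [])
        for ni in creator.get("nameIdentifiers", [])
        if ni.get("nameIdentifierScheme") == "ORCID"
    ]
    dois = [
        ri["relatedIdentifier"]
        for ri in attrs.get("relatedIdentifiers", [])
        if ri.get("relatedIdentifierType") == "DOI"
    ]
    return {"orcid": orcids, "related_doi": dois}

print(linked_pids(record))
```

Run over an open index's worth of records, extractions like this are what let downstream systems assemble the person-to-dataset-to-article links discussed in this session.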
There are PIDs that are well known and familiar to our communities, like DOIs for articles, data sets, and preprints, and ORCID identifiers for researchers, for the people involved in research. There are IDs that are more emergent, like the Research Organization Registry's ROR IDs, and a whole wider landscape of new, emerging, or more established identifiers for grants, instruments, facilities, samples, conferences, projects, and much more. So there's this huge jumble of persistent identifiers out there right now. (Sorry, Judy, can you go back? Thank you.) Represented in the middle image is the layer of the systems, tools, structures, and workflows we need to make sense of that jumble, and the standards we use to organize them. The last layer, illustrated on the far right, is how we connect all of these things together and how we get people on board with adoption and implementation, so we can achieve meaningful insights about the research process and its broader impacts. It's really not enough to just have a PID, or even a set of PIDs; they need to be connected to each other and they need to be used. A large part of that is technical, and a large part is not technical: it's about the communities we're reaching out to and the awareness and outreach that is needed. It's also about who doesn't necessarily need to know about a persistent identifier, in other words, how we can make some of these connections and networks more seamless and invisible, to minimize friction, to minimize the burden on researchers, or the burden of dumping everything into a DMP, as Judy was just talking about: how we can maximize the ease of uptake by minimizing some of that friction. Can you go on to the next slide? Thanks. So, what does this
mean in terms of how that landscape of different layers can be networked into the research process to really unlock discovery? Reflecting on the previous slide in a slightly different way: obviously the core of this is the PIDs themselves, the DOIs, the ORCIDs, the ROR IDs, and so on, but just having those identifiers doesn't do anything on its own. So there's a next layer in this network, of the workflows and standards and structures that are needed to knit them together, to really make sense of it all. And then there's an additional outer layer of where it all goes. For instance, if there's a data publishing workflow, or a publishing workflow, in which all of these different PIDs are being collected, they need to be indexed somewhere, in an open index like Crossref or DataCite, so that all of that metadata can be queried, so that systems can connect to those indexes and bring the insights downstream to support better search and discovery, and better insights overall about the shape, the nature, and the impact of the research. Those are a few of the opportunities, challenges, and nuanced layers underway in the persistent identifier landscape that I think really connect to the values, as well as the challenges, that are coming through in this workshop and the broader effort it reflects. So with that, I will turn it over to how this is playing out in the world of data management plans and machine-actionable DMPs. Thanks, Maria. My name is Maria Praetzellis; I am the other Maria on the UC3 team at CDL. I work on research data management initiatives, primarily with the DMPTool, which I'm sure most folks are familiar with. The tool has been around for almost 10 years; it's been very successful and has very wide adoption in the community:
We've got over 46,000 users and 43,000 data management plans, and our development focus at this point is on creating next-generation, machine-actionable data management plans. I'm going to give you an update on where we are with that and talk about one specific pilot project. Judy, if you can go to the next slide, please. We have an NSF-funded EAGER grant right now to explore the potential of machine-actionable DMPs as a means to transform DMPs from that compliance exercise, the static text documents Judy was referring to that we're all aware of, into a networked research data management ecosystem, with the broad goal of improving the research process for all stakeholders involved in that system. A key question underlying our work connects to what Maria Gould was just talking about: how to really make the connections within that PID graph, within the identifier ecosystem, adding data management plans into it through the use of persistent identifiers. Our first step was to put as many identifiers as we could into the DMPTool. We partnered with DataCite to prototype minting persistent identifiers, DOIs, for DMPs; we're using those as an anchor to collect changes to the DMP over time, as Judy mentioned, and we're using the DataCite Event Data service to keep all of the players up to date on changes to a research project as it progresses. If we go to the next slide: a specific project we have to pilot this work with machine-actionable DMPs, a real-world use case that leverages and tests our work, is what we call the FAIR Island project. It's a collaboration between the University of California's Gump field station in French Polynesia and the Tetiaroa Society. Tetiaroa
is an atoll in French Polynesia, roughly between the island of Moorea and Tahiti. It is an absolutely stunning tropical paradise, and it's also a really excellent spot to test out a lot of the principles we're talking about today. Because the Tetiaroa Society has control over all of the research that's conducted or collected on the island, we're able to demonstrate how we can advance open science by creating optimal FAIR data policies that govern all of the research conducted at that field station. That includes mandatory registration requirements, controlled vocabularies, and all of the persistent identifiers we can find that are appropriate. DMPs in this environment, in this project, are really the key infrastructure that allows us to track provenance, attribution, compliance with those optimal data policies, and ultimately publication for all of the research data collected on the island. Our question here is: can we build a model system that allows this data to be fed across the stakeholder system using the persistent identifiers Maria was talking about? Next slide, please. So we are starting with the Tetiaroa field station in the South Pacific, and our hope is to extend the policies and infrastructure we've developed in this partnership to other field stations administered by the University of California Natural Reserve System, which is another key partner in this project. Ultimately we hope to demonstrate the specific capabilities of machine-actionable plans in this very bounded, specific use case, and to analyze the downstream effects of these policies on the resulting release of data: does it, in fact, speed up the time to release? If we could go to the next slide, please. I just wanted to end on this
slide, demonstrating all of the identifiers we are utilizing to record assertions against the DMP DOI, so that we're able to track a project as it moves forward over time. Using the FAIR Island project, we're trying to test the flow of this information and integrate with as many external information systems as we can, to see how this information can record data management activities as they occur during the course of a grant project. I'm really looking forward to sharing continued results as we proceed with this project and increasingly add more machine-actionability to the DMPTool. If you're interested in learning more about the FAIR Island project and our work in this area, you can go to our project website, fairisland.org. And with that, I will pass it over to you, Natalie. Hi, thank you, other Maria; I appreciated hearing more about FAIR Island, that project is so exciting. I wanted to take this moment to describe a little more of the context of stakeholders in the December workshop, particularly the participation of funders and others so important to enabling machine-actionable data management plans, and to understand the different ways stakeholders engage with data management plan content. On this slide, I'd like to bring your attention to a pre-conference interview I had the good fortune to conduct with Ben Piercin and Ashley Farley from the Bill & Melinda Gates Foundation. Judy, if you could take us into hearing a little bit from Ben about what we can do right now, I'd sure appreciate it. Thanks, Judy. We've been more and more thinking about this in terms of Brian Nosek's triangle of change and adoption, where we need to make it possible for people to do this, for those early adopters that want to do this and do it right, and we also need to actually get to easy as well. Normative is going to be tough,
but let's not jump ahead yet; let's just make it possible and easy. And right now, we've experienced that even just making an output machine readable, whether it's a data management plan or anything else, is hard. We have these great results coming out of the knowledge integration work, and normally you'd produce a paper and an abstract, but that actually isn't an output that is machine readable in the way that we want, so we've had to figure out how to make a machine-readable output. Today, a data management plan is just free text in our investment system; there's no guidance on how to actually make it machine readable, on what the structure should be. So we want to get down to that level. My life for the last year has been a lot of convenings and conferences, of varying levels of action and reality, around FAIR and turning assets into reusable assets, into value. I'm counting on this being at a level of specificity and actionability that I can take back, even if it's not the full thing, to say: this is what we can do right now in our guidance and our policies to make it better for program officers and grantees getting research grants, to make sure that their outputs are in fact persistent and machine readable. Thanks, Judy. I hope that helps everybody understand a little more about the perspective of a funding agency that runs on a mission of impatient optimism. I'm happy to share with everyone that the Bill & Melinda Gates Foundation has incorporated many of the takeaways from our workshop into the requirements and asset classes that will be followed by participants in their Therapeutics Accelerator program related to COVID-19. I think that's a very promising example of how a stakeholder like Ben and the foundation can engage in an ARL/AAU/APLU workshop and come away with something actionable that lets
them go back with policy ideas, with ways of implementing them, and with an understanding of broader community needs and goals. We'll move forward now, Judy, if we could, to our next slide. This slide shares some information and pull quotes from Dina Paltoo at NIH. Dina's interview prior to the workshop was really interesting because it came at the time when NIH was getting ready to request public comments on its draft NIH policy for data management and sharing. NIH recognizes that sharing scientific data advances its stewardship of taxpayer funds and also maximizes research participants' contributions. They know that increasing access to scientific data resulting from NIH-funded or NIH-conducted research advances biomedical research by enabling the validation of scientific results, by allowing analyses to be strengthened through combining data, by facilitating reuse of hard-to-generate data, and by accelerating future research. That's the context in which Dina and I were speaking on the day we did this interview. When I asked Dina about the value of machine-readable data management plans, she directed the conversation toward how important this was because it fosters connection and creates consistency; it allows investigators to update their data management plan so that everything can be linked and connected. The data can be attached to the award, but it can also be attached to the data management plan itself, so that the plan can be followed appropriately, and others who would like to find and use the data know more about where it came from and how it was governed. This is part and parcel of how, in the recent draft guideline, NIH is proposing that reasonable allowable costs can be included in NIH budget requests when associated with, number one, curating data and developing supporting documentation; number two, preserving and sharing data through established
repositories; and number three, local data management considerations, such as unique and specialized information infrastructure necessary to provide local management, preservation, and access to data. Budget estimates can include all three of those categories, and NIH researchers are encouraged to include them as they prepare their data management plans. Could we move on, Judy, to our next stakeholder? Here we hear from Margaret Levenstein from ICPSR; could you play Margaret's video for us, Judy? You can imagine a data management plan being used by a funder to say: okay, this is what you promised to do; can I go back and verify that you actually did it? I can if there's a persistent identifier associated with the repository where you said the data were going to go, right? So these things interact. There are things you could imagine a journal doing: when you publish in the journal, they want to look at the data management plan to make sure that what you're doing is consistent. If you said in the data management plan that the data are confidential, then the journal might believe that the data are confidential. But if you said, give me money to do this research and I will share the data, and then you go to the journal and say, oh, I can't share my data, it's confidential, well, then there's actually some transparency in this whole thing, right? And again, you can give the journal persistent identifiers; these things all reinforce one another. So there are accountability measures built into a data management plan, if it's machine actionable, that are really much harder to leverage if it's a PDF stored some place that nobody can see. I think we'd all agree that Maggie points out something that is frustrating to people who want to reuse data they see referred to in published papers, and also something that frustrates funders who
expect shared outputs and don't always get them. Can we move on to our next slide, Judy? Here we hear from CNI director Cliff Lynch a little bit about stakeholders for machine-actionable DMP content. Let's hear from Cliff. Certainly for awarded grants, the data management plans, and probably the grants themselves, need to be considerably more public within the institution. They need to be objects that the library and the IT folks have ongoing access to, because everybody has a stake in this: the researchers themselves; offices of grants and sponsored projects, and the compliance side of that; the library, who is going to be helping with curation; and the IT folks, particularly if the data management plan, for example, is revised to call for larger amounts of storage for some reason; they may need to know about that. Thanks, Judy. So here we hear from Cliff about who some of the stakeholders are in machine-actionable DMP content, and some of the reasons why they might need to interact with that content or have a stake in how it's reused. If we move on to our next slide, we can hear a little more about a project within the Research Data Alliance, the Exposing Data Management Plans working group. I'm going to share in the chat a link to the working group's page, and also a link to its call for contributions, a request for comments on exposing data management plans. This working group has been running for over two years at the Research Data Alliance, which is a free, grassroots, international consortium of stakeholders in research data; anyone can join for free. If you're interested in finding out more about the Research Data Alliance, I encourage you to go to rd-alliance.org, where you'll find a way to join the association if you're not a member already, but,
more importantly, to explore the terrific work being conducted in its working groups and interest groups. In the RDA Exposing Data Management Plans working group, we drafted recommendations covering five main points, resulting in 12 recommendations about specific elements of data management plans and how they benefit stakeholders when they're shared and reused. Those five main points cover: FAIR DMPs for FAIR data production; ethical exposure of data management plan content; standardized metadata for DMPs; controlled vocabularies in DMPs; and persistent identifiers in and for DMPs. We'd appreciate everyone's input on these recommendations: where you think we've hit the mark, where you think we've missed it, where you think the recommendations could be improved, or where we might need better examples and use cases. This recommendation and its concurrent request for comments gets right at the heart of each of the stakeholder groups Cliff mentioned, and harks back to everything we heard about funder, publisher, and researcher stakeholder views on how data management plan content can be shared in machine-actionable DMP environments. So please come submit your input. We hope to hear more from you, and we hope that through all our consortial efforts the culture of data reuse and interoperability will be accelerated, so that research can be found successfully, in a way we can trust, and taken up by researchers internationally more seamlessly in the future. Thank you for your attention; now I'm going to pass it on to my colleague. Thanks, Judy; thanks, Natalie. My name is Jennifer Muilenburg, and I'm the research data services librarian at the University of Washington in Seattle; I'm also a visiting program officer with ARL, focused on research data. Now that you've gotten a lot of the background and context, I'm going to say a little bit more specifically about the meeting that happened in December and what our next steps
and draft recommendations are going to be. At that meeting we had both a series of panels and rotating group discussions, and everything was focused primarily around the stakeholder groups you see listed here: libraries, funders, publishers, researchers, research offices, and tool builders. I'm going to tell you a little bit about what the group is doing next and how you can both see that work and provide comments. Judy, if you could go to the next slide, please. We will be working to come up with a set of draft recommendations, and that work will be presented via webinar so that people can comment on what we've done. This work will also contribute to the AAU/APLU guide for accelerating public access to research data; the link to that report is there on the slide, along with a link to the December meeting agenda, so you can see exactly what I mean when I mention the panels and the workshop itself. Judy, next slide, please. I'm going to tell you a little bit about how we are working together to create these recommendations, using the library stakeholder group as an example. We took the copious pages of notes from our workshop and tried to distill them down for each stakeholder group, and then for each group we worked to identify a problem statement, any missing pieces identified in the research data environment, and an actionable agenda. Next slide, please. As an example, for the library group, the problem statement included lots of things that fell into three primary buckets, which you see here. One thing identified is that existing library expertise in PIDs, DMPs, and data management needs greater visibility: there are pockets where it has good visibility and pockets
where it's missing, and we talked about how to surface that expertise on a more even playing field. We also identified a lack of centralized funding; again, that refers to the fact that some places get more support than others, and with a lack of centralized funding it's hard to make adoption a universal effort. And finally, a lack of standardization in the use of PIDs, DMPs, and data management, both within disciplines and within different departments on campuses. Next slide. For this we tried to identify some of the missing pieces, and everything in that list pointed to two main things: more collaboration and more coordination among various campus partners. The main goal of that collaboration and coordination would be to elevate use of the existing services and of the infrastructure that supports that use. The next step is to identify an actionable agenda. Using the library as an example again, we identified that library and stakeholder partnerships are essential to developing institutional policies on data management and sharing, and this extends to include institutional expectations for DMPs and PIDs. More visibility around what is expected at the institutional policy level will help drive greater adoption of both PIDs and DMPs, and will help identify the library as the primary point of contact for DMP and PID expertise, infrastructure, education, and training. Next slide. There we are. We wanted to provide an opportunity for everyone to discuss the things they saw on the slide and to ask any questions. We also want to thank everyone for attending; I want to thank all my co-presenters as well, and CNI for the opportunity to present. Oh yes, and you can type questions into the chat as well.
Yes indeed, thank you. Thanks so much, Jennifer, and thanks to all of our panelists for reporting on your work, a really important and
very interesting project that you've been working on and have devoted a lot of time and energy to. We really loved hearing about it, and I'm sure we have a lot of questions from our attendees. I just want to introduce myself: I'm Diane Goldenberg-Hart with the Coalition for Networked Information. This webinar is part of CNI's ongoing Spring 2020 virtual meeting, which will continue through the end of May. At this time I would like to invite our attendees to type your questions into the Q&A box; we will field those live. If you have any comments or questions, or if you want to find out more about the process or about how to implement some of these strategies, please go ahead and share those with us now. While we're waiting to see if questions are coming in, I want to share with you in the chat box a direct link to our spring meeting schedule. We've got lots more webinars to come in the next few weeks, including another one following this panel, on statistical consulting in the library. I also want to mention that if you have a comment you'd like to make live, or if you would like to engage directly with any of our panelists, you can raise your virtual hand and I can unmute you so you can interact directly with our panelists, one of the advantages of this environment. So, to our panelists: where are we going next from here? Do you have any thoughts on next steps? Maybe Judy?
Sure, thanks, Diane. So, as Jenny previewed, that's where our draft recommendations are going. There's a limit to how many people you can put in a room, and there are hundreds of people who could have been in that room in December doing this work; we're grateful for the people who were, but we want to really engage everyone who wasn't as well. So what you saw was a preview of the process by which we'll come up with recommendations,
and then those will be made available: we'll push them out to various constituencies, with Google Forms and similar tools for comment. We'll be doing that later this month, in May, with an open comment period through the early summer, and we'll get the draft implementation guidelines out toward the fall. And then, of course, the entire project will have a report to the National Science Foundation. So watch for those; we'll certainly use CNI as a communication channel, and we'll push those out.
Right, thank you. And it looks like Natalie wanted to jump in there too.
Hi, thank you. Yes, I'd like to jump in to share that one way we hope to be able to share these outcomes is in the RDA request for comments on exposing data management plan elements. We hope to have those recommendations adopted at the next RDA plenary, in about four months' time, so the output of this workshop and the recommendations of its stakeholders can feed into that process in a very meaningful way; we're really thrilled about how the two serendipitously occurred at the same time. The other thing I'd like to respond to is the question in the Q&A from Tim McGeary, who asked in what ways we've seen any real-time impact of the work we've been doing during COVID-19, and whether there are any opportunities from this crisis to take advantage of. One place I've seen a lot of activity around this is, again, in the Research Data Alliance and its COVID-19 working group. Within that effort, over 400 volunteers are working together to create rapid guidelines for people doing COVID-19 research, in order to quickly enable interoperable data sharing in this wild-west environment where some work may be funded and some may not, some may be conducted by governments, some
by funded projects, some by academic research consortia, and so on, and it's very important for people internationally to be able to collaborate quickly. We've seen a lot of these same recommendations percolate up through examples people are taking from the NIH data sharing guidelines, from their funders, including funders in the Open Research Funders Group like Wellcome and Gates, and from the kinds of presentations and explanations that were shared at our workshop and others like it. I think there's a sea change in the curiosity and interest of researchers to find the best repositories, find the best data sharing methods, find the right PID, and find it quickly so their data can be taken up, and I think that impatience is a wonderful thing. One of the phrases I love from the COVID-19 work was that we know we don't have time to go back and do it right again later; we know we have to do it right the first time. So it behooves us to follow these draft guidelines from NIH, these draft guidelines from groups like RDA, and the suggestions and stakeholder topics from workshop participants like those at our workshop in December, because they're the best we have right now and the best chance we've got at reusable data. So I really appreciate that question. Thank you.
Thanks, Natalie, and thanks, Tim, for the question. And Cliff has a question for you all: can you give us any sense of how the idea of machine-actionable and regularly updated DMPs is being received by researchers?
I can take that one. When building out these features for the DMPTool, one of my goals has been to make it pretty seamless, so researchers don't notice, kind of like what Maria was saying: it's really the back-end infrastructure that, when done right, is not an extra burden on anyone. In fact, it should make things faster and easier; it should make information flow better, so you don't have to enter the same data in five different systems that can all be
connected. Ultimately it should save people time and make things much easier. So I don't think your average researcher really needs to know much about machine-actionable data management plans; it should really be something that makes their lives easier and makes reporting easier. Going back to a funder, you can see all of the steps your project has gone through, and you have documentation about where things are. So ultimately I think the response has been pretty positive, as long as we keep that goal of making things easy for people and not giving them one more thing to have to figure out: what's a PID, and how does this relate? It should really be pretty seamless for folks.
Right, we don't want to know how to make the car run; we just want it to run.
Yeah, exactly.
Thank you. Just make it work, right? Who doesn't want that? Terrific. All right, well, I don't see any more questions coming in right now, so I think with that I'm going to propose closing down the recording portion, in other words the public portion, of this webinar, with tremendous thanks to our panelists for coming to CNI to talk about this, and also to our attendees for spending time with us today. I just want to let our attendees know that if you want to hang around, the panelists will stay for a bit, and after I turn off the recording we'd love to have you approach the podium and ask a question, make a comment, or just share your ideas about these processes and the needs within this community. So thanks once again to all of our panelists. Thank you so much. Thank you to our attendees.
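[Editor's note] The "machine-actionable DMP" idea the panelists describe, a plan expressed as structured, PID-bearing data rather than free-text PDF prose, can be sketched roughly as follows. This is a minimal illustration loosely modeled on the RDA DMP Common Standard discussed in the session; the field layout is a simplification, and every name, DOI, and ORCID below is a hypothetical placeholder, not a real identifier.

```python
import json

# A minimal machine-actionable DMP record. All identifiers are
# illustrative placeholders; a real plan would carry registered PIDs.
madmp = {
    "dmp": {
        "title": "Example project data management plan",
        "modified": "2020-05-01",
        # A PID for the DMP itself, so the plan can be cited and updated.
        "dmp_id": {
            "identifier": "https://doi.org/10.1234/example-dmp",
            "type": "doi",
        },
        "contact": {
            "name": "Jane Researcher",
            "contact_id": {
                "identifier": "https://orcid.org/0000-0000-0000-0000",
                "type": "orcid",
            },
        },
        # Each dataset the project produces gets its own PID and
        # a planned distribution (repository, license).
        "dataset": [
            {
                "title": "Survey responses (anonymized)",
                "dataset_id": {
                    "identifier": "https://doi.org/10.1234/example-dataset",
                    "type": "doi",
                },
                "distribution": [
                    {
                        "host": {"title": "Institutional repository"},
                        "license_ref": "https://creativecommons.org/licenses/by/4.0/",
                    }
                ],
            }
        ],
    }
}

# Because the plan is structured data, other systems (repositories,
# funder dashboards, research information systems) can read and update
# it without anyone re-keying the same information in five places.
print(json.dumps(madmp["dmp"]["dmp_id"], indent=2))
```

This is the "seamless back-end infrastructure" point from the Q&A: the researcher fills in a plan once, and the PIDs let the DMP, the datasets, and the people involved be linked and tracked across systems automatically.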