Good morning or good afternoon everyone, and welcome to the ANDS webinar today. It looks like we still have a few people joining, but we will get underway fairly shortly. My name is Gerri Ryder. I'm with the ANDS capabilities team and I'll be hosting the webinar today. And I'd like to welcome you all very much. Thank you for your attendance. The session today is being presented by Natasha Simons, Sam Searle and Stacy Lee from Griffith University, who are going to be talking to us about the work they've been doing to establish a culture of data citation. Now, Natasha, Sam and Stacy all work with Scholarly Information and Research Information Services at Griffith Uni. And Griffith has a long-standing involvement in data citation. Long-standing, I guess, is relative in terms of how long we've been working in this area, but they've certainly been working with ANDS in a number of areas around establishing data citation. Just a few logistics before we get underway. As people enter the webinar they're automatically muted. That's because we're recording the session and we like to keep the background noise down as much as possible. The recording is underway, and any questions and background interruptions will be edited out later. We will send out an email when the recording is available. If you've got questions as the session is underway, please put them in the question pod and we'll come back to them after the presentation. At that time, if you have a microphone that's working, we will unmute you and you can ask the question yourself. Otherwise, I can ask the question on your behalf. So without any further ado, we might move on and start the presentation. Sam Searle is going to get us underway with an introduction to the session, and then we'll also hear from Natasha and Stacy. So Sam, I'll hand over to you. Excellent. 
Thanks, Gerri, and hi, everyone. Welcome to today's webinar, which is going to focus on Griffith University's approach to data citation. In 2012, Griffith University's Scholarly Information and Research services began a new project that was supported by ANDS. Its full title, which we don't use very much, is the Data Citation Infrastructure Establishment Program. The goals of the project were to enhance the infrastructure for data citation, to test methodologies for tracking impact using both formal and informal methods, and to provide some targeted outreach to researchers about the benefits of data citation. So in this webinar, Natasha, Stacy and I will describe what we've done in each of those three areas, and we'll give you a bit of an insight into Griffith's future plans as well. We'd like to acknowledge up front the financial support that ANDS has provided to Griffith. This support has enabled us to undertake the work that we're describing today, as well as previous work to develop infrastructure in this and related areas. We'd also like to thank ANDS for the opportunity to attend events like this one, because as we've been grappling with these issues, it's been really nice to know that we're not alone and that we're part of a community that has similar objectives and similar kinds of obstacles that we all need to overcome. So before we get started, we thought we'd provide a little bit of background about Griffith. From the single Nathan campus that was established in the 1970s, Griffith has expanded to now have five campuses in southeast Queensland, about 43,000 students and 4,300 staff. In terms of administrative organisation, Griffith has 20 schools and departments that are grouped together into four clusters that you can see on the screen there. Cutting across those schools and departments, though, we have a number of research centres and institutes. 
And the role of the centres and institutes is to bring together critical numbers of staff and higher degree students and to aggregate research from those different areas into programs of research. So Griffith has identified a number of priority areas for strategic investment in research, and as you can see from the list on the screen, it's quite broad. There are social sciences there, there's creative arts, and there are some very interdisciplinary research areas like climate change adaptation. That makes Griffith a very vibrant and interesting institution, but it does mean that in dealing with issues around research and citation we need to accommodate the practices of a much wider range of researchers than you would find at more specialist organisations. We'll talk a bit more about what that diversity of disciplines means later in the webinar. I'm not going to spend too long on this slide, but I guess we just wanted to emphasise that Griffith's work in data citation builds on a set of other activities that have been under way for quite a while, and those efforts have been both top down and bottom up. We can talk a bit more about any of these things in questions if people want us to. In terms of embarking on the current data citation work, we definitely weren't starting cold. A range of other activities were already completed or underway, and this gave us a really firm foundation for what we'll talk about today. E-research and library staff had gained a fairly deep understanding of researchers' data management needs through interviews that were conducted during an ANDS-funded Seeding the Commons project. E-research had also deployed a research repository early on, which has 13 collections and over a thousand items in it. One of those collections is a university-wide data registry with about 15 collections in it that are harvested to the Research Hub and by Research Data Australia. 
There's been ongoing work on improving the quality and the interconnectivity of the metadata records. We're always looking for chances to maximise automation and reduce the manual effort needed to produce a data record wherever we can, and that work was also supported. Finally, we'd already dipped our toes in the data citation water by being the first institution to sign up for the ANDS Cite My Data service to mint DOIs for a small number of our data collections. So at this point I'll hand over to Natasha to tell you about what we've been doing in the area of infrastructure. Okay, so digital object identifiers, or DOIs, are persistent identifiers, and they're an important component of the data infrastructure. Persistent identifiers are critical in managing online resources. Without persistence, links to online resources are likely to break and the resource becomes effectively lost. There are a large number of persistent identification schemes available for use, and it's useful to select which ones to use against a range of criteria that includes uniqueness, trustworthiness, reliability, scalability, flexibility and transparency to users of the scheme. In recent years there's been a focus on better management of and access to research data. Within this context there's a growing international effort to improve citation of research data by using the DOI system. The international not-for-profit organisation DataCite is playing a leading role in this effort, and as a partner of DataCite, ANDS is also contributing. DataCite promotes the use of DOIs in data citation as a way of helping researchers track reuse of their data, helping data centres establish a mechanism that supports discovery and reuse, and supporting publishers with an elegant link between an article and its underlying data. The benefits of assigning DOIs to data sets and data collections also extend beyond their value in the context of data citation. 
Assigning DOIs to research data collections reinforces the concept of data as a valued research output to be managed persistently and for the long term. DOIs require a commitment to maintaining links to the data and therefore signal an institution's willingness to manage the data for the foreseeable future. DOIs are also routinely assigned to publications, and they come from the publishing industry. When applied to data, they indicate that data is to be treated with the same respect as publications: to be linked to and to be well managed over time. And finally, DOIs are key to the collection of citation metrics and altmetrics. While citation metrics track formal citations, altmetric tools such as Altmetric and ImpactStory use DOIs to track mentions in social media and the like, and without a DOI this tracking is made much more difficult. So most of you will be aware that ANDS offers a Cite My Data service that enables research institutions to mint DOIs for their research data sets or collection records. As Sam mentioned, Griffith was the first to sign up for the Cite My Data trial back in 2011. Even though we knew we'd be guinea pigs for this new service, we recognised the value of DOIs to Griffith, as outlined in the previous slide. The ANDS service is offered in partnership with DataCite, and this means the service is for allocating DOIs to data sets, data collections and grey literature such as theses and discussion papers. This is because DataCite is the DOI registration agency for these types of materials; other registration agencies such as Crossref provide DOIs for standard publications. So after the ups and downs of the initial trial, we found the ANDS service easy to use and technically straightforward. We initially developed a PHP script, which is now a Python script, to use the ANDS service. Our code is available open source on GitHub for anyone who would like to use it. 
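To give a flavour of what a minting script works with, here's a minimal sketch of assembling the mandatory DataCite metadata that a service like Cite My Data expects alongside a mint request. This is our illustration only, not the Griffith GitHub script: the example DOI is a placeholder and the schema version in the namespace is assumed.

```python
# Sketch: build the minimal DataCite metadata record needed to mint a DOI.
# The DOI below is a placeholder and the kernel version is an assumption.
import xml.etree.ElementTree as ET

DATACITE_NS = "http://datacite.org/schema/kernel-2.2"  # version assumed

def build_datacite_xml(identifier, creator, title, publisher, year):
    """Assemble the five mandatory DataCite elements as an XML string."""
    ET.register_namespace("", DATACITE_NS)
    resource = ET.Element("{%s}resource" % DATACITE_NS)
    ident = ET.SubElement(resource, "{%s}identifier" % DATACITE_NS,
                          identifierType="DOI")
    ident.text = identifier
    creators = ET.SubElement(resource, "{%s}creators" % DATACITE_NS)
    creator_el = ET.SubElement(creators, "{%s}creator" % DATACITE_NS)
    name = ET.SubElement(creator_el, "{%s}creatorName" % DATACITE_NS)
    name.text = creator
    titles = ET.SubElement(resource, "{%s}titles" % DATACITE_NS)
    title_el = ET.SubElement(titles, "{%s}title" % DATACITE_NS)
    title_el.text = title
    publisher_el = ET.SubElement(resource, "{%s}publisher" % DATACITE_NS)
    publisher_el.text = publisher
    year_el = ET.SubElement(resource, "{%s}publicationYear" % DATACITE_NS)
    year_el.text = str(year)
    return ET.tostring(resource, encoding="unicode")
```

In a real workflow, the resulting XML would be sent to the minting service over HTTPS; the point of the sketch is just how little metadata is actually required to get started.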
You can also now generate a listing of your DOIs by logging into the ANDS online services, but it's important to remember that the ANDS service is a machine-to-machine service. ANDS doesn't offer a GUI that an administrator, as distinct from a software developer, could use to maintain their DOIs. It would be quite useful to have such an admin interface, and a number of institutions have developed their own. For example, the QUT Institute for Future Environments built a nice librarian interface to mint DOIs called DOI Monkey, and the Terrestrial Ecosystem Research Network, or TERN, also developed a nice user interface for minting and maintaining DOIs. So while minting DOIs is technically straightforward, the plan to mint DOIs raised a number of issues at Griffith which are applicable to other institutions. These issues include questions about what material should have a DOI, how to manage versioning, what level of granularity to apply, landing page resolution, the metadata requirements, and what data citation format to use. We've discussed these quite widely at ANDS events and also in a D-Lib article, so we won't cover them in much detail here. Rather, we'll focus on the solutions that we came up with in the form of guidelines and workflows. So the Griffith DOI Introduction and Management Guide was developed to provide a framework for minting and maintaining DOIs at the institution. The target audience is internal and the contents are applicable to anyone involved in minting DOIs, which at the moment is restricted to our E-research Services unit. We've kept it brief at only 10 pages; it's intended to be an easy read that people can dip into via the sections that are relevant to them, and there are links to further information where applicable. The document makes no assumptions of its readers and therefore begins with an overview of the DOI system, data citation and the ANDS service. 
The DOI management section outlines the business rules for minting DOIs, which we'll cover in the next slide. It also makes clear our approach to granularity, versioning, citation format and so forth. The document includes a section on the DataCite metadata schema, as a minimal amount of metadata is required to mint a DOI, and we finish with a technical summary that points to the DOI scripts and information on the ANDS service. We don't have time to go into this document in any detail here; however, we're happy to provide you with a copy on request. Note that we expect this guide to require updates in the future, but it's a good reflection of our current thinking on DOI management. On the screen are our business rules for assigning a DOI to data. We felt these rules were necessary to make clear what material DOIs apply to and what criteria need to be met in order to assign a DOI. There are six basic rules that we decided need to be met in order to assign a DOI to a data set or data collection. One: access to the materials that comprise the collection will be open, mediated or embargoed; data collections which are closed, for example due to ethical or legal constraints, should not have a DOI assigned to them. Two: the material is citable; it's a citable contribution to the scholarly record, analogous to a journal article. Three: the collection metadata supports the provision of the five metadata elements required for compliance with the DataCite metadata schema. Four: Griffith University will support management of the collection in the long term, which includes access and storage. Five: the material does not already have a DOI assigned to it. 
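Because these rules are explicit yes/no criteria, they lend themselves to an automated pre-mint check. The sketch below is our own illustration of that idea; the field names on the `collection` dictionary are hypothetical, not the actual Research Hub schema.

```python
# Sketch: the six business rules as a rules-driven pre-mint check.
# The dictionary keys are illustrative assumptions, not a real schema.

ALLOWED_ACCESS = {"open", "mediated", "embargoed"}  # closed collections are excluded
MANDATORY_ELEMENTS = {"identifier", "creator", "title",
                      "publisher", "publicationYear"}

def doi_eligible(collection):
    """Return (True, []) if a DOI may be minted, else (False, reasons)."""
    reasons = []
    if collection.get("access") not in ALLOWED_ACCESS:
        reasons.append("access must be open, mediated or embargoed")
    if not collection.get("citable"):
        reasons.append("not a citable contribution to the scholarly record")
    missing = MANDATORY_ELEMENTS - set(collection.get("metadata", {}))
    if missing:
        reasons.append("missing DataCite elements: %s" % ", ".join(sorted(missing)))
    if not collection.get("long_term_support"):
        reasons.append("no commitment to long-term management")
    if collection.get("existing_doi"):
        reasons.append("a DOI has already been assigned")
    if not collection.get("griffith_managed"):
        reasons.append("material is held in a third-party system")
    return (not reasons, reasons)
```

A check like this is what makes the rules-driven (rather than demand-driven) approach discussed later in the webinar practical to automate.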
Although we currently don't have a way of checking this, our small number of collections means it's not yet a big issue. But it will become an issue, particularly if our researchers publish their data in multiple repositories, for example in the Griffith repository and a data journal repository, and we'll need to work out how to check whether a DOI has already been assigned, and if so, whether the Griffith data collection is actually different to the data collection already assigned a DOI. And lastly, six: the material is held in databases or systems managed by Griffith and not by a third party. This is not only an ownership issue; it's also important for the long-term commitment required by the institution minting a DOI. So once a DOI has been assigned, we use it as the link in the citation field. This is a screenshot of a data collection record in the Griffith Research Hub. The hub is our metadata store solution as well as our research profile system. The hub has addressed the need for a comprehensive view of the institution's research output and contains profile pages for researchers and their associated publications, projects, collections, groups and so on. We use the label 'Cite this collection' and then express the full citation according to the basic DataCite guidelines. This information is then included in the collection record we provide to ANDS in the RIF-CS format. This next screenshot shows the same data collection record that we saw on the previous screen, but in Research Data Australia. The circle is around the data citation info element, and there are guidelines about what to include in this element in the RIF-CS schema and the ANDS content providers guide. This particular RIF-CS element has changed from the original specification following feedback from the ANDS partner community, including Griffith. 
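The basic DataCite citation style just described (creator, year, title, publisher, identifier) is simple enough to sketch as a small helper. The function name and placeholder DOI below are our own illustration, not the Griffith code.

```python
# Sketch: the basic DataCite citation format,
# Creator (PublicationYear): Title. Publisher. Identifier.

def format_citation(creator, year, title, publisher, doi):
    """Build a DataCite-style citation string with a resolvable DOI link."""
    return "{} ({}): {}. {}. http://dx.doi.org/{}".format(
        creator, year, title, publisher, doi)
```

Generating the string from the collection metadata, rather than hand-writing it, keeps the citation field and the RIF-CS record consistent with each other.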
So to finish this section on the role of DOIs in data infrastructure and data citation, we'll touch briefly on the roadmap for future DOI-related activities. For us these include more work on embedding DOI minting into our workflows as an automated process. We can't hope to achieve the wonderful workflows of big organisations like Dryad, who have links to journal publishers, but we can streamline our current processes and mint DOIs for open data as a matter of course. We also plan to mint DOIs for grey literature, and all Griffith digital theses are in our current sights. We will review our guidelines and rules at future points in time, because this is an area that's subject to change. We will embed some types of metadata, such as COinS, into the landing pages for each of our scholarly objects, including data collections, to make it easier for a user of our services to import a citation for one of our scholarly outputs into a web-based reference tool. We also have a watching brief on a number of developments, including the ODIN project and altmetrics tools. I'll now hand over to Stacy to talk about the impact side of data citation in more detail. I'm just going to give a quick overview of bibliometrics and altmetrics, and basically how to measure the impact of your research. So you've published your article and shared your data set. How do you track who's citing your article, or viewing or reusing your data set? For journals and publications we use bibliometrics, which is about analysing and quantifying publications and citations. For researchers, citation analysis, or how many times their work has been cited by others, is of importance. Bibliometrics is also used for researcher profiles and to evaluate their research outputs. It's used as a performance measure for promotions and funding, and to provide comparisons between researchers. For journals, it's used to rank journals within a discipline, which has a bearing on where to publish your article. 
For research disciplines, these research outputs are assessed and used as a benchmark for academic performance. For institutions, bibliometrics are used both as a quantitative and a qualitative measure for government research assessment exercises such as HERDC and ERA to determine university rankings both nationally and internationally, and this has a huge impact on funding and grant applications. There are some issues with these formal metrics. The value of citation analysis varies between disciplines; for example, the hard sciences will have a higher number of citations, but the visual arts or humanities will have a lower number. The quality of the citation is not necessarily reflected by the number of citations; basically, your article could be cited, but the count doesn't tell you whether the citation was favourable. The length of time for a work to be cited can take years. Citation counts ignore downloads, views and discussions on social media sites, which are different measures of impact. These tools are mainly provided by Web of Knowledge, Scopus and Google Scholar. So people are starting to use informal metrics called altmetrics. As scholarly communication becomes increasingly online, different measures of scholarly impact and influence are required. Scholarly citations cannot capture the saving, downloading, recommending or discussing that takes place on social media and scholarly networks such as blogs and Twitter, which denote visibility and peer sharing. Altmetrics are particularly useful for new scholarship or papers published in non-traditional journals, as they give immediacy to the research. The data is harvested from a wide variety of open source web services that count such instances, including open access journal platforms, scholarly citation databases, web-based research sharing services, and social media. So altmetrics really complement traditional metrics. 
Heather Piwowar conducted a study in 2010 tracking dataset citations using common citation tracking tools. The process was really manual. The first step was following citations to the paper that describes the data collection, then filtering. That was followed by searching for accession numbers, URLs and DOIs in full text. So it was very resource-intensive. There's no single interface to search on, and nowhere to easily download a complete set of citation information from the results page. You could not search on DOIs at that time, as Web of Science and Scopus stripped DOIs and URLs from citations when they inputted references into their databases. The situation is starting to change now. Thomson have just brought out their Data Citation Index tool, and we trialled it in April. Academic citation indexing and search services cover journal articles, books and conference proceedings. The Thomson Reuters Data Citation Index is looking at indexing datasets and adding those to current indexes so that scientific research data can also be discovered and cited. The assumption is that if researchers are already using products such as Web of Knowledge or Scopus, then adding an index of datasets should bring data discovery into established researcher workflows. I tried the DCI from a repository perspective, and we also involved the academic services librarians from different disciplines in trialling the product. Here's some of the feedback that we got from the librarians regarding that trial. So overall the Data Citation Index is a good fit for the suite of Web of Knowledge products that Thomson Reuters offers. It has an integrated interface for a general or specific topic-based search, and there is access to a range of datasets in one topic search, as well as links to associated publications and citations. But as you can see from the feedback, Research Data Australia is not yet included in the harvesting, so none of our DOIs are available in the search. 
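The manual full-text scanning that Piwowar's study relied on, hunting for DOI-like strings in references, can be approximated with a simple regular expression. This is a rough sketch of the idea, not the method used in that study; the pattern is a common heuristic and will miss unusual DOIs.

```python
# Sketch: pull DOI-like strings out of free text, the kind of scanning
# that manual dataset-citation tracking involved. Heuristic only.
import re

DOI_PATTERN = re.compile(r'\b10\.\d{4,9}/[^\s"<>]+')

def find_dois(text):
    """Return DOI-like substrings, with trailing punctuation stripped."""
    return [match.rstrip('.,;') for match in DOI_PATTERN.findall(text)]
```

Even a crude scanner like this shows why indexed services such as the DCI matter: without them, every citing document has to be fetched and searched individually.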
There's very high representation for the sciences but low representation in the arts and humanities fields. So the DCI is still an immature product. There are several constraints that may change or develop over time as the product matures; the most obvious one is the quality of the data and the coverage of disciplines. Maybe Thomson should offer the DCI as a bonus in their subscription packages and wait for this product to gain more traction before charging for it. As it is, the cost of this particular tool outweighs any discoverability benefits, given that the data it covers is currently available for free. We also had a play with ImpactStory to see how it works. So here we can see the process of creating a collection. You can input up to 100 DOIs. It's easy to use, and you can use an ORCID ID if you have one. However, this tool only uses DOIs or PubMed IDs. So this is a really good opportunity to mint DOIs through the ANDS Cite My Data service for your data collections, to track usage while it's still free. So we put the DOIs in, the details of the data collection are retrieved automatically, and that's where the DOI comes in handy. Unfortunately, it's too early to see any results for these 14 new data collections, for which DOIs were minted in March. This shows you the process that you would go through to get a report; normally, if the item had been cited or saved, you would get 'highly recommended' or 'cited' stats. There's information about available metrics and current limitations on the bottom right. The tools are immature but are evolving quickly, so definitely worth having a play with. So for the future: mechanisms for data citation tracking are still being developed. If you have bibliometrics experts in your institution, utilise their expertise and involve them in your projects. Keep playing with these tools; as a community we can help shape what we want them to do for researchers and their data, for discovery and reuse. Look at the bigger picture. 
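Since an ImpactStory collection takes at most 100 DOIs at a time, a repository with a longer DOI list would need to split it into batches before submission. A trivial sketch of that batching (our own illustration, not an ImpactStory API call):

```python
# Sketch: split a DOI list into batches no larger than the 100-DOI
# collection limit mentioned above.

def batch_dois(dois, batch_size=100):
    """Return the DOI list split into consecutive batches of <= batch_size."""
    return [dois[i:i + batch_size] for i in range(0, len(dois), batch_size)]
```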
We're going to start using a combination of tools, such as the DCI as well as altmetrics, to measure impact. Last but not least, experiment. Thank you. Great. Thanks, Stacy. Moving on to the outreach component of what we've been doing, here's just a quick overview of the kinds of activities that we've actually done. We established a blog for the project. We spent a bit of time talking to a number of our subject librarians about citation practices in different disciplines. We deliberately introduced data citation as part of a standard consultation with a research group in health and with an individual environmental economics researcher who we knew would be depositing data in the near future. We also looked at notification workflows, and the way that Dryad promotes data citation through its notifications was the specific example that really stood out there. As part of their community outreach you can look at Dryad's workflows expressed in a series of PowerPoint slides and see what they email out to depositors automatically. It was really great to see a working example of that kind of communication and to think about how we could do something similar here at Griffith. So in the first instance we took that process and manually emailed the collection owners of new collections, and hopefully that is one area where we'll have that notification as part of automatic self-deposit processes in the future. We also reviewed all the existing information and workflows that we've got available. The existing Griffith policies and guidelines are really a direct response to the Code for the Responsible Conduct of Research, so while they mention data ownership and sharing, they don't really talk that much about research impact. We looked at the academic style guides, and because Griffith is so multidisciplinary we've got not one but four style guides that are recommended for use here. 
So that's Harvard; APA, the American Psychological Association; MLA, the Modern Language Association; and Vancouver style. And as you'll note from the ANDS data citation guidance, in general these kinds of academic style guides were developed before the current wave of interest in data citation, so they don't really cover the process of citing data at all well. To give you an example of that, the guide that we looked at in the most depth, which was the APA, only mentions DOIs in the context of electronic journal articles, and it also really treats DOIs more as a kind of location, sort of interchangeable with a web address. It doesn't really see a DOI as a formal identifier similar to an ISBN or an ISSN. We also looked at our training materials and guides, and as in most universities, our current training on referencing practices here at Griffith is pretty much targeted at new undergraduates. They're not really a group that are likely to be generating or reusing data sets until later in their academic careers, so unsurprisingly data citation doesn't get much of a look-in there at the moment. All this investigative work coincided with our development of some new best practice guidelines for researchers, and so we have been able to include data citation and impact at several places in that document. This is still an internal document, but it will be out within the next few weeks, and we'll let people know through the ANDS email list when it's available. As for the sections where data citation and impact are talked about: the background section of the document covers policy drivers and benefits, so data citation comes in there. It's also mentioned in a section on contribution to research impact, in a section on organising and documenting data, and there's a later section on sharing data through repositories that also mentions data citation and DOIs. So following on from all that work, a few lessons learned. 
Firstly, it's important to be aware that there are major differences across the disciplines that you'd be likely to encounter in a university like Griffith. I've already mentioned the style guides, but in talking about citation practices with our subject librarians it became obvious that there are many other factors that would make a researcher more or less open to a discussion about how data might contribute to their research impact. That could be what kinds of publishing channels they've got available to them, who they see as their target audiences, and the processes by which their work is currently assessed. And I'd also observe, in a not very scientific way, that there definitely does seem to be something to do with the age and career stage of researchers as well. I think at the moment younger, early career researchers really need to build a profile that makes them stand out from their peers, and so they do seem to be a bit more willing to investigate non-traditional ways of getting their research out there. So in working with the group of health researchers that I mentioned earlier, I definitely observed that the people who were the most interested in the possibilities of data collections being cited were the postdocs, not the more senior staff members. The second lesson is about choosing your time, and I guess we were extremely jealous to hear that CSIRO were able to communicate with their researchers about data deposit and citation before the researchers even submit their articles. That's because CSIRO has a process in place by which all their publications are vetted prior to being sent off to the journals. 
Unfortunately, at Griffith we don't have any way of knowing when a researcher is intending to publish, and our processes for finding out about publications operate in such a way that it could be almost a year after the publication that we even find out that it exists. If you think about publication timelines, that means it could actually have been two years or even more since they submitted it. So by that time any effort that you might make to encourage data deposit and citation wouldn't be that worthwhile, and unfortunately that's not a situation that's going to change for us anytime soon. So we've been trying to find other hooks, and currently we've got three that we might spend time on in the future. The first one is the point of data deposit and the notifications that can be automated around that. I already mentioned earlier that we tried that during the course of this project: we minted 14 new DOIs for newly deposited collections and sent out a notification email with information about citation to those researchers. But those data collections were being deposited after projects had completed and after final reports had already been done, and so I'm not surprised, but I'm still slightly disappointed, that we didn't get any response to that communication. The second thing that we're thinking we might be able to tap into is information about when funded research is coming to an end. There might be a window where data has reached a level of stability and the researchers are starting to think about writing up results before the end of the project. And the final area that we're thinking about is getting researchers thinking about these issues as part of some kind of planning process. 
One benefit of that approach would be getting them to think about identifiers and citation as part of an overall approach to data management, not as a one-off special issue demanding their attention at some certain point in the research life cycle. Here at Griffith we haven't committed to project-oriented data management plans, and we're actually considering an approach based on profiling research centres and institutes. If we can introduce data citation as part of that profiling process, we might be able to get that message about data citation out to people prior to publication. Another issue with timing relates to the skills development aspect. It would be really good to get these messages out to people as part of their introduction to study and research, but it's probably not going to make sense to be doing this with first year undergraduates, so we probably need to be looking at higher degree inductions and the research methods classes that people do at their higher degree level. Hopefully we can get some of those ideas in front of people while they're still in training. Third point: we've been quite interested to hear that other organisations have been taking an approach where DOIs are minted on request, and we'd interpret that to mean that a researcher would have to know what a DOI was and have at least a basic understanding of whether they wanted one or not. As Natasha said earlier, our view is that the assignment of a persistent identifier to published collections has benefits above and beyond those that accrue to the individual collection owners in terms of citation. So by making the minting of DOIs rules-driven rather than demand-driven, we should actually remove the need to communicate about DOIs and data citation prior to a DOI actually being generated. 
So while we would still include DOIs in the citation information on display pages, in notification emails and in various kinds of information resources, we don't think the researchers should have to understand the ins and outs of DOIs in order to make a decision about whether a DOI will be minted or not. Second to last, I guess, we've been brought back to earth a bit through some of our interactions with our librarians and our researchers about data citation issues, in ways that have made us think very carefully about how we talk about benefits. And one of the things that has become obvious is that the formal evidence base for the benefits of data citation is still quite small and quite partial, and we need to be careful about generalising from that to make big statements about benefits that may not be valid in every discipline. A lot of our researchers are working in arts and humanities and in the social sciences, and they're not likely to be overwhelmed with enthusiasm just because we can point to a couple of studies from astronomy and cancer research. And I think if we overgeneralise there's a risk that the important messages that we do want to get across get lost, because the researchers can dismiss that evidence as irrelevant to their discipline or too inconclusive to warrant any action on their part. So I think instead of overselling we need to be really honest and careful when presenting that evidence to researchers. In their own work they wouldn't get away with generalising from two studies without putting that in some context, and so we need to set those same standards for ourselves in our communication. And I guess this points to an area for further work, where we need to be involved in producing evidence and putting it in context.
By 2010, Alma Swan in the UK was able to do a review of 31 studies of the open access citation advantage for publications, and that's where we need to get to with this, to have a compelling kind of evidence base. The other thing that can be difficult is promoting data citation at a time when the broader environment still provides few if any concrete rewards. Researchers participate in external quality assessments and various internal performance management processes all the time; they know the criteria for those exercises inside out, and regardless of the benefits that flow from them, a lot of them perceive those exercises as impositions, and really as symbols of a very managerial culture that is actually preventing them from doing their research, not facilitating it. So if our communication focuses on mandates that the researchers know to be weak, but that we make sound stronger than they really are, I think we risk losing our credibility and having the researchers perceive us as people who want to make more busy work for them. Finally, I guess, we want to approach this work with an understanding and acknowledgement of the pressures that researchers are under. If we don't make an effort to understand where the researchers are coming from, we can't expect them to see us as people who have their interests at heart. The benefits for researchers aren't the same as the benefits for institutions and for funding agencies, and you need to keep that in mind when you are communicating with people. And I guess the point I'd make there is that citation is part of standard scholarly practice; it's about giving credit where credit is due, and so while citation metrics are quite politicised, citation itself is something that all researchers understand and that they do as a matter of course.
And so the last part, I guess, is about being honest and realistic with ourselves as well as with the researchers: promoting a culture of data citation is going to be a long-term and ongoing process. Here at Griffith we're in the position now where we have infrastructure in place for data to be deposited and for DOIs to be minted, and as Natasha said, we've got some procedures in place now that help us understand these processes. Our new best practice guidelines incorporate data citation as part of a holistic view of data management, and over time hopefully our information and training will reflect that better than it does now as well. But there are still a lot of external drivers that are just as important, if not more important, in determining how well established this culture of data citation can possibly become, and these include things like funding agency mandates, the policies of publishers, what's included and excluded from research quality exercises, and the ways in which things like style guides and reference management tools like EndNote and Zotero deal with data. There isn't actually that much that an institution can do on its own in some of those areas, and so we do need collective action and strong leadership if we want to see long-lasting change in some of them. We are optimistic though. We want to continue to develop data citation infrastructure and practices, and we'd encourage you to think about doing those things at your institution, but we do that in the knowledge that no matter how much local success we have here, or that you might have at your institution, there are forces outside our control that still have to be addressed. That's why it's really important to have forums like this one today, so that we can start thinking about this not just in terms of what's happening in one institution, but about what we need to start doing nationally and internationally too. That's it. And back to Gerri.
Thanks Sam, and thanks Natasha and Stacey for a really insightful presentation covering an awful lot of ground; I do appreciate the effort you've put into such a well-rounded presentation. Probably worth mentioning before we open up to questions too: I think it was Natasha who mentioned that Research Data Australia isn't yet indexed by the Thomson Reuters Data Citation Index, and that's absolutely true, but it's probably worth mentioning that we are working with them towards enabling that, and we will certainly keep you informed as that proceeds. So, you know, there is something in the wings there. So I'll open up to some questions, and I think we've got some in the pod, so I'll just take back the screen from Sam. Okay. So Sam, we've got some questions here, or maybe for others. Susan Robbins, I think, has no mic but has commented that one thing she found was that real examples of the value of data sharing work best to encourage researchers. Do you have any concrete success stories that you can share? Susan raises a really important point here, which I guess is that a culture of data citation will only come about with a culture of data sharing. The culture of data sharing is really a prerequisite for the kinds of things that we're talking about. I'm struggling to think of something off the top of my head, but I think we will see that kind of story when we start doing the planning work with the groups in a more concerted way, working with them before the data is being created rather than retrospectively.
I think as well we'll think about this again as we come into Open Access Week later this year, where we think about how to promote data sharing in a week where there is a focus on open access. It's really good to have some success stories, whether they're from Griffith or somewhere else, because researchers do look to their own disciplines, so if we can get a mix of those stories that would actually be really useful. But yeah, like Sam, I can't think of any great ones off the top of my head, so that's something we need to do more work on as part of this whole area. And it sounds like you'd be very happy to hear from anybody else that does have some success stories they can share. Now we have another question from Anton. Anton, I'll unmute you and see if you can ask the question yourself. Can you ask your question, Anton? I'm interested to see if, and I think we actually touched on this previously, there was a culture of using other people's data; it was only when I used an open data set that I realised how incredibly powerful they were. Do you think that when people use other people's data it might help things snowball? The other thing is, have you had any problems with people making their data that transparent, in terms of being quite defensive about things, the transparency of it all, and how that might affect the egos of people whose data is being re-analysed? I think there are a couple of parts to that question; it was a little difficult to hear, but the second part of the question was about the data sharing culture and people being unhappy about that transparency and being sensitive about it.
Yes, I can't talk about my experience at Griffith in this respect, but I can go back to my time at my previous institution. I guess what I would say is that there are certainly always going to be some people who will have that view, but I think for every person who has that view, there is probably someone else who I would have had a conversation with who wishes that they could share their data and has made poor decisions in the past. So certainly when I'm approaching my communications activity, the advice that I always give to people like subject librarians is that it's not our job to do the hard sell, because there are plenty of people around who want to share and are looking for assistance to do that. Some of it is discipline culture, and I think we're limited in being able to influence that, so I will always try to work with the people who are really willing and able, and hope that the other people, through seeing the benefits that accrue to their colleagues, will come on board over time, rather than trying to convince people that we're all on the righteous path. I think that comes down to some of the things that Sam mentioned in the last slide that are a bit outside of our control, to do with funding mandates and institutional mandates to deposit data, and it does concern me that we get sort of incidental data collections. We get stuff that people are willing to give us; we don't collect it as a matter of course. I would say we don't have the best Griffith research data collections because they're not collected as a matter of course and there's no mandate that says they should be, and so these best practice guidelines are really important in trying to push forward at the institutional level with this. And we know that funding mandates in America, for example with the National Science Foundation, have more teeth in some ways, not completely, but they're starting to get there more than we are here, so yeah, we've got a way to go with that one. Thank you.
And I do think we should promote data citation as best scholarly practice, in the sense that if you use a data collection you should cite it, and if you produce an article and you have data associated with it, you should provide a link to it. It should actually become part of your normal scholarly workflows; it's not something hard, it's not something outside of that, it's just part of giving credit where credit is due, basically. So I think that's actually a really good message that we should be getting into the workflows and into the promotional materials we put out about it. And I think, Natasha, that also raises the issue that whether or not a data set has been assigned a DOI has no real bearing on whether it can be cited; even if a DOI hasn't been assigned to a data set, it can certainly still be cited. Yes, that's right. And that probably goes back to the first part of that question, which I guess was that in promoting this culture of data citation we need to be working with the producers of data but also with the consumers of data; they're going to be equally important. People will learn how to make their own data citable through using other people's data, and the more people see data citation on landing pages and in places like Research Data Australia, the more it will occur to them that this is just a normal pattern, what you do.
Great, thank you. Do we have any more questions before we wrap up? There are none in the question pod at the moment. Just while we give people a last chance, I want to remind people that the recording from today's webinar will go up on the ANDS YouTube channel, and an email notification will go out when that's available. All the recordings from previous webinars in this data citation series are also available, as well as recordings of other webinars like the licensing series and the latest changes with release 10 of Research Data Australia, so I'd encourage you to go and have a look at those. And don't forget that we have other events coming up, so please keep an eye out, and feel free to sign up, ask about any events, or make suggestions for events that you'd like to hear more about. So we don't seem to have any more questions, so I'd just like to again thank... oh, Anne might have a question, sorry, I'll try Anne, just a moment. Anne, do you have a question? I just wanted to applaud Griffith Uni in this area; this is really valuable to share amongst this community. Thanks Anne, and I think we have some other people saying the same thing, that they're really pleased to hear about the experiences of Griffith and the generosity in sharing their information. So thank you again to Sam, Stacey and Natasha for your time today, and I know from previous experience that they're very happy to talk to people offline if you wanted to follow up on anything in particular or to have a chat about the work they've been doing. So thank you all for your time, thank you to our presenters, and we look forward to seeing you at one of our future events. Thanks everyone.