As we have quite a full agenda today, I think I'll just crack on and start off this session. So first of all, I hope you were able to attend the webinar presented yesterday by the ARC and NHMRC, giving some background about the code of responsible conduct and the changes to the guide on managing data and information. That was intended as a starting point, a kickoff, to provide a bit of background for today's session. The main purpose for today is sharing approaches to implementing the data guide at universities. When we say the data guide, we mean the code of responsible conduct and the associated guide to managing data and information. The focus for today will be very much on aspects mentioned in the code and in that guide relating to research data. That's our specific focus, but other topics will, I'm sure, come up as they relate to research data too. I would like to extend a really big thank you for all the effort put into this by the Australian Research Council, by the NHMRC, by the Association of Australian Medical Research Institutes, AAMRI, by Edith Cowan University, and by Macquarie University, the University of Melbourne, the University of Queensland and the University of Technology Sydney. It's very much a joint effort in trying to see what would be a useful workshop and approach for the sector to share approaches in this space. First of all, I would like to acknowledge and celebrate the first Australians on whose traditional lands we meet, even if virtually. In my case, that's the Wurundjeri people of the Kulin Nation, and I'd like to pay my respects to Elders past, present and emerging. This artwork here is a local connection for me, which is why I wanted to bring it up. It was created by Mary Nicholson of the Wurundjeri-willam clan, and it portrays the cycle of knowledge, which I thought was very appropriate for today.
I hope this image and this depiction of knowledge coming down, being cycled through and flowing through the Yarra River is a good connection to the flow of information today between universities, all on their journey to enable better sharing of research data and information. To give you a perspective on today's workshop: the purpose is to support good research data management practice as outlined in the code. The workshop is very much about facilitating knowledge exchange between universities, and seeing what solutions have perhaps already been developed, or which you are in the process of developing. So we're hoping that today will provide you with practical examples from other universities that you can build upon, and that you can view the steps other universities have taken to achieve the necessary changes. One of the things we have in mind in this workshop, and it will probably come up at a few points, is that when considering the changes to the code of responsible conduct and the new guide to managing data, you'll find that implementing those actually requires a collaborative approach across the university. So in a few of the sessions we'll be discussing what a collaborative approach looks like and how you do that. We are starting off with the welcome and introduction. Then we'll have a panel session in which five speakers will each speak about their experiences and the things they have done at their university to address elements of the code. Then we will have breakout sessions, with discussion amongst participants on a range of topics, after which we'll report back some summaries, and we'll do a few short polls to get a view across all of today's participants. It should be interesting. Then we'll wrap up, and we should close by 3 p.m.
Australian Daylight Savings Time. Today's speakers on the panel session will also be facilitators for the breakout groups, and we have five: Louise Wheeler, Manager of Research Integrity and Research Programs at UTS; Stacy Waters, Manager of Research Governance at ECU; Louise Grusser, Director of the Office of Research Ethics and Integrity at the University of Melbourne; Adrian Chu, Data Management Training Consultant at UNSW; and Helen Morgan, Deputy Director of Research, Strategy, Planning and Performance at UQ. So that was the introduction for today, and now I'd like to move on to the panel session, in which each of them will provide their own perspective and their own view on the challenges they face. I will stop sharing and hand over to Andy, because he can share their slides.

Thank you, Keith, for that introduction. I would also like to acknowledge and pay my respects to the Gadigal people of the Eora Nation, where I am currently situated and where UTS stands. And I would also like to thank Justin for the presentation he gave yesterday, which was a really great and comprehensive overview of the code and the underpinning guidelines. So as we saw yesterday, in the code we have both institutional and individual responsibilities for managing research data, and it's really important that the institutional responsibilities are discharged effectively in order to enable our researchers to meet their obligations. And as outlined in the code in some detail, it's more than just access to facilities. It really is about the development of policies and governance, and the provision of guidance, training, support and awareness raising. So my talk today is really about the role that institutions can play in reinforcing the value of managing data effectively and in supporting our researchers.
I'm gonna try and go to the next slide, but maybe, Andy, you could do that for me. Thank you. Excellent. So we know why people don't manage their data. We at UTS have done a lot of looking into this, and I'm sure that what we've distilled down into these six points on the slide is probably familiar to many in the room. Data management practices have changed considerably over the last 10 to 15 years in terms of the principles, the drivers, the frameworks and the technology available to us. So we probably all know plenty of researchers who have 20 or 30 or more years of research data conveniently stored in boxes, in their office or in their garage. They know where it is. They may not be able to access it, because it's sitting on a five-inch floppy or some other piece of outdated technology, but they know where it is when they need it, and they would consider their data to be managed. We have researchers who don't see the value or the benefit in managing data in the way that we as institutions prescribe. They see it as a bureaucratic exercise, or they feel they've already completed it, either as part of their ethics application or as part of their ARC grant application. But as we heard from Justin yesterday, the requirements from our funding bodies were strengthened in 2020, and the requirements are changing and increasing. So it's really up to institutions to change the mindset of researchers and to build a positive research data management experience and culture for researchers. Could you go to my next and final slide, please? Thank you. Excellent. So everything that's on this slide is what I'm talking to you about today. There's no particular order of importance, but essentially all of these are reinforcement opportunities to point researchers to the institutional system that has been created at our various institutions and designed to meet our responsibilities under the code.
So I am thinking from the perspective of UTS and the work we've done, but I think this is applicable to all of us. So we look at governance and leadership. It's really important that we have strategic alignment across all of the layers of strategy that exist at the institution, and also alignment between strategy and policies and procedures. And we'll hear more about policies later on. We're talking to you today from an integrity perspective by and large, but it is also important that we look at data management from a risk perspective. At UTS, we've identified research data management as one of our top research risks, and we know that the key mitigation strategies are having fully thought-through infrastructure and support systems in place. It's really important that you have oversight at the highest levels, in our case the research committee, but fundamentally, I think in this box it's really, really important that we have endorsement and promotion from the senior leadership, and that comes from the Deputy Vice-Chancellor of Research all the way through to faculty leadership, heads of school and centre directors, and then supported by faculty research officers and other central units involved in managing data. In terms of engagement and support, really the biggest thing we can do is raise awareness about the importance of data management and how to do it. We can't expect researchers to meet their individual responsibilities if they're not aware of them. So it's important that we have a range of training modules available in different formats, and that these are available on an ongoing basis. Communication likewise needs to happen on an ongoing basis through multiple channels, and the support available to researchers also needs to be very open and transparent. And in terms of how we raise that awareness, we really need to focus on the value and the benefits for researchers.
Researchers are less likely to engage with risk and compliance obligations. They really want to know what's in it for them. And what we're here to do, ultimately, is create an improved user experience, saving researchers time and increasing the value for them and their research. If we look at infrastructure, I think at UTS, certainly, the most fundamental thing is that we have integration between our various research management systems, our ethics systems, our data management and our records systems, again making sure that it's a streamlined and improved user experience. It's important that we have... my slide seems to have disappeared... when that comes back. It's important that we have an end-to-end management system. So when we talk about research data management with our researchers, we talk about it being over the whole research life cycle. So the systems we provide also have to support the entire research life cycle, from planning to analysis, to storage, publication, and then retention and archiving. And I guess down in the bottom corner, from a process point of view, again using the UTS experience, we try to create ways to reinforce research data management practices through our systems and our processes. So our ethics application now encourages people to complete an RDMP in our system. Our project establishment forms also ask how you are managing your data and encourage you to fill out an RDMP in the appropriate system. For HDR students, it's now compulsory to complete an RDMP. So again, we're providing links between our training modules and our systems, all of which point towards reinforcement of our obligations and responsibilities. I've skipped a bit, but I've lost it now. So that's really what I wanted to talk about today. And now I'm going to hand over to Stacey for the next session.

Thanks, Louise. I'm just going to see if I can get this control here. Good morning, everyone, from Perth, Western Australia.
I appreciate it's afternoon for some of you, but it is still a lovely morning here in Perth. My role today is to talk to you about our recent experience of data management planning at ECU from an institutional perspective, and how we sought buy-in at the highest of levels, which has enabled us to make some good inroads this year into data management planning. Thanks, Andy. So we, like lots of other institutions, have a chequered history of data management planning. In 2013, a policy was developed. We have had a paper-based DMP available for researchers from 2015, and we've had around 250 DMPs created over a five-year period, mostly by our HDR students, because it was a requirement of confirmation of candidature. I guess the big shift in data management at ECU happened in 2019, when Research Services, who oversee all research at ECU, went through a restructure. My position as Manager of Research Governance was created, and part of my responsibility, actually I see my whole responsibility, is implementing the code. So myself and my team of one other spent time at the end of 2019 and in 2020 unpacking the code in its entirety, including all of the supporting guides, and really articulating everything that we were responsible for as a university, but also everything that our researchers were responsible for. We then looked at and celebrated the things we were doing well, and found the gaps. And really it was data management that stood out as our biggest gap. Thanks, Andy. Andy, do you mind moving the slide? Thank you. So our actual experience, like many others: we had a data management policy in place, it was a paper-based DMP, and we had really poor compliance, as I said, around 250 DMPs over the space of five years. Thanks, Andy. So our approach to research integrity: we have world-class research at our core. We have then defined three areas: responsible research conduct, working with others, and professional conduct.
We have overlaid that with the supports we provide as an institution, from professional learning to our governance policies and processes and our research integrity advisors. Thanks, Andy. The next component, though, and our priority: in order for ECU to discharge its duties to store, retain and reuse data according to not only the code but also laws and WA legislation, et cetera, we needed to have an oversight of what data our researchers were generating. That is why data management became our priority and why we took it to the highest levels of the university to say, this is a compliance matter; we need to invest and we need to address data management for all of the risk and compliance issues that Louise outlined before. Thanks, Andy. So in a nutshell, at the very start of 2019, the first thing we did from a research integrity perspective was to put together a data management steering group, because we realised in Research Services that while we may have carriage of research integrity, there were lots of other areas across the university who actually did the work in relation to data management. So we pulled together a steering group comprising us, IT, the information security team, our information management and archive services, the library, and researchers. And the very first item on our agenda was to ask the other people around the table: would you mind if Research Services formally had oversight of data management, working with the other areas? Up until then it had been a hot potato, an issue that everybody thought was important but no one really had ownership of. So we then took that institutional leadership and ownership of data management, but working with our friends in other service centres.
The very first thing we did as a group was co-develop a conceptual framework for data management grounded in best practice from the Privacy Act, the code, the National Statement, the Five Safes, all those sorts of things. We used that to develop our data management questions, and then we embedded those in our ethics management system. Thanks, Andy. So our conceptual framework is what we consider our four pillars, and the approach at ECU is to ask our researchers what data they have, to allow us, as the experts across the university, to secure, retain, manage, store, access and reuse data in accordance with all of the things we need to comply with as an institution. But getting buy-in at the highest level from our executive, to say that data management is important and researchers must comply, was a key component of this. Thanks, Andy. Having that framework allowed us to then develop our questions and be really particular about what it was that we needed to know about the data our researchers collected, and that really informed the questions we asked. Thanks, Andy. So we are fortunate at ECU that all researchers know they need to register their research in our research ethics management system; even if they're not working with humans or animals, they need to come in and register their research. What we now do is that when you register your research in REMS, we ask you to create a data management plan as part of that process. So as part of this process now, thanks, Andy, we have our researchers completing a DMP. The information collected in our online system gets securely stored in a SharePoint database for us. We then use the functionality that Office 365 allows us at ECU to notify the relevant people across the university about the data that are being collected.
It's not that we don't trust our researchers, but we want to put the information in front of the right people within ECU who can help our research community do what they need to do with their data. So our system notifies IT if there's additional security required. It notifies the library when people say they'd like to reuse their data and share them in the future. It notifies IMAS, the Information Management Team, about retention-ready dates. And it automatically allocates a secure SharePoint storage space for research records, which gets surfaced as a unique Microsoft Team for each research project. The DMP can be modified at any time, and our system then notifies researchers and the relevant people across the university when data are retention-ready, so that we can look after data from an archival perspective. Thanks, Andy. So far, since late, or not really even mid, probably sort of September-ish, we've had 344 data management plans created at ECU. 87 of those involve some type of sensitive data as defined by the Privacy Act. And that means that IT have been able to reach out to those 87 researchers to say, we think your data might need some additional security; let's explore that and see what we can do to support you. Similarly, we've identified 19 that have large-scale data storage needs, and 116 who have physical data records that we need to help researchers protect. And now 100 researchers at ECU have said that they would like to explore making their data available via open access and have the appropriate ethical approvals in place. Thanks, Andy. The comfort, the thing that is now not keeping me awake at night, is that for all new research approved at ECU, we now know what those data are. We are managing those data. We're securing those data. We're helping our researchers to retain those data, and we're making them reusable as much as possible.
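Purely as an illustration of the kind of rule-based routing Stacey describes, not ECU's actual implementation (theirs runs on Office 365 and SharePoint, and every field name, threshold and team name below is hypothetical), the logic of turning DMP answers into notifications for the right service centres might be sketched as:

```python
# Hypothetical sketch: map a data management plan's answers to the
# service centres that should be notified. Field and team names are
# illustrative only, not ECU's real schema.

def route_dmp(dmp: dict) -> list[str]:
    """Return the list of service centres to notify for one DMP."""
    notify = []
    if dmp.get("sensitive_data"):        # e.g. data covered by the Privacy Act
        notify.append("IT Security")
    if dmp.get("share_openly"):          # researcher intends to share/reuse data
        notify.append("Library")
    if dmp.get("physical_records"):      # paper records needing protection
        notify.append("Information Management")
    if dmp.get("retention_ready_date"):  # archiving trigger
        notify.append("Archives")
    return notify

# Two example plans and where their details would be routed.
plans = [
    {"sensitive_data": True, "share_openly": True},
    {"physical_records": True},
]
for plan in plans:
    print(route_dmp(plan))
```

The point of the sketch is the design choice itself: the researcher answers questions once, and the system, rather than the researcher, works out which experts need to act on each answer.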
We did this, thanks, Andy, and our keys to success were identifying clear leadership, getting institutional buy-in and support from the very highest levels, creating a shared vision across all of the relevant service centres at ECU, being able to integrate and automate an online DMP, and most importantly, providing real-time information to the relevant service centres so that they can best support individual researchers who have those non-standard or non-typical needs. So we're putting information about what our researchers collect into the hands of the service centres at ECU that need to know about it and can then discharge their duties. So that's a brief summary of the institutional response that ECU has undertaken around data management, and I look forward to any questions at the end of the session. Sorry, and I should have introduced Elise, who's the next speaker. Hi Elise, are you able to speak or are you muted?

Sorry, I have just figured out how to unmute on my telephone. So I'm on my telephone for the audio and on my computer for the visuals. I do apologise, everyone. I'm Elise from the University of Melbourne. I will just put my video on. I'm essentially here to talk to you today about designing institutional policy. So we're looking at, at a very high level, what the university is doing. I am really privileged to have great colleagues across the university that we're working with. And actually, I think Helena is here, Helena Lynn. She is one of the most phenomenal forces doing very similar things around trying to bring all the different parts of the university together to focus on data. But I'm here to talk to you about policy today. If we could go to the next slide, just really quickly. For those of you that may not have been here yesterday, I just wanted to quickly recap the institutional challenges set in the code.
The principles that are in the code are really around, first, rigour in the development, undertaking and reporting of research. Principle 3, transparency in declaring interests and reporting research methodology, data and findings; that requires researchers to share and communicate their research methodology, data and findings openly, responsibly and accurately. And then Principle 7, accountability for the development, undertaking and reporting of research; that is, to comply with the relevant legislation, policies and guidelines and ensure good stewardship of public resources used to conduct research. Those principles also feed into responsibilities in the code: provide ongoing training and education that promotes and supports responsible research conduct for all researchers and those in other relevant roles. Now that's an important reminder that this is not just for researchers and students; with the new focus of the code, policy has to actually encompass all of the professional staff that support the process of research as well. There's Responsibility 5, ensuring supervisors of research trainees have the appropriate skills, qualifications and resources. Responsibility 8: institutions must provide access to facilities for the safe and secure storage and management of research data, records and primary materials and, where possible and appropriate, allow access and reference to others. And we as institutions also need to support researchers with their Responsibility 22: retaining clear, accurate, secure and complete records of all research, including research data and primary materials. So that's the code. On the other side of the slide, we have the policy setting, which is actually in the guidelines. The challenge is that we need to cover ownership, stewardship and control; storage, retention and disposal; safety, security and confidentiality; and access by interested parties.
That's kind of the summary of the expectations of the principles. We also need to focus on processes: how are we going to support the policy and the responsibilities executed in them? And I think my colleagues in the previous two presentations and in the next two presentations bring some really good examples of activities occurring in their institutions to bring that all together. We also need training, which again will be addressed in the next presentation, and the facilities, which is actually the really expensive part that I think a lot of institutions do struggle with. Could we move to the next slide, please? So, the implementation challenges for universities in regards to policy and data, particularly for institutions that are large and many-faceted: to try and get one policy that fits everything is very, very difficult. And my understanding is that most universities have taken a multi-pronged approach, where there are policies that feed into other policies. The University of Melbourne is currently creating, or I should say refreshing, its data policy. I'll talk about that in a minute, but the challenges that have come up in this process will, I think, be relevant to all universities. As large-scale and comprehensive institutions, we have diverse discipline specificity, and I know that's sort of an oxymoron, but we do have a lot of discipline specificity across the wide range of disciplines we cover. With that, the flow-on effect is that there are different understandings of data. So we will have research that has an artistic output, like a piece of artwork or a musical composition or dance, through to something that is a digital giant, where you've got terabytes and terabytes of information. There are also student projects that last a few months, through to multi-year, multi-institutional behemoths. So, you know, we've got things that have so many different authorities feeding into them.
There's also the percentage of the workforce that's transient, and I think that's a really huge consideration when we're writing our policies, especially around movement. I don't know if everyone was here yesterday, but Justin, when he was talking about the ARC's expectations, said they do expect us, if it's not in the policy, to have guidelines around how we are managing the movement of people between institutions. So that's within the state, interstate and overseas. How are we going to curate the data, and the custodianship of the data, that is a product of research? We also have a large percentage of our workforce moving from academia to other employment entirely. So remember that around 33% of PhD completions move into academia across the sector, and the rest of them move into other employment. And one of the things that I think everyone has mentioned today so far is the overlap and shared responsibilities. Data isn't something that is discretely owned. I think ECU decided to put data in the research office as the hub for all of the different areas, but I do think it is one of those things that will only be successful if all of the various areas still feed in. And I think that's a really great setup you've got over there, by the way, Stacey. There's also the multitude of legal and ethical obligations that feed into the data being created. And of course, the big cultural change that Louise and others are talking about today: how do you actually shift people into making this a natural part of their world? And then next slide, please. So when we're looking at developing policy in all parts of the code, we have to think about it in terms of what are the key things that make policy good and effective. I thought about this and decided to bring up my top eight. And the first of my top eight is that it's endorsed.
So any policy that you have doesn't just have to have support from the top; it has to be lived in the actions of the leaders. Second, it has to be known and understood. So we should have strategies for communication and engagement, and I'm not talking about just communicating that it exists, but specific training and repeated reference to it in procedures, guidelines, tools, inductions, et cetera. So I think Louise mentioned that her ethics applications highlight research data management plans, and I think it's that constant reiteration, where you're just referring to each other and trying to build the obligations into the day-to-day, that normalises the whole process. Third, it should be joined up, so not just between the different areas, as I mentioned just before, but actually built around shared goals and values. And I think that's one of the really positive things that the code and guides have brought out for all institutions engaged in research in Australia: we do have shared goals and values. So anyone working at Melbourne could be collaborating with someone anywhere else in Australia, and they should know that whilst the policy wording won't be the same, the intention and the shared goals and values that they hold and espouse are all there. So it's actually really good, when you are developing a policy, to have a look at the landscape and see who else is actually doing something and what they are doing. The fourth point is that it needs to be relevant: the what's-in-it-for-me factor. I think Louise said exactly the same thing at the beginning. If someone can't see why it's relevant and applicable to them, they will not care, and that's a fair thing. We have a limited cognitive load in this world, and I think if we can't see something as relevant, we dismiss it automatically; quite a natural thing to do. Fifth, it should be realistic and reasonable.
So whilst I've said that it needs to be built around shared goals and values, we don't want it to be just ideals and hollow statements. We need it to make sense, be actionable, and be supported with the procedures, processes, tools, handbooks, FAQs, guidelines, whatever else is needed to help people fulfil their obligations without having to think about them too much. Sixth, it needs to be stable and adaptable; again, I've got another oxymoron here. By stable, I mean it should be something that does not need to be updated regularly and can last a few years, and that's usually achieved by not being too specific. In terms of the adaptability, what we're looking for sits in the procedures and processes underneath. So with that lack of specificity in a policy, where you have broader statements, you're allowing for the specificity to be in the level below. If you think of that triangle at the beginning of my presentation, it's the level below that really needs to hold the specific things that can be changed very, very regularly. Number seven is that it needs to be inclusive, so it needs to have the right scope. Now, if you remember, a few slides ago I mentioned the fact that it needs to apply to researchers, so that's our academics and our students, but also professional staff engaged with research. So we're not just focusing on those conducting the research, but all of those people helping. And I guess the difficulty for all institutions, and if anyone has the answer to this, I would love to know, but the difficulty that Melbourne came across, is how do we cover dual appointments, visitors, honorary appointments, collaborators, partnering institutions and so on?

Elise, are you still there? I'm afraid we've lost you. Andy, Andy, was that the last slide? Or, I don't know, there we are, there's the next slide. Elise, are you still there?
Maybe we'd better move on for now to the next one, on training, by Adrian. When we get the connection back with Elise, we'll see if there's anything else she'd like to address. Adrian, please, we'd be very interested in your perspectives on the training aspect in response to the code and the guide.

Yep, so thank you, Keith. Yes, so let me see where I can control the screen. Uh-huh, okay, oops, yep. All right, hi everyone, I'm Adrian, and I'm from UNSW, and today I'll be sharing our research data management training experience. Okay, let me fiddle a little bit with this. Yep, and I will start off with some context setting before talking about our training experience, and I'll end with what we are doing next, or could consider doing. So the framing of the management of data and information guide is largely policy oriented. Many of the first sentences for the institutional responsibilities start off with something along the lines of "institutions must have policies on this or that", or "institutional policies should address X, Y and Z". In this regard, UNSW has a suite of policies, guidelines and procedures. In fact, we have a policy that speaks specifically to researchers and their data, and we also have a guideline on how to handle sensitive data. However, policies almost never achieve their intended outcomes. That's because policies are never simply implemented. As Elise mentioned just now, policies tend not to be that specific. And the policy's target audience will almost always interpret these policies through their lived experiences, and translate their interpretations into what it means for them in practice, before finally enacting those policies. And what we are after, I believe, is really to get researchers to enact RDM best practices, to treat and handle their data properly.
So rather than leaving it wholly in the hands of the researchers to interpret and translate what we would really like them to do, it might be beneficial to help them with the interpretation and translation part. As such, at UNSW, we have an RDM initiative with a steering committee led by the PVC Research Infrastructure. The initiative is broken down into three streams, namely people, tools, and policy. And I believe this is roughly similar to what the other presenters have been talking about too. In a way, the people and tools streams target the interpretation and translation of the data policies. And if I could use only one word to sum up the institutional responsibilities, that word would be "enabler". Institutions need to have enablers in place to help researchers meet their data responsibilities. And in this regard, I would think that we've started providing some key enablers through our people, tools, and policy streams. So now I will share what UNSW did in its initial phase, with a focus on training itself. Before turning to our RDM training, I think a definition of training would be useful as a frame of reference for what I'll be presenting next, and also as a bridge to what I've been talking about. In my latest paper, I defined training as an activity with set objectives and outcomes that can be demonstrated. But to even get to setting those objectives and outcomes, we need to interpret and translate policies. And once all the interpretation and translation is done, we can then design training by structuring the content so that users can engage with it in a meaningful way to deliver our identified outcomes, outcomes that we can track and report. So in the case of RDM training, the key goal is really to move researchers, each one of them with their own varied RDM understandings or interpretations, to some sort of agreed baseline institutional understanding.
So everybody has some common vocabulary to refer to when they're talking about RDM. And I'll be using our RDM online training to illustrate what I'm talking about. Can everyone still hear me? Because I think my Zoom may have crashed. Okay, so I think I may have lost control. Yep, I'm still here, okay, all right. So from the interpretation of our data policies, the RDM initiative came to an agreement that we can start off with four simple RDM messages for researchers. The first: to even know if you have sensitive data, you have to determine or classify your data. The second: to ensure that your data is safe and secure, use UNSW-supported data platforms, because we have already determined the security level of our platforms. The third: have an RDM plan for a research project, because it will get you thinking about how best to manage your data. And the last one: complete a short RDM online training, because this will get you up to speed. There's also a fifth message, which is: for all things RDM, contact us at RDM at UNSW. I think controlling the screen seems to be crashing my Zoom. Andy, maybe you could help me go to the next slide. Yep. So turning to the RDM online training, or RDM OT, we have three different versions for three different target groups. Each is structured a little differently because each group has slightly different expectations and needs, right? But all three versions revolve around the RDM key messages. And since their release in 2019, we have had close to 2,000 completions. Andy, next screen, thanks. The intro module has three key outcomes. First, when someone completes the RDM module, they will be confident in classifying the sensitivity of their data. And in the training, we have discipline-specific interactive case studies that help them gain that confidence.
The second is for them to know of the key UNSW-supported platforms, and we have contextualized materials embedded in the modules for that, cards or guides like this one. And finally, we provide a walkthrough of what an RDM plan is and how to complete the mandatory fields. And here's the catch, in a way: in order to complete this module, they have to key in an RDM plan ID in the module itself, which means that as they're doing the module, they are also completing a plan. So it's a win-win situation for everyone, right? And the median time for completing this module, and this includes completion of a plan, is around 40 minutes. So it's a pretty tight package. Next slide, Andy, thanks. Yep. And I must say, we have had fantastic training outcomes, just using the HDR version as the example here. Based on more than 1,000 survey responses, it seems that almost everyone was happy with the intro module and was engaged by it. Perhaps more importantly, after completing the module, participants are confident in classifying their data sensitivity. They now know they must store data on appropriate systems based on their data sensitivity, and they also know how to submit an RDM plan on our system. And as training raises awareness of RDM, we have also been getting emails from our research community asking for RDM assistance. Next slide. And here is some qualitative feedback to complement the stats. As mentioned before, we have three different versions, for HDRs, academic staff and professional staff. And if you look at the orange boxes, you can see that HDR supervisors also got something useful out of it. For instance, it alerted them to check whether their HDRs have allowed them access to the RDM plans. So from all my RDM OT data, I believe I can quite confidently draw the conclusion that the vast majority of those who have completed the training now have a baseline understanding of what RDM at UNSW is. Next slide.
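Adrian's first two key messages, classify your data, then store it only on a platform rated for that sensitivity, amount to a simple lookup rule. The sketch below illustrates that idea; the sensitivity levels and platform names are invented for illustration and are not UNSW's actual classification scheme or platform list.

```python
# Hypothetical sensitivity-to-platform mapping: each level lists the platforms
# the institution has pre-approved for data of that sensitivity.
SUPPORTED_PLATFORMS = {
    "public": {"open-repository", "team-drive", "secure-enclave"},
    "internal": {"team-drive", "secure-enclave"},
    "sensitive": {"secure-enclave"},
}

def allowed_platforms(sensitivity: str) -> set[str]:
    """Return the approved platforms for a given sensitivity level."""
    try:
        return SUPPORTED_PLATFORMS[sensitivity]
    except KeyError:
        raise ValueError(f"Unknown sensitivity level: {sensitivity!r}")

def is_compliant(sensitivity: str, platform: str) -> bool:
    """Check whether storing data of this sensitivity on this platform is permitted."""
    return platform in allowed_platforms(sensitivity)
```

The point of pre-rating platforms, as Adrian notes, is that the researcher only has to get the classification right; the platform decision then follows mechanically.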
And perhaps it's not really a surprise that we managed to create such a fantastic experience for our research community while at the same time improving RDM engagement levels. Based on a systematic review that we did, we found three main features of successful RDM training, and the design methodology behind our RDM OT definitely has all three. We placed users at the heart of our design and development, and it's about working with participants to co-produce something that's actually useful for the end users themselves. And in our case, it was pretty useful for the institution too: as we are getting more RDM inquiries, we are also getting more RDM plans, and we also have increased uptake of supported platforms. Next slide. Yup. So, next steps. The training that I've been talking about is just the intro to RDM, and as you can see from the data lifecycle itself, it covers just the data management planning stage of the RDM lifecycle. To speak to the other stages of the lifecycle, we are now developing informational pages for those stages. If you would like to find out a little bit more about the backstory of why I'm not creating training for the rest of the stages, feel free to have a chat with me; we can take that offline. Next slide, Andy, thanks. And broader next steps, and this is really just me thinking out loud. I think there's an opportunity here to see if our training methodology works at a broader scale. We are currently transitioning our RDM OT module to a standalone interactive package that can be customized for other institutions and deployed on major LMSs. So the upshot here is that we can try to work off the current RDM OT and contextualize the training for other institutions based on agreed broad RDM principles, principles that I believe are not really contentious. For example, all researchers being able to classify their data sensitivity and knowing how to handle it properly.
So with similar training in place sector-wide, we can then move from various RDM understandings at the individual level, and also across institutions, to a baseline sector-wide RDM understanding. And I think this would facilitate safer data collaboration across institutions, and all researchers across the sector would have some sort of common vocabulary to draw from. And like UNSW did for starters, we can just target a couple of principles and a few institutions to test whether this idea is viable or not. So if there's anyone out there who would like to explore this with me, please reach out. And also please reach out if you'd like to find out more about RDM OT. Always happy to chat. So I believe that's my final slide. Great. Thank you very much, Adrian. These have been really interesting presentations. We are running a little short on time, and there's a whole range of interesting questions in the chat. I would like to ask the speakers that have already presented: if there are questions for you in the chat, could you please have a look at them and try to respond in the chat? Because I think we'll be running short on time after this to address all of the questions. So for now, I'd like to hand over to Helen to present on what's happening at UQ on uptake, but also future projections. Thank you. Thank you, Keith. Just checking you can all hear and see me okay. Fantastic. So I'm going to have a little chat with you today about uptake of one of the systems we have in place at UQ, which is called RDM at UQ, a very handy, easy to remember name, and what we've been doing here. So I think it's important, obviously, when we're talking about the code of conduct, that we don't lose sight of what it is we're trying to achieve as an institution.
The goal, of course, that we're all working towards in our day jobs is this idea of making sure that the research outputs that come from our institutions are reproducible, trusted, and things that can really be built on and translated into either policy or future academic work. We want to make sure that the stuff that's happening in our institutions is really good quality. And then we're also looking at it from our researchers' point of view: that they're getting a really efficient experience with the systems we have at UQ, that they're not having to manually enter the same information into multiple different places, for example. So we're looking for that really trusted, quality output as well as a really nice user experience for our people. I can go to the next slide. Thank you, Andy. I thought I had control of the slides, but I don't, so I'll just kind of wave. So in terms of where we are at UQ, we're trying to transition to a business-as-usual state for research data management at the moment. We've had a number of projects in place that have developed various elements of the system. We have a metadata registry element, a storage provisioning element, a data governance project alongside a research policy project, alongside all kinds of wonderful things in terms of developing the capabilities in our repository to link data and publications. So there have been a number of projects over the last few years, and what we're really trying to do now is bed down this really coordinated BAU approach without it becoming static.
So what we don't want to see is a continuation of more and more projects; we'd like the business-as-usual elements that we have around UQ to feel like they can still be developed and grown and adapt to new requirements as they become available, but also perhaps making sure that any new systems that come online are integrated as far as possible, and that we are gradually working more and more towards a really clear and less complicated end state for our research community. And certainly, yep, I'd agree with Adrian that contextualizing the guide to the institution and ensuring the code is interpreted and translated for our researchers is important. And Elise mentioned that cultural change and making it feel natural to the researchers. We really want people, as they go about their everyday business of doing their research, to be doing that really good, best-practice data management without really noticing, if you know what I mean. So they're engaging with systems that guide them down the right path in a way that helps them to do the right thing and enables the right thing. And I think I've heard the word enabling quite a few times here. We're trying, obviously, to navigate and enable and push. And the system that we have collects a minimum viable metadata set, and essentially that's what forms our business intelligence, or our view into working research data at UQ, which of course is broader than all funded projects and broader than everything that has an ethics application. It's anything that people are working on and collaborating on. From that, of course, you can start to derive a DMP, and it's also quite often the same information you would publish at the end of a project. So we move on to the next slide.
You'll see essentially what the system does: you start with a very early record of a piece of work, a research activity, and it enables automatic storage allocation and automatic access control to that storage. But then also we've been developing, and this is where it starts to get quite complicated in terms of future projections, those archival workflows and the ability to publish either the actual data or a record that says the data exists, how to access it, and what licensing it exists under. So we've had this RDM system in place since 2017. We currently have over 9,000 research activity records, so those would be working research projects. The system links through to grants and ethics and so on, so we can see how many of them are funded, and we can see how many of them have ethics applications. We can see where people are collaborating across the institution, which units are collaborating with whom, and that kind of thing, which makes for quite an interesting data set. In terms of actual individual active unique users, we've got over 10,500 people engaging with the system, and a huge amount of data. I think that's a lot, I don't know, maybe some of you have bigger data sets, but yeah, 6,427 terabytes as of last week when I put these slides together; it could be more now, as data is always coming off those microscopes in droves, zillions of files. As well as those internal people, we also have over 1,000 external non-UQ collaborators from over 100 institutions around the world that log in and access. So we know who they are; they don't get given a UQ login, they log in through their AAF credentials or through some other international credentialing. So we get that real record of who they are, when they've had access to the data, and the version that they had access to, for example. And this was just to show you that all the lines are going up.
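Helen's description, an early, lightweight record of a research activity from which storage allocation and access control are derived automatically, can be sketched roughly as below. The field names and provisioning logic here are assumptions for illustration, not UQ's actual schema or system.

```python
from dataclasses import dataclass, field

@dataclass
class ResearchActivity:
    """A hypothetical minimum-viable-metadata record for a research activity."""
    activity_id: str                  # persistent identifier for the activity
    title: str
    lead: str                         # institutional user id of the lead researcher
    collaborators: list[str] = field(default_factory=list)  # may include external ids

def provision_storage(activity: ResearchActivity) -> dict:
    """Derive a storage allocation and an access-control list from the record,
    so researchers never configure storage or permissions by hand."""
    return {
        "path": f"/research/{activity.activity_id}",
        "readwrite": [activity.lead, *activity.collaborators],
    }
```

The design point is that the same small record drives several downstream uses: provisioning here, but also, as Helen notes, deriving a data management plan and eventually the published record of the data.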
So when we start looking at future projections in terms of total number of records, there doesn't seem to be any slowing in growth, and there doesn't seem to be any slowdown in the total active unique collaborators either. Obviously with that movement of researchers, with people coming and going and HDRs starting, there are always new people coming in. So we're always getting that stream of new people, new collaborators, and there's always new work, new projects, new ideas happening. So it's only going in one direction. And also, I always like to look at the number of collaborators on a record. The intention was never for this to be someone who just sits there with a project that has only themselves on it. The idea was always to enable collaboration as a key part of this, and to also have that recording and provenance around who's accessed the data and at what stage, which obviously helps with that full audit trail from raw data all the way through to published data. But it's really lovely and heartening to see that way more records have more than one person on them than just one. So in terms of future projections, I could obviously sit here and talk about money and resources and the expense of storing more and more and more data. But I actually think the future for me is a bit more nuanced than that. If you think about libraries and books and the number of books in the world: if somebody sat there and went, oh yeah, well, every time someone writes a book, we're just gonna have to buy more and more bookshelves for our libraries, we'd all just become institutions that were 100% library. Whereas what you see is them curating and presenting some of the books in the high-use areas, keeping some books on site and sending some books out to the warehouse, which is what we do here at UQ.
So we end up with real careful curation of new things as well: making sure we talk to the academics about what we should have on the shelves as new things become available. The curation element becomes more and more important, and I think that's something that we really need to engage expertise on. And I heard yesterday, in that session from the ARC, them talking about the researchers' responsibility around knowing what needs keeping and for how long. But from an institutional perspective, I'm really interested to know how much of that complexity we can remove for people, and how much of it we can do in an automatic way if we know certain qualities about the data, for example. So we've really been thinking about that, designing around that, working out what the next steps are in terms of navigating as much of that as possible. And I do think having that idea of a research activity and that curation by project helps, to some extent. People are a bit ephemeral; they come and go. So somebody will come, join the project and then leave UQ; they get a better offer somewhere else, so they leave altogether. Or an HDR student is contributing to part of something bigger for a while and then they've gone. So in terms of curation, it is good to say, yes, the researcher is the one who will know. They're the person who created that information or designed the study, for example. They are the ones who've analyzed the raw information. They're the ones who know which piece of it is the bit that actually evidences the outcomes that they're claiming or the conclusions that they're coming to. But from an institutional perspective, they do come and go.
So again, it's that long-term sustainability: making sure we have persistent identifiers for people, for the research activities, for the projects, for the equipment they're using at the time, for the publications, and that we can link as much of that in the background as possible, but without oversimplifying it to the point where excessive resource is required, without oversimplifying it and saying, right, well, we'll just keep everything forever. So without getting to that point of "it's fine, we'll just keep everything forever", how do we start to make those really clever decisions in a systemic way, so that we're not always having to go track down that researcher who's left and gone back to Europe? So we do need to allow the system to evolve, and this is something that we're working hard on at UQ. Like I said, we're transitioning out of a project phase and into business as usual, but without letting that business as usual become static: letting it remain something that still has a researcher reference group, still has the right people in the room to make those difficult decisions, and keeps allowing this system that we have to change, to meet new use cases, and to stay relevant. Because if it isn't relevant and there's something else out there that's easier to use and easier to engage with, researchers will vote with their feet and they'll go use something else. So at the moment we know they're using it; we've got to stay relevant, we've got to stay ahead of the game, or they'll go store their data somewhere else and we'll lose that visibility of the data. And in terms of 20 years from now, I have great confidence it's all going to be fine; we're all just going to be looking back on this time with fondness and laughing at each other: do you remember that time when we were all trying to work through these complicated bits?
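Helen's question, how much of the retention decision can be made systemically from known qualities of the data rather than by chasing a departed researcher, suggests encoding retention rules against the metadata the system already holds. The rules and periods below are invented examples for illustration only, not actual policy at UQ or anywhere else.

```python
def retention_years(has_ethics_approval: bool, underpins_publication: bool) -> int:
    """Toy retention rule: pick a minimum retention period from simple,
    machine-knowable facts about a research activity. Real policy would
    involve many more inputs (discipline, funder, data type, consent terms)."""
    if has_ethics_approval:
        return 15   # example: data from human research kept longest
    if underpins_publication:
        return 5    # example: data evidencing published findings
    return 1        # example: working data with no downstream claims
```

Because the inputs here (ethics linkage, publication linkage) are exactly the things Helen says the system already records against each research activity, a rule like this could run without asking the researcher anything at all.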
Just joking, I know it's kind of hard to look 20 years ahead, but we're often asked to retain data for longer than that, and in reality, what the scholarly publishing ecosystem might look like in 20 years is anyone's guess. It could be that journals don't exist in the format they exist in right now, or that people publish or present research outputs in a different way, more directed through translation pathways or more quickly into policy, for example. So it could be that research outputs, or even research institutions, look a little bit different in 20 years. I think what I'm trying to say is that as we navigate this, and as we enable things like the code implementation and best practice and reproducible, responsible research, we're going to have to remain flexible. We're in a period of rapid change, absolutely rapid change. We're also in a period of limited resources, and in a period where our research community are under pressure: time pressure, as well as pressure to be the best HDR supervisor, the best researcher, publishing in the best journals. They're under pressures, some of which align and some of which don't. And I think the more we can lift the value statements around valuing research data, and the more we can have the leadership signaling from the top that they value it, which is what we're doing here at UQ with the project champions, we have to do that. Sorry, Helen, sorry. Could I ask you to wrap up? Because we also need some time for the breakout session. So thanks. The last slide was not our attempt to foresee the future, but just to enable it. So good luck with the enabling. Thanks, Keith. Thanks, Helen. And sorry to cut you short there.
But thanks, that was really interesting, and it was great to have all of those presentations showing different aspects of what it means to implement the code and the guide, and all the different pieces of the puzzle that need to be in place. Now, I saw a whole range of interesting questions in the chat, and I hope that some of them have at least already been answered there by the speakers. We'll definitely capture all those questions and address them offline, and provide the whole list of answers after the session along with the recording. I think that's probably a more efficient way to address the questions and make sure we can progress with this workshop, because otherwise we'll probably spend quite a lot of time just looking at the questions. So the five topics you...