First I'd like to welcome everyone to the webinar today, particularly our three presenters. My name is Geri Ryder. I work with the capabilities team at EMS. The purpose of the session today is really an opportunity to share some experiences and resources around data management training for researchers and students. Just some logistics before we get going. At the moment all attendees are muted, but we can unmute those with microphones later on for questions. One of the reasons we mute is because the webinar is being recorded, and muting means we don't get a lot of background noise in the recording. The questions and any discussion will be edited out before the recording goes up on YouTube and is distributed. If you have questions, please put them in the question pod, or just indicate there that you have a question, and we'll come to the questions later in the session. What we'll do is have a short time for questions between speakers, and then, depending on how we go for time, we may have some time at the end for additional questions as well. But I know each of our presenters today is happy to talk to people offline as well if you have specific questions. So we'll get started today. We've got three speakers. The first speaker is Belinda Weaver. And Belinda, I'll just hand over the controls to you.

When we think about data management, we're also thinking a little bit about why we're talking about it. What is it about the world that has changed, such that this has now come to the top of the charts as something we need to do? I thought it might be nice just to step into this a bit, because it really goes very much towards what it is we're trying to do with people, how we're trying to support them. So one of the things I did when I first started working in this area was to look at a whole range of materials that were out there.
And The Fourth Paradigm was one of the things I thought was really interesting: that we started off with Archimedes in his bath noticing the water coming out, and that was the beginnings of science. Then people started to theorise about science. We developed computational science, where people were able to do simulations and so on. But we're into this giant era now of data exploration, where the data is so enormous that it cannot be processed by a human: computers are needed to do the analysis and to manage the data for people. And in that kind of world we really, really need good management of data for people even to be able to do the job of science. So that to me was a real breakthrough moment as well. I hope I'm on the right slide now, trying to get onto this one. I found this really interesting talk online, something you can probably still listen to. This scientist, Geoffrey Boulton, was saying that when he was a young scientist starting out he might do five or six experiments, and he'd be able to write those up in a paper. But it's got to the point now where the results coming back from scientific experiments are not just six or ten; there are thousands of experiments coming back. And it's impossible to shoehorn all of the work that comes out of that research into a published paper. So to some extent the published paper, which is of course the Holy Grail for scientists and researchers, is really now more of an advertisement, and the science is actually underneath. If people don't have access to the data, they cannot understand the science. So just as we've had a great push in the last few years for open access around publications, I think we're going to see a greater push, starting to get underway now, for open data, because if people don't have access to the data they're not going to be able to see the research, understand the research, use the research. And we've seen the growth of data-only journals.
These are the game changers as far as I see it, and I think this really impacts our role in trying to support people with research data management. Simply, we've got these enormous projects now with huge amounts of data coming in. We're answering bigger questions because we've got the ability to do that. There's a huge number of data sets out there already, and it's easy to share those with people because we've got tools that enable us to collaborate. We've got new tools so we can visualise data; we can actually mine data. We can get into that five-million-book corpus that Google Books has produced and find out about the use of words and so on. So there are these new tools out there. We've seen the growth of crowdsourcing, where people like getting involved in projects like Galaxy Zoo, identifying the kinds of galaxies that are out there. We've seen crowdsourcing to help us correct the Australian newspapers in digital form. So there are all of these new things coming on board that people are interested in and finding out about, and they really impact what we do. There's the expectation that data will be open, and there's also the code of conduct. Obviously all of us need to understand the code of conduct, because it's very much about the fact that data has to be managed. It's the responsibility of the researcher to manage data, but it's also the responsibility of the researcher to get that data out there, to disseminate what they found out in their research. But to do that, it's very important that they have the skills. It's one thing to beat people over the head with a stick and say you must do this, you must do that, but equally it has to be easy for them to comply. So the government put quite a lot of work into Research Skills for an Innovative Future to try to identify and define the kinds of skills people are going to need if they're going to be able to do this work.
Three of the things I found really helpful in trying to understand how the world has changed in particular disciplines were these three reports from the Research Information Network in the UK. While they were looking at case studies of researchers in the UK and not in Australia, it's very useful, if you're trying to get a handle on how things have changed here, just to read through those case studies, and I think a lot of what is relevant for the UK is also relevant to us. In the humanities, for example, there's very little take-up of tools that might be out there to do digital humanities. It's not that people aren't interested, but they're rather frightened that if they start going down this road there won't be anyone to help them, that they won't have anyone to ask. So even though they'd like to get into this area, there's a bit of a gap there. In the sciences it's a bit of a different story. They're probably not as frightened of tools, but they are unwilling to take them up if they are not going to be supported. Certainly there's a whole range of new roles coming into the life sciences: statisticians, people who can model data, who can curate data, bioinformaticians and so on. And they're very interested in people supporting them with that work, as long as you're close to where they are. So they don't want the librarian sitting in the library doing this; they'd actually like that person physically based in their team. In the physical sciences it's a bit of a different story again: we expect these people to be very good with tools, but apparently there's a very poor understanding of how to search for information in this area, so that's a place where there could be a role for us. And they certainly need help with metadata. They're creating a lot of data sets and they need people who might be able to help them work with those.
Another thing that was really helpful to me in understanding the changing world was a big exhibition from the British Library. They've taken it down now, so it's no longer available, but you can find it on the Internet Archive and watch some of the interviews with people from different disciplines. One of the things that came up as a thread all the time was that they saw the whole business of supporting e-research as really being about teamwork, about multi-disciplinary teams: you might have a software engineer who can design something, but you need someone who can build a database; you might need an artist to help you visualise how you're going to present the data, or to build a kind of virtual reality world if that's the way you're going to go; or you might need someone to crunch numbers. There's a whole range of skills that are needed. And one of the things I thought about that is: we don't have to do everything, but there certainly are roles that we could fulfil, and we definitely have a place. This is my academic-in-the-pinball-machine slide, where I'm trying to explain that it's not always easy for an academic or researcher to find the help they need. Certainly when I started out trying to build support services in data management, I felt there were quite a number of people in this position: they've come on board, they want to build a database, or they're trying to share the data they've got, or they want to do a longitudinal study but they need help with designing the way they collect data, or with surveys and so on. Where would that person go for help? Obviously people turn to their colleagues, or they might turn to local IT, but is that necessarily the best place to go for advice? Maybe those people aren't up to date. If they go to ITS, they're a little bit focused on workstation support or keeping the network going.
They're not necessarily focused on research computing, which is what that person really needs. We do have a new research unit here, the Research Computing Centre, but at the moment it's a little more focused on high performance computing and those very high volume users, not so much on the long tail, or the people who are a little bit timid about what they want to do, not quite sure, haven't really got their act together. QCIF, obviously, whom I work for, can help, but we have to be invited to come on board. And basically I just felt that people were in this pinball machine where they'd get sent from pillar to post but didn't really get any kind of useful, joined-up advice, and it meant that sometimes they were a little bit frustrated. I came on board with the library about two years ago now to try to build a service to support data management. When I came, it was just me, there were no other people, but the idea was to work out what the library wanted to do. So it was very much about scoping a service, working out if there was a role for our UQ eSpace repository to play in data management or not, but generally to have a bit of a look around and figure out what was needed and what we might be able to do. One of the ways I started to dig for information was a survey with RHD students. They were coming along to a lot of our events, and it was quick to get them to run through a little survey about how they were managing the data they had. It enabled us to start having that conversation with people: well, you are supposed to manage data, and there is the code, and have you heard about the code, and so on. The university as a whole at this point had no policy, so there wasn't anything we could use to say you must do this. We were just trying to raise awareness. We did run another survey as well, but I might talk about that a bit later on.
We had no real support materials for data management. We had nothing on our website to say we were interested in this work. So part of what I started to do first and foremost was to build that material. We had a new unit in the library called the Scholarly Publishing and Digitisation Service, and under that umbrella were data management, scholarly publishing, and bibliometrics, being able to help people with metrics for grants and so on. So we decided our research support pages should encompass all three of those things. One of the things I built onto the page was a template for a data management plan, and a checklist for a plan as well, so that people could work through the checklist before they started filling in the plan template. The other tool I developed was a data assets inventory. I think I copied this from ones I found online, but it meant that people in schools who were starting to worry about the data they had, and wanted to get a picture of it, could have a tool to help them record that. The main way we started to disseminate information was through fact sheets. We developed 20 for data management; I think there are nine for metrics now and 15 on scholarly publishing. This is what the website looks like, the data management part of it anyway. It tells them about the code and links through to it, and it just gives them the downloads: if they want the checklist, the plan template, the data assets inventory. We also have the data management questionnaire that we ran online, because we're always happy to get more feedback, and usually that was a way of finding people who said, yes, I would like more information, I'd like some help. Then we've got their details captured and we would follow up with those people. So we've left that online, and people do still fill it in and give the library information that way. I'm no longer there, of course.
The other thing I did was set up an e-scholarship blog, which was really to discuss all of those issues as well. When I was training staff within the library to do the job of going out and talking to people about data management, we covered a whole lot of what needs to be included in a plan, because they needed to be able to talk through all of these particular issues. But every single one of those issues was covered in a fact sheet, so that if they needed to talk to people about data formats, or version control, or how to restore data after you've backed it up, there was a fact sheet to go with that. The way I trained people was meant to be very non-threatening. I felt it was very important for people to move along a particular continuum, so that they didn't feel they had to be an expert within five minutes; some people are much more ready to take on this work than others, and I felt it was important that people move along at their own pace. So I developed this thing I call GRASP, which is about the idea that it is a continuum and people will go at a different pace. Within a year or so we'd like to have moved from the gauge stage, where we're scoping what is needed, to a position of being able to advise and support people, and hopefully get to the gold standard of what I really felt we were aiming for, which is to partner. So all of those kinds of roles that came out of understanding what it means to support research, we would be partnering in. At the early stages, the referral stage, it would be more about raising awareness, running surveys, maybe conducting data interviews and getting material onto Research Data Australia, and just starting to develop those basic materials to enable us to offer checklists and templates and so on. At the advice and support stage, we'd be trying to help people with grants, with scoping their data, maybe helping them write a plan.
If they're going for a funding application, they need to have a plan, so help them with that; maybe give a bit of advice about data that might be in an obsolete format, or maybe help with specific tools and services as well. At the gold standard partner point, I would really hope that we might be helping with methodologies for, say, a crowdsourced project, or digitising material for people, maybe an online transcription project of non-digital material, partnering on bids and so on, and maybe embedding staff, because I felt there was a role for that as well. Probably where we're at now is that we have these specialists at the top of the tree, the people who are the team leaders in those three areas; then the subject experts, the people who came on secondment; and then most librarians, who just have a working knowledge but not a great knowledge. In terms of the logistics of how all of this rolled out, we offered part-time secondments to librarians who were interested in the area of data management and building this service. They came on internal secondments, and then they helped me develop the work I was doing and helped train staff to do the outreach. We also seconded one staff member to QCIF as a research analyst, believing that that person would then bring those skills back to the library. I had a role on UQ's research data management policy group; that group met and we did quite a lot of work in trying to nut out what the policy would be. We contributed to Seeding the Commons, and in the course of doing that data management policy we surveyed 200-plus staff about their data management habits, so we were able to present some hard evidence that we really had a bit of an issue, which was really useful.
We developed training that we offered to the grad school during research week and things like that, but we also got training into the UQ staff development program, because we thought that would legitimise what we were doing, so we developed a data management class and a data sharing class. I developed some branch classes with generic content that people could run, such as a class on collaboration tools and a class on what is data management, just so people could start to roll that material out and get it more widely known. And we got the research office to agree that grant recipients would be told they should have a data management plan; I thought that was quite important. The other thing, really, was what I call building the web, and this is my last slide. If you're going to stop people from feeling like the pinball, you've got to understand all of the people who touch research and might be involved, some peripherally or crucially, in data management, make an alliance of all those people, and get them all to see that we need to work together to tell a single story to the researchers who are trying to do this work. So the library, with an established service culture, was seen as a place where people can come in the front door and ask a question. We can help with data management plans, we can help with data descriptions and metadata, but on the whole we're probably going to be referring people on to these other units for things like ethical advice, or storage, or help with patents and so on. But if we didn't build that network, then we would really be leaving people to the mercy of the pinball machine, and I thought that was a bad way to go. Okay, that's all I really have to say. As I say here, I'm happy to chat. If I've got any material that you might like to repurpose, I'm happy to share it, and if you're in Queensland I'm happy to help you get started. And that's the end of me. Thanks so much, Belinda.
There were some great ideas there for developing an approach to training and also building up your own capability to undertake that sort of training. There are no questions in the question pod at the moment. Does anybody have a question, or shall we move straight on to Anna and deal with questions later? If you have a question, please put it in the question pod now. Nothing coming through at the moment, so thanks, Belinda. We may get some come through as we go. So I'll hand the presenter role over now to Anna Shadbolt, who's our second presenter today, and Anna is from the University of Melbourne. So Anna, I think you should now have control.

Hello everyone, I'm Anna Shadbolt at Melbourne, and I'm going to go through some of the initiatives we've had, some of the things we've learnt, and some of the things we hope to do in the following year. So, the why and the how: similar to what Belinda's been saying, there are a lot of reasons why we want to do this, but the training piece about delivering services to support research data management is both to build capability and to change culture, to find new ways of doing things that people have been doing for a long time. That's often where the resistance is: there may not be clarity around why practices should change, and in some cases changing practices can impact on current workflows. It's also an opportunity to learn about what is common practice amongst your clientele, the researchers, while also teaching what's good practice. Using training as an opportunity to collect intel about what's going on is a definite bonus, and it's definitely worth documenting when you train. We have training that targets our research higher degree students, whom we refer to as our graduate researchers, and we have training that focuses on staff. Even though students can attend staff training, generally the graduate researcher training is just for students.
So basically we have two key focuses: one is around information, awareness raising and outreach, and the other is around training that currently exists and how to integrate it within accredited programs, since if it actually counts towards study and has assessment tasks, etc., then you have a captive audience. I can tell you about a couple of initiatives that we've had here. We'll start with our Up Skills program. This has been going for quite some time; it has been reported on, and you can look at documentation from Starbackers 2009 and eResearch Australasia. It grew out of our e-research director, Leon Sterling at the time, being keen to get a breadth of e-research knowledge shown to students so that they would be better able to be consumers of new initiatives in technology to help them with their research activities, believing again, since there is a culture change component, that the earlier we can start the better. As for the way Up Skills works, it's usually a one-and-a-half to two-hour workshop where we tend to get about 20 to 30 RHDs coming along, generally masters by research and PhD students, though any postgraduate student can attend. In the research data management space we have separated out the sciences from the humanities and social sciences, although that's not strictly adhered to: if people want to come along to the science or the humanities session even though they belong to the other, that's not restricted. They're offered by a range of people; this year VeRSI and ANDS actually delivered our program to the science group, and the eScholarship Research Centre, which is a digital humanities and social science research centre within the library here, delivered the one for the humanities. These documents are available if people want to have a look at what they do.
We've also done training around managing digital assets, so that if students are doing research where they have to collect images and they're not quite sure how to have metadata inserted and make sure the images are well documented in their work, then we assist with that, and also if they require digitisation as part of their thesis, along with research copyright, open access, legal frameworks, etc. In fact, I think they're also open to others; Monash students may also attend these programs if they want to. Then we come to programs, again for RHDs, that we have developed within accredited programs. The Melbourne model has specialist programs within the Master of Science, and it also has what we call breadth subjects, which are meant to cut across multiple disciplines within various degrees. eScience was one of these breadth subjects, and it was taken within the degrees just outlined there. The objectives were multiple, but research data management was a key component. There was a poster at last year's eResearch Australasia, but unfortunately that hasn't been archived; I'll have to try to produce a version that can be a PDF so you can have a look at it. Those were the objectives; the students' assessment tasks included developing a data management plan for their own project and also using tools for visualisation, which was group work, though the data management plan was individual. The sorts of responses we were given are shown here, and interestingly enough, in hindsight, I quite like the middle one, where they're saying that despite how boring some of the bits were (because with these breadth subjects they must take some sort of elective, and in some cases they begrudgingly took this one), they were surprised at how useful and interesting it was. That was pleasing to see. The actual subject was taught by our information systems people within the science faculty.
The other new initiative, which started this year, is quite fresh: it's only being trialled this year and will obviously undergo a significant review. It's working on the teaching of digital methods in the arts, which covers the social sciences and the humanities, and it sits within the arts faculty PhD program. As you can see, it's twelve hours, basically six two-hour block workshops around particular introductory areas and then some more specific tool utilisation and skill development, depending on the type of projects people are doing in their work. It was spearheaded by people whom you may know: Craig Bellamy from VeRSI, Nick Thieberger, who is a linguist, Gavan McCarthy, Andy May and Michael Arnold from the arts faculty. These people have always considered this area important, and again there is an assignment in which students look at what digital methods were employed in their research and produce an assessable piece of work. Some of the initial feedback I got from Gavan was that he's really quite surprised that at this stage there are still very limited skills in digital methodologies amongst students, so it's probably worth considering going back even to the undergraduate level to start developing strengths. Even though we have digital natives coming through, in terms of some of the tools that are around in digital methodologies in the research there's not a lot of exposure, and so that's a potential new area that may emerge from this. The other main way I'm involved in delivering training is around outreach, information and awareness, which was very much part of the ANDS projects we participated in, though we had already started in this area of looking at our policies and how we get the message across.
The general recipe for these forums is that we usually have one key thing that's a bit of a new initiative or something a bit different. One of them earlier this year, for example, was the change in our privacy policy; we did that as a way of bringing people in. Then we go through the regular format: why is it important, some tips around that, some ideas about tools we may have that would help, and then where people can get services. We've also been bearing in mind that this is very superficial, very broad brush. It gets a very broad cross-section of the university, so it's very difficult to have targeted content because of the unknown of who's going to be coming along. We often get between 40 and 75 people coming to these sorts of things, so we get a good opportunity to spread the message, and then we tend to get at least five or six referrals coming out of that, through the types of questions people ask. I've put a link in there where you can go and have a look. Unless they have fallen off, we record these sessions, and we also have the slides freely available online if you want to have a look.
What I believe is probably the better way to go, if you have the resources (of course this is resource intensive), is to actually try to squeeze your way into research and focus groups, that is, department or faculty forums, generally by invitation. There you would again have this formula, where you try to get one of the senior researchers in that faculty to share their views on how the benefits of data management have helped them in their own work, also around data sharing and archiving; in the example we've used, it's people who were involved in some of those projects explaining how being involved facilitated their work. We always have Melbourne Research talking about good (that's an interesting word) proactive practice, I guess, in that we want to push the importance of research integrity, not from a punitive sense but about doing good research, having documentation, etc., as well as the compliance side around policy and funder expectations. And we always have a library and ITS component where we outline opportunities for collaboration and also what support services we have. The other area we've been looking at is working directly with library liaison teams, and in fact we've used Belinda's model that she outlined with our library teams as an introduction to research data management, again trying to help them find their own comfort zone in terms of where they want to come in on this work; this will grow as researchers create more demands. We've also done introductions to research initiatives, infrastructure and services from our technical teams, and also around asset management and digitisation. We see this as an opportunity for us to get referrals, because through an introductory awareness session, when librarians speak with their clients they're able to point them our way and give us a referral if there are needs in this area. As I was leading on to, then, another key way that we do training is through the service delivery
itself. When we get referrals, often through our records services team, which is now based within the university secretary's office, we are able to engage directly with our colleagues in various groups. Sometimes it's at a faculty level, where they have all this material, and it's not only digital but physical data; we had container loads of soil samples, so this is also part of data management, and it's about good record keeping. It gives us opportunities to help faculties or departments develop a process to improve their own practices on the ground. We also find that through projects themselves you have something which actually gives a payoff to researchers, and we're also collecting a knowledge base from the process of gathering business requirements for projects. That's what we've been doing with our digital preservation initiative, where we are using various groups to review the types of content they have and what sort of workflows they need, etc., to develop our digital preservation strategy and roadmap. In 2013 we will be offering a data curation service, and that will again offer direct support to researchers who are interested in learning more about data curation and archiving. The final thing is that next year we are going to run an immersive training program for support personnel. This is a collaboration with the UK on a project around immersing, or training, our support people in the key areas of research data management and curation. While it's aimed particularly at support people, we do believe the outputs will be applicable to researchers, but that's not going to be our initial pilot. It will be run in two universities, our own and one in the UK. We're very keen to see what will happen; it will run for all of next year, and we will definitely be sharing all the outputs freely with people who are interested. And that is me, that's the end of my presentation.

Thanks, Anna, that's
terrific. That covered an awful lot of ground, from, you know, the more general programs around raising awareness to the more focused programs, accredited programs as well, and we'll be very interested to watch how you progress next year with your capability building, because that's certainly an area we're interested in. We might move straight on to Sam. There are only one or two questions in the question box, so we might hold them off until after Sam has presented. So Sam, I'll just hand over to you. Well, hi everyone. A very brief overview of the Skills Development Programme at Monash will be the starting point for this, but then I'm also just going to focus on some very practical tips for face-to-face sessions, which a lot of us are running or planning to run in future, and then I just wanted to spend a little bit of time at the end giving you a very idiosyncratic perspective on what some of the future directions might be as I see them. In terms of what happened at Monash during my time there, we did build up a fairly significant program, but I guess what I wanted to say was that it did build up over time, and by the time it started there were several years of existing work that had already been put into building governance structures and policy frameworks and putting infrastructure in place, so this maybe wasn't as new to people at Monash as it might be at some other places. We did start small and we built up over time, which I think is important to know: you don't have to try to do everything all at once. You'll see from looking at the screen that as well as focusing on HDR students we did also focus on supervisors, and that came later, partly due to feedback we were getting from the HDR students, who were saying it's all very well that you're telling us this stuff while we're in this training course, but that's not the message that I'm getting from my supervisor. So we really wanted to make sure that that message was getting out
from the supervisors as well. We very deliberately decided to piggyback on existing programs where we could, and use the advertising channels and networks that were already available. That does mean, I guess, that you sometimes have to compromise on what you would like to deliver. It wouldn't, for example, have been my choice to get the last 15 minutes of a full-day seminar to talk about research data management to a big group of supervisors, but it took about two years of lobbying to get that slot in the first place, so I was happy to take it when it did come up. And just to note that all the copies of the presentations from Monash are available on the refreshed Monash Research Data Management website, and they're all available under an open licence, so you can use those as you will. In terms of the feedback on the HDR seminars, most people found them useful or extremely useful. The sorts of qualitative comments that we got back you can see on the screen there; I guess what I would point out is the one on the bottom right. Rebecca was talking last week about the importance of being fairly honest about what you know and what you don't know at the moment, and when I've been delivering this training I've always made it clear that I don't consider myself an expert in this area, that I'm still learning all the time and we're all learning together. Of course, the feedback wasn't totally 100% positive, and we did do a lot of work evaluating this from the library perspective, getting information literacy experts in, and the kinds of things we realised were issues were these: the learning objectives weren't as clear as they might have been; we weren't really addressing discipline-specific needs, and some people wished that we would, while on the other hand other people said they liked the cross-disciplinary nature of it, so you're never going to please everyone. People wanted the sessions to be more interactive and hands-on, and
I suspect everyone gets that feedback. There were mixed messages: people felt they weren't getting a consistent message in this course compared with what they were getting from other places. And there were also a lot of logistical problems from the trainer perspective, in particular just getting low numbers; when you've got a five-campus university it can be hard to get everyone together in the numbers that make it sufficient to run a workshop in the first place. And we did have the perennial problem of people booking and then not turning up; sometimes 50% of people would be no-shows, and that's always hard on the presenters. We did get about 150 HDR students through this in those three years, which sounds great, but then when you look at the fact that there are nearly 4,000 PhD students at Monash overall, you do start questioning whether this is really the best way to do it, and I'll talk a bit about that later. I just wanted to move on to some really practical things that I have learned in my time delivering these face-to-face classes, and I'll apologise if they seem really simplistic, but they were the things that I found really made a big difference in how effective the classes were. So the thing that I'm showing you now really works best in a longer session; you probably need an hour-and-a-half or two-hour session to do this, and it has to be a session with not too many people. What I would do at the start of the class was ask people to introduce themselves, say which department they were from, and give a very brief description of their research topic. Then a little bit later, maybe about 5 or 10 minutes into the session, after I'd given them an overview of the different kinds of research data that people might be generating, I asked 2 or 3 people in the class to describe their research in a bit more detail and to start drilling into the kinds of data that they thought they might generate as part of their project, and as they were doing that I would kind of probe a little
bit. So if they said, I'm going to do a survey, I would say, will it be online or print or both, and what are you going to do with the results, will you put them in a spreadsheet, how are you going to analyse them? If they said interviews, I'd ask them if they were recording, would it be audio or video, what device were they going to use to record, would they do transcripts, that kind of thing. What that meant, after I had done that little exercise, which really only took maybe 5 minutes, was that I had these real-life examples up on the board that I could come back to throughout the class, and I just found this really, really helpful. When I got to the ethics part I could go back to a real example: someone in the class was likely to have a particular kind of problem that I wanted to talk about, someone was going to be getting data from somewhere else, so I could talk about their particular third-party IP issue, and so on. Now, it does mean being able to think on your feet, and that's not for everyone, but I found it really helpful. The second thing I found quite helpful, which I didn't do until very late in those 3 years, was actually to give people a visual overview of the range of things that they were going to have to think about in order to be able to manage their data effectively, and I guess I like that this shows that a lot of the things they need to think about aren't technical things. I think they came to the class thinking it was all going to be about storage, and they left knowing that it wasn't just about storage. What I liked about having this one-slide overview was that you could start to draw out the interconnectedness between the different decision points, and the one that I quite often used was to talk about the kinds of consent processes that they would describe in their ethics application as being really important in terms of the effect they could have on how they could publish and disseminate that data later. Tip 3: stories are always good. So I increasingly used a case
study kind of approach during my time there. These were a couple that we used in that supervisor accreditation program that I mentioned earlier. This probably goes back to what Anna was talking about, having your champions that you can get nice quotes from, or preferably wheel out in person. You're not always going to be able to get the associate dean of research of your faculty to come along to your training courses, but these stories can be really powerful. Just before I left Monash I did a really great training session for the subject librarians, and I used scenarios in it, and I think this is a really important point for those of you who haven't built up those case studies from real life: you don't have to have real-life examples. You can actually cobble together scenarios or personas that are based on what you know the kinds of issues are going to be. This session went really well, and I think that was partly because the librarians who attended really realised that you could identify that something was problematic without necessarily knowing what infrastructure or services or expertise might be available to address the problem. Even if you don't know all those things, it's pretty easy, and in a lot of cases just common sense, to be able to spot the kinds of potential issues that might come up, and I think that really built their confidence. We're all in the data management space, and I think we forget that most people don't get involved in research because they're passionate about data; they're actually passionate about their research topic. We love life cycles, and we often work through data management in a fairly chronological kind of way, and I guess one of the things that I've found really helpful, and this works not just with students but also with researchers, is to talk to people about what's going to happen when their project is finished: who do you want to get your work out to, who will be
interested, what kinds of channels have they got available? Often those channels are not going to be expensive subscription journals and databases. Once you've got them thinking about their audience, then you can go back and ask them to think about what they need to do in terms of decision making and research practice to achieve those goals later, and what role their research data can play in that approach to dissemination. So those were the top tips. I'm going to very briefly move on to talk about some future directions. I guess I currently see data management falling down a bit of a structural gap between research and education. Our skills programs have largely come out of funding agency requirements, like the Code and the best practice guidelines coming from ANDS, and I think we really need to be aligning our data skills programs much more closely with teaching and learning drivers: the Australian Qualifications Framework, whatever the graduate attributes for your institution are, and government strategies like the one that Belinda mentioned. I mean, I don't know that many of us are really framing our research data management classes in the context of the future needs of the research workforce, and I think we'd have a better chance of getting resourced if we could do that. The second thing: most of us have probably been fairly focused on delivering sessions face to face in the classroom; it's been talking heads and slide shows, and I think we know that there are more varied ways of developing and delivering content that could be more effective. We've had feedback that suggests that a more discipline-specific and embedded approach might be good, and that has definitely been behind some GESC training projects, and I mentioned the work in their arts faculty, which would fit into that discipline-focused model. I really wonder whether that is sustainable, though, and I've been wondering how a more methods- or topic-based approach would work. So if you think of the silos in this diagram
as being the disciplines, I'm trying to think of what topics there are that would cut across disciplines. So, for example, something on managing the data generated by surveys could be applicable across a wide range of faculties while still meeting the need for something more targeted to specific kinds of research. I've been observing for a while that despite the fact that we're all evangelists for sharing and reuse, we actually don't make it very easy for other people to share our work, and I think one of the ways that would become easier would be to more clearly distinguish between the content that's specific to our institution, for example how you apply to get data storage at Monash, and content that covers generic concepts and could be really easily repurposed. I think we might be better off spending time developing smaller and more targeted chunks of content that maybe address just one or two learning objectives and that could be combined in different contexts. The UK Data Archive is one organisation that's gone down this path, and if you look at their create-and-manage-data training resources, you see not just that they've put a slide show up, but that they also have various quizzes separated out into questions and answers, so you could imagine using those in a face-to-face class or repurposing them online. And finally, I guess I'm just a bit surprised that, given the collaborative nature of most of us who work in this space, we really don't yet have a collaborative approach to developing and delivering this material. I would personally like to see us move beyond sharing the final products of what we're doing to actually sharing the work required to build those products in the first place. I've just popped a few ideas up there, just my thoughts, but certainly the one at the top, sharing learning objects in a central repository with open licensing, is something that came up at a recent meeting of the Australian eResearch organisations, and I
think that's a great idea, but they would need to know that there would be support for it and that people would use such a repository in ways that would make it worth building. And that's it from me. I just wanted to finish with a thank you to the folk at Monash; obviously a lot of these materials have come from my work at Monash, and I'm not there anymore, but they're always very happy to share materials with the ANDS community.