Okay, well, welcome everybody. I can see the numbers coming in now. I want to start by saying that I live and work in Brisbane on the lands of the Turrbal and Yuggera people, where QUT is also located, and to acknowledge that these have always been places of teaching and learning, and that they are lands that were never ceded: they were Aboriginal lands in the past and are still Aboriginal lands today.

So thank you for coming to this session on research integrity. We've got three excellent speakers today, and I'm going to introduce them all. Shaun Khoo completed his PhD in behavioural neuroscience before moving to Canada for a postdoc. On returning to Australia, he took on the role of senior case manager at UNSW, where he investigates research integrity complaints and develops resources and materials to promote the responsible conduct of research. Paul Sue completed his PhD in dermatology at the University of Sydney. He's a research integrity and projects coordinator at Macquarie University, where he investigates research integrity complaints and delivers research integrity education and training. And Karen Olave-Encina completed her PhD in education at the University of Queensland, focusing on assessment and feedback in higher education. She is currently the education and training coordinator for research ethics and integrity at UQ, where she designs teaching experiences and online research integrity training. So what an excellent panel to talk about a topic that we should all really care about.

I'll hand over to Shaun. Oh, sorry, one last thing. If you're interested in research integrity and meta-research in general, we're having the next AIMOS conference in Brisbane in November. So please come along or join us online. I'll stop sharing now, Shaun, so you can start.

Thank you, Adrian. And can everyone see my screen? So thank you everyone for coming today to our discussion group on research integrity training. It's great to be here as part of the AIMOS-sponsored virtual symposium in the lead-up to Metascience 2023. Our goal here today is to gather insights on how we can design and deliver better researcher training in our institutions. We know that there are several areas where researchers want better or more training, and they want that training to help them keep up with practices and requirements that are always evolving and becoming more rigorous and more stringent. At the same time, as researchers ourselves, or people with backgrounds in research, we know just how overloaded researchers are today. And so that's why we want to talk to our colleagues about how we can deliver better training, improve training, and help researchers to stay out of trouble as well.

So today's session is going to be very interactive. It is more of a discussion group than us talking to you. I have seen already from the participants in this webinar that there are lots of people who probably know a lot more than me about how to do this. So we really are looking for your comments, questions and insights. Please raise your hand or pop your comment into the chat or Q&A. We'll start with our overall discussion questions, and if we want to, we'll get into some more specific scenarios as well. So without further ado, these are the main questions that we're interested in discussing today.
So the first of them is: how can we deliver training to overloaded researchers that is more appropriate to their needs? The second is: how can we engage senior researchers in research integrity training so that they can support their mentees as practices and requirements evolve? The third is: what training formats and approaches are most effective? And the fourth is: what is missing in currently available research integrity training?

So before we look at specific scenarios, does anyone have any comments or questions or insights they'd like to raise right at the get-go?

Shaun, I might use my prerogative as chair, just to say, on that first one, the key phrase in there for me is "overloaded researchers". So I'm wondering, if we could take the load off researchers, they might have more time for research integrity training. And there's the question of whether, if people are too busy for research integrity training, they're spending their time on the wrong things. I don't know what you think about that.

Yeah, absolutely. I think workload is a constant pressure for academics and researchers, from PhD and postdoc all the way up to full professor. It doesn't ever get easier as people go through, and it's a really difficult question because there's always more that researchers seem to be asked to do: stricter requirements, more paperwork, more admin. And I think it's really hard to find ways to take that load off researchers. I mean, one way is to better support professional staff, or to have more professional staff, who can take on that administrative load for researchers. But that runs into significant resourcing issues. So I don't have any easy answers to that one.

Yeah, and I'll probably second you there, Shaun. I think as institutions we should provide that support to researchers, whether through the professional staff pathway or by actually starting to use processes or technology to alleviate some of this admin burden and streamline processes. I think a lot of researchers do find that a lot of it is just red tape and they're forced to do it for the sake of doing it. And perhaps if institutions can assess the risk and delegate some of the lower-risk activities to support staff, and let researchers focus on what they do best, which is the research, maybe that can go a long way, instead of it being seen as the institution imposing these additional burdens and mandatory training and whatnot.

Yeah, and it doesn't even have to be related to research integrity either, Paul. I saw a presentation very recently about how adjusting an approval process meant that there was a huge reduction in the number of people who had to get approval to do a certain activity. And that just cuts out a whole lot of hoops that people have to jump through. I mean, sometimes you do need those approvals, because otherwise people spend money on things they're not supposed to, but where we can cut those things out, there are people on the professional staff side who want to get rid of as much of this red tape as possible.

Karen, did you have anything you wanted to add before we get into some of these questions that are popping up?

Oh, I totally agree. And I don't think that the load of work that researchers have is going to be reduced. Personally, I don't see that coming. We had a meeting the other day, and the researchers were saying that there are more requirements about completing training in general, from other areas also.
So it's not only about research integrity; they have to juggle different requirements. That's why we would like to get some ideas about what we can use to make our training more engaging and easier to digest for researchers who are very busy. Thanks.

So I'm just going to get into the first question, from Jason: I was wondering what the scope of integrity is. Is it avoiding fraud? Is it about open and reproducible research, or avoiding questionable research practices?

My first thought is both, and all of the above. I mean, you have on the one hand the negative side of research integrity, which is non-compliance, which is about things like data falsification, fabrication and plagiarism at the most serious end. And then you have all sorts of other things in there as well. In our office, anything can be a research integrity issue, from authorship disputes to misrepresenting your data, animal ethics, human ethics; these are all integrity issues. It's a very broad field, and as investigators we usually deal with things in the breach. But I think it is also about open and reproducible research and making things better: improving best practice and improving regular practice as well, so that it is even better than it is now. So I think there are two ends of it. I'm interested to hear what Paul and Karen think as well.

Yeah, and I completely agree. I think historically there's always been a focus on that negative aspect of research integrity, on breaches and research misconduct. And I think we need to shift the dial and focus on the positive, and essentially promote the idea that research integrity means research quality. It means trustworthy research. It means that we can improve society through our findings. It means reduced wastage when it comes to research funds. So I think if institutions start focusing on this side, they're going to get a lot more bang for their buck in terms of return on investment.

Yeah, and there have been different investigations showing that research integrity education and training that focuses on principles and values is more relevant or meaningful for people than training that focuses on the negatives. And I guess the problem sometimes is that because it is such a broad area, covering so many different things, it's sometimes hard to give a general idea. And that's one of our questions: should we focus on one specific area, or should we give an overview of everything?

So our next question is from Kathy: how can we get training into the curriculum in all empirical fields? Any thoughts?

I guess the only thing that I can think of is to use case studies or scenarios, to bring into the curriculum situations that are associated with the risks of people being involved in research integrity cases. Yeah.

I think the general principles and values of responsible research and research integrity apply to all fields. So maybe that can be incorporated as an introduction for all researchers at the university. It may be that, as Karen indicated, certain case studies relevant to specific disciplines are targeted to those cohorts to make it more relevant for those stakeholders.

I think there also is some training that's already embedded in the curricula.
I remember from my undergrad days learning about things like the Stanford Prison Experiment or Milgram's electric shock studies, things that would not be considered ethical today but were nevertheless classic studies in their field. So I think it really is something that requires a lot of cooperation to get into the curriculum, because you need the academics who are delivering these courses to actually include these things, or to invite research integrity staff like ourselves to come along to their sessions. So it's something that I think requires a lot of help from all sides.

So the next question is from Moira: workloads are often determined by funding patterns, and some funders explicitly fund professional development. For example, recently I saw this from the Wellcome Trust, but most do not. This is even more of an issue in non-profit organizations whose revenue is based mostly on grants.

Thanks, Moira, for that comment. I think that's a really good point, that funders need to provide funding and support, and not just pressure. I think it also ties into the next comment here from Kathy, which says: I wonder if funders of research can put pressure on universities to require training. So I think there are two ends for funders here, where funders are both responsible for stating requirements and for supporting some of these things. What do you guys think?

Yeah, I completely agree. And I think in the US they make it a requirement that at various stages of a person's fellowship they need to do a minimum number of, I think it was eight or 16, hours of training. So I think funders do play a key role here. And I guess that brings in the question of mandatory training. Apparently some of the literature indicates that if you make something mandatory, it's not as effective. But that being said, it seems like there's only a handful of people who are willing to do it of their own accord. So I think that's a tough one there as well.

I've just got Nick raising his hand here. Nick, would you like to comment? Sorry, Nick. Nick? We can't quite hear you. Nick, did you want to... I can't hear Nick. Can anyone else? No, I might have to move on and see if Nick can try again in a minute.

All right. So our next question is: in terms of incentives, has anyone had success promoting the idea that engaging with research integrity practices, for example having data management plans, will facilitate the research and publication process down the road?

I try. I've been up in Q&As talking about how having open data has helped me as a researcher, opened up collaborations for me, and helped me to publish my research more easily and publish more papers. I don't know how successful it is, though. Does anyone else have any experience with whether that works?

At Macquarie, we're sort of at the very early stages of that process. Eighteen months ago we rolled out a very comprehensive research data management framework at the university, where we've essentially made higher degree research students complete a data management plan as part of their confirmation of candidature. So we're still at the very early stages of that, and hopefully when it comes time for them to publish in the coming years we do see an increase. We're monitoring that space at the moment.

Do you know if those data management plans are good quality data management plans, Paul? Because I mean...
Yeah, so what the university's done is invest quite a lot in terms of resources when it comes to helping researchers with data management plans. We have a research data management plan team, and basically when researchers submit their DMPs, they get vetted by a data steward, who ensures that they meet a certain requirement and standard.

I think that's another example of how professional staff supporting students can really contribute a lot, because I've seen that if you just leave researchers on their own to do it, and no one's ever taught them how to do a data management plan or what needs to be in there, you end up with data management plans that are like, oh, I'm going to put it on a USB, or I will keep it on my laptop, and then...

Or they write something and never follow it.

Exactly. Exactly. And when it comes to the hard sections where you actually have to put in a bit of work, like the metadata and the different formats, it sort of gets put in the too-hard basket and gets left empty. So having a data steward provide that guidance has really been helpful in our scenario.

Any other comments on that one?

Yeah, very similar. We use examples of how things can go wrong with data, and the importance of a data management plan and its benefits. But we're not really sure how effective those examples and ideas are, and whether the people attending those trainings and hearing those examples are actually incorporating those ideas into their practices. That's always a question mark for us.

The next question is: from my experience in teaching research integrity, I don't see much focus on publication integrity and teaching students and ECRs how to recognize trustworthy papers in their disciplines. Could the panel comment on whether they're aware of this kind of training?

I think there is some of it embedded in labs and in the training of students and ECRs in specific research groups. I think a lot of it is self-motivated as well. I remember when I was just starting as a researcher, we had a lecture from a very eminent scientist who came and listed a whole bunch of papers that he thought should be thrown in the bin, and explained why. I thought that was a really useful lecture. I don't know how widespread that kind of approach is. Does anyone else have any comments on it?

Yeah, no, I agree with Jennifer. I don't think we really focus on publication integrity aspects and recognizing trustworthy papers, other than the fact that we try to teach our students and our researchers to use their critical thinking skills. But I think what's been really helpful of late are things like Retraction Watch and PubPeer and the blogs that identify issues with published papers. That in one sense highlights some of the issues that exist in the literature. But when it comes to specifically providing training on this aspect, I think that's an area we can definitely improve on.

In our case, we do not focus on recognizing trustworthy papers, because that belongs to academic integrity, and there is training about academic integrity that focuses on that and provides information associated with it. So it's another area with its own particular focus.

I think that's a good point, Karen, as well, because one thing that might set apart a trustworthy paper in one field would be completely irrelevant in another.
I mean, in the field where I did my PhD, the number of animals would matter, but it would have no bearing in, for example, a qualitative research study, perhaps. So it is really discipline-dependent, and I think it does require a lot from the academics. I remember it was the academics who were teaching courses, when I was an undergrad and PhD student, who delivered that kind of training, because that was their interest. So it wasn't systemic or structural, which I think is a weakness in trying to deliver it: it relies on academics who are interested in doing it. So how we get that is a lot more difficult.

The next one is: how can we avoid cherry-picking the data when the researchers get to choose which experimental data goes to publication and which does not? That's a hard one.

Yeah, again, the only comment that I have about that is just providing scenarios where this situation is reflected, and we, I mean the research integrity officers, make comments about that, and that it's not right, that you should not do that, those sorts of things. But how to avoid it, I don't have a response.

I think it's a hard one because, unless you're doing completely open notebook science, you have to choose which experiments or which data go into a paper. And sometimes data is just not worth putting in a paper. Like, if your whole protocol didn't work, there's no reason for you to publish it. But on the other hand, I can definitely understand. I think it has to be a culture thing.

Sure. Yeah, I completely agree. I think it's about embedding those principles and values of why we're doing the research in the first place, and not choosing only the data that matches your hypothesis. And maybe, actually, with the rise in AI and whatnot, highlighting the fact that your data can be investigated again in the future, and so if it isn't legit, then it could possibly go down the investigation pathway as well. So I think having both the positive approach, as well as instilling a little bit of fear that there might be scrutiny and criticism down the track, may assist in the process.

I've got a hand up here from Kathy. Kathy, would you like to say something?

Yeah, I think Adrian was saying that they need to be promoted to host to speak. That right? I think I am, but nothing seems to have... Can you hear me? Yes. Okay. Sorry, I can't turn on my video, I'm not sure why, but this is fantastic. I just wanted to react to the discussion about cherry-picking data. I'm wondering what the role is for training our students in how to pre-register studies, so that they have on record what they plan to look at ahead of time, before collecting data. Another approach is replications, so setting a norm in the field of replicating research. If the data are cherry-picked and a replicator tries to replicate but can't, that's one indication that the data were cherry-picked. Just a couple of thoughts. Thank you.

Thanks, Kathy. I think pre-registration is a really good point. I know that the Center for Open Science and OSF have a really helpful platform for pre-registration, and there are lots of other options out there for people to do pre-registration. I've never received training in pre-registration. I've done it once, and I just kind of did it because I thought it would be useful and interesting. Does anyone else have any other experiences with that, or know of any training around it?
My understanding was that it's more common in the clinical sciences to have your studies registered beforehand. But based on yesterday's talk on research waste, I think it's slowly expanding to the other disciplines. That being said, apparently there are also issues about gaming the system there as well: you could essentially get some pilot data and then put in what you intend to do afterwards. So I don't think it's necessarily a silver bullet either, but it's something that could also help.

Yeah, I think so. I think there are also times where it's not always applicable. I've had discussions with researchers and colleagues who have said, pre-registration is interesting, but my work is really exploratory, so I don't really feel like it's appropriate for me when I'm just probing different questions at different times without any serious plan. If you're trying to do something that's more confirmatory, like running a clinical trial, you have to pre-register that anyway, and then it's more appropriate. So it also depends on the discipline.

I guess the other comment about cherry-picking is that sometimes it also helps to have more eyes on the ground so that people are accountable. I guess it relates to their data being more open. If there's only a single person in the lab collecting the data, then there's an easier pathway to go down the more deviant side of things. Whereas if you've got multiple people in the lab, there's more transparency, and that sort of holds people to account.

I think that's a good point. It's easy to cheat if you're just on your own and you think no one's looking. But if someone's looking, it's a lot harder. I've heard of labs where, before a paper is sent for publication, people have to present it in front of the whole lab. And so I'm just going to let Angelina talk; I've just seen your hand up there. Would you like to say something, Angelina? Sorry, I just have to do a bit of work in the background to promote them to... Sorry. No worries.

Hey, can you hear me? Yes, great. I just wanted to comment on this latest point about having multiple people contributing data as a potential way to avoid fraud and cherry-picking. Well, actually, the point is that you have to present your data to your supervisor, normally the principal investigator, and then the principal investigator decides which data goes into the publication and which does not. And so we have this single point of failure, this actor who, if they are not playing responsibly and correctly, eventually can, well, do the cherry-picking. So I just wanted to say that it doesn't really help, because we have this single point of failure, and to ask whether somebody can think of other ways to improve the transparency and so on.

Yeah, fair point there, Angelina. I think there's still going to be that weak point in the PI or the CI determining what goes into the paper. I think maybe my comment related more to the initial data collection stage, where a student might be collecting the data and there's that temptation to massage the data to fit their hypothesis. I guess another point would be that when it comes to the actual publication stage, at least in the Australian context, each author on the paper does have a degree of responsibility for what's presented. So although the conversation might get awkward, there is the ability for researchers to express concerns about not putting in certain data or including other types of data.
So there are safeguards in that respect, but yeah, definitely open to any other ideas for how we could address this issue.

That's a really difficult question. And yeah, I think the supervisor as a single point of failure is a valid concern. But as Paul said, in the Australian framework we do hold all authors accountable. I know at UNSW, every single UNSW author is responsible for the whole paper. So we do hold everyone accountable.

I'm just going to move to the next question, because we do have quite a few in the queue. Jason asks: from your perspective, do we need more meta-research on research integrity? For instance, evidence on the effectiveness of training programs, research on the drivers of research integrity, or research on the benefits of research integrity, for example on public trust in science. Where, in your view, are the most serious gaps in the meta-research, if any? Would any of my colleagues like to take this one for starters?

Yeah, I definitely agree that we need to do more research on research integrity. We're speaking to an audience of researchers, and everything's based on data. So if we can actually use data to inform our decisions going forward when it comes to education and training, I think we're speaking the same language there. I know that, at least at Macquarie, when it comes to the research data management training that we provide, we do ask the participants to complete a pre- and a post-survey, asking them what they know or don't know, and then seeing what impact the training has had after its completion. So from that end, that also allows us to tailor future training to improve what is delivered to the participants.

I agree. I think we always need more research and always need more data. I've thought about this a bit, and I think one of the key issues in implementation is incentives. But in order to get the people who set the incentives to act, they might need more data. So it's always good to have more data to show that research using particular practices is more reliable, for example, or has greater impact, or does the kinds of things that you want, so that people start thinking, yes, we should implement these. Because the decision-makers are the people who are worried about things like funding and impact, and they might want to see more evidence, I think.

I really like the idea of meta-research on research integrity. I guess one of the complexities in our place is that we have people attending from different disciplines, so we need to be broad and try to make it engaging and relevant for everyone. And what is more difficult is measuring the impact: we really don't know whether what they acquire in the training is being applied in their practices or not.

I think that's a really great point, about whether you can see whether the training is working or not. And I think maybe some of those things don't necessarily end up published, because maybe they're just internal. Like, if you're running a training program, you evaluate its effectiveness, but maybe that doesn't get published. Is that the kind of data you'd be looking at, Karen?

It's more about the behaviors of researchers in relation to research integrity. For instance, they are going to publish a paper, and in the training we said that it's good to have a record of the conversations between researchers.
So everyone is on the same page and everyone knows who is responsible for what. But our question is, are they actually doing that? We have told them to do so. We have told them that that's good practice. But we wonder how much of that message is translating into the real practices of their research, and that's really, really hard to measure. That's perhaps how we should measure the impact of the training, and that could be just following up on their practices in the future, after attending the training.

Yeah, I agree. That is a really difficult thing to figure out. And I mean, you can do it; it just takes a lot of time and effort, which we don't always have enough of to do those things.

So the next question is from Nick: it seems likely that training might be seen to be more acceptable if people recognized how much bias or how many QRPs arise from unconscious versus conscious bias. Are there data out there on the relative importance of conscious versus unconscious bias?

I don't know the answer to this question. Does anyone else, anyone from the audience, have any expertise in conscious versus unconscious bias in QRPs or anything like that? Sorry, Nick. I'm sorry we don't seem to be able to answer that for you.

The next question is from Kathy: should we be doing more training on how to detect errors, detect questionable research practices, et cetera? We've seen some of this, but I'm guessing there is much more to do. I think that's related to an earlier question that we had about looking at trustworthy papers, and so I suspect that the answer here is similar. Any thoughts on that one?

Yes, I know that there are certain vendors these days that are developing trust markers and looking at what's published out there. But I think that's more looking at the positive aspects of papers, not necessarily questionable research practices. There are programs out there that are able to detect image manipulation, or statistical checks that could be used to see if data has been manufactured or fabricated, just based on the statistics. But I'm not aware of anything that we can put into practice on a day-to-day basis. And I guess the other question is whether or not there's real value in focusing on that aspect, as opposed to trying to promote the positive side and trying to instill that cultural change. But I don't know, I'll open it up to my colleagues on what their thoughts are.

I might just open it up to Jason as well from the floor. Jason, would you like to address this one?

Hi, can you hear me? Yes. Oh, this was about the conscious versus unconscious one; sorry, it's a bit late. That's okay. We do know from surveys of researchers that about 50% will self-report that they do use questionable research practices. So it seems like it might not be conscious, in the sense that they don't realize that it's problematic to do that, just because they haven't had the appropriate training. So maybe that kind of gets at the next question. Thanks.

I think that's a good point. It's something that I see as well in handling research integrity complaints: most of the time, when there is a breach of the research code, it's more on the minor end, and it's more due to a lack of education and training than deliberate, intentional misconduct. So maybe that also feeds into the idea that it is not always intentional; it's mostly unconscious, or due to a lack of training. Any other comments on that one?
Yeah, I guess in my conversations with the research integrity officers, we have discussed how many of the problems are associated with awareness. So we focus a lot on that, on the training and building awareness about what can go wrong. Yeah.

Yeah, the same. The majority of the cases we see relate to researchers not being aware. They just, I guess, weren't given the appropriate training to begin with, so a lot of it is unintentional. I mean, that's one of the reasons we're here today as well, because if we can help improve training, then we can prevent these issues from arising and we can reduce our own workload. So there's a lot of self-interest in this on my part, at least.

It's funny that you make that comment, Shaun, because what we've actually noticed is that sometimes when we do provide more training, we actually get more queries and more complaints. I think sometimes we get into this false loop of thinking that the more training we provide, the fewer cases we're going to get. But in my experience at least, what we've found is sort of the opposite, at least at the beginning stages.

Yeah. And hopefully, once you get over that initial growth in awareness, people will understand, and after that it will plateau and come down to a lower level. So long term, I'm still hopeful. That's the hope.

So in terms of doing training on how to detect errors and questionable research practices, like I said before, I think this one is a lot like the question about trustworthy papers. It's something that needs to be integrated into the training of young researchers, PhDs, early career researchers and even supervisors as well, because the tools are constantly evolving. A few years ago, image duplication detection software wasn't even a thing, and now we have multiple options, often in the startup phase, but multiple options for trying to detect image manipulation or duplication. There are constantly new tools, and ideally we would be able to keep up with them, but it's hard.

I might go to the next question then, from Danny: how much do you coordinate with the training that is happening in your library? Understanding where to publish and how to make that assessment is a basic thing in the library's training.

I'm going to confess that we don't coordinate a terrible amount on where to publish and how to make that assessment. For me, within our office's remit and my remit, where to publish hasn't been an issue or been part of what we deal with. However, we do coordinate with the library when we're talking about topics like plagiarism and copyright, publication ethics, those kinds of things. So we definitely do coordinate with the library, and we link back and forth between each other's materials. Speaking for myself, I am a big fan of our library and everything that they do and the expertise that they have. So over to Paul and Karen for their comments on this one.

Yes, similarly, when it comes to the publishing aspects, we haven't really been coordinating too much on the publication side of things. We do liaise with the library a lot when it comes to, I guess, the governance framework and policies and whatnot. Our focus is more on research data, and so we do coordinate with the library when it comes to the use of repositories and information related to active storage of data and long-term storage.

Yes, same. The library delivers several trainings. We do not coordinate any of those trainings; they do that.
They coordinate those trainings and they deliver them, and they're mainly associated with publication, peer review, those sorts of things. We have seen the content that they provide so that we don't repeat the same information. But yeah, it's a different area.

Our next question is from Rose: I think one of the key difficulties here is not communicating the importance of these efforts, like data management plans, but rather providing sufficient training in how to competently accomplish those things. I think, generally speaking, researchers want to do high-quality work and they want to act with integrity; things fall apart when it comes to knowing how to execute. Are there any shifts or broad trends that you are seeing in the world of research integrity training to move away from the why and into the how?

I think that's a really good point, and I haven't seen a lot of training in how to do things, unless, for example, editors are helping people do open data or something like that, which is not really within the university context. I would like to do more on the how, but it's also quite difficult because there are so many different tools and options out there. So I'm going to pass over to my co-hosts to talk about this one.

No, I completely agree with Rose. Actually, in the State of Open Data survey last year, I think it was up to 72% of researchers who wanted assistance with the how. One of the things that we have focused on is providing that assistance through the data stewards, in the submission of the data management plans and the selection of an appropriate repository, and to help with the metadata. So I definitely think that providing that support is going to go a long way when it comes to promoting research integrity.

I guess it's difficult, on the how, to provide all the information that they might need. As Shaun said, the characteristics of each research project are different. The research methods are different. The collaborations are different. We usually have a two-hour training, and it's really hard to cover all that is relevant for each person. So that's why we end up giving general information. That could be one question: should we just provide a general overview, or should we focus on one particular topic in one training, let's say authorship, just one training about that, rather than providing an overview of all the different things? Thanks.

I'm going to move to the next question now, on avoiding cherry-picking, from Laura: on avoiding cherry-picking, that's the role of pre-publication plans. But pre-publication plans are a high-effort solution. I think that's a good comment. Thanks, Laura.

Another one, from Anonymous: the relevant question also is who is delivering the training; that is a question of trust. Most experienced researchers know that the values officially preached by institutions do not align with their actual actions: ignoring or hiding cases of unethical conduct to avoid damage to their reputation, putting profits or income before doing robust and transparent science. Thus, training delivered by such institutions, for example universities, will be seen by default as a ticking-the-boxes exercise rather than a meaningful activity. Training provided by organizations that don't have vested interests could be more engaging and consequential.

So I'd like to say that at UNSW, at least, we do not just hide or cover up misconduct. As someone who works in the area, we do a lot of work investigating these claims.
So I would say that that's not always true, at least in my institution. But yes, I do believe that who delivers the training is important. This is why I really enjoy this work, and I hope that, as someone with a PhD and research background as well as a research integrity role, my contributions, small as they are, do help. But I'd like to know more about what others think on this.

I completely agree. I think the majority of us who get into this field of research integrity do it because we truly believe in good science. And the same goes for the research integrity advisors and champions that we select: we choose people who are actually passionate about research and about sound science. And I guess when it comes to choosing who does the training, that raises additional points about who's the most appropriate person to provide it. At our institution, we've focused on the HDR students and the early career researchers, because we feel that they're more amenable to training, and we also have the advantage of being able to make it compulsory at times. I guess this relates to question two that we've got up on the screen right now, about how we engage the more senior researchers who are sort of ingrained in their practices, some of whom believe that they know what's best. The approach that we've tried there is to take a peer-to-peer learning approach, perhaps using situations that aren't so clear-cut, and that provides more engagement there. So I don't know if I answered the question, but that's the approach that we've taken at Macquarie.

In our case, in terms of who is delivering the training, it's the research integrity officers who deliver it. But in my role as the education and training coordinator, I provide insights from an educational point of view to make it more engaging and relevant. And we also get feedback from trainees to get an idea of their perceptions. And if it's about ticking the boxes, we don't see that. I mean, from my point of view, I don't see that in my area. Yeah. Thanks.

I'm going to move to the next question now: are there actual trade-offs between integrity and advancing one's career on traditional metrics? Or is that a false narrative? Who wants to take this one for starters?

I think, Jason, you do make a good point there. From what I've observed, sometimes there are perks to people cutting corners. At the end of the day, it comes back to researchers being overloaded and the amount of time that you can spend devoted to your research. And so if that time is spent doing the training and making sure all your T's are crossed and your I's are dotted and the admin side of things is dealt with, then that means less time devoted to the research itself. So in terms of traditional metrics, yeah, it is that fine line of still conducting responsible research but also meeting all your requirements. And sometimes we do see that those who cut corners get an advantage in that sense. So maybe that raises the question of whether or not the system needs to be looked at and rejigged so that there are rewards and incentives associated with those who do the right thing. But that's just my own personal view on that.

That's fine. Well, how would you reward that?

Yes, I don't know.
Maybe when it comes to the promotion of one's career to the next level, there could be categories related to mentoring and things that relate to research integrity: trust markers like open science and data management plans, or providing all the information when it comes to your publications. So I don't know; they're just ideas at the moment.

I think it's a risky tactic. I know that there are lots of perverse incentives, but if you're trying to advance your career on traditional metrics through gift authorship, or by fabricating data because you can't be bothered actually running experiments, it's a risky way of doing things. Because yes, you might get a national research chair, but you might also end up written about in Nature and Retraction Watch. I would like to be able to say that our incentives are good enough, but the only thing I think I can say with any certainty is that trying to boost your metrics by not doing everything with integrity is a very risky approach.

So the next question is from Laura. And I know that we are reaching an hour now, so we're going to just keep going for a maximum of half an hour. Laura says: pre-registration is expanding in economics. The American Economic Association has a pre-registration site. The Berkeley Initiative for Transparency in the Social Sciences has a course in research transparency and reproducibility that includes pre-registration.

I think that sounds great. And if the people who are providing the pre-registration tools are also providing the training, then that makes the most sense in terms of teaching people how to use it. And because it's an association, a society, that's doing this, it sounds to me like it has the potential to be very influential. What do you think?

Yeah, no, I agree. All for it. And glad to see that it's actually expanding into disciplines other than the clinical sciences.

So on to Nick: one way of getting procedures like pre-registration into the regular training stream might be for departments to mandate a discussion of the appropriateness of pre-registration in graduate advisory committee meetings early on in MSc and PhD programs. Are there departments out there that have such mandates?

Not that I know of. There are lots of things that supervisors are already mandated to do with their students, so I don't know if pre-registration is likely to end up as one of them. Do you have any other experience with that, Paul?

No, we haven't explored the pre-registration side of things. Our focus at the moment has been more on research data, and we've gone down that approach with the masters and PhD programs, where there are checkboxes at the different yearly milestones that ask the candidates whether data management plans are in place and whether data is going to be deposited, et cetera. But yeah, definitely something that we could look into.

Karen, have you heard of anything like this? No.

Okay, so the next one is a comment from Laura: one thing we can do as research integrity functions is highlight good practices, for example in a celebration event.

I think that's a really good idea, and one that I would ideally also pair with a bit of money. For example, I know that the Australian Academy of Science, I think it was, recently launched a research integrity prize that has $10,000 associated with it. So I think institutions could do these kinds of things as well. I believe that there was another one, possibly in Tasmania, that was doing research integrity prizes.
Those kinds of things can be really good incentives as well. It will help to promote the right type of researcher, one who's doing these things. Any other thoughts on that?

Yeah, we're hoping to launch a research integrity award this year as well, together with a research integrity week awareness campaign. So again, trying to focus on the positive side of things.

We don't have anything like that here. We've made a proposal, and there are still early-stage talks, about having research integrity champions, but we're still in the early stages of that.

I'm just going to quickly skip over the next two comments, because they were made 20 minutes ago and seem to relate to the earlier discussion, just agreeing with what Paul was saying and clarifying another comment. So the next question is from Anonymous: is there research on what the existing research integrity training across different institutions looks like? As a student, I can say research integrity training can vary from mandatory online modules of multiple-choice questions to actual discussion workshops. Sometimes they are outdated, as they do not consider the more recent developments in error detection. Does the training vary depending on the institution, for example academia versus industry versus government? Any thoughts on this, Karen?

I'm not really sure, but there is a difference. Currently at UQ, we deliver training for the whole university, but sometimes, if a faculty requires the training, we go to the faculty and deliver that training, and in that case we make some changes to the material to make it relevant for that faculty. But in terms of government or industry, I don't know whether there are differences in the training that they deliver.

Yeah, so I guess I'm aware that across Australia, at least, a lot of universities and research institutions do end up using off-the-shelf online training modules. The approach that we've taken at Macquarie at this stage is to develop our own in-house. One of the advantages there is that it allows us to tailor the content and material to the different cohorts, whether it's the early career researchers or HDR students or more senior researchers. One of the disadvantages is that it does take a lot of time to actually come up with the material. What we've tried to do, at least, is take a blended approach, where people need to do the online training, but then we also follow that up with an in-person workshop where there's more engagement and dialogue and it's a bit more interactive. That workshop hopefully addresses any gaps that the online module leaves.

Yeah, I agree. I think that we do end up using these off-the-shelf products a lot. So that kind of creates some uniformity across institutions, but also variation between using product one and product two.

I'm going to move to the next question, which is from Danny, and which clarifies and links in with something they were asking earlier about whether we cover where to publish. The comment is: "where to publish" addresses issues like predatory journals and suggestions of why a paper might be a poor one, and isn't it all tied together with integrity and openness and reproducibility?

I don't really know exactly how to answer this question. For starters, predatory journals are, I think, an integrity issue, but it's difficult to identify exactly what someone has done wrong if they've published in a predatory journal. It really depends on the specific facts of the matter.
So I think that's why the library is probably the best place to do the education on that one. Paul, did you have anything on this?

Yeah, so in the workshops that we provide, we do highlight the issue of predatory journals. Basically, what we try to focus on there is how it impacts the research if they end up publishing in one of these journals: the fact that there is no peer review, that there are higher costs associated with publishing in these journals, and that they're contributing to a bigger problem. But I think the key message we also drive home is that it's going to impact your reputation. If you've published in one of these journals, people can see that it's essentially rubbish, and then that's going to be on your CV and impact your own reputation. I think the fear associated with that is enough for people to really stop and think about it. We obviously direct them to resources like the library and to the various whitelists and blacklists, like Cabell's and Beall's. Obviously that doesn't solve the issue, but we really reiterate that researchers need to do their due diligence before they publish in one of these journals.

I think that's a good point. I'm just going to take us on to our last question here, from Nick: many open science practices and exposure to sources of chronic research deficiencies seem to be generational. It's just a lot more difficult to do good science today than it seemed to be 20 to 40 years ago. Are there departments out there where open science advocates have successfully incorporated open science standards into defining what is expected practice for modern scientists trained in their departments? Shouldn't this kind of thing be prominent in advertising to prospective graduate students?

I'd possibly say that I think science today is better and more open than it was 20 to 40 years ago, so I think the standard has actually risen. When I've read papers from the 1960s and 70s, I've seen graphs drawn with two rats and not even error bars, whereas nowadays we're showing the raw data. Every single data point is on the figure, there are larger numbers, you have defined error bars, more sophisticated statistical analyses, potentially even open data sets. I think science today is better than it was 20 to 40 years ago, and I'd like to give us credit for that: researchers have stepped up and constantly evolved and constantly improved. That's not to say that science 20 to 40 years ago was bad, but certainly things have become more rigorous in that time.

I completely agree. I think maybe where we are with research integrity is where research ethics was about 20 to 40 years ago. At least in Australia, when it comes to the various legislation and guidelines and all that, I think it was in the 80s and 90s when we really had that first established, and it does take that generational change to embed it in the culture and make it standard practice. So yeah, I agree with Shaun that it is going to take time, but we are making slow, incremental progress there.

Karen, did you have anything to add?

Not really. Yeah, I'm not really sure about this one. Thanks.

So I think that brings us to the end, unless anyone has any final comments or questions. We're, you know, well over an hour now, and we've had a very lively and interesting discussion. So thank you very, very much for making the time to come to this session and for your comments and questions.