And in this debate, we're going to be talking about questionable research practices: a set of practices that include, among other things, p-hacking and cherry-picking, or selective reporting. As you can see, we have four debaters in this session: Daniel Lakens, Paul Glasziou, Jane Hutton and Dorothy Bishop. There will be a number of things that our debaters largely agree on. For example, that questionable research practices are widespread. That at least in some instances, these practices constitute ethics violations. And that questionable research practices require many parts of our research system to be fixed to wipe out the problems. Our primary point of difference tonight (well, this morning, or whatever time it is for you) is whether our existing systems, our existing ethics committees and IRB panels, are the right mechanisms to manage QRPs. Whether having those systems address questionable research practices, try to catch questionable research practices, would be an efficient use of those structures. Or whether giving ethics committees the authority to make calls like that would in fact just make matters worse. Would it have the perverse effect of privileging some designs and methodologies over others? Along the way, as we go through this debate, there'll be some assertions that our debaters disagree on. Paul Glasziou and Dorothy Bishop will argue that despite good intentions and good aims, many ethics committees have degenerated to a point where they create lots of bureaucracy and are often more focused on legal issues than ethical issues, and as such are not the right avenue for dealing with QRPs. Daniel Lakens and Jane Hutton will disagree with that position. You'll hear alternative proposals as well, such as whether, instead of approving projects, our ethics committees should perhaps focus on certifying researchers. And we'll tackle questions of whether the current committee structures sufficiently appreciate a diversity of methodologies, or whether the standards of RCTs continue to dominate, and how this might limit committees' ability to demarcate what questionable research practices are. A critical question in this debate will be whether the ethics review process comes at the right time to catch questionable research practices, or does that review process happen too early? Are QRPs in fact a problem that occurs later in the research process? Or are they a planning issue? Each of our debaters will now speak for 15 minutes, and we hope to leave 20 to 25 minutes at the end for your questions and some general discussion. Our first position in the debate will be put forward by Daniel Lakens, who will argue that we should indeed make use of these existing structures. His talk will be followed by Paul Glasziou, then Jane Hutton, and then finally Dorothy. So at this point, I'm going to hand over to Daniel to kick us off. Thanks, Daniel.

Alright, thanks so much. I have some slides to share with you. I would like to start off this discussion by first stating that, of course, I'm going to talk about my personal views here. Just to be clear: I happen to also be the chair of the ethical review board of my university, but the viewpoints are mine and not our ethics board's, and they're also conditional on the fact that I was asked to defend this position, which I'm happy to do. Now, there are different codes of conduct across the world. There's a European one.
You might be somewhere where you have your own code of conduct, but I just want to highlight a couple of statements in the Dutch code of conduct for research integrity that touch upon the issues we'll talk about today, and maybe where an ethical review board should take some action. The starting point here is that researchers have to make sure that the choice of research methods, the data analysis, the assessment of results and the consideration of possible explanations are not determined by non-scientific and non-scholarly motives, which includes the probability that you'll get tenure and keep your job. So this is in the code of conduct. There are other issues related to the design of studies. Make sure that your research design can answer the research question: don't waste time and money by designing a study that is not going to answer the question you're asking. Ensure that the methods you employ are well justified, which requires some expertise. And also do justice to all research results obtained, which in a way means: make sure that you design a study that's informative regardless of the p-value. So these are things that are already in our code of conduct as issues that we want to improve upon. And I think the main argument for why an ethical review board needs to take these aspects into account is that we are trying to prevent research waste. There is the underlying assumption that waste can be prevented. We're not talking about research where people explore, have a good idea and do a good study, but don't find anything. That is part of science. We're really talking about research where people shouldn't have done the study to begin with, because we knew it was faulty or wasteful from the outset. We could have prevented it. This is especially true when research is funded publicly, I would say. There is an ethical dimension here. The current estimate of the cost required to save a human life, according to GiveWell, is about $2,300. So I would say that any $2,300 that you waste is a human life. So there is an ethical dimension to general research waste. And just to use some of the arguments of one of our debaters today against himself, maybe: it's important that research is designed well. So is there an appropriate design and method for the experiment? If this is not the case, this is basically one of the factors, and there are many others, right? There's no single magic bullet that will solve all problems, but this is one of the aspects we should focus on. Many researchers, regrettably, are not sufficiently trained to design studies that actually meet these requirements in the code of research integrity. It requires quite a lot of training, for example, to perform a decent sample size justification. Issues related to multiple comparisons, and how to deal with them, are also difficult. So ideally what you would have is an expert you can go to who helps you with these parts of your design. And those experts would be paid by somebody who's not you, and available to you at any time. Regrettably, this is just not the case. We know there are many fields where there is no expert available, so what do these researchers do? Now, in the process of performing your research, what I see is that the main point of contact with any external party, before you actually go out and run the study that you've designed, is the ethical review board.
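To make the kind of sample size justification Daniel is describing concrete, here is a minimal sketch using Python's statsmodels. The effect size, alpha and power targets are illustrative assumptions, not values from the talk.

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical targets: smallest effect size of interest d = 0.4,
# 5% two-sided alpha, 90% power. None of these numbers come from the talk.
n_per_group = TTestIndPower().solve_power(effect_size=0.4, alpha=0.05,
                                          power=0.90, alternative='two-sided')
print(f"Required sample size per group: {n_per_group:.0f}")  # about 133
```

The point of the exercise is less the number itself than being forced to state, in advance, the smallest effect you would care about and the error rates you are willing to accept.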
So this is the only point where people take a break and ask somebody else if it's okay to go ahead. So it makes sense that at this point you also want to see what you can do to improve things. The idea is to expand the tasks of an ethical review board to encompass more than just weighing the harms and benefits to the participant directly, which is of course the core issue that an ERB has to focus on. You already see a small expansion to other domains that ERBs take on, maybe again because this is just the point in the research cycle where there is this contact with another party. You might already, for example, have a consent form that includes information about the General Data Protection Regulation. So, let's say in Europe, you have to make sure that the data you collect follows the law in terms of privacy concerns. Now, this is a law. This is not ethics. You just have to adhere to the law, but the check of whether you are adhering to the law, at least at our university, happens in the ethical review board procedure, because there is a section on this. So why not add other sections to this procedure that check the experimental design? I would argue that a lack of coordination in research is also a form of research waste. The fact that we don't have experts who help us, and that we don't work together to prevent research waste, is itself an ethical issue, I think. So now comes a critical point, of course: if there is an ethical review board that consists of people who have the expertise to evaluate aspects of the research design, like a sample size justification, but also other issues that can be checked upfront, then this can be an efficient source of feedback leading to improvement in the design. And if this indeed is an efficient source of feedback, we might have an ethical responsibility to do this. And this is not uncommon. There are other places where this happens. For example, statistical reviewers at some journals will also give you feedback to improve certain aspects of your analysis, maybe after the fact, but it could be in a registered report before you actually perform the study. Now, of course, this requires a competent ethical review board, and a review board that has a certain amount of content knowledge. At our university, we have a tiered system of ethical review boards. There is one at every department, and these consist of peers who more or less know what you are studying. And then there's a general ethical review board for more complex questions. It could be that your proposal is already addressed at this lower-level ethical review board, where there are content experts who can give you this advice. And some departments have a long history of already requiring sample size justifications, which is a hassle for the first six months or the first year, but afterwards people learn, for their specific designs, how to improve their design and prevent waste. So a good ethical review board can help to improve the informational value of studies. You can think about evaluating sample size justifications. We also often see that people want to report a large number of tests, and then we can recommend controlling for multiple comparisons. And in some cases, you can recommend that people pre-register more unusual analyses. Treating the informational value of studies as an aspect of research ethics can motivate researchers to educate themselves about certain domains, like sample size justifications.
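A concrete version of the multiple-comparisons check mentioned above, again a minimal sketch in Python with statsmodels; the p-values are made up for illustration.

```python
from statsmodels.stats.multitest import multipletests

raw_p = [0.003, 0.012, 0.034, 0.049, 0.21]  # five hypothetical test results

# Holm's step-down method controls the family-wise error rate and is
# uniformly more powerful than the plain Bonferroni correction.
reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method='holm')
for p, q, r in zip(raw_p, adj_p, reject):
    print(f"raw p = {p:.3f}  adjusted p = {q:.3f}  still significant: {r}")
```

Under these made-up numbers, only the two smallest p-values survive the correction, which is exactly the kind of thing a reviewer can flag before the study runs rather than after.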
If nobody asks them for it, they might not look into this, and they design studies that are actually not informative. But if there is an entity that checks and tells them in advance when they're not doing a good job, people are motivated to improve. And over time, we have seen at our university that people learn how to do these essential parts of their research design themselves, and after that things go much more smoothly. So it's a bit of a bump to get over, but after that, things happen much more smoothly. All right, those are my arguments. Thank you for your attention.

Okay, thank you. Our next speaker is Paul. Paul, do you want me to use your video or do you want to video? Yes, okay. All right, just bear with me for a second.

Good evening. Is that working? I'd like to put the case that questionable research practices are rarely an ethics violation, but instead are a systems problem that is longstanding and that we need to consider from a systems perspective. In particular, putting an ethics framework around this limits our potential to deal with this pervasive and longstanding problem, and a systems perspective will help with that. To illustrate this, I wanted to quote from Doug Altman, who sadly passed away a couple of years ago, but who founded the Centre for Statistics in Medicine in Oxford and spent a lot of his working life trying to correct questionable research practices by giving people better guidance and working out systems corrections to the problem. In an article written in frustration in 1994 in the BMJ, entitled The Scandal of Poor Medical Research, he said: "What should we think about researchers who use the wrong techniques, either wilfully or in ignorance, use the right techniques wrongly, misinterpret their results, report their results selectively, cite the literature selectively, and draw unjustified conclusions? We should be appalled. Yet numerous studies of the medical literature, in both general and specialist journals, have shown that all of the above phenomena are common. This is surely a scandal." And this was 25 years ago, and Doug subsequently documented the history of the documentation of this problem that we now call questionable research practices. As a statistician, he wrote about and tried to correct a lot of these through educational processes. One example is an article he did with his colleague Martin Bland about the right and wrong approaches to testing the before-after difference in a trial where you have a control group. In this example, the FLAX group is the intervention group, the RISC group is the control. You can see there's really no difference between the groups. But the authors here had done the pre-post analysis on the FLAX group and concluded that the intervention was effective because the p-value was less than 0.05. So he was writing about the ignorance of this as an inappropriate process, which, I will note, was not just ignorance on the part of the authors. This was also missed, obviously, by the peer reviewers and by the editors. So this is a very pervasive ignorance that is not confined to the bad practices of the authors alone. This is why I posted this wrong analysis on Twitter. I asked: in a two-arm controlled trial, is testing the pre-post difference, instead of the intervention-control difference using a post difference or ANCOVA, (a) ignorance of statistics, (b) an ethics violation, or (c) "please do a 95% confidence interval"? And very few people thought it was an ethics violation. The largest percentage was for ignorance of statistics.
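To make the Altman and Bland example concrete, here is a minimal simulation of the mistake. All numbers are invented, and both arms improve by the same amount over time, so there is no true treatment effect.

```python
# Sketch of the analysis error Altman and Bland criticised, on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 30
pre_tx = rng.normal(100, 15, n)
post_tx = pre_tx + rng.normal(5, 10, n)    # intervention arm drifts up over time
pre_ctl = rng.normal(100, 15, n)
post_ctl = pre_ctl + rng.normal(5, 10, n)  # control arm drifts up identically

# WRONG: a within-group pre-post test on the intervention arm alone.
# This will often be "significant" even though the control changed identically.
print(stats.ttest_rel(post_tx, pre_tx))

# BETTER: compare the change scores between arms (or run an ANCOVA on the
# post scores adjusting for baseline). Here the difference is near zero.
print(stats.ttest_ind(post_tx - pre_tx, post_ctl - pre_ctl))
```

The within-group test "detects" the shared drift; only the between-group comparison isolates the treatment effect, which is the whole reason for having a control arm.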
And some people wanted that extended to: let's do a confidence interval instead of this p-value testing. So, "wrong techniques either wilfully or in ignorance", Doug Altman said. Well, Siskin's law would say: if you have to choose between knavery and ignorance, choose ignorance; it's much more common. And I think that's illustrated by this analysis of health services research articles, looking at, by their definition, questionable research practices. They found a median of six questionable research practices per publication, and notably, fewer than 3% had zero questionable research practices. And again, note that these had passed peer review and editorial processes without these things being picked up. This is why Doug Altman was concerned with trying to train not just the authors but also the peer reviewers and the journals, who were also ignorant about correct research practices and weren't fixing them up at what is a late stage in the process. Of course, it would be better to pick them up earlier, but they're being completely missed. So this is very pervasive, and it's not just health services research. In a series in the Lancet in 2014, we tried to document avoidable waste in research, which included these questionable research practices. We found avoidable design flaws in about 50% of studies, non-publication in 50%, and poor and biased reporting in about 50%. And if you put those together, it comes to the 85% waste in research. And this is across the whole of the biomedical research enterprise: we found this best documented in clinical trials, but it's documented in many other areas of biomedical research as well, suggesting that this is a pervasive systems problem. So to quote Doug Altman again: as the system encourages poor research, it is the system that should be changed. So I suggest we need to reframe the question here, from "are questionable research practices an ethics violation?" to "are questionable research practices a systems problem?" And the answer is yes. To deal with these, we're going to need funders, research institutions, peer reviewers, journal editors, authors, PhD students and everybody involved in the research enterprise to work out how they can act and contribute to the reduction of questionable research practices. Thank you. Sorry.

Okay. Thank you, Paul. Our next speaker is Jane Hutton. Over to you, Jane.

I don't know if I can just remember to unmute. Thanks, everybody, for coming along, and I'm sure you'll play the game of seeing where I do or don't agree with the previous speakers. So, in terms of whether QRPs are misconduct or unethical, and what ethics committees should be doing given the strength of their position: I think prevention is better than cure. Let me tell you a quick story. Many years ago, in the 70s and 80s (depending on your age), the Economic and Social Research Council in the UK had the two-tier system that Daniel talks about. Research proposals were first refereed by their own subject-area people, and then those that passed went on to statisticians. By the way, quite a lot of medical journals do this; I tend only to see things that have already been deemed interesting. Unfortunately, what then happened was that the statistics committee checked the quality of the design and proposed analysis, and something like 90 or 95% of grants were turned down. And after a few years of this, the ESRC recognized that they had a problem, and they had a choice. They could improve the quality of training in the design of studies and the analysis of those studies, or they could get rid of the statistics committee.
I won't give anyone a prize for telling you which decision they took. But I'll give you a little update from a PhD student who recruited me (I do mean it that way around). He was doing a study in behavioral economics, and he wound up with two supervisors in behavioral economics, a main one and a second one. And the second one said: why are you studying the sign effect? It's internationally agreed and established. Well, three years later, we still hadn't been able to get anyone to agree on what the definition of the sign effect was with any degree of precision, and a review of all the studies on it showed basically complete chaos. So perhaps getting rid of the statistics committee wasn't a good idea. However, is the cure worse than the disease? Lawyers like really long, long, long, complicated forms. So, for example, in medical research I tend to refer back to the Nuremberg Code, with its 10 points, not the Declaration of Helsinki, which rambles on. Ironically, the Nuremberg Code was written by a lawyer, but only one. And you can get all sorts of problems. Take mother-to-child transmission of HIV, where studies were being done in developing countries because it was cheaper. After a lot of debate, the Declaration of Helsinki was revised to exclude the use of placebos. Great, we're decolonizing the Declaration. Unfortunately, the next day they had to issue a revision, because the people writing it didn't understand the role of placebos. And the other big problem, as far as I'm concerned, is the obsession with consent. The reason these forms are so long is individualism and luxuriating in autonomy. It's not only the patients, the participants, who matter. As the statisticians' code says, it's all the people who could be affected by the future decisions. You do bad research and you make bad decisions on lockdowns, on vaccines, on diagnostic tests for COVID. It's not just the participants who suffer. But the big problem has been that the obsession with consent has meant that, certainly in English-speaking countries, regulation and ethics committees have expanded to include observational studies. And that actually means that what you end up doing is the opposite of what you intend. The ethics committees create questionable research by asking for informed consent. And yes, I do have proof of that. One stroke register ended up spending a quarter of its total funds trying to get consent, and they still only got consent from half the people. So what have we forgotten? We've forgotten our own worldviews and the assumptions that our ethics rely on. The UK has spent billions on worthless data. How do I know? Well, first principles, but also Biobank. A recent conference talk by somebody who was very enthusiastic about Biobank stated specifically that, of course, it was grossly biased data. It's comfortable, wealthy people who've contributed to it. And to use it, you have to assume, as many social scientists do, that if you want to study rich white American college kids, you can generalize to the entire world. You can't. And I did actually point that out 20 years ago to the MRC, NHS and Department of Health. That came out of being an expert witness on metal-on-metal hip replacements, where I compared what we got from England and Wales with the Nordic countries. And collecting data on diabetes "safely" has meant the data is worthless.
We need to consider the approach taken, for example, in the Nordic countries, which distinguishes between cases where what you need is an approved researcher (almost all observational research in the Nordic countries will come under that heading) and cases where you're actually intervening to do things. And we also need to bear in mind the wider issues: loss of employment in the UK, because we made it impossible for pharmaceutical companies to operate; or leprosy, where workable treatments are known and the real issue is that you can't get the free treatment to the people. And perhaps we should be thinking more about research into implementation, not just new drugs. But the first thing we have to do is decide on the question. What is bad about questionable research practices, or indeed, what are questionable research practices? Is running a small trial a bad idea? Well, I think so. What do we want to achieve, and can we achieve it through ethics committees? Ethics committees can radically improve design quality. Data and safety monitoring boards should improve the conduct of trials, though they don't always. What is informed consent doing for us, with all these forms and the information? Is uninformed refusal to participate a better option? What are the better ways? Certainly from the point of view of analysis, I think the EQUATOR Network, which you saw on Paul's slide, and approved researchers, with more people available to referee, are probably more effective. So let's think a little more about what ethics we are working from, because what we take for granted colors the options we consider for solutions. I had a great struggle publishing a paper on implementation research and the ethics thereof. Implementation research says: we know there are effective treatments for leprosy; there is no need for people to still be having amputations and the rest of it. So what's going wrong in implementing the treatment? Or: why are we having so many back X-rays when they achieve nothing? At which point you're really interested in doing research on the health professionals, not the individual patients. Now, if your view of the world is that ethics is first and foremost, and almost only, about individualism and autonomy, you can't cope with any kind of logical structure for asking those questions. And the standard response from American reviewers to what we'd written was: but this doesn't fit in with autonomy. Is that a sensible starting point? Well, maybe. If you take a very narrow utilitarian and consequentialist point of view, then you say: yes, if you're trying to get tenure, it's in your interest as an individual to produce the maximum possible number of high-impact publications to the minimum possible standard. Of course, you're ignoring quite a lot of costs when you do that, just as lawyers (and lawyers are getting the blame here) ignore the cost of their regulation. And for those of us who do medicine, this is like thinking about the sensitivity of a process and not the specificity, and forgetting the positive predictive value. What's the point of doing diagnostic tests when you know that 90% of your positive results are false positives? We could also think in terms of broadening not only the concept of utilities considered, but the duties. I think we could easily argue we all have a duty to improve the system by assessing ethics committees and journal review. We need to improve the way those work in order to improve questionable research practices.
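Jane's diagnostic-test arithmetic is worth spelling out. A minimal worked version, with hypothetical numbers: even a test with 90% sensitivity and 90% specificity, applied to a condition with 1% prevalence, yields mostly false positives.

```python
# Hypothetical numbers: 1% prevalence, 90% sensitivity, 90% specificity.
prevalence, sensitivity, specificity = 0.01, 0.90, 0.90

true_pos = prevalence * sensitivity               # 0.009 of the population
false_pos = (1 - prevalence) * (1 - specificity)  # 0.099 of the population
ppv = true_pos / (true_pos + false_pos)
print(f"Positive predictive value: {ppv:.1%}")    # about 8.3%
```

Under these assumptions, more than 90% of the positive results are false positives, which is the figure Jane alludes to: looking only at sensitivity tells you almost nothing about what a positive result means.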
If we actually make the cure worse than the disease, well. And then virtue ethics, which I think is actually where I tend to land, and which I think is consistent with what Daniel was saying about people finding it painful at first but benefiting in the longer term. It is much better if we put the effort in early to get people to behave well in study design and analysis, and for that to become a habit. Virtue ethics requires us not to be lazy. And yes, QRPs are a systems problem, but we are the system. It's not them; we are the system. If we want these things to improve, it's up to us to put in the work to get them to improve: by teaching, by refereeing, and by advocating. And of course that's a big job, so it means we do need to work together cooperatively, because no one of us can achieve this alone. Thank you.

Thanks, Jane. Our final presenter is Dorothy Bishop. So Dorothy will speak now, and then we'll open up. I can see that questions are coming in through the Q&A, so we should have plenty of time to get to those. But for now, Dorothy.

Thanks, Fiona. And thanks for organising this. And I'm sorry you're up so late in Australia, but it's great that we can all be here. So I'm Dorothy Bishop, I'm a professor of developmental neuropsychology at the University of Oxford, with a particular interest in all things to do with reproducible and open science. And I found this question that Fiona posed for us really fascinating, and I'm glad we're having an opportunity to talk about it. I should say that from the title of this session, which was really "are QRPs an ethics violation?", she's got us to drill down and think more specifically about whether ethics committees, or IRBs in the States, should be the people responsible for somehow trying to fix this problem that I think we all agree is a big problem with QRPs. And my attitude towards this question is strongly influenced by my attitude towards ethics committees, which has something in common with Jane's attitude, I think, from the things you've been saying. But of course, like so many things in life, the reasons we have ethics committees are very powerful and very good. In fact, in Oxford, we are supposed to do some sort of online training about ethics. And it starts very starkly with Nuremberg, and with terrible things being done in Nazi Germany in the name of research, and makes you aware that the real focus is on not harming individuals who take part in your research. The main goal is seen as protecting people, as Jane says, individuals. That's the impression that comes across if you go through this training: that they're not so much interested in protecting science as an enterprise as in protecting people from physical or emotional risk, from invasion of privacy, and from exploitation, which I guess is where consent comes in. You don't want people to be exploited, particularly, again, if we're talking about hospital patients; people are vulnerable and might sign a form even though they would really be better off not doing so. And the use of inducements is a very interesting one. So ethics committees, I would say, started out being set up to ensure that risks like that were considered and mitigated for research that the institution is carrying out. Now, my experience with ethics committees, and I work in areas that veer into the medical, so I work with children with various developmental disorders, including genetic disorders, as well as with typical adults...
It was so negative that I have blogged about it a bit, and at one point I wrote a blog about research regulation, drawing a parallel between research regulation and the explosion of populations in an ecosystem where there are no predators. Essentially, what we have is a situation where there's always a good reason to introduce a new regulation, but there's nothing to stop the explosion of regulations. So this is where I'm coming from when Daniel says, well, we could include coverage of QRPs for ethics committees: I already think that we have introduced too many roles for ethics committees, and that there's another way to do it. I'm not saying we shouldn't worry about these things, far from it, but I do not think that expanding the role of ethics committees is the way I would want to go. What we have are procedures that were originally designed to protect people from fairly major, serious possible harms being extended to relatively trivial circumstances. That, I think, has a really corrosive effect on people's attitude towards ethics. I mean, I started to get very, very cynical. I like to be ethical, and the only thing that makes me feel like being unethical is ethics committees, because they'll start telling you you haven't got the form in the right template, or something. Things that really go a long way from where you think they should be. But also, and I think this is a point that has already been made: what ethics committees have been co-opted to do, because they're the one place where it can be done, is to ensure, and this is the point Daniel made, that things like the GDPR regulations are adhered to. And in our ethics committees, they smuggle in a whole load of other stuff. Basically, you feel that part of the role of ethics committees these days is to prevent the university from getting sued if something goes wrong. And this is really conflicting, because we're told on the one hand that we should have language in our documents for participants that is clear and not scary and not difficult, and then we get told that we have to include these incredible sentences full of legalese. I can't even read one out, but I've got one in one of my blogs that I was asked to get parents of children with language impairments to sign up to, when we know that many of these people have limited literacy skills. So you have to watch it. If you're going to say research regulators should do more, you have to be really careful of unintended consequences and an explosion of bureaucracy, an explosion particularly of things that were brought in to deal with a very real bad circumstance being somehow blanketly applied to absolutely everything. And one of the examples I gave in one of my blogs was a situation where we wanted to recruit mothers of babies at the local maternity hospital, so that later, when the child was one or two, we could do things with them. All we wanted was for them to agree that we could possibly contact them at a later age. And we were only seeing mothers of healthy children. But it was then suggested by the ethics committee that before we contacted any of them subsequently, we would have to write to their general practitioner and check that the child had not died. Now, clearly, if your child has died, this is awful, and getting a letter from us saying "would you like your child to take part in this study" would be horrendous.
But at the same time, the amount of work that would be created for the GPs would be massive. They would not be seeing patients during this time. And I worked it out; I went and actually got some epidemiological data on how likely this was to happen. And it was vanishingly small. That doesn't mean it couldn't happen. So you're defending, a lot of the time, against very minor risks. Now, that's not true with QRPs. We know QRPs are happening all the time. But I think the way to handle this, my radically different proposal, is that we should think of it as akin to allowing somebody to do another very dangerous thing, which is to drive a car. You get behind the wheel of a car, and potentially you've got this very heavy thing that can run around very fast and kill people. So what do you do? You get people to take a driving test. You might have to have a lot of lessons, and you get yourself certified that you know how to drive. Why not have something similar when you're authorizing people to be able to do research? And then you can build into your training things like really getting them to understand why QRPs are such a bad thing. Because the reason this persists is that people think it's just pedantic statisticians being difficult. They have no idea that this has serious consequences. Most of the people I work with just think you're being panicky if you say, you know, you've got to do a Bonferroni correction. Oh, a Bonferroni, you know. I gave a talk this morning, at the Oxford Berlin summer school, about exactly this: that science is cumulative. If you let dodgy stuff get out there, if you let stuff get out into the literature where you've misinterpreted what a p-value means because you've basically been retrofitting your hypothesis to your data, you are building a body of work, but it's wrong. And you're building a body of work that other people will then try to build on. The seriousness of it needs to be explained to people. So I would say you should have something akin to a driving test in doing good ethics, and you should train people very much to understand the implications of what they're doing, so that they understand it. And then, of course, if they're found to have serious violations, just as with somebody who's found to be driving far too fast or driving dangerously, they can be fined, they can be banned, they can have their credentials taken away and not be allowed to do more. So you can have quite strict penalties. If you've got people to demonstrate that they understand what they're doing, then you know, if they don't do it, that there's no excuse of ignorance. I think that's much, much better than trying to get the ethics committees, poor old ethics committees, to do it. Because I have huge respect for people who sit on ethics committees, many of whom do it in a voluntary capacity, and it's not easy work, and everybody hates you because you're continually telling them that they can't do things. But I think the problem would be that it would be hard for them to actually be up to the task of understanding all the different QRPs in many different disciplines. I think already we have some perspectives coming from medicine, and Daniel and I from psychology, and there are other areas. And indeed, somebody in the questions just asked: what about research on animals? Well, ethics committees, certainly in the UK, wouldn't normally handle that, whereas QRPs would certainly apply to research with animals.
So I think some way of having training for being a researcher, where you get some certification, would be a more effective way of dealing with the wide range of things that we're talking about. And it would mean some bureaucracy, because you'd have to get the training and certification. But after the committees I've been through, I would personally prefer that to having, as is currently the case, every single study we do go to the ethics committee. We then discover something wrong with the study, so we go back for a correction, often over quite trivial things like how we worded a form. And it's time-consuming, it's deathly. And I think if you started bundling in examination of QRPs on top of that, the entire system would probably collapse, which perhaps wouldn't be a bad thing. I also think, though, that some ethics committees just wouldn't want to take this on, precisely for the reason that Jane raises, which is that they see their role as protecting individuals, and it's not so clear how stopping QRPs protects individuals, other than that it might stop somebody from being enrolled into a study that hasn't got a chance of showing anything useful. But perhaps for me, as a psychologist, the bottom line is that we need to think of the psychology of this as well. I know the impact on me of being told what to do by ethics committees. I've had good advice from ethics committees, and what I'd really like is to be able to ask their advice when I come across problems that I find challenging. But if they start telling me that I've got the wrong sort of form or the wrong acronym or whatever, I just get thoroughly cynical about the whole process and start thinking that you almost want to tell lies just to stop the whole thing and make it go away, which is terrible. I mean, this is why I say it can get really quite corrosive. So I'm sorry this is turning into a rant, and as you can see, I probably need psychoanalysis on this topic. But it has been a long, long research career in which I've seen ethics committees, which didn't exist when I started out, grow from being quite small, confined organizations into a monster which is very hard to satisfy. And I don't think expanding the monster further is necessarily the way to go. I think we really need to rethink this. What you want is to have the researchers themselves not just be able to go through the motions, but actually have a deep understanding of why certain things are right or wrong. And for that, I think they need to be the focus of our attention, and getting people adequately trained is what is necessary. Otherwise we'll have unintended consequences from outsourcing. I don't think we should be outsourcing these decisions to a third party that is, I think, ill-equipped to deal with them. So, end of rant.

Thanks, Dorothy. There are quite a number of questions now in the Q&A, and we've got time to get through those. But just before we do: we've had a very concrete alternative proposal here from Dorothy about certifying researchers, treating this like a driver's license, with penalties for the equivalent of speeding or other issues. Can we have a quick response to that proposal from the other panelists, just a minute each? Daniel, you first.

A response to the driver's license proposal? Yeah, I was listening to this, and I wondered: so why did I get my PhD?
Like, what is that worth? I mean, weren't we supposed to train people during their PhD to be able to do these things? So that's one thing. And second, I would love to have additional testing and schooling, especially for senior researchers at the university, but talking about hard sells, I think that's going to be an incredibly hard sell, because they're not going to do it. I don't think they're going to do it. So I find it a lot more convincing to put the pressure on them because there is a hurdle in their way, to force them to do it. Because if you just invite them for a driving test, it's great if you can get it working, but I don't think it's going to work out in practice.

Paul, do you have a reaction to the proposal?

So I think it's a very interesting idea, and it's a useful metaphor, but again, it's a systems problem. If you think of driving as the problem, one of the elements of safety in driving is having a driving test, right? But there are a lot of other elements in the whole driving system, like having safer cars, having seat belts. And I really like the metaphor, but you only need one person in the car, the driver, who needs to be licensed, and that might be a methodologist on your team who could certify it. It's not necessary that everybody who's a member of the research team has a driver's license; for the protocol that's going to go through the ethics committee, the ethics committee could sign off and say: yes, there is a certified, methodologically trained person on this. But that's only one part of the solution; there are multiple elements. As I said, you need the traffic lights, the lines in the middle of the road, safe cars. That's the systems problem. If you focus down on that as the one solution, you'll forget all of the other potential things that have reduced deaths from road traffic accidents, and licensing is just one component of that whole system solution.

And Jane, do you want to?

Yeah, a couple of points. First of all, as a slight aside, the Office for National Statistics does have a category of approved researcher, and they are allowed access to certain information. The Nordic countries definitely have approved researchers. And, you know, I know that most epidemiology research done with informed consent is worthless, because in places like Finland, you can get responses to your questions about alcohol and tobacco and cannabis consumption, but you can also follow up your non-respondents, because everybody is identified. And you can see that the non-respondents have much higher death rates and hospital admissions from alcohol-, tobacco- and cannabis-related causes. So I agree with Paul on the systems thing. One of the systems things is certainly deciding what you want to achieve from the research. Now, again, my background is more medical, and as we've got two psychologists on the panel: I think one of the big systems problems is that psychologists think they know about statistics. And therefore, oh, I'm glad Dorothy's agreeing, therefore these people get their PhDs thinking they've been taught statistics, and they've been taught nothing of the sort. The PhD student I mentioned did courses in the stats department, and he compared those with what his colleagues did in social science. And social science was pretty close to asking: can you put a round peg into a round hole and a square peg into a square hole? And no, don't ask me what "round" means and what "square" means.
So if I was honest, I would think we should probably write off almost all social science research and start again. That is an extreme position: start our licensing system by asking everybody, from professors downwards, to retake their driving test.

I mean, I was nodding about the statistics thing, because it's absolutely true. I was taught statistics not by statisticians. And we have a huge lack of applied statisticians, which is part of the problem, and posts are not being created for these people. In fact, in the session I was in this morning at the Oxford Berlin summer school, it was quite interesting: I caught just the tail end, where Ulf Toelch made exactly this point, that we should be funding statisticians, or methodologists with statistical knowledge, in departments, but we don't have them. The medics, of course, typically do have the benefit of having a statistician they can go to for advice. We just don't have the funding for that, and it hasn't been given sufficient priority. But I think if we all decided we wanted to do that, we would have a terrific problem, because I don't think there's a great pool of applied statisticians out there. I think we really should be training more of them, people who really know that area, and that would really help solve some of these problems.

I think that's another systems problem, in two ways. One was getting rid of the stats committee 30 or so years ago at the ESRC in the UK, which was a deliberate decision to downgrade the importance of design and analysis. The other problem was actually caused by the Engineering and Physical Sciences Research Council, which decided to restructure master's degrees on the basis of what you need to be an engineer. And they were told 30 years ago that they would wipe out statistics, because the entry degree for statistics is still very often a master's, and the people who want statisticians are not the large industries that are going to sponsor master's degrees. So yes, again, this confirms Paul's point about the systems and the need for us.

So it's lovely that you point out these huge errors. I just want to point out that if next week you start requiring that your local ERB has a certain number of checks in there, like a decent sample size justification and multiple comparisons issues, this issue will be improved immediately, without waiting 40 years before we have funding for a huge army of applied statisticians. And the best person at the university who's capable of evaluating this might do a better job, relatively speaking, on a shorter time scale. I mean, it's the best that we have at this immediate moment. I love your future ideals: all seniors are trained, we have 50,000 extra applied statisticians. But what are we going to do next month?

Paul?

So Daniel, I think what you're arguing is really an argument of convenience: we already have the machinery of ethics committees set up, so why don't we extend it to the methodological issues? But I think, as Dorothy has been arguing, that may actually have perverse consequences, in that most ethics committees aren't actually trained for that. So an alternative that you could probably easily set up, again coming to Dorothy's argument, is a protocol review committee instead of an ethics committee.
And the protocol review committee could then decide, if there's an ethical issue, to refer it to the ethics committee, which can deal with problems like consent; but the protocol committee would have been set up for methodological purposes. And I think that may be a better model. I agree it's harder to set that up next month, but you could probably set it up next year. One other point: I think all of these things need a proper evaluation. We're effectively proposing lots of interventions in the system, and we would also need to look at the effect of those interventions, as Jane described with the statistics committee and what occurred as a consequence of that. Effectively, that was a sort of right-between-the-eyes evaluation: this was a problem and everyone complained about it. But some of the problems will be subtler. So evaluation is key to these intervention changes within the system.

I think we should turn to some of these questions now. I'm going to start with the ones in the answered list, and then I'll go back to the other list. And if you're in the audience, do get in and vote thumbs up and thumbs down on those questions, to help me figure out where to start. This is Brian Nosek, saying: great presentations. I expect that all of you presenters believe that there is innocent faulty work and then there is misconduct, and he's interested in where your various demarcations between those things might lie. Does anyone want to take a stab at that? Where is it for you, Daniel, the point between innocent mistake and misconduct?

I think, with respect to the ERBs, the people who are going across the line can easily fool the ERB. We are in no position to figure out that they are fooling us, that they're writing things in their applications that are not true. There, it's clearly the intention to just go beyond the truth, to ignore the truth and make the point that you want to make. And for the people who want to do that, the ERB has no possibility of doing anything about it. Regrettably, I think, but that's not the place.

No, I don't think so. I mean, the point I made is: when is ignorance culpable? And I see that both at the individual level and at the systems level. We know that there are a lot of pressures on people to publish. In the UK, we've had a disaster in higher education, where we've tried to judge every higher education institution by research, and then suddenly every higher education institution by teaching. And in fact the original vision was to have a lot of very good teaching institutions, but not to set teaching and research in competition for promotion. So as well as the individuals, and it's easy for somebody like me to say, because I'm near the top of the pile, relatively speaking, we also need to provide support to, well, the ones I know are the young statisticians who are being told that if they don't come up with the result that happens to involve dropping out a few people so that it is significant, they'll lose their job, or things like that. You know: if you don't shut up about telling us what the problems are with our studies, you'll be fired. So we do need people with integrity, and I suspect that that's one of the difficult long-term challenges. I agree with Daniel: the short-term "you've got to do this" is a good way of starting to achieve the longer term, because people will do it because they have to, to start with, but they will also begin to think. They may even argue back: why?
And that gives you a chance to say why. I would also say, on informed consent: I dislike the idea that ethics committees are only about consent. Nuremberg was very clear that the quality of the science is very important, and I think we need to bear that in mind.

Any other thoughts about where the demarcation is between innocent mistake and misconduct?

I think I agree with Daniel on this one. Coming back to Doug Altman's point, it's whether it's wilful or ignorant, and that's often hard to tell. I can imagine that the majority of p-hacking is probably ignorance of the implications of it. But people can do it very deliberately, and there's a whole spectrum of how aware people are of that. So I think we need to forget whether it's wilful or ignorant. We need to train people, and then do the detection work and set up the systems for that. If you have, for example, registered studies, that makes it possible later on to detect the p-hacking-type problems, because you've got a registration of what your analysis and your outcomes, et cetera, are going to be. Without that infrastructure in place in your system, you can't set that up. But it also trains people to think that way. I mean, that's fairly effective in medicine. It's not perfect, but with randomized controlled trials in medicine now, you can almost always go back to the protocol, modulo outright deceit. You can see if people have changed their criteria, and you can evaluate whether you believe their reasons.

Dorothy?

Just on Brian's question: I don't think there's a hard, complete binary divide. It's a matter of degree. And the way it turns into a matter of degree is, I think, that there are still a lot of people who really, really don't even know that p-hacking is a problem. I've talked to people who said, I want to do X, Y, Z, and I said, well, that's p-hacking. And they said, no, no, it's doing good research. It's finding the interesting things in the data. There are people who really think that. Then there are people who've been told they shouldn't p-hack and who are a bit nervous. But, as Uri Simonsohn, I think, was the one who said, they think it's a bit like jaywalking; they don't think it's like robbing a bank. And which one of us can say we've never speeded? You're probably all going to say you've never gone a few miles over the speed limit. Which one of us always obeys every single rule? I think a lot of people regard it that way; this is why I say they think it's pedantic statisticians. They don't get it. They don't get that it's serious. And that's why I think the training has to be there, to make people realize that it is serious. Because if people think they're being made to do pointless little silly things, and in a sense having their fun spoiled and being made to do a lot of things that really don't make a difference at the end of the day, which is how people think of it, then they're going to rebel, and then they're going to start genuinely bending the rules and feeling that bending rules is fine, because the rules are made by idiots or by people who are just being extremely pedantic.
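The point Dorothy and Paul are circling, that "finding the interesting things in the data" quietly inflates the false positive rate, is easy to demonstrate by simulation. A minimal sketch, with an arbitrary choice of five outcomes, thirty participants per group, and no true effects anywhere:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, n, n_outcomes = 10_000, 30, 5
false_positives = 0
for _ in range(n_sims):
    a = rng.normal(0, 1, (n_outcomes, n))  # group A: five outcomes, no effect
    b = rng.normal(0, 1, (n_outcomes, n))  # group B: identical distribution
    pvals = stats.ttest_ind(a, b, axis=1).pvalue
    if (pvals < 0.05).any():               # report whichever outcome "worked"
        false_positives += 1
print(f"Studies with at least one 'significant' result: {false_positives / n_sims:.1%}")
```

With five independent outcomes and a 5% threshold, roughly 23% of these null studies produce something reportable, which is why undisclosed outcome switching is a serious problem rather than jaywalking.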
So that's why my view is: if you can't get people to deeply understand the consequences of questionable research practices, to a level where they suddenly see that the consequences for science and their colleagues, and for the future of humanity if you like, are serious, then I don't think we've got the right solution. And I think that's why this has persisted: people haven't understood the nature of the problem.

This next question that I'm going to read is from Tamarinde Haven. And this, I think, touches on something you said before, Paul, about how we're making interventions in the system and need to be monitoring those. The question is: given that all of you speakers seem to agree that ethics committees are currently not functioning optimally, how do you envision studies being carried out to see whether different types of review, different types of emphasis, could decrease research waste or bureaucracy? What kinds of studies would provide us with an empirical answer to the question of how to intervene and evaluate these things?

Yep. So, there has been a lot of research on this. The first Peer Review Congress, held because Drummond Rennie at JAMA was concerned about the problem, was in 1989. So that's, you know, over 30 years ago. And that was to explicitly encourage research into improving the peer review processes, because a lot of the medical editors had recognized what a problem it was. Since then, there has been quite a lot (insufficient, but quite a lot) of research trying to find ways to improve the system. The BMJ, for example, has had someone allocated to doing research on the processes of the BMJ itself. They did things like conducting randomized trials of whether you could train people to be better peer reviewers. They also did an interesting study of deliberately inserting errors into studies to see if the peer reviewers detected them, and which sorts of errors they detected. So that's very important: it's not just trials, then; that sort of empirical research is needed too. And it's been done a lot in the peer review area, but almost none by ethics committees that I could find, by the way. So the peer reviewers have probably been the most studied. There's also been some institutional work on training people, developing tools, et cetera, and doing empirical tests to see whether you could improve processes. And I notice Malcolm Macleod's on the line, who might actually speak to some of those sorts of processes as well.

Before Malcolm comes in: I appreciate the studies that Paul has been involved in; I've been a participant in some of them. But I would still step back and respond to Tamarinde's question with a question and a suggestion. The first question has to be: what do we really want to find out? The current system doesn't provide it; there are plenty of people saying there are not enough statisticians, or whatever, but the current system doesn't regard that as valuable. Which suggests that we need to think about what our values are, and how they interact with the general population's values, or the funders' values, and so on. And my second point would be the classic public health one: here's a problem, let's find out what we know about it first, before we design the trial. I would do the epidemiology first. I would find out what happens in different countries, once we've agreed on what we're trying to achieve, and see which of those countries achieves it more effectively.
Certainly, in the discussion about research in developing countries, when the comment was made that it ought to be ethically reviewed in your own country, the response from South American colleagues was: no, our committees can be bought. That's the level of bribery. What is our vision, before we find out how to achieve it? My vision would actually be for a lot less research to be done: less, and better quality.

I am working on how to make Malcolm a panelist. Just give me a minute. Let me read another question while I'm doing that.

Can I maybe add an answer? Oh, sorry, yeah. So first of all, I agree with Paul: ethical review boards should give complete openness about their processes and allow other researchers to examine what they're doing. It's a big problem that there's so little research on this, and I think ethical review boards should open up and give access to this data. Now, if we do a good job preventing research waste, there doesn't need to be a conflict. Dorothy is very worried that people will see it as useless clicking on things and checking boxes. But if you can demonstrate that the things these additional statistical parts of the review process catch, underpowered studies, lack of control of alpha levels and all those things, lead to easier acceptance of papers later on, which they should, because you've designed a study with more informational value, then people should start to feel like: whoa, this is actually paying off, right? That would be my ideal. And a good ethical review board should be able to do this, hopefully.

Apparently we've just pulled Malcolm off his elliptical trainer, but he's with us now. Do you want to respond to where we're at, to Paul's comment?

Yeah, thanks. You're not going to get video, I must say, because I'm not as young as I used to be. For those of the participants who don't know me: I'm Malcolm, I'm a neurologist from Edinburgh, academic lead at our institution for research improvement. So at an institutional level, this issue is very live. We've got lots of initiatives, and we've got very little idea how effective any of them are in shifting the needle of performance. Now, some of these are relatively straightforward, and you can quite easily imagine a randomized trial that would test the intervention. Some of the things that we would like to do are much more complex. And I think in that space, borrowing from the clinical world of randomized trials of complex interventions may be helpful and beneficial, because you don't quite know what it is in the box that shifts the needle. I think there's another dimension to this: there are some interventions which are so obviously low cost, and probably very high benefit, that you should just get on and do them. So, for instance, encouraging researchers to submit preprints to bioRxiv: very low cost, real apparent gain; you don't need a randomized trial to say that's helpful. And then my final observation, sorry for taking up the time, would be that research funders are spending billions and billions of dollars every year, much of which is going to waste, and they are institutionally reluctant to spend even 0.1 of 1% of that on the research that would tell them how to better invest their funds. And I've had an application to one of our major UK funders for research-improvement research turned down because it wasn't the sort of thing that they funded; it didn't even get to the panel. It wasn't the sort of thing they funded.
Apparently we've just pulled Malcolm off his elliptical trainer, but he's with us now. Do you want to respond to where we're at, to Paul's comment?

Yeah, yeah, thanks. You're not going to get video, I must say, because I'm not as young as I used to be. For those who don't know me, I'm Malcolm, a neurologist from Edinburgh and Academic Lead at our institution for research improvement. So at an institutional level this issue is very live. We've got lots of initiatives, and we've got very little idea how effective any of them are in shifting the needle of performance. Now, some of these are relatively straightforward, and you can quite easily imagine a randomized trial that would test the intervention. Some of the things that we would like to do are much more complex, and I think in that space, borrowing from the clinical world of randomized trials of complex interventions may be helpful, because you don't quite know what it is in the box that shifts the needle. I think there's another dimension to this: there are some interventions which are so obviously low cost and probably very high benefit that you should just get on and do them. So, for instance, encouraging researchers to submit preprints to bioRxiv: very low cost, real apparent gain, you don't need a randomized trial to say that's helpful. And then my final observation, sorry for taking up the time, would be that research funders are spending billions and billions of dollars every year, much of which is going to waste, and they are institutionally reluctant to spend even 0.1 of one percent of that on the research that would tell them how to better invest their funds. I've had an application to one of our major UK funders for research-improvement work turned down because it wasn't the sort of thing they funded; it didn't even get to the panel. It is an outrage that these people are spending, often, public money on the same old stuff that we've shown hasn't worked, and aren't even investing in seeking to improve it. If I did that in any other walk of life, I'd be fired tomorrow.

So, Malcolm, metascience needs to be funded better. That should be a theme of this conference. I think that's very much the case. I just wish the financial regulators of this country were fired when they didn't do their job.

I'm going to try and, I'm sorry to cut you off, I want to try and get through; there are a lot of questions here that I've not been attentive enough to. So let's start with this one from Patrick, which has floated to the top now. Editors are already struggling to obtain skilled peer reviewers. If we do decide that we want another layer of regulation and checking at this ethics review stage, where would the resources come from to ensure that ethics review boards are staffed with the expertise necessary to do quality reviews? We've talked a little bit about training more statisticians. Paul?

So, a lot of people have said this already, but I think we need more trained, I'll call them methodologists rather than just statisticians, because I think it's a broader issue than just the statistics component. We need more methodologists, and greater encouragement to train them, include them, and fund their careers. That's crucially important. But the other thing we can look at again is the sort of systems- and tools-based processes. There are tools that can automate the checking, for example, of the statistics, and that can check the completeness of your reporting, et cetera. That can help the peer reviewers who are looking at protocols at any stage, not just the ethics stage, by lightening the load of doing that, at least making it more feasible for the few methodologists we have to be more efficient in that process. So again, work on the development of those sorts of tools is critically important.
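As one small illustration of what such automated statistics checking can look like, here is a minimal sketch in the spirit of tools like statcheck, which recompute reported p-values from the reported test statistics. It is not a description of any particular tool's code: it assumes Python with scipy, and the "reported" result is invented.

```python
# Recompute a reported p-value from the reported test statistic and
# degrees of freedom, and flag inconsistencies. The reported numbers
# here are hypothetical, standing in for "t(28) = 2.10, p = .01".
from scipy import stats

reported_t, reported_df, reported_p = 2.10, 28, 0.01

# Two-sided p-value implied by the reported t and df:
recomputed_p = 2 * stats.t.sf(abs(reported_t), df=reported_df)

if abs(recomputed_p - reported_p) > 0.005:
    print(f"Possible error: reported p = {reported_p}, "
          f"recomputed p = {recomputed_p:.3f}")
```

Real tools also have to parse these numbers out of manuscript text, but the core consistency check is just this recomputation, which is why it is so cheap to run at scale.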
So Patrick's now asked where the resources and the time for the training would come from, and how we ensure that, once we've trained these people up, these methodologists don't jump ship into another industry to make a lot more money.

So at our university we have a lot of these local ERBs, so there are actually quite a lot of people involved in this, and what you need is just the best three or four people in this methods-and-statistics area in the department, so that they know a little bit about the field. You take those best three or four people, you schedule them free from the more boring management tasks that they don't want to do anyway, you let them come together with their fellow stats-and-methods nerds once a month or every other week, and they go through and prevent the biggest blunders that would otherwise happen in their department, among the colleagues they're associated with. They're strongly motivated to do this; they enjoy it. So just schedule the best people you have in the department free. They might not be international experts, but that will already give you a large improvement.

The next question is also from Patrick. Actually, Whitney, can we add Patrick as a panelist as well? Thanks. While we're doing that.

Can I just make one point while you're doing that? I think the problem is that these people, methodologists, statisticians, at the moment, certainly in the UK, there's no career structure for them.

I agree there's no career structure, but I spent 20 years at Warwick saying you could radically improve our research rankings and our research if you would do this; the same goes for the efficiency of all our admin. I'm slowly getting them to accept some joint research projects, because I put my own time into it. And we have the same people. Yeah, career structure is certainly also a challenge if you don't respect people who can look at a data set and see the problems, if you're treated as just a little technician. Fine, I might just be a little technician, but you might need me, like the technician on the plane.

I'm going to read Simone's question, and then, Patrick, I'll come to you for the next one. Simone asks: one big challenge of empowering ethics committees to adjudicate on QRPs is that it seems to involve more judgment calls than many of the other things ethics committees are charged with checking. So should ethics committees be using their judgment a lot, or should they be mostly following relatively clear-cut policies?

That's a very good question. It reminds me of Geoffrey Berry, a statistician who wrote one of the standard textbooks of medical statistics, saying that you need an odd number of statisticians on any methodological committee. And I think we have to recognize explicitly that you do get disputes between methodologists. That was a worry for me about an ethics review board having just one methodologist: the voice of their particular methodological bent becomes dominant. That's why it would be better to have a protocol review committee with a mix of methodologists, who could say, yes, but not everyone agrees that that's the right way to analyze a factorial design, or whatever.

Just to add: in my experience, that level of question, about the best analysis strategy, is often not even remotely close to where you end up as an ERB. What you end up doing is saying: look, you write here that this is your research question, and you write here that this is the number of participants you're collecting, and that goal will not be met with this sample. So there's very little subjective evaluation; it's just that, logically, this doesn't match. And then, if you're also going to do these 17 tests and you want to draw a single conclusion, that also doesn't logically follow. So it's really at this very basic level. Hopefully the more subjective things we can leave to researchers; it isn't necessary to go further, because people already make these basic mistakes.
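The arithmetic behind that "17 tests" worry is worth spelling out. Here is a small illustration, not from the panel, in plain Python; it assumes for simplicity that the tests are independent, which real dependent variables usually are not:

```python
# Why 17 uncorrected tests undermine a single headline conclusion:
# with every null hypothesis true, and tests assumed independent,
# the chance of at least one false positive is large.
alpha, m = 0.05, 17
familywise = 1 - (1 - alpha) ** m
print(f"P(at least one false positive) = {familywise:.2f}")  # about 0.58

# The simplest correction an ERB might point to, Bonferroni,
# tests each hypothesis at alpha / m instead:
print(f"Bonferroni per-test threshold: {alpha / m:.4f}")     # about 0.0029
```

Under these assumptions, some test will come out "significant" more often than not even when nothing is going on, which is why the single conclusion does not logically follow.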
There are a couple of related questions here. One is: should ERBs, ethics review boards, require registered reports, so that someone with expertise reviews the methods before we proceed? And another asks for any thoughts on recent discussions of funding models where funds are allocated simultaneously with stage-one registered report review. So how do we see the role of registered reports in this process?

Can I address the registered reports question? I'm sure somebody else could too, but trial registration, where you have to register your trial, has a history here: it's decades now since John Simes wrote his pivotal paper on the importance of registration in stopping bias, and it has slowly gotten there. I think this is going to take a long time, but it's a crucial predecessor to registered reports, because you have to specify what it is you're doing. That allows people to do the appropriate studies, then, to see whether you have lived up to what you said you were going to do in the registration, in the protocol. But not all journals check that; in fact, very few do. The Lancet, I think, is one of the few that actually take the registered protocol and compare it with the publication, for example. But at least the infrastructure is there to enable you to do that, and that, again, could be automated. So I think there is a lot in the registration process that we could learn from what has happened in clinical trials. Importantly, since, as Daniel has been saying, the ethics people are the only ones who check at that crucial early stage, getting them to check that there has been a registration is a crucial thing. You could apply that to many other areas of research, I think, but not all. This is a difficult problem, but you could certainly extend it from where it is now, and that extends into the registered reports idea as well. Sorry.

Yeah, maybe to add: I think it sounds like a pretty ideal solution. If you want the best people in your field to take a look at a proposed research project, the people who would peer review a registered report are in an even better position than the members of an ethical review board who have methodological and statistical skills. So I think it's probably the best solution, to be honest.

Patrick has joined us now. He's got approximately 150 questions in the chat, but he'll just ask one, or maybe two.

Okay. I've also gotten permission from Fiona to make a small point about resources, jumping back a little bit in the conversation. One of the reasons I asked that question is that I think proposals to get more training in methods have been floating around for a long time; you can go back 10 years, or I'm sure much longer, and you'll see the same proposals. But the reason that hasn't gotten traction, in my opinion, is that you actually need to devote resources to it, which means either convincing the right governmental authority that this is worth spending money on, or redirecting resources that are currently being spent on other good things toward methodological training. And that's the thing that I think is hard. I'm not sure how to get there, but I think that's a tough nut to crack. The question that I wanted to ask, though, was about the politics of framing things as ethical violations or not. What I mean by this is that even if we're generally on board with the idea that questionable research practices do have an ethical dimension, and that there's something ethically wrong about doing them, it still might be better politics to frame them in a non-ethical way, because you might convince more people, or get more people to do the good stuff. So I'm just wondering where the panel falls on this question.

Daniel, do you want to go first? Well, Patrick, thanks for the question. You are probably right, but I wouldn't like it. You're probably right that people don't want this ethical sauce poured over things, regrettably. From my personal view, I just think this is an ethical issue. I think scientists are misbehaving; they should feel like they're violating some sort of ethical principle every now and then, because that's just what it looks like. And if you ask the general public, the public often also thinks that people are behaving unethically if they, for example, selectively publish. So we should call it what it is.
It's an ethical violation. But you're right: politically, we should perhaps just say that this is for their own good, or whatever. Paul?

Can I strongly disagree? I think most of it is not actually ethical misconduct; it's ignorance rather than willful, though of course there's a spectrum there. So I think it would be better to call it a methodological review process rather than an ethics thing, because otherwise everything we're talking about gets called an ethics violation, and that's six per paper, as we saw in those health services research papers. I just think that's unnecessary overkill, and it paints researchers incorrectly. Jane?

I think, answering that, and it also picks up one of the questions, at some stage we'll have to be more precise about which questionable research practices we're talking about. I certainly would say it's unethical to design a study where the questions you are asking will never address the research question. That's true of most of the behavioral economics signing-effect studies that I saw: the studies were not fit for purpose. And if your study is not fit for purpose, that is willful ignorance, unless someone, an undergraduate maybe, has managed to sneak something in. I agree there are some other things, about cleaning data and so on, where it might be simple ignorance. But I think we need to be clear what we mean. Do we think that, in a short paper, not mentioning that you dropped 3% of your subjects is excusable or not? I'm not sure whether the proliferation of all these things matters, or whether, coming back to Paul's point, some of this is about the perverse incentives for people who are doing what is called research. Dorothy?

Yeah, I think I'm in the embarrassing position of really being able to see everybody's point of view. I suppose part of the issue is that it almost doesn't matter whether you're doing it intentionally or not; it's having this very negative effect either way. So on balance I'd probably side more with Paul, in thinking: why not just explain to people that if you do this, it's very, very negative? The trouble is, of course, that once they know that, it does become unethical if they step over the line. But I think most of the people I come across are not intentionally doing something wrong, and they can get very upset at being told they're being unethical when they have in fact been encouraged to do it. I mean, if you think of Daryl Bem's advice to young scientists, it tells them to p-hack, it tells them to HARK. This is in a book published by the American Psychological Association, for God's sake. People have been told to do this, and the generation that read it the first time around have been training their students to do it. So I think the consequences are terrible, and, to come back to what I was saying, I just think we really need to turn that around and get people better trained. And I agree, it costs money. But Patrick asks: where are we going to get the money? Which other deserving causes are we going to take it away from? I think we've got a lot of undeserving causes that we could take it away from.

And I'm going to follow this up with a question about the logistics, Dorothy, of your driver's-license test. Oh, yeah. Put me right on the spot now.
So would this be only for human subjects research with certain statistics? How would it affect people who want to work across disciplines? Glenn Begley has apparently suggested a similar test, but for laboratory techniques like Western blots, and the logistics behind that are also unclear. So who would develop the test?

Yeah, I mean, of course, when it's put on the spot like this, it's a flaky idea; I hadn't really thought it through, partly because I never expected anybody to take it all that seriously. But I think it could be done, though it would need to be done in a way that had different routes through it. And I think it would be good to have it include animal research. As I said briefly earlier, a lot of QRPs affect people doing animal research, and that doesn't normally go through the human subjects ethics committees. So I think you would need modules that were specific to the methods that you use, and you would be passed on the modules that were relevant to your area. One of the reasons we have such difficulties is that many of our fields are changing very rapidly, so you are running to keep up anyway, because you're using methods that weren't around 10 years ago. Now, Jane will be horrified to hear this: psychologists are all diving into multilevel modeling, which I think is very easy to get wrong, much easier to get wrong than something simpler. So if you want to use a particular method, I think you would have modules that come with that method, and you'd be certified to use that method. Jane?

It may have changed since I was on a panel looking at animal research, but I think in the UK you can't do animal research without Home Office accreditation. So there is something there already; it's not necessarily perfect, but in the end it does come down to the supply and the priorities of statisticians. I could be mischievous and say that in England you can be a primary school teacher while hating maths and being incompetent at it. It's not really surprising we've got a problem at degree level if we think maths is such an irrelevant subject that we can't be bothered to have it taught properly at primary school, and we hand out A-stars to half the population.

So I think it's an interesting potential solution for animal research areas as well. But the other thing, and because Malcolm's online I'd be interested in his comments on it, is that in Edinburgh they've developed a thing called the Experimental Design Assistant, which walks you through the process of developing an animal research protocol. It constrains you to write protocols that are reasonable. I think if you're using that, that should be fine; you would then need to be licensed if you want to go outside it, to do something beyond what the Experimental Design Assistant would let you do. So we have to recognize that there isn't just this one tool for making sure that things happen correctly; there are other tools too. This one comes from the NC3Rs, though I think it was developed in Edinburgh. Anyway, the Experimental Design Assistant is specifically designed for protocols for animal studies.

Well, we are very rapidly running out of time now, but I want to quickly ask Marita's question. And this is something that I think each of you has touched on before, but it would be a nice thing to end on. So the question is this.
P-hacking takes place long after ethics review approval, when you're analyzing results. This is an example showing that ethics review boards are not bodies that can prevent all QRPs from taking place; the alternative solutions are preregistration, registered reports, and so on. What do you think about that? Is the ethics review the wrong place? Is the timing wrong?

I would say yes. I mean, that's one reason why I'm keen for something else.

I would say no, because I think that sometimes you can just see people walking into the trap of an infinite number of dependent variables. We ask them, what are you going to do? Well, we're going to manipulate this and this, and we have these dependent variables, this and this and this and this and this, because, they say, we need to know all of these things. And then we see this huge mess. And that's often a case where we would say: we think your future reviewers will be more impressed if you preregister your main predictions here, because we can see that this could go anywhere. So it's not a hundred percent solution, but it helps. Jane?

Yes, I'm very much in favor of both-and: both the short term and the long term. So whether you call them ethics review boards or methodology review boards, and maybe it would be better to have both names, there is a lot that could be done at the design stage. Yes, you also need the information, preferably preregistered protocols, at the publication stage. And we should be looking for short-term gains as well as long-term strategies.

And by complete accident, Paul, you get the last word.

If they were called methodological review boards, I'd agree, but remember that that's not the only place you can pick up the problems. The earlier the better; actually, the methodological review is late in the development of the question, so training, that sort of thing, would be a better preventive place. But then we also need to think about the other end, where the journal is looking at the registered protocol, et cetera, and making sure they can do that job readily. So there isn't one spot; the ethics review just happens to be the one that is convenient at the moment, but we need to think of all of the spots for correcting these errors. There are so many of them, to be picked up in so many places.

Well, that was such a great session. Thank you so much to our four debaters, panelists, I'm not sure what to call you. And thank you to all of you in the audience. I'm very sorry to the approximately 23 people whose questions I didn't get to. Remember that the Slack channel is open, and you may be lucky enough to find some of our debaters in there if you want to try asking those questions again, or to continue this discussion; I'm sure there's a lot more we could say about it. But for now: good night, or good morning, or good afternoon, and thanks again, everyone. Bye. Thank you all very much for attending. Thank you. Very interesting.