So I won't be talking about my main research; this is another project of mine. It's about questionable research practices in the context of research funding. It's joint work with Andreas De Block and Krist Vaesen, whom you might know. What I'm going to present is basically a survey among Flemish researchers on the prevalence of questionable research practices in the context of research funding. But I'll quickly start with a bit of motivation and conceptual background.

The motivation for the paper is that there's been a real boom in research on research integrity, and in awareness of the importance of research integrity, mostly around QRPs. I assume you're all familiar with QRPs: those are practices that violate what is epistemically and ethically desirable in science, but that aren't as bad as, say, plagiarism, falsification, or fabrication. So there's a lot of research now on QRPs and how prevalent they are. At the same time, there's been a lot of research on the way we allocate funding in science. What we typically do in the Western world is ask researchers to write proposals, which get peer-reviewed; then there's some kind of ranking of those proposals, and the top ranks get money. So there's research on both of these things, but there's very little to no research on whether there are QRPs in the context of research funding. Andreas De Block and I had written a paper arguing that, anecdotally, we think there are a lot of QRPs in the context of research funding, just looking at our own practices and our environment, and that conceptually we should expect there to be lots of QRPs, because there's competition and people will want to cut corners to improve their chances. So we made the point that we should probably expect a lot of QRPs, but that we don't actually know — and that's why we ran this survey.

Before I get to the survey itself, a bit on the QRPs themselves. For QRPs in research practice there are lots of lists, with things like not pre-registering your research and then presenting it as if it were hypothesis-driven, or mishandling your data, and so on. Nothing like that exists for QRPs in the context of research funding. So what we did is define them as whatever clearly violates scientific codes of conduct. We looked at the main codes of conduct that exist, and also at a code on peer review, because it's very relevant to the topic, and we tried to distill the core values that all these codes identify, map them onto each other, and get some kind of summary. These are the values we ended up with: accountability, honesty, impartiality, responsibility, and fairness. I should say all these codes are rather vague, they don't say much, and we could have made other classifications as well, but we think this captures the main points. So before I get to the survey, I'll quickly go over those values and say what we thought would be common ways in which they're violated in the context of funding. Accountability, which I think is a pretty sensible value, entails that scientists should be able to justify the claims they make and their actions, at least to the degree required by the context in which they're made. Can people read this, by the way, or is it too small?
Ways in which we thought accountability might be violated in the context of funding: when people write research proposals, they might intentionally overstate the certainty of their claims to improve their chances. The second one, and one that is very hard to avoid, I think, is that when we use grant funding, projects need to be peer-reviewed, and that's just incredibly difficult: you have to predict which science is going to be successful, and we know that's almost impossible. It's even harder when you have to sit in a panel and compare projects from, say, physics and philosophy, which happens on many occasions. So it's no surprise that when we look at the inter-rater reliability of peer review of grant proposals, it's basically zero, it's non-existent, and whether you get your grant or not depends to a large extent on which reviewers you happen to get. The point here is that when you peer-review a project, you're basically forced to make claims and evaluations that you cannot really justify, and definitely not to the extent required — because it's a lot of money that you're handing out and people's careers depend on it, so you would want those claims to be really well justified. A more innocent but probably also common violation: people have to review a lot of things, and when you get one of those projects of 80 or 150 pages, you can't really expect everyone to put in sufficient effort to properly review it, so probably people often won't put enough effort into it.

Then honesty, the second value: scientists should be accurate and transparent in their communication; they shouldn't present what they have to say in a misleading way. One way we think this might be violated in the context of funding is authorship violations. It happens quite often that a PI writes a project and the PhD student submits it as a fellowship application, or the other way around: a junior researcher writes a project and the PI submits it. Often that's just because of the rules of the competitions — only professors can submit projects, and if the professor wants to help the postdoc do their research, that's the only way of doing it. But still, the efforts of the actual author don't get credited, which is a clear violation. Another one that's not very common in philosophy, but in other fields it is — at least according to some interviews with scientists — is not being completely clear about your methodological plans in your grant, because you're scared the reviewers will kill the grant and then just steal the idea. One for which there is empirical research is double-dipping: submitting the same grant in different places and cashing the money multiple times without declaring it. And finally, another way of lying is putting irrelevant research outputs in your grant reports to make sure that the evaluation is positive and you have good chances in the next round.

Impartiality — that's a value formulated a bit negatively, but it comes back in all the codes of conduct — says that scientists shouldn't let their personal opinions, interests, and preferences influence, or let's say trump, the more general aims of science, like truth and epistemic value. This is very hard to avoid here, probably impossible in the first place, because cronyism is inevitable: people's networks are reflected in the reviews they write. But even other biases, like racism and sexism, seem very hard to avoid; there are many studies that find effects of those. Another one is applicants citing potential referees in their projects in the hope of improving their chances, or, when they can choose reviewers, choosing their friends.
Then responsibility, which requires researchers to take into consideration the broader interests of society: they're playing with societal money and they shouldn't be wasting it. One reason why we thought this is sometimes violated in the context of research funding is that applications sometimes have minimum budgets. When we ran a survey at KU Leuven, one of the things we heard from humanities researchers was: well, I would like a smaller project, I don't need a PhD student and two postdocs, but there's a minimum budget, so we have to apply for this bigger project. And when there's too much money, people will often use it for things that weren't described in the project. Particularly when you come to the end of your project and you still have some funds, you buy books for the library, or pool it with another professor to hire a PhD student, or whatever. It's not necessarily a waste, but it's also not what was described in the grant project; it hasn't been evaluated, so it is a questionable research practice. One that I think is particularly bad is grantsmanship consultants: these are people you hire to polish your grants, and even though they don't know the science, they are super effective — their success rates are incredible. In the zero-sum game that these funding competitions are, that just adds noise to the system. What we're looking for is not people who can write flashy grants but people who can do good science, so I think this is a waste of money, and it muddies the signal we're looking for.

The final value that most codes have is something very broad, like fairness; in some respects you can actually subsume all the other values under it, I think. One thing we put here is that it's actually against the codes of conduct not to report violations. And because we think many of these practices are very common, people who don't want to commit them are incentivized to do so anyway to have a better chance, and are unlikely to report them.

Now, you might say it's not clear that these are really QRPs. After all, they're very widely accepted; many of them people don't even hide from their colleagues. Many are also committed with the best of intentions: if you buy books for the library or use leftover funds to support your PhD students, that's not for your own benefit. But still, it goes against the codes of conduct. And finally, many of these practices are, in a sense, forced or strongly incentivized by the system: if you want to help science and peer-review grants, you can't help but be biased and give insufficiently justified evaluations. That's all true, but I still think they are clear violations of the codes of conduct, and so they are questionable research practices — ones that could be avoided if we didn't have to distribute funding in this way. What I do want to admit is that this is not really a problem of individual researchers. The problem, and I'll talk about this more later, is that the way we distribute funding incentivizes or even forces us to engage in these practices, so if we want to solve this, we could consider changing the system — but I'll come back to that.

Now to the survey itself. The survey consisted of a couple of demographic questions and then a set of questions for each of the roles you can play in the funding system: applicants, reviewers, and panelists. Respondents only got the questions for the roles they indicated.
For each of these roles we had QRP questions, which were of the form "over the past decade, how often have you engaged in a particular QRP", and then information questions, which were just questions to learn more about researchers' experiences in the funding system — how they interpret conflicts of interest, for example, or what the quality was of the reviewer reports they were getting. The survey was online and anonymous. We did a pilot with 139 ERC panelists, which we then used to finalize our analysis methods and pre-registered hypotheses and so on, and then we distributed the main survey through FWO, who sent it to their applicants, reviewers, and panelists.

Just to give you an idea of the questions we asked, these are all the applicant QRPs. We asked whether they ever intentionally overstated their confidence in their predictions, whether there were authorship violations, whether they put irrelevant outputs in their project reports, whether they asked for more funding than they needed, whether they used funding for unrelated purposes, whether they double-dipped, whether they presented research they had already done as planned to be carried out in the project, whether they cited selectively, and whether they suggested the names of friends as reviewers. The reviewer and panelist QRPs were mostly about whether enough effort and time was put into the reviewing activity, and then conflict-of-interest questions: whether they reviewed close colleagues, whether they let personal preferences or relationships influence their reviews.

So that was the survey design; then the study design. The main expectation, and the main thing we were interested in, was just the prevalence of QRPs, but we didn't pre-register hypotheses about that; on the basis of the pilot we already assumed they would be rather prevalent. To measure prevalence we used the individual item responses, so all the individual QRP questions, but also two aggregate measures borrowed from an influential study on research integrity. The first one was binary, at the level of respondents, and it was one if they had indicated four — the questions were on a scale from one to seven — for at least one QRP, meaning there is at least one QRP they engaged in more often than not. The other one was also binary at the level of respondents, and it was one if they answered at least two on at least half of all the QRPs; that measure gives us an idea of whether people engage in a broad range of QRPs.
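To make those two aggregate measures concrete, here is a minimal sketch — my own illustration, not the authors' analysis code — of how they could be computed from a respondents-by-QRPs matrix of 1-to-7 frequency scores; the array shapes and toy data are assumptions:

```python
# Sketch of the two respondent-level aggregate measures (assumed toy data).
import numpy as np

rng = np.random.default_rng(0)
scores = rng.integers(1, 8, size=(704, 9))  # 704 respondents x 9 QRP items, scale 1-7

share_above_never = (scores > 1).mean()     # share of *responses* above "never"
# Measure 1: at least one QRP with score >= 4 ("more often than not")
any_frequent = (scores >= 4).any(axis=1)
# Measure 2: score >= 2 ("more than never") on at least half of all QRPs
broad_range = (scores >= 2).sum(axis=1) >= scores.shape[1] / 2

print(share_above_never, any_frequent.mean(), broad_range.mean())
```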
We used these outcome measures to test two hypotheses. The first was that we expected men to engage in QRPs more than women; that was based both on the pilot and on research on research integrity in other parts of science. The second was a conceptual one: we expected people who self-reported higher success rates in funding to also self-report higher rates of QRPs, simply because we assumed that committing QRPs might improve your chances, so you would end up with higher success rates. An expectation that we wrote into our pre-registration, though it wasn't a real hypothesis, was that the biomedical sciences would have higher rates of QRPs; that's also what a lot of other studies have found. Because we tested these hypotheses with regression analyses, we had to be transparent about our causal assumptions, and we needed those causal assumptions to design the models. So these are the directed acyclic graphs that represent those assumptions. I should say these are very tricky to make, because these variables are probably all related in very complex ways, and it's very difficult to distill from the literature a DAG that you know makes sense. So we report them because these are the DAGs we used to design the models, and they are assumptions: if people want to analyze the data under different assumptions, they can just use different DAGs.

Can you explain the graphs a bit more?

Yeah, sure. These are the causal relations we actually wanted to estimate, the ones we're interested in: we want to know the direct effect of gender on engaging in QRPs. The reason we're interested in the direct effect is that in the literature there are hypotheses about why rates might be higher for men than for women — testosterone levels and so on — and all these hypotheses are about the direct relation. So we test the direct relation rather than the total effect, that is, the effect gender has through seniority and field and success rates as well; it would be harder to make sense of such a total effect.

And how can you isolate the effect of gender, the part that doesn't run through seniority or in conjunction with it?

So basically these DAGs tell you which variables to include in your models. If you want to know the direct effect of gender, you have to control for field, seniority, and success rates, because there is a gender effect through those as well: gender influences success, and success, we assume, influences QRPs, so if you want a good estimate of the direct effect you have to include it.

And success is just self-reported — how many grants you got?

Self-reported success rates, yes.

Then why did we use hierarchical models? Simply because a lot of the variables in the models are demographic variables, which give a natural structure to our data, and we had some varying effects over them. When you have a varying effect for field by seniority, with five fields and five seniority categories, you end up with very small groups of people — say, a woman in the natural sciences with 30 years of experience; there are maybe three of them in our entire data set. By using the hierarchical structure you can pool information between these groups, because they share a hyperprior, and even where there isn't much data you get somewhat informative yet conservative estimates. We used Bayesian models just because they go so naturally with the hierarchical structure and because they're very intuitive to interpret. The models themselves were logistic and ordered logistic regressions: logistic where the outcome measures were binary, ordered logistic where they were the ordinal 1-to-7 scores. I won't go into more detail on the models, but we designed them using the pilot data and using simulations, and we deliberately used multiple models and outcome measures to make sure the results were reasonably robust under different structures.
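To make that modeling setup concrete, here is a minimal sketch — my own reconstruction under stated assumptions, not the authors' code — of a Bayesian hierarchical ordered logistic regression of this general shape in PyMC, with varying intercepts for field and seniority pooled through hyperpriors, and the covariates the DAG says to include for the direct gender effect; all variable names and the toy data are assumptions:

```python
# Sketch: hierarchical ordered logistic regression for one 1-7 QRP item.
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n = 300
field = rng.integers(0, 5, n)        # 5 fields
seniority = rng.integers(0, 5, n)    # 5 seniority categories
gender = rng.integers(0, 2, n)       # 0 = woman, 1 = man
success = rng.random(n)              # self-reported success rate
y = rng.integers(0, 7, n)            # 1-7 response coded as 0..6

with pm.Model() as model:
    # Hyperpriors: sparse field/seniority cells borrow strength (partial pooling).
    sigma_f = pm.HalfNormal("sigma_f", 1.0)
    sigma_s = pm.HalfNormal("sigma_s", 1.0)
    a_field = pm.Normal("a_field", 0.0, sigma_f, shape=5)
    a_sen = pm.Normal("a_sen", 0.0, sigma_s, shape=5)
    # Per the DAG: control for field, seniority, and success rate
    # to estimate the *direct* effect of gender.
    b_gender = pm.Normal("b_gender", 0.0, 1.0)
    b_success = pm.Normal("b_success", 0.0, 1.0)

    eta = a_field[field] + a_sen[seniority] + b_gender * gender + b_success * success
    cutpoints = pm.Normal(
        "cutpoints", 0.0, 5.0, shape=6,
        transform=pm.distributions.transforms.ordered,
        initval=np.arange(6, dtype=float) - 2.5,
    )
    pm.OrderedLogistic("y", eta=eta, cutpoints=cutpoints, observed=y)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```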
Then the results. FWO sent the survey to a thousand reviewers and to their 748 panel members; we also sent it out through the FWO newsletter, but I assume nobody actually reads that. We had around 900 complete responses, and after applying some data-cleaning criteria we ended up with 704 usable responses. They were distributed over the demographic characteristics in this way — more or less what we expected for all of them, though note that for the continents there were of course lots of Europeans, because FWO is a Flemish funder. Other than that, we don't see a particularly skewed distribution: the distribution over fields and age categories matches what we expected. But it is a convenience sample, so we can't exclude sampling and selection effects; there might be a kind of collider bias influencing all these variables.

Then we also had a few questions about what we call funding characteristics, besides the demographic questions: we asked how many applications people had submitted over the past decade, whether they often lacked the funding to do proper research, and what their success rates were. These don't say very much by themselves, but it's interesting that there was a lot of variation between fields. The biomedical sciences in particular really stand out: they have to apply a lot, for very low success rates, and they report a lot of lack of funding. That was the main interesting thing there.

Then the results for the prevalence. This is the frequency of responses over all QRP scores, and you can see that 45% of all responses — that's the number of respondents times the number of QRPs — were larger than 1, which means more than "never"; but overall, most responses were either "never" or "almost never". 76% of all respondents had at least one QRP with a score of 4 or more, so one they engaged in more often than not, and 42% engaged in at least half of the QRPs at least more than never. This is probably not readable, but these are all the individual QRPs. The most common ones, perhaps surprisingly, were putting less time and effort into reviewing a proposal than you thought was required, spending unused resources at the end of the project, and intentionally overstating your confidence in your predictions. Those were really quite high, and most of the other ones were substantial too.

Then the hypotheses: they were both rejected. We didn't find a clear gender effect. These are the posterior predictive samples from the three main gender models — it's probably a bit too small to see, but the blue and the green overlap almost entirely; if anything there's a very minor difference in the direction of women engaging in QRPs slightly more often, which probably doesn't mean anything, it's just sampling noise. But it doesn't confirm what the literature finds about QRPs in other parts of science. Also against our expectation, higher self-reported success rates weren't associated with higher self-reported QRP rates; again we found the opposite, higher rates of QRPs at lower self-reported success rates. What was confirmed, though it wasn't a pre-registered hypothesis, was the field effect: QRPs do seem to be more prevalent in the life and biomedical sciences — they're the light blue; the other fields are more or less the same, but the life and biomedical sciences are clearly a bit higher in the posterior predictive samples. That might have something to do with the lack of funding and having to apply more at lower success rates: more opportunity to engage in QRPs, and more incentives.

So as for the prevalence, I think our study suggests that these QRPs are at least somewhat prevalent. And you should remember that these are self-reported QRPs: typically people underestimate how often they engage in QRPs, although many people might not think of these as QRPs because they are so widely accepted, so maybe that effect is weaker here than in other studies. Still, I think it's fair to say our estimate is a sort of lower bound.

Then the other questions we asked, the information questions, were about people's experiences with peer review, and they sketch a rather coherent picture of how people think about peer review of grant proposals.
31 or 32% of our respondents indicated that, more often than not, when they sat in a panel they thought there was no real quality difference between the bottom 10% of the projects that were selected and the top 10% of the projects that were not selected — so around the funding line it was basically random which ones got money and which didn't. Is that all fields? Yes, that's across all fields, all panelists. Then 35% said that, more often than not, when they had to compare proposals in a panel, no meaningful comparison was possible. When we asked them how much effort they put into reviewing, the self-reported numbers were rather low: 19% said that more often than not they didn't put in enough effort. But when we asked panelists how often they sat in a panel with someone who didn't put in enough effort, 40% said more often than not. We also asked them how difficult it is in their field to find a reviewer who neither lacks expertise nor has a conflict of interest, and that turns out to be very hard — which makes sense, because anyone who really understands the research is likely to be close to the applicant. Finally we asked them about the quality of the reviewer reports they had received: around 60% said that, more often than not, the reports showed a lack of expertise or were inaccurate; 50% said that, more often than not, they were biased against them; and 30% said they were biased in favour of them. Of course you should take all of this with a grain of salt — it's people who were rejected, who may well be bitter about their rejections — but even counting the reports biased in their favour, and even on self-report, quite some people say that more often than not there is a problem.

What does all this mean for deciding how we should distribute research funding? There is a growing body of literature on this topic, and that literature focuses almost entirely, and understandably, on the epistemic problems of peer-reviewed project funding. One rather well-supported finding is that it's incredibly expensive: estimates are that around 20% of the budget that is distributed goes to the process itself, and of course if you could use that 20% to actually fund science, that would be a huge gain. Most of the cost is on the applicant side: professors pour their time into writing applications, and professors are very expensive. I already mentioned the inter-rater reliability: there was a recent meta-analysis, and the picture is that once you set aside the weakest projects — say, within the top 80% — the inter-rater reliability is basically zero. That's a problem, because only 5, 10, or 20% of that money gets handed out: it seems we can more or less identify the weakest projects, but that's not what we need — we need to identify the strongest. And that comes back in studies that test to what extent peer-review scores are predictive of later success: most studies find no effect, some find a weak effect, one study finds a stronger effect. I should say that "success" here is measured with bibliometrics, so those are not perfect measures and should be interpreted with care, but peer-review scores don't say much with regard to the number of citations, patents, or applications that come out of projects.
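Just to illustrate how near-zero inter-rater reliability translates into near-random funding decisions at low success rates, here is a small simulation of my own — the reliability and funding-rate numbers are assumptions for illustration, not figures from the meta-analysis:

```python
# Illustration: two independent panels fund the top 15% of 1000 proposals
# whose scores are mostly reviewer noise; their funded sets barely overlap
# above chance.
import numpy as np

rng = np.random.default_rng(0)
n, rate = 1000, 0.15
quality = rng.normal(size=n)
score_a = 0.2 * quality + rng.normal(size=n)  # weak signal, strong noise
score_b = 0.2 * quality + rng.normal(size=n)

k = int(n * rate)
funded_a = set(np.argsort(score_a)[-k:])
funded_b = set(np.argsort(score_b)[-k:])
overlap = len(funded_a & funded_b) / k
print(f"overlap between panels: {overlap:.2f} (chance level: {rate:.2f})")
```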
Then the biases. This is maybe the best-supported criticism: cronyism is identified whenever it is tested. It seems impossible for panels not to be biased by their relations, and it's not just close relations — it really extends quite far through the network. And then there's racism, sexism, and other biases. My judgment is that cronyism is inevitable, while the other biases you might be able to mitigate if you set things up right. Then one problem that isn't as well supported, because it's very difficult to test and few studies do it, is that peer review is conservative: it encourages people to submit projects that stay within the paradigm. In interview studies, researchers do often state that if you write a project that clearly falls outside what everyone is doing, you're very unlikely to get funded. I think that's potentially a huge cost: if a field ends up in a bad paradigm, it might just stay stuck there, because the funding keeps flowing within the paradigm. Finally, peer-reviewed project funding probably has a tendency to overfund and underfund. What is well supported is that there is a funding sweet spot for many people and fields: an amount of funding at which they produce the most per euro. If you give them less, they can't do good science; if you give them too much, they don't have the capacity to spend it efficiently — for the biomedical sciences, for example, the sweet spot seems to be somewhere around 400,000. The problem with peer-reviewed project funding is that there are a lot of competitions, they all use the same criteria, and it's all the same people competing. So if you have a few people and one is slightly better than the others, that person might end up with all the money, which is undesirable: the slightly weaker person, who could have done a lot with some of the money, gets nothing, and the slightly stronger person gets overfunded.

So those are the epistemic problems. What we've added with this study is that QRPs are probably rather prevalent, and we think many of these QRPs are strongly incentivized or even forced by the system — the peer-review QRPs, for example. So we think there is an ethical cost to the system as well. We took these results to FWO, the Flemish research funding organization, and we tried to think of low-hanging fruit, easy fixes within the system for some of these QRPs. But we couldn't really think of any, except for double-dipping: a funder can check whether you've already submitted the application somewhere else. So there are no very easy fixes, and it makes sense to consider alternative methods for distributing funding. The two main candidates are lotteries and baseline funding, which I think make a lot of sense — they have a lot of epistemic advantages, and here I'll talk about the ethical advantages. Both of these systems are non-competitive, and because of that they remove a lot of the QRPs that we've been discussing. Firstly, we don't have to evaluate proposals in these methods of funding allocation, so all the QRPs related to reviewing simply disappear. Both systems are also a lot harder to game than the current one, so the QRPs related to gaming would go too: there would be no point in grantsmanship consultants, no point in padding your track record by salami-slicing your publications into a stronger profile, and people wouldn't have to apply for funding they don't even want just because their tenure file needs a certain number of grants. And finally, in both of these systems there wouldn't be any prestige or merit in acquiring funding, which would take away the remaining incentives. Of course, it's very easy for me to say that the ethical cost of these systems is lower; it's very hard to predict. These are very complex things — the whole funding system is so complex that it's very hard to predict what will happen if you change it so drastically.
So what we really need is experiments: we need to take part of our funding, distribute it in a different way, and then measure what happens. This is being done to a very small extent — New Zealand has one study, the Danish research council has one, the Volkswagen Foundation has one, the Swiss also use a lottery for postdocs — but it's very little, so I really think all big funding organizations should do more research on this. I've talked for quite a while now, so we can take a break, or just keep going if that's fine for everyone. Any questions?

As usual you have very provocative ideas, and I really enjoyed it. I've been on many panels, and I see the point about the randomness of the system. A lottery seems a very good system if the top 10% really are very close to the bottom 10%, and if all these QRPs are really prevalent. But in philosophy — I've sat on panels in philosophy — I've never seen the bottom 10% come anywhere near the top 10%. So maybe funding by project is the problem and we should find other ways to fund people; but it would be very difficult to sell, because, objectively speaking, we're not a "hard" science and there is a lot of garbage — real garbage, really bad stuff. So how should we distribute the money? I can understand that in some sciences the top and the bottom are very close, but certain disciplines seem to have much wider distributions. Maybe in philosophy we shouldn't fund by project; we should fund the person, or use other systems.

I don't know whether there really is a big difference between fields in that respect, but what we proposed to KU Leuven was exactly that: for some of the humanities disciplines, like philosophy, project funding really doesn't make sense; it's a model imported from the sciences. What we proposed instead was baseline funding for the person, and then after four years you evaluate and see whether the person wasted the money or not. If they wasted it, you cut the funding; if not, you continue it. Then you have the small cost of a few people wasting money for a few years, but you don't have the cost of everyone writing project proposals and sending them out to reviewers. I think something like that would make sense.

But you know why we switched to project funding in the social sciences? In France, for example, for a long time it was funding by lab, and there was a strong bias against young researchers in that system. So we switched to funding by project, saying it's better for young researchers, because at least they can bypass the lab director. So maybe in funding by person, young researchers should get some kind of priority.

Yes — in the current system we have this problem of people with strong profiles getting all the money, so what we proposed was to give all young researchers a fixed amount of money, and once you're senior enough you have to go to external competitions to find your own money. The rich universities in Switzerland do something like this: they have big grants for young researchers, and it seems to work very well — though with that much money, of course it works well; it's very difficult to test. In general, the problem with all these things is that there are lots of intuitions you would want to test, but the experiments are very expensive, you need a long time frame to actually measure anything, and big success in science is so rare — maybe 1% of projects — that you would need decades and massive studies to actually measure it.
So it is tricky. There's also the fact that you had a few intuitions which seemed extremely plausible, and some were not really confirmed, which I found quite odd.

Which ones are you thinking of?

Well, the hypotheses you had — all of them. I thought, well, sure, of course — and the results were kind of the opposite. You must have been surprised as well?

Part of the explanation might be that it's all self-report. We ask people to self-report success rates, but that's across all different kinds of grants, and how do you intuitively average that? So I don't think it's a very reliable measure. And it's also self-report of the QRPs, and maybe there's some association — maybe people who think they're very good also say "I never do anything wrong". So it could be very complicated.

Like you said — on the panels I've been on, the people who put in the most time were often the people who had the most success before; they say, it's my moral responsibility to really put in a lot of time. So that really breaks the causality in that kind of system. And maybe sometimes the crooks are just crooks all the time. To really test those hypotheses you would need a better measure.

Thank you, that was fascinating. When I saw the hypotheses I also thought they seemed perfectly reasonable, and when I saw the results my mind instantly shifted to: of course, it's just self-report, everybody thinks they're great. But you might also have predicted the reverse — that in self-report people would be more honest than when observing others — which is interesting. And that takes me to the slide where you showed people's estimation that their reviewers were inexpert versus their estimation of themselves. We've all had the scenario where reviewer 2 is an idiot, but we've never been reviewer 2. I wanted to invite you to say a little more about that distinction between self-reporting and estimation of others, because if the researchers themselves think this is a problem, and they see it in other people, they just never see it in themselves, then maybe your study actually shows that this is a general problem — it's just always invisible to the person themselves. Does that make sense?
Yes — there is something that comes back in all research integrity studies: when you ask people about observing something, the rates are higher than when you ask them about doing it themselves. Of course, you also have more information about yourself, so there's always this reviewer-2 trade-off; I don't think there's an easy fix for it, apart from some kind of model that corrects for it. But there are studies about people underreporting QRPs in self-report surveys, so at least we know the true rates are probably higher than what we see here.

I'm completely convinced — I mean, I'm the right audience, and I feel like you're also preaching to the choir, because we all know that grants don't do the work they're supposed to do, namely to control quality and assign money according to that quality. So I think this is really good evidence to show to FWO, to say: peer review as a quality filter is not good, and it actually drives QRPs. I'm completely sold on that. I'm more interested in what happened when you presented this paper to FWO: why are they against it, and how can we find ways to pressure those institutions to change? I see your work here as evidence for evidence-based research policy — changing research policy based on actual evidence and hypotheses and data — and it seems that those institutions are not responsive to evidence, which is weird, because they're supposed to oversee how evidence is constructed, yet they themselves don't respond to it. So how can we, as individuals, pressure those institutions to change? I'm curious about your interactions, as an expert on the topic, with those institutions.

Very difficult interactions — I wasn't very experienced with them before this, and I am now fairly frustrated. For example, when we made the point about inter-rater reliability, they said: well, we already knew that. Which is rather dark, I think. The same for the use of leftover funds: they said, we're completely fine with researchers using the funding for different purposes, because they're responsible people — but that's basically baseline funding, you're just giving researchers money to decide for themselves.

One thing we should understand is that those policy makers have good reasons to be a bit risk-averse, because it's they who take the risk, not us — it's public money, and they will be held accountable.

That's true, but there is good evidence that money is being wasted. For example, when you tell them that around 20% goes to the funding process itself, aren't they struck by how inefficient that is?

What they see is that FWO, for example, always says it's a leader in research in Europe and in the world: we're doing so well, why would we change anything? Of course, everyone uses the same system, so that doesn't say much about the system itself, but they think everything is going well and don't want to make drastic changes. I have no idea how to get things to really change. We have been successful in a few small things: FWO has promised to organize a lottery in the middle category of projects. They had a nine-category scoring system for the internal funding — with A-plus-plus, A-plus-minus, and so on — and now it's just A, B, and C: they will fund the As, reject the Cs, and run a lottery for the Bs. That's what they promised, so that's something, and there are a few other small things they promised to change.
They also used to review all applications in the same order, and we know from other fields that there are order effects in evaluations — whoever is evaluated first or last benefits or suffers from it. So those things, the very small things, you can get them to change. But bigger changes — I'd have to do research about this; I can't say anything sensible about an easy roadmap towards changing it. Institutions just move slowly; it will take a lot of evidence and a lot of time, and then small changes will come.

And what about the open science movement as a way to embed this kind of evidence — looking not only at how we exchange data but also at how we exchange money? I don't know how big the open science movement is here in Belgium, but it could act as a pressure group. I know that in the Netherlands the open science movement has also moved into the research funding arena, and they seem to have been successful, at least in raising these issues with research policy makers.

I don't know how big it is in Belgium either, but there is definitely overlap between the two communities. For example, another prominent proposal for alternative funding methods is to make scientific citizenship the main criterion — whether you actually publish your data and so on. Belgium is probably behind compared to the Netherlands; at NWO they have a commission that will study their own funding allocation process, and we've been in touch with them. So there's definitely stuff moving, all over the world.

I want to bounce off that, because I have a very similar question. I was actually kind of shocked that FWO was willing to send this to a thousand of their reviewers and all of their panelists. What was that email exchange or telephone call like? How did that go? Were they just, this seems like a really great idea and we're really interested in the results? Were they on board from the beginning?

Actually, they were. We were also in touch with ERC and NWO, and there it had to move through fifteen committees and people before we would get anywhere; at FWO we just had this one person who really helped us and sent it out, so that was great. And then they invited us to give a talk, and the whole board of directors was there. But the interactions themselves always turn defensive — or we are too aggressive in our communication, it's hard to say. It's difficult, because you're attacking them in sort of the core of what they're doing, this peer review of project funding; you're almost saying that what they've been doing for the past twenty years is bad, while they think it's all going very well. So it's very difficult not to let it slide into this defensive, antagonistic kind of communication. I've never managed to avoid that in meetings with them, and I've really tried.

I have a question about how you define QRPs: they're defined in opposition to some codes of conduct. But in fields where your purpose is partly to convince someone, aren't some of these QRPs somewhat epistemically virtuous? Writing persuasively is part of the job — can't being good at convincing someone that you're the
right person itself be a sign of quality?

Yeah, I see your point. We're getting back to the grantsmanship consultants here. I think this just adds noise to the system and makes it harder to get at what we really want. Maybe for some of the best researchers it happens to be the case that they're also very good at convincing, but for the system as a whole it's just a cost. It would be better for everybody if we could measure quality directly rather than convincingness: quality is what we want, but we have to use convincingness as a proxy for it, so people optimize for being convincing rather than for quality, and then we don't always get quality.

I don't think I saw it, but did you measure, or ask the respondents, to what extent they considered these QRPs to be wrong or actually quite fair? What was their feeling about the practices they were maybe involved in — did you try to measure that?

We considered it, and in the end we decided to prime them very strongly to think of these as QRPs: we say, this is a study about QRPs, we think these are questionable practices, and we want to know how often people engage in them. We thought it would get very complex if we asked questions like that, because people would start adding an extra layer of judgment to their responses. The other side of the trade-off, of course, is that now maybe people didn't think of them as QRPs.

The title was the ethics of funding, but you talked about efficiency, costs and benefits — so what is ethically at stake, according to you?

My main interest is in the efficiency and the cost-benefit side, but I do think there is an ethical cost: some of these QRPs are just ethically wrong. It's ethically wrong to pad a grant report with irrelevant articles; it's ethically wrong to use money to buy books for a library if society didn't give you a mandate for that. But I think this ethical cost is small compared to the epistemic cost, the efficiency; it just adds to all the other elements.

Let us imagine you're right, and we do a lot of studies about funding, and we discover that in some discipline we should simply pay permanent people — because one of the reasons FWO is so attached to project funding is that they chose it much more strongly than FNRS: FNRS kept all kinds of other schemes, while FWO went almost completely over to project funding. So let's imagine you have data showing that for this discipline financing people would be optimal. Do you think people would accept it, given that it means not treating everyone in an equal way? Wouldn't they scream?

That's a good question. In general, I think these sorts of intuitions in the population play a big role in this debate. Take lotteries: not many people have read the papers with simulations and know all the details of the ways we can run lotteries, but people have very strong opinions about them, like "a lottery is no just way of allocating money". So I assume those kinds of problems would be inevitable.

And if you talk about researchers: if I say the optimal way to finance philosophy is completely different from the optimal way to finance biology — a completely different procedure, a completely different evaluation, because obviously philosophy has nothing to do with bio — and it would be even
empirically grounded — could you convince the FWO to treat philosophers completely differently than biomedical scientists?

I mean, there are funding agencies that use different budgets and cut-offs for different disciplines, which makes all the sense in the world, because research is very cheap in some fields and very expensive in others. But usually the forms are exactly the same, and the evaluation, with all the problems we discussed, is exactly the same. There's a methodological part in the grant form, and for philosophy that part is always very complicated.

Don't you think that if people accept different amounts of money, they would also accept different forms? They might even feel more welcome within forms that fit their field. When I have to fill in the methodology section and I'm not sure what my methodology is — I have to say that when I'm on a panel, that's always the most obscure part, and I would love for people to be forced to explain better how they plan to do their thing. It would be interesting if you thought a little more about that. But that's a different matter: you can say, okay, medical studies need a lot of money, so people can easily accept that the amounts differ. That quality should be evaluated in a completely different way — that's a much touchier subject, and not only among the public, the taxpayers; some researchers too will resist the idea that quality should be evaluated completely differently across fields.

I feel like some people actually demand that this should be the case, right? You have anthropologists who say: we should not be held to the same standards of data sharing when we talk about ethnographic interviews as when we talk about biomedical trial data. Especially in the humanities, people do demand to be treated differently.

But do you think people would accept that the central agency evaluates each field in a completely different way?

I think so — why wouldn't they welcome being treated according to disciplinary standards rather than top-down?

My intuition isn't so sure. Though I think it's partially already the case: a colleague applied to a department of computer science, and for them a lot of the evidence of quality and creativity came from the talks she gave — they thought, she's a genius, she has fifty talks — which for us is a very weak signal. So standards do differ. I do wonder, though: if philosophy gets baseline funding and biomedicine has to write grant proposals, the biomedical researchers might object.

Sorry, I have to jump in — I did have one more empirical question. You mentioned having looked at QRPs by gender, by success, and by field. Another one I would have had an intuition about is QRPs by how well funded you feel — a desperation effect, right? Did you look at lack of funding?

We asked them: how often over the past decade did you not have enough funding to do proper research?

Was there a correlation?

Nothing very strong.

Okay. I would have had the vague idea that the more I feel I desperately need funding, the more I'm going to sex up my grant applications in potentially questionable ways.

There was a weak effect for some of the fields, but only weak.

Okay, cool — your survey is an intuition destroyer. Damn, our intuitions are a bad measure.

So we're thinking of running the same study at NWO, because we had already been in contact with them before FWO, and they have this committee, so we
might still do that.

Cool, yeah, that would be really interesting. It would be really wild if you saw some kind of radical country effect. I don't have any intuitions about that, other than that I've heard that in general the Netherlands is just a more competitive academic environment; they generally feel more pressured than we do.

So in Belgium about half of the funding for research is still block funding — the government giving it to the universities directly — and only the FWO part, a smaller share, is project funding. In the Netherlands the proportion of project funding is a lot higher, so that's one big difference you might see reflected.

Sorry, I have to go — thanks, and good luck!

Thanks. One more question? I'm wondering — this is also just an intuition — whether the directness of the questions is a weakness here. If you're filling in the survey, it's self-evaluation, and from the questions you showed I would quickly gather that you're trying to get at my own questionable research practices, so I might get a little defensive, which would be perfectly fair. Whereas if you had made it broader, with many different kinds of questions — how do you find the strategies people use, whatever — respondents might not have been able to distinguish the QRP questions from the other questions, and might have answered more freely.

That's true. There's of course a trade-off with the length of the survey, and there's the issue of deception: if you want to justify deception to the ethics committee, you always need a good reason for deceiving people. We just kept it as transparent as possible, and as safe as possible for the people answering honestly: it was an anonymous survey. What another big survey has done is add randomness to the answers themselves. They had a particularly clever design, so that for no individual respondent could you figure out whether they answered yes or no to a QRP question: for about 80% of respondents the square would mean yes and the circle no, and for 20% it was the other way around, assigned at random. So for any individual researcher you couldn't recover the answer, but in aggregate you can still estimate the prevalence. I didn't know about it before we ran ours, but that's something we could consider.
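For what it's worth, here is a small sketch of my own showing how that kind of randomized-response design still lets you estimate aggregate prevalence; the 80/20 split follows the description above, and everything else (names, true prevalence, sample size) is assumed toy data:

```python
# Randomized response: each answer keeps its normal meaning with probability
# p = 0.8 and is inverted with probability 0.2, so individual answers are
# deniable while the aggregate prevalence remains estimable.
import numpy as np

rng = np.random.default_rng(0)
p, true_prev, n = 0.8, 0.30, 10_000
truth = rng.random(n) < true_prev            # who actually committed the QRP
normal_mapping = rng.random(n) < p           # True: answer means what it says
observed = np.where(normal_mapping, truth, ~truth)

# P(observed yes) = p * prev + (1 - p) * (1 - prev), so invert for prev:
est = (observed.mean() - (1 - p)) / (2 * p - 1)
print(f"estimated prevalence: {est:.3f} (true: {true_prev})")
```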