So I suppose, since I'm putatively the moderator of this panel, I'll get started. I won't introduce you all; you can do a better job of introducing yourselves. But I did want to set some ground rules for what we're going to talk about. Stephanie and I, as journalists, are not scientists. We don't really have training in science, so we're going to leave the error detection up to Elisabeth and James, and then she and I can talk about some of the downstream implications of the work that you both have been doing: how it fosters or undermines trust in science, and what it all has to do with openness. So, Elisabeth, since you're on the top right for me at least, why don't you get started and talk a little bit about your background, how you came into the world of error detection, and a little about that very interesting path. I assume that most of the participants here know a little bit about you, but you've really been doing some remarkable work looking for problems with images in particular.

Yeah. I also have some slides, because how can I talk about my work without actually showing something? So, if that's okay, I am going to pull that up. You should be seeing my slides now, I hope. All right, great. So I worked 15 years in academia at Stanford, working on microbes that live inside our bodies. As a side hobby, I started to work on detecting plagiarism and image problems in scientific papers, and I turned that into my full-time job about two years ago. The work that I do is looking at images in biomedical papers, and I specifically look for duplications. I cannot find a good Photoshop job, but I can find an image that overlaps, for example, with another image, or that is duplicated in another paper. I've pulled up a couple of examples here that show the different types of duplications that I've spotted in papers. On the top left you can see a bunch of panels, and I've marked, with red boxes and with blue boxes, two sets of panels that are identical. This is a simple duplication, and it could just be an error. Still, these should not be the same photos, because they represent different experiments, so it's still an error and I feel it should be corrected. These are all from papers that have been peer reviewed and published; I'm looking at these papers and finding errors like these. So that's simple duplication. But I'm also finding repositioned duplications; those are the two examples shown on the top and the bottom right. You'll see overlapping panels, or Western blots that are shifted, or they could also be mirrored or rotated, things like that. Those repositioned duplications are a little bit less likely to be a simple error: maybe this was intentional, or it's very sloppy, somebody just didn't label their samples very well or even rotated them. And on the bottom left you can see the worst examples: duplication within a photo. In panel A, you'll see lanes one and three, marked with a blue box. Those look identical to each other.
And in panel D you see three lanes that all look identical to each other. Basically, this is like looking at a photo of a dinner party and seeing Uncle John three times; that's not what you would expect in the same photo. It seems very likely that this photo was Photoshopped, and that this was done intentionally.

So I looked at 20,000 papers and scanned them all by eye. I found about 800 of those papers to contain duplicated figures like the ones I showed on the previous slide. That's 4%. We estimated that half of those were intentionally duplicated. And I reported all of these to the journal editors, so 800 papers were reported by me. We are now five or six years later, and unfortunately, even though these all contain big errors or even intentionally Photoshopped images, only 40% of these papers have been corrected or retracted. If you focus on the left bar, you can see all 782 papers that were reported five years before: about 28% or so were corrected, about 10% were retracted, and 60% have not been touched by the journal. They're still out there with their error, or their Photoshop, or their overlapping image. I think that's very frustrating. We all say science is self-correcting, but it seems that most journals do not respond to these things, or take a very long time, or don't address these items at all. So those are my slides and my background, and I look forward to hearing from the other panelists.

So I have a question for you. When you started, did you have any preconceptions about why you were seeing these problems? Did you think it was mostly misconduct, or mostly honest error, or maybe neither? We do know from the retraction literature that, of retracted papers, roughly two thirds result from misconduct, and a lot of that statistic comes from the work that you've done. But as you say, half of the problems that you found were not misconduct. So were you surprised by that? Were you surprised how sloppy, or, I mean, shoddy the work was? And what have you learned since you published that paper, which was several years ago? Oh, and I should add: why don't you tell the audience how easy it was for you to get that paper published.

Oh, that was very hard. That was very hard. I think we submitted it five, six, seven times before it got accepted. In the end we just put it on bioRxiv; that was still pretty revolutionary in those days. We submitted it to several journals and it kept being sent back: "we don't believe that you did this." I heard that a lot. "We don't believe that a human can scan 20,000 papers in a reasonable amount of time, and we don't think you are right; there's no gold standard. You claim that these are duplications, and who knows, if you have that gift, nobody can check you." So yeah, it was very hard to get it published.

And based on... oh, well, I'll let you answer the first part of the question, which was: were you surprised at what you found? And then if you could also say: you looked at a slice of 20,000 papers, but there are 2.1 million papers published a year. So can we assume that it scales up 100x, or...?

Yeah, it's hard to know. To answer the first part: was I surprised? I actually now suspect that more than half of these are done intentionally.
But it's hard to know. If I look at a paper and I see a duplication like that, and an author writes in and says "oops, I made a mistake, here's a new set of images," it's hard for me to know if they're speaking the truth, or if they quickly sent in something that wasn't Photoshopped. When it's a Photoshop, when it's that category three, those duplications within a photo, I'm pretty sure it's intentional, but there's a lot of gray zone where I'm not quite sure. So yeah, it's hard to know. We guessed half of them were intentional, but who knows, it might be more. And what was the other question?

So, looking prospectively: you looked at 20,000 papers, but that's a tiny fraction of the number of papers that are published every year. When you look at the publications since, say, 2019 or 2020, are you seeing the same level, if you were to do a sampling? Or is it getting better or worse?

I would say that in the journals that I scanned and found these duplications in, it's getting better; it's going down. Those journals are in general a little bit more attentive now to these things and try to catch them before they get published, somewhere during the submission process. PLOS ONE in particular has greatly stepped up and increased its guidelines for image preparation, so they're finding more of these, and though I haven't checked, I would expect to find fewer of these if I scanned their papers now. But I also see a big influx of papers that are produced by paper mills, which are, well, we don't really know, but they appear to be labs that make fake scientific papers and sell them to authors who need a paper for their career. Those don't always contain duplicated images, but they can contain a whole range of other problems, and they're on the rise for sure; in some fields they might have taken over, such that almost every paper in particular fields might be a paper mill paper.

Well, I encourage the participants to go to Elisabeth's blog and look at some of the work she's done on paper mills. We've certainly covered the fruits of that when it comes to retractions, but I believe there are many other paper mill papers that have yet to be retracted; the number is in the hundreds and probably deserves to be in the thousands, if not more. Let's shift over to James, who's also a recovering academic, but fully recovered. Congratulations. Let's talk a little bit about what you do. Elisabeth looks at images; my eyes don't work that way, my brain doesn't work that way. But you came up with a couple of data analysis tools which in theory are easy for a humanities major like me to apply. So talk a little bit about why you did that, GRIM and SPRITE I'm thinking of, how that came about, how you applied them, what you found, and whether that sort of thing is becoming more common.

Well, all the tests that we have, which between me and, first and foremost, Dr. Nick Brown number four now formally, although we have a variety of other methods for crowbarring methods sections and raw data open, all of those arose because we were retrofitting to things that we didn't trust. It was never a matter of my sitting around contemplating statistical mechanics in the abstract.
It was a matter of: we see things that can't possibly be true, and we need to find a kind of empirical loophole within how the numbers were generated, or the process they arose from, that says "this is not a reliable observation"; something that, set in context, will give us at least a partial answer as to whether or not the numbers we see can exist. And generally, when they work, they work very well. It's a matter of their application, like most things. What were the other components of the question? You're one for these multi-part questions. Sorry.

So, well, let me jump in first to say: you and Nick Brown had this, for lack of a better word, bullshit meter, which triggered in you the desire to figure out how to retrofit some tool to look at the data. And this is largely a rhetorical question, but I want to hear your answer: why weren't journal editors and peer reviewers catching these studies, and we can talk about some particular ones that really, you know, didn't pass muster, before they published them?

That's a good question. Well, there's a proximal and a distal reason for that. The proximal reason is that in general you review only the paper that comes in front of you. You're in a hurry, and peer review has never really seen its job as interrogating the information that's in front of it. It sees itself as interrogating the process that's in front of it. So, you know, things get slighted. At least in my experience, and I've had plenty of people tell me that my paper is naughty and that it shouldn't be published, it's a matter of "this was conducted incorrectly," "you took the wrong observations," "you haven't read a sufficient amount of background." Rarely is any form of computational reproducibility part of that equation. That's the proximal reason. The distal reason is that a lot of the time, when we've had skepticism about a certain author or a certain paper, it's in the environment of other things that they've done, things that they've said in public, things that have happened elsewhere. So I'll give you an example. A paper comes in front of your desk, Adam; you're now the editor. It's a perfectly fine paper, maybe a bit trivial, but perfectly all right. Let's evaluate that paper. Well, what we would do is say: 21 first-author papers, most of them sole-author papers, in a year, involving field experiments, with no funding, where half of these models sound like something that was invented by a particularly inattentive undergraduate? I don't believe that this process is happening on a kind of macro, career scale the way that it's being presented. Same with a lot of the COVID research right now: you did an RCT on patients that turned up within that particular time window, you got everything together in, what, three weeks? Then you recruited all of those people, they all followed the instructions precisely, they all managed to finish the data collection, and everything is perfectly okay? I don't believe you.

So, and I'm going to display, I think, some philosophical ignorance, but is that sort of a Bayesian approach to looking at scientific research and literature?

No; I'd call it the skeptical approach. Bayesian analysis in some forms informs what we do, but it has only the loosest kind of collective association to the actual process itself.
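(For readers who want the mechanics behind "numbers that can't exist": GRIM, the Granularity-Related Inconsistency of Means test that Brown and Heathers published, asks whether a reported mean is arithmetically possible given the sample size when the underlying data are integers, such as Likert responses. Below is a minimal sketch in Python; the function name and structure are illustrative and are not the authors' published implementation.)

```python
def grim_consistent(reported_mean: str, n: int) -> bool:
    """Check whether `reported_mean` (as printed, e.g. "3.48") can be
    the mean of n integer-valued observations.

    Integer data summed over n items must give an integer total, so only
    means of the form total/n are possible. We test whether any integer
    total near reported_mean * n rounds back to the reported value.
    """
    decimals = len(reported_mean.partition(".")[2])  # digits after the point
    mean = float(reported_mean)
    nearest = round(mean * n)
    # Allow for the reported mean having been rounded either direction.
    for total in (nearest - 1, nearest, nearest + 1):
        if abs(round(total / n, decimals) - mean) < 1e-9:
            return True
    return False

# A mean of 3.48 from n=25 integer responses is possible (87/25),
# but 3.49 from n=25 is not: no integer total produces it.
print(grim_consistent("3.48", 25))  # True
print(grim_consistent("3.49", 25))  # False
```

A run of GRIM-inconsistent means in a paper reporting integer-scale data is exactly the kind of impossible observation James describes. SPRITE extends the same idea by reconstructing which distributions of raw integer data could jointly produce a reported mean and standard deviation.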
I'd like to kick it over to Stephanie now, because you mentioned COVID research, and Stephanie is doing more than just about any other person on the planet, certainly more than any other person with a typewriter, to inform the public about what's trustworthy and what's not when it comes to COVID research. Stephanie, could you talk a little bit about your process when it comes to evaluating science when it's informing a story that you're writing for BuzzFeed? How do you approach it? Let's talk about the ivermectin story, for example, which is really important.

Yeah, sure. So I just want to start off by saying that James and Elisabeth have done so much amazing work to improve science, and science is better off because of them. I'll just speak briefly to the small part that you and I play in this whole ecosystem as journalists, with the caveat that I'm getting over a cold, not COVID, so forgive me. So yeah, I'm a science reporter at BuzzFeed News; I've been here for six years. And as we've all established by now, science is a very human enterprise, and humans get it right a lot of the time but mess it up occasionally to often. So Elisabeth and James and data sleuths like them, their job is to find errors. In a lot of my work over the last few years, I've had an overlapping but distinct task, which is to try to understand how those mistakes come to happen in the first place. Often errors are a result of unintentional sloppiness, but sometimes there does seem to be more of a willful intent to deceive or cover things up. And so my goals as a journalist are, hopefully, that the scientific community will learn from these stories lessons that will help it get things right in the long run. I want to echo the journalism panel yesterday in saying that I also hope the public will learn to associate science with principles that are core to science but aren't always communicated clearly: nuance and uncertainty and gradual consensus, rather than black-and-white answers and solutions. And then finally, a thing I've learned about science is that, frankly, scientific institutions tend to be extremely stubborn and opaque, so by reporting on them I hope to try to make them a little bit less so. So, in my job, reporting consists of interviewing people, asking questions and getting answers on the record, filing public records requests. I can consult experts who don't have a dog in the fight. I can independently vet both sides of a story and lay it all out for readers, and I can use my position at a mainstream media outlet that reaches millions of people to try to get answers out of opaque institutions, or to try, anyway.

The ivermectin story is something that I wrote about a couple weeks ago. For those who are maybe not aware, ivermectin is like the COVID drug of choice; hydroxychloroquine was 2020, but 2021 is all about ivermectin, this deworming drug that's been around for decades. A certain segment of the population is convinced that it can either prevent you from getting COVID or cure you outright, and a lot of these people are choosing to take it instead of getting vaccinated, which is very concerning because there's no reliable evidence for ivermectin. So anyway, the story I wrote about most recently on this: there was a clinical trial done in Argentina.
It ran last year, and it said that it gave ivermectin to hundreds of healthcare workers in hospitals in Argentina, and that the ivermectin was 100% effective at preventing all of them from getting COVID, which is pretty remarkable; 100% is pretty amazing, I would say, can't really get better than that. But then these two data sleuths, Gideon Meyerowitz-Katz and Kyle Sheldrick, started looking more closely at the study, which was being cited by people who were pro-ivermectin as a reason not to get vaccinated. And they found a lot of issues with it: data for the participants just didn't match up internally. Then you look closer, and supposedly four hospitals carried out this experiment, but one of them said that it actually never participated and has no idea why it's in the study. Officials in Buenos Aires said that they had no record of ever approving the study. And this study was published within a week of submission, in a journal that's published like 10 articles total in the last two years; that's its whole lifespan. So the alarms were raised by Gideon and Kyle, and I was very interested in this because, obviously, ivermectin is, as I said, a very hot subject right now. I just started trying to gather evidence and ask questions, trying to verify to what extent this trial was actually conducted as advertised. That involved talking to all these doctors in Argentina and then finally talking with the investigator himself, who says that the study was conducted exactly as described in the paper. Nevertheless, there were a lot of inconsistencies, and that's what I tried to lay out in the story, and let people come to their own conclusions. But it's an interesting case where, you know, we've talked about journals rarely doing anything about problems that are brought to their attention. This journal, which is the Journal of Biomedical Research and Clinical Investigation, changed its submission fee after I asked how much it charged scientists to submit to the journal. And then it took the study down for like three days when I asked all these questions about it, and then put it back online, saying the scientist had told them he explained everything to the media, so that's why they put it back online. But it did change the hospital that said it never participated to "other peripheral medical center" in the study, without actually changing or removing any of the data. So, you know, journals are not great gatekeepers normally, and this one seemed to have done a less than thorough job in reviewing it. Some have no gates at all.

So let me ask you, as a science journalist, and if you could talk about the misconduct cases or other related cases you've worked on: what sort of thing rises to the level that it's going to get your attention? Is it all about clicks? Is there something that appeals to you, Stephanie Lee, in particular? And have you noticed over the last six years any common threads in the stories that you've covered when it comes to difficulties in science with honesty, say, or openness?

I'll start with the first question, which is just how I decide when to write about something. It's a good question. As you know at Retraction Watch, there are a lot of errors happening in papers all the time. I cannot investigate every single one of them; I simply don't have the bandwidth.
Thank you for doing the service that you do. So I try not to bring more attention to the debunking of something than what it originally gained. I'm looking for influence: on policy, on people's health, on culture. And I'm looking at money: did the theory borne out by the study help somebody make a lot of money, and who were they? A couple years ago, and I don't feel like anyone attending a metascience conference needs a refresher on Brian Wansink, and James knows the case better than us all, but I'll just very quickly recap that case. This was 2017, 2018: the leader of the Food and Brand Lab studying food psychology and marketing research at Cornell. He was on the Today show, on magazine covers, had tons of grant money, had tons of papers out. And his whole thing was science-backed strategies to help you lose weight and eat better without really overhauling your diet or your exercise, just by making small changes to your environment. This made him the most famous food psychology researcher of his kind, I would say. But as James and other data detectives found out, dozens of these studies had shoddy data that just didn't add up when you looked at them from the outside. And what I figured out, by obtaining emails between the researchers involved in doing these studies, was that they were actively engaging in starting with a headline conclusion that they thought would be really interesting and then cherry-picking or otherwise p-hacking their data to get to that conclusion. And so, I mean, this was somebody whose advice had touched so many lives, had influenced policies, had influenced what schools were trying to buy for their cafeterias to get kids to eat better. He had bestselling books; his impact on the culture and on the scientific landscape was huge. And so I felt it was worth digging into that, to explore and to explain to people that this science you thought was so solid is maybe definitely not so much. So I look for influence in sort of the conventional sense. I would also say there are many other ways that harmful or shoddy science can be influential. Like with the ivermectin study: I remember when we put up the story, somebody tweeted at me that the study is not influential, it was published in, to use their words, a predatory journal that nobody reads. But the study had been cited at a US Senate hearing, and, excuse me, it was talked about on Joe Rogan's podcast, and if you don't know, Joe Rogan's podcast reaches millions and millions of people. So influence is not just about the number of citations a paper has, even though that's the conventional thinking in the scientific community; it can take all kinds of different forms, so I'm paying attention to all the forms it could take. And I've now forgotten whatever else you were asking me.
I'd like to move on, because it occurs to me, and this is for James and Elisabeth, and maybe more for Elisabeth: what's the goal in doing what you've been doing with error detection and sort of policing the literature? Ultimately, where do you see this going? Obviously we're never going to completely do away with misconduct, never going to do away with shoddy research, so what do you hope to achieve?

For me it's just flagging papers that might have a problem and making that available to other readers, because, as I've shown you, it takes too much time for journals to respond to these concerns if you raise them in the official way. So I just post everything on PubPeer and hope that people use PubPeer during their literature searches, just to make the reader aware that there might be a particular problem in figure three of a particular paper, or there might be a problem with a particular author. I just hope that I can scan as many papers as humanly possible, that's sort of my goal, and flag what I think is of concern. And of course I cannot detect a lot of problems; I would not have been able to detect the Surgisphere problem. I focus on the things I can do best, and I hope that others do what they can do best and flag that, because in the end, if you base your research on a paper that contains either an error or misconduct, you might waste as a researcher a lot of effort and research money trying to replicate something that never happened. We need to warn other people to proceed with caution and maybe not base their research on this particular paper.

A two-part answer to that. I can't do what Elisabeth does, and I've tried. One single, solitary time I have found a visual error in a manuscript in the wild, and I was so happy; I did a little dance for a while, wildly out of proportion compared to some of the more numerical problems that we've found. But my interpretation of that question, Adam: you're asking what's essentially the end game. Where do we end up? Why begin in the first place? The simple answer is, when we discovered that this was a thing, and I was working with Nick early in 2015, and he was just this mad old man I'd met, it was compelling. It was an interesting problem. And I saw a gigantic disparity between what I thought was the importance of the problem and the amount of attention that was being paid to it. The end game is a little more complicated. Globally, from the perspective of structural funding and trust and support, the interface between science and the public, and so on, I feel science has slid to a certain degree within the public consciousness in a way that annoys me. And often there is simply not enough: there are too many people and not enough money. This is a structural problem we have built for ourselves in the kind of global environment of fiscal austerity of the last couple of decades, where we've managed to fill the hole with PhD students whom we can eventually disappoint when they meet the realities of the job market. And there are not a lot of structural tools where you can push any button whatsoever to say: the publication environment is overheated, a lot of papers shouldn't have been published in the first place, much of this is never read carefully.
Somewhere between a quarter and a half of editors are actively bad at pursuing quality control, as opposed to the other metrics that are relevant within a journal, and everyone is right to have a certain degree of skepticism, within the bounds, of course, of the fact that science is real and that empirical truth can actually be established. Now, there are very few things you can do in your underpants on your couch in the middle of the night that allow you to make that point incredibly loudly, to the extent where someone like Stephanie would write about what you're doing.

So I'd like to ask: you talk about editors and their indifference, and it occurs to me that you and Elisabeth are both, in a sense, outsiders in this endeavor, and so you can bring a certain amount of distance and maybe cynicism, skepticism. There are some editors who have been instrumental in uprooting a tremendous amount of fraud in their particular literatures. I'm thinking, for example, of anesthesiology, which is the reason we got into Retraction Watch to begin with, and Steve Shafer, who was the editor of a publication called Anesthesia & Analgesia. I wouldn't say single-handedly, but there weren't that many other hands helping him, he uncovered three of the largest fraud cases in science history, certainly in modern science history: Fujii; Scott Reuben, who was sort of the first; oh, and Boldt. And the rest came after, right? So if you look at the Retraction Watch leaderboard, at least two of the first three are anesthesiologists, and Steve was responsible for that. But I will say that his predecessor at the journal was responsible for burying the Fujii case for a decade, when he refused to retract a paper that clearly had falsified data. So your point is well taken. But as outsiders, do you think you're helping the patients while at the same time possibly risking public trust in science, by exposing so many problems that maybe the public, whom Stephanie writes for, will think, hey, what's going on here, these people are all corrupt? I know it's a little bit of a straw man, but please address it anyway.

It's not a straw man at all. The immediate response to that is that trust is supposed to be earned; it's not supposed to be that when you are within arm's distance of a pipette, everything goes away and we have a blanket assumption that everything is conducted appropriately. And the corollary is that we really, genuinely are supposed to keep our house in order. If you want to wear the mantle of trust, of "we are the people who push forward the collective knowledge of humanity," then you have to have tremendous internal scrutiny of your own processes when mistakes are made. And there needs to be a solid appreciation of the work that goes into pointing out the fact that they exist. I mean, your point from Anesthesia & Analgesia is extremely well taken, because that is a medical journal, and those are generally the fanciest journals that we have: the ones that sell the most advertising, that have been around the longest, the longest running, the best known. It is a proper journal with a proper impact factor, for any given value of "proper," and it's reasonably well regarded within those circles.
Now, if we're talking about the editors in that case: they went one for two on whether they'd be willing to deal with this, and you can imagine what happens to a regular scientist when you turn up and say, literally none of this paper that you published years ago happened. The disinterest is wonderful and stunning.

But I want to hear what Elisabeth thinks, because I always want to know what she thinks.

I am worried that, you know, people like Nick and James and I, hopefully, are trying to find errors, trying to clean our house, but lots of journal editors are not responding. They're not responsive, or they're listening but just taking whatever other set of images the authors have submitted and issuing a correction, when I'm looking at the image and it was completely Photoshopped. I'm seeing lots of gullible and maybe even corrupt editors who are just not addressing these things, and I am very worried that the general audience is looking at all these retractions we've had with COVID-19 and will conclude that science is not to be trusted, because, you know, all these papers shouldn't have been accepted in the first place, and if other people then find big errors, then how can we trust the whole system? We already have so much misinformation about science online that part of the work I'm doing could be misinterpreted as "science is not to be trusted," and I hope that people actually look at people like me who try to clean up science. But we need more editors and publishers to work on retracting or correcting all these papers. And I see a cat in the view, so I'm completely distracted now. But yeah, we need more action, and faster. I've been in a lot of talks where words like "stakeholders" are used a lot, and if I hear that word I think, okay, that's a beautiful word, but there's no action. We need action; we need institutions who take action as well, and there's too little of that.

Stephanie, does that ever cross your mind when you're reporting on a story? You know, gee, I'm worried that this one is really going to chip away at the public's confidence in science and scientists. Or is it just: if you came to my attention, then you deserved it, so tough luck?

I kind of feel the latter. I mean, again, I'm looking for instances that people are going to have heard about, again by way of: did they raise a lot of money because of this, did this change policies, did this affect people's health? So I look for high-stakes situations, to sum it up. So yeah, if an idea that is the foundation for real-world adoption is found to be shoddy, then I think it deserves to be exposed, because so much attention gets paid to "this amazing discovery was made," and there's inherently so much less attention paid to "oh, it didn't actually work," except in some rare instances. And I agree with James that trust needs to be earned by any institution, by any set of people; you can't just be granted it automatically.

When we say all that out loud: scientists can be of two minds on this. They can be incredibly distrustful of other scientists, of other experimental techniques, of the processes of other labs, and they have their own internal communication networks for whether or not something is worth paying attention to. It's very critical; it's very cutting.
But at the same time, the idea of the public losing collective faith in what we do is regarded as unthinkable: you can't possibly do that, we're the good people. We think we're the people who are supposed to be acting within this particular domain. Well, if that's the case, you can damn well act like it. And you can go out and justify the fact that you wish to be at the top when they survey all the professions for who is the most trusted and wonderful. If you want to be at the top, then act like it and put processes in place such that this is the case. And there's a certain degree to which, for anyone any of us kicks off a pedestal over something like this, think of the number of people who don't get removed, the number of things that can't be pursued. We are individual people, in general, working on individual projects, and it doesn't get done, which ought to be seen as concomitant with the fact that there is a very small amount of deeply critical analysis going on within any realm of science. COVID has changed this to a very minor degree, because in general it's just resulted in a variety of different forms of cheerleading among people who are all very much enjoying that kind of mutual distrust.

Yeah, these questions are stacking up. That's exactly what I was going to do next. There was a question, much earlier, really a statement, about the financial reasons to be unscrupulous, which you dismissed a little bit. Okay, so what are the incentives at play here? Why are people doing this?

For the vast majority of the time... there are of course instances where people are reporting their phase 2 trials the wrong way around because they're trying to raise Series B funding for their company, and they needed something written in a scientific journal to be concomitant with all the amazing things they were supposed to be doing. But in general, it's simply the fact that attention and publication are currency within academia, and there is no direct financial incentive until you're talking about the second- or third-order effects: I continue to be employed, or I get promoted, or I get the job I want. There's no immediate financial incentive most of the time.

But I mean, does that essentially amount to the same thing?

Probably not, when there are people out there who are faking things in the immediate sense that they can literally make money. Because that's very much a thing, and there are areas where it's more common: medical devices, some areas of biotech, places where I've personally seen what feels like people lying for money in the immediate sense. But the rest of it, honestly, is people trying to navigate a system where the demands on their time and their lives leave no room for doing things that are reliable. There are some really marvelous stories on your website, Adam, which occasionally pop up, where the junior group leader or the postdoc didn't have enough time to finish the project and didn't have funding, and the walls were falling in on the project, so they just sort of made up a few things they probably shouldn't have, and then everything turned out fine, of course, until they got caught.

Elisabeth, do you have anything to add to that, about incentives and why people behave badly?
I'm seeing a lot of paper mill papers, and that appears to be clearly driven by incentives. These are people who are finishing medical school, and this is particularly happening in China, where if you finish medical school and you want to work at a hospital, you're not really interested in research, you want to cure patients, and you don't have time to do research, but you still have to publish a paper. So this is an incentive that makes no sense for these people: they're generally not interested in doing research, but they need to tick that box, they need to have a paper, otherwise they cannot become a doctor, they cannot get a decent salary. And so they'll buy a paper, and there are paper mills catering to these authors. I think that's one of the incentives: if the incentive doesn't make any sense, if you ask people to do impossible things, then people will cheat in order to reach that goal.

Yeah. I do think that it doesn't take much reading around the subject to figure out that monetary incentives pervert a lot of enterprises in human behavior. If you don't believe me, read the book about WeWork, which is a pretty great story about a way to make $10 billion and not have to do very much. And there are some data, and I might get the figure wrong, but there are some studies which suggest that one to two percent of scientists admit to misconduct in some form or another. So if we assume that one to two percent of every field cheats, then it almost doesn't even really matter what the incentives are, right? The bigger problem than cheating in the context of misconduct might be cutting corners, which isn't necessarily misconduct, but, as James said, you're under the gun and you've got to do x, y, and z because that's the demand. So is that really fabricating data? Not necessarily; it could just be choosing to p-hack, for example, or something else.

But I do want to move on: someone also raised a question about what to do with meta-analyses, which is a fascinating question, because so much of the literature could potentially be tainted. I mean, we're not talking about half, but some small fraction. So what should you do? There are some tools; some library tools, Zotero and others, will track retractions, and you can look at that, though it doesn't catch everything, and it doesn't catch all the corrections and other things. So what do you, James and Elisabeth, advise authors when it comes to trying to make sure your meta-analyses are up to snuff?

Well, I have little experience or knowledge of meta-analysis, and I'm very nervous about them, because I get a similar question a lot, and I can just look at a meta-analysis and it looks fine to me. I have my specialty, and this is definitely outside of my comfort zone. I've seen some very bad meta-analyses where basically the same study was counted two or three times, like one was the preprint and one was the published study; that seems bad to me, but that's about the extent of my understanding of meta-analyses. I wish we had more people who would make that their specialty and start looking for errors in those papers.

Yeah. Sorry, go ahead.

Let me put a bookend on that. We just published something; I think Kyle is here, hello Kyle.
Gideon, Nick, Jack, and myself just published a little correspondence in Nature Medicine, which essentially said, and this is specific to plague-centric drug trials but could really be read more broadly: if we are going to do meta-analyses of papers that we can't trust, then we are going to have to look at individual patient data. This whole ivermectin thing has been such an astonishing own goal for humanity in general and the scientific establishment in particular that we cannot rely on simply trusting whatever turns up, rating something that is numerically childish as "low risk of bias," and then glomming it all together into a reasonably simple, omnivorous result that says everything is fine. It won't do. We can't accurately answer questions like that anymore; people like me have to get involved, and then you have to rewrite the meta-analysis where previously you said, everything is fine, we should treat people. This research has killed people. This research has affected government health care policy in places that have a shortage or an absence of vaccines. There has to be a point past which, when the question is sufficiently important, we don't simply accept that meta-analysis of individual summary statistics is a good idea, because it's obviously not enough scrutiny. The worst study in the whole world can get through a checklist risk-of-bias assessment, come out the other side, and then add fuel to a fire that essentially amounts to: organize your country differently and hope people don't die. I mean, we have the tools to be able to do this; we have the collective understanding of how to navigate the ethical procedures; we know how to anonymize data. Obviously the idea is not particularly novel, but it's pathetic that it's coming up again now and that it's us doing it, a sort of hardscrabble bunch of weirdos working in the middle of the night. I'm going to stop.

Sorry, James, you froze out, but I think you made your point. Good, unless you want to make it again. Okay. I'm just scrolling through the questions; I'm sure I missed some things. Somebody wrote: what can I, as a PhD student with no extra funding or financial resources, do to (a) look out for problems in any given paper and (b) avoid making basic errors in my own work? In other words, what are the best error detection practices? I can't answer that question, but perhaps you two can.

Well, look, here's something that's happening from that perspective. The entirety of everything we've done, well, the test development certainly, has been completely unfunded. And I think one of the lasting annoyances that I have in general is that someone hasn't simply turned up and given an enormous bag of money to Elisabeth to do her work. It's a particular bugbear of mine that something so obviously necessary and effective and so well done is somehow not within the realm of things that could be considered for funding. It's ridiculous. So if there's anyone out there who's listening: fund her, not me. I have a job now that I have to do for money, and it would probably be a distraction at this point, and also her stuff is better.
So, what can you do yourself? The vast majority of the computational tools, and the sort of headspace that you develop through using them, is not something that costs any money. If you read everything that we've ever done, the tools are available. The statistical code is available; SPRITE, for instance, is available in three different programming languages. It's simply that there has not been a formal acceptance of this work, a nexus that takes it out into the syllabi of the world. It's not picked up and formalized and taught, and that's not something that I can do myself, and that's really the reason that people don't encounter it.

I will say, and that's unfortunate. John Carlisle, who's a British anesthesiologist, developed the modeling to look at Yoshitaka Fujii's data, and that has been sort of systematized in the anesthesia literature in a way that, you know, might be instructive as a model for other things. It sort of serves as a module or a plug-in or something. It's pretty complex, more so than SPRITE and GRIM, but he had no training in statistics other than being a medical doctor when he volunteered to go create this system. So I guess the bottom line is... you seem dubious, but...

No, no, I don't think it's particularly complicated to do these sorts of tests. That one was updated to work on "table one," the baseline data in a trial, but it's something that already existed: Ronald Fisher wrote about it a million years ago, and Carlisle had the tremendous foresight to apply it in context. It's also a nice case of something like this being turned into a procedure, because it's simply a matter of a computational requirement over a series of numbers: you plug it into a thing, and the answer presents itself. I mean, it's a little bit more complicated than that, but not much. There's an excellent preprint on this, actually, that walks everyone through it; it's a couple of years old now. I think they added his name to the test, and now it's the Carlisle-Stouffer-Fisher.

Yeah, it'd be nice to have that link if you could post it. But the bottom line is: if you're so inclined, you could make your own tests, right? It doesn't seem like there's particularly any magic to it; it's just inclination and the willingness to do it. Any more questions? We're almost out of time. Stephanie, do you have any final thoughts about, you know, the state of coverage of science these days?

Well, talking about solutions, or attempts at solutions: I know it's like a broken record at this conference, but posting data in full, if it were required across the board, would alleviate... would make all of our lives so much less busy and so much easier. I recently did a story about Dr. Dan Ariely of Duke and that study that he did in 2012 about signatures: signing at the top of a form, it found, would make you give more honest answers than if you signed at the bottom. And a key experiment turned out to have been based on fraudulent data, which wasn't discovered until eight years after it was published, in 2020, when researchers tried to replicate it and it didn't work. And it was only when they posted the original data that other people were able to go into it and see that it didn't add up.
And so from 2012 to 2020 you see kind of a nice evolution of that science, and of behavioral economics in general, going from not posting data being the default to more people posting being the default. So if there's one thing I think would make a big impact on transparency and on error detection, it would be that.

We didn't have time, unfortunately, to talk through the Ariely story, which is unfolding as we speak; it's very interesting, and I'm sure we all have our thoughts about what's going on there. So I'd like to take this time to thank the presenters. I found it fascinating, I hope all the attendees did, and I thank the organizers for convening this great hour; really interesting stuff. Thank you all.

Thanks, everyone. Thanks, everyone; thank you, Adam, for hosting us. I didn't have a chance to show my cat; he's not cooperating. Oh, next year. More pets. My cat's name is Niza. He hides, and generally I try to keep the cat from going up on the keyboard.
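(A coda for readers curious about the Carlisle-style test discussed above: the core idea is that in a genuinely randomized trial, comparing the groups' baseline "table one" variables should produce p-values that are uniformly distributed, and Fisher's method collapses them into a single test. The sketch below is a simplified illustration under that assumption; the function names are mine, and Carlisle's published method additionally handles rounding, discrete variables, and simulation that this toy version ignores.)

```python
import math
from scipy import stats

def baseline_pvalue(m1, sd1, n1, m2, sd2, n2):
    """Welch two-sample t-test p-value computed from the summary
    statistics (mean, SD, n per arm) for one baseline variable."""
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    t = (m1 - m2) / se
    # Welch-Satterthwaite degrees of freedom
    df = (sd1**2 / n1 + sd2**2 / n2) ** 2 / (
        (sd1**2 / n1) ** 2 / (n1 - 1) + (sd2**2 / n2) ** 2 / (n2 - 1)
    )
    return 2 * stats.t.sf(abs(t), df)

def fisher_combined(pvals):
    """Fisher's method: under genuine randomization the baseline
    p-values are uniform, so -2 * sum(log p) follows a chi-squared
    distribution with 2k degrees of freedom."""
    statistic = -2 * sum(math.log(p) for p in pvals)
    return stats.chi2.sf(statistic, 2 * len(pvals))

# Combine the p-values from every row of a trial's baseline table:
pvals = [baseline_pvalue(54.1, 9.8, 40, 54.0, 9.9, 40),
         baseline_pvalue(71.2, 12.1, 40, 71.3, 12.0, 40)]
print(fisher_combined(pvals))
```

A combined value very close to 1 means the baselines are implausibly well balanced, the pattern that exposed Fujii; one very close to 0 suggests failed randomization. Either extreme is a reason to look closer, not proof of fraud.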