So, as Fiona said, our first speaker today is Rink Hoekstra, who is going to outline a case he experienced to do with editorial misconduct. He's based in the Netherlands and unfortunately can't be with us today, so he's pre-recorded a video for us, which I'll share now. Hopefully everybody can see it.

First of all, thank you, Dan, for organizing the session and for inviting me. Can everyone hear me okay? I think this is a very relevant and important topic that we do not discuss often enough, and my role here is to talk about one particular case of clear editorial misconduct. This happened about 10 years ago. I had finished my dissertation a few years before and was still in the process of trying to publish a few of the unpublished chapters. One of these chapters compared the interpretation of confidence intervals and hypothesis tests in a similar situation; the details don't really matter for this story. We deemed this paper appropriate for the journal Educational and Psychological Measurement.

For years I've been a bit hesitant to talk about this publicly, and to publicly name and shame the journal and its editor, but I've decided to do it anyway. The way I've justified this to myself is that editors have a lot of power in the current academic system. I don't think I'm punching down when talking about this; I think I'm punching up, and the issue is important enough.

[Interruption from the audience: the slides were hidden behind another window, only the cursor was visible and the screen was black for a minute, so the recording was restarted.]

Sorry, can you see it again now? I do mention the journal, the publisher, and the editor. Okay, so I submitted this paper. It was reviewed within a month, and I received two reviews and an editorial decision.
So it was quick, but the sad part was that it wasn't positive. The email I got from George Marcoulides, the editor at that time and still the editor of the journal, was, well, you don't have to read the entirety of it, but basically a rejection of the paper without a chance to resubmit: "I'm not optimistic that a rewrite would improve matters." So apparently it was bad enough to reject straight away.

Okay, so we were thinking about where to submit next. And then something weird happened, because on the very same day I received this rejection, I got an email from Fiona Fidler, whom I knew from two previous conferences we had both attended, where we were both in the same session. She wrote: I reviewed a paper that I think was yours. It was anonymous, but I recognized it from the conference we both attended, and I wondered what the status of this paper was. I wrote back: curious that you didn't receive the reviews and the decision, but it was rejected. Then I had to wait another day, because of the time difference between Europe and Australia, and I received about seven emails from Fiona, ranging from "hey, I'm sorry for you" to "there is something really bad going on." She had compared the review she submitted with the reviews I was sent, and it was clear to her that the two were, not completely different, but quite different. At least they did not match, which you would expect. And they did not differ on tiny details; there were some clear differences, which I will show in a minute.

So what was different between the two reviews? Well, Fiona wrote in her review, I believe in one of the first sentences: "Such studies are immensely important at this stage in psychology, in statistical reform, and this one is a good example." So apparently she was pretty positive about my paper.
What I received from the journal was: such studies *can be*, not *are*, but *can be* immensely important. Apparently this one was not, and indeed, "this one still needs work." That was a sentence she did not write. Another example: she wrote "below are some minor concerns." I did not receive this sentence; it was simply removed, which changes the tenor of her review quite a bit, especially when the constructive remarks she made were turned into negative remarks. And there was also a completely new paragraph. The entire paragraph is new, but I've highlighted part of it here: "Here is where I see the problem: participants in the study are not a clearly different population..." Well, again, I'm not going to discuss the particular study, but this happened to coincide with comments of the only other reviewer. So there were two reviewers, and now the two reviews were more similar than they had been before, which makes the decision a little bit easier for an editor. At least, that is my speculation about why this happened in the first place.

So how did the editor respond? Because of course Fiona approached him. He said: well, let me mention that a similar incident occurred before, where the system somehow blended and even distorted reviewer comments on a manuscript. That sounds very, very weird indeed. Hard to believe, actually. He said he would come back to it in a couple of days, and he did. He then wrote this email, which includes one of the weirdest paragraphs I've ever read, which is this one: "I am told that on very rare occasions, when the system is undergoing maintenance and several editors are simultaneously processing papers, the links between individual reviewer documents submitted to the journal and forwarded to an editor can be muddled." You don't have to be an academic to immediately see that this doesn't make any sense whatsoever.
Nobody would believe this, I think. "This apparently occurs more frequently when strings of characters across documents bear overlapping similarities." There are a lot of words here, but they don't mean anything. This is completely made up, I would say. And it's not only me saying that: years later, the editor himself implicitly admitted as much, as I'll come to.

So what happened? I was asked to resubmit. Suddenly, "rejected without a chance to resubmit" turned into, well, relatively minor revisions. So it was easy to publish the paper after that. You could say that's also an ethical issue and that I should have refused. Maybe at this stage of my career I would have done that, but back then I didn't have tenure and publications were too important to me. So I did accept it, which you can criticize me for. It wasn't sent back to the reviewers; after a week I received confirmation that the paper was accepted.

And then, surely, Sage asked the editor to step down? No, of course they did not. In an interview years later, a journalist asked Sage what they had done, and they said: well, we addressed the issue directly with the editor at the time. Because of course Fiona had made them aware of it. She sent them an email, they said this is unacceptable and we'll do something, and she basically never heard back. Then for years nothing happened. Fiona and I contemplated what to do with it and, well, we never really knew. Then at some point we came across a journalist, Cathleen O'Grady, and she was willing to write about this, among other cases. This particular case was mentioned in a piece she published last year. Marcoulides was asked to give a response, and here is his admission of guilt, I think, though it's very weak: "In hindsight I should have contacted her rather than attempting to resolve the problem on my own."
Note that this is something completely different from the system somehow mixing up two similar texts, or whatever he said back then. Here he basically admits that he resolved the problem on his own, which is basically saying: I wrote other sentences. He then says that he still sometimes edits reports for clarity or to remove inappropriate language. He doesn't say so explicitly, but by equating the two it almost sounds like: well, I tried to make Fiona's review clearer, and maybe I removed inappropriate language. Apparently he did it for good reasons, and of course I don't believe it.

So, to wrap up with some final remarks. You could say: why talk about this? It happened 10 years ago, and that's true; maybe something has changed in the meantime. Well, the editor, George Marcoulides, still holds his position, and from the outside at least I can't see anything changing at this journal. Of course I don't publish there anymore, but if you look at the policies, nothing seems to have changed. And yes, this is a single case, and we have to be aware of that. By no means am I saying this happens regularly; I have no idea about that. Maybe Dan can talk about that later, but we don't know a lot about it yet.

But I do still think it's relevant. First of all, because apparently this can happen in the current academic system, and apparently we also condone it, maybe even enable it: even after it happened and was made public, nothing happened. Ten years ago, maybe even a few years ago, I would have said the self-correcting mechanism eventually solves this, right? If it's found out, if it's made public, of course they get rid of him. They did not. So apparently we don't think this is important enough to justify strong measures, even in this case. I think this points to a lack of accountability in academia, and it's a topic we should discuss more.
Thank you for your attention.

Okay, we're going to move straight into Dan's talk next. I'll just start sharing my screen. Can everyone see and hear me okay? Yeah, we do have a couple of black boxes around the slides, some gray squares at the top in the right-hand corner. Okay, is that any better? Yeah, it is. Okay, great.

So, just to follow up on Rink's presentation, I thought it might be good to give a little bit of background on some of the issues in this space, obviously to pick up on some of the things Rink mentioned specifically, and, before we get into Ginny's talk on COPE and what COPE is, to go through some of the topics we discuss when we talk about unethical practices in editing and publication. So, to get straight into it: this is probably a bit of a strong quote, but the former editor of the BMJ, Richard Smith, once said, 10-plus years ago, that in what has been called the age of accountability, editors have continued to be as unaccountable as kings, and that stories of editorial misconduct are growing.

As I was writing these slides, and as I was watching talks like the ones by Elisabeth Bik and James Heathers, I noticed that when we talk about unethical or erroneous practices, we usually talk about the authors of those practices. You know, Elisabeth Bik spoke this morning about the number of people who do image manipulation, and Adam Marcus talked about Fanelli's finding that 2% of people admit to misconduct, those kinds of things. And I think on the back of this we tend to put editors and peer review in the role of catching fraud and policing the literature.
It's not surprising, then, that we see somewhere between 95 and almost 100% of researchers in STEM and the social sciences believe that editors and editorial boards are jointly or entirely responsible for managing these issues. But what we've also noticed over the last 20 years is that we hear more and more about these practices, no doubt due to the ease with which people can now communicate, through things like Twitter and PubPeer and the great job that Retraction Watch does. But given just how opaque peer review is in general, it's not surprising that peer review scholars consider understanding and investigating editorial misbehavior a high-difficulty and high-priority research area.

One of the common practices most people are probably familiar with, and which tends to get a lot of airtime, is requests for citations. There have been a few surveys of academics and researchers about this, and in one, by Wilhite and Fong, they found that about 20% of over 6,000 researchers said they had previously been coerced, or strongly encouraged, to cite literature from a journal. One of the examples they give: "You cite Leukemia only once in 42 references. Could we please ask you to add more references to Leukemia." Obviously this is an egregious example, but there are other, maybe reasonable cases: if there has been a bit of a scholarly oversight, or some important literature has been missed, you can understand why people might do this.

We've also noted editors publishing original work in their own journals. In a survey that we ran, we found that 87% of the editors we surveyed were supportive of this for some or all editors. And again, you might not think too much about the submission of editorials.
But when it starts to get prolific, and I think Dorothy Bishop wrote a blog about this, calling them prolific editors publishing in their own periodicals, then it starts to go in the other direction.

In terms of some other things that get discussed: I was going to populate some numbers in here, but I didn't get the chance while Rink's presentation was playing, so I'll post these at the end. As Elisabeth Bik mentioned this morning, she had reported some image manipulation to the editors of journals, and five years later about 60% of the editors had not done anything. So we talk about retraction, and a hesitancy to retract, as well as disguising retractions and sometimes just disappearing articles. We talk about bias in terms of gender, prestige, nationality, confirmation of prior publication, lots of biases. There are instances of speeding up and slowing down peer review, as well as IP theft and failures to disclose conflicts of interest, and then, what Rink has been talking about, the notion of editors changing reviewers' comments. So these are some of the things that get highlighted in the literature.

To focus in a little more on the notion of editors altering reviews, I thought I'd give a quick background on some of the things I've come across in looking into this. Interestingly, there are historical reports of secretaries, back in the days of the Royal Society of London, rewriting reviews, and they did this to disguise the referees' handwriting. And I couldn't resist putting this in: obviously this did lead to some people getting a bit upset about it. This is a piece in Nature in 1871 by an Italian-born British astronomer, Charles Piazzi Smyth. He had submitted a paper that I think they sat on for seven months, then rejected, and then published something similar.
And so he basically wasn't very happy about it. In particular, he got a bit of a nasty response, by the sounds of it, after asking to know the identities of the people who were refereeing his paper. He goes on to say that he can't understand what any scientific society in the present day has got to do with "the accursed thing in all national history represented by secret committees, secret members, secret judgments, veiled prophets," and so on.

Moving to more current issues. As Rink touched on in his presentation, we do see instances of editors removing offensive language from reviews; there has been a COPE Forum case specifically dedicated to this. We see instances of editors correcting "false statements," quote unquote, as well as removing excessive self-citation requests. We see this practice of ultimatum editing coming up, which I'll explain in a second, as well as Rink's situation, where the recommendations of reviewers actually get changed. In terms of ultimatum editing: there was a case where the editor basically said, accept my chops to your rejoinder and get it published soon, or take your critique elsewhere; it's a bit of a long story, that one. There was also a case where the editor put an ultimatum to the reviewers to make the changes or rescind their reviews.

And this is something that was sent to me by an editor-in-chief of a journal that I know; this is what editors-in-chief actually see now. They get a prompt saying: please check the comments that reviewers have intended for the author; edit where necessary.

Just to quickly go through: we performed a survey, because we were interested in these practices. It ran in 2019, and we looked at some of these peer review practices and policies, and also asked editors their views on some of these issues.
To focus in on what Rink described, we did ask editors under what circumstances they thought it would be acceptable for an editor to alter a reviewer's report without their permission, and we put seven situations to them. These are the results we got back, ordered from most acceptable to least. About 60% of editors told us they think it would be okay to remove comments that a reviewer left in the report specifically for the editor. And, surprisingly, 8% of editors said they think it's okay to alter a report if they disagree with the recommendation. I was going to put in the numbers from the audience again, but I won't be able to do that, so I'll post my slides at the end of the talk when I've updated those numbers.

Just to wrap up the presentation, I thought I'd focus quickly on the policy landscape for editing reports. I mentioned there's a COPE Forum case where they discussed an author getting some not very constructive feedback from a reviewer, and the Forum agreed that they thought it was okay for publishers or editors to edit those comments out, and they recommended that. More recently, COPE had a discussion specifically on this issue and actually released guidance on it, I think on Wednesday, which I quickly read this morning. The cliff notes are, effectively, that they recommend journals develop a policy on what's considered acceptable conduct for reviewers, as well as on when or if edits may be made. They don't have a problem with edits for tone and language, but they do for meaning and intent. And they suggest that if an editor wants to edit or alter a report, the reviewer should obviously be informed, and ideally the edit should be made in collaboration with that reviewer.
And finally, if a journal goes down the path of having a blanket no-editing policy, they suggest it provide guidance to authors on how to address some of the comments made by reviewers. This is more of a Ginny thing than mine, but I thought I'd quickly raise it here. These are obviously the policies COPE has come up with, but I'd be interested to hear from Ginny next about the mechanisms we have in place to ensure these policies are developed and complied with among COPE members, and, recognizing that there are not a lot of self-regulation measures out there for journals, what we might do for journals that aren't COPE members.

The very last slide I wanted to quickly talk through is this: as I was pulling these slides together I came across a blog by Philip Cohen, who was looking at coercive self-citation practices. He had put up a quote from an editor who asked themselves a question when trying to decide whether something was ethical: would I, as an editor, feel embarrassed if my activities came to light, and would I object if I were held publicly responsible for them? When I read this, it struck me that most of the time none of these things are public. It seems like a bit of a truism, but obviously the more opaque the system is, the more difficult it is to investigate these practices. For a little further context: in our survey, we noted that 8% of respondents didn't have a policy governing the editing of reports, and 15% didn't actually share all reports with all reviewers; other studies have shown that figure can be up to about 50%. In terms of publishing reports, only 1% of journals in our survey published reports, signed or unsigned, which is consistent with other studies that have looked into this.
And then some other questions I'd raise for everybody: how many journals adopt a system similar to PLOS ONE, which publishes the name of the handling editor on papers? And presumably no journals provide any detailed information on rejected articles. So my final question to everyone would be: could increasing transparency in some of these domains be a good first step to deter some of these behaviors? I'll leave it there; I've gone a little over time.

Yeah, thanks. We're going to move very quickly now into Ginny's talk. Ginny, do you want to share your slides with us now? After Ginny's talk we'll have some closing remarks from Simine.

Yeah, I'll do that. Okay. That's all good. Yep. All right, fantastic. Well, look, thanks very much for the opportunity to do this, thanks to the organizers, and to everyone who's hung on in there, wherever you are. It's a beautiful Saturday morning in Brisbane, which is where I'm based. I'm also the director of a group called Open Access Australasia, and I was chair of COPE from 2012 to 2017, but I just want to make it really clear that I'm speaking on my own behalf here, not on behalf of COPE. I'm more than happy to discuss any of these issues, though. So, quickly: I want to talk about what COPE is and its role in the wider publishing ethics landscape, a bit about how it actually works, and then some obvious and perhaps not so obvious editorial misbehavior, including one of my hobby horses at the moment, which I'll come to at the end and would love to have a discussion about.

COPE is the Committee on Publication Ethics. It was started back in 1997; in fact Richard Smith, whom Dan mentioned just now, was one of its founding editors.
It was three blokes, of course, because most editors then were blokes: Richard Smith, Richard Horton of the Lancet, and the editor of Gut, Michael Farthing. They got together to talk about the sort of ethical issues they were seeing at their journals, and this timeline shows what has happened since then. I was the chair from 2012 to 2017, mostly while I was working at PLOS Medicine. During that time the membership expanded enormously, from three back in 1997 to more than 12,000 members now. It's a voluntary membership organization, so journals and editors pay to be members. It's not an enforcement agency; we can have a conversation about whether it should be, or whether there's a need for that, and in fact we've had that conversation over the years. But one of the key things that has happened with COPE, I think, is that it has become increasingly hard for reputable journals not to be members, because of the work COPE does, and it's a great example of how you can improve behavior by changing the norms within an industry more generally.

The other thing to note, of course, is that the vast majority of editors are academics themselves; editing is entwined in the publishing system. Every time we talk about an editor, we're pretty much talking about people who also work as academics, and the whole system is intertwined. You can see here the statement I've grabbed from the COPE website: its purpose is to ensure ethical practices become part of publishing culture. They do this through a whole range of routes, most of which are freely available on their website, and I'd encourage you to look at them if you're interested. The main thing COPE has is a set of core practices.
There are 10 of them lined up here, ranging from allegations of misconduct, which can be against authors but also against editors, and I'll touch on some of those in a second, to what I think are some of the hardest issues COPE deals with, which are around post-publication discussions and corrections. We heard earlier today from Elisabeth Bik, who has done some astonishing work looking at problems with papers and trying to get them corrected or retracted, and that has been one of the hardest things for journals to engage with. I freely admit that this is far from a perfect system right now; one of the problems is that there is no easy way to correct the literature, and that, I think, has led to all sorts of issues with the reliability of what's published.

There are other groups around. I've noted up here the Council of Science Editors, which is mostly US-based, as is the European Association of Science Editors; both of them do work on publication ethics. At the bottom I've put the International Committee of Medical Journal Editors, which also came up in a talk earlier. It's not really a body that deals with ethics on a large scale, and the ICMJE is actually a relatively small group, but they can do some really important things. One of the things the ICMJE did was mandate trial registration, and that led to a really important change across the entire industry: it is now virtually impossible to publish a clinical trial in any reputable medical journal if it's not registered, and that was the result of the ICMJE taking a stand.

So I just wanted to talk about how COPE works, and how it's a sort of reflection of what the ethical challenges are.
The main way it works is through discussion of cases at member forums, and these are all freely available, anonymized, on the COPE website. Back in 1999, and you can see the case number there, the first part indicates the year it was published, they were already thinking about the integrity of editors themselves. This was a case where the reviewers recommended that a paper be rejected, but the editor went against them and, for reasons that are not apparent, accepted the paper in a way that was probably inappropriate.

Another example of how the cases reflect what's happening in the wider publishing industry: this is a case from 2012, at a time when we started to see compromised peer reviews being submitted to journals. This case in particular was an early reflection of what later became a really quite massive issue, the paper mills, which produced fabricated papers along with fabricated peer reviews. The reason for that was the pressure on authors, particularly from some countries, to get their papers published, and there was a wholesale attempt to game the whole publishing industry, as it were, through fabricating papers and fabricating peer reviews.

But I just want to finish by talking about what I think is modern editorial misbehavior, and to tie it back into what I think is the really big issue we need to think about more generally. We've heard about this already from Dan, but this issue of editors and reviewers requiring authors to cite their own work is unfortunately quite common. This is a paper where a staff member in an editorial office noticed that a particular editor was asking authors to cite papers the editor themselves had written.
And this happened much more often than requests to cite papers the editor was not a co-author of. Further things came to light from this work: there's a case where the editor was specifically trying to manipulate their impact factor, and the way they did that was by asking for references to publications in their own journal, and only publications in the time frame that would affect the impact factor. And then, perhaps even more egregiously, there's a more recent paper where one editor noticed that another journal's editor had figured out how to massage the impact factor by publishing annual reviews that upped the number of citations to their own journal, and thought this might be quite a good idea to do themselves. So you can see how poor behavior propagates across the industry.

But the problem is that this isn't new at all, so I want to come back to editorial misconduct overall and how it ties into the overall publication and scholarly landscape. When I was one of the editors at PLOS Medicine, back in 2006, we published a paper called "The Impact Factor Game." What became really clear to us, when we first started publishing and actually thinking about what the impact factor for our journal might be, is that the whole system was completely corrupt, and was already entwined with how journals are perceived by society more widely. We were able to manipulate impact factors by having discussions with the organization that ran the impact factor at that time, a process that was pretty opaque to most academics. So what we ended up with is a system that journals played, because they knew their authors needed to care about it: journals would attempt to manipulate their impact factors to encourage authors to publish there.
And this is the most vicious cycle that we have right now. I don't have time to go into the whole body of work showing that authors trying to publish in high-impact-factor journals are more likely to cut corners, but we know that the whole system is highly problematic, in that this one particular metric leads to misbehavior across academic publishing generally.

So I'll finish with a few things to think on. Let's talk about the ethical issues of editors, but recognize that they're part of the wider academic system. How much do we care about the manipulation of citations and impact factors, and if we do, do we have a better system? There are a whole range of initiatives thinking about this right now, including the Declaration on Research Assessment (DORA), the Leiden Manifesto, and the Hong Kong Principles, all of which are trying to work out how we might move away from the sort of system that leads to the editorial misconduct I think is so problematic. So I'll stop there, and I hope we have a good discussion.

Great, thanks so much. That was a great set of talks. I'll keep my comments brief so that we have plenty of time for discussion; there are already some great questions in the comments, and feel free to add more in the Q&A. One theme for me, from my experience of being an editor, is that I was really shocked at how little accountability and transparency there is on the part of editors themselves: we can do a lot and get away with a lot, and people just assume that we have good intentions and are benevolent. Part of it is, as Ginny mentioned, that the vast majority of editors are academic editors.
So we're doing this on top of a full-time job, and I think people recognize that this is a very big service, but that sometimes comes with a feeling that therefore we don't need any oversight, we don't need to be concerned about what editors are doing. So this story, not just what happened but the lack of consequences, is really, really concerning. It suggests that maybe there is no recourse, there is no one watching the editors, no consequence. Obviously it's a single case, but there still hasn't been a consequence in that case, which is really shocking to me, and this is a case where there's not much ambiguity. More generally, though, we've probably all had the experience of hearing an author's story, maybe on social media, maybe through the grapevine, about an unfair or biased review or decision letter, and I'm often in a position where I just don't know what to believe. I think authors sometimes see things through a distorted lens or misremember things, but I do think reviewers and editors are sometimes biased, and right now there's not really a good way for this to come to light. I think there are a few things we could do to increase the transparency and accountability of editors and of journals. We spend a lot of time talking about the transparency of authors, but I think we really need to be talking about how we can increase the transparency of the peer review process and our ability to hold editors and journals accountable. One really simple thing, well, it's a bit more complicated than that, is transparent peer review for published articles once an article is accepted.
The journal could publish the peer review history. That doesn't mean identifying the reviewers; it just means publishing whatever would normally be shared only with the authors, the decision letter and the reviews, whether they're signed or not, along with the peer-reviewed paper, so everybody can look at the peer review process. That doesn't completely solve the problem, because it only applies to accepted papers, and probably most of the problematic things happen with rejected papers. There are a couple of solutions to that. One extreme one would be overlay journals, where all the peer review history is published as it goes along, and there are some of those: Meta-Psychology in the field of psychology does this, and I'm sure there are others. I'm for overlay journals, I think that's great, but if we're not ready for that, the minimum I think journals should do is have a policy that authors are allowed to share and post their own decision letters and the reviews they received from the journal, however widely they want, so they can share them with other scholars and post them publicly on their blog or their preprint or wherever. First of all, the authors should be able to share that information with whoever they want. And second of all, it's only by allowing even rejection letters and reviews from rejected papers to be shared that we might be able to detect patterns of bias or unfairness or corruption. Sometimes it's not clear from an individual decision that an editor is biased or engaging in problematic practices; only by seeing a pattern across multiple rejection letters could we detect it. So I think that should be uncontroversial: it should just be a right of authors to post their decision letters and reviews if they want to, even for rejected papers.
I think this also raises some interesting questions about the distinction between academic editors and professional editors; there are some journals where the editors are full-time staff, paid a salary for the job. Some of the attitudes I've encountered as an academic editor amount to almost too much generosity, too much charity in interpreting editors' behavior, because of our dual roles as academics. First of all, it's challenging for people to challenge me, because I'm a senior academic in the field, and if they upset me as an editor, that might affect their chances of getting a job in my department or something like that; they could be worried about that. So there are some advantages, I think, to a professional editor model. It also helps us think of editing more as a job that comes with responsibilities. It's a very, very powerful job, so it should come with a lot of responsibility, not lower standards; I think it should come with higher standards. I'm not saying that everything is better in the professional editor model, but I think it's an interesting way to weigh the pros and cons, and different ways of thinking about the responsibilities and privileges of being an editor. So I'll wrap it up there and turn to the questions from the audience. One, and I think we've mostly dealt with this, but Jason asked whether publishing peer reviews along with articles would deter this behavior. That would work for accepted papers, right, but it doesn't work for rejected papers, since their reviews aren't necessarily published. So I'll maybe combine that with one or two other comments and throw it to the panel to see who wants to respond.
And then a related question from Cooper: are there any good examples of journals that are already increasing transparency in editorial practices? Which journals and publishers are moving in the right direction? I'd love to hear from people about that. And there was another question about whether people know about misbehavior in peer review for grants, and whether funding agencies ever engage in this kind of editing. So, taking those questions together: what are some examples of journals or funding agencies doing things right or wrong, and what might help? I'll throw that to any of the panelists who want to address those issues. Well, I'll take the first pass. I think the BMJ has actually been a kind of leader in this for many years, and that's the field I'm most familiar with. They've had signed peer reviews for a very long time, and they publish open peer reviews as well; the BMC series of journals do that too. We had long conversations about doing this at PLOS when I first started, and to be honest, back then it wasn't something people were willing to consider, and so, well, you pick one battle at a time: we were fighting a battle on open access, so that was the battle we chose to fight. I'm increasingly of the view that these things should be published, they should be open. They don't necessarily have to be signed, but it's a huge step towards transparency and accountability. I do think editors have a role themselves in holding each other accountable. That's one of the purposes of COPE: you raise the norms within the profession, and the profession includes editors who work as academics.
You have to understand what good behavior looks like, and often, to be honest, a lot of people who come into being an editor don't get a lot of mentorship; I think that's again one of the things that professional organizations like COPE can help with. Does anyone else want to jump in? Something I was going to mention was, in that same survey that we did, of the respondents we had, journal editors, most of whom were lead editors, I think something like 35 or 40% said they were happy with the way they were conducting peer review, so a lot of people seem to be content with the way they're operating. And of the people who said they were intent on making some changes, most often it concerned things like transparency: taking blinding as an issue, they wanted to increase anonymity, not openness. Some of the reasons I had seen on that front tended to be a fear of alienating reviewers, given how difficult it is at the moment to get reviewers. I think there's a fear that by opening up or becoming more transparent you make your life more difficult in getting reviewers to review for you. Something that I had noticed on this note... Yeah, I think there's now some empirical data on this. It's all a bit hard to interpret, but I think the evidence so far suggests that it doesn't make much of a difference. There are different models of transparent review: ones where authors can opt in or out and reviewers are just told which situation they're in, whether their reviews will be posted publicly if the paper is accepted or not; others where it's up to the reviewers to opt in or out and authors have no say in it. And then there are some journals, like the journal I'm an editor of, Collabra, which just require transparent review; it's not optional.
The issue of the tension between masking the identities of authors and transparent review: I think they're often conflated, but they're pretty orthogonal. You can publish the peer review history regardless of what you decide about whether reviewers' identities or authors' identities stay hidden. So I think we should distinguish between openness and transparency about the content of the reviews and editors' letters versus the identities of the people behind them. Do jump in, panelists, if you have any other thoughts. Another question someone asked is, should a distinction be made between stealth editing of reviews and transparent editing, so if you fix a typo, for example, but bracket it? And I think that's definitely better; it's good to be more transparent about it. There are different degrees of positions about what's okay, and my own, personally, is that I think you should always get the reviewer's consent before editing. Part of my reason for having a pretty extreme view is that it's such a slippery slope, and seeing George Marcolides say that he still continues to edit reviews for inappropriate comments, I'm like, I don't trust your judgment anymore. It really makes me feel that we shouldn't leave it up to individual editors' judgment. So even for something like a typo, I think I would now err on the side of either not fixing it, if it's too much trouble to contact the reviewer, or, if it's a big enough typo that I want to fix it, contacting the reviewer and making sure they're on board. That way I leave no room for motivated reasoning on my part, like "I'm just fixing a typo". I have a very brief thought on that: in the new guidelines I mentioned, there's a note that it's okay in some instances to change tone.
But I was thinking that if editors were to go ahead and do that, it makes it quite difficult for people to actually study how common some of these actions are. There is obviously some data on nastier reviews and attempts to get an understanding of that, and if we were to edit them, it might make it difficult to get a grip on how common or uncommon some of these issues are. I also think the line between tone and substance is so blurry, I just don't trust humans to rely on that distinction. Well, I'll just make one quick comment on that, which is that, having seen lots of reviews, I've got to say, reviewers often aren't very nice, not to put too fine a point on it. And as an editor, I think one of your jobs is to act as the negotiator between the reviewers and the authors, and I often used to write editor's letters that said, you know, despite what X says, we think Y. We did used to have discussions about the tone of reviewers, and in the end what we did was, if people were unnecessarily rude, we would just never use them again. But it's a really difficult thing; we rarely, if ever, edited reviews, in fact I can't remember ever doing it. I did feel incredibly uncomfortable, though, passing on just frankly nasty reviews to authors, which unfortunately is more common than you'd hope. Yeah, I think sometimes editorial misconduct arises out of editors feeling that their hands are tied. Like, in the case of Rink's paper, maybe George Marcolides was justified in rejecting the paper, sorry, Rink. And maybe Fiona's review is wrong and the other reviewer is right, but he just needed to own that decision and say, despite one positive review, I'm going to reject it for these reasons.
And so I feel like sometimes it's just an easy way out, instead of fully standing up to reviewers or authors or whoever you need to stand up to. That's exactly right, it's about standing up; we used to say this all the time: you're the editor, you're the one that makes the decision, you have to own the decision, you can't hide behind reviewers. All right, I think there was a question about where people can find the results of the poll if they're not on Slack. Maybe Dan can quickly answer that. I'm not sure whether the metascience symposium has created an OSF page this year; if they have, I'll post it on there. I'm not sure if there's a better way of doing it; I have the email addresses of all the attendees. Yeah, well, absolutely, please. Maybe one of us can tweet it out, if that's okay, or do you not want that? I'll tweet it out from the Slack. Either way, or on Slack, I don't know what to tell you. Any last thoughts in the last minute we have? Thanks, Rink, for joining us so late. And thank you, everybody. I think that wraps up our session. Okay, I think that's it. I'll close it for you. Thanks, everyone, for a great session. I'll just put in the chat that if you want to continue the discussion on all these topics, here is the link to the Remo site, where you can do some networking with everybody. And I think there's a half-hour networking session now, and otherwise we hope to see as many of you as possible for day three of AIMOS 2021. So thanks for joining, everyone. Thank you.