Welcome to everyone. Good morning, good evening, good afternoon, wherever you are. Welcome to this webinar of the Metascience 2023 conference. My name is Lex Bouter and I have the privilege of chairing this webinar. It will deal with meta-research aimed at improving scholarly publications, and together, I believe, our four panel members will argue for more transparency and for more meta-studies and meta-research in general. We will learn about ongoing studies as we go along, and also hear what lessons they learned while doing these studies, or trying to do them. We have four slots of 15 minutes, and every slot will be followed by five minutes of Q&A. If you have a question, please put it in the chat already; you help me enormously by writing short sentences and clear questions, so that we can cater for as many questions as possible. At the end of the four presentations there will be a short panel discussion about the generic elements we learned together, and there too you can bring input via the chat, but I have prepared a few questions as well. Now, without further ado, I'd like to start; I don't want to rob time from the presenters. The first one will be Gowri Gopalakrishna. I'm still struggling with that name, Gowri; I'm sorry for that. Her talk is about the ability, and also the inability, of peer review to detect questionable research practices, or worse, in manuscripts. First she will focus a little on the big survey conducted in the Netherlands that studied the prevalence and the drivers of questionable research practices and of fabrication and falsification. Gowri, the floor is yours. Please share your screen.

Great. Thank you so much, Lex, for those words of introduction. I will begin to share my screen; please kindly let me know if you're unable to see it. I will also momentarily stop my video, just to ensure there are no interruptions in the transmission; I'll turn it on again as soon as I'm done sharing. You should be able to see my slides now. Please let me know if you can't.

It's all right, Gowri. Please go ahead.

Okay, thank you. So today, in the brief 15 minutes that I have, I hope to give you an overview of questionable research practices, a recent large survey, and some of its findings on the prevalence of these practices. I then hope to describe the different types of peer review and the different forms in which it occurs in our research communities. And I hope to end by touching on how, if at all, peer review can help improve research quality by reducing the occurrence of these questionable research practices, and perhaps even research misconduct. Trying to get my slides to move, just one second here. Right. I think this slide is a bit of a no-brainer for most researchers, but it is worth repeating in the publish-or-perish rat-race world we unfortunately still live in: public trust in science and the trustworthiness of published research are more and more important, particularly in how policies are shaped in our communities. We have seen this very clearly and transparently with the recent COVID-19 pandemic: the interplay between policy and the importance of rigorous research.
Unfortunately, research has also shown that questionable research practices and research misconduct, as well as the rise of paper mills and new technologies such as artificial intelligence, can pose important challenges to maintaining the rigor of research. One commonly known and respected antidote that we speak of very often in our research community is high-quality and rigorous peer review, but this system is itself not without flaws.

So, QRPs: what are they? In the world of research integrity and meta-research we define them as existing on a continuum of research behaviors. At one end of the spectrum we have the most desirable behaviors, known as responsible research practices, of which open science practices are a part. At the other end are the behaviors most detrimental to trust in, and the quality of, research, collectively known as research misconduct: fabrication, falsification and plagiarism. Now, while a lot of attention has been given to research misconduct, recent research, some of which I will share with you, shows that the frequency, or prevalence, of questionable research practices, the 'subtle trespasses' as they are also known, is much higher than that of fabrication, falsification and plagiarism. No doubt those are the cardinal sins of research, but the subtle trespasses make up the bulk of the problem.

So, in a recent survey in which I was involved, together with Lex Bouter and a team of researchers from the Netherlands, we surveyed about 7,000 academics working in Dutch academic institutions and medical centers. In the survey, 11 questionable research practices were assessed on a seven-point Likert scale ranging from never to always; the questions referred to researchers' engagement in these practices in the last three years. In this table I've summarized the top five questionable research practices with the highest prevalence across all respondents, disciplines and academic ranks. Prevalence we defined as the proportion of respondents who reported engaging in the listed questionable research practice sometimes or frequently in the last three years. For the top five, the percentages mean that roughly one in five researchers engaged in one or more of these questionable research practices, and that is a concerning number.

If we go on to look at what our survey showed on engagement in falsification and fabrication, also known as the most severe sins, or research misconduct: here we defined fabrication as the making up of data or results, and falsification as the manipulation of research materials, data or results. What we found are quite staggering and concerning numbers, which are likely to be underestimates. What this table shows, in summary, is that one in two researchers was estimated to engage frequently in at least one of the 11 QRPs we surveyed, and one in 12 was estimated to have engaged in falsification, fabrication, or both in the last three years. Now, in this survey we did not only want to report on the extent to which these behaviors were happening; we also wanted to take a step further and look at the possible explanatory factors underpinning some of these behaviors.
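To make the prevalence measure just described concrete, here is a minimal sketch with made-up Likert responses, not the survey's data; I am also assuming, purely for illustration, that "sometimes or frequently" corresponds to a score of 5 or higher on the seven-point scale.

```python
import numpy as np

# Hypothetical seven-point Likert responses (1 = never ... 7 = always)
# for one QRP, one score per respondent; the distribution is invented.
rng = np.random.default_rng(0)
scores = rng.choice(np.arange(1, 8), size=7000,
                    p=[0.45, 0.20, 0.10, 0.08, 0.07, 0.06, 0.04])

# Prevalence as defined in the talk: the proportion of respondents who
# engaged in the QRP "sometimes or frequently", assumed here to be >= 5.
prevalence = np.mean(scores >= 5)
print(f"Estimated prevalence: {prevalence:.1%}")
```

With these invented probabilities the estimate lands around 17%, the same order of magnitude as the one-in-five figure reported for the top QRPs.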
So, this is quite a busy slide. I only want to show you that we looked at 10 different explanatory factors in this national survey, but in the interest of today's topic I will only speak to the results on scale number 10, which captured respondents' views on the perceived likelihood of reviewers being able to detect QRPs. In this scale we asked respondents about the perceived likelihood of reviewers detecting common issues that we all face as peer reviewers: whether the study authors were citing selectively, whether the reporting of the study methods was insufficient, whether the description of limitations and flaws was insufficient, and whether the right study design and instruments were used, to give you a flavor of what we asked in this particular scale.

Based on the responses we obtained, we performed a multivariable regression on each of the explanatory variables; we controlled for a number of background variables we had asked about in the survey, and for each of the other explanatory variables except the one in question in the model. I only want to focus here on scale number 10, the perceived likelihood of detection of QRPs by reviewers. We found that this particular scale had the largest effect size in lowering the odds of research misconduct. In other words, the perceived likelihood that reviewers will detect QRPs had the largest effect in lowering the odds of misconduct. So researchers, at least the 7,000 we surveyed in 2020, still believe quite strongly in the perceived value of peer review in upholding research integrity and research quality. And perhaps the emphasis should be on 'perceived', as some critics of the peer review system may argue.

Now, as you may already know, peer review is a broad term; it increasingly comes in many different types, shapes and forms. The most common, which we are all exposed to and which you will probably hear the most about, is journal peer review: peer review of a final manuscript submitted to a journal. Within journal peer review there are a number of variations: there can be single blinding or double blinding of reviewers and authors, and there can be open peer review, within which there are again different types of openness. Different journals use different models of peer review, and there appears to be no clear consensus at this moment on whether one model is more advantageous than another in increasing research quality or improving peer review processes.

With the increasing uptake of, and policies promoting, open science practices, peer review that happens before journal submission is also becoming more visible, namely through preprint sharing. Preprints can be simply described as manuscripts that have not yet been published in a journal, though they may sometimes already have been submitted to one. Especially during COVID-19, we saw a lot of early research being shared as preprints, and there were extended, heated debates on social media platforms about the peer review of these preprints. So there is also peer review happening on preprints, something we see increasingly as open science becomes more and more common practice among researchers.
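To make the regression result described a moment ago concrete: below is a minimal sketch, on simulated data rather than the survey's, of how a perceived-detection score can be related to the odds of misconduct with logistic regression, where an odds ratio below 1 corresponds to the protective association reported in the talk.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data, purely illustrative: a standardized perceived-detection
# score and a binary misconduct outcome whose probability falls as the
# score rises (the -0.5 coefficient is invented).
rng = np.random.default_rng(1)
n = 7000
detection = rng.normal(0.0, 1.0, n)
logit = -2.0 - 0.5 * detection
misconduct = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# The survey's multivariable models also controlled for background
# variables; this minimal version has only an intercept and the score.
X = sm.add_constant(detection)
result = sm.Logit(misconduct, X).fit(disp=False)

# exp(coefficient) gives the odds ratio per unit increase in the score;
# values below 1 mean higher perceived detection, lower odds of misconduct.
print("Odds ratio:", np.exp(result.params[1]))
```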
Then, finally, I want to touch on post-publication peer review. This is the type of peer review that happens after a manuscript has been accepted and published by a journal. It can happen on a number of different platforms, including social media; it can also happen on specialized platforms like PubPeer, and it is often seen as the kind of peer review done by the research community at large. With the advent of open science it can also involve the general public. As I've emphasized, I believe these different types of peer review are increasingly likely to happen alongside journal peer review, not instead of it.

So what is the potential link between questionable research practices and peer review? I think an interesting model here has been proposed by, in fact, Lex Bouter. In a recent publication he proposes one way of looking at the interplay of a number of different factors that we study in research integrity. It illustrates how different types of research behaviors, from misconduct to misbehaviors to responsible research practices and open science, can potentially influence the trustworthiness and validity of research through a number of different pathways. Now, while the link between QRPs and their impact on trust and on the trustworthiness and quality of research has been researched and well established over the years, consistent and robust empirical evidence on how transparency, as afforded by open peer review, which may be part of open science practices, can improve the trust in and validity of research is still lacking. So this is really to point out that we still do not have conclusive, clear empirical evidence on whether open peer review can truly improve the trustworthiness and quality of research.

So, coming to my second-to-last and concluding slides. We know that questionable research practices can happen at all stages of the research lifecycle. A question that came to mind as I prepared these slides is: why is peer review not happening at all stages of the research lifecycle as well? Clearly there is some movement in that direction with the advent of registered reports, meaning manuscripts focusing on the study protocol, which researchers submit before conducting their study. The protocol then undergoes peer review, which can potentially enhance the rigor of the methods before the study is actually conducted. So there is clearly some movement and recognition that peer review should happen earlier, rather than only at the final stage of a submitted manuscript. Similarly, there should also be a push for journal peer review in particular to emphasize other aspects of a manuscript, particularly the data analysis and the data set used, and I believe Florian, the next speaker, will touch more on these specific issues within the journal domain.

Despite the pitfalls of peer review, we know that it can detect flawed research, and it has. One recent example among many is the retractions in The Lancet and the New England Journal of Medicine of the Surgisphere publications on hydroxychloroquine and COVID-19 mortality. So when peer review is working at its best, it can work. The findings of the Dutch survey, as I presented to you, attest to that as well, showing a strong, significant association with lowering the odds of research misconduct. Yet it is still not working as it should.
So, to close, I'd like to end with a few thoughts. As we embark on open science practices, journal peer review clearly will not be the only place where peer review happens. We will need to deal with the fact that peer review will happen at a number of different stages, and also on a number of different platforms, which may or may not involve the general public. Upholding the trust in and integrity of research will, I believe, increasingly require a norm of collective ownership by the research community. It also raises the need to extend the notion of QRPs beyond the research lifecycle, potentially to peer reviewing itself: what are the questionable practices happening within the peer review process, and among peer reviewers, that we should highlight and avoid? And finally, what we need is robust, systematic analysis of the different forms of peer review, some of which I've touched on today, in order to weigh one against another and make better-informed decisions on why and how the peer review system should change. On that note, thank you. I will stop sharing my slides.

Well, thank you, Gowri, for this clear presentation: interesting, fascinating facts, and also showing the way ahead, at least in your view. Your presentation is now open for discussion. The public is still a little shy, so please don't hesitate to put your questions in the chat, dear participants; meanwhile it gives me the opportunity to ask you a question myself, Gowri. Peer reviewing...

I'm sorry. Sorry, go ahead. I'm just trying to turn on my video, which I can't seem to be able to do.

We can hear you loud and clear, so that's not the biggest issue, and we've seen you already. My question is this: peer reviewing is a lot of work, and now we have open data and research protocols and all kinds of additional material to peer review. What is your view on the contribution of automation? There is now software to help with peer review. Do you see a place for these programs? Do you have good examples of where they can be used, or where they are perhaps already used? And who should do this? Is it an additional task for the peer reviewer invited by the journal, or should it be done by journal staff? Or should it also be a self-assessment by the author team, looking at the manuscript before submitting? What is your view on this?

Yeah, thanks a lot, Lex. I think it's a great question and also very timely, especially since there is a lot of discussion about technology and its use, as well as artificial intelligence. You asked me a few questions here. Well, my first answer is that yes, I think we should move with technology. We should use the tools available to us in the best way possible. So I do think that journals should try to automate some processes that will hopefully speed up peer review, but also introduce some checks that can be automated: for instance, checking whether data sets have been shared, whether the links are correctly shared, and whether standard reporting guidelines have been adequately fulfilled. These are tasks where automation can save time. I cannot at this moment name a specific journal that is doing this, but I'm quite sure there must already be journals automating these simple tasks.
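As a flavor of the kind of automated check mentioned here, below is a minimal sketch that extracts links from a manuscript's data availability statement and verifies that they resolve; the statement text and the URL are invented for illustration, and real screening tools are of course more sophisticated.

```python
import re
import urllib.request

# A made-up data availability statement, standing in for text extracted
# from a submitted manuscript.
statement = (
    "Data availability: the dataset supporting this study is deposited "
    "at https://doi.org/10.5281/zenodo.0000000."
)

# Pull out anything that looks like a link, trimming trailing punctuation.
links = [link.rstrip(".,;)") for link in re.findall(r"https?://\S+", statement)]

for link in links:
    try:
        # Does the link resolve at all? (DOIs redirect to a landing page.)
        with urllib.request.urlopen(link, timeout=10) as resp:
            print(link, "->", resp.status)
    except Exception as exc:
        print(link, "-> unreachable:", exc)
```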
Whose task should this be? I think it really depends on the kind of task. My first answer would be that the journals and the editorial teams handling manuscripts should ideally automate some of these tasks, to make things relatively easy for peer reviewers, who are already offering a free service amongst the many responsibilities we have as researchers. Easing the burden on peer reviewers is really where my stance is. Is it something that authors and author teams should be able to use themselves? Why not? Again, I go back to my general comment: I do think the technology and tools available to us should be put to the best use to make our research work as efficient as possible, whether pre-submission or post-submission.

Thank you very much, Gowri. Thank you again for your nice presentation and for the answer to my question. I suppose we can go on. Florian, I'd like to introduce you, but I'd like to thank you first, and thank you a lot, because you stepped in at the last moment when one of the other speakers was unexpectedly unable to attend this meeting. It's great that you're willing to help us out, and I saw your presentation already; I'm sure the audience will find it fascinating too. Your topic again uses COVID examples, COVID vaccine examples to be sure: you show things that went wrong with publications during COVID, and you argue for transparency as an antidote to these processes going wrong. Florian, please go ahead and enlighten us with your slides.

Thank you very much, Lex. Thank you very much, Gowri, for the invitation. I'm very pleased to be here with all of you. Yes, I stepped in two days ago, so the fit with the topic is not perfect, but I will try to answer this question: how can journals and publishers increase transparency in their publication processes? I will focus on clinical trials and on data sharing, because this is my field, and today I can only address things that I know. The second problem with jumping into this presentation is that I would have loved to learn English first, but that was not possible, so I will make do with my French English, as you will see. So, let's go.

So we have the question: how can journals and publishers increase transparency in their publication processes? I'll try to give an answer. First of all, I don't have any conflict of interest within the past five years, except that I received funding from, let's say, France: from Brittany, in the west of France, where I live, and from the French ministries of health and research. And now we have a project funded by the European Union, OSIRIS; I will mention this project at the end. Gowri is also involved in the project, and I think Lex is an advisor to the project too. And let's go.

So, how can journals and publishers increase transparency in their publication processes? The first idea that came to my mind: can they sweep the dirt under the rug, and do they sweep the dirt under the rug? Let's see. This, as promised by Lex, is a study about a COVID vaccine: the phase 1/2 trial of the Sputnik V vaccine from Russia. What you see here are antibody levels. Before vaccination, nobody has antibodies, of course. And this is at days 14, 21 and 28. You can see that the concentration, the amount of antibody, is increasing and is quite stable between days 21 and 28. And it is much the same in the other type of figure.
But there is a 'but', and the problem is that if you look closely, you can see that this distribution here is very similar to the distribution here; this one here is very similar to this one; this one to this one, and this one to this one. There is an excess of homogeneity, and that is not good news. It's possibly random, who knows? But it's possibly too good to be true, and when things are too good to be true, that can also be a sign of possible manipulation of the data. This was noted by Enrico Bucci and his colleagues, who wrote a letter to discuss the problem. The problem was also picked up by Nature, which ran a news story about it. In this news story you can see what The Lancet said: The Lancet declined to comment on its policy for providing data in support of clinical trials that it publishes, but said that it had invited the authors to respond. And indeed they responded and said that the data were available; but to my knowledge nobody ever got the data after a request. And The Lancet commented that they would continue to follow the situation very closely. So that's very reassuring: they would follow the situation closely.

And that's what they did, because they then published the phase 3 trial, at least the interim results of the phase 3 trial. You can see here Kaplan-Meier curves for the efficacy of the vaccine in this phase 3 trial. But you can also see here the numbers at risk of COVID-19, and among the numbers at risk you can see something very strange: here you have not people who are lost to follow-up, but people who are gained to follow-up. This is very strange and does not occur very often. Of course, it could be a typo. But we also had this very nice table showing the efficacy of the vaccine: 91.6% overall, and in different age groups. This is important because, if you remember when we had these vaccines for the first time, it was very important to give them to the oldest people, because COVID-19 was much more severe for the very old. So this is a very interesting and important analysis, actually. And if you look at it, you can see that the point estimates are all around 91%, and there is a lot of homogeneity again, despite very large confidence intervals. So again, an excess of homogeneity.

And that's what we noted in a letter, together with Lex Bouter, with Gowri, and with Enrico Bucci too. We worked on this, and it took quite some time to get a very small letter published; interestingly, we got no feedback from peer reviewers, just feedback from the editor. It took, I'd say, a month and a half or two months to get the letter published, while the context was very sensitive: the European Medicines Agency was looking at the application for the vaccine. I think our observations were very relevant and timely. The Lancet did not help. That's why I ask whether they sweep the dirt under the rug, and of course the official answer is no.
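A minimal sketch of the 'too good to be true' intuition, with invented numbers rather than the trial's: if independent subgroup estimates genuinely come with wide confidence intervals, simulation shows how rarely they would all land as close together as an excess of homogeneity suggests.

```python
import numpy as np

# Invented setup, illustrative only: five independent subgroup estimates,
# each with a wide sampling distribution (sd of 5 percentage points)
# around a common true efficacy of 91%.
rng = np.random.default_rng(2)
true_value, sd, n_groups = 91.0, 5.0, 5

sims = rng.normal(true_value, sd, size=(100_000, n_groups))
spread = sims.max(axis=1) - sims.min(axis=1)

# How often would all five estimates fall within one percentage point of
# each other purely by chance?
print("P(range <= 1 point):", np.mean(spread <= 1.0))
```

Under these assumptions the probability is well below one percent, which is the sense in which tightly clustered point estimates with wide confidence intervals invite scrutiny.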
So journals may have a policy for data sharing. That's an idea. And indeed, journals have a policy for data sharing: there has been one since 2017. The policy is that people should write a data sharing statement in the paper, which could say yes or could say no; likewise, data sharing plan registration could be yes or no. And as you know, the editors of the International Committee of Medical Journal Editors, the ICMJE, consider this an ethical obligation: simply to write the data sharing statement, yes or no. So you're not in fact mandated to share your data, just to state whether you want to do it or not. And actually the Sputnik trial had a data sharing statement saying that most of the data would be available. But you can see in this data sharing statement that your data request should be approved by the security department, which is quite unusual, and not very reassuring. And you may say: okay, it's the Sputnik vaccine, it's a special case; Sputnik doesn't give the data, fine. But at the end of the day, while Sputnik was not approved in Europe, it was approved in many other countries across the globe. And in fact, if you look, it's the same for each and every vaccine: you don't have any access to the individual patient data, while it is very important to get these data, because they would allow secondary analyses, reanalyses, and individual patient data meta-analyses. So the policy alone was not sufficient, at least in this example.

So, of course, journals may want to implement the policy. I have a lot of examples, but no time to go through them all, so just one. This is a pre-post study in surgery journals, in clinical surgery. Before the policy, 62 papers; after the policy, 62 papers. And what happened with the policy? Before the policy, there were no data sharing statements, and we got the data for only two trials out of 62. After the policy, there were data sharing statements in just 11 papers out of 62. Not that much. And of those data sharing statements, only half, so five, stated an intention to share the data; the others did not want to share the data, and said so; and all the other papers had no data sharing statement at all. At the end of the day we asked for the data, and even those with a stated intention to share did not share. We got the data for two trials, exactly the same as before, and those were trials without data sharing statements. So implementation of the policy in published papers is not so good, and there is no enforcement of data sharing when people promise to share. So: implement the policy. I put 'lol' on the slide.

Monitor the impact of the policy: that's something journals should want to do, and might be able to do. We now have a community consensus on core open science practices that should be monitored in biomedicine, and as you can see, data sharing is one of these core practices. And we now have some tools: automatic tools that enable us to monitor these kinds of practices. In general, we can look at the impact of clinical trial data sharing in terms of how many people say they will share, but the impact is also how many people really share; this is in fact what I showed you with the example of the surgical journals. Then, when people do share the data, how much of these data are reused? In this bar plot I show the number of studies planning to look at this, and we don't have many studies exploring actual reuse. And then, when data are reused, the reuse should lead at least to an associated publication, some sort of output; here again, we don't have much information on how much shared data ends up in publications. And then, when we have a publication, does it have some impact on clinical practice?
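Stepping back to the automatic monitoring tools mentioned a moment ago, here is a toy sketch, not any specific published tool: classify articles by whether their full text contains a data sharing statement, and whether that statement promises sharing. Real statements and real classifiers are far more varied than these two regular expressions.

```python
import re

# Toy corpus standing in for full-text articles; the texts are invented.
articles = {
    "trial_a": "Data availability: data are available from the authors.",
    "trial_b": "Data sharing: the datasets are not publicly available.",
    "trial_c": "No statement of any kind appears in this one.",
}

has_statement = re.compile(r"data (availability|sharing)", re.I)
declines = re.compile(r"not (publicly )?available|cannot be shared", re.I)

for name, text in articles.items():
    if not has_statement.search(text):
        label = "no data sharing statement"
    elif declines.search(text):
        label = "statement present, sharing declined"
    else:
        label = "statement present, sharing promised"
    print(f"{name}: {label}")
```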
I'm a clinician, so I'm very interested in the impact of research in terms of changing guidelines and things like that. And in fact, we don't even know how much of the published reuse translates into real impact. So we need some research: on how much shared data are reused, how much this reuse leads to some output, and how much this output has an impact. Because in this scoping review, we found very little research about these issues. So, monitoring the impact: this is hard. I think it's very complex, but deeply needed.

And the last point: what can journals do? They can take it seriously. In this short paper we list some things journals should do. They should certify compliance with the data sharing policy of their journals to the ICMJE. They might want to adopt more binding policies. Journals should also have some infrastructure in place for editorial screening, and I think a reproducible-research editor would be very important in these journals to handle this kind of issue, because these issues matter a lot. And perhaps they should also take action when people are not compliant with their policy; for instance, they could embargo future publications from authors who have not shared their data despite promising to do so. Some editors are currently just saying that people who want to reuse data are parasites; so they can evolve a little bit. It would be great. Funders and institutions can actually monitor and reward data sharing, and they should strongly support it. And researchers, what can we do? I think we should commit to data sharing, of course, when we have data, but we should also engage in evaluating the impact of clinical trial data sharing and provide the necessary feedback to improve the policy. And actually, this is what I really like about the OSIRIS project, which I'm part of: we are going to try to gather evidence about a lot of reproducible research practices. I want to keep in touch with you if you are interested, because we will have a lot of very exciting projects in the near future. So thank you so much. Take it seriously: that's my last slide.

Thank you, Florian. Really interesting, and thank you for sharing all these fascinating examples. You've shown again that despite all efforts, actual data sharing is still rare; it hardly ever happens. It's quite amazing, isn't it? But thank you also for pointing out some ways ahead: we should get our act together, and more specifically, journals and publishers should get their act together. That's what you argued for. I've got a question from Dominique in this vein. Dominique asks: do you think that COPE, the Committee on Publication Ethics, has a role to play here when journals do not enforce their open data policy? Because it often happens that the statement is there, but the sharing doesn't happen. What should the journals do, and should COPE, as a kind of oversight committee, come into action there as well? What is your view on that?

I would say that any effort to help in this regard would be very important. So if COPE would help, that would be good. But perhaps at the beginning, at least, COPE would be overwhelmed by the requests, because I think the problem is largely unmanaged, and at the beginning it would be a little difficult to handle. But at least some guidance for editors on how to deal with this could be very, very helpful.

There's also a question from Alison, Alison Avenell.
She says: Cell journals may retract papers if there are integrity concerns shared by the editor and the authors fail to provide the data within a defined short time frame. What do you think? Should that apply to other journals, and to randomized controlled trials as well?

It's difficult, because this is a very hard consequence, and it would lead to the retraction of each and every randomized controlled trial in the field. And these trials are helpful. For instance, I told you that there is no data for a vaccine, but I was perhaps the first to get my shot. So I still have some trust, of course. But I think things could be better, much better.

Yeah, well, retraction might not be the best response other than in exceptional cases; that's what you're saying.

And I would say that we need some efforts, like very negative consequences for people who do not keep their promises. But we also need a new generation of researchers who know how to share data, who know how to prepare data, and who know how to share it responsibly. Because at the moment, I think we have a lot of researchers who want to share their data but don't know how to do it; at the time they are writing the paper, they don't even understand the implications of data sharing.

Yeah, I fully agree. Dominique responds; he has a job for you. He says COPE doesn't have guidelines yet on sharing data and sanctions for non-compliance, so maybe you should help them write the guideline. In case you're not busy enough. So it might be an idea.

Not alone.

I can imagine. And in the same vein, I was wondering, Florian: we talked about the vaccine example. What do you think the role of regulators, like the FDA and the EMA, should be in the detection and prevention of flawed publications like some of the examples you mentioned? Do they have a role to play as well, or is there no role whatsoever for them?

My wish would be a new model for drug approval based on the registered report system: a registered drug approval pathway. And I think health authorities should be at the heart of that. But it would take quite long to explain; we have a small paper on that. Hopefully I can share it in the chat afterwards, if you want.

Yeah, that would be nice; people might be interested in that. It's really nice, please do. Raffaele agrees already, right? So you've got at least one customer for your paper. Thank you again, Florian. And we should move on now to Caitlin, Caitlin Bakker. Caitlin is active in the PhD program that the British Medical Journal and Maastricht University are organizing on the responsible conduct of publishing scientific research. It's a really interesting program, with all these wonderful PhD candidates; Melissa is another one, she will present later on, doing fascinating work on publication ethics, responsible conduct of research, education, and what have you. So I'm really glad, Caitlin, that you were willing to give a presentation today. Your focus will be on the confusing variation in the way retracted articles are identified, and I believe you even see some room for improvement there. So please explain.

Wonderful. Thank you so much, Lex, and hopefully everyone can see my slides. As Lex mentioned, today I am going to be discussing some of the, as he noted, confusion surrounding the way retracted publications are represented across different platforms, some of the implications of that variability,
some potential reasons we might be seeing this, as well as some of the emerging work that may help improve the consistency and transparency of these efforts. Now, I will say that individuals who don't work closely in this space can sometimes be a little surprised to hear that there is any complexity around the way this information is communicated. There can be a belief that when there is any change to an original article, including retraction, that information is reflected essentially in real time on every platform through which an individual might access the article, and that full information is available to help individuals assess the article and determine if and how they may want to use it. While in certain ways that's a very understandable assumption, unfortunately it does not reflect our current reality.

So I'm going to begin by describing some of the challenges we see in the representation of retractions and the communication of the retracted status and the retraction notice, because for many of us this is where we are really made aware of the lack of transparency. What you can see here is not an uncommon occurrence. This is a retraction notice, but it is not linked to the original article, nor does the original retracted article link to it. The HTML is not available, so what we see is a record with essentially no body. I will note that the downloadable PDF includes bibliographic information: it tells us what article has been retracted and why. But from a discovery perspective this notice is essentially a non-entity: it is not connected to other objects, it is not indexed, and the PDF that does contain the information isn't searchable through the search interface. And so we end up with somewhat of a riddle: if a retraction notice exists but nobody can find it, does the retraction notice really exist?

Somewhat unsurprisingly, when we examine how that same article and its retraction notice are represented in different bibliographic databases, we notice that there is not a lot of consistency. Here are three screenshots from three different databases, in all cases referring to the item retracted by the retraction notice you just saw. Of these screenshots, which were all taken on the same day at the same time, only one clearly indicates that the item has been retracted and links to the retraction notice, while the others offer no visible indication that any update to the record has occurred.

Even within platforms the representation can be inconsistent. Here you can see three examples of different retracted publications on a single platform; again, these screenshots were all taken at the same time on the same day. In one case the title is amended with the word 'retraction', and in two there is a banner with an update notice attached to the record. However, neither of those update notices actually refers to a retraction; in one case we have a corrigendum. So with both of these, we may know that there has been an update, but that the update is a retraction is not apparent. And for two of these items, although it is not visible in these screenshots, the publication type has been updated to indicate that the item is retracted, while it remains 'article' in the third. So again, some inconsistencies.
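One machine-readable angle on checking retracted status, offered as a sketch rather than part of the talk: Crossref's public REST API lets you ask for update notices registered against a DOI via its `updates` filter, so an aggregator can detect a retraction even when a landing page does not display it. The DOI below is a placeholder, and the approach assumes the publisher actually deposited the linking metadata.

```python
import json
import urllib.request

# Placeholder DOI, purely illustrative.
doi = "10.1234/example.5678"

# Ask Crossref for any registered notices (retractions, corrections,
# expressions of concern) that update this DOI; no authentication needed.
url = f"https://api.crossref.org/works?filter=updates:{doi}"
with urllib.request.urlopen(url, timeout=30) as resp:
    items = json.load(resp)["message"]["items"]

if not items:
    print("No update notices registered for", doi)
for notice in items:
    # Each notice carries an "update-to" list naming what it updates and how.
    for upd in notice.get("update-to", []):
        print(f"{notice.get('DOI')} is a {upd.get('type')} notice "
              f"updating {upd.get('DOI')}")
```

Of course, this only works when the retraction metadata were deposited in the first place, which is exactly the kind of gap discussed next.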
Now, there is also variation in the representation of retractions within a single journal, and the examples you can see here are ones that were first noted in 2021, although these screenshots are actually from just last week, so these records have not been updated since that point in time. We can see that in two cases there is a red text box with links to the retraction notice, but in the third case that text box is blue. In one case the title has been amended, while in the other two it has not. Now, these articles were retracted in 2012, 2015 and 2019 respectively, and the item with the amended title and the red text box is the most recent, and arguably the strongest visual representation of the retracted status. So it is possible that this variability is a by-product of changing workflows and that there is movement toward a clearer representation. However, these updated workflows are seemingly not being applied retroactively, if changing workflows are indeed what is causing the variability, and that is problematic.

So we have this inconsistent user experience, both across different platforms and within a single journal, and we might ask about the subsequent impact of that inconsistency. Various studies have found that a high proportion of citations to retracted publications do not indicate that the publication has been retracted. In a study based on the PubMed Central Open Access subset, 5.4% of citations to retracted publications in the sample acknowledged that the publication had been retracted. This finding was echoed in a study of citations to retracted publications in the field of dentistry, where again approximately 5% of the citations indicated that the publication had been retracted. Somewhat on the higher end, in a study I'm currently working on, looking at a sample of systematic reviews in the field of pharmacy, we found that of the systematic reviews citing previously retracted research, about 20% noted that the research had been retracted or were refuting its findings. However, I would also note that 43% of those systematic reviews used the retracted publication as something central, either incorporating it as data within the systematic review or using it as foundational to the overall argument. And a systematic review as a research method is one that really requires, at least if done properly, that researchers engage very critically with the published literature, so there is arguably a higher likelihood that, through this engagement, individuals could recognize issues surrounding retracted research. Nevertheless, the vast majority of systematic reviews in the sample that cited previously retracted research did not indicate that the research had been retracted.

Now, obviously these studies can't tell us about researchers' motivations in citing the work. Why would somebody cite a retracted publication and not indicate that it is retracted? Are they aware that it is retracted? If they are, and they chose to use it regardless, what was the rationale? None of these questions can be answered on the basis of these studies alone.
However, I would argue that being aware that one is citing something that has been retracted while giving no indication of that awareness would be counterintuitive; and citing retracted research as foundational to one's own research, or as part of the underlying data, or as support for one's own findings, again without indicating awareness of the retracted status, would arguably be counterproductive. I personally believe it is far more likely that individuals who cite retracted publications without indicating their retracted status are simply unaware that the publication has been retracted.

Now, we may wonder why this persists, particularly when we consider the number of high-profile retractions and mass retractions that have occurred in recent years. One of the challenges in this space is that publisher workflows vary over time (we potentially saw an example of that earlier), and workflows also vary between publishers, because a large number of organizations with different structures support this work. There is also, in certain ways, a lack of guidance. I do want to qualify that last clause, because there is certainly guidance on what should be included in a retraction notice, on when retractions should be issued, and on how publishers might go about the decision-making process, and publishers utilize this guidance to varying degrees. Where there is a potential gap in the available guidance is around metadata transfer and display, and this is the issue that the NISO Communication of Retractions, Removals, and Expressions of Concern (CREC) Working Group is attempting to address. As full disclosure, I co-chair this working group along with my colleague Rachel from Crossref, so I am certainly not an impartial reporter of the group's efforts. The group was launched last year with the aim of supporting consistent, timely transmission of retraction information to the reader, whether that reader is a human or a machine.

In the CREC working group we split into two subgroups. A publisher subgroup investigated existing workflows and practices, such as if and how publishers update PDFs, titles and metadata following retractions, expressions of concern or removals, and how statements of retraction are issued. An aggregator and end-user subgroup looked at what information gets received and how it is subsequently communicated. A very broad range of practices emerged, somewhat reflective of the different organizational structures supporting the work. Of the publishers represented in our subgroup, the majority reported that their standard practice was to leave the full-text PDF online but to watermark it with the word 'retracted' on all pages. For the remaining publishers a variety of practices emerged, which could include adding the retraction notice as a cover sheet at the beginning of the PDF, replacing the full text of the article with the retraction notice, or reducing the article to the abstract and watermarking the abstract with the word 'retracted', while other publishers reported not updating the PDF at all.
One publisher also noted that their practice differed depending on where the item was published, by which I mean that if it was published in a supplemental issue there would be one workflow, whereas if the item being retracted had been published in a regular issue there would be a different workflow. So quite a spread of practices emerged. Practices surrounding the HTML version and the associated metadata were similarly varied. About one third of the publishers didn't report a standard practice with regard to the HTML. Now, this doesn't mean that they have no practices or take no corrective action, but rather that these may not be standardized across their entire set of journals: there may be journal-by-journal practices and policies, they may still be in the process of standardizing this work, or they may simply not have reported this information in this forum. For those that did report standard practices, these ranged from keeping the HTML unchanged, to adding annotations to titles, to watermarking the HTML, suppressing it, or replacing it with the retraction notice or possibly with a link to the retraction policy. Publishers also noted that, with regard to the XML feeds they submit to aggregators, they might remove the full text and the metadata from the feed. Practice could also vary depending on the timing and stage of publication: if the article had been assigned to an issue, the HTML would remain but would potentially be annotated or watermarked; if the article was online ahead of print and had not yet been officially published, the HTML might be removed or suppressed.

With such a range of practices on the publisher side, it is not surprising that the aggregator and end-user subgroup also reported a variety of ways in which metadata were received and displayed. Participants noted that they received metadata regarding retractions in a broad range of formats, from XML feeds to Excel spreadsheets to email communications, and these may or may not contain consistent or robust retraction notice metadata: it might be unclear who authored the retraction notice, and there might be no way to identify how to link to the original retracted item. All of this can cause confusion about how to act upon these metadata, which is further exacerbated by the range of terminology in use: terminology like 'erratum' or 'withdrawal' when perhaps 'retraction notice' or 'retraction' was intended. So individuals and organizations have internal practices in place for how these materials are issued, communicated or received, but despite these internal practices there is continued confusion and variability in how things are represented. And the goal of this work isn't to say that one group or one publisher is doing things right or wrong, because this really is a universal problem; there is no platform and no journal where this variability is not present.
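To make the metadata gap concrete, here is a toy sketch of the kind of unambiguous, machine-actionable retraction record that consistent practices would let an aggregator act on without guesswork. The field names are my own illustration, not the CREC recommended practice itself.

```python
from datetime import date

# Hypothetical retraction record; every field an aggregator needs is
# explicit, so no inference from emails or spreadsheets is required.
retraction_record = {
    "notice_doi": "10.1234/journal.retraction.001",  # the retraction notice
    "retracted_doi": "10.1234/journal.article.001",  # the original item
    "update_type": "retraction",                     # not "erratum" or "withdrawal"
    "date_of_update": date(2023, 6, 1).isoformat(),
    "reason": "errors in data analysis",             # free text or a taxonomy
    "issued_by": "journal editor",                   # who authored the notice
}

# A receiving system can link notice and article and flag the article.
print(f"{retraction_record['retracted_doi']} was retracted on "
      f"{retraction_record['date_of_update']}: "
      f"{retraction_record['reason']}.")
```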
So the issue isn't that people are doing things differently; the issue is that the different practices are not mutually intelligible, and ultimately they lead to these inconsistent user experiences. Which brings me to the work of the CREC working group: it is developing recommended practices for metadata elements, as well as for the use of those metadata elements, to facilitate timely and clear transmission, so that some of this variability can be reduced and we can have a transparent representation of this information across every system through which individuals may receive or access it. So on that note, I'll thank you for your time and attention, and just note that the forthcoming recommended practices document should be released next month. If this is of interest to you, I would encourage you to take a few moments to read it through and provide any feedback you may have, so that the final recommended practices can be as reflective of the community's needs as possible.

Well, thank you very much for pointing out this highly problematic situation. It's not only a fact that journals are very reluctant to retract when it's clearly needed; they also succeed in hiding it well when they do, which is quite amazing, as your presentation shows. I'm flabbergasted again by this work, but it also seems to show roads toward a solution. One of the questions, the one from Raffaele, is as follows: how can publishers and the research community help improve transparency when it comes to the publication of retraction notices, considering COPE already has guidelines, yet adherence to those guidelines seems to be a challenge? And do you think preprint peer review would prevent these situations? You will soon get an answer from Gowri on that last part too. Interesting; so, Gowri, prepare for that while Caitlin gives her answer.

Yes. In terms of publishers and the research community, I would separate those into two different groups. One thing I think is really important is for the research community to start essentially demanding some of this transparency: where retraction notices are vague or incomplete, or where information is not consistently displayed, the primary consumers of this information should, to put it crassly, question why this is happening and demand improvements. I think that is fundamental to there actually being change. In terms of publishers: retraction can be very complex from a publisher's perspective, because it can have significant ramifications and carry stigma, so obviously they don't want to retract immediately, as soon as there is any concern, without due process. But greater use of mechanisms like expressions of concern when there are issues, when something is potentially problematic, could go a long way toward at least alerting the community that there may be a problem with a particular publication.
I will also just say that a lot of systems are not built for this sort of information; many of them don't have fields to display retraction information, so there does need to be some development work from publishers and from platforms to ensure there is a way to communicate this. I think PubMed is a really great example of a platform that has been very intentional and has put resources toward ensuring this is clearly communicated, so I would encourage those responsible for platforms to consider how PubMed has approached this and see whether there are lessons to be learned for their own development work.

Thank you, sensible advice. Gowri, what about preprints?

That's actually a very interesting question. I don't have the answer, but it could make a very interesting empirical study to see whether peer review of preprints can actually improve the retraction situation. Certainly, as I emphasized earlier, I think peer review needs to happen earlier in the research lifecycle. I definitely believe there can be benefits to peer-reviewing preprints, just as we have already seen huge benefits from peer-reviewing registered reports: looking at the study protocol and moving peer review much earlier in the life of the research. So to answer your question, Raffaele: unfortunately I'm not sure whether preprint peer review will actually improve the retraction situation. But I do want to add that what matters could be changing the stigmatization around retraction, and you have already commented on that in the chat: not all retractions are due to misconduct. The stigma around retractions is an issue we need to address when we talk about retractions in the research community; it is not always the case that fraud has been committed, and that perception needs to change. I also think we have a lot of guidelines that are extremely useful, but the implementation of guidelines is something we need to look into more, and that is the more difficult issue to deal with. Unfortunately I don't have an answer on how to improve implementation, but certainly we need better implementation of guidelines, not just more guidelines.

Thank you, Gowri, that makes a lot of sense. Let's move on to the last presentation, which might even strike an upbeat note in what has so far been a somewhat depressing program: many problems and few solutions. Melissa will present a randomized controlled trial. It's still ongoing, but the design is promising, and it focuses on an intervention that might improve at least one of the problems we are facing, so it's really interesting. You will also explain, Melissa, that it's not completely easy to implement a randomized controlled trial, and that you have no clue yet what the results are; but the design alone is already fascinating. So please, the floor is yours, Melissa.
Thanks so much, Lex, and thanks to everyone who has stuck around; I really appreciate you being here and look forward to your questions at the end of this session. First of all, before I go much farther, I want to make sure I thank all of the people involved in this process, with special thanks to lots of people whose names you will probably recognize, but also to the librarians out there who peer review, so thank you all, hat tip your way. I did not put in a conflict of interest slide, but I do want to begin by saying that I'm a librarian, and this is a perspective from a librarian: a librarian who was trained to think that peer review meant something, and that there were journals out there that were better than other journals, but who is also a major open science advocate and now sometimes thinks maybe journals are worthless and we should just get rid of them. So I look forward to seeing what the results of this study are, but as Lex mentioned, it's not complete, and I do want to talk through some of the issues I have been seeing.

So the problem the study was designed around is that systematic reviews are, of course, supposed to be the pinnacle of evidence. They're supposed to be wonderful and great and to drive clinical decision making. But they're really bad. Like, really, really bad. They're poorly reported, and a lot of them are also of poor quality, and this is especially true for the search strategies, which are really the base methodology a systematic review is constructed from: doing a really sensitive, highly efficacious and reproducible search. Librarians out there, myself included, have been really frustrated that a lot of these reviews are poorly reported and of poor quality, yet they're still getting published. There have been studies looking at whether other kinds of methodological reviewers, such as statisticians, can actually improve the quality of reporting and the quality of the research that gets published, and we know that librarians are methodological experts, but they're untapped; they're underused for this kind of involvement in the peer review process, even though a lot of them are willing to be involved. So the idea behind this study is that we should see whether what librarians are saying, that we could actually have an impact if we were peer reviewers, is true. We are running a randomized controlled trial to test whether librarians or information specialists can actually improve the quality of reporting and the quality of systematic reviews when they act as peer reviewers.
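A minimal sketch of the allocation step such a trial needs; this is my own illustration, not the study's actual procedure, which the talk does not detail. Eligible manuscripts are randomized between usual peer review and usual review plus an invited librarian, here in permuted blocks of two so the arms stay balanced as submissions trickle in.

```python
import random

# Hypothetical manuscript IDs standing in for eligible submissions.
manuscripts = [f"MS-{i:03d}" for i in range(1, 11)]

rng = random.Random(42)  # fixed seed so the allocation is reproducible
allocation = {}

# Permuted blocks of two: each consecutive pair gets one of each arm,
# in random order.
for i in range(0, len(manuscripts), 2):
    block = ["usual peer review", "usual review + librarian reviewer"]
    rng.shuffle(block)
    for ms, arm in zip(manuscripts[i:i + 2], block):
        allocation[ms] = arm

for ms, arm in allocation.items():
    print(ms, "->", arm)
```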
This is a lot different from other kinds of research that have been done in this area before. Most of what we've seen is observational research, based on things like open peer review reports, which have been a wonderful source of evidence, but of course mostly looking at published manuscripts, things that have already made it through the peer review process, so that we can understand how poor the quality and the reporting quality are based on those published reports. And of course there have also been a lot of opinion pieces published. But that kind of research, though it is very useful, and I certainly have participated in it myself, misses a lot of things. It misses the manuscripts that were rejected, the ones we're not even seeing in the system because they were actually stopped by the peer review process. We don't know if there were peer reviewers involved whose critiques were so complex, and who wanted so much change, that people just decided to withdraw their manuscripts instead. And we really have not seen anyone looking at a testable intervention as to whether librarians and information specialists actually make a difference in this space. So we put together a randomized controlled trial, and what do we need for that? We needed two things: the peer reviewers and the journals. Luckily, at the same time that I was starting this study and pulling together a list, my colleagues at Yale were similarly pulling together a list of librarians who were willing to peer review, so we joined forces and created the librarian peer review database, which is really a list of all the librarians out there who are willing to peer review systematic reviews. And then we had some willing journals, and this is really the crux of what this session is talking about: journals have so much data available for understanding how the peer review process could be impacting the quality of science, and these two journals, BMJ and BMJ Open, were willing to let me go in and play with their systems and have access to everything, so that we could effectively test this intervention. This is just a diagram of what the flow of the study looks like, and it looks kind of simple from here: we have eligible manuscripts, we randomize them, some get regular practice, some get regular practice plus an invited librarian, and then we see what happens. But it's not really quite that simple; the reality of what has to happen in the back end of these systems is far more complex than it appears. It's a very, very manual process to do this kind of randomized controlled trial on peer review. Every single day I receive about 10 spreadsheets of data, and I have to look through about four to six of them on any given day. I have to check every single day whether there has been an acceptance, a decline, or an auto-decline of any of the invitations I have sent out, because for the declines and auto-declines I need to invite someone else immediately. Then I also need to look to see if there are any new manuscripts that meet the inclusion criteria for this study, which is a time-consuming process, because we have to figure out if they meet the inclusion criteria and, if so, put them into the machinations of what this project actually entails. So I end up looking up every single manuscript pretty much every day, especially if there is a change in peer reviewer status or anything new. From those, I need to manually figure out whether there are peer reviewers that need to be replaced, I need to monitor the list of peer reviewers to make sure they're not being over-sampled, I need to flag all of the articles for the research, I need to keep track of who has been invited so that I can see whether someone else needs to be invited, and I need to increase the number of reviews required for my intervention arms. All of that is fine, and it works really well; the spreadsheets are a little complex, but you can figure them out. But there are still a lot of challenges in doing a randomized controlled trial around peer review, and the number one challenge is that finding peer reviewers is hard. It's not just hard for me, it's hard for all the journals, but I will say that finding librarians who are willing to review is more challenging, because I was starting with a pool of about 150, which is not very many compared to the worldwide number of scientists out there. So it's a small pool, and people are busy, and we know that finding peer reviewers is a challenge no matter what. Part of the challenge is that this is dependent not just on the people I'm inviting but on the people in the system overall, because maybe I'm upping the number of peer reviewers needed to three, but if the editors had already invited ten and four of them agreed, then I'd better hope that the librarians I'm inviting get in there before those other reviews come back, because otherwise the intervention essentially is not happening. It's really dependent on speed, so checking every single day is essential to make sure the intervention arm isn't getting left behind. I think it's also partially dependent on the journal, and this may have to do with the prestige of the journal, and it may be somewhat topically bound as well. I have certainly seen a lot more acceptances of review invitations at BMJ than at BMJ Open; whether that has to do with the more esoteric topics that BMJ Open might contain, versus the prestige of BMJ in comparison, I don't know, but it certainly seems to make a difference, and I would imagine the same would hold in other randomized controlled trials of peer review. One of the biggest problems I've noticed is that this is a really untapped resource: librarians are willing to do this, but they haven't had a lot of experience with the peer review system, because that's not what we've ever been engaged in, and so a lot of what I'm seeing is simply non-responses to the invitations. This isn't just a problem for the reviewers I'm inviting, it's a problem everywhere, but every single non-response, where you don't click on decline, adds another week to the manuscript. So when you're getting 10 non-responses, that's 10 weeks that the article is sitting in peer review with nothing happening to it, because of this churn of non-responses.
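(Editorial aside: a minimal sketch of the daily triage loop just described, in Python. The spreadsheet columns, status values, manuscript IDs, and reviewer names are all invented for illustration; the journals' actual exports are not shown in this talk.)

```python
import csv
import io

# Hypothetical daily export of reviewer invitations from a journal system.
DEMO_EXPORT = (
    "manuscript_id,arm,status\n"
    "BMJ-23-0001,intervention,auto-declined\n"
    "BMJ-23-0002,control,accepted\n"
    "BMJOPEN-23-0042,intervention,accepted\n"
)

def daily_triage(export_text, librarian_pool, already_invited):
    """Find intervention-arm manuscripts whose librarian invitation was
    declined or auto-declined, and pick the next librarian to invite."""
    to_invite = []
    for row in csv.DictReader(io.StringIO(export_text)):
        if row["arm"] != "intervention":
            continue  # control-arm manuscripts get usual practice only
        if row["status"] in ("declined", "auto-declined"):
            invited = already_invited.setdefault(row["manuscript_id"], set())
            # skip librarians already invited for this manuscript
            candidates = [r for r in librarian_pool if r not in invited]
            if candidates:
                nxt = candidates[0]  # a real system might balance workload here
                invited.add(nxt)
                to_invite.append((row["manuscript_id"], nxt))
    return to_invite

print(daily_triage(DEMO_EXPORT, ["librarian_a", "librarian_b"], {}))
```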
But there are a lot of benefits in addition to these challenges. So hopefully, hopefully, we'll find out if we're right or not. You know, librarians are sitting there saying, if only we were involved, things would be so much better; we wouldn't get any systematic reviews published that say they searched "EBSCO" (this is an in-joke for those of you who know about systematic reviews). But we don't actually know if it's true, so hopefully we'll find out. A lot of these kinds of interventions have not been successful, so we'll see whether this one works; fingers crossed, everyone. But I think more of the benefit is that this is an opportunity to directly influence what's happening in the peer review process; without this kind of intervention, we just can't really understand what is happening and how things might improve. So it's a wonderful opportunity to get in there, see behind the scenes, and try to make a difference. But most importantly of all, for this particular study, it is enabling me to help open up the peer review process to different kinds of peer reviewers, whom future editors and people who assign peer reviewers might be able to tap. So I'm hoping that it will create a base of experience, a base of knowledge, and a new base of peer reviewers to add to the system. These are just a couple of references; I would encourage you to ask any questions, and thank you for your patience and for listening. I'm sorry, I was muted. I was thanking you, Melissa, for a great presentation, and I was also saying that it really takes courage for a librarian to say that maybe we should stop with journals. It takes courage to give them up completely, though you're still giving them one last opportunity: we should try, but it's rather complex. Are there any questions? Please type them in the chat, and while you're doing that, Melissa, please explain to me how awful the problem is. I know you've studied it: how poorly reproducible are these search strategies, in fact? You have some fascinating data that you have not presented, but this is your chance to share a little bit of it. Thank you, Lex. I have studied this, and I would say that they're almost universally awful. I've recently done a study where I found that one percent, that is, one out of a hundred articles in a random sample from PubMed from a given month, was reproducible, and very few had even basic reporting elements: something like less than 50% could even accurately state what database they used. And that's a problem; it's a major component of your methodology. I don't really blame people who are doing this independently and don't have access to librarians and information specialists who can help guide them, but it means that the quality of the evidence we're seeing is just terrible, and that's just the reporting side. The actual quality of the search strategies themselves I have assessed in the past, and it's pretty bad. I would say that maybe 10% of the searches I've seen are relatively decent, but most of them are not, and most of what you see is just two or three keywords stuck together with ORs and ANDs, kind of at random; very few have the kind of detail, the kind of sensitivity, that's necessary to produce a thorough and representative systematic review. So I think it's a huge problem; that's why we're frustrated, that's why librarians get involved, but it's kind of scary to behold. Yeah, Rafael has an interesting question here; it's about the topic we've been discussing together as well: are you making a difference between reproducibility and replicability for this trial? Can you maybe explain that?
Yeah, that's a good question. For the trial that I'm talking about in this presentation, the randomized controlled trial, we are not looking at replicability. What we are doing is looking at four measures of reporting according to the PRISMA-S guidelines, which is an extension specific to search strategies, and then we are looking at the ROBIS tool (risk of bias in evidence synthesis, basically), specifically domain two, which is more of a quality measure. So for this randomized controlled trial we're looking solely at reporting and at quality: basically, what kind of bias might have been introduced into the search. For my prior study, which Lex was referencing and which I referenced as well, we were taking the exact search strategies as described, trying to figure them out, and then trying to reproduce them exactly. One of the issues is that there's a data sharing problem in systematic reviews, similar to the data sharing problem that Florian described earlier, which is to say that we don't have the full data for a lot of systematic reviews. We cannot, 99.999% of the time, figure out what was actually retrieved by the original search strategies, because people are not sharing the results of those searches; they're only sharing the end-stage data, maybe the list of included studies. They don't say where they got them, or what else was in that sample, et cetera. So we can really only gauge it by looking at the overall number of results: if they said that they had gotten 200, and when we repeat it we get over a million, then I think we can tell it was not reproducible.
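(Editorial aside: a toy Python illustration of that count-based gauge. The tolerance threshold is an assumption; the study's actual criteria for calling a search reproducible are not specified in this talk.)

```python
def roughly_reproducible(reported_hits, rerun_hits, tolerance=0.10):
    """Crude gauge: does re-running the published search strategy
    retrieve about as many records as the review reported?"""
    if reported_hits <= 0:
        return False  # nothing meaningful to compare against
    return abs(rerun_hits - reported_hits) / reported_hits <= tolerance

# The review reported 200 results; the rerun retrieved over a million.
print(roughly_reproducible(200, 1_000_000))  # False: clearly not reproducible
```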
One interesting suggestion in the chat, from Ellison; you may like it, Melissa. She asks: should journals send systematic reviews to librarians first, to decide, before bringing them into peer review alongside the other peer reviewers? I'm not going to argue against that. I do think there's a problem with the scale, the number of librarians available to be involved, but there are some journals that actually employ librarians to do this, and I think that is a model other journals could adopt, because it's a huge problem, and it's an embarrassing look for the researchers who put down that they searched "EBSCO". The last question goes to Florian; he raised his hand, he wants to ask you a question as well, I guess, and then we move on to the panel discussion; this is a nice boundary between the two. Yes, it was just to say: you said that it was the same as in clinical trial data sharing, but I think it's not exactly the same, because in clinical trials you are sharing individual patient data, which are very sensitive, and for systematic reviews there is no sensitivity at all. Plus, systematic reviewers are already data parasites, so it's like a moral duty for them to share things, because they already work from things that are shared. So I would say that it's even worse for systematic reviews than for clinical trials. Do you agree, Melissa? I see you smiling. I think they're both really bad, but it's so easy to share systematic review data in comparison; it does seem worse, just because it's not hard, and people still don't do it. Okay. Let's move on to the final piece of our interesting meeting; at least it's interesting to me, and I hope it's interesting for everyone. That is a short panel discussion; we'll spend the last five minutes on it. I prepared a few questions, and the rule of the game is: please keep your mics open, give a short and crisp answer, and then I hope for some disagreement so the panelists can debate among themselves; the audience can also add input in the chat. My first question to you is: should peer review be open, open in the sense of open identities, meaning the identities of the authors and of the reviewers are known? Especially the latter is quite sensitive, of course. Please say yes or no, and give some reason why. Can I start with you, Caitlin? Yeah, so no, I don't think it should be open, largely because the world is small, and people may be less inclined to give very critical reviews if there are power dynamics at play, or if they are attempting to manage their relationships with others in their niche field. Yeah, okay, that's the usual objection: that especially for early career researchers it might damage their career. Gauri, do you agree? What is your take on it? I almost think that you chose me because you expect me to give the opposite answer to Caitlin. This was a random sample, really. Right. Well, I actually think that it should be open, though of course I speak more from an idealistic point of view. It needs to be carefully looked at, and perhaps it might not be as conducive in disciplines where the communities are far smaller and the repercussions can be much more severe. But as the general short, crisp answer that Lex has asked for: I am a proponent of open science, and I do think that open identities should at the very least be seriously considered, and that we need better empirical data that goes beyond surveys, because at the moment a large part of the data describing these concerns is based on perceptions. So I'll end there. Thank you. Florian, what is your take on it? I would say yes, because after all the world is big, and it's much easier to give an unfair peer review when you are not known. Yeah, that's another great argument, thank you. What about you, Melissa, do you agree with that?
I say yes, I agree with Caitlin as well, but also, from a researcher perspective, open reviews are a treasure trove of data, so I can't be upset about that part, and I think it makes people more accountable. Yeah. Well, recently there has been a great review of research on open peer review, and it shows what Gauri said already: we have no clue whether it works or not, and whether it's better or not, so we need more research on that. The review only shows that some people like it and other people dislike it, but who cares; what matters is whether it works. So let me move on to the second and also last question, which you might find interesting as well, and that is: how can peer reviewers best be rewarded for their work? Please name one incentive and give one reason. I'm going to start with you, Melissa, because you are struggling with finding peer reviewers. I think it of course needs to go into promotion and tenure, so that it's something we reward. I really used to like Publons, and now it's sort of obfuscated behind Web of Science, but that was one step in the right direction; it would be nice to have it even more public. What is your take on this question, Florian? So, two things. First of all, I would say making the review public and also giving it a DOI. One of the things I have been proudest of in my career was being a peer reviewer for 2D329, published in the BMJ; I think it was a very important study, and I was very, very happy to be a reviewer of it, which is why I am mentioning it here. That, I think, is something important. Yeah, well, that's clear: you are arguing for DOIs, so indirectly you are arguing for open peer review as well, at least on the reviewer side; otherwise it doesn't help your career. Caitlin, what is your take? I would absolutely agree with the need to have it reflected in promotion and tenure, as well as in performance evaluations. I would also say publishers and editors should enforce high standards for peer review, so that the prestige of, and the work that goes into, a high quality peer review is recognized by those committees. If a journal is known for providing perfunctory or substandard peer review, then even if the individual reviewer does a great job, that line on their CV may not resonate in the same way, so editors really need to uphold standards for peer review. Thank you. Gauri, what is your dream incentive? Indeed, I am very much in line with what has already been said. I think it definitely needs to go into rewarding and recognizing the performance of researchers, in funding applications as well. However, we need to make sure that we are not just blindly rewarding peer review, because we don't want to end up down the slippery slope of quantity over quality. Wow, sensible words from all of you. We've come to the end of our meeting. I'd like to thank you again, dear speakers; it was wonderful. There was complementarity between the contributions, it was lively, and it made me think; it was thought provoking. I hope the audience appreciated this as well. I'd like to thank the audience too; it's not easy, at least in Europe, to stay on a Friday afternoon until the working day has ended. It's not an ideal time for a meeting, and yet it was well attended. Thank you so much. With this I say goodbye to all of you; please enjoy your weekends, and see you later.