Welcome, we are live. My name is James Wilson from the Research on Research Institute in the UK, and it's my pleasure to be moderating this session, which is billed as a debate. It's probably going to be a set of slightly contrasting perspectives rather than a full-on formal debate, but I think it'll be illuminating nonetheless, and we've got a great panel to take us through some different insights and angles on one of the biggest metascience questions: the state and fate of peer review, which has of course been even more visible and prominent on everyone's agendas over the course of the pandemic, when the use of preprints in particular has been one of the big science culture stories of the past 18 to 20 months. Peer review is of course a critical underpinning of the science system and a mechanism through which we at least strive for, if we don't always meet, some of the goals of rigor and objectivity in science. But we also all know, particularly those of us who are academics and live and die by peer review, that it is a system with flaws as well as strengths. And it's also a system that's under pressure from various directions. The demands on the peer review system, in line with growth in the overall science system, have swelled in recent years, and many parts of that system are creaking, if not breaking altogether, so there's lots to discuss here. What I want to focus on in the next hour is this specific question of when in the cycle peer review should take place. Are we doing peer review at the wrong point in the process? Should we do it differently? As I say, we're not going to have a formal debate with two people on each side; it's going to be more of a spectrum of views, and we hope to bring in as many of your comments and questions as we can along the way, so do make liberal use of both the Q&A function in Zoom and the chat function.
As I said, we've got a great panel to join us and guide us through these issues. I'll introduce them all now, and then more briefly one by one as they speak. We're going to hear first from Emily Sena, who is a senior lecturer at the Centre for Clinical Brain Sciences at the University of Edinburgh. Welcome, Emily. Emily will be followed by Liam Kofi Bright, who is assistant professor in the Department of Philosophy, Logic and Scientific Method at the LSE, the London School of Economics. And then we're moving right over to the other side of the world to Remco Heesen, who is a philosopher of science at the University of Western Australia. Remco just told me it's 10pm there at the moment, so he's doing very well to look bright and bushy-tailed; we'll try and keep him that way for the next hour. Last but by no means least, we're going to hear from Daniela Saderi, who is the co-founder and director of PREreview, a project set up to bring more equity and transparency to the evaluation of research content. So thank you all very much for taking the time. I think some but not all of you have slides, but hopefully we'll keep the four contributions to good time so that we have plenty left for discussion. I'm going to go first to Emily and ask you to kick us off. Over to you.

I can't see you all, so I'm hoping that you can hear me and see my slides. Thank you very much for having me; I am going to kick things off. So the question for the debate is: should peer review occur before or after publication? I don't think I've necessarily answered that question in this introductory set of slides, but I've tried to highlight some of the issues that we have in the process at the moment. I'll start with disclosures, and for this talk I think these are particularly pertinent. I am editor-in-chief of BMJ Open Science, where we do peer review in the more traditional sense.
I am also a co-founder of, and on the managing board for, PCI Registered Reports, Peer Community In Registered Reports, which is a platform for reviewing preprints. Just a bit about my perspective, so you understand where I'm coming from: I am a neuroscientist, and I'm generally interested in the modeling of human diseases in animals in the laboratory. But I take a meta-research approach to the work I do, so lots of systematic reviews trying to understand what makes experimental models valid and useful to translate to the clinic. If we think about journals and why they are considered important: well, research is incremental, and journals serve as the permanent record of science. A research article should, and I underline the word should, contain sufficient information to allow others both to replicate the research and to evaluate what's been done, and articles also help us learn. The peer review process attempts to validate the methods used in studies, and the scientific processes, and it is seen as a marker of credibility. The question that's often asked is: was that study peer reviewed? And if the answer is yes, I think people interpret that as meaning it's a credible piece of research. Articles are also the mode by which we evaluate our scientific output as academics, and journals, which is to say the editors and the publishers, are to some extent the gatekeepers of this process. This is maybe a slightly naive view, but I think the peer review process, and the whole process of how we disseminate our research, worked quite well for the nineteenth-century scientist, for whom the incentive structure was really, as this quote here puts it, "I must find the explanation for this phenomenon to truly understand nature." Now it's more like "I must get my results to fit the narrative so I can get my paper into Nature."
And I think this incentive structure has a huge influence on how we present the research that we do. To give you an example: I said I'm interested in the modeling of human diseases, and this is an example of how our research in the past hasn't always had as much impact as it could have done. This was a study published by colleagues based in Australia, who did a large systematic review looking at all interventions tested in models of stroke, and they identified over 1,000 different interventions. Of the interventions tested in animal models of stroke, 374 were shown to be effective in those models, just under 100 were taken forward to be tested in clinical trials, and only one intervention, clot-busting treatment with thrombolysis, was shown to be effective in trials. So we've got this huge attrition, plus some additional interventions that don't have supporting animal data. And to be honest, presenting the data like this is slightly disingenuous, because it's not that the animal studies showed an effect of thrombolysis and it was then taken forward to a clinical trial; a lot of the animal studies were conducted after the trial of thrombolysis was completed. A lot of my research has been trying to understand the different sources of bias that arise in preclinical research. Like I said, we do lots of systematic reviews, and we're essentially assessing the peer-reviewed literature, looking at how things were reported and identifying the impact of potential sources of bias. There are lots of different reasons why effects might be seen in an animal study and not translate to a clinical setting. For the purpose of this talk I'm focusing on one threat to the validity of research: internal validity, that is, the strength of the cause-effect relationship in the study.
Are we seeing the effect of an intervention because of the drug or intervention itself, or is it because of other, unknown sources of bias? We've assessed the reporting of measures to reduce risk of bias in the preclinical literature, not just in stroke but across lots of different disease areas. In this example I'm showing you the reporting of randomization and blinded assessment of outcome, two key methodological factors for ensuring the robustness of a study. Across stroke, motor neuron disease, Alzheimer's, Parkinson's, EAE, which is a model of MS, and glioma, we see very few studies report randomization or blinded assessment of outcome.

Emily, I'm very sorry to interrupt you. There's a black bar at the top of your screen.

It's not just that these biases are prevalent; it appears that they're also important. In meta-analyses we have stratified the data, comparing studies that do and don't report measures to reduce risks of bias. And we see in this dataset, an example taken from a stroke review that we did, that studies that don't report randomization are associated with much larger estimates of effect than studies that do take this measure to reduce risks of bias. Now, you might say: well, you're looking at reasons for translational failure, and maybe some of this research just isn't published in good-quality journals; peer review should pick up when people aren't randomizing or blinding their studies. In our dataset we've looked at the prevalence of reporting of measures to reduce the risks of bias across a sample of papers, sorting the journals by their impact factors. So on one side are the high-impact-factor journals, the single-name journals, and on the other the lower-impact-factor journals.
And essentially we don't see a relationship between journal impact factor and the prevalence of reporting randomization, blinded assessment of outcome, or an a priori sample size calculation. We do, however, see that the high-impact-factor journals are associated with reporting conflict of interest statements. Now, there are a number of improvement strategies in place to try and facilitate improved reporting, which journals have also tried to use to support the peer review process. The EQUATOR Network has lots of resources to help enhance the transparency and quality of reporting in health research. Specifically, for clinical trials we've got the CONSORT guidelines, and for animal studies we've got the ARRIVE guidelines; these are just two examples of the resources out there, and generally they're endorsed by journals, funders, universities, and various societies, who say: we ask our authors and our peer reviewers to ensure that manuscripts are reported in line with these guidelines. But it appears that endorsing guidelines doesn't necessarily translate into improved quality of reporting. In our research group we've conducted two different studies with different publishers looking at the implementation of reporting checklists: an observational study and a randomized controlled trial. With Nature it was an observational study, a before-and-after comparison around the implementation of their reporting checklist, to see whether implementation led to improved reporting of four key measures to reduce risk of bias. None of the papers reported all four before the checklist, and there was a slight improvement after. And in a randomized controlled trial with PLOS, we randomized authors to being requested to submit a checklist, with success defined as reporting all of the ARRIVE items.
In neither the control nor the intervention group did we see any manuscript meet all of these items. So what we know is that journals requesting authors to complete an ARRIVE checklist doesn't lead to improved reporting. The introduction of a checklist, at least at the Nature Publishing Group, was associated with improved reporting that we didn't see at other journals, though the conditions were obviously different. Improvement strategies which focus on priority aspects of reporting might have greater success, but I think this is an empirical question. As it stands, I believe that peer review is probably not fit for purpose. There's very limited guidance or framework around what to assess, there are issues around credit for researchers doing peer review, and there's very limited training, although it does exist in some spaces. And this is the purpose of this discussion today. Things to consider when we're thinking about peer review include when it happens, and whether we are maximizing impact. The title is "before or after publication", but there are also arguments around before or after even conducting the study itself: should we be doing Registered Reports, submitting introductions and methods for peer review before we even do any data collection? And there is the question of who does peer review. There are important questions here around inclusion and equity. If we think about the status quo of journal editors selecting reviewers: traditionally there are not very many underrepresented groups in the journal editor space, and editors generally select people who are like them, so you've got lots of underrepresented groups not included in this process as much as they should be.
And I guess it's important for us to ask what it is we're trying to achieve. Does what we're trying to achieve, in terms of giving research studies credibility and improving their rigor, match with how we are currently doing peer review? I would argue maybe not so much. So finally, I think the development and implementation of alternatives to the status quo will need resource. Like I said, some of these questions are empirical questions that we could answer, but we need to do that research. Research is required to determine the most effective approach, and we need to engage with alternatives so that we have that data. I think education generally will help, including training in critical appraisal, which is not as common as it should be. And I suspect, as with most things, rewards and incentives are really what is going to drive change. I'm going to end by saying that I think any experimental design can be subverted, but what's important is knowing how to recognize when that happens. Thank you very much for listening, and thanks to my team.

Great, thank you very much, Emily. That was great. We were having an issue with your slides; there was a strange grey box floating over the titles of many of them, so apologies, there were various messages about it in the chat.

Sorry, I didn't realize that my sound was muted as well.

It's not the end of the world, but if you're happy with the idea, we might get a PDF copy of them to share. Most of the content was there; it was just the titles that were being obscured. Someone in the audience has identified the cause of the problem, and hopefully it was just with your setup; if it happens with any of the other speakers, we'll try and shout early on so we can catch it. Anyway, that was very interesting. Now, I know we've got a couple of questions.
I'm conscious of time, so I'm going to take just a couple of very quick, directly relevant questions from the Q&A, and then try to make sure we get through all four talks before the hour is out. Samuel Fletcher has asked Emily: to what extent do you take your empirical results to be discipline-specific, or generalizable to many other disciplines? There was a bit more of this in the chat as well, on the difference between medical and biomedical sciences and, say, social science.

Yeah, it's a really good question, and to be honest, I don't know. I think we need to do the research. The research that we've done so far has only been in the biomedical space; it has been with PLOS and Nature, these kind of generalist biomedical journals rather than field-specific ones. How generalizable that is to other areas, I personally don't have the expertise to say, and I think collaborating with others who work in those domains will help us. There are lessons to be learned from other domains; physics and maths often get brought up as doing things differently. One of the problems we have is this kind of siloed working, and that interaction, to ensure that what we do is more generalizable, I think will be helpful.

Great, thank you. I imagine our next speaker, Liam Kofi Bright, may also pick up on some angles from other parts of the disciplinary landscape, so let's hand over to Liam to give us the next take on this debate. Liam.

Oh, thank you very much. Just to say that my presentation leads immediately into Remco's presentation, so save questions until after Remco's; that means he has to answer them, which is great. And also, if you try to tell me you don't like the slides or something's wrong: it's not that I can't hear you, I'm just ignoring you. I don't value your input.
So let's get going; I assume everyone can see the slides. I'm going to be presenting research which is joint between Remco and myself, and some bits, which I'll indicate, were also joint with Marcus Arvan, who can't be with us today; he's a philosophy professor somewhere in America, I think. Okay, so this is largely based on our paper "Is Peer Review a Good Idea?", and it takes up some very similar themes to what Emily was just talking about. We're considering pre-publication peer review in particular: this charming process whereby, before you can get a paper entered into the public record of science and validated, in the way Emily described, people take it that a paper is in the literature, with that status increase, if it has been peer reviewed. The way we do it right now is: you submit to a journal, the journal has referees look at it, and if it passes and the editor agrees, then it gets to go in. Spoiler: we think that's not the best system, and that we'd be better off switching to a different kind of system, so that's what we're going to be discussing today. Oh no, it's not moving. It's a good cartoon, though. Well, that's it, we're going to look at this cartoon forever. Okay, good. So what we're going to do is compare the status quo with another way of doing things which, as Emily mentioned, is more typical of, not exactly the same as but more akin to, what's been happening in mathematics and physics. In this system you replace journal peer review with open, crowdsourced peer review. What happens is you post your manuscript on a preprint server ("preprint" might be a bit of a misnomer if that's just the way you get published), a server like arXiv, which is the famous example of this.
And having done that, that counts as you having entered your piece into the scientific commons, into the public domain. That's currently how things are largely done in mathematics and physics, and, as was mentioned at the start, there was a huge increase in this during the pandemic in epidemiology, virology, and some related fields. The proposal only adds that we would also allow for open review, in a similar way to open peer review journals now: members of the community can leave reviews of the paper, assess it, and give commentary on it, and that would be available with the paper. So if you go onto arXiv and download the paper, you'd also be able to see reviews and commentary from peers and fellow scientists. We want to compare the effects of adopting this proposal with the status quo of the pre-publication peer review system. And we think the literature makes it possible to do that comparison, based on empirical evidence that is already out there and an understanding of what the consequences of the different kinds of systems are. Now, we're not going to say that this proposal, self-upload to arXiv plus open review, is the best possible system; maybe there are other ways of organizing information sharing in science which would be better. But we are going to say that when you compare that system to the current one using the currently available evidence in the sociology of science, in metascience, in the various studies that have been done, our proposed alternative looks good on all of the factors we were able to identify (hopefully those are all the relevant factors, but at least they are the factors we were able to identify).
So we would suggest it would be no different or better on each of those factors, although there are some cases, which we'll discuss at the end, where it wasn't quite clear: the evidence didn't really decide the matter one way or the other, so further study is needed. And when we say "better", we're going to be looking largely or entirely at what we call epistemic consequentialism, which is the fancy philosophy phrase; what we mean by it is simply the effects on the production and sharing of knowledge, of reliable information, something like that. We don't think our argument depends on any nice views about exactly how you cash those things out; just, broadly: is science, or are scientific fields, doing the job of informing us about the world and the things we'd like to know about it, and getting that information shared out there? Okay. So what's going to happen now is, having introduced things in a smooth and yet humorous fashion to put you at ease, I'm going to go through some of the positive factors, and then, just at the moment it gets difficult, by sheer coincidence my colleague Remco will take over and handle the rest. Okay. So this slide gives an example of the kind of positive factors we're going to look at, and it gives you a sense of how we reasoned about this. What is it you want the peer review system, the journal publication system, to do? Well, one thing you want it to do is ensure that people have reason to quickly share information and discoveries once they make them. It sounds obvious, but it is the obvious purpose of journals: to ensure that information is shared, got out there, disseminated. And journals, you might think, play a role in that because we reward scientists accordingly; as Emily mentioned, the currency of science, as it stands, is to have things published in journals.
And so you might think: well, the incentive to share your information is tied to the journal system, so wouldn't it mess things up to get rid of pre-publication peer review? But actually, quite the opposite, as far as we can tell. It's true that the currency of science is being associated with a good manuscript, and as it stands that means a good manuscript that has been peer reviewed. But in maths and physics, where they switched to this arXiv system, it's just being associated with a good manuscript that's been uploaded to the appropriate archive and shared that way. The incentive to share comes from being seen to be associated with good work, not intrinsically from its going through the journal system; that just happens to be the social norm now, and that's the thing one could change. What the journal system does do is introduce delays, because there's a bureaucracy, the kind of overworked, creaking-at-the-joints bureaucracy that was mentioned earlier, associated with reviewing, getting editors to assign people, and so on and so forth. That's just time in which the work has been done but is not being shared and cannot be used by the scientific community. So the journal system is not what produces the incentive to share; that exists whether or not you have pre-publication peer review. What it affects is how quickly people are in a position to take up and make use of the information. The diagram shows various journals, and you can see there can be quite a substantial delay between a manuscript being prepared and actually getting out there. And finally, in some fields there's even an incentive to hold back, so to speak, to withdraw the manuscript from any public circulation while it's under review, because circulating it can break anonymity and make it more difficult to get published.
So, if anything, the current system sometimes incentivizes holding work back for this period, which obviously wouldn't be an issue if you could just upload to a preprint server. So on what might be thought of as the most basic, obvious thing you want journals to do, ensuring work is shared and disseminated, this is an example where, for the proposal we have of just uploading to arXiv, there's no evidence that it would hurt and some evidence that it would help. And that's, we think, pretty typical. Another example, and this is maybe the main thing we want to stress among the positives, is that the journal system as it now exists just takes up a lot of time and resources: the labor of people with PhDs, which is socially quite valuable labor, if we do say so ourselves. Maybe that time could be used otherwise. Right now there are people who serve as editors; they're usually also working scientists, and they have to take time out to do the job of wrangling peer reviewers. Then there's this kind of game of finding the last sucker, the person who's willing to say yes (I shouldn't say that; hey, I'm often the last sucker), the person willing to be the peer reviewer. They're giving their time, but not to a paper they've selected as the best use of their research time; they're giving their time because a professional courtesy, a norm, now exists whereby you have to do this sometimes, and so sometimes you give your time to reviewing. But we think that researchers are the best judges of how to allocate their time: what they read, what they want to focus on, what they want to give detailed feedback on and what they don't.
We think it would be better if, rather than this soft pressure of a professional norm that you do some reviewing service, which we are all now subject to, people just made decisions based on what they think is best for their research, their lab group, their intellectual advancement. People sometimes say to us: but won't that lead to there being less reading, less reviewing, less giving of feedback? And we think that's an example of a question where, well, we don't know. There are still incentives to read people's work and try to learn from their ideas and use them to improve your own work; those exist whether or not pre-publication peer review exists. That's not going to go away. We also think scientists tend to take some intrinsic pleasure in discussing ideas and exchanging opinions. So whether there would be more or less of it, who knows; that's an empirical question. But in any case, even if there were less of it, that might reflect the fact that, on average, if scientists weren't subject to this norm, they would rather allocate their work time to something else, and that's fine; we endorse that; we think they're the best judges. So that's another point where we think there's no evidence it would be harmful and some evidence of good, at least on the basic assumption that people are better at deciding how to allocate their own time than editors are. Relatedly, another problem with publication right now is the well-known gender gap, where men on average publish more than women, even when you factor in things like what tasks are being done outside of work, career stage, and so on.
There are explanations of this, with evidence for both of the following, and they can mix to some degree. First, there is gender bias in peer review; it differs by field, but there's evidence of it. Second, there's pretty strong evidence across many fields that women expect to be held to higher standards than men in the peer review system, and so spend more time doing things like ensuring the manuscript meets certain standards of writing caliber, improving the aesthetics and rhetoric of it, which isn't necessarily related to the scientific quality; they do this out of the expectation of harsher treatment from reviewers. There's very good work on this by Erin Hengel, an economist at Liverpool, if you want to look into it. Now, if you switch to a system without pre-publication peer review, whether because women no longer expect this treatment and so get more work out there, or because they no longer suffer this unfair bias against them, things may equalize. Either way, we take it that this is a sort of arbitrary bias skewing what research is out there, and insofar as gender, at least in some fields, might be correlated with the kinds of perspectives you bring to bear (this might be especially relevant to the social sciences), it's a skew in what kind of knowledge is available. Zeroing that factor out, by letting people simply upload their work rather than having to jump through the hurdle of impressing referees, gets rid of that skew, or at least reduces it, and we take that to be a good thing. Another huge resource the current system takes up is literally just money. Scientific publication is just much more expensive the way journals do it than the way open-access archives like arXiv do it.
Elsevier and Springer are among the most profitable companies in the world, precisely because they don't do that much but take a great deal of labor from PhDs for free. I don't know what their relationship is with metascience; I think they're a big scam. I don't know if I can say that, but they're a big scam. So it's really not clear to us that the value added by these journals is worth more than what we could do with those savings elsewhere; in fact, we strongly suggest it is not. But at the least, whatever your judgment on that, as a matter of fact it is much cheaper to switch to something like an arXiv system and get rid of the publishers' profit margins, which capitalists use to buy cocaine or whatever it is they do. Right, so get rid of that and replace it with the arXiv system. And then, and this is where it starts to move into the things we'll be discussing next, there's this other factor: right now, as has been mentioned a few times, the peer review system is how we decide who gets promoted, what kind of jobs people get, who's going to be eligible to win grants, and so on. How well you've done in the past, according to the peer review system and getting work out there, affects what you'll be able to do in the future, and this generates processes of cumulative advantage, where the people who did well in the past do better in the future too. That's because we're using the system as a means of allocating credit. And you might break credit down as follows. There's the sort of credit you accrue because people read your manuscript and thought it was good, or the work was very valuable, or you helped contribute to solving a problem or synthesizing something which lots of people made use of.
That's the long-run credit — the considered opinion of your field or the scientific community regarding the value of your work. But then there's also a kind of short-run boost to your image: the mere fact that a publication is in Nature or Science is in itself impressive, it just looks good on the CV — people literally evaluate a CV line that way. That was even in that somewhat cynical cartoon which Emily showed. And so basically, we think that if you want to be using this kind of credit as a way of allocating scientific careers at all, it's the long-run credit you actually care about. The short-run credit is only valuable if it's going to be a good proxy for the long-run credit, because what you really want to know is: are they doing actually useful work, not just: is it in a journal which I associate with quality. So switching to an arXiv system is already, automatically so to speak, placing more emphasis on long-run credit. There's more to be said here, but we might also be suspicious of the role of short-run credit itself. Liam, sorry to cut across, but we're running tight on time, so I'm going to — This is my literal last slide. Okay, great. Thank you. So, okay. The last thing to say is, as was mentioned by Emily as well: the current way the system works gives this kind of outsized influence to journal editors, who are able to decide who reviews, what gets published and what doesn't. It's a small number of people, the journal editors, deciding what work gets out there, or at least who has a chance of getting their work out there.
And it's more democratic to just have the whole scientific community judge that. That might sound like I'm just making a moral-political argument, but it's not just that democracy is nice — we actually think that will lead to a better, or more accurate, evaluation of scientific work. And on that note, I'll switch over to Remco. And just to say, sorry, we are running a bit late on time, so Remco, if you can keep your half as concise and punchy as possible, that would just give us time to hear from Daniela and to have some discussion. Sorry about that — this is a 60-minute session; I think I've been told we can go over by up to 10 minutes, but no more than that. All right, I'll make an effort to that effect. So, can you see my slides? And there's no longer a weird bar blocking things, like there was with Emily's? Great. Okay, so thanks Liam. I'm just going to do the last positive factor and then briefly run through the neutral and the uncertain factors. So this key point here about epistemic sorting is in a way a defensive move — the thought comes from a defender of the current system, trying to put forward a strong case for why we have journal peer review. And it's this idea that we're trying to sort papers into journals based on a hierarchy of journals that reflects some notion of quality: the best papers go to the best journals, the mediocre papers go to mediocre journals, and so on. So both outsiders and insiders can use the journals as an indicator — this is a kind of signaling function, so you can easily find the highest quality work. Now, there are some caveats about how valuable it is to have such a thing, and whether we even can have such a thing, but I've just been told that we have 60 minutes instead of 90 minutes for this session, so we shall just skip over that.
The key point I want to make is that we think there's reason to believe that the post-publication model Liam describes can actually do this epistemic sorting role better than, or at least just as well as, the current system. The argument is based on the Condorcet jury theorem, which is a famous mathematical result, the key feature of which is that if you need to get an accurate opinion on something, it's better to ask more people than fewer. Right, because everyone has maybe some little bit of relevant insight, and if you just take a vote, then you have a better chance of getting it right if you have more people in that voting population. That's roughly the Condorcet jury theorem. Now why is that relevant here? Because we think that with a post-publication peer review model, you're going to have on average more reviewers per paper than with a pre-publication, journal-solicited peer review model. Why would that be? Basically two reasons. First, you're opening up your pool of reviewers by letting in everyone that wants to review, not just the people who get actively invited by an editor. The second reason, maybe the most important one, is that because you're opening up the peer review system, you can get more reviewers per paper with the same amount of work. On the current system, a paper might receive two reviews and get rejected from a journal, then receive two reviews at the next journal and get accepted there. So each journal is basing its decision on two review reports, but in an open peer review system, with the same amount of work, you would have had four reviews for that work. So we expect, for those two reasons, that the average number of reviewers per paper would increase if you move to an open post-publication peer review model. And because of something like the Condorcet jury theorem, that's a reason to expect that it'll actually make better quality judgments.
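Remco's appeal to the Condorcet jury theorem can be made concrete with a small simulation (an editorial sketch, not part of the talk): suppose each reviewer independently reaches the correct verdict on a paper with some fixed probability above one half, and we take the majority verdict. The theorem predicts the majority becomes more reliable as the number of reviewers grows.

```python
import random

def majority_vote_accuracy(n_reviewers, competence, trials=20000, seed=0):
    """Estimate the probability that a majority of independent reviewers,
    each correct with probability `competence`, reaches the right verdict."""
    rng = random.Random(seed)
    correct_majorities = 0
    for _ in range(trials):
        # Each reviewer votes correctly with probability `competence`.
        correct_votes = sum(rng.random() < competence for _ in range(n_reviewers))
        if correct_votes > n_reviewers / 2:
            correct_majorities += 1
    return correct_majorities / trials

# With individual competence 0.6, majority accuracy grows with panel size:
# two reviews per journal vs. four accumulated reviews per paper matters.
for n in (1, 3, 9, 25):
    print(n, round(majority_vote_accuracy(n, 0.6), 3))
```

The same simulation also illustrates the independence caveat Remco raises next: the guarantee only holds if reviewer votes are probabilistically independent, which is exactly what visible prior reviews could undermine.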
Now, the standard Condorcet jury theorem has this key assumption of a binary setting; you might want to relax that assumption, and then you get a different version of the theorem. Again, I'm skipping over some details in the interest of time. The other important assumption I want to highlight here is probabilistic independence of the reviewer judgments — the reviewers have to, in a probabilistic sense, make their judgments independently. You might worry about that, especially in an open peer review model: will the judgments actually be probabilistically independent if reviewers can see each other's reviews? It's a complicated issue. We have a bunch of things to say about it, but the most important one, I think, is at the bottom here, which is that if you're a genuine expert being asked to review a paper, then you should have some kind of independent line of reasoning for why you think a particular bit of work is good or bad. That's kind of just what it means to be an expert: you're not just parroting other experts' opinions, you have your own reasoning — as good or bad as it may be in a particular case — for thinking whatever you think. Okay, great. So based on the Condorcet jury theorem and some related arguments, we conclude that open peer review might do a better job at epistemic sorting than closed pre-publication peer review. That's the last of the positive factors. There are a couple of neutral factors, which I'll go over very quickly, again in the interest of time. Fraud detection: we just think peer review is not really the time and place. Peer review, as they say, is neither a replication nor a lie-detecting device. So you need to have something else if you want to detect fraud, and for our purposes here it's just a wash. Similarly when it comes to herding or faddishness: you might think that getting rid of journals would discourage people from going after fads or popular topics.
You might think those are a problem — we don't take a stand on that here. But we don't actually buy into that argument, because we think there's always reason to follow fads in a credit economy, where people need to draw attention to their work. All right. So then there are the uncertain factors, and they're quite important — these are some of the key objections people have raised, where we think we don't really have a full response, because there's just not enough evidence on the issue currently. The first of the two is this one on the role of prestige bias. The thought goes: the present system lets small fish reach the headlines — a graduate student can publish in Nature. If you take away the journals, if you take away Nature, then how is a graduate student ever going to get attention for their work? Everyone's just going to go read the papers by the most famous people. The first part of the response is just what I already said: we just don't really know. We do know that places like Nature also suffer from prestige bias. So it's a question of how this shakes out on balance, and it's actually not clear what the precise answer is — more research is needed. The second point is that if you're so concerned about graduate students, or marginalized researchers — researchers in the global South, at smaller institutions, whatever it may be; there are all kinds of researchers, the vast majority in fact, that are not getting a ton of attention for their work — then just sticking to the present publishing system is not a very satisfying response. If you're genuinely concerned about these marginalized researchers, then we should actively be looking at reforms that help that group, probably mostly orthogonal to the way we organize peer review — for example, something like more randomized funding distribution. So, the other uncertain factor that I want to highlight is this.
Once you open up review, you've got kind of a social-media-like model of science publication, and so you should be worried about some of the problems that have plagued social media. For example, mobs that come and distort reviewer scores because they have some sort of political axe to grind, or for some other reason that is orthogonal to the epistemic goals of science. That is, I think, a real problem, something we need to be concerned about, especially if we let anyone who wants to post a review on this arXiv system. Right, you could of course restrict access, but then I think you lose a lot of the benefit of what we're proposing here, which is, again, to bring different views into scientific issues. One of the benefits we're hoping for is that if a particular field is a bit stuck in a rut, then some outsider can come in with new ideas — and that's exactly the sort of thing you risk losing if you start restricting access to this open peer review system. So we don't like that as an answer, but it does suggest a direction to go: distinguishing between recognized experts and what I like to call putative experts, which is everyone else. Just having two separate scores that keep track of these two different groups can give you a sense of where the opinions you're seeing are coming from. If they come apart — the recognized experts' and everyone else's opinions — that doesn't tell you whether this is the outsiders calling out groupthink or the outsiders trying to manipulate the score, but at least you'll know that you have to pay extra attention in that particular case. Great. So that's, in highly abbreviated form, how we respond to these two uncertain factors. I'll just leave it here so that Daniela has a chance to say a few things as well. Here's just a summary of the factors we discussed.
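The two-score idea Remco describes — tracking recognized-expert and putative-expert opinion separately so readers can see when they diverge — could be implemented as simply as this (a hypothetical sketch; the names and structure are illustrative, not from any platform discussed):

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Review:
    score: float       # reviewer's rating of the paper, say on a 1-5 scale
    recognized: bool   # True if the reviewer is a recognized expert in the field

@dataclass
class PaperScores:
    reviews: list = field(default_factory=list)

    def add(self, score: float, recognized: bool) -> None:
        self.reviews.append(Review(score, recognized))

    def summary(self) -> dict:
        """Average the two groups separately, so a reader can see at a glance
        when recognized-expert and putative-expert opinion come apart."""
        rec = [r.score for r in self.reviews if r.recognized]
        put = [r.score for r in self.reviews if not r.recognized]
        return {
            "recognized": mean(rec) if rec else None,
            "putative": mean(put) if put else None,
        }

paper = PaperScores()
paper.add(4.5, recognized=True)
paper.add(4.0, recognized=True)
paper.add(1.0, recognized=False)  # a big gap flags the paper for a closer look
print(paper.summary())  # prints {'recognized': 4.25, 'putative': 1.0}
```

As Remco notes, the gap itself is ambiguous — outsiders calling out groupthink looks the same as outsiders gaming the score — so the display is a flag for attention, not a verdict.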
And I'll hand it back over to James and Daniela. Thank you, Remco, very much, and sorry to have had to tighten the time a bit. We'll move straight to Daniela. There have been lots of questions picking up on that last issue of uncertainty about the potential for trolls and, you know, pile-ons in the open review space, but we'll pick that up as a discussion point for the whole panel after we've heard from Daniela. So over to you, Daniela — sorry again about the time. Hopefully that leaves me something to discuss. So, thank you everybody. I have shared my slides, and I will share my screen, but I won't actually go over everything because I do want to leave space for the debate. Although it's interesting, because I think we all have very similar views, so hopefully we can disagree on something, otherwise — I'm just kidding. Okay, so we all know the question. And in debates, as I've heard it, we are supposed to answer the question we wish we were asked, so I'm going to go ahead and do that. The presentation is available online — I put the link in the chat — so you don't need to read everything here. We heard a lot about what we think is wrong with the peer review system, and that is the question we try to think about a lot at PREreview. The peer reviewer pool is small and very homogeneous — not just in gender, as Liam was touching on, which has been easier to demonstrate with the data we have available, but it's also very homogeneous in terms of geographical distribution; we can expect it to be very homogeneous from a racial and ethnic point of view; and disability and other dimensions of diversity are not represented. In fact — and I don't have links to publications here, but I can share them.
Editors also tend to be male and in the mid-to-late points of their research careers, and reviewers are often opaquely selected by journal editors as experts in the field. And yet there is no formal training — more effort has recently been put into training for how to review, but "expertise" often revolves around elements of prestige and years of engagement with research. Those can correlate with expertise, but it doesn't mean that someone who has been in research for many years will actually be a good reviewer. And it is also labor that goes unrecognized, for the most part. So, one point that I want to reinforce in all these discussions — many of these things have been discussed already — is that peer review, and scholarly participation generally, doesn't happen in a vacuum. It happens in the context of a huge mess that we have built through history, and that includes many systems of oppression that manifest in the peer review process. So often we talk about it like: let's just very logically and programmatically see how we can fix peer review, but we tend to forget that while we can try to make a lot of changes, they also need to go hand in hand with systemic changes that lower the barriers to participation and access and so forth. So at PREreview we have a preprint review platform — I have some slides about it, but I'm not going to go into details — and we really wanted to make it our mission to bring more equity and transparency to peer review by engaging and empowering researchers, particularly researchers who have been traditionally excluded from the process.
And the review of preprints is the means to do that. If I have to try to answer the question right now, I guess I am in the post-publication peer review camp, but just because I think the preprint is the publication — and I'm going to make more of that point at the end. So, this is just to say we really wanted to empower researchers to engage in peer review, and so the platform is kind of the home for the communities we want to see flourish. We organize trainings — we really believe that training, resources, and mentorship all go hand in hand — and community review calls, like this one on Zoom, around preprints, to bring more perspectives to the publication at a point in time at which change can still happen. This is just a screenshot of the platform, where basically anyone with an ORCID iD can make an account and review preprints across preprint servers, and do that rapidly by answering a series of yes-or-no questions. This screenshot shows a review that was a comment on bioRxiv, along with five rapid reviews, which are just answers to some yes-or-no questions that capture the essence of the review — or they can also write a full-length review. One aspect — we talked about trolling, someone raised that issue — is that because we want to really think about how to bring everybody in, we also have to recognize vulnerable groups and identities. And I once considered myself one of them: an early career researcher afraid of putting my comments out there.
So we have built-in anonymous posting. However, there is also built-in accountability, because as you join PREreview you get assigned two personas: one is your public persona, which imports all public data from ORCID and connects to your ORCID iD; but you can also choose a pseudonym that is assigned to you as you join, in the form of a random color and a random animal — so you may end up being Yellow Octopus. With that pseudonym you can decide to request feedback or provide feedback on PREreview; however, if there is a violation of the code of conduct, the community itself can report it, and then there is a process to moderate it post-publication. So you can post safely, but you can also be blocked or removed from the platform, because the back end will always connect to your ORCID iD. We can talk more about other ways that we can prevent trolls, but — and I'm not going to go through these, they're just further details about the platform; oops, apologies — for us the important thing is to build a space in which research communities can understand and buy into some of these core values, but also find their own sense of belonging and their own purpose, and shape culture together. And so on PREreview we just launched Communities, which means that different groups can start a community on the platform — and we have some initial settings in this MVP that can be changed.
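The "anonymity with accountability" pattern Daniela describes — a public pseudonym that the server can still map back to a real account for moderation — can be sketched in a few lines (a generic illustration, not PREreview's actual implementation; the word lists, secret handling, and derivation scheme here are invented):

```python
import hashlib
import random

# Illustrative word lists; a real platform would use much longer ones.
COLORS = ["Yellow", "Crimson", "Teal", "Violet", "Amber", "Indigo"]
ANIMALS = ["Octopus", "Heron", "Lynx", "Pangolin", "Newt", "Ibis"]

def assign_pseudonym(orcid_id: str, server_secret: str = "keep-this-private") -> str:
    """Derive a stable color+animal pseudonym from an ORCID iD.

    The public name reveals nothing about the reviewer, but because it is a
    deterministic function of the ORCID iD plus a secret only the server
    knows, moderators can always trace reported conduct back to the account
    and block or remove it."""
    digest = hashlib.sha256(f"{server_secret}:{orcid_id}".encode()).hexdigest()
    rng = random.Random(digest)  # seed the picker with the hash
    return f"{rng.choice(COLORS)} {rng.choice(ANIMALS)}"

# The same researcher always gets the same pseudonym across sessions.
print(assign_pseudonym("0000-0002-1825-0097"))
```

The design point is that accountability lives entirely server-side: nothing in the public name lets other users de-anonymize a reviewer, which is what makes the space safer for early-career researchers.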
But the goal is to give more freedom to these communities to grow in their own space on the platform when providing reviews — that could be things like customizing the rapid review questions, or organizing their own events — while being part of this network of preprint reviewers who share the values of being transparent and constructive, and of wanting to work together towards a better way of providing evaluations. So these communities are either already there or growing on PREreview. And I want to leave you with two more elements that I think are very important to this discussion. We talked a lot, rightfully, about equity, diversity and inclusion, and these things. We really think that equity is the most important lens through which we should be thinking about the future of scholarship, regardless of whether review happens before or after publication: thinking about how we can work together to bring tools and resources that actually lower — or remove altogether — the barriers that have been put up for research communities that have not historically been brought to the same level as others. I am just going to skip over this, and I just want to end with trust, which is the other element that we ultimately want to bring to these new ways of doing peer review and evaluation, whenever they happen relative to publication.
However, a lot of the discussion around trust in the open science movement is always: how can the establishment trust that these new ways will work, that these new reviewers will actually be experts, and that it's going to be a rigorous process? We want to flip that thinking: how can we actually work together to build a system that communities that have been traditionally excluded can trust — to come and not be exploited, not have their knowledge appropriated, and really count on having their voices centered and put forward? So I think we need to work together and think about who we do reform with, and here are just some examples of things that we're doing with different communities. And so my answer to the question is: I think it doesn't really matter when we do peer review — I understand the argument for doing it as early as possible, even before the preprint, to be honest, and we can have that discussion — but the key thing is just this: "if we build it, they will come" will definitely lead to homogeneous communities coming, because we've seen that over and over, so that will not work. I just want to leave you with that. Sorry, I took probably more time than I should have, but let's get on with the discussion. Excellent, thank you, Daniela, that was a great overview, and thanks also for all those resources and your slides that you've posted online, which are fantastic. So yes, we've got 10 minutes, folks, for discussion — we stretched the session. I think there was some confusion over whether this was a 90- or 60-minute session; it is now a 70-minute session. So hopefully all our speakers and our wide audience can stick with us for another 10 minutes.
I'm going to dive into the Q&A box, which I'm sure some of the speakers have also been looking at over the course of the talks. A lot of the questions here center on points that you've touched on, Daniela, and which also came up in Liam and Remco's presentation: whether these structures enabling more open online commentary can be protected and safeguarded against some of the bad behaviors and more troubling features that we see in other areas of online debate. For five or six years I used to edit a science policy blog on the Guardian newspaper's site, and I was very familiar with the challenges of moderating open comments on a big global news site like the Guardian. You've already suggested, Daniela — and Remco as well — some ways around this, but let me dive into the questions and pull out some of the points being made. Heather Douglas has posed the question: how do we keep post-publication review from becoming Reddit cesspools? Maybe this is about having open IDs, but who moderates it? And she's raised the very important question of the risk of inappropriate behavior being more directly targeted against women or other minority groups, and also, I guess, researchers working on topics that are politically controversial and may attract the ire of either left or right for whatever reason. So how do we protect these spaces and stop them being overtaken? Rose France is making a similar kind of comment, but also: how do we stop other bad behaviors that we see in conventional peer review — reviewer circles, paper mills, people just pumping each other up because it'll help them get on in other ways? Olavo Amaral is offering a positive counter-example.
Can we solve some of those difficulties through hierarchical moderator models like Wikipedia, which obviously does have ways of weeding out inappropriate contributions, etc.? You don't want to hear more from me, so over to you — who would like to kick off? Emily, should we go back to you? We haven't heard from you for a while, but try to keep comments relatively punchy because we're down to seven minutes now. I haven't really addressed the question, actually, so maybe I'll open it up to the others, who have addressed this more directly. I guess I'll maybe add another question instead: is there evidence to date about trolling in post-publication review? Is it happening, or is it a fear that we are imagining more than is the reality? But I think Daniela and Remco have talked about this moderation, so it'd be good to pass the buck as well. Yeah, that is a very good question to me: in terms of the existing platforms like F1000, is this a problem now? If it is, I'm not aware of it. And I can say real quick that we have not seen it. We have ways for any community member to report a violation of the code of conduct; I still read the reviews as they're published, because I get notifications, and I have never seen it. I also want to say: to us it's not just about preventing a person from coming and saying "you should just get out" — real clear violations, right — it's also about how we provide criticism. Because in the culture of academia, I feel like we are raised to think we have to oppose the other person's opinion; it's about how we provide feedback, which is often not constructive and helpful, and makes the other person feel defensive.
So I think that, aside from having a code of conduct that is very clear — enforcing it and centering it in the messaging of the platform, or the environment, whatever the venue — it's also about making sure we build in training for how to provide constructive feedback. And so I'm just going to put in a link: we just wrote some guides and guidelines around that, with clear examples, like: here you say the same exact thing, but this is a way the author will be able to do something with, and this is a way you're just going to make them feel bad and nothing will happen. So there's a spectrum of bad behaviors that we should get rid of. Thanks. Thank you. Just a couple of further points on this. One: I really liked Daniela's suggestion, and I'm happy to hear that the fears largely aren't coming true there. And I think the thing that makes that work is this kind of reputational element to it, which also provides a means of enforcing the code of conduct. Some of the worst things in academia happen both in anonymous peer reviews and on sites like Econ Job Market Rumors or whatever — these very abusive online spaces in academia exist because you get a lot of anonymous commentary. And that's not a feature of these systems, I think. So I do think some of these fears are based on anonymous contexts, which pre-publication peer review incorporates, but which aren't as much of a feature here. But I wanted to pick up on the other thing, which is, someone asked: how can we ensure people are properly rewarded for this labor? Well, as mentioned in our slides, there are just huge monetary savings to be made by switching from the current system to anything like an arXiv system.
We would take some of that money and make this labor properly compensated — so it's not just people volunteering their time depending on how conscientious they are, but being rewarded for it properly, using our communal resources in a way we find more valuable. That would be my suggestion; that's the kind of thing we could do because we'd be freeing up so much in resources by switching to this system. Thanks. And I think that also goes some way to answering Bianca's question, which is about the reward system. Remco, anything specific? Otherwise I'll try to take another question before we wrap up. Let me just say one thing real quick: I'm really excited to hear from Daniela that she's experimenting with this mixture of parts being anonymous and non-anonymous. I didn't quite catch from your presentation if this also applies to authors. Someone in the Q&A mentioned — as part of one of the questions you just read out, James — the role that double-anonymous review can play in protecting against bias, and there's no reason in principle why you couldn't do that under an open model as well. I'm not necessarily advocating that, but it's something you could experiment with at least. The preprints are posted somewhere else and they have names, so it's just for the reviewers. Yeah, thank you for that, that's very helpful. We're just going to squeeze one last question in and allow all of you sort of 30 seconds on this before we wrap up and close. David has basically supported the proposal to move towards post-publication review models, but has raised an important question, which is: how do we coordinate initiatives in this space and make sure that we don't end up dividing and weakening our impact simply because there are too many different competing platforms and tools — both to do this, and to create rewards and incentives for doing it?
And I think most of us recognize that even as we try to participate in and support alternative models, it can be quite confusing and time-consuming to be filling things in and getting the credit on this platform or that for doing this or that. So, final thoughts on that. Again, I'm going to start with you, Daniela, because you're probably at the sharp end of this, as someone offering one such platform — quick thoughts from you and then from the others, and we'll wrap up. Can you repeat that question real quick? I was just looking at the Q&A. How do we avoid, even if we want to support moving in this direction, the move itself being weakened by the sheer proliferation of different platforms and tools for doing the reviewing, or for getting credit for doing it? Right. I think interoperability is going to be the answer. We've tried to bring some other elements into PREreview, and now we're working towards basic interoperability of PREreview with other servers. And I think that in the end it is very helpful to have different experiments happening. PCI is a great example where preprint review happens in a similar way — you have an editorial group that chooses the reviewers. And I came in at the tail of the previous talk, where there was, I think, a crowd-review initiative talking about bringing in volunteers to mentor and provide feedback. I think that ultimately we do need these differences, to see what will work and what will not. I am for: let's collaborate if we can, but also, if we don't agree on exactly the model, let's try different experiments to see where we all go. But technical interoperability definitely helps, if we can manage it.
Sure, yeah, thank you for that. Anyone else want to come in? Remco, is that a hand? Yes — Remco, quickly. I actually agree with what Daniela said, and I would emphasize that it's not necessarily the goal to create a new monolithic system that then dominates the whole publishing industry. I think a big part of the idea is to create a proliferation for the sake of having a proliferation, not for the sake of identifying the best one and then settling on it — or at least, I could see it developing in that direction. There's a difference between a division of labor, where we're aware of the different things that are happening, versus unintended duplication. So, you know, I'm a big fan of letting a thousand flowers bloom, and I think that's essentially what's happening. Well, everyone else said something, so I'd feel foolish if I didn't weigh in. I agree with much of what my esteemed colleagues have said. Admirably succinct — thank you all very much for that. We got through most of the questions that were on here — sorry, Samuel Fletcher, you'll have to send your question to Liam and Remco separately — and apologies for the slightly tight timing. I do think that was a great session, with some very practical and concrete proposals advanced for how we can, if you like, move beyond the debate to an actual constructive next phase in the peer review structures and systems that surround us. So thank you all very much. I'm sure you'll want to join me in saying thank you to our four panelists: to Emily, to Liam, to Remco and to Daniela. I'm James Wilson — thank you, and we'll pass over to the next session. Thank you very much. Thanks everyone.