So there are not that many overview studies. The larger part of the literature focuses on particular types of innovations, for example the impact of replacing double-blind review procedures with a fully transparent form of review. When you look at the relatively few overview studies that are out there, you find that they usually adopt one of two dominant perspectives. On one hand, there is research written from an activist perspective, in the sense that it usually assumes that peer review has a problem that needs to be fixed: for example, that it suffers from bias, or from a lack of transparency, or from escalating misconduct that it fails to detect. These studies then try to gauge the effectiveness of innovations in fixing peer review. Another type of study focuses primarily on the question of the uptake and incorporation of peer review innovations into the editorial and review practices of journals. And here the conclusion is often that peer review actually doesn't change that much, that innovation doesn't really trickle down to journals, as it were.

Our own aim with this project is different from both of these perspectives. We don't try to assess the effectiveness of innovations in repairing peer review: we feel that research and review practices are so varied across fields that it is actually problematic to make generalized statements about what is wrong with peer review. And secondly, we do not treat journal editors as obligatory passage points in innovation. We are more interested in analyzing the constitutive effects of innovation: what practices they give rise to, and perhaps also what new practices emerge that don't necessarily have an equivalent in existing conventions. So we really focus on innovations as distinct objects of study, and there are good reasons for that. One is that even if innovations are not widely adopted, or not immediately adopted, they can serve to reimagine the status quo of publishing and review practices. They can serve to problematize how we currently do things, to draw attention to what is usually taken for granted, which in turn can untie conventions of practice and thereby actually enable longer-term reconfiguration of practice. So it's not necessarily always the case that innovations solve some clearly defined problem; it's also, we would argue, the case that they can themselves be an agent of change. And innovations are of course also sociologically interesting for all sorts of reasons, for example because they can play a role in how actors position themselves in publishing ecosystems. Publishers can use innovation to demonstrate added value at a time when there is a critical discourse about big publishers, and researchers themselves may use peer review innovations as a way of trying to promote certain shifts in the epistemic culture of their field. So innovation is frequently more than simply a change in review practice itself.

The data for our project was collected through a survey that we sent to a broad range of actors, as I said before. In the data collection we made sure to contact the big publishers, but we also relied on a lot of snowballing to get the survey to other types of actors: academic editorial boards and various not-for-profit actors.
The survey is certainly not a comprehensive, systematic look at what is out there, but it certainly allows for identifying interesting trends, we feel. In total, we received 102 responses, self-defined innovations, from 84 respondents, which means that some respondents submitted more than one initiative. And we described the material according to an inductively developed taxonomy, although I'm not sure if that's the best word, actually. It's based on five main categories: for example, what is the object of review? How are reviewers recruited? Does the innovation entail specific review foci? I will discuss these dimensions in more detail as I go along. Here is a screenshot of the Excel file that we used to compare and visualize the submissions, where they're all sorted and broken down according to these five main categories or differentiating dimensions. The nice thing here is that this way of handling the data allows us to also capture potentially emergent effects of initiatives, so instances where elements of review practice are reconfigured even if that is not the explicitly stated aim of the innovation as such.

Okay, so the first analytical dimension is: what is being reviewed? What is the object of review in a given innovation project? And here it's interesting to note that there are only two initiatives focusing on so-called registered reports. That's a system where you submit a research design, in which you spell out your hypotheses and your research methods, for review, and then later you submit the actual results. That forces you to stick to your research design, which is meant to prevent selective use of data and what is considered an opportunistic reframing of research questions. And perhaps because it's essentially a disciplining instrument, it features mostly in fields where there is an established discourse about misconduct and research waste. So it's more of a niche thing at this point. More common are initiatives that expand the focus of peer review to include not just manuscripts but also datasets and source code. We have, for example, five initiatives in our sample that encourage the deposition of the underlying data. And then, very common since recently, are preprints. But at the same time, this is far from a homogeneous practice. On one hand, there are dedicated and partly pioneering platforms like arXiv, of course, and then later bioRxiv and medRxiv, where preprints can be posted potentially in parallel to journal submission, but in principle independently of journals. And some initiatives actually build on these pioneering platforms: some innovation projects submitted as part of our sample provide additional functionalities to these platforms, which in turn indicates the status of these platforms as widely used infrastructure. So new projects are already building on these existing platforms, which have acquired a sort of infrastructural character: for example, SciRate and PREreview, which allow users to recommend, review and comment on preprints. A few platforms also integrate preprint deposition and peer review in a single unified platform, like F1000Research and eLife. And then finally, and this is probably a more recent development, big publishers nowadays also offer optional preprint deposition for journals in their portfolios, for example Springer Nature and EMBO Press with platforms like In Review and Review Commons.
And this implies really significant scale, because these services are then offered across the entire portfolio of a publisher, on an optional basis.

The second main category that we use to differentiate innovations is the role of reviewers. The first question that we asked here is: are there any explicit criteria someone must meet to act as a reviewer? We have two journals in our sample that include patients as reviewers, which is an interesting experiment; these are, of course, journals with a focus on biomedical research. And then we have a substantial amount of evaluation that is carried out by professional staff in publishing companies and in organizations that operate preprint servers. For example, preprint publishing usually requires some kind of screening of submissions for adherence to formal guidelines, but also for topical fit to some extent. And regarding journal publishing, most big publishers nowadays arrange for an initial screening of manuscripts for plagiarism, language use, images and also thematic fit. This is done by publisher staff and with the use of AI. Only after that do editors take over to screen manuscripts, invite reviewers and run the more domain-specific review process. This presupposes a distinction between technical and substantive evaluation criteria: forms of evaluation that are labeled as technical are delegated to professional staff, whereas substantive review is reserved for domain specialists. And of course the distinction is not always clear cut, and sometimes quite fuzzy.

A related question is: how are reviewers selected? For example, are they picked by editors, or can they sign up individually? Here an interesting finding is that there is a large number of initiatives that involve review forums with registered access, for example in connection with preprint deposition, and often on top of commissioned reviews. That means that users need to register and can then, for example, comment on preprints. It also means that review tasks are self-assigned, in contrast to a scenario where an editor identifies and assigns suitable reviewers to a manuscript on a case-by-case basis. And it's not always completely clear what form of verification of identity and competence is done before users are allowed to use the forum, or how thorough that check is. There are also some initiatives that give reviewers the possibility to invite co-reviewers, typically in the context of mentoring them, so senior reviewers mentoring junior reviewers. That's a different principle from the review forums, because here reviewer recruitment relies more on disciplinary structures and acquaintances among researchers, and therefore also more on disciplinary gift economies, you might say. And then there are also initiatives under way to diversify reviewer pools in geographic and demographic terms. But these are generally based on incentivizing measures rather than on setting minimal thresholds. So it's usually encouragement to editors or to editorial board members to try to have suitably diverse reviewers assigned to manuscripts.

If peer review is performed by multiple review actors, do different reviewers also have different tasks and responsibilities? Again, initial screening of manuscripts and preprints for plagiarism, for language use and so on is often delegated to publisher staff before submissions are passed on to journal editors. That is a basic division of labor for review processes. In forums, by contrast,
there will often be, we suspect, an emergent self-coordinating dynamic at play, where individual comments build on each other: one person comments on one part of the preprint, for example, and then somebody else comments on the comment. This carries the risk of imbalances in review foci, since there is no central authority in the shape of an editor to steer reviewers and to make sure that all elements of a submission are considered in equal measure. ECR mentoring of junior reviewers potentially also affects the distribution of review tasks, since it implies an authority relationship between reviewers.

Another main category is the nature of reviews: what kind of review criteria does a given innovation initiative imply or specify? Review of preprints obviously tends to be based on journal-agnostic review, given that preprints are not connected to any particular journal, although it's not always completely clear that this is also part of an epistemic strategy; sometimes it's perhaps simply a result of how the platform is set up. And it's of course also not clear whether reviewers always stick to this requirement of journal-agnostic review; they might simply revert to a disciplinary review style. But there are also three initiatives in our sample that explicitly mandate soundness-only review as part of an explicit epistemic strategy, so reviewing a manuscript in terms of fit with the journal profile is seen as a distortion of peer review in this case. And some innovations, quite a few innovations actually, add special review criteria on top of traditional conventions: reproducibility is very common, as is the inclusion of source data. For peer review mentoring initiatives, review criteria are more likely to be based on disciplinary review conventions, because of the one-on-one mentoring of a junior reviewer by a senior reviewer. That is in contrast, again, to forum-based review, which is less likely to be based on journal or even disciplinary review conventions, because the forums do not necessarily map onto journal communities. But again, I should add that most cases of forum-based review sit on top of commissioned reviews.

The final main category focuses on questions of openness and transparency: are review reports made available, is the identity of reviewers made public, and so on. That's a major area of innovation, but again a very heterogeneous practice, so we don't see a singular development here towards a singular notion of transparency. A substantial number of publishers and also individual journals offer the possibility of publishing the review reports of accepted manuscripts, usually on an optional basis. The same goes for disclosing reviewer identities; that's usually optional. At the same time, some journals have actually moved away from single-blind towards double-blind peer review due to concerns about bias, often at particularly prestigious journals. And there are also some special configurations where reviews are published while moving from single-blind to double-blind peer review. The very final sub-point under this heading is transferable review. That's a system where a rejected submission is passed on to another journal alongside the review reports. On one hand that is related to creating transparency, but it is also a measure for managing peer review as an economy. And a variation of this are systems where manuscripts are submitted to a family of journals, whose editors then collectively decide which exact journal should handle the manuscript.
So here the selection of a suitable outlet is done by editors rather than by authors. We didn't get a ton of information on this type of innovation, but it is something that is going on; some publishers really try to offer it on a broader basis, partly also as part of the preprint platforms I referenced. For example, with Review Commons, manuscripts can be sent to partner journals alongside the reviews. But it's far from pervasive at this point, and a potential difficulty here is that researchers don't want to have negative reviews sent on to a new journal.

Okay, so we're still in the process of writing our paper, but we feel we can already draw some conclusions. The first one is that experimentation with the object of review can be seen to create additional transparency by multiplying review foci and review occasions: for example, by enabling review of preprints in parallel to manuscript review, or by including a review of the source code or data that a manuscript is based on. So it's not simply a shift of review focus but really a net increase in review work. And perhaps related to this, many innovations in the categories role of reviewers and nature of review entail increasing the versatility and mobility of review actors. This can work in multiple ways. One possibility is by introducing or reinforcing distinctions between technical and substantive review, whereby tasks that are labeled as technical can be delegated to staff or to AI. It can also work by removing disciplinary or social boundaries that hamper the ability of certain actors to review. For example, the review forums only require registration, but then allow users to self-assign review tasks, not necessarily on the basis of a disciplinary community structure but on the basis of a platform structure. And sometimes increasing the mobility and versatility of review actors is the side effect of an innovation focusing on epistemic aspects. For example, soundness-only review is on one hand a strategy to avoid gatekeeping, but it also has an economic dimension, by decoupling review work from journal communities. And then we also found that there is no linear development towards an agreed-upon definition of transparency, which is probably not a surprise to a lot of people in the panel, but rather diverse and often field- and journal-specific trends. Making review reports and reviewer identities transparent is mostly optional, and there is also a parallel trend towards double-blind review in the context of journal publishing. And it seems it's actually easier to publish review reports than identities.

As I said in the beginning, the project was not designed to study the uptake of innovations, but we still feel that we can indirectly say something about the effects of innovation activity on practices on the ground. On one hand, there is always the suspicion that not that much actually changes in practice; that is a finding of previous research, in any case. But it is quite clear that the big publishers are engaged in a lot of innovation projects. To be fair, these are usually optional for users, but they're also scalable across large parts of their portfolios. That suggests evolutionary development, but development nonetheless. Only a minority of innovations really require users to subscribe to particular reform projects, like registered reports. That is more disruptive and therefore usually more niche, and happens on a more circumscribed level.
The very final observation we would like to offer is that it's interesting that most innovations can actually be described in terms of established labels: like registered reports, like preprint publishing, like cross-review commenting. So it's not completely free-form innovation that we're dealing with, but innovation along established lines of alternative practice, which suggests that innovation has been going on long enough to have a certain history in its own right. And this is where I end. I look forward to questions. Thank you, and I will stop sharing.

Thanks very much, Wolfgang. So before we move on, perhaps we could deal with a couple of questions which have been contributed by members of the audience. First of all, there's a question from Mario Malički, who is asking about, I guess, the motivations behind some of these innovations, asking particularly about innovations which have the objective of increasing the quality of peer review. Have you come across that, or indeed any other particular motivations, in the data that you looked at?

Yeah, it wasn't really the focus of the research, but I would guess it's usually a combination of motives. The problem is that it's really analytically hard to separate out what motivates a particular innovation initiative. In some cases it's relatively clear: academically driven projects often have the aim of dealing with some kind of widely perceived problem in the field, like particular forms of misconduct. You have the replication debate in psychology, which is partly responsible, I think, for the interest in registered reports in psychology. For big publishers there are always multiple motivations: on one hand, of course, simply improving peer review, but at the same time demonstrating value, you could say, to academics, testing new business models, making existing business models more efficient, making existing infrastructure more efficient. So yeah, a whole range of motivations. It's really a tricky question, so I have to be broad as well in my answer, I think.

Okay, thanks very much for that. And also a question from Cecilia Tilley, who asks: are there any innovations in your data around reviewers getting paid, or somehow being allocated resources for doing their reviews? Which is a rather different business model from the conventional one of peer review, isn't it?

Yeah, thanks for the question. We didn't have submissions that are really based on a pay-for-review model as it were, but we have a couple of submissions that reward reviewers with APC discounts. It's a way of connecting review work to the publishing activity of the reviewers: the reviewers are paid with a discount, basically, on their next future publication on a given platform, which is an interesting phenomenon and possibly more disruptive than it might look at first sight. You could argue that a lot of review work is based on a sort of gift community, on the idea that you should repay the work that a journal has invested in you by reviewing yourself, and we don't really know what happens when this kind of cycle of indebtedness is all of a sudden supplemented by a currency-based transaction. Does it make people feel that they don't have to review for a journal if they have already paid for a review, for example? So there are all sorts of interesting questions related to that, but I'm afraid I don't have a very definitive answer on this.

No, thanks very much, that's brilliant. So thank you.
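To make the APC-discount mechanism just described concrete, here is a minimal sketch in Python. It is purely illustrative: the class, the discount amounts and the cap are assumptions for the sketch, not details of any submission in the sample.

```python
# Illustrative sketch only: a minimal ledger for the "review work earns an
# APC discount" mechanism described above. The discount size, cap and
# currency are hypothetical, not taken from any real platform.

class ReviewCreditLedger:
    def __init__(self, discount_per_review=50.0, max_discount_share=0.5):
        self.discount_per_review = discount_per_review  # credit earned per completed review
        self.max_discount_share = max_discount_share    # cap: at most half of a future APC
        self.credits = {}                               # reviewer id -> accumulated credit

    def record_review(self, reviewer_id: str) -> None:
        """A completed review earns the reviewer credit toward a future APC."""
        self.credits[reviewer_id] = self.credits.get(reviewer_id, 0.0) + self.discount_per_review

    def discounted_apc(self, reviewer_id: str, apc: float) -> float:
        """Apply accumulated credit to the reviewer's next publication fee,
        capped so the discount never exceeds a fixed share of the APC."""
        credit = self.credits.get(reviewer_id, 0.0)
        discount = min(credit, apc * self.max_discount_share)
        self.credits[reviewer_id] = credit - discount
        return apc - discount

ledger = ReviewCreditLedger()
ledger.record_review("reviewer-42")
ledger.record_review("reviewer-42")
print(ledger.discounted_apc("reviewer-42", apc=1500.0))  # 1400.0: two reviews, 100 off
```

The interesting sociological point sits outside the code: once review work is denominated in a currency like this, the gift-economy cycle of indebtedness that Wolfgang describes acquires a transactional alternative.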
Do keep your comments and questions coming in as the presentations progress. What I suggest we do now is move on to our second speaker, Victoria Yan from ASAPbio, who is also going to give us her perspective from a project that overviews innovations in this space, ReimagineReview. So over to you, Victoria, welcome.

Thank you so much, Stephen, for the introduction, and thank you, Wolfgang, for presenting the results of the survey. I found it very interesting to hear how you categorize the innovations. So my name is Victoria Yan and I'm the coordinator of ReimagineReview at ASAPbio, and we have been following with close attention the emerging peer review experiments that have been coming up in the last few years, especially paying close attention to experiments that are enabled by the rapid open dissemination of research through preprints. This publish first, validate and curate later model allows the decoupling of events that are traditionally entangled in journal peer review. So what previously took place in a black box, behind closed doors, can now be tackled and implemented separately, after the open dissemination of a preprint. There is now a transition from peer review and curation as a gatekeeping mechanism for scientific research towards becoming an earlier provider of feedback to researchers, and also a provider of additional context for readers. So what we see now is a shift of peer review to meet the demands of researchers. And as researchers, what we want is improvements to the peer review process, on both preprints and journals alike. In that sense, as authors, we want to decide when the work becomes public. There is also increasing demand for open, transparent, reusable peer review, which would be more efficient and also more constructive. And lastly, curation of primary research can be used to sort research into lists. This provides context, and it can take place after publication, after dissemination and after peer review. So the curation activity can also be more inclusive and involve more than the traditional two to three reviewers.

At ReimagineReview, we have been tracking both active, ongoing experiments as well as proposals for peer review innovations. Since our launch in 2018, we have seen a growing number of new projects and experiments that are addressing these challenges, that are working towards increasing transparency and efficiency, and many of them are built on a framework of review on preprints. Especially in the last year, many new projects, such as Rapid Reviews: COVID-19, the Novel Coronavirus Research Compendium and Outbreak Science Rapid PREreview, have come onto the stage to organize the review of a growing number of COVID-19 preprints. This community, through producing peer review that is public on preprints, is finding itself answering the question: what is the purpose of peer review? Reviews can now be used in new use cases and by new users. Compared to the journal publication process, in which peer review is used to inform the decision of an editor in an accept-or-reject decision, the review can now be used to provide earlier, constructive feedback. And in a recent survey of bioRxiv users, many preprint authors said they want to share their research faster, as well as receive feedback on their work. Peer review reports and open peer review can also act to highlight and curate the research. And this is in fact an old idea.
At the dawn of peer review, it was thought that peer review reports could be used as publicity for the research. And lastly, we are of course confronted with information overload: the rate at which preprints are shared, as a share of all papers, is growing exponentially. So to meet all of these challenges, what we need to do is grow a community of peer reviewers of preprints. The peer review innovations we have been tracking at ReimagineReview are leading this effort at growing this community infrastructure. They are also diversifying the reviewer pool. PREreview in particular has been very active with its mentorship program, mobilizing early career researchers in the review activity, as has preLights, which involves early career researchers in providing highlights of preprints. In addition, we have been seeing an increase in the geographical representation of peer reviewers: TCC Africa and AfricArXiv are growing the capacity for peer review and curation in Africa. Another really interesting example is cross-institutional journal clubs, which harness journal clubs to provide review of preprints. And of course, what we need to do is connect all of these different communities to the projects, and what that needs is better discoverability and better understanding of the up-and-coming peer review projects.

So we need a way to organize these peer review projects, and this has definitely been a challenge at ReimagineReview, as we are seeing many new projects coming up with new terminology for the activity that they perform. If we can come up with a standardized way to describe this activity, we can then increase the awareness and understanding of these review projects. Once we have that, it will encourage participation, by connecting communities to the projects; it will enable recognition of review as scholarship, once we can understand what review has taken place; and it will enable review reusability and help editors identify potential reviewers, if we have better visibility of the reviews and reviewers. So to this end, what we have been working on together with a group of publishers, technologists and preprint server representatives is a taxonomy to classify peer review activities. The scope of this taxonomy is to describe transparent, post-dissemination preprint review services, and we want to capture all the different new activities that are performed by these peer review services, to inform readers of what has taken place in a given peer review of a preprint. So to this end, we have developed this current draft of the peer review taxonomy. In addition to what Wolfgang described before in the previous taxonomy, we are interested in other aspects, such as review coverage, and one thing we have been thinking a lot about is competing interests in this field. What we can imagine is implementing this taxonomy at the level where reviews are aggregated. So where we will be piloting this particular taxonomy is Early Evidence Base, Sciety and ReimagineReview. Early Evidence Base is a platform launched by EMBO where refereed preprints can be discovered, and Sciety introduces curated lists of evaluated preprints. Once we can easily distinguish which review service performs which type of review process, this can help us understand better what they do, and it can help researchers in understanding all of this new experimentation that's happening in this space.
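To make concrete how such a taxonomy might be expressed once reviews are aggregated, here is a minimal sketch in Python. Only review coverage and competing interests are dimensions actually named in the talk; the remaining fields, names and example values are assumptions for illustration, not the actual draft taxonomy.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PreprintReviewEvent:
    """One review event on a preprint, described in taxonomy terms.

    'review_coverage' and 'competing_interests' are dimensions named in
    the talk; the other fields are illustrative guesses at what a
    post-dissemination preprint review taxonomy could record.
    """
    preprint_doi: str            # the preprint that was reviewed
    service: str                 # the review service that produced the review
    reviewer_selection: str      # e.g. "self-nominated" or "editor-invited" (assumed values)
    review_coverage: str         # e.g. "complete paper" or "specific sections"
    competing_interests: str     # e.g. "declared: none" or "not declared"
    report_public: bool          # is the full report openly posted?

def reviews_for(events: List[PreprintReviewEvent], doi: str) -> List[PreprintReviewEvent]:
    """What an aggregator in the spirit of Early Evidence Base or Sciety
    might do: gather every review event attached to one preprint, so a
    reader sees at a glance what kind of scrutiny it has received."""
    return [e for e in events if e.preprint_doi == doi]
```

The design point is that the taxonomy lives in the record, not in the services: heterogeneous projects can keep their own terminology while emitting comparable, machine-readable descriptions of what they did.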
So of course, improving the intelligibility and understandability of these projects is very important and could increase adoption. But what we think is very important in this field in general is providing actual incentives, and this, we have found, has been very difficult for many projects. It's a very similar question to what can incentivize people to perform peer review in general. At ASAPbio, we have launched two new pilots to tackle this question. The first one I would like to introduce is the ASAPbio peer reviewer recruitment network. In this pilot, researchers can submit a sample of their preprint reviews to our partner journals who are looking for reviewers. This can connect reviewers to journals and help them break into the review activity, especially early career researchers, and it also provides publishers and journals with access to a pool of reviewers, using their peer reviews on preprints as a sample, as recognition of their experience and their work. The second one we are piloting is a crowd review trial, and this has been inspired by the crowd review model at Synlett. What this is, is that the author opts in to the review of their preprint, and within seven days a crowd of interested researchers can participate and collectively leave comments and feedback on the preprint. That will then be synthesized into a public peer review posted through bioRxiv's TRiP (Transparent Review in Preprints) mechanism. Through this, we want to learn whether this will be an engaging format for researchers to participate in generating public peer review on preprints.

So overall, I would like to say that what we see is that peer review is rapidly adapting to meet the needs of researchers. It is increasingly becoming more modular, more inclusive and more flexible, and a lot of this is enabled by review of preprints. And secondly, what is important in this field of many experiments is that we need to understand what they do and help them become discoverable by researchers. We will do this through highlighting the preprint review taxonomy on Early Evidence Base, Sciety and ReimagineReview. And at ASAPbio, we are actively experimenting to understand how we can provide better incentives for the community to participate in open preprint review. So with that, I'd like to thank you for your attention. Much of this work is also the work of my colleagues Jessica and Iratxe at ASAPbio. Any questions, please ask me in the Q&A.

Thank you very much for that, a really useful and interesting overview. We've got a comment from Mario in the chat, and maybe that gives rise to an interesting issue that you touched on, which is the idea of crowd reviewing as opposed to the traditional, if you like, model of one, two, three gatekeepers, and the different pros and cons of each of those. Mario implies that it's more likely in a crowd-based situation that errors might be spotted, for example. So how do you feel, what's the evidence base in relation to the pros and cons of crowd versus the more traditional gatekeeper model?

Yeah, absolutely. I think with the traditional gatekeeping type of peer review, where the reviewers are selected by editors, the editors can be confident that these are experts. In the case of crowd review, it relies very much on the reviewers themselves judging their own abilities and expertise and self-nominating in the process.
And in that sense, I think more meta-review and inter-reviewer ratings could potentially help us identify whether these reviews are of good quality. But I think a great advantage of the crowd review approach is that we're diversifying the reviewer pool, and that can reduce inherent bias overall. It allows the reviewer pool to be much greater, so we can make it a much more engaging process. And another advantage of crowd review is that reviewers who don't have much time can focus on a specific aspect of the paper that they're an expert on. They can say: I'm not a particular biochemistry expert, but I'm great at statistics. So we can actually harness researchers' expertise by targeting specific aspects of the paper. These are some of my first thoughts.

Okay, thanks. Nicholas DeVito has an interesting question relating to, I guess, the connection between the reviewing of preprints and reviews carried out by journals, and the extent to which, and this is a question I know has been dealt with in a number of areas, the review of preprints can inform, if you like, the triage of papers at the journal level, so that it plays a role in that kind of process. Do you have any views about that?

Yeah, I think of course the initial preprint review can help traditional journals with triage and with identifying who potential reviewers could be. But I also think that, in the ideal situation, much of the review performed on preprints could be directly used by journals, for example in the process at Review Commons, where these reviews are directly used by the journals to inform the decision. Another great example is the PCI (Peer Community In) friendly journals, which use the reviews generated by a Peer Community directly, without additional review. That greatly increases the efficiency of the peer review process and reduces the review burden. So yeah, I see these as great complements to the existing journal process.

Thank you very much. I'm interested in the question that Bianca Trovo raises, and I don't know whether other speakers as well as Victoria would like to address this, which has to do with the incentivization of conducting peer reviews, and particularly the extent to which the peer review itself can be considered an academic output. Which is, I guess, the case in relation to open peer review, where that may be a more obvious case. But it's still seen as, if you like, a public good rather than something that can contribute to your academic promotion or tenure or whatever it happens to be. Victoria, do you have a view on that? And if there are other speakers who'd like to chip in, I'd be interested to hear from you as well. So Victoria, you first?

Yeah, of course. I think what is important is for funders and for institutions to openly discuss the value of peer review as an actual output and a part of our scholarly work. I think this is still very much lacking. There could be many more open and public statements and encouragements, so that researchers know that this work will be recognized as an actual research output. And I think another aspect of this is that open review is what enables the recognition: if the review work is not open, not published and not visible anywhere, then when and how will this work ever be recognized? So I think step one is for this work to be public and also discoverable. Yeah.
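As a rough illustration of that "public and discoverable" point: for a review to be recognizable as an output, it needs the same minimal metadata as any other citable object. The sketch below is an assumption-laden illustration; the field names are loosely inspired by the fact that registries such as Crossref accept peer reviews as a registered content type, they are not an exact schema, and both DOIs are hypothetical placeholders.

```python
# Sketch of the minimal metadata that turns an open review into a citable,
# discoverable output. Field names are illustrative, not an exact schema;
# both DOIs are hypothetical placeholders.

review_record = {
    "type": "peer-review",
    "doi": "10.99999/example-review-001",
    "reviewed_item_doi": "10.99999/example-preprint-123",
    "contributor": "A. Reviewer",    # named only if the reviewer opts in
    "orcid": None,                   # linked identity is optional
    "license": "CC-BY-4.0",
    "date_posted": "2021-09-01",
}

def as_citation(record: dict) -> str:
    """Render the review as a one-line citation, so it can be listed as a
    scholarly output in its own right (e.g. on a CV)."""
    year = record["date_posted"][:4]
    return (f'{record["contributor"]} ({year}). Review of '
            f'{record["reviewed_item_doi"]}. https://doi.org/{record["doi"]}')

print(as_citation(review_record))
```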
Do any of the other speakers have a view on that, on peer review reports as part of recognized scholarly discourse, as outputs in their own right? Sowmya, yeah.

Hi Stephen, this is Sowmya. Yes, I completely agree, and I think that publishing peer review reports offers so many benefits. It offers insight into the credibility and strength of the peer review process, context for the study, but also, so importantly, recognition of scholarly output for the peer reviewers themselves. But I don't think we have a system yet, a taxonomy if you will, for actually recognizing these reports in a systematic way. I do believe that institutions are paying more attention to diverse forms of scholarship, but I don't think it has become quite systematic yet; it hasn't become embedded in the culture.

Thanks for that. John, have you got anything you'd like to add?

Just a little. I would agree with Sowmya that nothing has become embedded in the culture, but I think there is a lot of scrutiny being given to what's happening in other communities, particularly around information science or computer science. I'm thinking of things like, I think it's called Stack Exchange: an online community, I suppose, is the best word, where people discuss issues and rate each other's contributions to that discussion, and so people rise in the estimation of the community based on the perceived quality of their contributions. Peerage of Science, which has now sadly folded but was started some years ago by Janne Seppänen, actually had that component built into it, where peer reviewers looked at other people's peer reviews and made assessments of them. So there were small beginnings of this kind of internal evaluation, but it has never risen to the level of anything formal, and certainly not to the level of anything that counts in an academic sense. But who knows, that may well evolve.

Yeah, thanks. And Wolfgang?

Yeah, we had a couple of projects that reward reviewers by making reviews citable. There were also a couple of cases where particularly good reviews are published as independent contributions in journals, as a way of putting a spotlight on them. And then, personally, something I like that's not in our sample: there is a journal in my own field of science and technology studies, called Social Epistemology, which has a system where authors and reviewers are encouraged to have an open exchange in a sort of forum after publication of a paper. That gives them a chance to elaborate on the review process, on the arguments that they exchanged as part of it. Those contributions are also citable, and it's a way of connecting review work to publishing, which makes reviewers more likely to do it. It's also really interesting in that it gives rise to a real exchange among individuals, which is nice for building community, because peer review is often based on experiences of mutual indebtedness, right? So that social component is also very important, I think, and it's something that can be promoted through publishing review reports.

Yeah, thank you. Now, Bianca, you raised this issue, and I think I can allow you to talk and switch your mic on. So do go ahead if you want to expand on this.

Can you hear me? Hi, hello everybody.

Yes, we can, yeah, go ahead.
Yeah, so regarding the point of incentivizing reviews: I actually recently made a proposal using blockchain technology, and there are also several parallel proposals that use blockchain technology to decentralize the peer review system. I think open access is really important, but we should also consider that maybe younger researchers don't want to be overexposed in their reports, and we want to avoid certain social-cognitive biases. It would be nice to have a system that can combine openness even during the process of a paper being published, not just after the paper is published, with a double-blind system, and this could be possible through cryptography. That's why with my project, and also other similar projects that leverage distributed ledger technology, you can basically have accountability for who is actually performing the reviews, but at the same time anonymity in terms of identity, because you can use cryptographic hashes. And in terms of the incentives, the main idea circulating is to use token economics. So you could have a community around the platform that has a specific token, which is basically a value that can be distributed and can also confer voting power. When you have an author who proposes a paper, the community can see it, as already happens with preprints, right? For example, you have people commenting on bioRxiv papers, or on Twitter even. The only difference is that right now, with the Web 2.0 system, the value flow is dispersed, because these are centralized platforms. But if you have a decentralized platform, the act of actually contributing comments, some micro peer reviews or even more complex peer reviews, could be rewarded. And in this way, the recognition is not just material in terms of economic incentives; it could also form a reputation on the platform, and this could be spendable, for example in a new impact-factor-like system that recognizes not only the citations of a paper but also peer review. Because anyway, peer review is the essential thing in science, right? Without peer review, we don't have published papers. So that's...

Fantastic, that's really interesting. Thank you very much for that. And I notice some other contributions to the chat and to the questions, focusing on a number of issues arising from this, like, for instance, bias and how that's affected by different models and so on. Before we go on to our next presentations, and thank you to both speakers in the first half, I really appreciate it, I'm going to suggest we take a five-minute break so everybody can stretch their legs, and then we'll come back and have our second two speakers and further discussion as well. Thanks everybody for your contributions so far. Let's take five minutes, shall we? We'll meet again at the turn of the hour. Thank you.

Okay, welcome back everybody, let's get going again. Thanks very much again to our first two speakers, and now we move on to the second part of the session. Sowmya is our next speaker, Sowmya Swaminathan from Springer Nature, and she's going to give us her perspective on these questions. So thank you very much, Sowmya, over to you.

Thanks, Stephen. I just want to say thanks to Wolfgang and Victoria for great presentations. I think we'll see some of the themes that were raised in their presentations echoed throughout.
I also want to thank everyone for a great discussion so far. So I'm Sowmya Swaminathan, I'm Head of Editorial Policy at Nature and Springer Nature. Just by way of introduction to the Springer Nature journals portfolio: we publish some 3,000-plus journals across multiple imprints, around 300,000-plus articles per year across the breadth of research disciplines, working with a network of 750,000 peer reviewers per year. So we really feel we have an opportunity and a responsibility to push for reform, and to do so at scale: we're in a position to really scale reform initiatives across diverse portfolios and diverse disciplines. I just want to throw up the schematic from Brian Nosek's piece on a strategy for culture change from a couple of years ago. What I hope to do is talk about how publishers can use the levers of policy and infrastructure to shift norms in research communities, and then, ultimately, also through infrastructure, help drive toward a better user experience for researchers and all users: authors, reviewers and also readers.

So I'm going to tell you about three initiatives. One is In Review, an initiative developed in partnership with Research Square that focuses on journal-integrated preprint sharing and transparency into peer review. Then I'll tell you a little bit about code peer review, and how we developed code peer review in a technology-facilitated manner, working in partnership with Code Ocean. And finally, I'll tell you a bit about transparency in peer review: what we are doing around transparency, publishing reviewer reports and recognizing reviewers, as well as what we've learned about attitudes to transparency and recognition from researcher surveys. It's absolutely a team effort at Springer Nature, and I just wanted to acknowledge at the outset the contributions of various people across Springer Nature, and also make an early disclosure: I am on the advisory board of Research Square.

So, diving into In Review. For a number of years, the imprints across Springer Nature had very progressive, permissive policies towards preprints. In 2019, Springer Nature unified on a policy on preprints that encourages preprint deposition, supports CC BY licenses and supports citation of preprints. But we wanted to move toward an ethos where preprint posting was normative for the community, and so we worked with Research Square to set up a mechanism for authors to share their research as a preprint while under review. That's really what In Review is: journal-integrated preprinting. Authors can opt in at submission, through the submission system, to deposit the preprint, which receives a DOI on the Research Square platform after undergoing some quality control screens carried out either by Springer Nature staff or by Research Square. And because of the integration with the journal peer review system, authors receive a very high degree of transparency and real-time updates. Depending on the portfolio, In Review also offers a very high degree of public transparency into peer review, all the way from submission through publication. For example, at a BMC open access journal, you can see the version of the preprint as deposited on the Research Square platform, then versions that mature through revision are released in real time, and a public peer review timeline populates alongside the preprint on the Research Square platform.
If this is a journal that also offers transparent peer review, then once the paper is published, the peer review reports, along with reviewer names, are released alongside the paper. So there's really a huge degree of transparency all the way from submission through publication. In Review is now offered across 486 Springer Nature journals. We're seeing an aggregate 31% opt-in rate; this is a snapshot from January to July of 2021, and a really important caveat is that this opt-in data does not include data from Scientific Reports, which is one of our largest journals, in fact the largest journal in the Springer Nature portfolio. There's a great deal of disciplinary variation in terms of opt-in; we're seeing opt-ins upward of 60% in some journals. In Review is also available across all Nature primary research journals, where we're seeing an aggregate opt-in of 27%, and again a range going up to 37% in some disciplines and below 20% in others. Unfortunately we can't really see the numbers for the country opt-in data here, but I can tell you that we're seeing a very sharp uptake amongst researchers from China, around 40% uptake in 2020, and that then falls off as we progress down the list.

So I just want to switch now and tell you about code peer review at the Nature journals. Code peer review was really developed in 2007 at Nature Methods; the journal developed a policy and practice to peer review and share code when code was central to the paper. It's been established and in place at Nature Methods and Nature Biotechnology for many years, over a decade. Alongside, we've developed resources to help authors submit code for peer review, including a software submission checklist, guidance to authors on code peer review, and guidance to reviewers. So it's a very integrated part of the peer review process for these journals, which publish a lot of papers where computational issues are front and center. But what we do know is that code peer review done the traditional way, not technology-facilitated in any way, can be quite cumbersome, and it's especially cumbersome for the reviewer, who has to find the code, set up the environment, install the dependencies, and run and reproduce the results. So even though it's a really critical feature of the peer review offered at these journals, it is cumbersome, time-consuming and labor-intensive. So in 2018 we set up a pilot in partnership with Code Ocean, which offers a container platform with executable functionality, on three Nature journals: Nature Medicine, Nature Biotechnology and Nature Methods. The idea was really to create a much better user experience for authors and for the reader, but especially for the reviewer. In this facilitated environment, where the code and the data are hosted in a container capsule on the Code Ocean platform, the reviewer basically has one step to verify the code, and this can be done anonymously while the code is hosted in a private environment. Then, when the paper is published, the version associated with the paper is locked down, receives a DOI and is bi-directionally linked as a research output from the published paper. What we found in the course of that pilot, and this is a snapshot of data from the three journals we ran this trial on, is very positive feedback from both authors as well as reviewers.
We really found that this technology-assisted way of peer-reviewing code facilitated the process: 54% of all authors from across the journals opted into the trial, and it has really made the process a lot more seamless from submission all the way to publication. We've now expanded the facilitated code peer review in partnership with Code Ocean to six Nature journals: Nature, Nature Biotechnology, Nature Methods, Nature Protocols, Nature Computational Science and Nature Machine Intelligence. These are all journals where computational reproducibility is often very critical. We've also expanded the practice of code peer review itself to 19 Nature journals, and I think the experience we've had in delivering a technology-enabled solution for code peer review has been quite important in expanding the practice of code peer review across our journals. So with these two examples, I hope you can see how we as publishers are really using policy first to lead, but then also developing infrastructure-enabled solutions to shift communities towards open and transparent research practices.

So now I'd like to talk about what we're doing with respect to transparency in peer review, and what we've learned from publishing peer review reports in the first instance. BioMed Central, which is part of Springer Nature, has been publishing peer review reports for over 20 years; they were an early pioneering publisher in posting peer review reports and reviewer names. Nature Communications first introduced publishing peer review reports in 2016, and that practice has now been extended to Nature and a number of Nature research journals in the last couple of years. What we're learning from the practice of publishing peer review reports is that there's a great degree of disciplinary variability. It's available as an optional practice across our journals, and we're seeing a real range in uptake across disciplines: it goes up to 80% in some areas and around 40% to 50% in others. Nevertheless, over the years we're seeing a real shift towards transparency; as of 2019 at Nature Communications, we were seeing an overall aggregate uptake of around 70%. And we also know from surveys that we've done with authors that there's a great desire for transparency. I just want to give you some numbers: 78% of researchers in surveys have said that they would be comfortable reviewing for a journal that publishes anonymous review reports, 38% feel that openness in peer review is beneficial and improves the quality of the output, and 44% of researchers from China also find openness beneficial. So the trend towards greater transparency, and towards seeing transparency as beneficial to the system, is really becoming clear, and that's a sea change, certainly for me, from the days when I first started at Nature.

I also want to tell you a little bit about a second area of transparency and recognition at the Nature journals, and that's publishing reviewer names. In 2017, we initiated a pilot at Nature that allowed peer reviewers to have their names formally acknowledged on the published paper. The goal was twofold: one, to provide transparency into the process, and two, to recognize the contributions of reviewers to peer review as well. At a three-year snapshot of the data, we found that 91% of Nature authors opted into the pilot.
55% of reviewers opted in, and approximately 80% of Nature papers have at least one reviewer named on the paper. We wanted to understand a little bit more about the authors and reviewers who took part, and what we really wanted to get at was whether the pilot, the initiative of allowing reviewers to disclose their names on papers, was creating or exacerbating existing inequities in the system. What we found was that there actually isn't a great difference across the demographics we looked at. Men and women authors were opting in at around the same rates; women and men reviewers were agreeing to be named at around 50% to 56%; and there wasn't a great difference across career stage in reviewers opting into being named. We also surveyed our reviewers to understand a bit more about their attitudes toward reviewer transparency and recognition, particularly in relation to naming reviewers: 78% of our reviewers felt that naming reviewers would result in better reports, 68% felt that this would improve transparency, and 52% said that they would consider being named if given the option. This pilot is no longer a pilot; it's now been integrated into all Nature journals, so all Nature journals now give reviewers the option and opportunity to be named on the published paper.

So just to summarize what we've learned from these three efforts as well as others: publishers can use the levers of policy and infrastructure to help shift community norms. Of course, that doesn't happen in isolation; it most definitely happens in synergy with changes that are occurring in the community, and we've seen this most clearly in relation to preprints, but also in other areas like reproducibility. Researchers support and embrace open research practices, preprints, code, but infrastructure and user experience can be important considerations when trying to encourage uptake. There's very significant support for transparency across many researcher demographics, gender, career stage, geography, and across communities of researchers, i.e. authors and reviewers. And we see considerable variability in norms and in trends towards openness and transparency across disciplines, which I think is not surprising, but it's an important consideration for us as journal editors and publishers when we think about introducing new initiatives and about how we want to work with the researcher community towards shifting norms. So with that, I think that's my last slide, and I will stop sharing.

Thank you very much, Sowmya. We have a whole variety of interesting questions arising from that. We've got a couple of questions from Mario, and I want to ask you one of them now and suggest you maybe address the other one by typing in an answer in just a moment. The one I'd like you to address is about whether you've explored why Chinese researchers are more likely to opt in to In Review compared with other countries, or whether you've done any analysis of the reasons behind the country differences in some of the experiments you've talked about.

Yeah, it's very interesting, and it really does stand out. It is also different to what I believe bioRxiv and medRxiv might be seeing, where, from what I can remember of the bioRxiv data, it's really a trend towards researchers in the US and Europe. We don't fully understand why that is.
We are in the process of actually surveying our In Review authors, so hopefully we'll get some more insight there. And I just want to throw something out for speculation; it's entirely speculative, not based on data. I think trust with researchers is really important, right? I'm a huge advocate of preprints; I think they are really for the good of the research community. But I think there's still a lot of nervousness around early sharing of data, about the potential for that to end up, you know, with the author being scooped and whatnot. So I just wonder if the journal-integrated approach creates a greater degree of comfort. I don't know, I actually don't know. And it may just be that, because it's integrated into the workflow, it's simply more convenient. But it's a very good question; the numbers are striking, and I don't have a good answer.

Well, your answer absolutely brilliantly cues up John, so it seems to me that it would be good to move on to your presentation now, John. Mario, I've noticed your comments about really wanting Sowmya to have a go at answering the question about the availability of the reports in other forms, so could I invite you, Sowmya, to maybe address that by typing in an answer? And I've noticed some other questions relating to some big-picture issues, around scaling up for example, which we'll come to, I hope, after John's presentation. But John, can I invite you now to make your contribution? Then we'll engage in some panel-wide discussion as well. Keep the questions coming in, please do, and after this presentation I'm hoping we'll be able to bring one or two members of the audience in as well, to switch on their mics and contribute. John, over to you.

Well, thank you, Stephen, and thank you to the conference for the opportunity to participate in this session. Very interesting, and I hope I'm not responsible for lowering the quality of presentations here. Good afternoon from Cold Spring Harbor Laboratory in New York, where it is a fall day very much like this one. The laboratory has a long history as a research institution, but for all of its history it has also been a place where scientists came to share and communicate their science, and I've just listed some of the milestones in that process here. Believe it or not, all of these things are still going on, even the oldest ones. But I'm here really, at Stephen's invitation, to talk about case studies, and I'm going to talk about two things: one a journal, and one a preprint server.

The journal is Life Science Alliance. It is three years old. It is a gold open access journal in biomedicine, and it was launched jointly by EMBO Press, Rockefeller University Press and Cold Spring Harbor Laboratory Press. This was the product of some lengthy discussion about how we might do this and why we might do this, but we all decided that we were interested in this experiment, and so we set about forming an organization to publish this journal. The publication model has two professional in-house editors, one in Cold Spring Harbor and one in Heidelberg, and an editorial board of young principal investigators in both Europe and the United States. LSA, as we call it, has two editorial processes. One is the conventional direct submission that everyone here is familiar with, and the response to that, of course, is either a polite decline or a request for peer review.
But the more interesting and innovative part is a process we have developed called informed transfer of papers from the partners' nine journals, what we call the front-line journals. These are all highly regarded journals in their respective disciplines, and they're all hybrid; in other words, they give authors the option of open access but also have a sustaining subscription model. Informed transfer essentially starts when an author submits to one of the front-line journals: she is given the option to opt in to subsequent LSA consideration if her paper does not make it at that front-line journal. Once the front-line journal editors have gone through their standard process, they select which of their declined papers to recommend to LSA. These papers may come with reviews, or maybe they were desk rejected, but the LSA editors see those papers and decide within two days whether they are willing to commit to reviewing each paper or not. The authors are then informed about that interest and confirm that they want to proceed, and the LSA editors continue, either by commissioning peer reviews if the paper doesn't already have them, or, if there are reviews, by accepting the paper on the basis of those reviews with either no or very minor revisions, or declining it because they feel that major revisions are still required. So the informed transfer process really combines the idea of portable submission, where the authors know exactly where their paper might go if it doesn't work at the first place they submitted it, with portable peer review, because the reviews go with the manuscript if they exist. The benefits of this process, we feel, are that for authors it avoids the very well-known phenomenon of serial resubmission, with all that involves, and it accelerates the time to publication. It also means we're not unduly burdening our reviewers, and all in all it is seen as a service to authors, which is what motivated the three partner publishers to set it up in the first place.

Once the LSA peer review process takes place, it has a number of features, and I won't go through all of these in detail. We encourage reviewers to talk to each other about their recommendations, feeling that from that dialogue can emerge a realistic set of recommendations to authors, asking only for essential changes to be made. We also have a process of screening for data integrity. Source data were mentioned earlier in the afternoon: authors are encouraged to submit source data with their papers, but if reviewers have questions about the figures, then source data must be submitted. We do transparent review, which involves the public posting of the reviews, author responses and decision letters, and reviewers can be named if they so wish. There is preprint support, in the sense that authors may submit to LSA and to bioRxiv at the same time. We've extended the scooping protection policy that was originally articulated by EMBO Press, in which the decision on a manuscript is not influenced by a similar paper appearing elsewhere while it is under review. And we also consider manuscripts that have been reviewed in a journal-agnostic fashion by the Review Commons project that Victoria talked about earlier.
We are also now involved with ASAPbio through their new preprint reviewer network, which is essentially a way of enabling early-career researchers to demonstrate that they have reviewing capabilities and to have that work transmitted to a network of journals.

So, to summarize this case study: we are three years in, and we feel that LSA is really doing a very satisfactory job. We've published over 500 papers, the volume is growing, and we've been able to expand the editorial team. It has its first impact factor now, a decent number, and the journal is comfortably in the black. The amazing thing is that three highly independent and opinionated publishing organizations have managed to remain friends while working together and while remaining independent. The challenges involved in setting all of this up included the inevitable challenge of decision-making among multiple organizations. We had to deal with the technology challenges of different manuscript handling systems among the journals. We had to decide how to divide up the general labor of publishing a journal among the partners. Perhaps the hardest thing was articulating what standards were expected by LSA, so that the front-line journal editorial teams could figure out which papers they should be recommending. That took a while, but it is really no longer an issue. And beyond all of that, we had the always difficult task of getting authors comfortable with the idea of a brand new journal, particularly one that works in a slightly unusual way. So we feel the principal lessons are that authors have seen and welcomed the benefits of this informed transfer process, and I think we can also conclude that the recipient journal, namely LSA, can have an editorial process, a peer review process and even a business model that are different from the way all of these things are done in the upstream journals. We're very happy with the way LSA has developed, and if there are any journal representatives in the audience, we would be delighted to talk to other publishers and other journals about participating in this project.

Let me turn now to bioRxiv. You've heard a lot about preprints, so I don't need to elaborate on this slide, except to say that a preprint, of course, is posted before peer review, and the benefits are very rapid distribution and the fact that every aspect of the manuscript and its distribution is under the control of the authors, with the possibility of community feedback through a variety of mechanisms. bioRxiv was launched in 2013. It's a not-for-profit, community-based project, funded by Cold Spring Harbor Laboratory and, fortunately, by the Chan Zuckerberg Initiative. As with other preprint servers, reading and posting are free, but we see it, in essence, as an author service. It's not a product, it's not a publication, and it's not a component of a journal submission, but an independent entity, independent of publishers and journals, though nevertheless with strong integration, namely manuscript transfers to and from an expanding number of journals, currently numbering over 200. As a preprint, a manuscript is not peer reviewed, but it is screened through a multi-step process involving an in-house team of scientifically qualified people plus a group of 180 principal investigators in a variety of disciplines throughout the world.
The manuscript only appears on bioRxiv after it has been seen and approved by that group of people. We aim to get a manuscript up within 48 hours of its submission, and once it's there the authors can revise it as many times as they like, until the manuscript is accepted by a journal. A couple of years ago, my colleague Richard Sever, Mike Eisen and I wrote a paper which was mostly concerned, at a time when all the discussion was about Plan S, with access to scientific information. We somewhat tongue-in-cheek called it Plan U, for universal. But as part of that consideration, we made the comment in the article that when there is a critical mass of preprints, you have a fertile environment for doing peer review in a different kind of way and for evolving new and different forms of research evaluation, and that is made easier because the hosting and archiving of the manuscripts themselves is taken care of by the preprint server. In the past two years there is no question, and we've heard this already from earlier speakers, that preprints are doing exactly what we thought might happen: they are accelerating innovation in evaluation, and the pandemic has added impetus to that. Victoria alluded to a couple of these pandemic-related preprint assessment projects, which have taken root at Mount Sinai, in Oxford, at Johns Hopkins, and also in MIT Press's Rapid Reviews: COVID-19 project. There are also projects that predate the pandemic and are specifically oriented towards early-career researchers: preLights and PREreview. And there are a growing number of organizations that have an arrangement with bioRxiv by which they can post the peer reviews they commission on preprinted manuscripts back to the appropriate preprint on bioRxiv itself, a project we call Transparent Review in Preprints, or TRiP. In fact, eLife has gone further, as you may well know, by articulating a new policy in which they will only consider manuscripts that have been preprinted. Victoria also alluded to the new ASAPbio crowd review, in which authors of bioRxiv manuscripts are given the opportunity to request a community review. And we have just recently introduced a new function on bioRxiv by which an author can request a specialized kind of evaluation from what we hope will become a growing list of organizations that offer that kind of specialized evaluation. The first up is DataSeer, which looks at the data in a given paper and makes recommendations on where that data would most appropriately be deposited.

So what we are trying to do on bioRxiv is assist readers by aggregating the varying forms of evaluation that are emerging around an individual preprint. Some of that is what you might call classic peer review, which involves author opt-in; that's being done, for example, by eLife and Review Commons, and in that case the content of the reviews is displayed on the manuscript. Then there's another form of review, what you might call community review, which may or may not have author opt-in; preLights and PREreview are part of that, and in that case what is displayed on bioRxiv is a link to the content on a separate site. Then there are the automated tools that authors may wish to take advantage of, such as DataSeer, in which case the content will be displayed; but there are other kinds of automated tools emerging which do not involve author opt-in, and in that case we will display a link to wherever that evaluation lives.
Then, of course, there are blogs, for which the link is displayed, and tweets, for which we display both the link and a snippet. What that looks like in practice, and we have recently revised the UX here, is that at the bottom of the screen you'll see a set of icons containing symbols for these different kinds of evaluations and a number showing how many there are. This appears underneath every bioRxiv manuscript. If you click on one of those icons, it brings up the tab on a dashboard appropriate to the thing you clicked on. In this case it was the community reviews, and here are some links to PREreview and preLights evaluations. And you can see there are comments and blogs, no automated assessments here, but lots and lots of tweets on this particular paper. No TRiP reviews, because this paper was not part of that process, but when there are TRiP evaluations, the dashboard that comes up alongside the paper reveals the content of the reviews themselves, as you can see.

This has been a rather hasty skate through a lot of material, but if I try to sum up what we have learned so far from bioRxiv and the attempt to aggregate reviews and evaluation: it's very clear that there is a growing interest in the public assessment of preprints, and there is therefore also a growing number of peer-reviewed preprints. In fact, we have 2,000 of them on bioRxiv already, and more organizations, groups and individual people are engaging in this process and thinking about how best to do it, as you have heard from Victoria and others. The nature of the assessment varies, from the comment that we post on the site, to the kind of contextual commentary that you might get in preLights or PREreview, to the more rigorous analysis that you would get from a process that is recognizably formal peer review. Where that content lives varies too, from Twitter to independent websites to bioRxiv itself. And as I've said, some of these assessments are requested by authors, but some are not. The interesting thing, I think, and the challenge going forward, is that some of these assessments have the effect of changing the manuscript's status from unpublished to published. So perhaps for further discussion, particularly among those in the audience who think about these kinds of issues deeply: what do we mean when we say that something is published? Conventionally it means certification, or endorsement, or somehow taking responsibility for the content, generally after a process of peer review. It usually involves, in fact at the moment always involves, applying to the piece of work a different DOI from the one the preprint server attached to it. And then there's a kind of social contract with the author, because "published" means the creation of what is still called a version of record, and in that social contract the author commits not to republish the paper in another venue, with or without modifications. So I think going forward we are going to be grappling with some interesting questions. Who gets to publish? Journals obviously do, but what about an individual? If I set myself up as an arbiter and declare that my reaction to a particular piece of work is X, does that make me a publisher? What about a group of individuals? What about an academic institution? And what is it that endows any of these entities with the right to call themselves a publisher?
And I think underlying all of these considerations is the inevitable question about the nature of manuscript assessment as it affects scientists' careers. We all know how inextricably career advancement is bound up with the process of publication, and I think with the emergence of preprints we have opportunities to begin to unwind some of that intricacy and figure out new ways in which we might provide recognition for contributions to science and to the scientific community. So thank you very much for the opportunity to present these remarks. I will turn it back to Stephen.

Fantastic, thanks John, for guiding us through those innovations and also for raising those really interesting big-picture questions around trust and authority and quality and incentives, because I think those are some of the really key issues that we're dealing with here, aren't they? Could I invite the audience to put your questions to members of the panel? If you want to make a contribution, you're very welcome to do that, so please do raise your hand. And while you're thinking about that, I'd encourage some of you who've contributed to the chat or raised questions to think about contributing. Dorothy Bishop, for example, raised a really interesting point about the points in the research process at which peer review should occur, particularly favoring registered reports, where the research design is reviewed at a very early stage. It would be really interesting to hear from you, Dorothy, on why you feel that. We've also got some very interesting questions that have partly been addressed through typed answers but are probably worth revisiting as a panel. One of them, from Tom Stafford, relates to this issue of incentives and barriers to innovation in peer review, for publishers or other similar organizations: what are the incentives, what are the barriers for undertaking innovation in this area, and how can they be overcome? Would anybody like to address that to begin with? Samia, yeah.

Well, I think publishers are actually doing a lot, and I don't mean just Springer Nature, but across the ecosystem. If you look at Review Commons, that's a publisher-driven innovation, a very interesting one, in partnership with bioRxiv. So I do think publishers are doing a lot, and I'm not sure that more innovation is necessarily what's needed, but perhaps better innovation, in the sense of innovation that's more coordinated, even across publishers, perhaps more systematized. I think Wolfgang's talk really highlighted the somewhat chaotic nature of innovation, right? We're all doing transparent peer review, but we're all doing it in a slightly different way, which means that when research institutions or others in the ecosystem want to build on that, to make sense of it, it becomes challenging. So I think perhaps what's needed is not more innovation but more coordinated, targeted solutions: a better way of really identifying the core issues at stake and then working together, not in siloed ways, one publisher or a group of publishers, but with funders and institutions, so that you can really talk about change across the sector.

Wolfgang, I know you've done a bit of thinking about the nature of innovation in this space. Well, I would like to add something to that question about the nature of innovation.
I think it's important not to see innovation in peer review as a self-contained area. It's important to see peer review as part of the scholarly process, and I think peer review practices are very much shaped by other elements of that process, for example by the emphasis that is put on authorship. If I had to identify a very general problem in peer review, it is that it's hard to get people to do it, and that has a lot to do with what is valued, which is a problem of authorship and of publishing rather than of peer review itself. So one way of stimulating useful innovation in peer review, I think, would be to think about the environment of peer review, such as publishing, and how to de-emphasize some of the issues that have to do with publishing.

Victoria. Yeah, I'm going to echo some of what Samia has mentioned: perhaps the sheer number of innovations is not going to be as informative as understanding which innovations have been successful and getting more data out of these experiments. Many of the platforms that launch don't necessarily see themselves as purely an experiment. And I think what's really important is to track your usage, to track which strategies work, which changes and perturbations to the peer review process actually have an effect, whether by engaging users or by producing better quality reviews. So I think a huge missing piece in this space is better recording of the data and reporting of the outcomes. That would teach us a lot about which strategies are important and which directions to go in in the future.

One of the things I'm interested to ask you all about is the relationship between preprint servers and journals, which a number of the presentations you've given have touched on. Is a sort of idealized relationship between the two beginning to emerge, in terms of processes, workflows and so on? Or do you think a whole variety of different models will emerge and persist in this area? Would anyone like to have a go at that one? John?

Maybe I could start the process anyway. This is an emerging and evolving area, there's no question about that. The pipelines that I referred to are really very simple processes, in which an author can choose from, as I said, now over 200 journals and very simply send the manuscript and associated information to the submission system of a journal of her choice. This is a process that has been extremely heavily used. I don't have the numbers off the top of my head, but it's in the many thousands, the tens of thousands, now. And it's a process that many authors like, because it doesn't resemble the typical direct submission to a journal, which is often a rather tortured process; this is a rather simple one. Now, it doesn't transfer all the information that the journal wants, but it gives the journal an opportunity to have a first look at the manuscript and at least to say, thank you, but this is out of scope, not what we're looking for, and so on. So there's a certain contribution to efficiency there, both on the side of the journal and on behalf of the author. I think that's a useful contribution; we, the preprint server, obviously value the opportunity to serve our authors in that way, and the journals seem to like this process, as more and more of them are signing up to it.
I think journals probably need to answer the question more directly: is this really useful? I mean, you'd have to ask journal editors whether they are weighed down by things that they would never see under normal circumstances. But I think there are some efficiencies there for two of the stakeholders in the process, and I don't see any reason why that shouldn't continue.

Any other contributions to that question on the relationship between preprint servers and journals?

You know, I think the advent of preprints, and they've been around for a long time in some communities, has pushed journals to do more and to open up their policies, to be more innovative. So they've had a disruptive influence, and I think that's only good for researchers, the research community and the publishing industry as well. We at Nature have allowed preprints alongside submission since arXiv began, so I think there's always been a degree of comfort and support for preprints. Let's leave aside statements made in the 60s, but certainly the first editorial that we published on preprints was in the early 90s, I think. And it's certainly been absolutely fantastic through the pandemic; we've seen that, it's been transformative, really. I think there are some areas that we talk about a lot, though, and one is community peer review. I think it's fair to say that the promise of that hasn't really been borne out. It's certainly the case that certain preprints attract community attention, and certainly in the context of the pandemic there have been COVID preprints that have attracted that kind of really intense scrutiny, and it's been absolutely fantastic, right? There's been very quick response, much quicker than is feasible, let's say, in a journal context. But by and large, preprints don't attract comment; they don't attract community input. So I think that's an area where the promise of what's possible is still very much an open question. And then there are also the issues around equity that Victoria raised. So, you know, I think preprints are a fantastic and very important addition to the scholarly communication space, and I think there will be many opportunities for synergy between preprints and journals.

Sam, could I take issue with one of the things you just said? Please. I think when it comes to assessing whether there's a community reaction to a preprint, it depends very much where you look. You're absolutely right that there is very little direct commenting, although when something really attracts attention, there is often a huge outpouring of comments on bioRxiv, particularly if something is poor or wrong. But if you look on Twitter, I would say there's been a massive evolution in the quality of the discourse around individual preprints, and I think that's only getting better as scientists learn how to use Twitter for their own purposes. There are fantastic tutorials that break complex manuscripts down into bite-sized chunks on Twitter and make them much more accessible to a bigger audience. And then the other thing is the extent of private communication with preprint authors: surveys that we've done suggest that by far the most common form of reaction to a preprint is a private email, not something that is public.
So there could in the future be some community shift towards more open commentary of the sort that the author currently gets privately, which might make the comments much more useful for the community at large.

Thank you, John. And actually, some research to do with RoRI that I've been involved with recently confirms the point he raised: the most common form of commenting is via private means, emails particularly. So yes, absolutely. Now, we've only got a few minutes left, and I want to deal with one other question, which has been answered by Victoria in the Q&A but which I'd also like to raise more generally. It was raised by Alavo, and it's a really interesting issue: is the concept of peer review itself, as a kind of unified entity, problematic? Shouldn't we think about it as a heterogeneous thing rather than a homogeneous one? I'd be interested in your summative thoughts on that and on how it relates to the future as well, because we are talking about lots of different ways of carrying out reviews and how those can be situated in the whole landscape, which is becoming increasingly interesting, it seems to me. So, Victoria, you've commented on that; can I ask you to speak to it for a minute or so, and then I'll turn to the other panel members as well?

Yeah, so my first thought is that in this day and age many people are publishing methods, data sets and increasingly new and diverse types of research outputs, and I think it's very important to treat these differently from a classical manuscript, which may be a full-package story. So we can think about a more customizable, flexible approach to reviewing these different and very new types of research outputs. We shouldn't do one-size-fits-all review on all research outputs; they could be experimented on separately. That was my point.

Would anybody else like to comment on that point about the heterogeneity of the review process that's emerging? I could say a little word. Sorry, Samuel, you go ahead. Oh, no, go ahead, John. Just briefly: I think what we're seeing is the emergence of a difference between peer review for publication and peer review for the quality and advancement of science, and those are sometimes two different things. We are very hung up on peer review for publication, and publication, goodness knows, is very important for career development. But ultimately, science is about the advancement of knowledge and about what is true, and that could be what we are seeing as we get more community engagement with the expression of scientific work: less about publication and more about progress.

Samuel, just very briefly from you. I was just going to agree with everything Victoria said, and to pick up on a point made at an ALPSP session we were at this morning, about whether all papers need to be peer reviewed in the same way, which I think builds on the point that Victoria and Barbara made about heterogeneity and the need for it.

Fantastic. We really have to close there, and I'm very sorry, because we've got so many interesting comments and questions coming in. Thank you, everybody, for your contributions. Please join me in thanking, virtually, all of our speakers. Thank you for your fantastic contributions. And thank you to everybody who attended, for your questions and your engagement.
I do hope we can find other fora in which to continue this conversation, and we'll be following each other's work, I have no doubt, with real interest. So thank you very much, everybody. I was going to say have a good evening, but for many of you it's not evening, so have a good rest of the day. Thanks very much. Thank you. Bye-bye.