Hello, I'm Kara Kodratz, Head of Communities at eLife. It's my pleasure to welcome you to our May ECR Wednesday webinar. This series aims to give early career researchers a platform to discuss issues important to you and your research career. You can follow us on Twitter at eLife Community and with the hashtag #ECRWednesday. The session is being recorded and we will make it available on YouTube in the near future. Now it's my pleasure to invite Hedyeh Ebrahimi, a postdoctoral researcher at the Tehran University of Medical Sciences, Iran, and a member of the eLife Early Career Advisory Group, to introduce today's session and our panelists. Hello, everyone. Thank you, Kara, and hello. Thank you for joining us today for our Early Career Researcher Wednesday webinar. I'm Hedyeh Ebrahimi. I am an eLife Early Career Advisory Group member and I will be moderating today's session. Just a few words about our host today. eLife is a nonprofit organization that operates a platform to improve all aspects of research communication by encouraging and recognizing the most responsible behaviours in research. The role of the Early Career Advisory Group is to influence and support eLife's work to catalyze real reform in the evaluation and communication of science, and in particular to represent the needs and aspirations of researchers at early stages in their careers, for a research culture that is healthy for science and for scientists. Today, our webinar panelists will talk about the process of preprinting in clinical and health research, considerations in sharing results that are specific to medicine, as well as the career implications of releasing your work early. Let's start with a little housekeeping. During the webinar, please be honest, respectful, inclusive, accommodating, appreciative, and open to learning from everyone else. Do not attack, demean, disrupt, harass, or threaten others, or encourage such behaviors.
If you feel uncomfortable or unwelcome in any of these webinars, please contact eLife via email at elife-safety at protonmail.com. We reserve the right to ask anyone to leave and/or deny access to subsequent webinars. As Kara mentioned, the session will be recorded and we will make it available on our YouTube channel. And if you need help, please send a chat message directly to Kara or Shane. To ask questions at any point during the webinar, you can type your question into Zoom's chat box; following the presentations, we will relay your questions to our panelists. You can also tweet us; we are at eLife Community. Please use the hashtag #ECRWednesday. I will read out your name and question in the Q&A at the end of the webinar. And now I would like to welcome our speakers. First of all, welcome Dr. Joseph Ross. Dr. Ross is a professor of general internal medicine at the Yale School of Medicine and a physician at the Center for Outcomes Research and Evaluation at Yale New Haven Health System. He also co-founded the preprint server medRxiv and is currently the US outreach and research editor at The BMJ. Welcome, Dr. Ross. Thank you. I'm delighted to be here and looking forward to the conversation and questions over the course of the session. I just have a few slides that I put together to set the tone for how preprints in medicine are being used. Shane, if you want to go to the slides while I'm waiting for them to come up. So medRxiv is a preprint platform that we launched for health sciences preprints. It's built on top of the same infrastructure as a preprint platform that many of you are probably familiar with, bioRxiv, which is a preprint server for the basic sciences. I'm assuming everyone knows what a preprint is: an early version of your work that has not yet undergone peer review, which you're ready to share with the research community to tell people about your results.
So we felt there was a real need for this in the medical and clinical sciences community. medRxiv itself is a not-for-profit entity. It's a service to the community, not a product. It's publisher neutral, and it's currently operated by the Cold Spring Harbor Laboratory, which is based in the United States, but it's managed in partnership with BMJ and Yale. We launched this platform in the second quarter of 2019. We did not know that COVID was coming. That launch came after probably 12 to 18 months of preparation, where we were working together to try to identify all the ways to make sure that scientists could share their work early without putting patient care or public health at risk, because there are obviously a lot of concerns about sharing preliminary clinical research. Next slide. We wanted to set this platform up because we think there are really important potential benefits to using preprints in medicine. Oh, you went really far, Shane. Can you go back a few slides? Yeah. Okay. So, it enables the rapid early sharing of new information. This is important because it establishes the provenance of ideas while papers are being peer reviewed. It facilitates awareness and prompt scientific feedback, enhances collaboration among scientists, and demonstrates scientific productivity, which is clearly important for early stage researchers. It's also quite important because it makes what I call less publishable studies more readily available. We know that it takes a much longer time for medical education and qualitative research to make its way through peer review and into the literature. This is true of quality improvement and healthcare delivery innovation studies, as well as studies that are confirmatory of or contradictory to results that have already been published, and even negative or inconclusive research findings.
But all of these findings are really important to share, and to share right away, particularly quality improvement and healthcare delivery innovation research. The idea that we're doing innovations and not reporting the results for several years is really backwards to me. I also think that using preprints fosters more complete results reporting. This promotes research transparency, particularly for those of us who go to scientific conferences and present our abstracts at these meetings, perhaps with a larger poster that has more detail. Wouldn't it be terrific if you could preprint your work at the same time, so you can send somebody to the full paper when you're presenting your findings? Preprints also complement trial registry results reporting, which is really bare bones: registries provide the core data, but a preprint lets you explain what happened in the study and anything that happened between protocol development and study completion. It also enables you to link the protocol, sensitivity analyses and supplementary materials, which not all journals publish, so it puts everything in one spot. There are also concerns and perceived risks. When we were getting ready to launch this platform, we heard a lot from many different groups of editors about the potential harm to the public if wrong information got out there and was magnified by media reporting. There's the issue of what I call persistent preprints, where the results or conclusions are changed through the course of peer review, but the old versions stay up there. There's a lot of worry about the manipulation of preprint platforms by commercial interests, and of course about undermining established medical communication norms: peer-reviewed journals, conferences, trial registries.
Inevitably, when we talked to authors about our idea for the preprint platform, they had only one single worry: would journals not publish our papers if we've preprinted? Because otherwise, we're on board; we like it; we want our work to get out there more rapidly; we want to enable more people to see it. Next slide. When we launched medRxiv back in June 2019, we tried to establish a platform that mitigated many of the concerns and risks that we heard from editors. So we have very clear submission requirements for authors. We require the names of all the people involved and their affiliations; we require trial registration for clinical trials; we require a statement of ethical oversight for all studies; and we suggest the use of reporting checklists such as CONSORT and the like. We even enable data sharing if people want to do that. So we tried to put a system in place for the more responsible dissemination of people's research. We also have very clear posting criteria. medRxiv is for research articles only. It's not for commentaries, it's not for viewpoints, it's not for narrative reviews, it's not for editorials. It has to be a scientific study. This can be any type of scientific study: it can be a clinical trial, a retrospective analysis, a qualitative study, a Delphi process. Systematic reviews and meta-analyses count too, and even data papers or methods papers count as well; if you're writing a paper to explain to people a data source you're making available, that qualifies. We established a screening process so that every paper is looked at by multiple people, to make sure that nothing gets through that could potentially put public health at risk.
We also signal the need for caution when scientists and non-scientists read and review the preprints on the platform. If you go to the page, you'll see there's big bold writing that says, you know, this is a preprint, what this means, don't report it in the media, etc. Next slide. So we launched in mid-2019. Obviously we couldn't have anticipated what was about to happen across the globe, but our platform ended up being a real repository for a huge amount of COVID-related research, and it has steadily grown. You can see, at the beginning, we were getting 100 or so new papers, then 200 or so new papers each month, going along. And then 2020 hit and the platform exploded, to the point where, at the peak in May, we were getting 75 daily submissions of new preprints to the platform, and that includes weekends. So, 75 new papers a day. It really became the repository, and after a little bit of a dip through the summer it's picked back up, and now we're somewhere between 60 to 70 a day, which is terrific; it's really growing steadily. Next slide. Just to show you where we are now: when I last put this slide together, we had more than 1,200 papers; I think it's actually more than 1,500 now. I'm sorry I didn't have a chance to update the slides in advance of this. I think 500 of these papers have been revised; we do allow authors to update their preprints over time, so if you revise your analyses or change the way you're presenting your results, either in response to peer reviewers or to somebody seeing your work on the platform, it's a really straightforward process to upload a new version, and the history of versions is saved on the site. And also, just to say, about 20% of submissions are being rejected. Lots of case reports, case studies and editorials had come to our site that obviously we weren't posting.
And there were some that we decided would be better not to post, that instead should go through peer review first. We were particularly sensitive to this during the pandemic, with lots of, let's just say, speculative research being done on new therapies and whatnot. Next slide. This is what our platform looks like. Here's the page of an article that was posted early in the pandemic. You can see the title and the authors right there, and you can see the language I told you about: this article is a preprint and has not been peer reviewed. It includes the abstract and a link to the PDF. What's more, on the next click, you can see the link to all the blog posts that we've found that talk about the preprint, so that you can see how others are discussing the study. You can see a link to a count of all the comments made directly on the site, which can be short comments or scientific, peer-review-like comments. And click, click, you can see that link. Sorry, the animation's not directly lined up, but you can see that the link to the comments is right at the top, so that people can go right there. And then click again. This is just to show you that about 8% of preprints have comments on them, and about a third of them are tweeted about. Every paper is actually automatically tweeted, but when we track through and look at samples, we see about a third are discussed on Twitter by the authors or others, beyond just the automatic tweet. You can see there's a link to the metrics page, which shows you how often the article has been used, the abstract downloaded and the PDF downloaded. Next one. So far, about 10% of papers that have been up on our platform for more than a month have been published. This is steadily going up, across lots and lots of journals; there have been 670 unique journals.
Just to show you, you can see on our site that we directly link to the publication, there in red. If you click again, you'll see that under the DOI we link directly to the paper once it's identified. We have a crawler that goes through to try to find it, matching on the authors and the titles, and if somebody has submitted a preprint and they don't see their paper linked, they can just tell us and we'll put it up. And next slide. This is just a paper that links to the study in the New England Journal. So it just makes it very easy. And I think those were all the slides I had, just as background about medRxiv, and I'm happy to answer any questions when we get to that part of the session. So thank you again for having us. Great. Thank you. That was wonderful. Up next is Dr. Matthews. Dr. Matthews is a consultant in clinical infection and associate professor based at the University of Oxford, United Kingdom, and a researcher in clinical infection funded by the Wellcome Trust. Dr. Matthews leads research into hepatitis B infection and has been heavily involved in translational research during the COVID-19 pandemic. Dr. Matthews is also an affiliate of medRxiv. Welcome, Dr. Matthews. Thank you very much. Thanks for the invitation, and it's great to be able to address this topic, which is close to my heart. So let me just get my slide deck up. So, thanks for the introduction. I won't repeat all of that, but I'll just headline that I'm an affiliate for medRxiv, and offer some perspectives on this from the research and clinical point of view. Being an affiliate means that I help as a volunteer with screening and with discussions around some of the work that gets posted there, but this is a voluntary role, so I don't have any stake in medRxiv, and I'm really coming to this as an impartial commentator. So thanks again for joining.
So, as a kind of mid-career researcher: coming into the research arena, there's often this kind of desperate fear about impact factors and rate of publication, this awful sense of publish or perish. And of course, short funding periods are typical in the early and mid-career research phases, and the market is ever more competitive, so that stress seems in a way to be compounded by the COVID situation. We often have phases in our lives when we're dealing, as clinicians, with ongoing clinical responsibilities and training needs, while also managing a work-life balance; many people have caring responsibilities or other commitments outside work, which makes life complicated. So that's the impact factor beast. Of course, the peer review system can be regarded as a beast as well; it doesn't always function in the way that we'd like it to. It can take a long time, it can be very unpredictable, and again, all of this is compounded by the situation of the pandemic. So I made this slide thinking about: well, why do time delays in publication matter? I think there are a few reasons. You can laugh about this by saying we're all a lot older by the end of the project, but it matters because life goes on, and making progress and getting up the career ladder makes a difference. Everyone is cross. What I'm alluding to here is that it's difficult to keep a team together: you want to apply for new funding, you want to make progress, and people get bored and tired and despairing about getting things published. Sometimes you feel that by the time you actually get something into print, the data have actually been superseded, and there's no room for updates in the conventional models of publishing. And as I've said, it can be very hard to move on with next steps, whether that's applying for a new post or applying for a new grant.
Applying to start a new project is very difficult when you still feel that you've got things that are open-ended and unpublished. So this is not a small problem in the conventional pathway to academic publication. I for one was really delighted by, and embraced, this wealth of preprint opportunities that have come out in the last few years. And reflecting on this is really interesting, I think: I had colleagues who just felt this was an utterly unthinkable thing to do, to share work in the public domain before it's been peer reviewed and published, and now this is just a very widely accepted pathway, which makes life, I think, undoubtedly better for all of us. This isn't an exhaustive list of options, but these are some of the ways that you can share work prior to peer-reviewed publication. Of course, alongside the acceptance by the scientific community came acceptance by funders, which was absolutely critical, I think. I'm funded by Wellcome, and the Trust said in 2017 that they would formally accept preprints as part of grant applications and end-of-grant review reports. For the UK, the UK research institutes and the Medical Research Council followed suit very soon after, and this is now widely accepted. And when I look at my own CV (I've just been applying for the equivalent of senior fellowship positions), about 10% of my publications are preprints. These are the ones that are newest and that might be most relevant and most significant in terms of actually underpinning my own application, so this makes a really huge difference. And of course, the percentage of your publications that might be in this category varies according to the stage of your career and all sorts of things, but that 10% to me is really crucial in terms of advancement.
Also interesting, I think, and really epitomized by the last year, is the fact that this is a way of communicating work via the mass media. Here are two examples from the UK, from very different kinds of news agencies, sharing this work but stating very clearly that it is on medRxiv. This is the Guardian: "the study, which is published on medRxiv", and they give you a link so that you can click on that and it takes you directly to the preprint's page. The Mail Online is aimed at a very different audience, a very different demographic reading this, but likewise it is loud and clear in referencing medRxiv, and indeed the paper written about in the Mail Online has subsequently been published in Nature. So preprinting actually reaches a mass audience in a way that I think would have been very unusual a few years ago, and it's absolutely crucial in times of the pandemic. There's a real interest in preprinting for fairness. Some of you will be aware of the FAIR acronym for publishing, which says that your work should aspire to be findable, accessible, interoperable and reusable. I think, particularly for the preprint literature, the findable and accessible boxes are absolutely ticked by this model. And indeed, as I think Joe has mentioned about medRxiv, being able to share your data, share your methods, share supplements and so on also ticks the other two boxes. So in terms of a moral and laudable way to publish and share data, this really does tick the boxes.
I've also actually written about this myself, and thought about "fair" not as the acronym but just in terms of equity and justice: thinking about how we should make scientific publishing fair to the people who participate in research; to our patients, who may be participants but also may benefit from results being shared, disseminated and available; to our funders, who pour money into biomedical research; to our educational institutes; and to the clinical and academic teams within which we function. And also to a global community which is very diverse: a lot of my research work is conducted with partners in low- and middle-income countries. If you think about that context, then access to some of the platforms and publishers who charge vast fees for publishing papers is totally out of the question. So when you think about equity in publishing, I would also say that preprints are really able to support these questions. And I think, for health research, as Joe says, there has always been this fine line to be trod between sharing research early while not disseminating results that are potentially misleading or haven't been thoroughly scrutinized. But there are certainly benefits in being able to share methods, giving the potential to pool data sets, and allowing this kind of open and transparent discussion of results. There's an interesting question around duplication, isn't there? On the one hand, you could say preprints help us to avoid duplication: you can see if someone else has done a study and avoid duplicating the exact same thing yourself. But preprinting also underpins necessary attempts at duplication, whereby the scientific community can really see if something is robust and stands up to scrutiny by deliberately duplicating efforts to see if they can obtain the same results. And of course there are opportunities for collaboration, transparency and demonstrating progress.
So one example that I think is interesting from the COVID era is the dialogue about hydroxychloroquine, which of course was widely raised as a kind of miracle cure or prophylactic for COVID-19, and which has subsequently been heavily overturned. But all these many studies are available: you can see 948 results on medRxiv if you search for COVID hydroxychloroquine. And there is this nice comment in Newsweek by some scientists saying, you know, science is not infallible; a large number of studies did suggest that this was an interesting drug that might have benefits, and then it had to be withdrawn. That dialogue gets expanded and amplified, and having it in the public domain and keeping it transparent must be to the collective benefit of all of us. So, just to finish with, I'm going to give you two examples of papers that I myself have preprinted in the pandemic. I work mainly on hepatitis B virus (although not for the last year); this is my main research interest. It's a neglected disease. It doesn't have a lot of advocacy, and it's typically very slow to publish papers. And of course, in the middle of a pandemic, trying to publish on hepatitis B is, you know, nigh on impossible. But being able to put work onto a preprint server means that we've been able to share it with our community and move on to the next stages of the project really successfully. This is another, contrasting example: something I was involved in on antibody testing for COVID-19. This was a big multi-center collaborative effort done very fast early in the pandemic. And it was slow to get through peer review for the opposite reasons: it was such a political hot potato that it jumped between multiple journals and, for various political reasons, didn't get picked up right away for publication.
And if you look at the metrics here: I'm quite proud of my hepatitis B work being tweeted by 10 people and having 1,000 people look at the abstract, which is orders of magnitude more than it would have had otherwise, because it's still not quite in print (we're nearly there now in a peer-reviewed journal), but I've got an audience of 1,000 that otherwise would not have been accessible, and I've been able to put this on my CV and into grant applications and so on. Meanwhile, our article about COVID antibody testing has (I'm covered up here, but) 99,000 downloads of the PDF from medRxiv, and I think about 70,000 of those were before it got into a peer-reviewed journal. So 70,000 full PDF downloads just shows what a big audience you're potentially reaching. Of course, there are still downsides; the ground is still moving here. You do have to think carefully about what you might submit where. I would still recommend careful discussions with co-authors: think about why people might not agree to share their work on a preprint server and what the barriers might be. There's some concern about receiving negative comments, but actually this is a forum for really good, thorough feedback, which makes work better at the end of the day. People are hugely anxious about their work being scooped, and I've always said this is actually an anti-scoop mechanism: you've got your work out there, it's in the public domain, it's date-stamped with a DOI. There you go; it's not going to be scooped because it's got your name on it already. Is there a loss of quality? Well, maybe, but there are plenty of poor-quality articles that get into peer-reviewed journals, and actually the community polices itself: good work gets lots of citations and metrics and so on.
There is a lack of control around timing. This is a problem for a small minority of papers, but I've come up against it with COVID work, where sometimes you want to be able to make a press release and there's concern around the timing; I guess this is still a dialogue with Joe and others involved in medRxiv. But typically the turnaround times are short. And there are some limitations around eligible material, as Joe has already discussed: there are limits on what you can put on medRxiv, but there are lots of other diverse options for sharing other kinds of work, so if it won't go on medRxiv you can undoubtedly find another site for it. So I'll finish there, and basically the idea is that we get the impact factor back on a leash. So thanks to the medRxiv community and to eLife for organizing, and I'll be very happy to engage in discussions and questions. Thank you. Thank you. Thank you, Dr. Matthews. That was great. Very informative. Thank you so much. And it's promising to hear that you are publishing your research as preprints first. That's great. Thank you. And while we are waiting for Dr. Jeremy Fast, we are going to ask some of the questions that we have received. The first question is for everyone: what is the process by which submissions are rejected, who decides, and is there an appeal process? Maybe I'll start with that, and then I can turn to you as an affiliate, and you can describe your role. So when a paper comes into medRxiv, just as at bioRxiv, it goes through standard administrative checking: was everything submitted that needed to be; if it's a clinical trial, was it registered; is there a clinical trial registration number; is there an ethical oversight statement, and all that. The staff will actually return a lot of papers, going back and forth to make sure all the required materials are there.
There are no formatting requirements, but there's information that has to be part of every submission. They're also looking to make sure that it's an article that fits: is it a research article? Somebody asked in the chat why so many are not being posted. And it's important to say that we're posting them, we're not publishing them. Many are not being posted because they don't fit our criteria: they're case reports or case studies, which we don't allow, mostly because of concerns around our ability to understand whether patients gave informed consent, and there's often a lot of patient identification within a case report; that's better for a journal to have to manage. So for editorials or narrative reviews or guidelines and recommendation statements, we say, you know, there are places for those, but medRxiv is not it; we're a research platform. So somebody submits and it goes through. Once it gets through the administrative staff, it goes to our affiliates, and Philippa can talk about what that's like, but essentially somebody takes a look at the paper. Does it quack like a duck, walk like a duck? Does it look like a duck? That is, does it look like a research paper? They're not reading it; they're not peer reviewing it. They're not looking to see if it's right or wrong. They're just trying to make sure that, yes, this is a report of a scientific study. If anybody has any concerns or flags it for any reason (maybe it's a study reporting some new adverse event from the COVID vaccine, like a high rate of seizures), we may not want that to immediately get out and be spread around the media. We might think that someone should really make sure that this is right and it should go through peer review first, or we should at least have a discussion of it among our oversight board.
But if we do not post a paper, or reject the posting of a paper, of course you can appeal, and then we'll have a discussion and explain our logic. Philippa, I don't know if you want to talk about what it's like as an affiliate, seeing a queue of 50 papers that need to be reviewed, and how you think about it. Yeah, I've definitely been aware of the increase over the last year, some of which is mine, of course. And I should say, actually, just while I've got the floor: it's not that 10% of all my research goes onto medRxiv; 100% of my research goes onto medRxiv. It's just that at any given moment, as a cross-section, about 10% of all my papers are, as it were, still in that phase. That's just to give you a sense of that ratio; but actually, I'm putting 100% of my papers onto a preprint platform. To answer the question about my experience, I would say that what Joe said largely reflects my role. I take a look to see: does this have a title that looks like an academic paper? Does it have a set of authors with affiliations that look reasonable? And then you can very quickly see, by looking at the data and the results and so on, that this is the kind of paper that would be posted here. On a very few occasions I have been asked to take a special look at something which has been flagged for a query. As Joe says, that's part of the discussion around whether it is appropriate for us to put something out into the preprint domain without peer review, and then there would be several of us involved in a discussion about whether there was potential harm in disseminating it. There's a variety of things that you might look at: where it's come from, who's written it, why it's been written, and what kind of data are in there.
And so there are a few things I've been involved in where we've made a decision that it shouldn't be posted on the preprint site. Thank you, and thank you for the clarification, Dr. Mattis. We have a question from Alexandra; I think you had already answered most of it, so just this part, which I think is for Dr. Ross: how do you know that authors won't post elsewhere without undergoing peer review first, as you suggest? Is this something you are monitoring? Yeah, there have definitely been articles submitted to our platform where we were concerned about posting them, and then we later see that they show up on another preprint platform. There's nothing we can do about that, right? We can only enforce the rules of our own platform, and we think we've set it up in a way that benefits authors in particular, but also provides some of those safeguards for the broader scientific and research community. The only thing I would tell everyone is: if you see a paper on our platform or elsewhere that you have concerns about, leave a comment. That's actually the most important and effective thing. The first step is getting people to post their work, which is great and it's happening; Phil talked about it, and Jeremy's going to talk in a second about the value that comes with posting. But the second step is using the platform as a real means of engaging discussion: through Twitter, leaving comments, letting people know if you see a paper that's flawed. My hope is that we can soon get to the point where we're posting everything because people are actively commenting and saying, oh, I wouldn't believe this paper because of A, B, or C, really making it a forum. But when people take a paper that we won't post and take it elsewhere, there's not much we can do. Great, thank you. We have another question from Alexandra.
And I think both of our panelists can answer that. Is there any exclusion from screening? For example, do you screen all of the preprints by academics in the field, or are there only certain types of papers that you screen? The way it works, and Phil may want to add to this, is that outside of the special reach-outs to individual affiliates, everyone screens everything. And that's why it's really just a broad check: does it look like a scientific paper? Some people are expert in particular types of research; maybe they're expert modelers, working with us and volunteering their time as affiliates, and they see a big cohort study. Well, does it look like a cohort study? We're just asking for the basics as they take a look at it. Yeah, just to endorse that, really: I can see a queue of papers when I log in as an affiliate, and I tend to just work through them in order. If there's something where I think, actually, I don't feel expert enough to make a judgment on this, then I might flag it, but broadly speaking it's just: does this look like a bona fide piece of scientific research? Thank you. The next question is not addressed to a specific panelist: is there any difference between studies conducted in underdeveloped countries versus developed countries in terms of posting their work on preprint servers? Can you say that again? Would you say there are any differences between studies conducted in underrepresented countries versus developed countries?
No, no. And we are hopeful that the platform is a good place for research from underdeveloped countries, because sometimes that work is harder to publish if it's smaller in scale, or maybe there are other concerns because it was a less resourced study, for whatever reason. It still needs to have ethical oversight, it still needs to meet all the same requirements, but otherwise there's no difference between articles submitted from different countries or other parts of the world. It does have to be written in English; that is the only sort of formatting requirement that we impose. And there's my question about the statistics: do you know how many of the preprints posted on medRxiv are coming from developed countries such as North America and Europe, and how many from, for example, Asia, Africa, and South America? I can look it up while Jeremy is giving his presentation and then come back to it. Our next question. I can see that Dr. Faust has joined us. Welcome. Dr. Faust is an instructor at Harvard Medical School and an attending physician in the Department of Emergency Medicine at Brigham and Women's Hospital in Boston. He's also the editor-in-chief of Brief19. Hello, can you all hear me? Great, thanks for having me. Sorry I'm late; I had a conflict with some media people. Anyway, let me dive into a screen share. The hardest part of screen sharing is always finding the right screen to share, so just bear with me for a second while I go through my hundred different screens. There it is. Nope, that's wrong. Great. So, can you all see that? Great. Again, thanks for having me, and sorry I'm late. I hope this isn't redundant to anything that's been said; it sounds like you've been having a really great conversation.
I'll use the opportunity to talk through a couple of case studies and to think about preprints in action: an example of one case where we saw a preprint make a huge difference, famously I think, and an example of the opposite. And then I'll talk about my own experience, which has been a bit of a journey, because I've really come around on this; I was not necessarily on the preprints bandwagon until about a year ago. So here we go. I think this is the case study for everyone looking at why preprints matter. This is the RECOVERY trial out of the UK, doing great science during the pandemic, with thousands and thousands of participants. On June 17 there was a press release that basically said: by the way, dexamethasone, the steroid that's inexpensive and in most hospitals all over the world, has a mortality benefit for treating COVID-19. It was the first therapeutic of any kind, novel or existing, to show a mortality benefit. But they put it in a press release, which is hard to deal with, so everyone was demanding a preprint; I'd never heard that before: we have to see the data. So, five days later, they put out the preprint. And these are the sort of infamous, or famous, beautiful graphs from the preprint from the RECOVERY group in the UK, showing that especially, or really only, in patients with oxygen requirements or who need mechanical ventilation, dexamethasone has a mortality benefit compared to usual care. That's the third and fourth panels, C and D. And in fact, the opposite was true among patients who did not need oxygen: maybe a signal towards harm. So this was immediately practice changing in June of last year.
As a frontline doctor, I was not going to let a hypoxic patient pass in front of me without being highly considered, or almost automatically considered, for dexamethasone, and this was based on a preprint. So, practice changing. And it really adds to what was just said: this is a group that is well known and whose methods are very transparent, so it's far better than a press release. It's nice to have that. Then consider: the preprint was June 22, but it took another three or four weeks until the New England Journal of Medicine published this. July 19 was the first peer-reviewed publication of the RECOVERY data, and you'll recognize the figure. I just want to point out that we waited those three or four weeks with some people waiting for the peer-reviewed version, waiting for definitive proof, as though the preprint were not enough. But just look at these two figures side by side, because that's the point of preprints: the data are the data. Whether you see good methods or bad methods, the data are the data: either the methods are good and the data are credible, or the methods are bad and the data are unreliable. If you look at this study and read it carefully, you realize it's not a perfect study. It's randomized but open label: it's not placebo controlled, it's not blinded. But nevertheless, thousands of patients and a huge mortality benefit. And what did the journal really add to this? Well, you can notice that they put usual care on top instead of on the bottom in the formatting, and panel B is now panel C. That is literally the difference: they did some nice adjusting of the order in which the data were presented, and the font is different, and that's really, literally it.
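The "data are the data" point can be made concrete: a mortality comparison like the one in those panels boils down to a risk ratio with a confidence interval, which anyone can recompute from the reported counts, preprint or journal version alike. A minimal sketch, using made-up counts rather than the trial's actual numbers:

```python
import math

def risk_ratio_ci(events_tx, n_tx, events_ctl, n_ctl, z=1.96):
    """Risk ratio and 95% CI from two-arm counts (Katz log-normal approximation)."""
    rr = (events_tx / n_tx) / (events_ctl / n_ctl)
    se = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctl - 1 / n_ctl)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts with roughly the shape of a large open-label trial subgroup:
# 90/300 deaths on treatment vs 280/700 on usual care.
rr, lo, hi = risk_ratio_ci(90, 300, 280, 700)
print(f"risk ratio {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
# risk ratio 0.75 (95% CI 0.62 to 0.91)
```

Because the counts fully determine the estimate, reordering panels or changing the font in the journal version cannot change this number; only the methods behind the counts can make it trustworthy or not.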
And so for a month, you had doctors who were sort of scared of preprints not giving dexamethasone, because it wasn't in the right font, basically. I just think that's unfortunate. No one will ever be able to tell, and this is a classic example of cherry-picking, but right around the time the preprint came out, cases went up while there was actually a dip in mortality. I'll never know whether that was because of anything, but I always find it interesting that cases kept going up while there was a little bit of a decrease in mortality. Was that the preprint saving lives? I don't know, but it's certainly something I noticed. Now, the opposite situation is something that should have been preprinted and wasn't. And wow, would a lot of embarrassment have been saved had it been preprinted. This is the hydroxychloroquine or chloroquine, with and without a macrolide, paper: a supposedly huge multinational registry analysis done by a group that no one had heard of called Surgisphere. Apparently, they had hundreds and hundreds of hospitals reporting COVID data to them. Not preprinted. And I just want to point out, this is from a blog and podcast that I help to run: there were a lot of problems with the study. There was no track record from the people who did it, which, again, I don't really care about in itself; good science is good science. But that was sort of the tip of the iceberg. If everything else lines up, then you can just say, great, they're doing good science in a place you haven't heard of. But the problem was they didn't list the hospitals that participated, and the data were really shrouded; you couldn't really see the direct data.
And so an open letter was sent from a bunch of physicians and scientists all over the world to the authors, basically saying, look, help us understand. There's no ethics review in this paper. We don't know what hospitals were in the data sets. We don't understand why there are more deaths in Australia in this study than had been reported for the whole country of Australia during this period. That seems odd; it doesn't have any face validity. There were data from Africa that were basically impossible to imagine were true, because you need sophisticated electronic medical records syncing in real time, and they were acting as if this was happening in real time. No one had ever heard of them. But the reviewers at the Lancet didn't catch it. One of the most fascinating things was that the confidence intervals were a big tell: the confidence intervals were too narrow to be appropriate for the data they claimed to have. Someone on the internet, actually at Columbia University, was blogging about this: wait a second, you can reverse engineer what they really found from their confidence intervals, and it looks like kids were dying of this disease at a higher rate than adults, which we know is not true. So basically you had a peer review process happening after the Lancet published this thing; the public was doing the peer review that had been skipped. Here's the timeline. On May 22, the Lancet puts it out there. On May 28, there's this open letter listing all these problems. On June 3, the Lancet says, yes, we're concerned, and issues an expression of concern: we're going to get back to you on this, we're going to do an analysis. And then on June 13, they retracted it. Now you look online and see it's retracted.
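The reverse-engineering trick Jeremy mentions is simple enough to sketch. From a reported ratio and its 95% CI you can back out the standard error on the log scale, and from that the rough number of events such a standard error implies; if that implied count dwarfs what the paper itself reports, the interval is too narrow to be genuine. A minimal sketch with hypothetical numbers, not the actual Surgisphere figures:

```python
import math

def implied_se(ci_lo, ci_hi, z=1.96):
    """Standard error of a log ratio, backed out from its reported 95% CI."""
    return (math.log(ci_hi) - math.log(ci_lo)) / (2 * z)

def implied_events(se):
    """Rough total event count that SE implies: for a log hazard or odds
    ratio, var ~ 1/d1 + 1/d2, so with balanced arms total events ~ 4/se**2."""
    return 4 / se ** 2

# Hypothetical reported result: HR 1.34, 95% CI 1.22 to 1.46
se = implied_se(1.22, 1.46)
events = implied_events(se)
print(f"implied SE of log HR: {se:.4f}")      # about 0.046
print(f"implied total events: {events:.0f}")  # roughly 1,900
# If the paper's own tables report far fewer events than this,
# the interval is too narrow to be real: a red flag worth a public comment.
```

This is exactly the kind of arithmetic an outside reader can run on a preprint within minutes of posting, which is the public peer review Jeremy is arguing for.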
I think this is a great example of where preprinting would have helped, if preprinting had been required for anything like this: required, not just allowed. Then people like the ones writing those blogs, the statistical geeks and nerds who love doing this stuff, some of my favorite people, would have caught this beforehand. Instead, this got weaponized by people who really think hydroxychloroquine is the cure for COVID: look, the Lancet study showing it was harmful was actually a fraud. And so this made us, the scientific community, look very bad. So I think those two examples, the pro and the con, both lean towards preprinting being the better idea. Sorry, my allergies are hitting me here. I want to end with this idea people have that preprinting could ruin your chances of getting published, and I really struggled with this. Harlan Krumholz, one of the founders of medRxiv, really helped me see that it wouldn't. I wanted to show you my own journey here. We've preprinted a bunch of things, but the first thing I preprinted that ever got published was this one, on suicide deaths during the stay-at-home advisory period in Massachusetts, which we preprinted in October. There's the figure, which shows suicide deaths by month, every month from 2015 up until the present. The gray zone is the area that we predict would be the normal range, and during the pandemic you can see that basically suicides didn't go down or up; they were normal. Then, two months later, we got this into JAMA Network Open, and basically the only difference is that we added a month of data. That's it. Look at what we gave them, which I call the MVP, the minimum viable product.
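The "gray zone" Jeremy describes is an expected range built from prior years of monthly counts. One crude way to construct such a band is the mean plus or minus two standard deviations of the historical counts for each calendar month; here is a sketch with invented numbers (the real study used state vital-records data and more careful statistical modeling):

```python
import statistics

# Invented monthly suicide counts for 2015-2019 (one value per prior year);
# these are illustrative only, not the Massachusetts data.
history = {
    "Mar": [45, 48, 44, 50, 47],
    "Apr": [46, 49, 45, 51, 48],
    "May": [47, 44, 50, 46, 49],
}
observed_2020 = {"Mar": 46, "Apr": 44, "May": 50}

for month, counts in history.items():
    mean = statistics.mean(counts)
    sd = statistics.stdev(counts)
    lo, hi = mean - 2 * sd, mean + 2 * sd  # crude ~95% "expected" band
    obs = observed_2020[month]
    flag = "within" if lo <= obs <= hi else "OUTSIDE"
    print(f"{month} 2020: observed {obs}, expected {lo:.1f} to {hi:.1f} ({flag})")
```

Months falling inside the band are consistent with "no change," which is the shape of the result described in the talk: pandemic-period counts sitting inside the gray zone.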
And then two or three months later, what gets published is the best available evidence, the BAE rather than the MVP. So that was really my first experience: look, we can do this, it can be out there. The only downside of doing it this way was that when the study came out in January, it got a little less media attention than it did in October. The October media attention was huge; the New York Times actually covered the study we did, the Washington Post covered it, and I got to write an op-ed for the Post about it. The rollout was big in October. In January it landed and people were interested, but less so. Fine, but I'm not in it for that; I'm in it for getting the data out. If you're worried about that aspect, you have to weigh the two moments, but I don't think it's a big deal. The same thing is true here: mortality from external causes of death, injury, overdose, and suicide during the pandemic across the whole United States. The data came out on February 12; we preprinted this on February 16. And you can see the five things we looked at: homicides, drug overdoses, accidents, motor vehicle crashes, and suicides. These are our data, and of course, two months later, it's in JAMA. And again, it's the same story: here's the medRxiv version, here's the JAMA version; the difference is the font and one month of data. So, preprinting in summary: I think that in some cases preprinting saved lives. I think people started adopting dexamethasone because they saw that these data were good. This was an established group, but even if it weren't, they had great methods, and people started adopting it. I think preprinting gives you that public peer review that would have saved a lot of embarrassment in the hydroxychloroquine Surgisphere story.
I think that, as someone has already said, it's anti-scoop. I worried about this idea of scooping myself, but I'd rather be scooped by myself than scooped by somebody else. And lastly, there's the idea that you preprint your minimum viable product, which is as much data as you have, and then by the time you get to publication you can give the journal the best available evidence, the BAE. That's really the difference for me. So that's been my story, and I'm happy to take questions; I hope that helps. Thank you for joining us, and thanks for your great presentation. So we are going to continue with our questions. I want to remind you that if you have questions, you can post them in Zoom's chat box at the bottom of your page, and you can also tweet us at @eLifeCommunity using the hashtag #ECRWednesday. All right, we are going to continue. We have another question: how much public commentary or review did you receive on your preprint and incorporate into your revision during peer review? Is this for me? Yes, and Dr. Mattis, you're welcome to answer as well. I can chime in. A couple of things. We did get some feedback that helped. Actually, most of the feedback I got helped me prepare for the public defense of the work, especially with the suicide story. People were really resistant to this idea that suicides didn't go up during the pandemic. In a sick way, they seem to want suicides to have gone up as some proof that doing lockdowns is dangerous, which I don't understand. I don't want suicides to go up; I want them to not change, or to go down. That's what we found, which is good news.
So the feedback, especially on preprints once they go around on Twitter, gives you a sense of what the criticisms are going to be, and you can anticipate that a little bit in how you frame it for the public when you do the rollout. The one thing I'll say is that, ironically, the preprint experience has actually underscored for me the good things about continuing on and doing peer review, because in some of the cases, despite the fact that we preprinted it and it went viral or whatever and people looked at it, we still got really good comments during peer review. So I don't want to say that peer review is dead; it's not at all. In fact, my first JAMA paper, which I didn't show here, is quite different from the version we preprinted. Not because the data are different, but because the referees really wanted to help us frame it in a different, more useful way. The data are the same, but the presentation is quite different, and what we chose to share was different. So it's funny: preprinting, to me, both underscores the importance of getting information out early and also ratifies that there is something positive to be gained from the peer review experience, because we've had manuscripts that got a lot better. The facts didn't change, but the manuscript did. Sorry, I was just going to agree with some of that. I think we've definitely had a variety of comments back, mostly small technical changes, so again, not things that change the main message or the results, but technical comments that have been helpful in shaping a manuscript.
The other thing I would say is that I had a piece of work, when I was early in my independence, that never managed to make it into a peer-reviewed journal. It was a retrospective look at some hospital data, and there were some holes in it, so it was difficult to get into a peer-reviewed journal; the peer reviewers kind of tore it to shreds, and we never got it published. But actually, having it on a preprint server has meant the data are useful. Other people have picked them up and contacted me about them and said, oh, we're interested in the same thing, and our data also have problems; let's put this together. When you've got a relatively rare disease or condition and you're looking back retrospectively, there are inevitably some critical issues which can be really difficult. But by putting our work there on the preprint server, we actually started a dialogue with various other people. That was a good few years ago, and I'm still contacted every so often by people who are interested in it. I think that shows how you can share your work and have benefit without necessarily getting into a conventional peer-reviewed journal, so that was another very positive attribute for me. Great, thank you. Our next question: Dr. Faust, you have appeared on many news channels during the past year. Do you have any considerations in discussing results of a preprint compared to a peer-reviewed article? I used to. I definitely used to be very nervous about that. And I have never once regretted saying something that was preprinted, but I have regretted the opposite: I allowed myself to get scooped on a number of things. Harlan Krumholz and I have really beautiful data that we still haven't even preprinted, because the modelers won't let us, but there's a strong and compelling argument showing that indoor dining drives outbreaks.
I think we have the best data for that, and I kept holding and holding. I should have preprinted it, I should have talked about it in public, because by the time all the crappy data came out showing the same thing, we had nothing to add other than: what you're saying is true, and here's better proof of it. So yeah, I've never once regretted talking about a preprint. I certainly frame it that way: I always say, my colleagues and I have posted a manuscript on a preprint server that hasn't been peer reviewed, so take it with a grain of salt, but here are the data we found, and they're from the CDC, they're from here. I just frame it that way. So I am learning by experience not to keep making the same mistake: I can stop holding back, because the data are the data. I only regret not becoming a convert sooner. Dr. Ross and other panelists: do you think the NIH pilot preprint project has influenced the trends in submissions to preprint servers? Well, I think NIH did two things specifically that supported the use of preprints. First, several years ago now, they said that they would allow preprints to be cited in grant submissions, which is critically important, particularly for early-stage investigators whose work is just getting off the ground, to be able to cite work that may not yet have been published but supports the work you're proposing to do. Obviously there was more uptake in the basic sciences, but it applies in the clinical and health sciences as well. And then, during COVID, when NIH launched their pilot to index preprints from certain servers in PubMed, that again gave it, I don't know, a greater authority; it made people believe that it really is the path of the future. They conferred their external heft on it, so I was delighted to see both steps forward. Thank you. I have one more question for you.
Do you think there is a difference between clinicians and other scientists in how they approach posting their work as a preprint? Interesting question. I think it's been really nicely covered by both Joe's and Jeremy's presentations: just this kind of discussion and careful thought around what clinical data we share, how we share them, and when in the publication process. But what's actually happening, and what we've seen great examples of, is the whole scientific community engaging in peer review. And isn't that a way better and way more robust approach than sending a paper out to two or three specific people, who may have the right or not-so-right expertise, or who may have other agendas or other sets of beliefs? Putting something out in the public domain is something we as clinicians have learned the value of, and the pandemic has absolutely epitomized it: we've really learned about the value of sharing and getting that kind of real-time feedback. My experience over the last five years is that the ground has totally shifted, from people being very cautious, hesitant, and uncertain about doing this to just seeing it as part of the normal process of publishing. And I now don't see that there's a divide between clinical publications and non-clinical work, certainly not from where I'm sitting. Thank you, Dr. Mattis. We are just about out of time, so we have to finish our webinar now. We received some other questions that we couldn't get to; you can post your question to @eLifeCommunity on Twitter using the #ECRWednesday hashtag, and we can continue the discussion there. If you enjoyed today's webinar, the next ECR Wednesday webinar will be announced soon, and we hope you join us then. But for now, I would like to say thank you to our speakers and to everyone who tuned in today and contributed to the discussion. Thank you so much.
Thank you.