Well, let me just welcome everyone here. My name is James Wilson from the Research on Research Institute, and it's our great pleasure to be teaming up with the Center for Open Science and AIMOS to bring you this series of virtual symposia in the lead-up to the main Metascience 2023 meeting, which of course gets underway in Washington DC next Tuesday. And this session has excited more interest than most. We've got a fantastic panel here to discuss whether academic journals are broken. I'm gonna hand over to Jess Butler, who's gonna chair the session. Jess is a research fellow in the Institute of Applied Health Sciences at the University of Aberdeen. Jess, over to you.

So a very warm welcome, everyone. Basically, I think this is my dream symposium: I invited people who, over the years, I've learned a tremendous amount from. I'm a data scientist in Aberdeen, very interested in open science and convinced that open science methods are key to improving the quality of our research. But what I have found in learning about registering your hypotheses, making your data and code open, and publishing negative results is that while I think these are the right things to do for more rigorous science, for better research, for a better quality scientific record, we don't get rewarded for these things. I'm not sure the powers that be have recognized yet that this is the better way of working, which leaves us with not a scientific methods problem but a research culture problem, which can be incredibly overwhelming and depressing. But when thinking about a place to start changing research culture, I think we should focus on publishing, because commercial academic publishers play far too powerful a role in deciding what research is good and what research reaches the public. So this session will be in two halves. The first half will be three experts talking about the problems with current academic publishing. And then we'll have a chance to chat; a huge focus on Q&A for me in this session. We have these people in the room, let's take advantage of it. And then we'll have three short talks, only 10 minutes each, about what to do next. So fine, academic publishing is broken. What do we do now? And then we'll have another chance to chat with the panel, discuss, debate.

So with that, I will dive into the first half. Our first speaker is Dorothy Bishop, emeritus professor of developmental neuropsychology at Oxford. I know Dorothy from her role in founding the UK Reproducibility Network and from her fabulous long-running blog, BishopBlog. I will put links to all of these sites that I'm name-dropping in the intros into our chat. And I think Dorothy has the best retirement plan of anyone I've ever heard: she's basically moved into a sleuth role, investigating problems with paper mills and academic publishing. So Dorothy, I turn it over to you.

Okay, sorry, I've got a bit of a cough today, but thank you so much for inviting me and for having the idea of this particular session. It's a provocative title, so I'm gonna dive straight in and be provocative. First, for those of you who are not aware of the phenomenon of paper mills, which might not have affected your area too much: they are basically commercial operations, rather similar to essay mills for students, where you can pay somebody and you will get a publication, and it's fraudulent, basically.
It may be plagiarized, it may be generated by a computer, or whatever. And this guy, Nick Wise, in fact now has a Twitter account where he puts up authorship-for-sale ads, which he scours the web for in WhatsApp, Facebook groups, Telegram channels. There are people who are quite overtly doing this sort of selling. And there are other references here if you want to pursue the idea of what a paper mill is in more detail.

This is an example of the sort of thing that you get if you allow paper mills. It's a paper that purports to tell you that you can predict autism from EEG. Anybody who knows anything about EEG will find it extremely weird. The language is weird because it contains these things called tortured phrases, which is a term that Guillaume Cabanac coined, and he in fact scours the literature for these weird phrases, which are a hallmark of some paper mill types of output. And what you can see is that words have been changed. I'm having difficulty forwarding to the next one. Let me try and click it. There we go. You can see the words that have been changed, so that you have wonderful things like, instead of EMG artifacts, you have EMG antiquities, and instead of signal to noise ratio, you have signal to commotion proportion, which is rather wonderful.

But you would have thought this shouldn't happen. These sorts of things shouldn't get into the literature, because peer review should protect us against these fake papers. And the thing that happens with paper mills is that sometimes things get published because they're just very convincing fakes, and there are some areas of science where this is a really big problem. Sometimes also the author is invited to suggest reviewers, and these are actually friends or fake reviewers. But the more worrying case is where you have an editor who's actually complicit in the fraud and who is waving through fake papers, or indeed encouraging them, and also using fake reviews, so that what is supposed to be a peer-reviewed literature isn't peer reviewed at all. It's full of this garbage.

And the publishers have been portraying themselves as victims of these paper mills. Now, paper mills have arrived at scale relatively recently. They've been around for quite a while, but they've really grown and mushroomed, a bit like a virus entering the system, and it's understandable that initially publishers wouldn't know how to cope with this. And they have indeed taken some steps to tackle paper mills. There's a very good report by the Committee on Publication Ethics and STM which tells you what the problem is and how we need to deal with it, and has some ideas. Publishers have been at the forefront of trying to train editors and reviewers. There have recently been some mass retractions of things that are known to be fraudulent paper mill products, and they're very interested in developing criteria that will allow them to detect paper mill products, typically using AI screening systems, which are fascinating, to try and keep them out of the literature. This is all good, but I'm increasingly dubious about the sincerity of at least some of the publishers. And why is that? Well, there's a huge conflict of interest, in that they've been encouraging huge expansion of certain journals. So this is some journals that are published by Hindawi, which has recently been in trouble because it's been noticed that it's full of paper mill products. But you can see this is the number of publications over time, and in 2021, 2022, it went mad.
These are mostly papers in special issues, and there's a many-fold increase in the number of papers. Why would a journal want to do that? Well, if you look at the actual income that they're making from this, it's astronomical. In one year, this one journal, Computational Intelligence and Neuroscience, was making over $8 million, and that's just one journal. There are many, many of these journals, and this is also just one year. So you can see that the financial incentives are rather remarkably large.

This week, I was a bit shocked to find real confirmation that this is not just a problem with some dodgy editors, because an editor was actually removed from the Journal of Political Philosophy. This is still a very new story; it only came out this week, and it may be that there are some more subtle details that we're not aware of. But basically, I assumed that if somebody says an editor's been removed, it's for doing something wrong. This editor, however, apparently was removed because of communication problems. And another member of the editorial board, who's also the editor of another Wiley journal, said that what was happening is that editors were being asked to publish a lot more papers, again replicating this picture where things go up in scale. And the editors weren't very happy, because they felt they were being made to relax standards and to handle ridiculous amounts of stuff. So this is, I think, really worrying if you're seeing a publisher doing it.

I've also been very interested in the real negligence in how editors have been appointed for special issues, and how dodgy editors get into place. So I got fascinated by the case of a man called Keifer Zhao, who was named as editor for one special issue in one of these journals, Computational Intelligence and Neuroscience, and for another special issue in the Journal of Environmental and Public Health. In 2022, he handled 284 papers for these two special issues, and he was very efficient: he managed to get an initial response after peer review to authors in 19 days on average. Unfortunately, many of these papers have been identified as paper mill products and have comments to that effect on PubPeer. A lot of them are what I have termed AI gobbledygook sandwiches, where they've just got a load of rubbish in the middle of the paper, with a bit from Wikipedia about artificial intelligence methods. If you Google him, he's a PhD student at Hong Kong Polytechnic University. So what the hell is he doing editing two special issues for a so-called reputable journal? I wrote to Hong Kong Polytechnic University and they actually conducted an investigation, which is very impressive. And they concluded he actually wasn't to blame. He was not the editor: he'd given his email password to his PhD supervisor, who's now at another Chinese university and who had been using it. That man hasn't been responsive to emails, but his university has told me they are going to investigate. That was a while ago, but we'll see if they do. I informed Hindawi about this. Well, in fact, I told them about some of the individual papers this guy was publishing, which were frankly crazy, way back. But in 2023, I informed them about this, and they said, thank you very much, we'll look into it, sort of thing. But then last month, in March, or two months ago now, a new article appeared in this journal where he had been the editor. So it makes you think they're not really taking it seriously.
And there are other grossly inadequate responses to reports of paper mill activity. This is just par for the course, I'm afraid, and I think James Heathers may tell us more about this. But here is just one example that's live at the moment. Alexander Magazinov sent a detailed account of a citation cartel operating in the Journal of Energy Storage, with names of the people: a big network of people all citing each other and placing dubious stuff. In September 2022, the Committee on Publication Ethics agreed to look into the case. Since then there have been lots of emails to and fro between him and COPE, and COPE seemed to be trying to make something happen, but nothing happens. Eventually a new editor is appointed, which is interesting, but still no action. And then, just in March, another special issue is published with members of the citation cartel as editors. So things are not happening.

So in sum: paper mills are a massive problem. They pollute the scientific record. There are lots of publishers who have been negligent in responding to the threat of paper mills despite repeated warnings. Not all publishers; I would say some of the society publishers are fine and are really taking it seriously. But the responses that publishers are making appear designed to appease critics rather than to really deal with the problem at source. And I have to say, the more I look at it, the more it looks very similar to the way oil companies will pretend that they care about climate change when actually it's all just greenwashing: they're not really going to change at source, because it's just not worth it from the point of view of their income. And that's it.

Thank you, Dorothy. So we're having some trouble with opening the chat. Maybe a question for Wendy: the chat is currently only open to panelists, but folks can put questions into the Q&A box. We're gonna let them accumulate for three talks and then we'll chat. So, thinking about Dorothy's talk: since we publish our science in a pay-to-publish model (I wanna get my research out there, I pay), it behooves these commercial publishers to publish many, many, many more articles and just open up the money bags, which means the editors have little or no control, or time to control, what's coming in. I mean, the stuff Dorothy posts on Twitter and Mastodon, these articles are word salad. There's not even a pretense at making them look like real research sometimes. So basically there are a hundredfold more articles. Guess what? Some of them are going to be generated by AI. This is how it's going to work.

So next, on to James Heathers, who we thank for joining us at an ungodly hour, his time. James is the chief scientific officer at Cipher Skin. I know him better as the co-host of Everything Hurts, which is my favorite podcast. They have 200-some-odd episodes. I can recommend all of them, especially the interviews; the interviews with Michael Eisen that Everything Hurts did were absolutely great. So just like Dorothy, James has a day job, at Cipher Skin, but is also very well known for his efforts in error detection in the academic record. He is here to break your heart about the impossibility of correcting the scientific record when you're an error detective. Over to you, James.

It's a tremendous time for the VPN to decide that I'm misbehaving. You look clear from here. Okay, superb.
Well, thank you all for joining me, and a special shout out to those of you on the east coast who got up at six o'clock in the morning. This is in fact the middle of my night, but I thought this was important. And it's always good fun to talk about the impossibility of things. So I'm gonna make the assumption at this point that everyone knows the basic mechanics of pre-publication review the way that we typically do it: what journals are, how they work, the process by which the pre-publication review is achieved, et cetera. That should be fine, I think, given the audience here.

So let's say that a hypothetical body of papers existed and we wrote to the editors in that case. What would happen if the papers were really bad and we were... hang on, am I not sharing my screen? You are not, we are seeing your face. Oh, no, no, that's a terrible idea. Nobody needs the face thing. Okay, there we go. There we go, okay. So if we have a hypothetical body of papers, let's say they're really terrible, and we talk to the editor. Let's say that proceeds over a period of time. How bad could it possibly be? Well, this took me about 20 minutes to reconstruct, and I've never been able to do this before; previously, when I wanted to look at it, I didn't have the time. This is the best one I've got, so I pulled this slide: why don't you complain through the official channels? So this is a case within social psychology. The authors are French; the details aren't important. I just need you to see the timeline here. Now, this was very difficult to reconstruct, but I did actually manage to find all the pieces with a little bit of help from the erstwhile Nick Brown, who was more involved in this than me at every single point. Now, this is still going, this particular case, which is a back and forth between Nick and myself and various editors and psychological societies and national research integrity societies and probably some other parties that I'm forgetting. The reason that I'm forgetting is because it's been going on for 90 months. Those of you who are good at arithmetic have probably noticed that 90 months is seven and a half years since these papers were first flagged as a problem. For me, that's a postdoc in Portland, back to Boston, a postdoc, a research scientist job, leaving academia altogether, Cipher Skin, which was three years until recently (I forgot to tell Jess to update my bio), and now another startup. So this is why, when you tell people who are exercised about the state of the scientific record, "just talk to the editors involved," it does not make a lot of sense in some respects, because you may be stuck in a process that takes 90 months. I won't go through the individual pieces here; I just wanted everyone to see that.

So I'm gonna try and hit these quickly, because it's 10 minutes, and I've tried to think about this at the highest level of mechanics possible. Dorothy's already ably covered some of this, but I run companies now, so I have a tendency to think in very high-level sort of executive babble, but I've found that's really helpful in metascience over the last few years. So let's think about this in terms of markets for a bit. Firstly, journals and their publishers are technically interesting: if you want to start one yourself, you can, you're allowed to. That's perfectly okay.
Many of the smaller publishers' Wikipedia entries are fascinating, because one day someone had the idea that they should be in charge of something, so they started a series of publications, and now they exist, and that's literally the end of the story. You are allowed to be entrepreneurial in this space, like the PhD student we heard about earlier. Secondly, everything can be published somewhere. Any given weak paper, any paper mill paper, any paper that's vaguely coherent, will eventually find a happy home in some outlet or publication somewhere. Thirdly, commercial journals have to continue their business trajectory. And if there's one thing that I think is an underappreciated source for learning about how the mechanics of publication work, I strongly advise you to go to the SEC website and look for the documents that are provided in the share prospectuses. When a publisher goes through an IPO, an initial public offering, from a private company to a public company, they have to tell the government what's up, basically: all of the financial details, all of the mechanics, are laid out. Because basically you will see a business model that says: we will continue to have more papers. We will continue to have what they call market capture, that is, all these people who publish in the journals whose titles we own will continue to publish in them, because they have to. So they think of these things in completely different terms: repeat business, longevity. But you can put all of those words together, and what it actually amounts to is an upward pressure, collectively maintained, on the amount of publication in the first place. So we are going to need more peer reviewers, more editors, and more space for everything to be read. I think everyone is probably familiar with this to some degree, because you can find papers complaining about this in the formal sense, making many of the same points that we're making today, back to about the 1960s. None of this is new.

So, as Jess said, I have a long history of complaining and being difficult about things that have been published in various scientific journals, in various fields, over a long period of time. And if there's one thing that's changed since I started doing it about a decade ago, it's that I've started to have more empathy for the people who are on the other end of the process. Because what happens when we have a big collective upward pressure on the amount of papers that are being published and need to be handled, and on the amount of editors who need to be involved? What happens is you're going to need more editors. So in general, over time, purely by virtue of the numbers involved, they are less likely to be fully trained. They are less likely to have handled something like this before, and they're less likely to have people around them to help them, because they're not part of a community that considers them to be informally trained over time. They're just thrown into the job, slightly unseen for the most part, the role of editor in this case. And so it's very similar to the role of peer reviewer, in that there is not a formal mechanism for learning to do it, nor is there an informal method of apprenticeship a lot of the time. And of course, these people are busy, because everyone is busy. As might be expected, retractions take time, effort and money.
And I say retractions there, but I also mean the issuing of a correction or an expression of concern, or any number of other mechanisms for formally investigating something that might be a problem in the metascientific sense. My thinking on this for the last couple of years has been colored by policy work that I started doing for a couple of journals, thinking about what a process like this would ideally involve. The conversations that I'm waiting to have, and the things that are most interesting, are actually coming from the lawyers who are involved in this process at the back end. Now, a lot of people who are editors at a journal may not have even met the counsel that works with the publication body, who are the people, in many respects, that are approving whether or not something like this can happen in the first place. That's of course at the very top end of good journals, we might say, where people are compelled to take this sort of thing seriously. And below that, there is significantly less legal representation and significantly more "I'm sorry, I lost your email." Not everyone will keep emailing you back for 90 months, trust me. In general, the same thing happens with editors as happens when you make data requests to authors. There was a wonderful paper last year by Gabelica et al.: I think 6.8% of authors who made a data availability statement would actually send you data from the paper. Part of that community, part of those same people, do not see engagement in a post-publication process as part of their job, and they're making decisions about that at a formal level. So they're getting back to you very uncommonly, and, in a more important way, it's now possible to be more formally ignored.

Now, here's the big one for right now. I don't have time to get into this, or what I'm doing for work right now, which does involve artificial intelligence. This is all going to get a lot worse, because the one thing that large language models (which everyone is presently calling artificial intelligence, but in my opinion aren't) are really good at is bringing words together in order. And they can do that in a way that's effectively infinite. We've obviously had nonsense generators for scientific papers for a very, very long time, and no doubt that will be touched on by other speakers in the discussion. The large language models are really good at it by comparison. And the level of interest and excitement around them, and the way that the tech industry has responded by building services around these services for people to buy, is quite startling right now. There are probably 50, 60 companies even just within a niche. There is an explosion of people who are paying attention to: how do we get a computer to put words in order? We already have an upward pressure problem now. The next two years are going to be really interesting. One minute, James. One minute, super.

So, a few more ancillary points and then I'm done. Even if correcting the record matters, if it takes 90 months, it may not even happen, and even if you could get credit for doing it as a metascientist, it may not happen quickly enough to have any bearing or relevance on your career. It certainly didn't for me. The system just does not presuppose it.
It's not set up to reward it, but at the same time, it also doesn't work quickly enough to reward it if you wanted it to. Journals are wildly unequipped, in terms of the headspace and resources necessary, to re-adjudicate things that they've already published. And there is no formal cross-country support for being able to do any of this work at all. These are the points that I thought to include when someone sent me the teaser title with the word impossibility. This concludes my discussion of impossibilities. Thank you for your time.

Thank you so much. So we'll have Q&A after the next short talk, so we'll be able to hear more. And just to reiterate James's point: these commercial publishers are operating as designed. You can open up their business plans and they are just like, yes, we will publish a hundredfold more articles, hooray, and we will hire, and not pay, a hundred more editors, and not support them, and not introduce them to our legal counsel. So they're not hiding it, right? This is a commercial business and they want more article processing fees, let's go. James, you had a nice talk that I can't find on a quick Google, with an F word in it. It was like an hour-long talk somewhere about... Yes, I have many talks with an F word. No, like in the title. And it was about, similarly, looking over the SEC filings for these company prospectuses. I'll have a think. Pop it into the chat. Yes, I can provide that for you. You may find it difficult to Google from a work computer. Same.

Okay, so our next speaker, and our last speaker of this first half before we open the floor to chat, is Björn Brembs. He is a professor of neurogenetics at the University of Regensburg. I know him also from his blog, which I will pop into the chat; it's the place to go if you are looking for well-evidenced research on how bad academic publishing can be as a business. So again, they're not hiding it, and Björn has done a great job of synthesizing a lot of this evidence from the perspective of the people who have to use these businesses to prove to their bosses that they're worth promoting. Björn's talk is called "Publishers are drowning in money: what are they doing with all the cash?" Over to you, Björn.

All right, thank you very much. If you thought after Dorothy and James's short presentations, oh my God, how can it be worse? I'll try to explain how it can be. At least for me, it can be even worse. So this should be the right screen. Yes, you should all see the title now, right? Publishers are drowning in money: what are they doing with all the cash? Let's go. So one might wonder how, if the quality of what they publish is so bad. And reproducibility projects show that this is not just a paper mill problem. Even the articles that are nominally okay still can't be reproduced, to a large fraction, somewhere between 40 and 80%. So on average, about half of the experimental research that is published is not reproducible in one way or another. How did they end up making so much money off of that? One of the ways in which that happened was noticed a very long time ago, when it was still all about subscription publishing. You see, this is from a case from July 2003, when Springer wanted to buy Bertelsmann; it's a merger procedure, and there's EU regulation that has to look into that.
And what they found is that these publishers are monopolists from the point of view of functional interchangeability, which is the main criterion for something being a market: two different publications could hardly be regarded as substitutable by the end users, the readers. And this is very obvious for us. Every article should, and usually does, exist only once, and so the person or entity that owns it can charge whatever they want for reading. The same is true, obviously, for journals. If I'm a biologist, I cannot publish in physics journals. And as long as I don't have a permanent position, or as long as I didn't have a permanent position, I also can't just publish in any old biology journal. It has to be of a certain rank in order for me to get a job. So essentially, for my topic, at my rank, there's in most cases only one journal in which I can reasonably publish. From all perspectives, be it as an author or as a reader, they have a monopoly position, or a position with very, very little or hardly any competition, which means that you can charge a lot of money. So that's where all the money is coming from.

Then the question is: well, to make a lot of profit, my costs should be really low. And so we looked around at some of the smaller publishers and asked them, hey, how much does it cost you to publish something? And here you see the different steps that one needs to publish: you have to handle the online submission, you have to assign a DOI, you detect plagiarism, you check the references, you produce the output, you have to do the indexing, all that sort of stuff. This is just one example, and it's an example from a publisher that sits in a middle-to-low-income country, so this $200 per article may be on the low end. And so, together with Alexander Grossmann, who's the expert in this, we checked: as James said, anybody can start a publisher or a journal right now. If you wanted to, what would your costs be? We calculated in a way that was very publisher-friendly, so to say, and we came up with this: an average article, if one would do this now with a lot of services around it, would cost a publisher a very generous $600; it's usually less than that. We also know that the revenue per article, averaged over subscription and open access, has been fairly constant over the last decade or two, somewhere between $4,000 and $5,000 US. And so we took the lower bound of that, the $4,000, which is still fairly accurate today. Then we know from the reporting of the public publishers, pardon the pun, how much profit they make, which is usually about 30%. So that's about $1,200. And that means, on average, across all the three million articles published per year, we have about $2,200 per article that is neither profit nor cost, where of course, for each article and each publisher, only the publisher knows where that money actually goes. And that's more than half; it's more than half of what is currently being paid. So the question is: where does that money go? Some time ago, Elisabeth Bik mentioned to James and me that it would be nice if they would maybe shell out some of that money for some decent quality control. But as we just heard, that doesn't seem to be where the money is going.
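To make the arithmetic being walked through here concrete, here is a minimal sketch in Python using the rounded figures quoted in the talk. The inputs are Björn's estimates, not audited accounts, and the yearly total is just the straightforward multiplication of his per-article figure by the quoted article count:

```python
# Per-article arithmetic from the talk (rounded estimates, not audited accounts).
revenue_per_article = 4_000   # lower bound of the $4,000-5,000 average revenue
profit_margin = 0.30          # typical reported profit margin (~30%)
cost_per_article = 600        # the "very generous" production-cost estimate

profit_per_article = revenue_per_article * profit_margin
unaccounted = revenue_per_article - profit_per_article - cost_per_article

print(f"profit per article:      ${profit_per_article:,.0f}")   # $1,200
print(f"unaccounted per article: ${unaccounted:,.0f}")          # $2,200, i.e. more than half of revenue

# Scaled to the ~3 million articles published per year:
articles_per_year = 3_000_000
print(f"unaccounted per year:    ${unaccounted * articles_per_year / 1e9:.1f} billion")
```

Even with these deliberately publisher-friendly assumptions, over half of each article's price is neither production cost nor reported profit, which is exactly the gap the talk goes on to ask about.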
So where else could it be going? Well, of course we all know they lobby politicians, and they bribe, probably, a lot; there was the Research Works Act from many years ago. So they do all that kind of stuff. But they also invest it in other things. What one can first see here is that, in terms of quality, it's actually the other way around from what one would think. One would think that the higher you go up the journal rank ladder, the more scrutiny there is, because one aspect of it is that there's more rejection. However, in the sources that I list there at the bottom, what one can see, if one looks at methodological quality, that is, how well the research was done, in all kinds of different fields, is that the higher you go in journal rank, the lower the quality of the work that's being published there. And probably, this we don't know yet, but probably also the lower the replicability. At the same time, what we also know from a whole bunch of sources is that as we go up journal rank, the higher the price, which now, in the open access situation, means article processing charges. So essentially, collectively, we're paying more for less reliable research when we're using the current journal hierarchy. And of course, this is because we're not paying for quality, we're paying for prestige, and prestige is what is being monetized by those corporations. So it's quite clear, and it has become clear already from the first two speakers, Dorothy and James, that the publishers have orthogonal interests. They don't care what they publish as long as we pay. They could print empty sheets of paper and send them out to us, as long as the financials are okay. What is being published is completely irrelevant to them, because as long as we pay, why should they worry? And obviously we pay more for less quality. So it's quite clear what the incentives are: you should publish more, lower-quality work, because that's where all the money is.

Now, where does that money go? That's something that has come into focus recently for us, as those who suffer from it, and that's science tracking, or surveillance publishing. The publishers have been investing in tracking software and tracking technology for about 10 years, which is roughly the time since we stopped needing subscriptions, right? For about 10 years there have been a gazillion different ways of getting around subscriptions, and yet we still subscribe. This is probably something that the publishers didn't anticipate: that despite not needing subscriptions, we would still be paying for them, 10 years on from 2010 or 2012. So what they did, a large number of them, as you will see in a second, is invest in software that collects your user data, and not just on their publishing platforms: of course they buy the tools and software that help you in your research workflow. So if you look here at three different major publishers, you have Elsevier, you have Holtzbrinck (Holtzbrinck holds both Digital Science and Springer Nature, so I put these two together), and then you have Wiley. And if you look here, on the x-axis you see, horizontally, what is actually vertical integration: the research workflow. You have the discovery of a research question, then you collect your data, analyze the data, write a paper, publish it.
And publishing is roughly the point in that workflow where the publishers used to work, so what we have now are essentially ex-publishers. And then you have outreach, and then assessment, and so forth. Let's stick with Elsevier; I can't go through all of them. So they have their Scopus database and Mendeley, their citation manager, so they know precisely what it is that you're finding and what you want to cite. Just from there, long before you start writing or analyzing, they already know what you're interested in. Then you analyze your data, you write and cite, you publish with them, then they have solutions for outreach, and then, also very prominently, they have Plum Analytics and Pure, which is licensed by a lot of our employers to find out which faculty should get more money and which faculty should get less. So they're steering where the money is going via these analytics and assessment tools. And this is the same for many of the largest of these publishers. So this is where most of the money is going, essentially. It's not going into quality control, because they don't care about that; it's going into getting ourselves surveilled as completely as possible, from discovery to assessment. So essentially, the saying goes: if you're not paying for the product, you are the product. But academics are so smart, they pay, for the prestige, to be the product. While everybody else outside of academia is happy not to pay for their tools, we even pay for the tools and then get surveilled.

And this is also, as mentioned before, all done in the open. Elsevier doesn't say that they're publishers anymore; they say that they grew from their roots in publishing and now they offer data analytics. If you look at their about page, it says Elsevier is a leader in information and analytics for customers across the global research and health ecosystem. So essentially, the data that you provide to them will end up in a product that tells, let's say, South Korea what they have to do to become world leading in some aspect of science and beat all the competitors, of which you are one, usually, if you're not in South Korea or in some other country that buys these services. And Paul Abrahams, the chief communications officer of RELX, told us this year that less than half of Elsevier's revenue comes from academic journals. So most of their money comes from selling your data, and from selling and licensing these tools that they use to collect data and to influence where research should be going. And how much is that? Could you wrap this up? One minute. Sorry? One minute, yes, yes, I'm almost done. I was surprised there was still a slide in there; actually, I think I have one slide on solutions, because those were all problems, right? So this is about three billion. Dorothy was talking about some millions of dollars or euros or pounds; this is three billion pounds for Elsevier. And if a little bit more than half of that comes from science tracking, that's about 1.7 billion, every single year, that they're making with our data. So essentially, academics spend billions for the privilege of serving as a corporate commodity. That's what we are doing collectively. And the solution, I would say, is that we should do what people in the 90s already proposed. This was published in 1999, but it's a story from 1993, described at a meeting held by the Royal Society.
So it's now a 30-year-old idea that we should maybe upgrade the way we communicate science. Journals are a concept from the 17th century, and there's a reason very few people outside of academia are still founding new journals. I will post in the chat box the link to our long-form way of explaining how we would like to do this today; we of course wouldn't do it like they would have done it in 1993. There are 10 experts on that article, where we describe, in more words than I have left today, how we can take all that money that we're currently paying to become a corporate commodity and use it for something useful for academia. Thanks a lot.

Thank you very much. We should offer trauma support, like, halfway through this; if these talks are just hitting bottom for you for the first time, it's tricky to handle. So I was gonna go until 12:50 with Q&A for this section. And Björn, there's a question asking for the citation for your cost estimate article, if you could pop that in as well. So I will try to synthesize some of the questions that are in the Q&A, and also encourage people to turn on their cameras and raise their hands. But my question for the panel, and please everybody chime in: these academic publishers have a business model. It's a good business model. They're making a ton of money. They have no incentive to increase rigor, to hire people who are specialists in fraud detection, things like that. No, they're gonna do some data mining and sell it back to us. Once you get your head around these facts, it's all pretty transparent, right? Nobody's hiding any of this; it's the publishers' stated business model. Now, I don't hang out in circles with the people at UKRI or wherever who are writing the multi-million-pound checks for each university. What's the feeling among the people writing the checks to pay these processing charges, to pay the subscription fees? I feel a little insular, knowing all about the problems. And I wonder: what's your feeling when you talk to, I don't know, people at these agencies, deans, whoever, about how bad this is? It's egregious. Björn, yeah.

That can be the last thing that I say. I just recently talked to someone who published in Nature Communications and paid 5,000 euros plus tax. I asked them, well, that's a lot of money, isn't it? And they said, oh, it's so worth it. It is worth it to them to publish in Nature: promotion guaranteed, right? Absolutely worth it to spend taxpayer money to get yourself a promotion by publishing in a fancy journal. Not even a particularly fancy one, it's just using the name Nature. But good for them, right? They figured out they can spawn 150 Nature journals, all of our bosses will be fooled, we'll pay the fee, and we'll get promoted.

Who's writing the checks for all of these APCs? I mean, I've been talking to academic librarians a bit, and they don't like paying these fees. They don't want to pay these fees; it's all coming out of their budgets. But they feel compelled to do so because they're being told by academics: we need access to this journal, we need to be able to publish here. So I think there needs to be a viable alternative before we can change that; they feel stuck.

Librarians are paying the fees, and I know they're cross about it, as they should be. But I wonder: there's no magic man on a throne at UKRI, but they're writing the checks that pay these APCs, no? Or Wellcome Trust, or wherever.
Well, sometimes, quite a lot of the time, there are university-wide or sometimes nation-specific schemes that are providing the money for this. So a lot of the time these things are quite heavily comped; they're certainly not paid by individual researchers. A lot of the time they're paid out of grant funds, and that money has been specifically recouped from the government in the first place and later slotted in. Sometimes I've heard of conversations with people where they're like, well, we got an extra $25,000 for publication costs from the government, but we haven't spent it; I hope it's okay if we redirect it in the budget to something else. One more thing while we're talking about money, which is something I have to do all day now. If you talk to people who are within the publication industry themselves, who are in that chain, they have a completely different language, understanding, lexicon, view, everything, to what we're talking about. This is even now, even given the fact that I'm referencing things from the 1960s onwards, referencing things from the early 1990s; these are not new ideas. We're talking more about the change in the trajectory of old ideas. And the people who are actually managing the elements of the company that hold this stuff together are completely walled off from the vast majority of this. They think of similar and overlapping issues in completely different terms. They don't really know that a lot of what we're talking about is happening, and if you ever bring the heat in a conversation with them, the response in general is: why are you being so mean to me? At least, that's been the center of my experience.

Dorothy's put her hand up. Wait a minute, sorry, I did not see where I could see hands. I think David Reinstein had his hand up. David, do you want to come in? Allow to talk. Sorry, I didn't know I had to allow you to talk.

Hi. Good morning. Here in Boston, Massachusetts, New York actually. Sorry, it's very early; the coffee hasn't kicked in yet. And I guess maybe this is a little bit more towards the second part of this session, but basically, I want to get your thoughts on this anyway. The discussion seems to assume, and I'm reading from my written comment here, or at least implicitly assume, that when a paper gets into a journal, it then has value, it has influence, it gets people tenure, and on the other side of the coin, it influences decisions that policymakers and scientists make. But aren't we moving on? I mean, we can now, and in some fields like economics people do, put all our work up on the web. You can publish yourself. Obviously, publishing is antiquated. Aren't we moving to a world where research credibility is based on having positive proof and rating of your work, in other words, demonstrating that it is valuable? And there are several, this is a little bit of a plug, but there are several projects in that area, and I want to know what you think of them. One of them is eLife's publish-then-review model. Another, which is the one that I'm involved in, is the Unjournal, where essentially we pay evaluators to evaluate work and give it a rating. And that kind of makes the idea that publication is the target a bit obsolete. Why isn't that a solution?
In other words, why should we then focus on what I think is perhaps the antediluvian, or whatever you wanna call it, model? What do you think about this? And what do you see as the key blockers? We're trying to anticipate blockers to this.

So, I agree there are many options for us to make our work public. I would disagree with the idea that the powers that be, who are quickly evaluating our papers, put much stock at all in a comment left under a preprint on the internet. So...

Well, it's not, I mean, sorry, I'm jumping in to talk over you, but if you look at these models, it's not a comment. The responses will be DOI'd publications in themselves. But okay, go on.

That's a very good point, and I'll let you continue. Sorry for talking over you. Any panelists wanna respond to David?

I don't think it's only related to David's question; it's related to what's gone before as well. I think we've got a very narrow focus here if we're only talking about Western countries. The paper mill model is largely supported by China, Iran, Russia, where it actually doesn't matter where you publish; all that matters is that you have published. As for the money that pays for these, people were saying, oh, individuals don't pay for it. In some cases they do, because they need to advance their careers, and it's relatively cheap, relative to some of the other things they may have to pay for, if you want to become a doctor in a Chinese hospital and publication is required. Or it was; I think they're changing that now. They definitely are, yeah. And a lot of institutions will pay, because they want their rankings to go up on one of these multifarious scales that institutions get rated on. So, I mean, it's all totally corrupt. It's all totally wrong. I quite agree with David; I've been arguing for that sort of thing for 10 years, but it's like trying to turn around an enormous ship. And I think people who have been pushing for this sometimes underestimate the drag on the enormous ship, which comes not just from their colleagues but, if you look at it as a worldwide thing, from the many, many people from different cultures involved. We really probably need to be putting more effort into communicating with people in many different cultures, to make it clear that that approach of just publishing as much as you possibly can, never mind the quality, feel the width, isn't a sensible way forward. It really is embedded deeply in some places, and people are quite shocked at the idea that a paper mill paper wouldn't count for anything.

Well, on that note, I'm gonna move us on to constructive actions, and hopefully leave even more time at the end for more Q&A. I encourage panelists, if you'd like, to take a look at the questions in the Q&A section; some of them can be answered there. So, without further ado, to start the second half with concrete suggestions, we have Dan Goodman, Senior Lecturer in Engineering at Imperial College London. I know Dan from his neural reckoning handle on Twitter and Mastodon. Dan, I first got to know your name about a year ago, when you posted saying that you had stepped down from all editorial boards, I think at commercial publishers, and stopped doing pre-publication peer review. And that's my favorite kind of political activism: just stopping doing stuff, right?
So you think that the right thing for us to do may be to withdraw some services here. It seems so astonishing for an academic to say it out loud, and it seems so naughty, like we're sort of not allowed to do this, but we absolutely are. So with that, I turn it over to you, Dan, on the end of standard peer review.

Great, thank you very much. Yeah, so I thought I would start my slides with a nice peaceful background scene of some swans in the mist, because I need some peace before talking about peer review; it gets me a bit heated up. I thought I would just quickly start with my story, because I think I'm more of a newbie to this area, perhaps, than some of the other speakers on this panel. Like all scientists, I originally just accepted peer review as a fact of life, a background fact about the universe. And of course, like all scientists, I liked to complain about it, because that's what we do as scientists. All right, December 2020, I started as a reviewing editor at eLife. And this was kind of my dream editorial role, because eLife is a journal that I saw as one that's actually trying to change things. It is a great journal and I'm still very much supportive of it. But despite that, almost immediately, I started to have doubts about that role of editor and reviewer. Specifically, I felt like we were making decisions that were too rushed and not fully enough informed, and that, even knowing that, we didn't have enough time to do it better than we were doing. Around that sort of time, about a year later, I started a new project which was internally called NeuroMatch Journal. NeuroMatch is an organization for computational neuroscience that I started at the start of the pandemic, and we were basically thinking about how we could do publishing better. And as part of thinking about how we could do it better, I eventually came to the opinion that there are unavoidable issues with pre-publication peer review, which is what we do at the moment, that is, peer review before the paper gets published. And so I therefore decided, as Jess said, to resign all my editorial roles and stop doing any pre-publication peer review. I announced that on Twitter; in retrospect, perhaps a mistake. There was a rather angry vocal minority, and you can see some of their comments there, a slightly threatening tone to some of them. But despite there being a small vocal minority of people who were very angry about this, overwhelmingly what I saw was a massive flood of support for this idea. It got a lot of engagement, and a lot of people have privately contacted me to talk about it. It seems that it hit a nerve and that people feel quite strongly about this. Okay, so the last bit of my story, and I'll finish my talk with this as well, which is that we decided not to do NeuroMatch Journal, because we decided that we shouldn't be doing a journal at all. We've switched our efforts to doing something which at the moment is called NeuroMatch Open Publishing. You can go and take a look at it on the website at nmop.io, and I will talk a little bit about it towards the end of this presentation. Okay, so I want this to be positive, but first I have to say a little bit about what I think is wrong before I can say how I think we can do better.
And this is all very much a personal opinion. So, what's wrong with peer review? Well, first of all, I think it fails on its own criteria. In terms of evaluating technical correctness, it doesn't catch all of the errors. And it can't, right? Because there's only a handful of reviewers, and they might not necessarily be very well matched to the problem. And also, if there's an error that's found after publication, then that can't be reflected in pre-publication peer review, hence the necessity for all of these other services like PubPeer and so on. And because we have all of these incentives to get published, and peer review is a time-limited thing, it gives authors an incentive to try and hide the problems as well. So it actually makes it more difficult to find the errors, doing peer review this way.

It also fails in its ability to evaluate significance, which is another supposed function of peer review. I think we really can't evaluate significance. Ultimately, the only thing that determines whether a paper is significant is whether or not it influences things over the decades that follow. We might have certain preferred topics, certain preferred authors, certain preferred institutions, certain preferred methods, and we do a lot of gatekeeping. So a lot of what we're doing when we say we're evaluating significance is just introducing bias.

It also has, I think, a mental health cost, peer review done this way. It's part of a culture of overwork. You hear a lot of people complaining about how they have to do all of their reviewing and editorial work in the evenings and at weekends, and that's because it's not something that is rewarded in the academic system, so they have to do it in their spare time. It also, because there's a rather random element to peer review, creates a lot of career variance. If you get a big paper in Nature, your career is set. If you don't get that big paper in Nature, you might end up in a completely different country, at a completely different university; you might have to leave academia. So that sort of uncertainty is also very problematic and makes science a very challenging field to work in, in terms of mental health.

And of course it's wasteful. There are huge publication delays. When reviewers at a particular high-profile journal make demands, you feel like you have to respond to them even if they don't really make sense. I've had papers that have been in review for years, and a lot of that time was spent responding to review requests that I didn't think needed to be responded to. And I think that's not an unusual state of affairs. And all of those reviews are wasted if the paper is ultimately rejected. And of course, as we all know, there are massive financial costs as well.

Okay, so what's the alternative? Well, I'm gonna say two things about that. I think it starts with just switching from pre-publication peer review to post-publication peer review. And the reason I think this is a good way of doing it is because, where does the thing that really makes science work come from? It doesn't come from the fact that we have two people read the paper before it gets published.
It comes from the fact that there's a whole bunch of people who will see something that they don't agree with and try their best to destroy it. And ultimately, they either manage to show that the theory being proposed was wrong, or they find that actually, yes, the data do support it, and they change their mind. So the real meat of what makes science work happens after the paper is published, not before. And it also comes with a host of other benefits. First of all, there are no delays. This speeds up science, and it's good for people's careers; you get some of that from preprints, of course. It also allows us to focus our efforts on the most impactful papers. A lot of papers get cited very little or not at all, and we put as much effort into reviewing those as into something that may shape the direction of the field for years. Something that is getting hundreds of thousands of citations probably ought to be reviewed a lot more in depth than something that basically never gets cited. So we could better use our resources by doing post-publication peer review. We can also find errors at any time under post-publication peer review. There's a larger pool of potentially better-matched reviewers: those better-matched reviewers are people who read the paper and care about what it says, not just whoever the editor could find who agreed to review it. And I think it also reduces the incentives to hide problems, and puts the author in control, which are also good things.

But I don't think post-publication peer review is enough. I think we have to take it further and ask: what is the goal of doing peer review? Well, there are various things we could point to. We could talk about the fact that peer review gives feedback to the authors, and that it provides, in a way, a certain sort of context to other readers. And I think those are the things that we should be focusing on in a replacement system. By feedback, I mean feedback from the readers to the authors of the original work. That might be finding mistakes, which could be just very small mistakes or could be more fundamental things, or maybe suggesting changes. And one of the things we want to experiment with is: if someone has contributed a lot to a paper by these sorts of suggestions, they could even get added to the paper as an author, which is not something we do under the current peer review model. And there's also providing context. In a way, which journal a paper gets published in is a certain form of context, right? It's saying that this paper is of interest to the readership of this journal. But we can have a much richer form of context by having arbitrary comments attached to articles. Some journals are already starting to show the peer reviews, but this could be an ongoing process. So basically, this allows us to provide additional information to readers, helps us evaluate the work, and it can be critical, as peer review often is, but it could also be positive. For example, you might have someone say, oh, the authors haven't realized that this problem they've solved also solves this problem in this other field, right? And that's really useful context for people reading it, it helps the authors, and it's another positive thing to say. And there can also be other things under that heading, like commentary to make the paper easier to understand, and so forth.
Okay, so that's my view of what I think are the important things, but I don't think that's the only possible view. I think what we want to do is try out a bunch of different approaches to what sort of peer review, what sort of feedback, is useful. And to do that, we have to reduce the cost of experimenting with those things. Because at the moment, starting up a new journal or trying a new approach is hugely expensive, not just in terms of money but in terms of the organizational effort of getting people to take part in it, which is massive. And that brings me on to my final point, because that's basically what we want to do with Neuromatch Open Publishing. This is something, by the way, that will change name fairly soon. It's not quite 100% decided what the new name will be, but it won't have Neuromatch in the title because it's gonna be a separate organization. What we want to do is have a sort of end-to-end publishing system that is commonly owned, so owned and managed, at the start at least, by university libraries. And the idea here is that this basically guarantees it will never be sold for profit and keeps it rooted in the communities it's trying to serve. And everything, of course, should be, I think, free to read and free to publish. And all of the data, I mean the text of the articles and everything, should be open and reusable. And one of the things we really want to build into this is that it should be a sort of infrastructure that enables people to do experiments much more easily than they're able to at the moment. I'm getting there. Okay, I'm basically done, other than to say we're actively seeking funding for this. We'd like to start building it as soon as we can. So if you've got a couple of million burning a hole in your pocket, please do get in touch. Yeah, and I think that's all from me for the moment. Thank you very much.

Thank you. Shout out to James for trying to tag Wylston in the chat. And Wendy, if she's still on: the attendees can't see the Q&A or the chat very well, or they don't have access to the chat. So I've flagged him; I think he's not actually listening. We'll see if we can get it sorted. Otherwise, it looks like Dan is moving questions over to chat so everyone can see them. So sorry about that. I love Dan's point, and I'd actually ask, if anyone wants to chat in the chat: have you thought about stepping down? And do your promotion criteria include who you peer review for, or how much you have peer reviewed? Mine don't at all, so I've had no pushback at all. So our next speaker is Chris Chambers. Chris, I just Googled your title right now, and it's literally Head of Brain Stimulation at Cardiff University, which is the best title I've ever heard in my entire life. I was thinking about how I think of you, and I think of you as the king of actually getting shit done. Chris has basically switched his intellectual and academic focus towards improving research culture and open science. He basically created and popularized registered reports, which I think are the only reason to peer review before publication. But anyway, hopefully Chris will tell us more about preprints and community publishing. Over to you. Thanks, Jess. Right, I'm just sharing my screen. Can you see these slides? Yep, looks great. Super.
Right, so in the next 10 minutes, I'm gonna tell you all about ways we can use preprints and combine them with peer review in a way which takes back a lot of agency and a lot of control of the review process from publishers. We've talked a lot in this session about the damage that profit-making publishers do to academia and to science, and the overriding point I'm going to make today is that in order to dismantle that power structure, the first thing we need to do is secure control of the entire review process and do it ourselves, because that is the hook upon which everything else hangs. Now, I'm gonna do this through the medium of registered reports. This is not really a talk about registered reports, but I'm going to use them as an example because the registered reports article type is one in which we've created this preprint format quite successfully. And in many ways it's good that I'm following Dan, because this is gonna follow very much the same ethos, that we need to completely reinvent the system. My approach here isn't quite as ambitious as Dan's out of the gate; it's more incremental, and hopefully those of you watching will gain something from looking at the differences between the approaches. Now, for those who don't know what a registered report is, I need to explain it quickly. A registered report is a type of article, which we established about 10 years ago, that seeks to eliminate various kinds of bias in the peer review and publishing process. It does that by performing peer review before authors actually do their research. So peer review happens in two stages, starting at a protocol stage, where reviewers assess the quality of a study proposal and the journal performs and manages this peer review process. If review goes well, then the article is accepted in principle, regardless of the outcome. So the idea here is that the results of the research have no effect on the publication decision, and we eliminate reporting bias and publication bias. Now, viewed within the journal landscape, it's fairly successful so far. It's been launched by about 350 different journals, including Nature in the last couple of months, and the impacts are promising. I'm not gonna go into these in detail; suffice it to say it is working to eliminate bias, it is working to improve reproducibility, it is working to engage the early career researcher community, and it is working in terms of ensuring visibility of this type of article. But it has a lot of problems, and those problems can be summarized under five major categories. The first is that the stage one review time, the time it takes authors to go through this pre-study evaluation, can run for several months and can have an uncertain outcome, and this can be difficult to slot into the often very tight timeframes of academic life. As it exists now, registered reports are limited to one journal at a time. Just like with regular articles, you go to one journal after another: if you get rejected, you go to the next, and so on down the sequential chain. It's not well suited to the kind of programmatic research that characterizes so many fields, in which you might have one overall protocol or programmatic plan for a piece of research, which in theory should lead to multiple stage two outputs, or final completed registered reports. But at the moment, the format is limited to a one-to-one model.
There are various inconsistencies in the editorial standards. As the format has gained in prominence and visibility across the sciences, I've noticed that the standard of editing has also become a little bit shaky at times. And this is because many editors are not really well trained in evaluating research before it's been done: doing specific design review, really thinking deeply about issues of theory and methodology rather than getting distracted by the shiny things of results. But the most important limitation for today, the most important limitation of the journal-based registered reports model, is the fact that, just like every other journal article type, the peer review and the publication process is controlled virtually entirely by academic publishers. We do the work, but they get the reward, okay? And most of these publishers, as we've discussed, are commercial, banking huge profits at our expense. So we're basically performing a huge amount of labor and not benefiting from it at all. And that's why, in 2021, we created Peer Community In Registered Reports. Now, some of you may be familiar with the broader Peer Community In project, which is a very large-scale, very impressive program in which a number of different communities across different fields perform peer review at the preprint stage, prior to journal submission, on a free, non-commercial platform. We created a peer community for registered reports specifically, across all fields. And the idea here is to take the regular registered reports review process that you might get at a journal and just do it at an earlier stage, before journal submission, okay? Once authors go through this review process at the preprint stage, the submission is recommended by Peer Community In Registered Reports, and the revised manuscript, the revised preprint, is posted on a preprint server along with the peer reviews and an editorial recommendation, which is a short blurb, a synopsis of the research and why it was awarded in-principle acceptance or stage two acceptance. At the end of this process, authors have the option to take their preprint and just leave it there with its own DOI. It's a piece of peer-reviewed science that's on par with anything else out there. Or they can take it to a traditional journal if they need that; they can go to any journal they want, if they wanna propose it to a journal. But there's also a list of PCI Registered Reports friendly journals that we have on board, which have committed to accepting the recommendations of the PCI Registered Reports review process without further peer review. So these journals have essentially committed to replacing their own internally managed review process with one run by us, by Peer Community In Registered Reports. And when I say us, I really mean us, because it's us, the community, who are doing everything. We do the peer review, regardless of whether it's through a journal or managed by Peer Community In Registered Reports; we do most of the editing. We basically are peer review. So there's absolutely no reason why it needs to be managed by a publisher. Here's a schematic of how the process works. You begin by submitting your registered report to PCI RR as a private or public URL. So it can be a public preprint or an embargoed stage one submission. This goes through the stage one review process, where it's first evaluated at desk.
It can then be peer reviewed and revised, just like you would normally do with any kind of registered report. And then it gets a stage one recommendation, in which case a public or private recommendation is posted on the registered reports website at PCI. Then authors go away and do the research, and when they're finished, they come back with an updated preprint, which is now the stage two submission. Okay, and this gets reassessed by the same recommender and reviewers and then recommended at the end. It's now a valid, citable article with its own DOI. It's just been peer reviewed before any journal ever touched it. And then, as I say, authors have the option to submit to a PCI Registered Reports friendly journal, where it will be accepted without further peer review. Here's an example of some of the journals which are PCI Registered Reports friendly in my field. They cut across quite a broad range of psychology and neuroscience journals at the moment, and there are more joining all the time. And there are also these PCI Registered Reports interested journals over here, which don't automatically endorse the recommendations of the Peer Community In Registered Reports initiative, but they do keep a close eye on submissions and they often make offers to authors. So it gives authors all of this control to decide the fate of their article, simply by us, as a community, taking control of that review process ourselves and doing it through a preprint process. There are some other features that we've been able to build in. And I think one of the aspects we haven't really talked about in this session so far is the extent to which the dominance of academic publishers puts the brakes on innovation. Publishers manage peer review using clunky 1990s software, which makes a huge amount of money for them but is not very good, not very flexible, not very dynamic, and very expensive. By creating the PCI Registered Reports initiative, we can take the opportunity to build in additional innovations, such as programmatic registered reports, where one stage one preprint can lead to multiple stage two outputs. So you can have a program of work which then forks out to become multiple registered reports, all with one review process. And perhaps the most important innovation, the one that's working very nicely and is very popular, is scheduled review, where we eliminate that stage one review time almost entirely by performing peer review in a planned manner. Authors initially submit a stage one snapshot, before they've even started writing their manuscript, and then the recommender lines up the peer review process for the future, in about six to eight weeks' time. And when you do that, you actually perform a lot of the key aspects of peer review in parallel, and the review process can be done very quickly at the point the manuscript is submitted. I don't wanna use up all my time, so I'm gonna skip this bit, but the slides are publicly available. It gives you a worked example of how you can use this platform, if you combine the scheduled track and the programmatic track, to get an entire PhD peer reviewed at the outset and all your papers accepted before you collect your first data point. But I'll leave that as a temptation to look at later in the slides, and there's more information here about PCI. There are lots of submissions coming in, and it's a very interdisciplinary initiative.
So we welcome submissions, we welcome adopting journals, but most of all, I hope this gets us all thinking about the fact that we don't need to rely, and we shouldn't be relying, on commercial academic publishers for managing peer review, in whatever form it takes. Thank you.

Thank you, Chris, bang on time. I think this is the answer. I think registered reports, peer review before you start your damn study, is the natural, right thing to do for science. I do think it's hard to pitch. It's hard to get the rewards for working to this better standard; it's hard to get through to the powers that be. Their brains just short-circuit when you give them a four-word title. But we have one more speaker: it's Lizzie Gadd, who couldn't be here due to a scheduling conflict. She might make the Q&A. I'm conscious that, despite being a savage about trying to keep time, we're gonna hit our 90 minutes at the end of Lizzie's pre-recorded talk, which I have here, 10 minutes bang on. So if the panelists wouldn't mind staying an extra five or 10 minutes to take a few questions, that would be great. I understand if some of the participants have to leave, but I will share Lizzie's talk. It's a segue from Chris's. So Lizzie is, hang on one sec, research policy manager at Loughborough University, coming from a career in academic libraries. So she's different from the rest of the panelists; she's been pushing for things like open access policies for her whole career. I know she did a secondment at Glasgow as the head of research culture. And her work has been in trying to change how we get assessed for our research. I can highly recommend the Harnessing the Metric Tide report. It makes some pretty bold recommendations. This was a report funded by UKRI, them who fund all of us, about how they should change how they assess us for the REF. So I'll pop a link in there. I am going to try to share Lizzie's recording here. Can one of the panelists, I'm going to start it right now, tell me that they can hear her talking? I can see the slides. Yeah, I can't hear. I can't hear. Okay, so I am sharing. I'm going to see if this helps. Almost. Yes, all good now. I've lost the audio again. I don't know if it's coming through for anyone else. Nope, nothing here. Can you hear it? Can you use the raised hands button? Okay, I don't know. So I'm just playing it on my laptop; are you all picking it up through the computer mic? It's not being broadcast. I was picking it up earlier, but I can't hear it anymore. I'm going to try one more second. If this doesn't work, I'm going to post a link to the Metric Tide and we'll open up the Q&A. Okay, my apologies. Lizzie's talk was the talk I was most excited for. She's doing God's work; she's trying to change the way the REF grades our research. It's a disappointment. So this talk is being recorded, and I'm also going to figure out how to circulate Lizzie's pre-recorded talk with it and post a link to the Metric Tide. Basically, all the great ideas in the world about how to change where we post our preprints and our peer review don't matter if the powers that be don't reward us for working that way. So my latest thing is, I'm not joking when I say I think we should try to plant students in low-level jobs at UKRI. It's time we infiltrate the funders, and that's our political activism as well. You're like: my friend, no research career for you, off to the funder. You have to help us get these things rewarded.
So Harnessing the Metric Tide does not recommend planting spies at UKRI, but I think there's nothing wrong with thinking along those lines: how are we influencing the people who hold the purse strings? Okay, let me gather my wits and look at the Q&A. Jess, there's a comment that you might need to share the audio from Zoom. Share the audio from Zoom. I think if you go to share screen and you go to advanced, I'm not sure if that's exactly how to do it, but I was just poking around and I could see that there's a computer audio option. I don't know if that might work in solving your problem. All I can see is who can share, and who can start sharing when someone else is sharing. When you hit the share screen button. Yes. For me, it's share sound, as a box you can tick. So it might be that. I'm doing, oh. Share screen and then share audio. That'll do it. Okay, how many PhDs does it take? Okay, I'm gonna try again. Share sound. Thank you, Chris and Dan.

Hello, my name is Lizzie Gadd. I work at Loughborough University in the UK. I also chair the International Network of Research Management Societies (INORMS) Research Evaluation Group, and I'm a vice chair of the Coalition for Advancing Research Assessment. And I've taken as my title today: can research assessment reform fix journals? Okay, so what's the problem with journals? Well, as I'm the last speaker of six, I'm hoping the problem will be very clear to you by now. But essentially our problem really boils down to the fact that academic career assessment is so publication-centric. Here are the results of a survey done by the European University Association back in 2019, which asked researchers which of their activities were the most important. And you can see, surprise, surprise, the runaway winner was research publications, with 80% saying this was very important to their careers. And of course, it's not just any type of publication that matters to careers, but journal articles. This data is a bit old, but it shows the increase in journal article submissions, the red bars, to three successive iterations of the UK research assessment exercise, where journal output increased exercise after exercise, even in the humanities. And of course, when we say researchers are assessed by journal articles, we really mean the journals in which they are published and not the articles themselves. This forms a negative feedback loop where journal brand obsession leads to researchers seeking to put their best work in a very small number of journals, which, because they contain everyone's best work, then get very highly cited, leading all parties to believe that these are somehow inherently better journals. But of course, a journal is only as good as the work that goes into it and the editorial board it attracts. Meanwhile, the journal takes credit for being such a highly cited journal, and article processing charges for those journals go through the roof, largely in line with their journal impact factor, as data from Heather Morrison shows, where highly cited journals have higher APCs than those that are less well cited. So how might assessment reform help? Well, it'll come as no surprise that there have been a significant number of calls in recent years for research assessment reform. We had DORA, the Declaration on Research Assessment, in 2012, essentially a backlash against the use of the journal impact factor to assess individual researchers or articles.
Then we had the Leiden Manifesto in 2015, which is 10 principles for the responsible use of bibliometrics across a range of evaluative settings. We had the Metric Tide report a couple of months later, which had five principles for the use of all research metrics, which were updated with the publication of the Harnessing the Metric Tide report last year. In 2019, we gained the Hong Kong Principles for researcher assessment, which are based around research integrity. And in 2022, the European University Association and Science Europe established a coalition to develop the Agreement on Reforming Research Assessment. So how might these reforms help us with scholarly communication reform? Well, lots of ways, but I've only got 10 minutes, so I'm gonna focus on three. The first is to require the community to value a broader range of things, not just publications. All well and good. But the challenge with measuring what matters, of course, is actually measuring what matters, given that we don't have a lot of alternatives to publication data right now for assessing those broader contributions. That's why one of the Harnessing the Metric Tide recommendations was to undertake a community-led piece of work to identify the things we do actually care about, with a view to providing alternatives. But of course, once we agree on alternatives, we must give them equal or greater weight than the legacy indicators. There's no point measuring what matters if we're still heavily weighting the things that don't. One of the specific things research assessment reforms say we need to think about more broadly is our outputs themselves. We need to broaden our perspective as to which outputs count, so practice-based outputs, software, protocols, et cetera, and what are the dimensions of them that count? Do they adhere to standards? Was data made available, et cetera? This makes absolute sense, but we do have to be careful that this doesn't drive us back to the journal literature as a source of metadata for the things that we do care about, so that we're still tied to journals, but in a different way. So this piece in the Scholarly Kitchen argues that the journal article should be the fundamental unit of data sharing. But of course, that wouldn't allow data to become a standalone unit of scholarship, and it would embed journal articles as the accounting unit of scholarship forever. We're already starting to see journal metadata used to develop open science indicators, again, something we care about. And of course, with more journals taking up the Contributor Roles Taxonomy (CRediT) to surface those broader contributions, again, things we care about, it's not unreasonable to predict this being packaged up and sold back to us in a future SciVal module. So yes to valuing a broader range of outputs and output qualities, but beware that this doesn't send us back to journals as the source of data about the broader things we care about. And finally, the third key message of research assessment reform is to value the content, not the container. And I think this is the message that gives us the greatest hope for scholarly communication reform, as I'll explain in a moment. But it's not just the content of the output that I think we need to value, but the content of the peer review. And I think if we start to see peer review as content, and make it as visible and valued as the output content, this will take us a long way.
So I'm gonna leave you with three suggested paradigm shifts in research assessment that I think will really help us to fix the scholarly record. The first is a shift from summative to formative output assessment. We need to shift from scholarly communication that is unidirectional, scholarship-for-glory fanfaring, to scholarly conversation, where the purpose of publication is to enter into scholarly dialogue with peers about the research itself. Because if publication became about the peer review, and there was no glory in getting published, only in the feedback that resulted, we'd fix so many of the problems that we've heard about today. There'd be no point to paper mills, because there'd be no economic basis for them if there were no reward for publishing other than a peer review report. And guest, ghost, or gift authorship would become a thing of the past, because if researchers only published to communicate with their peers and get feedback, rather than for glory and the concomitant financial rewards, gift authorship suddenly doesn't feel so much like a gift. The second paradigm shift that would change the world of scholarly communication overnight is taking EDI seriously. I mean, really seriously. Because I hate to say it, but publishing in venues that are only open to wealthy scholars, which will largely be in the global north, is turning a blind eye to structural racism. And yes, that includes those journals that make provision for poor scholars to beg for a waiver. Publishing in venues that are not representative of the scholars working in those fields is equally problematic. Behold, if you will, the names of the editors of the top 49 economics journals from the Australian Business Deans Council list; I think it speaks for itself. The databases that we use to define our scholarly record are also hugely inequitable. So this data shows the percentage of journals indexed in Scopus that are from the global north, 81%, relative to the global south, 18.4%. How fair is that? And that's before we get onto the use of this publication data in all forms of assessment, which just privileges the privileged, both at the level of the individual scholar, as with this, but also at national level, as this data is sucked up by the university rankings and used as a lazy shortcut to identify institutional quality; in this case here, to identify who qualifies for high potential individual visas. And I believe that if we took our EDI, our equity, diversity and inclusion policies, to their natural extension and said: actually, because we care about equity, we can no longer in all good faith publish in journals with APCs, no longer publish in journals that all have white middle-aged men as editors-in-chief, no longer use Scopus or base our publication assessments on SciVal, or engage with the university rankings, that would really be world-changing. And my final paradigm shift in research assessment that would change the world of scholarly communication forever would be simply global agreement as to how we're going to do it better. Because research is global and scholars are mobile, and effective research assessment reform therefore also needs global buy-in. Or, as our Dutch colleagues have already found, efforts to do better are going to meet with considerable resistance. Because the truth is that the initiatives that have gained the most traction in the responsible research assessment space and the scholcomm space vary in global take-up.
And whilst they might be global in name, they're largely led by the global north. And whilst the intentions are all good, we have to be sensitive to this and, I think, learn better ways of more equitable global engagement if we're really going to reach agreement about a better way of doing and rewarding scholarship. My time is up. Thank you very much for listening. Do find me on email or Twitter if you want to chat about these things. Thank you.

So that brings us basically a little over time. If everyone's okay, I thought I'd take questions until 1:45, from the audience and from myself. My first thought is that I think we've come so far. I've learned so much. I think, as a group, progress has been made fairly quickly in the last maybe five-ish years. But I see a demise of science Twitter from what I'm watching, and I don't see things picking up as strongly on Mastodon, even though that's a nice place to be and I encourage everyone to share their handles. How do we stay in touch? How do we, people immersed in this sort of thinking, trying very hard to make changes locally, share news about policy changes, share news about job opportunities, share news about new progress? I do feel like discovery will be the new problem, just as we all start our own journals and our own metadata scraping tools. What do panel members think about that? Social networks are incredibly annoying. They only really start to work once they achieve a sort of critical mass, and watching one essentially getting ruined from the top down is intensely frustrating, because things like this invariably get lost. I am honestly waiting for something else within this ecosystem that has the same kind of traction as some of the initial offerings and the really big networks. And if I had to hazard a guess, there's been a lot of talk of interoperable social networks in the last year or so, especially given that the existing ones are all suffering from some pretty interesting business and uptake problems in different ways. I think the short and unfortunate answer is that we have to wait until that landscape changes. There are communities everywhere that haven't been destroyed so much as scattered. A social network really works when it has network effects: when people who have no inherent interest in the thing itself are participating because they have to. There will be something else that replaces the present ones as they destroy their own network effects, and it's gonna be dependent on the technology. It's very difficult to build one from scratch. I'm absolutely certain someone somewhere has had the idea: let's build a global network of scientists, come on, it'll be fun. It rarely works. Just as a business model in general, so many competing products have been tried. You have to wait for social contagion to happen and then catch it, unfortunately. That's my rather bleak answer. Chris, how are you keeping track of people? How are you pushing at editors, pushing at provosts and stuff? Was that to me? Yeah. So I think there are a number of levers here. It's great to try new things, but one of the lessons I've learned is that it's much easier to combine our strengths.
So for example, when we had this idea of registered reports 10 years ago, it was thought that perhaps we could do it in some way separate from journals, right from the outset. And I thought that was pointless: a great idea, but it'll never work, no one will use it. So we used the infrastructure of publishers to build it, to give it a reputation, and to make sure it was mainstream. And then we took it away. But we didn't just start from scratch. We joined the Peer Community In initiative, which had already been going for five years and had 14 communities, and we just became another one. So we gained from their prior work as well. A lot of the time these sorts of reforms come and go. They spark and the sparks go out, because there's too much reinventing of the wheel. It's like that classic xkcd cartoon, where there are 14 competing standards and someone comes up with a new one which integrates them, and now there are 15 competing standards. We really need to try new things, yes. But then we need to know at what point we can compromise and join things up to make them stronger. Because that's what the publishers have done, very successfully. That's why Elsevier has 2,000 journals: because they join things up. They're constantly thinking about this. So I think there needs to be some of that strategic thinking. And I think you don't necessarily need social media for any of that. What you need is strategy. You need to know the key people who are involved in the key initiatives and get them on board. Social media is very useful for spreading the word, but I don't think it's essential. I do feel like academics, ooh, we love to prove things from first principles. Let me prove, using a model, that if we selected on the unit of teams instead of the unit of individuals, you'd have a more robust record, and let me publish that paper in a computing science journal. I get that, right? It's very tempting. But I think we really fall down on political strategy, and that's the part where we could use education and admit we're not great at it. So your point about standing on the backs of the publishers to use their platforms is just very sensible, right? To me, that seems really right. It's the only way. I think, yeah, being pragmatic. I remember one of the very first talks I ever gave on registered reports, somebody stood up and said that by doing this with an Elsevier journal, I'd betrayed the scientific community. And, fair enough, I took it on the chin, but that's the price you have to pay. If you wanna get things done, you have to accept that you're going to be morally imperfect, and then you have to use that as a stepping stone to something better. This is pragmatism, and there's not enough of it. Academics love to nitpick. You give them an initiative or an improvement and they'll find everything wrong with it, or everything that could possibly go wrong with it, and they'll use that as an excuse to go back to what they were doing before, whilst ignoring all of the problems with what they've already got, the status quo. This is how the academic mind works. You have to shake free from that to some extent, I think, to get anything done. But I'm optimistic that we're doing that. I really think we are, that there's a huge push now, and it's coming from all different sectors within the academic community. Things are changing.
Yeah, I heard someone a few years ago, and Chris would have laughed like a man with a flip-top head at this one, refer to the overnight success of registered reports. I think they were about seven years in at that particular point in time. As someone now attempting to build companies from scratch: there is a bad duration mismatch between what people think is possible, either with an MVP or some idea they've come up with by themselves, or what they think will happen with an initiative, and the time horizon actually necessary to be able to do something about this. This is such a complicated and settled system that there is no idea so good, and not already thought of, that if we click our fingers, all of a sudden everything will be different. Progress looks like what Chris does, if you're talking about it in a formal sense. And it looks like what Chris does because registered reports had their 10-year anniversary a little while back, and probably for the first three or four years it was only weirdos like me who were genuinely interested in what the model actually offered. The ability to outlast the processes that will prevent you from making something like that normal is part of the key. I've said this before in talks, and I don't want to monopolize the conversation here, but I like to repeat this everywhere I go: there's no more dangerous word in the discussion of everything to do with academic infrastructure and the environment that we've collectively created than the word should. We should, this should happen. Because first of all, it's not reckoning with the practicalities of what has to happen. Okay, well, let's see a business plan, right? Let's see a strategy. There's a key word. But it's also a temptation, as was said just before, to find all the things that might conceivably not work with it before it's actually happened. I mean, if you could cancel global publishing with a cranky tweet, we probably would have canceled it by now. You can't click your fingers and wish multi-billion-dollar, publicly traded companies out of existence. It's like trying to cancel ExxonMobil or cancel Hyundai. It's not going to work like that. You have to understand the time horizon of the business model, and then an astonishing amount of patience and hard work is required. I know that's a really annoying answer, but it's the only one I've got.

Dan, I'll come to you in one second, but I wanna give a shout out to the low-key middle-aged women working in libraries and R&I departments at universities, trying to get funders moving, including the sub-funders of UKRI. They're real quiet, but they're radical, right? If you wanna change Pure so that your university tracks which publications are registered reports, and get them to consider that, and what gets printed out when it goes to the REF pre-screening at your university, you should do that. You should talk to the middle-aged lady at the library who's actually in charge of the interface of Pure. So I think we need some bombastic leaders at the head, challenging people aggressively, and then we just need a whole bunch of quiet doers reshaping the way our GUIs look when we input research outputs for REF assessment. Anyway, sorry, I cut you off, Dan. Yeah, so I completely agree with Chris and James that you have to work with the system as it is if you want to make change.
And I think that a lot of the reform efforts that have happened in the past have failed because they didn't take that into account enough. They were like: if only everyone were to do X, where X is what they were planning to do, then the problem would be solved. But it's just not realistic. From our point of view, what we're planning to do is very much to work side by side, alongside the existing system. However, having said that, there is a danger in allowing the commercial publishers to be too strongly involved in all of this, in that they're very, very good at co-opting reform efforts. And we've seen that with open access, right? So we now have lots of open access, but we're now paying APCs, and those are incredibly expensive, so only people from rich institutions can afford to pay them. And ultimately the publishers' profits have gone up as a result of this switch to open access. So I don't know if we can outwit them on that front, but I feel like we're going to have to. So, Wellcome Trust has a platform for publishing their results, you know, independent of journals, potentially prestigious, with nice brand recognition there. That seems to have been a bit quiet. Is there much movement for more funders to do things like that? I don't think many know about it. The European Research Council has got something similar, but in each case it's for the research funded by them. I mean, I've published a lot in Wellcome Open Research and think it's great, but I think they're not terribly well known outside the UK. And the ERC one, you know, they need more people submitting to it. It's a different model. I mean, it's like anything, it just proceeds slowly. But I think the funders have a vested interest, one would hope, in ensuring that the work they fund is available to people without them paying huge sums of money for it. One thing I've learned working with funders: they're very scared of academics. They are. I was surprised at this actually, because I always think of funders as being at the top of the food chain. And you get a lot of comments from people when they reflect on these topics: oh, just change the way the funders work. But funders feel like they have to do certain things because the scientific community instructs them to. And funders are worried that scientists will object very loudly if they start making rules like: we are no longer paying APCs at all; you can put your manuscript on a preprint server and update it after each round of review and you get green open access, and whatever you do after that, we don't care, we're not paying a cent. There'd be absolute pandemonium, they think, if they did that. They're extremely conservative, and for that reason, they're often the last to act. And so I really agree with your point earlier, Jess, where you said let's plant some spies and assassins within these funding organizations, because that's what we really need to change those things. And we need some funders with a little bit of vision. And it's difficult to find that within the ranks of UKRI, unfortunately. So I was thinking about how we keep in touch. I could talk all day; we need dinner and drinks to select spies and get them planted. Thinking about next steps forward, I've really enjoyed the UK Reproducibility Network.
If there are people who consider themselves junior, or not very educated on the topic, or who are interested in becoming a spy and getting planted at UKRI, I can highly recommend joining the UK Reproducibility Network. Your university might have a rep, and it might be a central place where we can try to get together. There isn't much of a metascience community, because we don't get paid to do metascience, but it's something maybe worth thinking about going forward. Maybe, James, are you there? The Research on Research Institute might be a place as well for us to stay central and keep chatting. Big thumbs up. So thank you everyone. Thanks for your indulgence in letting us run 21 minutes over. It's been absolutely delightful, and I'm glad to see everyone. And I hope we can keep chatting and keep making the good changes here.