So, this is the second webinar organized by the Metascience 2023 conference, co-organized by AIMOS, the Association for Interdisciplinary Meta-Research and Open Science. Session two is about reducing research waste across science, and we will have two moderators, Antica and Marija. I am Malgorzata Lagisz from the University of New South Wales, Australia, and currently I'm in Okinawa in Japan. So, hello from Japan. It's afternoon here; it could be morning or evening where you are. Now I will hand over to Antica and Marija, who will be the moderators of this session. And one more thing, sorry, just one more slide: AIMOS is organizing a conference at the end of this year, in November, in Australia, and you are all welcome to join us in person. There will be more details on our communication channels and website. Thank you. Thank you, Losia. So, well, we start now with our symposium. Thank you for joining us. I hope that you will learn a lot and also start to think about certain topics that some of us definitely find very important. My name is Antica Culina, and I'm a senior researcher at the Ruđer Bošković Institute in Croatia. Among the other organizers are Marija Purgar, who is here — she's my PhD student — and Dr. Tin Klanjscek, who is also a senior researcher at my institute. We also have Professor Shinichi Nakagawa in the audience; he is head of the School of Biological, Earth and Environmental Sciences at the University of New South Wales in Sydney. All of us are ecologists or evolutionary biologists by training. And we also have Professor Paul Glasziou, who is from the medical field. He's a professor of evidence-based medicine at Bond University and director within the Faculty of Health Sciences and Medicine. We all got somehow connected with this topic of research waste, and you will see why.
So, we will start the seminar with two short presentations, because we think it's very important for the audience to understand what we mean when we say research waste. One presentation will be by Paul and one by Marija, because so far we only have two estimates of research waste in science. These will be followed by a five-minute Q&A session on the presentations. So, if you have any questions on the presentations, please put them in the Q&A box, and I will read them after the two presentations. After that, we will have a panel discussion: 40 minutes of preset questions for the panel, and then we will open the Q&A session so you can all chip in. And with that, there's no time to waste. Let's have Paul. Paul, can you please tell us more about research waste in medicine? Certainly. I'll just try and share my screen, and just let me know that that's actually coming up. It should be appearing now. It is there. And sorry, I've just got to get the... there it goes. Okay, sorry, it jumped right to the end, so you got to see the whole presentation. So, I'm going to talk briefly about the waste in medical research, just to give you an orientation. And first of all, to declare my interests: as Antica said, I'm at Bond University running the Institute for Evidence-Based Healthcare, but I also run one of the EQUATOR centres. EQUATOR stands for Enhancing the QUAlity and Transparency Of health Research, which began with the CONSORT guidelines. People are probably more familiar with the PRISMA guidelines, but the EQUATOR centres around the world house, promote, and develop reporting guidelines for making research more transparent. The EQUATOR centres, interestingly, were founded by a statistician in Oxford, Doug Altman, who sadly passed away a couple of years ago.
But he wrote a very influential editorial in 1994 in the British Medical Journal, where he said: huge sums of money are spent annually on research that is seriously flawed through the use of inappropriate designs, unrepresentative samples, small samples, incorrect methods of analysis, and faulty interpretation. That was, as I said, very influential; people saw it as a problem, but he never quantified the huge sums of money. Several years later, Ian Chalmers and I — Ian was the founder of the Cochrane Collaboration, which some of you will have heard of — were discussing various aspects of this problem of the poor quality of research and the various elements of waste, and we managed to calculate that 85% of research is avoidably wasted. There are unavoidable failures in research, going down dead ends, et cetera; that's unavoidable waste. We calculated that 85% was avoidably wasted, and for medical research that would be over $100 billion per year in medical research funding. Related to this was some work by Begley and Ellis. Glenn Begley was working at Amgen and, for commercial reasons, tried to replicate 53 studies that had appeared in Science, Nature, or Cell, in order to decide whether to continue those lines of work — he wanted at least one replication before he would invest in a line commercially. They could only replicate six of those studies, and therefore could only go ahead along those lines. As I said, he was doing that for commercial reasons, but it was a major step in the recognition of the replication crisis, because of the implications for these top-tier journals. And in 2014, building on the 2009 article that Ian Chalmers and I wrote, we developed a five-part series in the Lancet that documented this in much more detail, called Increasing Value, Reducing Waste in research. I'll show you some of that in a moment.
But the upshot of that was the founding of the Ensuring Value in Research (EViR) Funders' Forum, which now has over 40 research funders in health and medical research trying to work out what to do to address the problem. And I mention that because they've been working on it for quite some time; they've made progress, but it's not an easy task. It's not something we'll solve with one simple solution. So how did we get the 85% waste? Well, it was made up of the five things you can see down the bottom: whether the questions asked were relevant, whether there were appropriate designs, whether the research was conducted efficiently, whether anything was ever published about it, and whether that publication was usable and unbiased. For the red-boxed ones — stages two, four, and five — we could quantify the losses: avoidable design flaws 50%, non-publication 50%, and unusable reports 50%. If you multiply those together, that adds up to 87.5% waste, which we rounded down to 85%, and it came to a total of over $140 billion per year. There's a blog where we wrote up the detailed calculation and where we got the estimates for each of those stages. I'm just going to briefly talk about two of them; I don't have time to go through all of the details. Let me focus on the simplest one, which is that 50% of research is not published. Medicine has good data on this, because trial registration has been going on for decades now and is mandatory in many places. That gives us a cohort of completed studies that we can follow up to see if they've been published. One analysis by Joseph Ross, which looked at publication seven years post-study, found an average of 46% published by seven years after completion, and it was pretty equal across countries, sample sizes, phases, and funders. People focus on the bias component of that.
There's publication bias, but there's also just simple waste, even if it were an unbiased sample. Those unreported studies are invisible if they're unregistered. They're also not replicable, because we've got no protocol. We can't add them to any synthesis, because we've got no results. And if you wanted open data, you don't even know the study exists, because it's invisible, so there's no data available. All of those things contribute to the waste in research. The second issue I'll illustrate is that new research should build on previous research. I'll give one example, concerning calcium channel blockers in acute stroke. This was a trial done by Janneke Horn in the Netherlands, where she found no effect in her randomized trial of a drug called nimodipine, which is one of these calcium channel blockers. She then did a systematic review, shown in this forest plot on the left, which found 28 similar trials; when combined, they found no effect. So she wondered: why did all these studies in humans get done? There must have been good animal research. And in fact, there wasn't. She found 20 animal studies which, when she combined them, didn't show convincing evidence to substantiate the decision to do the 28 human trials. Of course, this is all in the wrong order: we should have been accumulating the animal studies first and then deciding whether we wanted to do any of those human studies. So this failure to know what we know, to synthesize the previous work, is also a waste in research. We're very interested in that, working on automation tools to try and speed up that process and make it feasible.
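Earlier, Paul derived the headline 85% figure by compounding three 50% stage losses (design flaws, non-publication, unusable reports). As a quick sanity check of that arithmetic, here is a minimal sketch of the compounding — my own reconstruction for illustration, not the authors' actual calculation, which also includes cost estimates:

```python
# Compounding of three independent ~50% loss stages, as in the
# Chalmers & Glasziou waste estimate: only research surviving all
# three stages is counted as usable.
stage_survival = [0.5, 0.5, 0.5]  # design ok, published, usably reported

usable = 1.0
for frac in stage_survival:
    usable *= frac

waste = 1 - usable
print(f"usable: {usable:.3f}, waste: {waste:.1%}")
# waste: 87.5%, which the authors round down to the headline 85%
```

Note the stages multiply rather than add: a study with a sound design is still wasted if it is never published, so losses compound down the pipeline.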
And just to summarize — the reason we're interested in all of those steps is that there's a lot of focus on open data, but we would see the priorities for open science as including prospective registration of all studies, and summary results: even if you never publish, the summary results should be available on the registry, along with the protocol and any other study materials. It would be nice as well to have the open data, but without those first four things, the research is not visible and it's not replicable. So I'm going to end there, except to say that this waste can also be viewed as an opportunity: lots of these things are readily fixable by individual researchers and institutes, and fixing some of them would be a huge gain. I will finish there. Thank you very much — if I can work out how to stop sharing. Excellent, Paul. This was a really nice overview with some ideas on how to tackle research waste. I think medicine has gone much further than other fields in recognizing this issue and also in implementing some of the solutions, so many other fields can learn a lot from medicine and health research. And now I will invite Marija to tell us about research waste in ecology. Hi, everyone. Can you see my presentation well? Yes. Is everything okay? Okay. Hello, everyone. My name is Marija. I'm a research assistant at the Ruđer Bošković Institute, and as Antica mentioned, I'm going to talk about research waste in ecology. This all wouldn't be possible without my supervisors, Dr. Tin and Dr. Antica, so a big shout-out to them. You are probably wondering how we even started to tackle research waste in ecology. Antica had read Paul's paper and wondered: how much waste do we have in ecology? Is it similar, bigger, or maybe smaller than research waste in medicine? So, for ecology, we estimated the research waste that accumulates over the classical research cycle.
So we have study planning, data collection, the data analysis stage, result reporting, and finally publication. We considered that research waste generated in the data collection and data analysis stages is actually a matter of study planning. For example, if you plan your data collection before the study starts, then the data collection stage should suffer no issues — well, unless something really unexpected happens. If you plan your data analysis well, you should use appropriate statistical methods for your collected data, and the data analysis stage should also be without problems. Thus, we considered the data collection and data analysis stages to be part of a study planning stage that generates waste. So we are left with study planning, reporting, and publication as the stages of the research cycle for which we estimated the research waste in ecology. So how big is the waste in ecology? I'm not going to bother you with the details of our methodology, but what I'm going to show you here is based on more than 2,000 studies in ecology. We found that 45% of research in ecology is unpublished. The main reason studies do not pass peer review is their low quality, but we also have studies of sufficient quality that do not get published because of publication bias — in short, they don't have what's considered to be interesting results. Of the studies that are published, 67% are poorly planned; for example, they don't have a control group, or incorrect analysis is applied. And finally, 41% of ecological research is underreported. This means, for example, that a study did not report its sample size or did not report the value of the effect size it measured. Overall, in the best-case scenario, where all the well-planned work is also well reported, the research waste is 82%. In the worst-case scenario, where partial reporting also happens in well-planned studies, the research waste is up to 89%.
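As a rough cross-check of those best- and worst-case figures, the same compounding logic reproduces them from the three stage percentages quoted in the talk. This is my own back-of-the-envelope reconstruction, assuming the stages combine multiplicatively, not the paper's actual methodology:

```python
# Back-of-envelope reconstruction of the ecology waste estimates.
# Survival fractions are taken from the stage percentages in the talk.
published = 1 - 0.45      # 45% of studies never published
well_planned = 1 - 0.67   # 67% of published studies poorly planned
well_reported = 1 - 0.41  # 41% of published studies underreported

# Best case: all well-planned work is assumed to also be well reported,
# so only the planning and publication losses compound.
best_usable = published * well_planned
# Worst case: underreporting also hits the well-planned studies.
worst_usable = published * well_planned * well_reported

print(f"best-case waste:  {1 - best_usable:.0%}")   # ~82%
print(f"worst-case waste: {1 - worst_usable:.0%}")  # ~89%
```

The two scenarios differ only in whether the reporting loss is applied on top of the planning and publication losses, which is why they bracket the estimate.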
You'll probably notice that our research waste estimates are pretty close to the one from medicine, which is 85%. If you want to learn more about our methodology, we'll post a free-to-read link to our paper in the chat. We actually believe — although we cannot prove this, because these are the only published estimates of research waste — that the social system of science, which consists of how we conduct, publish, and fund research, and also how we evaluate and incentivize research, is the same across every field. So we have a system where the researcher is torn between the current incentives, where they basically need to publish as much as they can, on one side, and, on the other side, the preferred way of doing research, which leads to robust knowledge. So far we have talked about the classical research cycle, but we all know that research is much more than that. There are other factors that contribute to research waste: for example, unpublished data, unpublished analytical code, unpublished methods, and the accessibility of publications. There are many more, but I don't have time to tackle them right now. So finally, research waste is huge, but what's even larger are the missed opportunities. Think about the potential impact that any single unpublished result, data set, or method — as well as the other research waste components — would have on knowledge development or applied solutions, today or in the future. That's why we're here today: to try to change this. Thank you. Excellent. Thank you for your presentation, Marija. Now we will have a really short Q&A on these presentations, and then we start with the panel discussion. So, are there any questions? Please type them in the Q&A section of Zoom, because I don't see any at the moment. Oh, there is one. We have a question from Neil Jacobs: what evidence is there of the success, or not, of interventions to reduce research waste? Paul, since there are more interventions in medicine, maybe you can answer that question. Okay.
So, things that have been successful in medicine: I think the reporting guidelines in general have slowly improved reporting. The so-called CONSORT flow chart of patient flow in trials is now mandatory for many journals, so the intervention, in a sense, was the journals routinely asking for that. There have been trials of interventions to try and increase compliance with those reporting guidelines, and that works a little bit. But importantly, the journals actually have to do the work of checking the reporting — they can't just ask for it, they need to check it as well — and if they do that, it improves the quality of reporting. In terms of the number of trials published, I can think of two interventions. One is that the most successful group is the National Institute for Health Research (NIHR) in the UK: instead of a 50% publication rate, they have a 98% publication rate. How do they do that? It's actually a number of interventions, but a key one is that they withhold the last 10% of funding until you have actually published — not just put in a final report, but published the main findings of the trial. And they have a journal you can publish those in, so you don't face many of the usual barriers: as long as the reporting is adequate, they will publish the results of that trial. In the US, they have made it mandatory for government-funded trials to be reported at least in the trials registry, ClinicalTrials.gov. Interestingly, it hasn't led to much more publication in terms of papers, but about 10% more studies now actually have their results reported on ClinicalTrials.gov. So those are three examples of interventions. None of them are magic bullets, and that's why I said earlier that this isn't an easy problem to solve. There are many components to it, and work is needed on each of those elements, but it can be fixed, as those examples illustrate. Thank you, Paul.
Now we have two quite similar questions, from Adrian Barnett and Sarah Jones. Please keep your answers really short, because we will also tackle this partly in the panel session. The question is: what would you like to see from policymakers, funders, journals, and similar actors to reduce research waste — in other words, how do we fix the problem? So if you can just mention a few options here, and then we will discuss them further in the panel. Maybe for ecology, we could have the ability to do registered reports or pre-registered studies; in medicine, I think this is more developed than in our field of ecology. Maybe you can explain what pre-registration is, because some of the audience may not know what that means. It basically means that you put your planned study up online on some platform, and other people can comment on it and peer-review it before you even conduct your study, so you can change the way you do your analysis, or the way you collect your data, or any other part of the research cycle I mentioned, prior to your actual research. And some funders, for example, require this. In medicine, I think many funders and publishers require a study to be pre-registered, so the analysis plan — everything — is already settled before the study starts, and then they will fund it, or publish it later on, but not if it's not registered. Yes, though that's only true for clinical trials, and for some systematic reviews now as well; the vast majority, particularly of pre-clinical research, doesn't have registration. And one more question we will tackle now is for Paul: could you elaborate on the automation tools you mentioned as potential solutions? Okay, so we do quite a few systematic reviews. The principle, as you saw from the example I gave, is to always do a systematic review before new primary research. But that often takes 12 to 24 months to do.
So for the last seven years now, we've been working on computer-assisted tools for the 20 or so stages of doing a systematic review. We've managed to automate many of those stages, particularly the searching functions — forward and backward citation searching, deduplication, et cetera. But also, right at the beginning, we've got a thing called the Methods Wizard, which helps you write a PRISMA-compliant protocol for your systematic review, and we've got one that will even generate a draft of the results section of your systematic review for you as well. So we've been able to do full-quality systematic reviews in two weeks now, rather than two years. It's dramatically cut the time down and made it feasible to do one before new primary research. And I think such automated tools are usable in other areas outside systematic reviews as well. Thank you, Paul. And I think now we will actually go to the panel discussion. There are one or two more questions unanswered, but I will try to put them to the panel after they finish the preset set of questions. I also see a question from Zrinka Gregov: do you have any data or idea of the waste in economics? We don't, but Zrinka, you can try to calculate it — we can help out, so get in touch. I wonder if it's similar; I guess it's similar to other fields. Okay, now we will go to the panel discussion. As I said, in order to improve the value of research, as we've seen, we need to have a discussion with a broad set of actors, because it's not just researchers who are responsible: there are also those who decide whom to fund, and those who decide what research will get published. And this is exactly why we organized this symposium. So we have a panel that is a mixture of representatives of all of these different groups who influence, are influenced by, or are doing research. I would like to introduce our panelists. So, this is Dr.
Simon Harold, who is a senior editor at Nature Ecology and Evolution. We have Professor Eva Méndez. Eva, are you here? I don't think she has joined, unfortunately. Okay, maybe she'll come. She's here. Eva, you're muted. Yes, I'm here. Sorry, Eva, I hadn't seen you. She's a member of the CoARA steering board — CoARA is the Coalition for Advancing Research Assessment — and she's also a team member of the Open Science Lab at the Universidad Carlos III de Madrid. Then we have Maria Cruz. She is a senior open science policy advisor and programme leader for open research software at NWO, the Dutch Research Council, the main funder of science in the Netherlands. Unfortunately, Dr. Henri Tonnang has not joined us, but if he does, I will just put him through to the panel; he's a researcher at the International Centre of Insect Physiology and Ecology in Nairobi and an alumnus of the Global Young Academy. Then we have Dr. Ana Persic. She's a programme specialist in science, technology and innovation policies at UNESCO. Professor Fiona Fidler is a researcher and discipline chair of the history and philosophy of science program, and she co-leads the MetaMelbourne research group at the University of Melbourne. And finally, Professor Paul Glasziou, whom I already introduced: he's a researcher and director within the Faculty of Health Sciences and Medicine at Bond University. During the panel discussion, please feel free to put your questions in the Q&A box. Now we will first have 40 minutes of preset questions for the panel. Also, if you want to be updated on future events that we might be organizing, Marija will post a link to a Google Form where you can sign up for updates. Okay, so we will just start with the questions. First question, to all the panelists — but please keep your answers to two minutes or so. It seems that an increasing number of researchers are researching the research system itself.
For example, this involves the work on research waste that we just presented, or work on how open science affects research quality. The findings of this work should be of great interest to funders and publishers. For example, a funder might want to know how much of the research they fund is wasted and why, or a publisher might want to know whether open data policies actually improve data sharing and reuse. However, it seems that there is a huge gap between this meta-research and those who should be informed by it. So, in your opinion, how can we best facilitate the dialogue between researchers, funders, and publishers? And maybe some of you have concrete examples where this has already been done. So please, you can start — maybe Paul first, because I just see you first on my screen. Right. You know, it's difficult, because I think the most crucial group is the funders: funders have a lot more influence in this area than virtually anyone else. So I think the EViR forum of funders was a very crucial development, but it would be nice to see that forum opened up to more researchers and institutions, for better engagement between the various groups. I think metascience forums are very important as well, but they are mostly researchers, so we tend to talk in these separate forums. I suspect the funders will start to engage; I think they have become more interested in the problem because of the waste they see. But they are, as I say, crucial for that dialogue, and at the moment it's really only a few researchers who get invited to those funder meetings. Maybe Maria can continue, because she is representing a funder in this case. What do you think about this suggestion, or do you have any other ideas? Yeah. Well, I can just talk on behalf of the Dutch Research Council — and maybe not even officially on behalf of the Council on this topic. I feel there's probably even a lack of awareness.
So in the Netherlands, we have the Dutch Research Council, but we also have ZonMw, which funds health research, and I think they are part of that group of funders. NWO funds everything else in terms of research. But I think, partly because there's no data, there's a bit of a lack of awareness that this is even an issue. In preparing for this panel, I started reading around the topic, and it's scary: so much funding seems to be wasted. But there's a lack of awareness, and, as Paul said, there's no magic bullet here either for how to connect funders with this problem. As funders, we're also thinking more about evidence-based policymaking; we are part of the Research on Research Institute, so there's a lot more going on there, though I don't think there's been much on research waste within the Research on Research Institute specifically. But just bringing the stakeholders together — and I'd also say to researchers who are really aware of this problem: you also have the power to go to your funder and tell them this is an issue. We as a funder respond a lot to the needs of our community, and at least at NWO we talk regularly with researchers; if researchers don't flag this as a problem, we as a funder will think it's not a thing. I'm also chair of the Science Europe Working Group on Open Science, so my perspective here is very much an open science perspective, because that's the area of policy I focus on. Science Europe is an umbrella organization that brings together a lot of funders, but also research performing organizations, in Europe. So for me, this is maybe a topic I'd like to bring to Science Europe — something for us as a group of funders to think about. Thank you, Maria. Yes, I guess the dialogue also starts at events like this one, where people actually become aware of the issue.
And Simon, do you think that publishers could also organize forums in which they discuss together how to influence the way research is published? Yeah, absolutely. I would certainly agree that forums like this will be really important for getting different groups of people together. I think traditionally publishers — apart from maybe those linked to societies or organizations that give out research funding — have tended to feel that this issue of waste is perhaps not something they can influence much. But I think it definitely is something we should be thinking about as publishers: trying to understand why some of the work that isn't being published gets stuck, where the pinch points are, and what we as publishers can do to facilitate publication. Having a dialogue with funders and researchers about why that's happening is really important. And publishers could definitely be doing something in that space, perhaps through initiatives like peer review training — in terms of the research waste that occurs through poor reporting, that could be something publishers look at: helping researchers and peer reviewers spot papers that might not be reaching their full potential and giving guidance on how they could. And it's not without cost to publishers, because as editors and editorial staff there's a cost to our time in evaluating papers that don't eventually make it to publication. So I think it is in the interest of publishers to be involved in these kinds of initiatives, for sure. Thank you, Simon. Fiona, can you tell us your view on how we can improve the dialogue between all of these different actors, who should really be aware of what each other is doing, but somehow seem not to be?
Well, the example I was going to give of what a good working relationship looks like is the one Maria talked about already, which is the Research on Research Institute in the UK and Europe, and I think it would be great to see more centres like that. I think most of us in metascience or meta-research are attempting to go through the regular funding channels, applying for grants with mixed success. But maybe we should be looking at a different kind of relationship with funders, where we take a broader approach to the things we're interested in, one that also provides them with useful feedback on how to better allocate resources or detect waste in the system. So a slight shift in how we're operating, creating a different kind of relationship, might serve everyone better. Yes, great points. Eva, please, can you tell us your view? Well, yes, I will address this issue from the perspective of CoARA, the Coalition for Advancing Research Assessment, which you have probably heard about — AIMOS has already joined the coalition and the community. I think it's a great moment, when we are revisiting how we do research assessment and how we evaluate researchers, to exclude wasted research from that assessment. I always say that we have a paper-centric approach to research evaluation, and this is probably something that we can discuss further; we need other commitments and other kinds of quality research. I also want to point out one of the core commitments in the agreement made by signatories of the Coalition for Advancing Research Assessment: to base evaluation practices, criteria, and tools on solid evidence and the state of the art in research on research. That means that research evaluation teams and research funders have to look at the meta-research that focuses on research waste.
So this is a great opportunity for meta-research. We probably have to go to the funders, provide this kind of training, and say: look, there is evidence here, and we have the data openly available for you, so you can base your decisions on the evidence that some research is wasted. So the core opportunity here is that we are revisiting research assessment, and we have to do this together as a community. That's the opportunity for meta-research, and particularly for research waste. And I would just follow on from that idea, although I'm not on the panel: I think it's important that we also test some interventions. We are not sure if they're going to work, but if you don't test them, you'll never know. So let's try something for some time and see if it really reduces waste — maybe it doesn't; we don't have the evidence yet. Funders and publishers have to be brave and say: okay, let's try this model and see if it works, and then we will know. And Ana, what's your opinion on improving the dialogue? Thank you very much. We are an international organization that works on education, science, and culture, and in a way this is what we do, or try to do: we're not researchers ourselves, but we do try to work with scientists and others in the science system to improve or implement new norms and standards in the area of science, technology, and innovation policies. So in my view, this is a question, of course, for research waste, but more generally a question of how we improve the dialogue between science, policymaking, funders, and the others. And it's also a question of which level we want to address: are we talking about the national level, or internationally, or regionally? What is the best place to have this conversation, and who are the best actors to involve?
And again, going back to open science, because I think many of us here also come from the open science perspective, which also opens up some new opportunities for dialogue. There are fora like the European Open Science Forum that happens every two years, CILAC in Latin America, and others across the world, where, for example, we could also pick up this theme of research waste and tackle it, because those are the fora which regularly bring together the open science community, including policy makers, funders, and others. So maybe this is something to consider in the upcoming editions of these fora, and definitely in some of the meetings that we are organizing at UNESCO in the context of open science implementation, I think it's something that we can bring up to a bigger community. But from our point of view, also looking internationally at what is happening with research and science: on one hand, there is research waste. On the other hand, we also have a huge lack of visibility for a lot of research that we just don't know has happened. We don't have a way of communicating about that research or even knowing the results. So there are these two things that in a way should be tackled together when we talk about research waste. Yeah, thank you. And maybe later on, I'll ask you to provide some suggestions on where best to try to put these ideas forward to other actors. Now I have a question for Simon and Maria, as publisher and funder. So peer review happens at the beginning of research, and this is done by funders: they have somebody review a research project and decide whether it will be funded, and maybe whether it needs some improvements. And then publishers review the research after it has basically already been done: they get finished work, the data have been collected, and then they review it. Of course, there are registered reports.
This is a way of publishing where you submit your idea and methods and how you will answer the question, and only then do you do the research, but this is quite rare so far. So where do you see the role of the peer review process in reducing research waste? Can we improve some of the peer review processes that happen at the beginning or at the end to reduce research waste? And can you imagine some alternative way of doing peer review that would further facilitate, you know, coming up with better and higher quality research projects? So maybe Simon first, because, yeah, I see him first. Yeah, that's a great question. I guess one thing that I think would be important is to get a sense of why peer review is sometimes failing, certainly in terms of the research waste that comes from unpublished studies, particularly those that perhaps never make it to being submitted to journals in the first place. I personally would be really interested to understand why that occurs. And I suspect in some cases it's because researchers are often fatigued by the length of the peer review process and the hoops that you have to jump through, particularly for studies that are seen as less impactful, or perhaps researchers feel less incentivized to jump through all the hoops that peer review often entails. So I think understanding how researchers can go through the peer review system in a slightly more straightforward manner would be an interesting thing to explore. Perhaps some sort of lighter peer review, or perhaps the adoption of third-party review platforms where portable peer reviews can be carried over to different journals to reduce waste. I think possibly we could also help out peer reviewers a bit more by developing automated tools for aiding peer review.
I know there are some tools available, such as StatReviewer, which are able to summarize the statistical methods and can be used as a tool to help people evaluate the soundness and the reporting of research. So yeah, I think automated tools are perhaps one of the more promising avenues to explore. Where we would develop those, I'm not sure, and obviously they would take a lot of time to develop, but I think they probably have a lot of promise in terms of how we can facilitate more rapid or more rigorous peer reviews to identify those studies that fall down on poor reporting. I think we could aid the peer reviewers themselves in a similar way, by giving them more specific guidance on identifying methodological shortcomings in papers. There's a tendency in journals just to ask very generic questions, say, are there flaws in the study, for instance, but perhaps more specific guidance could be given. I know your research indicated that a lot of ecology studies were underpowered, so if there were more specific tools available to help peer reviewers identify, comment on, and evaluate the power of studies, then that might help reduce waste in peer review that way as well. Yeah, thank you, Simon. And what do you think about registered reports? I mean, that seems like shifting the peer review from when the study is done to before the study has started. Because more publishers are actually accepting these, do you see that happening at a larger scale? So, you know, that all publishers in the future will actually accept this kind of paper? Yeah, I mean, it would be fantastic if there was much broader adoption of registered reports and pre-registration, because I think that's one of the most powerful tools that we have available to tackle these issues of research waste.
In terms of ecology, I mean, Nature Ecology & Evolution will shortly be rolling those out, but I'm not sure how broadly adopted it is across the field. But yeah, I think it would be great to see that adopted more broadly. My understanding is that it probably works a bit better in more controlled experimental studies than in those aspects of ecology which are a bit more exploratory. Yeah, I don't know the extent to which registered reports will be able to tackle issues with those kinds of studies, or whether there are alternatives or developments to pre-registration that could be looked into so that the field as a whole can benefit from that type of pre-registration. That would be great to know. That's something that we will contact you about: registered reports for the more observational studies, which are more common in ecology. Thank you, Simon. And Maria, what's your view on this peer review process as a funder? Yeah, I also wanted to touch on the topic of reviewer fatigue, because obviously the people who review for funders are the same people who review for journals. And yeah, people are overwhelmed with review requests. I hear that from colleagues in the libraries who work on open access and on supporting researchers with publication: researchers get too many requests. So I'm actually going to go further: yeah, I think we need to reform research assessment, but also the way we publish, right, because it's not sustainable, this cycle of submitting to a journal, getting rejected, and resubmitting. Papers end up being published, but there's so much peer review; people are reviewing the same paper over and over again. And sometimes the papers even get published the same way, you know, as they were in the original submission to the first journal they tried. So I think we really need to rethink publication and research assessment.
And I'm very confident in CoARA, of course, because that's one of the things CoARA is trying to address. But really, we have to move from quantity to quality, and not quality defined as, you know, it was published in a certain very high-profile journal. That shouldn't be the way that quality is defined. So we need to maybe publish less, publish better quality, and do better review. Because, yeah, papers and grant proposals are now reviewed by one or two people, and that's probably not enough. But you also cannot ask more people, because it's just not sustainable. The other thing is that we also need to move towards more collaboration in science, rather than focusing so much on the individual. That's why everybody has to publish a first-author paper, even if, you know, they know somebody else did something similar, but you have to have a first-author publication. So we have to move from that, we have to move from individualism to collaboration, to solve problems for society, right. Another thing I'd also like to talk about, because I've been a researcher, and now I contribute to the research community as a policy maker: in our research system, especially at universities, you have like two kinds of staff, right, the researchers, and then the support people. And we also have to move towards a reality where we don't have this strict division, because in your Nature Ecology & Evolution paper, Antica and Maria, and others in the audience, you talked about how researchers need more support, right, support from methodologists, from data stewards. So we have this essential staff that is undervalued, but they contribute to research in a very significant manner.
So we also have to rethink the way research institutions are organized, and think less about, you know, publishing papers, and more about solving problems in a collaborative manner, bringing in all the people that are needed to solve that problem, without paying attention to who is "special" and who is not in those collaborations. Sorry, this is a bit far removed from the question. Well, I think it's a really excellent point that research is much broader than how we currently evaluate it. And what I think is the huge problem currently is that what incentivizes researchers, so what they want to achieve, and that's publishing, is separated from the actual goal of science, which is to create valuable knowledge that, you know, might just be interesting or might actually help to solve some important issues. And then there is this clash between the two, and we have to try to reduce that. Another good point that I want to take from your answer is that we really all have to work together, because if, I don't know, a funder incentivizes something, but then the publisher does not recognize it, then researchers suffer down the road. So we really need a holistic change at different levels to achieve a system where we have more robust and more valuable research in the sense of impact. I see that Henry Tonan has joined us, so just in time for a question. This question is for the researchers. So Fiona, Paul and Henry: do you see any possible scenarios where preventing research waste might prevent some valuable future research? So we say this research is not going to get funded because it's not properly done, or maybe it's not interesting enough, and that would actually prevent some important discoveries. Can you think of any such examples? And that relates obviously to the question of, you know, who defines what research has limited or no informative value; who is there to say what matters and what doesn't.
So maybe we start with Fiona. Yeah, sure. So I do think there's a risk here, for sure. I think there's probably more than a little truth, to be honest, to the idea that some of these reforms stifle creativity. In many cases we are talking about sort of boxing ourselves into templates or, you know, a cookbook of how to do research. But look, honestly, I think that risk is actually tiny in comparison to the other external influences on our research: influences from funding agencies or industry demands or national-interest research priorities and so on. I think it's pretty easy for critics of reforms, by which I mean, you know, checklists or pre-registrations, to dream up hypothetical scenarios where the intervention has impeded progress. But the question we need to ask is whether those cases stack up against these huge estimates of 85 percent or 80-something percent research waste that we've heard about today. So it's a cost-benefit analysis in the end. And the cost has to be more than just being annoyed by a checklist in peer review. What we should care about, if we're going to care about something, is actual damage to progress, not just being mildly inconvenienced. I think arguments in this space often fall into what philosophers of science call success-of-science arguments: our scientific theories must be, you know, on track or tracking the truth, because look how much progress we've made, look how much we've built. And sometimes this carries over into: why would you change science? Why would you intervene? You know, everything's going well. And sure, science is great and everything, but it's important to remember that we don't have any control groups here. We don't really know how successful things would have been, or how much more progress we would have made, under different conditions. Who should be in charge of defining what research has informative value, or what research has limited informative value?
The same people who should be in charge of everything, right? A diverse community of practicing scientists or practicing experts in the field. That diversity is important not just from an ethical point of view; it counts, you know, from a cognitive stance as well, cancelling out our biases. Thank you, Fiona. Paul, can you tell us what you see as a potential problem here? Yeah, it's difficult. Maria's comments earlier made me think of Doug Altman, who I referred to in my talk, famously saying we need less research, better research, and research done for the right reasons, which I think sort of encapsulates the problem. But if we then come to the "less research" part, the fear is that less research may inhibit some of the innovation. I actually doubt that. It's possible, as Fiona said, that that could happen. But I think that lies more in the conservatism of our granting systems and publication systems than in poorly reported or unreported research. And I think those are actually more of a problem, because they lead people to go down dead ends or do unnecessary replications, as in Glenn Begley's case that I cited, where he didn't want to follow lines of research until he was sure he was building on a solid foundation and not on sand. So there is a small risk for innovation, but I think that's not the main problem for innovation; that lies in the conservatism of funding. Whereas reducing research waste will actually help cut off bad lines of research and improve the productivity of research. Thank you, Paul. And Henry, what is your opinion on this topic? You have done research yourself. Thank you very much, Antica. I'm sorry for my mix-up with the technology; basically, I was in another meeting, I needed to exit at the right time and change Zoom, and I had some issues. Sorry for that. Basically, for me, this topic is quite challenging. When I look at research waste, I would like to give a little bit of context.
I see research waste at different levels. Most research is characterized by data collection, whether in social science or biological science or any type of physical or experimental science. And where the first waste is situated, in my view, is in the gap between the quantity of data that has been collected and the quantity of data that has been translated into meaningful research that is peer-reviewed or published. In my experience of around 25 to 30 years of research, I can tell you I only managed to translate my actual data collection into publications at a rate of 30 to 35%. You can see that I have a lot of experiments, a lot of waste, that I have spent funding on but never translated into an end result that can be used for knowledge discovery. In that way, research waste in my view definitely delays innovation. It delays future discovery. Secondly, where I stand, in the developing world, we often have the issue of not being able to publish as we want because of fees. And I don't want to go into all the mechanisms involved in paying a fee to publish in a certain journal. Sometimes your paper is accepted but you don't have the money to pay, or sometimes you are even too frustrated to push it through. In that case, the knowledge or the research you have conducted is also wasted, because it's not out there; people cannot use it for the next step. I mean, these are the two key points that I would like to highlight and to add to what my colleagues said earlier. Thank you. Thank you, Henry. And I think this really aligns well with my next question to Eva, because the data that you mentioned, you know, in open science practices, for example, data is an important product of research, even if there's no result out of it. So Eva, in your opinion: we know that open science practices can definitely reduce research waste.
Thinking realistically, what do you think are the lowest-hanging fruits: open science practices that are easy to incentivize but that we are still not incentivizing? So I'm not talking about the most important ones, but maybe those that are the easiest to achieve at this point in, you know, building a better ecosystem of science. That's a very good question. And I think it's the same answer: we don't have the magic wand, but we are working on that. I think that openness in general is a clear invitation to quality. I always give this example: if I have to teach my students behind a closed door, I just teach, and I can invent, and I can waste my talk and the learning process. It's the same thing with research. If you close the door of research, if you have publications just behind paywalls and in your farm of citations, and we know that everything happens like that, there is more room for waste research. But in my opinion, what we have to do is not just say, well, we did closed research, now we do open research, and then change the model from paying for access to paying for publishing, because there are countries that will be left out of the system. And also, this is not about where we put the money; it's that we shouldn't need to put in money to publish. I think the real change, or the real motivation here, is particularly the web. Think about this: the web has changed the way we relate to each other, the way we buy, the way we teach, the way we hold conferences like this one, but it has not changed the way we communicate science. It seems that researchers came into the world to make papers. No, excuse me. Researchers came into the world to change the world and make it a better place. So the point is not about opening up the papers; it's about doing something different. Think about this.
How can we still do research like in the 19th century? We call them papers, we call them preprints, and we don't print anything. We can communicate in a more effective way, showing up from the first minute: I'm going to do this, guys, who wants to join me? Collaborative science, open science. This is one of the issues. And if you open your research from minute zero, you can prevent these practices that waste time. But excuse me, we live off research. This is a career. We are not 19th-century researchers who just do research because they are interested. Of course we are interested in what we do, but we live off this. And if you count only papers, that real move forward in research evaluation is not going to happen. And the way we do research, not only the way we evaluate research, is the real motivator. The thing that we have to change is the whole cycle of research, based on the web, based on technologies, based on artificial intelligence, and also thinking about data. If ChatGPT can make a paper, the paper is not worth it anymore. What is really, really crucial is to have and to share the data, and to do better research collaboratively. I don't have the magic wand; I only have ideas. Sorry. That's what we need first, ideas. Let's go from there. So I'll just ask two more questions to the panelists, and then we go to Q&A from the audience, because we have to finish in 25 minutes. So just once more for the audience: please put your questions into the Q&A box, and if you like an already-asked question, you can upvote it, I think. Okay, so my next question goes to Anna and Henry, and please keep your answers brief. Obviously, open science practices are really important for reducing waste, but they are not currently really present in the education of young scientists. So do you have any ideas of what would be a good educational model to teach open science at the very early stage of training researchers? Anna? Yes, of course.
Actually, you know, I'm not sure how much the audience is aware of this, but in 2021, 193 countries adopted the UNESCO Recommendation on Open Science. So that is a legal instrument, which is not legally binding like a convention, but is a highly important instrument that can influence the way science is practiced in different countries. I will put it in the chat, but I would really encourage the audience, and you also, to have a look at that recommendation, because what is nice in there, I think, is that it sets the norms of what open science should look like, what is part of open science, why open science should be practiced, and the countries have adopted it. So you also have leverage, a tool that you can use when trying to introduce open science more broadly within the research community. But not only there: it is also, again, a question of decision makers and policy makers at the university level, at the funder level, publishers, etc. So there are some indications of what type of educational model should be used for teaching open science. And again, as was already said, it should start from the beginning of the educational system, and not just when you are already a researcher and it kind of comes at you at the last moment. It really should be something in the curricula of each and every researcher, independently of the discipline, independently of the place where you are doing your research. And what we are trying to do now within the UNESCO open science community, and you are all invited to be part of it, is this: we have a working group precisely on capacity building for open science, where we are trying to develop a scheme of what skills and competencies are necessary for open science, so that, if the community agrees, we would then propose it to the governments to indeed include it in curricula across the world.
Of course, there is another point here: open science is not free science; there is a need to invest in it. You can invest new funds into open science, or you can redirect funds from current practices to open science practices, but it is important, including in terms of education and capacity building, that it is something that is invested in. It cannot be an add-on without a plan, without funds dedicated to it, and without resources, both human and financial. So just a kind of plea to all of those who are developing curricula plans for open science: make sure there is an implementation plan with them, with the necessary funds allocated. I think that's very important as well. Thank you, Anna. And Henry, what's your opinion on how to improve the education of young researchers in open science? Thank you for the important question, and I totally agree with Anna that it's something that has to start at the very early stage. In my own experience, being the data manager for a research institution, I realized that it's still very difficult for colleagues who are already senior, or people who have reached a certain level, to even share their data in the common repository of the institution, even though it's mandatory and it's also part of the evaluation. And that is why I believe we need a change of generation, a change of mentality. We need to really start at the very early stage. I totally concur with what my predecessors said, that UNESCO really needs to be one of the champions of this open science. And we need to start in primary, even in nursery schools. As people are growing up, the notion of sharing your research output needs to be integrated into your understanding of what you will do in the future if you decide to become a scientist. Thank you. Thank you, Henry. Just a reminder to those who asked questions in the box.
Some of them have been answered by typing, so look at the answered questions in the chat box. And I have one last question now for the panel, and this is for Simon and Maria. If you came to your boss tomorrow and suggested one concrete action, maybe after this discussion here, that could immediately reduce research waste, what would it be? Just one action, and that's it, from each of you. So yeah, what's achievable tomorrow? Simon? Yeah, that's a good question. Well, I guess from my perspective as an editor, trying to reduce the loss of papers, you know, the papers that we reject post peer review for methodological reasons; I guess that's a big pinch point for us. So that probably starts early on in the research design process, with whether the study is methodologically sound. But as journals, for those papers that we reject without peer review, there might be a way to give people more targeted feedback about methodological issues. So maybe, as I talked about earlier, by using these automated tools to identify methods issues: if journals gave more targeted feedback when deciding not to proceed to peer review with a paper, that might help researchers reduce problems further down the line when they submit to other journals, if that makes sense. It does. Thank you. And Maria, what would you say? In the Netherlands, we are in the very fortunate position that the government decided last year to invest quite significantly in advancing open science over the next 10 years. So we actually have 20 million euros a year to spend on open science. And as it happens, we are just now starting to make plans for how to spend this money. We don't know exactly what we'll do, but one of my new areas of action will be open research software. So one area that I would like to address is code availability.
I know you've written about this as well, Antica. Studies focus so much on publications, but when we look at other types of research products, I think code is an area where there's a lot of waste, because it's not shared, so people have to reinvent the wheel over and over again, and there's a lack of sustainability in code. So this is, I mean, again, no magic bullets or wands here, but that's an area I'm hoping I can address in the immediate future with funding from the government. Okay. Yeah, thank you, Maria. And I hope some of you will consider better communication with researchers: to inform them, or ask them, for example, to evaluate practices, for example how open science practices do or don't reduce research waste, or maybe to test some ideas of what might actually help improve research quality. Because I think it's important to stress here that reducing research waste is not just reducing research waste; it is improving research quality and impact. And I guess all of us are here for the same aim, and that's to have better research that has higher value for those who might use it. So I'll now read some questions from the audience, and you can still type your questions in. A question from Neil Jacobs: is one major factor here that researchers have incentives to persuade their readers more than incentives to inform them? Okay, so that's an interesting point. And what might be done to tackle this, other than pre-registration, which is easier in some fields than in others? So who would like to answer this question? Eva? Well, actually, I would like to ask: who are the readers? Because the problem is that we communicate science at a very expert level. I think the key point here is that the readers are also the citizens who pay for the research.
I mean, more research communication in a way that is not closed, not in a closed ghetto of researchers, but opening up the communication and being a communicator with the whole society: that will incentivize. But again, if the only thing that counts is the paper, and the paper is for a very targeted, very specialist community, it's not going to work. If you valued, or could include in your incentives, the way you address society, the impact, communicating science to the readers, and those readers being the public at large, that would be even better. But only a few researchers become research communicators, and they do it because they want to. And we need this forum to be open. Yeah, I agree. And I think the problem here is that communicating science is still not really incentivized. So, you know, while I'm organizing this symposium or maybe talking to funders, I'm losing publications, which is what I'm valued on. So I'm not actually incentivized to do any of these things. So I really agree with what you said. Anybody else want to answer this question? Okay. Then we have some questions that have been answered, but maybe they're interesting for everyone to see. So there was a question: is it better to publish in any journal, for example low-quality or predatory journals with inadequate peer review, which is very common in Croatia, than not to publish the data at all? This would reduce waste on one hand, but the quality and reliability of such published data would be questionable. Fiona has answered this question in the chat, but maybe, Fiona, you can just tell us your answer here. And I see Paul raised his hand too. Paul? I've already had my chance in the chat; you go ahead. Yes, I was going to say, I think we should move away from the idea that the only way you can report your results is publication.
When Iain Chalmers and I were writing that first 2009 paper, we were talking to a lot of researchers, and one story that struck me was a large trial in a childhood cancer where the statistician couldn't get the paper published because the clinician who'd led it had died; she couldn't write the discussion section, and she couldn't find anyone to write it for her. So the results of that study just sat there because of a missing discussion section. Right. That really illustrates that the important thing is to be able to make the results available, not a discussion section which passes peer review. And that can be done through the registries. So I mentioned that clinicaltrials.gov now has 10% more results available that are unpublished in journals but posted on the registry. We should think of that as the prime repository of the results of studies, and of publications as a secondary thing that tidies up and makes prettier the results of the study and communicates them. But they shouldn't be the bottleneck for reporting results. That's just the wrong thing to be doing. Yeah, this is a great point, because results can be reported in different ways, rather than just in publications. But these different avenues should also be visible, because currently they are not that visible; people don't know that you can find results somewhere else. Even preprints are not that used in some research communities, and that's a great way to put results out. And I have a question also from the audience, not written here, but submitted before the panel started. So, for Simon: there is a lot of research that is never done, and that in a way is research that might be important, for example in ecology for conservation purposes, but it is considered very basic research that is not publishable in good journals, or at least what would be considered high-quality journals.
For example, basic studies on the diversity of an area that is currently unknown, or maybe some life-history studies on species — previously you could publish them in Nature, but now, 50 years later, this is something that's very difficult to publish. So a friend who works at an NGO and also at the Ministry finds it very difficult to find researchers to conduct this kind of research, because nobody wants to conduct it — they get no value from it. And I guess a role of publishers could be to also increase the number of these important studies, which are never conducted because people don't see the benefit: what do I get from it? I cannot get this published. So is there something publishers could do to reduce this kind of wasted opportunity? Because it's not research you started and never published; it's research that never starts, but should. 
Do you mean in terms of there not being a venue to publish this type of work? Yeah, I mean, it's an interesting one. My view is that there's a pretty diverse ecosystem of publishing outlets, of journals, out there. So it's surprising that people feel there aren't venues in which to publish their work, because there are many sound-science journals out there — for instance, Scientific Reports, which we publish, and many other journals where you can publish research notes or field notes, especially in ecology, those kinds of things. So, yeah, perhaps there's an issue with some journals just not being visible to researchers as places to publish their research. I don't know what the solution to that would be. 
I mean, perhaps there could be some kind of publishing advisors that people could turn to: people could submit their requirements for what they want to publish, and the advisors could say, we've done some research and we think this is the best venue for your work, and this is what you need to do to get it published there. But certainly, if there are gaps in that kind of publishing ecosystem, I'm sure publishers would be very interested to hear about them, because ultimately publishers want to publish sound, quality research. Does that answer your question? 
Partly, because, for example, I'm an editor at the Journal of Animal Ecology, and I'm pushing there for — well, they did include a certain type of study now, so you can publish long-term studies even if they're not that interesting in the sense of "here is this cool result in evolutionary ecology we have proven." But I'm also pushing to have a type of paper that just describes a new species, or something really interesting, or maybe a certain area that is currently really understudied where we need data, but that people are not researching because they cannot publish this kind of research in a high-impact journal. So that was my take on it. These long-term studies are now accepted, so they can be published as papers themselves, but there are other things that could also go in this kind of journal section. 
We have a question from Emma Mills: in the second slide presentation, there was a label of "soiled student projects." Would it be okay for the speaker of the presentation to elaborate on what was meant by that? So what are the "soiled student projects"? I think it was Paul's presentation. No? Maria? I don't think I have anything similar in my presentation either. 
Well, I guess these are student projects that never get any results out — they're done and conducted and then never published. Okay, I think we have answered most of the questions. Okay, there is another one, from Hugh Shannon: this may have been discussed already, but how can we persuade higher education institutes and research institutions to prioritize reducing research waste? That's a fair point, because they're also, obviously, evaluated by somebody. I think I lost the question. Okay, it says answered. Yeah, so how can we incentivize that? You know, they are also looking for high-impact papers, because that's how research institutions are evaluated. 
Yeah, but I believe there is also, particularly now, a push for science that, as we were saying before, solves problems, encourages collaboration, is more transparent, is more equitable. There are all these social pressures mounting now in terms of transforming science, and I do believe that many higher education institutions are becoming more and more sensitive to that. So it's a question of pushing for certain values and principles, and that push should come from everywhere. It comes from our side — international organizations, with new standards, new principles, new values. It should come from researchers, because this is what they want to do. And I think presenting this as better-quality research that responds to societal needs, that will solve certain problems, that pushes the frontiers of science further — that is the way to address it, not necessarily talking about it as research waste, where institutions don't necessarily see the value of investing in tackling the issue. Anybody else want to say something about this? Fiona? 
Yeah, I just wanted to agree. I think that selection and promotion processes in some institutions are actually changing. But what matters more is our perception of what they are. 
And what the people — the other academics who sit on the promotion and selection panels — think about those things. So the actual policies can change, but unless the academics on the panels also change their perception of what those policies are supposed to do, we don't see them translating into different practices. Maria? 
Yeah, I think it's going to require a lot of humility, right? Because it's the same for a funder or for a research institution: you have to admit it. Who wants to admit that they are wasting money, that they are producing papers that have no value? What institution is going to want to admit, whoa, 85% of what we do is worthless? What funder is going to want to say that? And when you read these papers, basically one of the conclusions is: oh, actually, we can do useful science with less money. But what rector, what president of a research funder, is going to want to go to their government and say, actually, we need less money? So this is going to be an issue for this discussion, right? It's going to be tough, because it means admitting, yeah, we're not doing this right, we are wasting money. And that's a tough admission. So this is going to require a lot of humility from people who — well, this is controversial — are maybe not used to being humble. Because as a researcher, you are trained to promote yourself, to go out there and say that your science is amazing, that you're curing diseases, even if you know you're really just doing a little bit — you're trained to say that what you're doing is amazing. 
So we also have to change. It requires humility; it requires admitting that maybe you're not doing things well, and people are not socialized that way in science. At least I wasn't socialized that way — I was socialized to believe the way to get to the top is to promote your results as ruthlessly as possible. So there's a big culture issue here. 
Yeah, that's a nice closing remark, because we are now at the end of our session. I would like to thank all the panelists and organizers, and also the MetaScience conference for selecting our symposium to be one of those that could happen. And I hope we all keep up the dialogue. For the participants — again, Maria, can you post the link? — we are working, so Paul, Maria, Tim, Shinichi, and hopefully Fiona and some other people who might join, we are working on some solutions, trying to put these things out there. And, you know, if you'd like to be updated, or maybe involved in some of these, please leave your email in that Google Form. Of course, you can also email us, because we really think this change requires input from many different actors, and also from many different representatives of those actors, because I, as a researcher, definitely face different constraints than a researcher from the UK, or from somewhere in Africa, or in the US. Our opportunities are different, and we have different needs in order to change the way we do science. So with that, thank you all again. Hopefully you have learned something and started to think about some topics that you might not have thought about before. And hopefully this has not discouraged you from continuing to work in science. Thank you.