Hello, I'm Shane, the Community Engagement Assistant at eLife, and it's my pleasure to welcome you to September's ECR Wednesday webinar. This series aims to give early career researchers a platform to discuss issues important to you and your research career. You can now follow us on Twitter at @eLifeCommunity and with the hashtag #ECRWednesday. The session is being recorded and we'll make it available on YouTube in the near future. Now it's my pleasure to invite Alok Varma, a graduate student at the National Centre for Biological Sciences, India, and a member of the eLife Early Career Advisory Group, to introduce today's session and our panelists.

Thank you, Shane. Hello everyone, and thank you for joining us today for the ECR Wednesday webinar on the science of science: using meta research to make your research more transparent and reproducible. I'm Alok Varma, a member of the eLife Early Career Advisory Group, and I'll be your moderator for today's webinar. Let me begin by telling you a bit about eLife and its Early Career Advisory Group. eLife is a nonprofit organization operating a platform to improve all aspects of research communication by encouraging and recognizing the most responsible behaviors in research. The role of the Early Career Advisory Group is to influence and support eLife's work, to catalyze broad reform in the evaluation and communication of science, and in particular to represent the needs of researchers at early stages of their careers, fostering a research culture that's healthy for science and for scientists. The ECAG champions many different initiatives to achieve this goal, and the ECR Wednesday webinar series is one such initiative, a brainchild of the ECAG. Today our webinar panelists will explore ways that you can use meta research, or the science of science, to make your own research more transparent, rigorous and reproducible.

So let me start with a little housekeeping. During the webinar, please be respectful, honest, inclusive, accommodating, appreciative, and open to learning from everyone else. Please do not attack, demean, disrupt, harass or threaten anyone, or encourage such behavior in any way. We reserve the right to ask anyone to leave and/or deny access to subsequent webinars. If you feel uncomfortable or unwelcome on any of these webinars, please contact the eLife safety team via elifesafetyteam@protonmail.com, which is on your screen. As Shane mentioned, the session is being recorded and will be made available online at a future date, so keep your eyes open for that. And if you need any help, please send a chat message directly to Shane via the chat box.

We have three speakers today, each of whom will give a presentation lasting approximately 10 minutes. Following these presentations will be a Q&A session. If you want to ask a question at any point during the webinar, you can do so by typing your question into Zoom's chat box, or you can tweet us at @eLifeCommunity using the hashtag #ECRWednesday. I will read out your name and your question during the Q&A session that happens after all the presentations are done. So with that out of the way, I'd like to now welcome our three speakers. The first panelist is Tracey Weissgerber. She's a meta researcher at the BIH QUEST Center for Responsible Research in Berlin, and a former member of eLife's ECAG. Tracey, we now invite you to share your screen.

Okay, thank you, Alok.
Hopefully everyone should now be able to see slides, and they should be advancing soon. So today we're talking about meta research and how we can use it to improve science, and I have the challenging task of explaining all of meta research to you in 10 minutes. I'll also talk a little bit about how I got started in meta research, and give you a few ideas about ways you might be able to use meta research to make your research more transparent, rigorous, and reproducible.

Our first question for today, obviously, is: what is meta research? Meta research is essentially the science of science, or research on research: research on the research process and the research enterprise itself. When we do meta research, we apply the scientific method to study science itself. This is a very powerful tool that can help us to improve science by identifying problems with the reporting, the analysis and the conduct of scientific research. Meta research can also help us by developing targeted solutions: if we're examining the scientific literature or other aspects of science and identifying specific problems, how common they are and what effects they have, that can lead very directly to solving those problems. And meta research doesn't just apply to evaluating scientific publications or preprints. It can also be used to evaluate other aspects of the scientific enterprise, which might include factors like hiring and promotion practices, funding agency practices or policies, educational programs, and journal or institutional policies.

I'll show you a few quick examples of different types of meta research studies. The first is a study published in PLOS Biology, where the authors looked at a very large number of clinical trials to determine how well they were reported in accordance with various elements of the CONSORT guidelines: were authors addressing things like blinding, randomization, or inclusion and exclusion criteria? The authors found that while the quality of reporting is improving over time, many papers are still missing essential information, so there's an urgent need to continue interventions to improve reporting. The second example looks at an intervention to actually improve reporting, this time for animal studies. Here the authors conducted a randomized controlled trial in which authors who submitted animal studies to PLOS ONE were either required to submit the ARRIVE guidelines checklist along with their manuscript or they were not, with no other interventions. And the authors found that, unfortunately, just asking authors to complete a checklist during manuscript submission didn't really lead to improvements in reporting. Meta research, as I mentioned, doesn't always have to focus on scientific papers, and so the third example is a paper where we looked at statistics education practices. Here we found that while almost all papers (97%, in fact) published in top physiology journals used statistics, statistics training wasn't necessarily required to get a PhD in physiology or related disciplines. About two thirds of physiology PhD programs required a statistics course for their students, whereas the remaining third were split between offering statistics as a recommended elective, simply an elective, or having no statistics component in their programs at all.
One of the important things to keep in mind is that meta research is not meta analysis. Because both terms involve the word "meta", people often get confused between them, so I just want to emphasize that these are rather different things. When we do a meta analysis, we start by performing a systematic review, which helps us to identify all studies that have addressed a particular research question. Then, in the meta analysis, we combine the results of all of the studies addressing that question in order to assess the size and the direction of the effect based on the literature as a whole, instead of looking at individual studies themselves. So this is quite a different process from the one we go through for science-of-science studies. There are some commonalities between meta research and meta analysis: when meta research looks at a body of literature, as opposed to other aspects of the scientific system, both approaches share the fact that they examine a body of literature rather than single individual research articles, and meta research borrows a lot of methodology from systematic review and meta analysis. However, meta researchers sometimes review other types of records beyond publications, like funding agency or journal policies, course requirements, tenure and promotion criteria, or other factors. And while most meta research studies don't involve meta analysis, there are exceptions depending on your research question. For example, if you're asking whether blinded studies produce different effect sizes than unblinded studies for a particular outcome, you might need to perform a meta analysis to answer that question.

So why did I start doing meta research? I became involved because I was concerned about a problem in my field. I studied preeclampsia, and while women who develop preeclampsia have very similar symptoms at the end of pregnancy, they can arrive at those symptoms, and the associated maternal, fetal and placental problems, through different pathways. So for any biomarker we look at, we expect it to be very abnormal in some women with preeclampsia and close to normal in others. This was a problem because we often present data in bar graphs, which hide that variability. So I started to address this with some of my colleagues by looking at bar graph use in the physiology literature, examining the problems with the use of bar graphs and how often authors were using them. We published this meta research paper in PLOS Biology in 2015, and since that time it has contributed to policy changes in a variety of journals, encouraging authors to replace bar graphs with more informative graphics.

So the important question for today is: how can you use meta research to help you improve your science? There are a number of different ways. The first thing you can do is find meta research on topics that are relevant to your research studies and your methodologies. There are a lot of studies available on topics like open data, adherence to reporting guidelines, statistical problems, pseudoreplication, or things like blinding and randomization, as well as many other topics related to scientific publications. And as you start to read data and information from meta research studies, these studies often tell you how you can recognize problems and fix them in your own research.
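To make the meta analysis step described above concrete, here is a minimal inverse-variance, fixed-effect pooling sketch in Python; the effect sizes and standard errors are made-up numbers for illustration, not values from any study mentioned here.

```python
import numpy as np

# Hypothetical effect sizes (e.g., mean differences) and standard errors
# from five studies that all address the same research question.
effects = np.array([0.42, 0.10, 0.35, 0.58, 0.21])
ses = np.array([0.20, 0.15, 0.25, 0.30, 0.18])

# Inverse-variance weights: more precise studies count for more.
weights = 1.0 / ses**2

# Pooled (fixed-effect) estimate and its standard error.
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"Pooled effect: {pooled:.3f} "
      f"(95% CI: {pooled - 1.96 * pooled_se:.3f} "
      f"to {pooled + 1.96 * pooled_se:.3f})")
```

The pooled estimate summarizes the size and direction of the effect across the whole body of literature, which is exactly what distinguishes a meta analysis from reading the individual studies one by one.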
Meta research papers can also be valuable in explaining to others why it's important to adopt better practices, so you can have discussions about these papers with your colleagues and your friends. If you're not sure where to start with finding meta research papers that are potentially relevant to you, we published a recent editorial that contains a variety of examples of meta research papers. The editorial is available open access, and you can see Box 1 for a variety of studies that will help you identify common problems and explore solutions.

Here are just a couple of examples. The first study looks at the problem of statistical power, and the authors present data illustrating that underpowered studies are very common. They then go into detail examining the problems that underpowered studies create, and they explain why low power is an issue even when you don't find a significant difference between groups. Here's another example of a meta research study that might be relevant to you, and there are several studies like this: the authors examined blinded versus unblinded studies and found that unblinded studies generally find larger effects. This means it's important to use blinding whenever possible at all stages of the experiment, and depending on your reporting guideline, you may need to report blinding status for participants, caregivers, outcome assessors or data analysts. This is another study, looking at the effects of attrition in experimental research on cancer and stroke. The authors found that 7 to 8% of animal studies excluded animals without explanation, and that about two thirds of papers lacked the information needed to determine whether animals were excluded. This is a problem because excluding animals in a biased way, especially in small sample size studies, increases the risk of spurious findings. A solution is to use flow charts to report all excluded animals, as well as the reasons why observations were excluded, which allows your reader to assess the risk of bias.

Another way you can use meta research to improve your own research is to find or start a reproducibility journal club. These journal clubs are a grassroots initiative that makes it easy to start your own journal club discussing meta research studies, as well as findings from other studies relevant to ways you can improve your research. The movement started in psychology, has now expanded to many different institutions and countries, and is also beginning to expand to other disciplines. These groups can be a great place to meet like-minded people who are also interested in meta research and science improvement, and to engage in critical discussions about some of the solutions for improving science, the strengths and weaknesses of those solutions, and how we can all make our science more transparent, rigorous and reproducible. All right, thank you very much.

Thank you, Tracey. That was a very nice overview of what's going on in meta research. Up next is Iuliia Ferling. Iuliia is a research fellow in the Cell Biology program at The Hospital for Sick Children in Toronto, and was a participant in the meta research team of the 2019-20 cohort of the eLife Community Ambassadors Program, which she'll talk to you about today. So Iuliia, it's all yours.

Thank you very much for your nice introduction.
I will talk today about the lessons learned from the eLife Ambassadors meta research team. In 2018-2019, a group of early career researchers participating in the initiative led by the eLife Community Ambassadors decided to create a meta research project. The study was led by Tracey, following her method of learning by doing, and we decided to investigate how to create clear and informative image-based figures for scientific publications.

To create this study, we started with the definition of our research question and objectives. A research question and its objectives should be well defined, clear, logical, feasible, interesting, novel and relevant, and to define them we needed to follow several steps. First, we defined our problem: a lot of people, while reading scientific papers, are known to go first to the figures, and therefore it's important that figures are accessible to a broad audience. For example, figures from papers outside one's immediate area of expertise are often difficult to interpret, which makes it much harder for science to be communicated. After we defined our problem, we validated it with preliminary research: many researchers have discussed fraudulent image manipulation and the technical specifications of image acquisition; however, data on how legible and interpretable published images are were missing. A lot of recent evidence also suggests that it's important to include methodological details about image acquisition in papers, and this information is often missing. The next step was to formulate our research question, which was exactly this: how good are the quality of reporting and the accessibility of image-based figures among papers published in top journals? The factors we decided to assess included, for example, the presence of scale bars; the explanation of symbols and labels; clear and accurate inset markings; and transparent reporting of the object, the species and the tissue being shown, for example in the legend or in the figure, so that the reader can interpret the data correctly. We also examined whether images and labels are accessible to readers with color blindness and other impairments.

Once we defined our research question, we needed to state our sampling frame; there has to be a clear rationale for the sampling frame, and it has to be an appropriate size. We decided to include all the original research articles published in April 2018 in the top 15 journals for original research in three different categories: physiology, plant science and cell biology, with the top 15 journals chosen according to the 2016 impact factor. Once the sampling frame was ready, we needed to screen the journals and eliminate those that didn't publish original research, for example review journals. After the journal list was ready, a PubMed search for all the needed articles was done for the defined time period in 2018. All these articles then had to be screened to confirm that they were full-length original research papers containing the image types of interest and continuous data, and every article was reviewed by two different abstractors.
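As a hedged illustration of the dual-review setup just described, here is a minimal sketch of assigning every article to two independent abstractors; the article IDs and abstractor names are hypothetical, not the team's actual workflow.

```python
import itertools
import random

# Hypothetical article IDs and abstractor pool (illustrative only).
articles = [f"PMID{1000 + i}" for i in range(10)]
abstractors = ["A", "B", "C", "D"]

random.seed(42)  # fixed seed so the assignment is reproducible

# Every possible pair of distinct abstractors; each article gets one pair,
# so it is always screened by two independent people.
pairs = list(itertools.combinations(abstractors, 2))
assignments = {article: random.choice(pairs) for article in articles}

for article, (first, second) in assignments.items():
    print(f"{article}: independently abstracted by {first} and {second}")
```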
After we had settled on our number of papers, we created our abstraction protocol, to make it easy to convert the content of each paper into reviewable data. All figures, in both the main paper and the supplementary files, were reviewed by two different abstractors. Before data abstraction started, we made sure all the abstractors were trained: each completed a 25-article training set to ensure that everyone's responses were consistent. During abstraction, two independent reviewers blindly screened each paper, primary and secondary outcomes were abstracted for each article, and disagreements were later resolved by consensus. After all the abstraction was done, we added an additional level of reproducibility, so to speak: a quality assessment of the data extraction. 10% of all the articles from each field were additionally verified by a single quality-assessment abstractor, who worked across all fields in order to synchronize everyone taking part in the study.

To put our results in context, it was important for us to also present example images illustrating what we were abstracting for. For example, we looked for missing or inappropriate information on scale bars, and you can see from these example images how a missing scale bar, or a scale bar that blends into the background, can appear in a paper. According to our data, almost 50% of papers in physiology had no or only partial scale information on the size of the images; it was a little better in cell biology, while in plant science up to 70% had missing or only partial scale information. In our study we also wanted not only to show the problem but also to suggest solutions and educate people on how to do better, so for each case we showed how a scale bar can be presented in different ways that are visible and readable. We did the same analysis for misplaced or poorly marked insets. Insets are often an important part of images in scientific papers, and one often sees that they are placed or marked incorrectly, which makes it very difficult for readers to find where exactly in the image the inset comes from. It's important to show, in a way that is visible to the reader, where exactly the inset was taken from, and on the right side you can see examples of how to do this. As for the results: in the case of poorly marked insets, almost 30% of papers in physiology did not mark them correctly, and in plant science it was even more. It's also important to describe insets in your legends so that readers can follow what you are doing; in up to 50% of papers in each field, insets were not clearly described. Our paper also abstracted other categories, including recommendations for readers with color blindness, which you can read about in our paper.
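The team resolved disagreements by consensus; purely as an illustration of how one might check abstractor consistency after a training set like the 25-article one described above, here is a minimal Cohen's kappa sketch with made-up ratings.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical yes/no judgments ("is a scale bar present?") from two
# abstractors on the same ten training articles; values are made up.
abstractor_1 = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
abstractor_2 = [1, 1, 0, 1, 1, 1, 0, 0, 0, 1]

# Kappa corrects raw agreement for agreement expected by chance:
# 1.0 means perfect agreement, 0 means no better than chance.
kappa = cohen_kappa_score(abstractor_1, abstractor_2)
print(f"Cohen's kappa: {kappa:.2f}")
```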
To put it all in context: papers that met all good-practice criteria according to our analysis were 16% in physiology, 12% in cell biology, and in plant science only 2%, and such image presentation clearly affects scientists' ability to interpret, and to build upon, one another's work.

Putting together all the knowledge I gained from the eLife Ambassadors team and the meta research project, I realized that there are also problems in my own field. I did my PhD in microbiology, on Aspergillus fumigatus, and it was noticeable that PhD students and early career researchers often discuss how, in many papers, the background strain of Aspergillus fumigatus is not mentioned, or is mentioned incorrectly. Once you do meta research on such a topic, you can put a number to the problem and communicate it much more clearly to the audience. For example, almost 20% of papers published on Aspergillus fumigatus never mentioned the background strain of the fungus, which makes the results of those papers hard to reproduce and less useful for the community in the future. With that, I would like to thank you for your attention; if you want to read more about our meta research paper, you can do so at this link, and you can contact me on Twitter. Thank you.

Thank you, Iuliia. It was nice to hear about this project; assessing images is quite relevant, since they're something we work with all the time. So, our final panelist is Kleber Neves. He's part of the coordinating team of the Brazilian Reproducibility Initiative. Kleber, please go ahead and share your screen, and over to you.

Well, first of all, thank you for the invitation. As mentioned, I work with the Brazilian Reproducibility Initiative. The context of the initiative is that a lot of meta research actually came about in response to the perception of a reproducibility crisis over the last decade or two. A lot of initiatives came from it, like many mentioned in the previous talks, and one kind of effort that became common was the reproducibility initiative: take a lot of published results and try to replicate them, repeat the experiments and compare the results. The Brazilian Reproducibility Initiative is one such replication effort. We're focused on a single country: we try to reproduce 60 experiments from Brazilian biomedical research in order to estimate the reproducibility of Brazilian biomedical science. The initiative started in 2018, so much of what I did in the last few years was trying to reproduce experiments done by other people. Of course, as part of the coordinating team, we don't actually do the experiments ourselves, but we coordinate the labs that do, we work on the protocols, and we try to get everything working. This is the workflow, the plan for the initiative: we had to figure out what methods were common in Brazilian papers, then we had to find Brazilian labs that were willing, and had the expertise, to reproduce such papers using those methods, and then we arrived at the selection of those 60 experiments I mentioned. These earlier steps involve a lot of meta research, like Tracey mentioned and like Iuliia did: you're looking at papers and trying to extract information from them.
But I want to focus on the last part, which is what's happening now: defining the protocols and actually doing the experiments, so we can do the data analysis and publish the results later. And here, I think, is where most people who don't do meta research can take lessons from it. I'm framing this in terms of "what if": what if I could go back in time and change the way we publish and the way we do science, to make my life easier while doing this? I think some lessons can be derived from that. I'll mention protocols, data, and documentation.

The first step for us, once we had the experiments we wanted to reproduce, was to take each one and extract a protocol from it: a step-by-step procedure that could be followed by the replicating labs. We decided to do that without contacting the authors, mostly for pragmatic reasons. Of course the authors have information that is not in the paper, but contacting them would take a long time, and previous experience showed us that most authors would not respond. So we decided to skip that step at this point and go only by the information available in the paper. That meant taking a methods section, which is usually written freestyle, depending on the journal and the author, and converting it into a structured protocol. This is an actual protocol from our initiative, and it looks like this: you have the steps that were described in the methods section, but structured in chronological, step-by-step order. There was also a lot of information that was not present in the original article, so we had to put those questions to the people doing the replications. And this is essentially about reporting: what would change here is having people report all the information that is actually essential. If you want to do the experiment, you have to decide what you're going to do, and if it's not reported in the original article you have to guess. An informed guess, of course, since you are also a scientist, but it's still not in the original article. What we used for this, and this is the lesson I take, is that thankfully there are lots of reporting guidelines. They started in clinical research, but now they exist for many kinds of experimental designs and even for specific techniques. For PCR, which is one of the methods we're replicating, there are the MIQE guidelines, which are essentially a checklist of things you should mention when you report your PCR experiment, to make sure all the important information is there if someone attempts to reproduce it. This was the basis for the questions we asked: we asked about the things that are in the checklist but were not in the original article.

The second thing I want to mention is the data. We needed the data from the original reports, the original experiments, for two steps. First, when we were writing the protocols, we needed to define a sample size: how large will the experiment be when we attempt the replication? That sample size calculation was based on the effect size that the original experiment reported, and we needed to calculate this effect size, to know how large the effect in the original was, so that we could choose an appropriate sample size.
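A minimal sketch of that calculation, assuming made-up summary statistics: first compute Cohen's d from the original group means and standard deviations, then solve for the replication sample size with statsmodels. The power target and all numbers are illustrative assumptions, not the initiative's actual figures.

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

# Hypothetical summary statistics read off an original paper (illustrative).
mean_t, sd_t, n_t = 12.0, 3.0, 8   # treated group
mean_c, sd_c, n_c = 9.5, 2.5, 8    # control group

# Cohen's d using the pooled standard deviation.
sd_pooled = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                    / (n_t + n_c - 2))
d = (mean_t - mean_c) / sd_pooled

# Sample size per group for the replication (two-sample t-test,
# alpha = 0.05, 95% power; the power target is an assumption).
n_rep = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.95)
print(f"Original effect d = {d:.2f}; "
      f"replication needs about {int(np.ceil(n_rep))} per group")
```

Basing the replication's sample size on the original effect is one common design choice; replication projects often power more generously than the original study, since published effect sizes tend to be overestimated.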
The second part where we needed the data is the data analysis at the end. If you're doing a replication, you want to somehow compare the results of your replications to the original ones, if you want to evaluate the published literature. To do this comparison we also need the data: we're actually running a meta analysis, which Tracey mentioned in her talk, of our replications, and comparing that to the original result. And to do that, I need to know the original values. The process of getting the data when it's not reported in a structured format is very tedious and imprecise, because we have to take the figure or graph where the data was reported and essentially count pixels. Of course, software does that for you, but you're essentially counting pixels and estimating. Based on the axes, you mark on the software "here's 100, here's 3", and the software estimates for you the value of each point and the value of the error bars. Depending on the resolution of the image, the thing you're trying to measure might be only two or three pixels, and you're trying to get an estimate from that. So it's very imprecise and very laborious. What we are doing in our own replications, and what I wish people had done if I could go back in time, is to report data in a structured format. We're using spreadsheets, and this is an example from one of our replication experiments, where you report exactly the raw values that come from the machine you're using, in this case a spectrophotometer measuring optical density. This is what we wish we had; it would have been much easier, and it's really easy to share structured data nowadays, with the many repositories you can see below.

The last thing is contacting the authors. We're doing this now, just to see whether the way we wrote the protocols is actually the way the authors would have done it, and now we don't have the pressure of time, so we can do it calmly. Whether the authors can answer our questions about information that was not in the article, "how did you do this thing that you did not report?", really depends on whether the documentation in their own labs is done in an organized way. Maybe they have this information in a laboratory notebook that is lost in some file drawer, or maybe they are using one of the platforms that facilitate the work of managing a lab and managing protocols. Depending on how easy it is for them to find this information, it will be easy or hard for them to share it with us. And this really depends on data management: you don't need to use one of those platforms, but you do have to have a system, so you can find information about an experiment that was done a long time ago. And of course, the last thing is that the authors have to be open to collaboration. They could just see our request for information and say "I don't care", and that really blocks any further effort; they have to be willing to collaborate. A lot of the things I mentioned before, like using reporting guidelines, using a platform to organize your lab, or sharing your data in a structured format, help with this.
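A minimal sketch of the pixel-counting arithmetic described above: calibrate a linear mapping from two known axis ticks, then convert a clicked pixel coordinate into a data value. All pixel coordinates here are made up; real digitizing tools automate the clicking, but the estimation works like this.

```python
# Two calibration points on the y-axis: (pixel row, data value).
# In image coordinates, the pixel row usually increases downward.
y_pix_0, y_val_0 = 400.0, 0.0     # axis tick at y = 0
y_pix_1, y_val_1 = 100.0, 100.0   # axis tick at y = 100

def pixel_to_value(y_pix: float) -> float:
    """Linearly map a pixel row to a data value using the calibration."""
    scale = (y_val_1 - y_val_0) / (y_pix_1 - y_pix_0)
    return y_val_0 + (y_pix - y_pix_0) * scale

point_pix, error_bar_top_pix = 250.0, 235.0  # clicked positions (made up)
mean_est = pixel_to_value(point_pix)
err_est = pixel_to_value(error_bar_top_pix) - mean_est
print(f"Estimated mean: {mean_est:.1f} +/- {err_est:.1f}")

# With this calibration, one pixel is worth 1/3 of a unit, so a 2-3 pixel
# click error already translates into roughly one unit of uncertainty:
# this is the imprecision inherited when data exist only as a figure.
```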
To me, all of that falls under the umbrella of openness to collaboration, because you're really making it easier for people to build upon your work. It's not just direct replications like the ones we're attempting: if someone wants to follow up on something you did, it's easier for them if you have already put all this information and the results out there. And my last point is that this is not just for others. Sometimes this is missing when people think about adopting these open science, meta research derived insights: the likely beneficiary is you, in the future. A month from now, when you discuss with your advisor and decide to change the experiment, you'll have to look back at your old protocol, and it's really good for you if you can find it easily, if it's well structured, and if your data is there in a form where you can easily re-plot or re-analyze it because reviewer two asked. So this is what I try to keep in mind when weighing the costs and benefits. That's my message. Thank you.

Thank you, Kleber. This initiative sounds like a lot of work, important work though, and I really look forward to seeing the results once they're out. With that, I'd like to thank all of our speakers for their talks, and we'll now proceed to the Q&A section of the webinar. Thanks for the questions we've received so far, and please continue to post them; the chat box is still open, so if you'd like to ask there you can go ahead, and you can also tweet to us at @eLifeCommunity using the hashtag #ECRWednesday.

So let me start off with a question from one of our attendees. I'm sorry, I'm not sure how exactly to pronounce the name, so I'm not going to try. The attendee wanted to ask Tracey whether she has received hostile or negative attitudes from scientists representing the status quo over her first publication, the one about bar graphs, and if so, how she managed it. So, Tracey, that's for you.

I would say that's a problem we were very concerned about while we were doing the study, because bar graphs are really standard practice. We did a follow-up study in 2018 where we found that among papers that had a data figure in peripheral and cardiovascular disease journals, almost 50% had a bar graph; it was the most common type of visualization that people were using. So we did a lot of work to prevent that kind of backlash, and we did that by sharing drafts of the paper with a variety of colleagues, getting their feedback, and revising and adjusting things before it was ever submitted. Once it went through to publication, I think the fact that we had data really helped us, with the statistical reviewers in particular. Lots of people have written papers on this topic, but the fact that we could show that 85 or 86% of papers in our sample had bar graphs for continuous data meant that it was clearly still a problem, and one that urgently needed to be addressed. I think many people who saw the paper felt that way as well, and the visualizations in the paper made a very powerful argument that resonated with people. So we took more of a preventive approach, and once the paper was out, the response was extremely positive. It circulated very heavily on Twitter, the paper was viewed more than 100,000 times in the first month after it was published, and most of what came to us was positive.
I'm sure there are people who didn't like it; maybe they just didn't talk to me.

Thank you so much. So I have a question for you, Iuliia. One of the things that we tend to do, perhaps as meta researchers, I mean as any researchers, is highlight things that were done badly or that need improvement. But are there examples of things you've come across that you think have been done well, something we might like to encourage?

Yes. Nowadays there are a lot of positive changes happening. For example, depositing and backing up the original results of an experiment in a repository is very admirable and helpful, also for interpreting the data. It's something I wish would happen more often, but it's nice that it's happening, and for the people who are starting to do it, it's extremely helpful. I'm also trying to adjust my own research to these reproducibility guidelines. For the people who are already doing this when they publish, it's extremely nice and admirable.

So you think the future is bright, then? Yes. That's nice to hear. Okay, I have a question for Kleber as well. This reproducibility initiative being done in Brazil right now is a lot of intensive work, but are there other reproducibility initiatives that have been undertaken around the world, and if yes, what have they shown, or what lessons have we learned from them?

Yes. The first initiatives of this kind were in psychology, so probably everyone here is aware of them, but the audience might not be: there's the Reproducibility Project: Psychology, and there are the Many Labs projects; they have three editions already, going into a fourth, I think, and those are in psychology. In the biomedical sciences, I'm only aware of the Reproducibility Project: Cancer Biology, which should be finished; if it hasn't been published yet, it should be releasing its results very soon. As far as I'm aware, ours was the first to take the country as the level of analysis and to try to look at the reproducibility of the science produced in one country. But there are indeed many, many replication projects that came before, and we borrowed a lot of their structure; we learned a lot from them, actually talking to the people who coordinated those projects. And you learn a lot even if the result is not very informative in the end: you learn a lot about what I just talked about, how to reproduce papers, how easy it is to reproduce them, and what we could change to make it easier. And just adding to what you asked Iuliia: I'm very forgiving when I see a bad paper, because a lot of this is very hard and laborious to do. Open science is not something where you just decide "I'm now adopting every recommended practice"; it takes a lot of work, much more work than people are used to. So it takes time, and a lot of learning about reproducibility is learning how to make it easy for people to reproduce, so that the incentive doesn't need to be that high for someone to attempt a reproduction. So I think all the learning is in the process, not necessarily in the number that's going to come out at the end.

Yeah, thanks, that's actually quite nice. I wanted to know:
I think we were talking about this some time back, that there was a study that tried to reproduce experiments in cancer biology. So could you talk a little bit about what's happened since then, what you think about it, and the major challenges they faced in trying to reproduce key experiments from the landmark papers of cancer biology?

Yeah. Well, their challenges are different. If you read the reports from their experiments, they were trying to reproduce whole papers. This is a design choice we made very early on: we're reproducing one experiment from each paper, and usually a very simple experiment, and the experiments are not chained, so one experiment doesn't depend on another. In the Reproducibility Project: Cancer Biology, they had a lot of problems when they couldn't establish the model. If you have a biological model that's very cutting edge and you can't establish it, the question of replication becomes really hard to interpret: what do you do now? Is it a failed replication because you couldn't even establish the model? You couldn't really attempt the experiment, because the experiment depended on having that model to test. They also had much more advanced and expensive techniques, so the problems they faced were, as far as I'm aware, much more along those lines. Our experiments are really simple; it should be easy to reproduce the experiments we selected, and it could be made easier by the things I talked about, so our problems are more about the reporting itself.

I see, thanks. So I have another question, which is for Tracey. How long does it usually take to plan and execute a meta research study, and can anyone do meta research? Would you encourage everyone to just go and do meta research?

How long it takes depends a lot on the particular study, and it also depends on the size of your team; it's important to adapt the scope of the project to something your team can actually do. One of the challenging things about meta research is that there are relatively few meta researchers right now, and it is very hard to find training in meta research. So I would really encourage people who see a problem in their field that they think meta research might help explore to reach out to someone who has meta research experience. If you're an epidemiologist by training, you wouldn't go running into a biochemistry lab and plan to design, write and publish a paper in two months when you've never picked up a pipette before. It's the same with meta research: it has its own skills, methodologies and toolkit that take time to develop, learn and understand, and working with someone who knows those things, particularly the ones relevant to the question you want to address, can be very helpful in making sure that you have a sound study design and that you will ultimately get the type of information you want from your study. And then, if you're doing a systematic review-type study, it's really important to remember that... sorry, go ahead. No, sorry. It's important to remember that most stages involve two independent reviewers, so depending on the type of study you're doing,
these aren't necessarily studies you can do on your own; you need more than one person involved, working together.

Yeah, it takes a village to do science anyway. I think that's a good point. Vinod has a question for the panelists: do meta research projects benefit from pre-registration? Maybe, Tracey, you could take this question.

I think they certainly can, and many are pre-registered. We often don't register ours because they're often more exploratory in nature; we may be looking at issues that others haven't addressed before, and we don't necessarily have everything finalized in terms of exactly how we're going to be able to look at the question. But there are other groups, particularly groups that do a lot of work with trials, that do pre-register their studies, and there are also some systematic review groups that use pre-registration. It's definitely something I would encourage people to consider if it's appropriate for their study type and study design.

I see, thanks. Another attendee has a couple of questions. The first is: what has the response of journals been to meta research articles and their suggestions? If you outline a suggestion or a policy change, are there journals that have actively made changes you've suggested, based on published meta research articles? I guess anyone could pick this question up, but maybe Tracey is the most experienced to answer it.

I can start, and then others may have other experiences to share. We have seen journals actively changing policies based on our work on data visualization, to encourage authors to replace bar graphs of continuous data with more informative graphics. But there's a big difference between changing a policy and having that policy actually implemented and enforced, and that's something that's much more difficult for journals to achieve. So one of the approaches we've been pursuing is to develop automated screening tools to make it easier to detect some of these things. I coordinate a group of people who have built all kinds of automated tools, and we've pooled those into a common pipeline that we've been using to screen preprints throughout the pandemic. We post the results from those tools as public reports through a website called Hypothesis, which is a web annotation tool, and those reports are also tweeted out via SciScore reports on Twitter. In terms of the response of journals, it's complicated. There are many journals that don't consider meta research to be research and aren't willing to publish these papers, but there are a couple of publishers, like PLOS Biology and eLife, that have really embraced meta research, and I've just recently started guest editing a new collection for Clinical Science as well, to make it easier to get this information out into journals that other scientists are reading. I think it's important that we publish these studies alongside normal science, to send the message that this is important for every scientist to know about and be aware of, and not something that should be hidden away in journals that only meta scientists read, because the goal is really to change and evolve how science is conducted.
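Purely as a toy illustration of what keyword-based screening can look like (this is not the group's actual pipeline, and the marker list is invented for the example), here is a minimal sketch that flags transparency-related phrases in a methods text.

```python
import re

# Toy transparency markers; real screening tools are far more sophisticated.
CHECKS = {
    "randomization": r"\brandomi[sz](ed|ation)\b",
    "blinding": r"\bblind(ed|ing)?\b",
    "sample size": r"\bsample size\b|\bpower (analysis|calculation)\b",
    "data sharing": r"\bdata (are|is) available\b|\brepository\b|\bOSF\b",
}

def screen(text: str) -> dict:
    """Report which transparency markers are mentioned in the text."""
    return {name: bool(re.search(pattern, text, re.IGNORECASE))
            for name, pattern in CHECKS.items()}

methods = ("Animals were randomized to treatment groups and outcome "
           "assessors were blinded. Data are available on OSF.")
for marker, found in screen(methods).items():
    print(f"{marker:>13}: {'mentioned' if found else 'NOT mentioned'}")
```

Of course, a mention is not the same as a well-reported method; a screen like this only flags whether a marker appears at all, which is why human review still matters.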
Changing how science is conducted requires an ongoing dialogue with scientists that goes both ways, about both problems and solutions, and about how we can implement better practices in a way that works for everyone.

Thanks, I think that was quite a comprehensive answer. Maybe I can also add that certain journals have been open to adopting checklists, for example for statistical reporting, and I think the Springer journals, for example, have also implemented a policy that you shouldn't publish bar graphs with error bars. So there is some adoption of these policies on the journals' end, but perhaps it's not as widespread yet as one would hope. I hope that answers your question.

Yeah, if I could just follow up quickly: checklists are very popular right now. I mentioned that one of the things about meta research is that we can use it to look at solutions, and unfortunately the meta research on checklists is not as inspiring as we wish it were. I showed the meta research study on the ARRIVE checklist at the beginning, where just randomizing authors to complete the checklist didn't really have any effect on the quality of reporting. And there are other meta research studies that also raise questions about the value of just publishing an editorial, changing a policy, or posting a checklist. I think there is some impact, but it's not nearly as much as we would like, and we still have a long way to go. So I think checklists without enforcement unfortunately aren't going to be the major solution to our problems.

Okay, so I'm going to ask a quick question to Iuliia. I want to know: given that before starting the eLife Ambassadors program you had no prior experience in doing meta research, what was your experience like? Was it a daunting task? Was it scary to try to do this project, and how do you think you fared while you were doing it?

Yes, in the beginning, when you understand the amount of work that has to be done and put into the paper in order to gather all the data, it's a little bit daunting, especially as an early career researcher working in a wet lab, where you have to split your time. But this experience helped me to educate myself, to look more critically at my own field, to be more reproducible and to create better-quality images myself, and afterwards I was able to abstract data from papers much faster. So even though a lot of things in science seem daunting for early career researchers, they tend to pay back very well, and I would encourage any early career researcher to at least start taking an interest in this topic. Clearly, good supervision and guidance during the initial steps from someone who has already done this are very important, in order not to get lost.

Thanks so much. I have a question for Kleber, which is: do you have any recommendations on adopting FAIR, that is findable, accessible, interoperable and reusable, compliant research data management plans for nationwide programs such as the Brazilian Reproducibility Initiative?

Yes, my recommendation is: yes, please do. We did. I think the larger the project, the more you benefit from making your data management very clear and having a plan.
We do try to be very open and adopt all the recommendations I just mentioned, so we're using the checklists and we are pre-registering every protocol. The protocols are not public yet, but they will be after the experiments are done, and much of the meta research we did earlier, as a setup for the initiative, is already available; everything is on OSF, and I can share the link later. We hope this makes it easier for someone later on, or even for ourselves if we want to do a second initiative and try to reproduce a lot of experiments, to learn from our previous one: the data is there, structured, interoperable and reusable. We're trying to follow our own recommendations.

Okay, thanks. I have another question, which is: is there a tool that authors can use before submitting their paper to ensure that it's more accessible in terms of reproducibility, and that it reports all the methods well? Maybe, Kleber, you could answer that: is there something authors can use now to check whether their paper is doing well?

Tracey's the expert and very familiar with those tools, right?

Yeah. I think the first thing you should do is look for a reporting guideline for the type of study you're conducting. If you're doing human studies, go to the EQUATOR Network website; they have things like CONSORT for clinical trials, STROBE for observational studies, and PRISMA for systematic reviews, and the ARRIVE guidelines are good for animal studies. There are also some options for in vitro studies, although EQUATOR tends not to list those. The second thing you can do: there's an automated screening tool called SciScore, which checks for certain transparency markers, and I believe if you sign in with an ORCID you can screen five or ten papers per year for free. If you'd like to learn more, we published a correspondence in Nature Medicine in January on our screening pipeline, and I'll tweet out a link to that afterwards from my Twitter account. It describes more about what the various tools in our pipeline look for, and once you know what we're looking for, it's pretty easy to tell, for example, whether your paper has a limitations section, among other things.

Thanks, Tracey. That question brings us to the end of this ECR Wednesday webinar. Thank you to all the panelists, as well as all of you attendees. We really hope you learned something new and enjoyed yourselves. We appreciate your participation and will be happy to hear your thoughts and feedback about the event. I posted a message in the chat with an email address to which you can send feedback, so please contact us at events@elifesciences.org if you'd like to ask follow-up questions to any of our panelists or provide us with feedback. We also encourage you to post about this event on whatever social media you use, making sure to use the hashtag #ECRWednesday and to tag @eLifeCommunity (this is mostly on Twitter). ECR Wednesday is a series of webinars, so it's not just this one; we hold one almost every month, and past editions are available online, so please go to eLife's website and check them out. We'll announce the next edition of ECR Wednesday very soon, so keep your eyes open for that; we hope to see you then. Until then, take care, stay safe, and thank you so much.