Welcome to my lightning talk about reporting and transparent research practices in orthopedics and sports medicine clinical trials. Just to let you know, it's 2:30 a.m. here, so anything that's less than perfect, that's my excuse. But let's start with the role of reporting quality in medical research. In general, medical research is done to improve healthcare for patients, and clinical trials should provide the information for that. So clinical trials are used to provide trustworthy, reproducible, and generalizable information for clinical decisions like treatments and diagnoses. And we need comprehensive reporting, as it is necessary to distinguish between trials with low and with high risk of bias, and therefore to estimate how likely study outcomes are to translate into clinical practice. This whole topic of comprehensive reporting of clinical trials is covered by the CONSORT guidelines, which are very widely used in the field and endorsed by over 580 journals. So this should be a really straightforward process, but still we regularly encounter things like this. This is a risk of bias assessment from a recent meta-analysis, and we can see that for most criteria there are some studies with low risk of bias and some with high risk of bias, but the vast majority of studies have an unclear risk of bias. And this is problematic, as there is simply not enough information presented in the paper to make a risk of bias assessment, and the clinician who reads the paper to inform their own clinical practice cannot decide whether the research presented is robust enough to guide that practice. This is a problem, and we wanted to see how researchers in orthopedics and sports medicine deal with these topics. And luckily we found that there are many reviews and commentaries around that are calling for more transparent research practices in the field, and this is good news.
However, most empirical references come from other disciplines, like in this example, or if they are field-specific, they are really narrowly scoped. So we decided to run our own study to provide a more comprehensive overview: a cross-sectional meta-research study on reporting prevalence and transparent research practices in orthopedics and sports medicine. We used the top 25% of journals in orthopedics and sports medicine as a sample and ended up with 163 clinical trials from 27 different journals. We looked at different criteria: pre-registration, open data, scientific rigor, and so on. All screening and assessment steps were done by two independent reviewers. Coming to the results: we found that authors usually report general information about methodology and rigor criteria, which is really good. But we also saw that only a few provide the essential details that are really necessary to do the risk of bias assessment in the end. We can see this in these examples. Almost all trials had a randomization statement somewhere in the paper, but only about half of them properly described the randomization method. And we see similar tendencies for blinding, for sample size calculations, and for other rigor criteria. We also found that data availability statements were only included in 12% of trials, and that no trials shared data in open repositories. Similarly, trial registration was only reported in about half of the trials, even though this is mandatory for clinical trials, and only 20% of trials were pre-registered. This is especially unfortunate, as our exploratory analysis showed that pre-registered studies are more likely to report information on randomization, blinding, sample size calculation, and so on. So what shall we do with these findings now? We propose three main directions in our paper and our preprint. One is education, but done differently.
I think there's a place for more concise, practice-oriented educational materials like this one from our preprint. The next possible direction, which is described in a bit more detail in the paper, is to create awareness, possibly with dashboards to monitor reporting and transparent research practices, and also assistance in the publication process for the authors themselves. We have great automated screening tools around, and they can be used on preprints or can also be implemented into the peer review process; the screening that's already done on preprints is a good example of this. Another option would be interactive writing templates; I know there's a trial going on by the EQUATOR Network, which is looking into this topic. So we have high hopes that we can introduce some change in the future. And this was my short lightning talk. You can see here the QR code of our preprint, so if you want to have a closer look, you're warmly invited, and otherwise you can also reach out via Twitter. Other than that, I'm looking forward to the upcoming discussions. Thanks. Great, thank you so much. And we do have a few minutes if anyone has a question they want to pop into the Q&A. Otherwise, we will move on to the next talk in just a minute. Let's see if there are any questions. All right, if not, then we'll move on to our next speaker, Céline Heinl. So Robert, if you don't mind stopping your screen share, and then Céline, go ahead. And thank you, Robert, for joining us at such a late time; I really appreciate it. Hello, can you hear me now? Yeah, I can hear you. Okay, perfect. Sorry for the delay. Okay, thank you for giving me the possibility to present our online platform, animalstudyregistry.org, here today. It's a pre-registration platform specifically conceived for animal research, and it aims to improve animal welfare and translation at the same time.
Since I thought that at this conference most of you are probably familiar with the concept of pre-registration, I will start with the animal part of the platform and talk a little bit about our motivation. As you know, animal experimentation is still heavily debated in our society, and society accepts the conduct of animal experiments only under the basic assumption that they will contribute to medical or scientific progress via some gain of knowledge. However, in recent years it has become more and more clear that an important share of animal experiments is actually never published, which of course represents a huge ethical issue. To quickly put a number on it, I would like to present two recent studies which tried to determine the publication rate of data coming from animal experiments. They both used a similar approach: they looked at animal study protocols, which have to be written by every scientist within the European Union who wants to conduct animal experiments, and they followed up these protocols and looked at how many led to at least one publication after six or seven years. In two German university medical centers, they found a publication rate of 67%, and in another study from the Netherlands, they found a publication rate of 60%. The Dutch study also looked not only at whether the whole protocol led to a publication, but at individual animals: of all the animals planned in the animal protocol, how many were reported in a final publication? Here the number dropped to 26%, so very low. This is of course a huge ethical problem. In addition, these scientists were also asked about the reasons why they did not publish results. The main reason was so-called negative results, and the second most mentioned reason was problems related to the methods.
So these animals are not only not contributing to a knowledge gain; they also later contribute to publication bias, because only the positive results are published. And this of course also contributes to an insufficient translation from experimental studies into the clinic. Here I just show the example of a meta-analysis from stroke research, where they tracked positive results from experimental studies up to clinical studies, and you can see a huge drop. Of course, this is not only due to publication bias; there are further reasons, including questionable research practices or problems with the study design or the reporting. But all these points brought us to developing this pre-registration platform, as pre-registration can address a lot of these problems: it can encourage the publication of all results, and it can also prevent questionable research practices like p-hacking or HARKing. So in 2019 we launched the animalstudyregistry.org website, which I invite you to have a look at. You can browse the already pre-registered studies via the search without the need to create an account. It's an online platform operated by the German Centre for the Protection of Laboratory Animals, which is part of the Federal Institute for Risk Assessment. So it's a governmental initiative, it's free of charge, and it's open to all scientists around the world conducting animal research. I would just like to quickly guide you through the registration process. After developing an idea, scientists can enter their study in our template, which also supports them in designing the study. It's mainly based on the ARRIVE guidelines, the main reporting guidelines for in vivo animal research, which are already endorsed by most journals. After filling out the form, it can be submitted, and after submission there are still two weeks in which scientists can retract or edit their study.
Otherwise, it will automatically be registered and receive a DOI, a digital object identifier, and it cannot be changed anymore. However, that doesn't mean it is immediately public, because authors can opt for an embargo period of up to five years, during which the study is only visible with its title, the institution where it is conducted, a short summary, and optionally the name of the author. If during this period there are any changes to the protocol, the registration can be updated anytime, comments can be added anytime, and links can be inserted to data repositories or publications, thus linking the outcome with the pre-registration. And we already see a steady uptake over the last months. However, the uptake still needs to be accelerated, and we're talking to stakeholders, research institutes, funders, and publishers to get them to value pre-registration. And of course also to scientists, because one major problem in biomedical research is that many scientists are still not aware of the possibility to pre-register their research at all. So here we really try to go to conferences and talk to scientists. With this, I would like to end; I've also mentioned some publications if you're further interested. Otherwise, I'm happy if you get in touch with us. I would like to thank you, and I'm also happy to take some questions. Thank you. Great, thanks so much. We do have a minute or two for questions if there are any, but I don't see any yet. Thank you very much for your presentation. So we'll go on to our third speaker, Alexa Tullett. Céline, if you could stop your screen share. Sorry. All right. Hi, everyone, I'm Alexa. Ten minutes is a short amount of time, so I'm going to cut straight to the chase. Much like sports medicine and animal research, psychology has its problems, one of which is that there's been growing concern about the rate of false positives in psychology.
So in our study, we asked psychologists about their perceptions of this problem. Let's see: 5% of our participants, who are practicing psychologists, said that the rate of false positives in the published psychology literature is acceptably low. 60% gave the middle answer: they said that the rate of false positives is somewhat higher than it should be and we should try to lower it. And then the true skeptics, the 35%, said that the rate of false positives in the psychology literature is much higher than it should be and we should take major steps to lower it. One solution to this problem that has been proposed is replication. As Brian Nosek and his colleagues noted in their 2012 paper, replication is a means of increasing confidence in the truth value of a claim. In this sense, replication can contribute to self-correction in our field, or so they claimed. But in order for this to be true, it has to be the case that psychologists actually update their beliefs in response to replications, and not all psychologists have equally positive, optimistic views about replication. For instance, one of our participants said: "Not all researchers are equally competent. People who are going nuts about the so-called replication crisis are entirely ignorant of the fact that people who come up with original research are much more competent than people who attempt and fail to replicate. That's why original researchers succeed and replicators fail. Those who can, do science; those who can't, fail to replicate." So this is a particularly curmudgeonly participant, but as we can see, some people are skeptical about how seriously we should take the results of replications. So in this project, we attempted to directly test this question and look at how much psychologists update their beliefs in response to replication evidence. I'll focus on three hypotheses here.
The first is that psychologists will update their beliefs in response to new evidence. So we suspected that psychologists would change their beliefs somewhat when they learned about the results of replication studies, but that they would not update as much as our Bayesian model would dictate. What we attempted to do was to model how much participants should update their beliefs given their priors and given the strength of the replication evidence. We compared their updating to this Bayesian model, and we thought psychologists might not go as far as the model would say they should. We also hypothesized that psychologists might not update as much as they predicted they would. So we had some of our participants predict how much they would update their beliefs given certain hypothetical replication outcomes, and we thought maybe people predict that they'll update a certain amount but, when actually in the situation, won't update as much as they expect. Briefly, the methodology I'll focus on for today: we had participants in the control condition, 572 psychologists, evaluate the results of original studies, specifically original studies that were slated to be replicated by various large-scale replication projects. And we got these participants to tell us essentially their actual prior. This is their belief in the effect that is reported in the original study, that is, the extent to which they think that the effect is likely to be non-trivial in size, a real effect. This is what they did in phase one. We also had participants in a prediction condition. These participants did the same thing as those in the control condition, except they also made predictions about how they would respond given different replication outcomes.
Okay, then about a year and a half later, once the replication results were in, we did phase two, and we assessed participants' actual posteriors in the control condition as well as the prediction condition. Here, participants are reading about the results of replication studies that replicated the original effects they evaluated in phase one. And now they're telling us: okay, given the results of these replication studies, what is your belief in the effect now? So that you have a sense of what it's like to be a participant in this study, I'll give you an example of a study that participants were asked to evaluate. This is what you would see in phase one: a description of the original study, including the citation, a summary of the goals of the research, a little bit of information about the sample, and then the key finding. The key finding is what participants are providing their priors for. In this case, the study investigated whether a deliberate, analytic processing style can be activated by incidental disfluency cues that suggest task difficulty. Participants attempted to solve syllogisms presented in either an easy- or hard-to-read font; the manipulation of font was an incidental induction of disfluency. The effect observed in the original study was that participants in the hard-to-read versus easy-to-read condition answered more syllogisms correctly. The effect size was a d of 0.64 and the p value was 0.051. So if you're interested in testing yourself, you can guess what you think the effect in the population is, or what you think the effect observed in the replication will be; that's what I'll show you next. So when participants come back in phase two, after they've told us what they think the effect in the population is at this point, we show them the results of the replication.
In this case, the results are that participants in the hard-to-read versus easy-to-read condition did not answer significantly more syllogisms correctly; the effect size estimate is a d of negative 0.03 with a p value of 0.43. So here the results of the replication were really inconsistent with the results of the original study and would suggest that our participants should adjust their belief in the effect downward. Now a brief primer on how we calculated the Bayesian posterior, which is what we said participants should update to, given various assumptions. To give you an example, imagine your prior, your estimate of the likelihood that the effect is real when reading about the original study, is that the effect is a d of 0.25, and you say the probability that the effect is more than 0.1, that is, more than what we defined as a trivial effect, is 85%. We compute a distribution based on those values, represented here in green. Then we also create a distribution that reflects the evidence from the replication, shown in blue: for example, if a replication found a d of 0.05 and a standard error of 0.05, you would see a distribution like the blue one here. To compute people's Bayesian posterior, the posterior they should arrive at if they adjust according to our Bayesian model, we basically combine those two distributions, creating a weighted average of the two, represented here in purple. In this case, it happens to give a probability of 27.2% that the effect is greater than 0.1. So the Bayesian posterior here would be 27.2%, a combination of the participant's prior and the replication evidence. Okay, let me walk you through this first graph here, which focuses on the control condition.
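As a rough illustration of the calculation described above, the worked example can be reproduced in a few lines, assuming the prior and the replication evidence are both treated as normal distributions over d and combined as a precision-weighted average (my reading of the "weighted average" in the talk; the study's actual model may differ in detail). The prior's spread is backed out from the stated belief that P(d > 0.1) = 0.85 when the prior mean is d = 0.25.

```python
from statistics import NormalDist

std_normal = NormalDist()

# Prior: mean d = 0.25, with spread chosen so that P(d > 0.1) = 0.85
prior_mean = 0.25
p_nontrivial = 0.85
prior_sd = (prior_mean - 0.1) / -std_normal.inv_cdf(1 - p_nontrivial)

# Replication evidence: d = 0.05 with standard error 0.05
rep_mean, rep_se = 0.05, 0.05

# Precision-weighted average of the two distributions (the purple curve)
w_prior, w_rep = 1 / prior_sd**2, 1 / rep_se**2
post_mean = (w_prior * prior_mean + w_rep * rep_mean) / (w_prior + w_rep)
post_sd = (w_prior + w_rep) ** -0.5

# Bayesian posterior: probability the effect exceeds the trivial threshold 0.1
posterior = NormalDist(post_mean, post_sd)
p_real = 1 - posterior.cdf(0.1)
print(round(p_real, 3))  # ≈ 0.272, matching the 27.2% in the talk
```

Under these assumptions the sketch lands on the same 27.2% posterior quoted in the example, which suggests the weighting is indeed precision-based, but that remains an inference from the numbers rather than a statement of the authors' exact method.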
This graph shows instances when participants should adjust downward. You can think of these as instances where replications failed, if you want a heuristic; that's almost redundant with when participants should adjust downward, but not quite. What you can see here is that the purple box is their priors. Then, with these failed replications, as I'll call them for the sake of brevity, we see that when participants read about the replication evidence, they adjust their belief downward. So they are reacting to the evidence provided in the replication. But the Bayesian posterior, in this third box, is almost at zero in this case. So participants, according to our model, should be adjusting much more than they do in practice. If we look at the control condition when participants should adjust upward, which you could see as successful replications, again as a heuristic, they start with their priors at just over a 70% likelihood that the effect is real. They then adjust upward slightly in the blue box, but they should have adjusted more according to our Bayesian model, which is very close to 100% for many participants. Then I'm going to show you the same information for the prediction condition, which has one extra box. What you can take from these next two graphs is simply that in situations where participants should adjust downward, they do so very similarly to the way that the people in the prediction condition predict they would. In situations where they should adjust upward, they in fact adjust more than predicted. So it's not simply that when you ask people hypothetically what they'd do, they say, sure, I'll update my beliefs, but then when actually asked to update their beliefs, they don't. They seem to adjust as much as or more than they would predict.
So to summarize: we found evidence that psychologists update their beliefs in response to new evidence, that they did not update as much as our Bayesian model would dictate, and that we did not find support for our hypothesis that they would not update as much as they predicted. To summarize even more: replications can contribute to self-correction within psychology, since people do adjust their beliefs, but our results suggest that psychologists either underestimate the evidentiary value of replication studies, distrust replication evidence, or perhaps some combination of both. Thanks. Great, thank you so much. We're out of time, unfortunately, but I'm sure if you have questions for Alexa you can find her. So we're now moving on to our next session. We're going to switch the panelists now and bring in the next panelists, so bear with us as we do that. All right, and the moderator for our next session is Brianne, so I'll hand it over to you, Brianne. Thank you, I'll just share my screen. So we're just waiting for one more panelist, but I did hear from Kleber that there is a rainstorm, and his power was going out, flickering lights. So hopefully... oh, they're there. So it looks like everybody's here; just give us another minute. Thank you to our European colleagues who are joining in the middle of the night. Great, I'll get started. So thank you for joining our session on Empowering Early Career Researchers to Improve Science. Our panelists today are Humberto Debat, who'll be talking about his role in Panlingua. We have Dr. Nafisa Jadavji, who will be talking about her role in Reproducibility for Everyone. We have Dr. Gary McDowell, who's working with Lightoller as a consultant and will talk about his work with Future of Research. We also have Kleber Neves, who will be talking about his role with the Brazilian Reproducibility Initiative. I'm the moderator today, Brianne Kent.
I was an organizer of the event that we'll be talking about today, which really inspired this panel, and I'm an assistant professor in neuroscience at Simon Fraser University in Canada. I'll welcome our co-moderator, Tracey Weissgerber, who is a member of the eLife Early Career Advisory Group and works at the QUEST Center in Berlin; she was the lead organizer of the event that inspired this panel. It is very late in Germany, so she's not going to give a formal presentation today, but she's joining us just the same, which is wonderful. So this panel was inspired by ideas generated during a global virtual unconference. An unconference is an unconventional conference, which tries to take the strengths and benefits of the coffee chats that happen at a conference and turn them into the main highlight. So we're really trying to promote discussion, debate, and participation in an unconference, instead of just having somebody give a talk and people listen, like what's happening right now. Our unconference brought together 54 invited participants, mostly early career researchers who had extensive experience in improving science culture and practice. The details of the event have been published in an article, and I'll give you that link later, but the results and outcomes of the two days of discussion are all posted in a preprint on osf.io, right here, and the preprint is called Empowering Early Career Researchers to Improve Science. So I'll just start by saying thank you to all the participants who attended the event, to the Wellcome Trust, which provided some funding, and a special thanks to my co-organizers of the event, Tracey Weissgerber and Constance, who hopefully is asleep in bed in Germany. So the unconference covered four main topics. The first was: why do we need early career researchers to improve science? The second was: what obstacles do early career researchers face when working to improve science?
The third was: how can others support early career researchers working to improve science? And we concluded with tips and strategies for early career researchers working on science reform, drawing from the experiences of the 54 participants: what worked, what didn't work, what do you wish you had known at the beginning. And when we say trying to promote science reform or trying to improve science, we really mean a broad range of different topics. Some groups are working on improving and modernizing publishing with open access journal articles. Other groups are working on the reproducibility of science. Others are really focused on changing the rewards and incentives. Other initiatives are focused on public involvement and promoting science communication. There are also those trying to increase diversity in science and make sure that there are more perspectives from around the globe, as well as those working on early career researchers' training and working conditions. So there's a wide range of topics we're referring to when we talk about science improvement and science reform. So why do we need early career researchers to be part of the reform efforts? Well, early career researchers are the future leaders. They're the most diverse cohort of scientists, much more diverse than their mid-career and senior colleagues. Early career researchers, because they're early in their careers, may be more open to new solutions than more senior scientists who have built their careers in the system as it is now. Early career researchers are also more often at the forefront of technical innovations, because they're actively doing the science. They're still at the bench, still seeing the innovation that's happening and being a part of it, at the bench, in the lab, in the field. And so they're really aware of where changes can be made and how improvements can actually benefit the science and how it's done.
Some early career researchers may also have the time and energy to put into research improvement activities in a way that more senior researchers and academics, who have many more responsibilities and commitments, may not. And importantly, early career researchers are the largest cohort of scientists. So if we are going to see improvements in science, if we are going to see reform efforts actually become reality, we need early career researchers to be a part of it. To learn more about the outcomes of the event, please see the preprint. We also have another document on osf.io with specific tips and tricks for early career researchers working to improve science, and we have an article which explains how we brought together scientists and researchers from around the world in an asynchronous virtual conference. So I encourage you to check out these resources for more details. But today, we have wonderful panelists who will each speak for about five minutes about their initiatives and their experience with early career researchers improving science. Oh, and just to note, please put your questions in the Q&A, not the chat; we'll have Q&A at the end. So first up is Dr. Humberto Debat, who is a research scientist with a permanent position at the Institute of Plant Pathology in the Center of Agronomic Research of the National Institute of Agricultural Technology in Argentina. Humberto studies the interface of viruses and crops from a systems biology perspective, and for the past year has worked on the Argentine project on SARS-CoV-2 genomics. Humberto is a member of the advisory committee on open science and citizen science of the Ministry of Science of Argentina, has been an ASAPbio ambassador, an eLife Community Ambassador, and an affiliate at the bioRxiv preprint server, and is a co-developer of Panlingua, a multilingual discovery and reading tool for preprints in the life sciences.
So Humberto, did you want to show slides, to share your screen? I don't have slides to share. Okay, perfect. Thank you, Dr. Kent, it's a pleasure to be here. So the majority of scholarly work in biology is published in English, a language most of the world does not speak. To help remedy this issue hindering inclusive scientific dialogue, we created Panlingua, a multilingual preprint search tool intended to enable search and global access to machine translations of all preprints hosted at bioRxiv. At Panlingua, users can enter search terms in their native language and view search results linking to the full text of all available articles translated into more than 100 languages. But language is just one of the barriers affecting global scientific communication, especially among our communities. Latin America represents 8% of the world's population, 4% of researchers, and 5% of global academic publications. 30% of our people lack access to the internet, 30% are poor, and 62 million live in extreme poverty. We are a region of asymmetries and contradictions, with tremendous disparities, culturally diverse, with one of the lowest levels of R&D spending globally. We produce awesome publicly funded science. Our salaries are ridiculously low. We are resilient, hard-working, creative minds. We are so poor. Doing science in Latin America is about passion, empathy, solidarity, community, and responsibility. As we wave our manuscripts around with our humble results, balancing visibility, affordability, and institutional requirements, we fight to disseminate our findings wherever we see fit, for months, sometimes for years, at a disadvantage, against all odds, despite setbacks. In addition, we are seeing a transition in the publishing ecosystem; the ground is moving. The advancement of the open access movement is a flag towards the democratization of knowledge.
Nevertheless, we perceive that this flag has been co-opted by some players in the industry, who have accommodated the business model in a way that could perpetuate the asymmetries of scholarly publishing and exclude even more researchers from scholarly communications. We are seeing a shift from paywalls to publish-walls. We are observing the preposterous inflation and expansion of the so-called article processing charges, which are not only unaffordable for our region; they are unethical. The discussion about APCs transcends open science. It's a discussion about contrasting views of academic communication. It's about privilege and social justice. It's about inclusion. To endorse APCs is to view scientific knowledge as a commodity rather than a human right, rather than a public good. Brazil spent $36 million between 2012 and 2016 on APCs, equivalent to the cost of providing sanitized water for a year to more than 8 of the 77 million Latin Americans who do not have access to drinking water. I live in a country where 80% of scientific activity is financed with public funds and where 40% of the population is poor. Our students have incomes below the poverty line. In this scenario, it is a scandal to pay exorbitant figures so that five publishing companies that exercise an oligopoly over an important market continue to accumulate wealth. We are waiting to see where our 5% of articles will end up and how this figure may diverge. As ECRs, we need to be a part of the shift in publishing practice, encouraging the non-commercial routes of academic communication and supporting the development and maintenance of communication infrastructure led by and for the academy. It is becoming evident that many journals reflect anachronistic 20th-century pre-digital platforms: elitist, reserved for certain affiliations, valuing mostly mainstream science, a venue for the few, a chat among privileged and highly funded researchers. Many white, many rich, mostly men.
I am failing to perceive how the academic publishing ecosystem values diversity, which routes they plan to take to modernize their journals, how they are working to make their venues more inclusive, with more gender balance, to be platforms that embrace more voices from the South, where science is a global conversation. Beyond publishing, our funds should be redirected away from the resources spent on subscriptions. Eleven Latin American countries spent $100 million, mostly public funds, on access to academic journals last year; 80% corresponded to the five largest commercial publishers. We are giving millions to reach scientific knowledge that should be free. We should immediately cancel the waste of research funds on leonine contracts with the commercial publishing industry, which has sequestered our scientific legacy. This issue transcends academic publication and involves research assessment practices. We are affected by monopolies of the indexing systems and bibliometric indicators that unfailingly accentuate the dichotomy of mainstream and peripheral science, resulting in, as Cameron Neylon says, "excellence" in research as a neo-colonial agenda. I think we have an opportunity in this context to break the vicious circle that commercializes evaluative cultures, to favor circulation indicators over journal metrics, and to redirect our indicators toward the public impact of our work, beyond impact factors. We must remember that our research agendas must be aligned with the prosperity of our region, and not with the imposition of a recipe defined for success in another market. We are not in academia to accumulate publications in journals to advance our careers as individuals. Science is a collective enterprise that has to look toward society and understand its demands for knowledge. The way to strengthen our communication system is to align it to our society, to our needs, to our history. The real impact of our research has nothing to do with rankings.
Our communication system is strengthened if it is rooted in society, to the extent that it generates inclusion and well-being for our people. And that, I think, is where ECRs should lead the wave of change. Science is a shared enterprise, a global endeavor enriched by the multiplicity of visions, realities, and languages. Everyone benefits from the development of a more inclusive ecosystem, and seamless international scholarly discourse is a real possibility. Many barriers are stopping this utopia. As ECRs, we have the opportunity to transform research culture. Let's embrace this responsibility. Thank you. Thank you, Humberto. So our next panelist is Dr. Nafisa Jadavji, who is a neuroscientist and assistant professor at Midwestern University and research professor at Carleton University. Her laboratory investigates how the brain responds to different biological processes throughout the lifespan, and specifically how maternal nutrition contributes to offspring neurodevelopment, neurological disease, and aging. She is the chair of the advisory board for Reproducibility for Everyone. Thank you for joining, Nafisa. Thanks, Dr. Kent, for that introduction, and thanks for including me in this panel. Are you able to see my slides? Yep, looks good. Perfect. Thank you. So I'm going to speak a little bit about the Reproducibility for Everyone initiative that I've been a part of for a few years. Dr. Tracy has been part of this initiative as well. Reproducibility for Everyone is a community-led education program where we try to increase the discovery and adoption of reproducibility tools at scale. What we do is run workshops at different scientific meetings and at different institutions to educate individuals about reproducibility tools. Since the initiative started in 2018, we've had 100-plus volunteers who have run over 50 workshops across the world. These are international workshops that have included over 3,000 participants.
So we've been really active in getting this information out, in terms of the different tools that can be used by researchers, early career researchers specifically, as well as mid-career or late-career researchers, in implementing tools in their research laboratories. And why we started this initiative was that a lot of things were missing from the reproducibility discussion in the biomedical sciences and in other science fields: the majority of researchers were being left behind. In terms of scaling these initiatives and the different tools that can be used to be reproducible, there was a missing link, and a focus on how an individual researcher's own work can benefit was also missing. What we wanted to do was include innovative ideas that could easily be implemented by researchers who attended our educational workshops. One thing that we end our workshops with is, you know, we present a lot of information, but we also ask researchers: I know you're overwhelmed, we shared a lot of ideas, but take one thing and try to implement it in your daily research program, something like an electronic lab notebook or writing up protocols, to help move your research forward in terms of being more reproducible. What our workshops do, which I've hinted at a little as I've been talking, is provide an overview of different open projects that researchers can get involved in. We try to keep our workshops to about 30 to 90 minutes, and we try to target a really large variety of audiences, in a number of fields, from the plant sciences to the biomedical sciences, and we try to reach all the different career stages, because anyone can implement the tools that we discuss in our workshops, if that's something they want to do in their research program.
We recently published our work in eLife, outlining our Reproducibility for Everyone initiative and what we do and how we do it. All of our workshop material is freely available on our website, and individuals can also watch workshops that have been recorded. We often get instructors or facilitators for our workshops who were prior attendees. So if you're interested in getting involved or learning more, please visit our website. Don't be shy; we're always looking for people to get involved and volunteer in different aspects of our initiative. We have some funding from CZI, as well as other sponsors that I'd like to thank for supporting us throughout the years and currently, letting us do our work and have a permanent staff member who does a lot of the infrastructural work. So thank you very much, and I look forward to your questions. Oh, I was muted. Just a reminder that if anyone has any questions, please put them in the Q&A box and we will get to them at the end. So next up is Dr. Gary McDowell, who has a background in biomedical research and co-founded the non-profit Future of Research, which seeks to advocate for and with early career researchers to achieve systemic change in the academy. He ran Future of Research for three years full time and has now continued working to help future generations of researchers reach their potential in his new role as a consultant, providing expertise on the early career researcher population to organizations and providing early career researchers with strategies to effect change. Welcome, Gary. Thanks, Dr. Kent, and thanks to everyone here, so great to see you. So I have been involved with advocating for and with early career researchers for getting on for eight years now. First, as Dr. Kent mentioned, in the non-profit Future of Research.
And that organization sought to communicate the issues faced by early career researchers in their academic environments and propose solutions to overcome those problems. We did this primarily by hosting conferences and workshops: we would gather lots of people in a room, talk through the problems and issues people were having, and then try to come up with solutions, strategic ways of overcoming those problems. Then we as an organization would take those things forward and communicate them to stakeholders such as funding organizations and universities. I continue to work in this space as a consultant, as Dr. Kent said, more specifically helping organizations think about how to better serve this population, and generally working as a freelance academic thinking about grad students and postdocs. So on the issue of including early career researchers in science improvement: for me, this is really an issue of representation. Most of the organizations and institutions that hold power in science and in the research enterprise have their most powerful committees and structures dominated by faculty, dominated by senior faculty, and also dominated by faculty from a select group of institutions, and so they are not representative globally, even within countries, even of all faculty. In order to have a realistic sense of what research looks like, in order to make decisions in those structures, it's really important to have representation, and that includes across career stage, particularly thinking about the people who are, certainly in biomedicine, which is my background, at the bench doing the research: what the day-to-day looks like for them and what their environment is like in trying to succeed as scientists, succeed as researchers, and take the science they're working on forward. It's really important that they be in the room.
And so a lot of the work of Future of Research was trying to get more representation of people into those powerful places, into those rooms. And as someone who sat in some of those rooms myself, as the first young person on some committees, it's really quite scary what misconceptions there are when those voices are not there. So representation is very, very important. I think one of the key lessons that came up for us was the need to have a broad ecosystem of people effecting change. One of the things that was really useful in having someone in a full-time role like mine, someone outside the academy who had left the traditional structure and was no longer subject to concerns about existing in that structure, is that I'm able to speak much more freely about what people are experiencing. I often try to speak from a place of gathering data and communicating that data, very data-focused, and that has been really helpful in trying to communicate things. I think it's important to consider having people outside the academy and within as one vector of thinking about who in the science ecosystem is involved in these conversations, and that has certainly been really helpful for a lot of the things we were working on. I think it's important, too, to think about a broader connection of all kinds of people in the system trying to effect change, including early career researchers, because we find that not only is there the problem of general turnover of grad students and postdocs in temporary positions, sometimes even in the faculty, but within advocacy there can be quite a high rate of burnout. So trying to spread work across numerous people and numerous organizations, to collaborate and work together and share ideas and share knowledge, is really important. And that's actually why the unconference that we all participated in was such a great event.
I was able to learn so much and hear about what was going on elsewhere. And really, I think this is the next phase in the landscape of early career researchers effecting change. I think there are a lot of groups now who have gotten to grips with the issues of establishing themselves, of setting up and starting to get their foot in the door. And now I think it's important for us to think about all of us connecting together better. Certainly, being in the US, researchers are told to be very independent, and there's a big drive to work independently, so I think it's hard for a lot of academics to think about changing the system with other people, together. So I think we have to think about that a little more. Thanks so much for having me here. Great. Thank you, Gary. And our last panelist is Dr. Kleber Neves. I'm so glad that your internet hasn't been affected by the storm. Dr. Neves is part of the coordinating team of the Brazilian Reproducibility Initiative. He has a PhD in neuroscience and a bachelor's degree in biomedical science from the Federal University of Rio de Janeiro. His neuroscience research focuses on brain evolution and complex networks. Since 2018, he has worked on the Brazilian Reproducibility Initiative and on the No-Budget Science Hack Week, as well as on many meta-science research projects on issues relating to reproducibility, preprints, and translational research. Welcome. Yeah, thank you. Thank you for the invitation. I was lucky; the lights went out like five minutes before this started, but it seems to be fine now. Can you all see the screen? Okay. So, well, thank you for the invitation. Brian mentioned that I'm from Rio de Janeiro, and I've worked in meta-science in two main initiatives. One is the Brazilian Reproducibility Initiative, where I'm part of the coordinating team.
And, well, this is a replication effort, much in the model of the ones that came before, like the reproducibility projects in psychology or cancer biology, where we're gathering multiple labs to reproduce published experiments, pretty much in line with the previous panel about big team science. It's one of those, and it's ongoing. The other initiative, which has a meta-scientific bent, is No-Budget Science, which started before we used meta-science as a term, as a name for this, but we were discussing what would be meta-scientific issues at the university back in 2015-16. This eventually evolved to become a more training-focused initiative when it became a hack week, which it is now; we just had the third edition last month. This is people gathering for one or two weeks and trying to develop projects on meta-science, or tools to improve science somehow, and it has been going on. Both have generous funding from the Serrapilheira Institute, which is a private funder of science in Brazil. So regarding early career researchers, No-Budget Science is more focused on training and is more directly related to the issues we discussed in the virtual brainstorm paper, the unconference. But the insight I want to bring, the lesson learned from these two initiatives, is something we mentioned in the introduction: that ECRs, postdocs, grad students, or even, in our case, the PIs that are part of our collaborating teams, are actually very young. These are the ones doing the experiments and discussing with us, the coordinating team, the nitty-gritty of the protocols. So these are the people who are actually caring about and implementing all the recommendations we make in terms of reproducibility. And No-Budget Science is very focused on training, and usually, again, as mentioned in the introduction, the people who have time to engage in training in new fields are usually ECRs, right?
And on the one hand, having ECRs be the ones engaged in improving science and in meta-science is great, right? Because they are the future; they hopefully will be here for a long time, and this early exposure to the issues in meta-science will have a long-lasting impact on their careers, and they will hopefully spread that to other people. That's a great strong suit for ECRs. But on the other hand, these people who are collecting the data, and who care about improving data collection and how we do science in general, are the people who are lowest in the academic hierarchy. They have unstable jobs and not much certainty about the future; they don't even know if they're going to be able to remain in academia. This reality varies from place to place, but I think there's some universal truth here. Of course, this is all based on my impression; I'd love to hear about data on this. And one thing we do see in particular, which has made me think a lot, is that at No-Budget Science people come for those two weeks where they're doing the event, the hack week, and it's very collaborative, and everybody's on Zoom all the time because we're doing virtual events now. During those two weeks it's great and the projects move very fast, but after those two weeks, people often just drop out of the projects because they have other priorities. So this has made me think about the survival of meta-science as a field, or of the movement for improving science, if you don't want to talk about the discipline itself.
But this whole motivation to improve science, if it really depends on the motivation and free time of ECRs, is not really sustainable, because as long as meta-science activities are not rewarded, when push comes to shove people will prioritize the things that get them jobs and papers and publications, the things that actually make you advance your career in academia, or even the things that give you opportunities outside of academia. I think this ties into one of the first panels this morning, my morning, which was talking about how we should go about institutionalizing meta-science. Regarding the institutionalization of meta-science, I don't think it's necessarily a good idea to create meta-science departments, but how meta-science becomes a more mainstream part of the academic structure will in large part be determined by, or will determine, how ECRs come into more permanent positions and how they become a part of academia in the long term. So I'll end on that note, and I ask you to confirm or disconfirm my impression that this is the case; I don't know if there's much data on that. So thank you. Thank you so much. So now we'll turn... well, actually, first I do want to give Tracy an opportunity to say anything or add any comments before we turn to the Q&A. Yeah, I think... thank you to all our panelists for joining.
I think our goal was to give you a sense of some of the various things that ECRs are working on and the power that ECR initiatives can have in changing so many different aspects of science. But I would also ask you to remember that this is a really small portion of the participants in our original event, where we had a lot of dynamic discussions about not just what people are doing but how they are doing it, and what we can learn from other ECRs who have been successful in founding initiatives, leading initiatives, and building communities around their initiatives, often in very difficult circumstances, and often these are very widespread or global communities, like, for example, Nafisa's example of Reproducibility for Everyone. So I would really encourage you to use this symposium to ask questions about how to do those things and not just to focus on what it is that people are doing. Great, thank you, Tracy. So we have one question that came through the chat, but I do encourage attendees to put their questions in the Q&A box so that everyone can read them. This question is for Dr. McDowell. Dr. McDowell, you made the point that we need to get more organized in the way that we advocate for change, for example across the various ECR or open science initiatives and organizations. In other industries, unions are a powerful mechanism for organizing individuals to take action collectively, yet we don't have a union for the global research community. Is it time we create an open science union to advocate for progress and researchers' interests? That's a great question. I'm very pro-union generally; unions have great power in effecting this kind of change. I think this gets to the point that we don't necessarily need to reinvent the wheel: there are people in this world who are very effective at pushing for change and advocating for change, and it would be great to use those structures and lessons in our own context.
One thing this makes me think of is that I'm really disappointed with a lot of scientific societies right now in this space, because I'm a member of some societies that have been advocating to block my access to papers because I don't work at a university. I don't have institutional access to publications, so I have to get them through Sci-Hub, essentially. And I find myself in the weird position, in the States in the last couple of years, of advocating to the Trump administration, which was one of the greatest champions for open publications in the States, because of the pushback against it from scientific societies and publishing organizations. So it will be interesting to think about whether there is a need for a society that is focused on science rather than on academics and on protecting academic interests. I was really struck by something Humberto said: that the point of science is to solve problems and to do this research; it's not to publish papers, and it's not careerism. And that, frankly, is a major reason that I left, because that is sadly the direction I see things going. So I think this is a super important point: what organizations should there be, broadly? It could be a union, but is there a problem also with the structures that we have? Thanks, Gary. So another question now is for Humberto, sorry. Which obstacles do you think affect ECRs specifically in Latin America? Like, what are the obstacles that are more prevalent for ECRs in Latin America?
Well, during my short talk I mentioned poverty, which is a very important issue in the region that broadly affects all our life activities and puts our role as scientists in a different perspective. As Gary mentioned just before, we have a very important responsibility, because we are financed by taxpayer money in countries where there is a real lack of funding and budgets for social and health needs, and so on. So every time there is some funding to do research, you really have to think about what you should be doing with that money: how is that research going to help your society, the people where you live? And of course, during the pandemic that has grown a lot. We have joined forces with many ECRs in our countries, in different regions, to try to prevent the loss of life. So it's a different situation: before the pandemic we were thinking in various separate ways, without a real community, without integrating our capacities, and I think the pandemic has helped us understand that we are on the same path, that we all together want the same thing, which is to try to generate knowledge that is useful for our societies. So in that sense, I think one of the main issues we have now is trying to convince our leaders that investing in science is the best way to try to solve the problems of society. A lot of the effort we have to make is to push the authorities to provide more funding for our activities. And ECRs being the most prevalent researchers all over the world, as was mentioned before, we are most of the people doing science, so we should join forces to push for an agenda where funding is a reality and we can solve problems in our societies with specific budgets in the long term. Thank you. Our next question... Sure, I just want to say that this is something I think about a lot: that
meta-science is very focused, at least so far, on how we do science, right? On how we get the answers, and how we get answers that we can trust. And I think the whole point about useful knowledge, knowledge that is relevant to society and to local concerns, is really about the what: what questions do we want to answer? Of course we need good answers, but there's a reflection that comes before that; it's about the contents, not the method. You know, that's such a good point. So our next question is for Tracy. I was watching your presentation on metrics today, and it seems that follow-up of students is good in your initiative and they finish the project. I think that has a huge overlap with the questions raised by Kleber. How do you and Kleber think that ECRs could be more consistent with these initiatives, No-Budget Science and your initiative with QUEST? How could ECRs be more consistent with these and follow through? Okay, I'll briefly provide some context for those who were not in the metrics webinar earlier today. Essentially, I started an initiative through the eLife Ambassadors program called the eLife Ambassadors meta-research team, where participants learn about meta-research by working together to design, conduct, and publish a meta-research study. I've since translated that into a six-month course that I run in Berlin with students from four different Berlin universities. I think one of the things that's really important in the success of our projects, and the reason students are able to finish, is that they're all working together on the same project. If students are individually working on an idea, they get to a point where they realize they don't have the resources; it's going to take a larger team than they have, and at that point it becomes difficult to proceed with the project, especially if they don't have outside support from their supervisor or others in their lab group or community. And so part
of how our initiative works is the fact that everyone agrees coming in that we have this goal: to design, conduct, and publish a meta-research study. It's also that, because participants are often working in small groups, no one wants to be holding the group back, so everyone is motivated to keep working and keep moving their part of the project forward, and they see other groups doing the same thing in our meeting every week. We also use other ways of motivating: we've done blog posts and other things so that students have an early output to share the work they're doing or the things they're learning, and students have in the past run conferences where they can share abstracts and start to talk to other scientists about their meta-research. So those are some of the strategies we use to keep people motivated and moving forward in our initiative, and I think Kleber can perhaps address the question more related to No-Budget Science Hack Week. Yeah, I actually heard about your talk in metrics, because other people from the group were there and said it was very relevant and I should have been there. But I was actually trying to learn from you, because we actually have very, very bad follow-up, and we are now experimenting; this is the third edition, we always change the format a bit, and we're experimenting now with regular meetings, so one month after, two months after, we would try to follow up closely with the projects to see if something comes up, if people have a longer-term commitment. But I'll know if this works in six months, maybe. Yeah, I think the group mentoring approach can be a very powerful one, as well as emphasizing to people from the beginning that they will need a team, and to think about who they might be able to get and encourage to join their team. So I'll share a link in the chat in a couple of minutes for a paper that talks
about our initiative for running a participant-guided, learn-by-doing meta-research course, which has more details for those who are interested. Thank you. So the next question, I think, is directed at Nafisa. I am a young neuroscientist, and I have the impression that the scientific endeavor has become about selecting convenient data to publish convenient stories. I think venues like this conference are very useful, but I have the impression that tomorrow I will go to work and life will continue to be about finding significant differences and putting aside what does not go in that direction. How can we involve academic leadership in the direction of open and reproducible science? That's a great question, thank you, Danielle. So yes, the academic currency is publications, right? And there are journals that are very pro significant differences and telling a story. But I think there is a movement toward publishing negative data and data that is not, you know, significant. Unfortunately, this movement is really slow and there's a lot of resistance to it, because we want to cure X disease, or, you know, insert your field of study here. So it's definitely a hard place to be as a trainee, when you're in a lab that maybe is pushing for publications and positive data and significant differences. But what I would recommend trying to do is to build a network of people who might be open-minded toward a different viewpoint, and to get them involved potentially with your work and to get feedback. What I always advocate to my trainees and other trainees and students is to build a community or a network of people and to get feedback on how you can sort of start this change within your area, or within your school or program, or wherever you may be. I mean, it can seem very daunting, because the scientific world is very old and some people are
very set in certain ways, but I think that if you're able to make some change where you are, then that's great: those are some people who were affected by your way of thinking about data, by the idea that there doesn't always have to be a significant difference. This is something I've tried to do in starting my own group, to foster this way of thinking: yes, I am the PI and my name goes on the papers and the grants and such, but we have a discussion about the data and what it means, and we all work together. I'm at the bench, students are at the bench, staff are at the bench. It's my little lab, but I'm training students who are going off to do their future training in other places, so hopefully I'm planting these little seeds of change, even in an area or a field that is very dominated by finding significant differences, because I feel that too; I'm also a neuroscientist. So hopefully that answers your question or gives you some feedback on this very touchy subject. Speaking of touchy subjects, the next question is not directed at any one individual, but I'm sure several of you may have thoughts. Do you think there is a selection effect where early career researchers who stay in academia and become senior researchers are the ones who are least bothered by the system, as people like yourselves either leave or pivot into meta-research? How can we strategically get reformers into positions of power? Would any of the panelists like to tackle that? I will give it a go, because I have given this a lot of thought. I think there's a broad distribution effect, where the simple answer could be yes, but there are a lot of caveats here. The first is that there's a phenomenon in business research, actually, that among people who resign from organizations there can
often be a common factor that they are the people who care the most about it and just cannot persist in that space any longer they care so much and I think that there's certainly that element that played in for me but I think there are people who try to find a way of um uh what it is that their values align with um and whether they can affect change based on what it is that they can see being able to do uh and if you just said change is very slow in the academy which is true and I think this is a big part of why I just didn't have the patience for that and so that I think I think all of these factors come in for for what determines but I think it it's it raises an important point that some of the things I have heard uh that are the most awful uh about like how the academy shouldn't change have come from very junior people um junior faculty and because there are a subset who are super bought into the system because they have succeeded in it and there's a lot of this it is important not to try and like have a broad brush of oh all junior faculty do this and all postdocs do that and all these people do this because there is there is a great spectrum of people bought into it um and people not bought into it and I think you will have some stay and some go um I think it takes it just takes a lot of persistence to stay in um if you also see the system is broken and so I have my favorite faculty to work with are the ones who are who are like really see the problems and then are also trying to figure out how how to deal with that but there are some who it's I think I think it helps a lot of people not to be bothered by it or you know that they just are bought into the system and that's their way of dealing and so I think that that issue can certainly arise um and I had when I was leaving I had people say to me oh it's such a shame that you're leaving um and you know you should stay or whatever and I was like well I don't really want to but also I think there I think there 
is a again this speaks to the earlier point of I feel I'm more helpful outside than I would be inside I feel I would have been a terrible PI trying to have like mentor people and do work and also try to do this I don't think I'd have been very good at at any of it and so I think better distribution of labor and supporting each other is is key there and so um yeah but but again getting and then we need to help the people who are still in the academy who are good people to get into those positions of power I think this is all again the reinforcement on the the the network needing to be to be strong yeah and like Gary said there are people at every career stage who are fighting the good fight you know there are there it's not we're here talking about the role of early career researchers but so much of what early career researchers can do is really working with some people who are in these positions of power so certainly you have people bought into the system at every stage and you also have people actively using their positions of power to also promote change so um I think that was like a point would anyone else like to speak to that or should we go to the next question um I can briefly add a point Humberto a very cute dog that was not my point um my my point was that the the key thing that we kept hearing over and over in the young conference and you'll see it prominently placed in the pre-print was people were saying over and over this work that I do to improve science is not rewarded it's not incentivized um and it can actually it's actually in some cases perceived as a distraction that takes away from my ability to get grants and published papers and until that changes it's going to be very hard to keep people in the system um outside of psychology it can be very difficult to find journals that publish meta research it's hard to find funders that will fund it there are lots of people who don't think it's research and there are very few um places where meta 
researchers can get jobs within the scientific system and so I think solving that for not just meta researchers but people doing all aspects of science improvement is really critical to change I also think that just finding um if you can a space in the system where you can make change is really important for me I started doing meta science because I was frustrated by people using bar graphs of continuous data just constantly and so we published a meta research paper on it in 2015 which went viral very very quickly and contributed to policy changes in a lot of journals and for me that really just shifted my horizon of what was possible um and over time I became less and less interested in the more physiological research that I was doing and more and more interested in you know not just how do I publish this paper but how do I do more meta science in a way that I can see it having an impact that I can see it changing journal policies changing laboratory behavior changing the way that authors think about and engage with their data and so I think if you can find a place within the system you can that will help um some of these feelings of discord and if you don't find that place within the system or it's just not something you want to look for and you would rather be Gary and help to work to improve things from outside I think that is also amazing um Gary needs friends and we need more people like Gary so I don't think there's any shame in saying you know I need to leave the system in order to be able to do what I want to do I think if you need to leave the system do what you want to do just do that well said we need more Gary's um so I think this will probably our last question and I think it's a good one to end on is can participants talk about how getting involved in reform and or meta research has benefited their careers we talk a lot about how advocacy work pushing for reform is seen as a distraction it's not incentivized but does anybody have an example of how 
actually participating in this work has helped their career I I can't say no but go no if you said please go yeah no no please I want to hear from the panelists um I think my work with reproducibility um for everyone has really when I started my independent research group has really helped me set up an infrastructure where we can do reproducible science I hope that's my hope but it's been only a few years and so I think that's been really great the friendships that I have made being involved in these different initiatives have been amazing um and the support and the different opportunities that have presented themselves have I think been really have enriched my scientific career and I'm grateful for what I've you know what I've contributed and what I've also you know received I guess as being a part of these initiatives so yeah thanks thank you absolutely I think for me it's growing the network which has led to new collaboration scientifically new grant applications invitations to things that even though often but by some people it is seen as a distraction from science I certainly have had the same experience of Nafisa and there are new opportunities that actually do help my science by just building my network any other examples um I mean I was I was just going to throw in I mean the most obvious tangible example was the work I did as a postdoc ended up in me getting a three-year grant to set up and run the future of research um and you know I've continued to work in this space um and I it's funny all the things I enjoyed about being an academic um publishing papers applying for grants getting rejected from grants which happened recently you know I can you can still do that outside of a university you can still get rejected from the grant and um but you you know you can still do I think this is part of the the interesting thing is that people think the academy is the only place to do things whereas the academy is a structure you know I've worked in the academy in a 
non-profit in a for-profit but essentially an LLC a business setup and you know all of these are just ways of of having a structure in which to do this kind of work and you can still participate in doing all of these kinds of things and I collaborate and I'm still part of all kinds of networks because of this kind of work so my career you know I used to work on frogs and I you know nobody knows what my frog work now and like I have this completely different academic career um that's based in in all of this great stuff so um yeah it can it can take you in a new and more interesting direction is my perspective yeah great well thank you this was a nice note to end on I think something positive um so thank you to our panelists uh thank you Tracy for joining uh this was really a wonderful discussion uh and it was great to see everybody so thank you and thank you to all the attendees um who and ask really great questions and took the time uh to join us today thank you