Welcome. Good morning, good afternoon, good evening, everyone. We are delighted to have you at the Metascience 2021 conference. We're so pleased that so many were able to sign up and attend this meeting. The purpose that we are excited to help promote is fostering connection, communication, and collaboration across the many different communities that are interested in the process and conduct of science: how we do it, how we can do it better, and what the implications of different changes might be. There are people within particular research disciplines that are looking at that problem. There are people who are observing different parts of the scientific process. There are stakeholder communities that are engaging and acting on that process as funders, publishers, and societies. And there's just all of us who are interested in how it is that science works and how it might work better. So we're very pleased that we can have all of those different disciplinary and stakeholder communities, along with huge diversity by region, by social identity, and by interest area, participating in this collective meeting.

For those who don't know me, I am Brian Nosek, executive director of the Center for Open Science. In addition to the Center for Open Science, the partner organizations co-organizing this meeting are AIMOS, the Association for Interdisciplinary Meta-Research and Open Science, and RoRI, the Research on Research Institute. We are delighted to have you here. You'll see, if you've already been perusing the Metascience 2021 website, that there is a fabulous program on a variety of different topics. Everything is plenary in this conference, so use that website, and the associated Slack and Remo channels, to engage and discuss the work that you see and hear about. And anything that you miss (it is a 14-hour programming window each day of the six days, so if you have to sleep, go ahead and sleep) is going to be recorded and will be made available, within a week we hope, for viewing and continuing discussion, on social media and otherwise, of the things that you weren't able to see live.

I think that's all of the major meta points that need to be made, so let me just do quick acknowledgments of some of the other contributors. In addition to AIMOS and RoRI, who are co-sponsoring this, there are about 50 partner organizations with interests in metascience that are co-promoting this event, and we thank them for their participation. We received generous financial support from the Templeton World Charity Foundation and from RoRI to put on this meeting, and as a consequence we're able to keep the fees for attending very low, including a free option for those who do not have the capacity to pay. There were many contributors on the organizing committee, and reviewers for all of the competitive submissions that were sent. Thank you, everyone who submitted proposals; they were amazing, and we could fill volumes and volumes of activities, so there's plenty more for subsequent years of this work. And another thanks to Whitney Wissinger and Zach Loomis, who did the yeoman's work of getting this whole thing organized and keeping everybody moving together. So that's enough for introductions. I want to hand over now to Adam Dinsmore, who will initiate our first session of lightning talks for the overall conference. Welcome, and thank you for being here, everybody.

Thanks a lot, Brian, that was a brilliant introduction.
Hello everyone, my name is Adam Dinsmore. I'm a program manager at the Research on Research Institute, otherwise known as RoRI, and welcome to this year's Metascience conference. It's taken an awful lot of work, and I'd like to congratulate Brian, Whitney, Zach, and the rest of the team; an awful lot has gone on behind the scenes to bring together such an exciting program, which we're delighted to be kicking off today. This first session, just to manage everyone's expectations, is the first in a series of lightning talk sessions that will be running over the next two weeks. Today we will hear from six speakers discussing a wide range of metascience topics. I'll be hosting the session, which basically just means doing the legwork in between each speaker coming on. Each speaker will have five to seven minutes to talk to you all. We're not going to do Q&A in this session, and we may finish a little early, a little before the hour, depending on how the timings go. At that point we'll ask people, if they want to ask any of our panelists questions or discuss anything that's come up in the session with other attendees, to move over to the Remo workspace for the conference. The links to Remo are in the chat for this session; they're also up on the website, and they should be in the emails that have been sent out to all of the attendees, so we hope you'll be able to join in that way.

Without further ado, we'll kick off our first session of Metascience 2021. The first of our lightning talks today is being given by Ben Meghreblian. Ben is a PhD candidate at Cardiff University, and I've just seen his camera flicker to life. He will be speaking to us about registered reports and how we can encourage their uptake. I'm not sure if you have slides, Ben, but if you do, feel free to share your screen; otherwise, just take it away.

Great, thanks. You can hear me? Yep. And the slides are on the screen? I can see them. Perfect. Okay, right, well, I'm glad to be part of this very exciting program. So, as Adam said, I'm Ben Meghreblian, and I'll be speaking to you about my PhD project, titled 'Encouraging registered reports: metascience and tool development'. I'm based at Cardiff University in the United Kingdom, and I'm supervised by Chris Chambers and Lucia. Before I go into any detail, here's an overview of what we're trying to achieve in the PhD. Briefly, the project is designed to build and evaluate a series of tools and websites. Number one, we're trying to encourage the adoption of registered reports as a publishing format, both by authors and by journals. Number two, we're trying to aggregate information from the people involved in the process, the authors and the reviewers, about how that process works across journals. And number three, we're trying to support authors with their registered report submissions, and also help editors build policies for their journals. I'm sure many of you are familiar with this diagram from the 2017 paper 'A manifesto for reproducible science', which nicely summarizes a set of problems in the ecosystem of scientific research, from p-hacking, where data are manipulated, to HARKing, where unexpected results are presented as having been predicted a priori, to publication bias, in which studies are selected to be published. Registered reports seek to address some of these issues.
So briefly, what are registered reports? I'm aware most of you probably have some idea. Briefly, they are a publishing format which combines study preregistration with an additional peer review stage before results are known. At this stage of peer review, any problems with the study can be identified and improved before data are collected or analyses are conducted. If the study passes this first peer review, it's given in-principle acceptance. That means the journal agrees to publish the completed study on the basis of its research question and methodological rigor, irrespective of what results are found. The study is of course conducted as it was initially planned, and that's checked by peer reviewers at the second stage. And, importantly, although registered reports really are designed for confirmatory research with stated hypotheses, the format is flexible and allows exploratory results to be reported as long as they're labeled as exploratory. A very quick summary of the landscape: there are currently over 290 journals offering registered reports as a format, and somewhere around 500 published articles, with many more in the process of collecting data, having received in-principle acceptance. I'm going to show you a few encouraging findings from the last couple of years. The two graphs on the left show that registered reports have far more null findings compared to the traditional literature, and the right-hand graph shows that across a range of quality criteria, registered reports were rated higher than traditional manuscripts. And finally, a quick summary of the advantages of registered reports. For the scientific community, there's rigorous review of the theory and methods, which helps to eliminate publication bias, reporting bias, and a range of questionable research practices. And for authors, they get the peer review when it's most useful, and they get a publication regardless of the results they find.

So this is an overview of the tools we'll be building in the PhD, along with a journal database which uses data from Crossref that will power these first two tools (there's a small illustrative sketch of that kind of Crossref lookup just after this overview). The goal is for all of these projects to be openly licensed and open source, with the data available for download or via an API eventually. The first tool on the left is community feedback. We're unofficially calling this a sort of Yelp for registered reports, and this website will allow authors and reviewers to provide feedback regarding their experience at journals which offer registered reports. Users will be able to give ratings across categories, for example the speed of the review and the quality of the process and editing. The ratings will be aggregated, analyzed, and presented by journal, by subject area, and by publisher. We hope these will become a useful resource for the community, identify issues across the process, and aid decisions for authors about where they want to submit articles. Additionally, from a sort of policy lever side, we hope it will provide incentives for poorly ranked journals to improve their implementation of the format. This is currently underway, so watch this space. The second project is around advocacy, and the idea is to update and automate an existing project called Registered Reports Now, a project that helps to coordinate outreach to journals to encourage the adoption of the format.
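To make the data side a little more tangible, here is a minimal sketch of the kind of journal-metadata lookup the public Crossref REST API supports. This is purely illustrative: the ISSN is just an example, and this is not the project's actual pipeline, which isn't public yet.

```python
# Illustrative only: shows the kind of journal-metadata lookup the public
# Crossref REST API supports; it is not the project's actual pipeline.
import requests

CROSSREF_JOURNAL_URL = "https://api.crossref.org/journals/{issn}"

def fetch_journal_record(issn: str) -> dict:
    """Fetch basic journal metadata (title, publisher, counts) for an ISSN."""
    response = requests.get(CROSSREF_JOURNAL_URL.format(issn=issn), timeout=30)
    response.raise_for_status()
    return response.json()["message"]

if __name__ == "__main__":
    record = fetch_journal_record("2052-4463")  # example ISSN, not tied to the project
    print(record["title"], "-", record["publisher"])
```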
With this advocacy tool, users will be able to sign letters to individual journals or to groups of journals by subject area, with letters being automatically sent to publishers and editors after certain signatory thresholds are reached. All the replies will be tracked centrally, with nudges encouraging signatures across networks. The other two projects are, first, a policy builder. This is, again, something we'll be working on later on, and the idea is to support editors at journals who either have just decided to offer registered reports as a format or are considering them. The tool will simply generate a bespoke registered reports journal policy, largely based on tick-box criteria, to make the process easier. And then there is a study design template that I'll be working on with my colleagues. The idea is that registered reports authors can find the process of writing their stage one manuscript quite complex, so the aim of the tool is to support authors in writing their protocol submission with sufficient detail and with no ambiguity, or at least as little ambiguity as possible, and so help us move towards a more standardized registered reports study design protocol template. It has a nice side effect too: it means we're going to have structured data with linkages between all the components of the study, allowing machine readability, which will also aid meta-research. So quite a lot going on project-wise over the next couple of years. I won't take any more of your time, but if you are an author, a reviewer, or an editor interested in registered reports, I'd love to have a chat; I'd love to hear what you think of the projects and any ideas you have. And if you're interested in helping out with an early version of the tools, please get in touch. Thanks for listening, and I hope you enjoy the rest of the conference.

Thanks a lot, Ben, that was a really great way to open the session, and I'm sure it will have prompted a lot of thoughts and questions among our attendees. I will move on swiftly, because we do have a finite amount of time for the session, to our second speaker, Jess Butler. Jess is a senior research fellow at the University of Aberdeen and NHS Grampian, and she's going to be talking about rigor and privacy in research.

So I'll be chatting today about how to improve research with high-security data. This is what I do for my day job, and I don't think many people realize how much government data is available for research. Here in the UK, researchers can access any of this data: all the National Health Service records, education records, the Department for Work and Pensions' employment records, the Ministry of Justice's criminal records, and the census. This is all available for research, and this is what we do here at the center. So it's available, but it's not easy to get. This is a great paper that came out last month documenting the trials of trying to get access. A team wanted to link five sets of medical records together from across the UK, and it details their two-and-a-half-year process. They sent 47 different documents to 11 regulators, up to nine times; that's what this figure shows. And at the point where they published this paper describing the process, they actually hadn't even gotten access to all the data. So to give you a sense, research using this data is already heavily legislated.
This becomes important in a second when I talk about trying to increase rigor. To access these records you have to make a very specific request; you have to use the minimum data and be very clear about what data you want to use to answer a specific question. Anonymization of the data is done by government analysts, and they do it twice over, so nobody ever sees the non-anonymized data linked together. Processing is separate: they do the processing on their side, and then we work separately as researchers, so everyone is kept siloed and blinded. They work on secure servers; we work on different secure servers. And it's not only the computer we work on that's quite secure: if I want to look at my census data, I go into what amounts to a clean room. CCTV running all the time, a person actually escorts me in and swipes me in, no electronics, no paper and pencil. This is a lot of infrastructure to keep this private. There's never any data on individuals published; we can publish anonymized aggregate data, and there's a team of people that go through the outputs to make sure what we're publishing is private. So all of this is beautifully regulated, and I have no concerns about the privacy of the data that we use to do important research.

But none of what I'll chat about for the next minute is considered at all in determining access to this data. If you're at this meeting, you probably care about things like preregistered plans, stating what you'd like to do before you do it, and calculating whether you have the power to answer the questions you want to answer. And because this data is collected for other reasons (we don't collect the data, the government does), there are big problems with measurement: you say you want to measure such and such, but does the data actually accommodate that measurement? It's a big question. Missingness is ubiquitous. These things are completely important to consider, and not at all considered in the application process. And despite this being public data, analyzed usually with publicly funded research grants, there's no indication that anything should be made open. We rarely release from these safe settings the code we use to do the analysis, and the outputs that are very carefully scrutinized for privacy are not necessarily ever made public. So none of this is considered. These are standard rigor considerations we might have for any research, but the point I would like to make here, my main point for these five minutes, is that the infrastructure is already in place to really increase the rigor of this type of research. Because there are millions of records, millions of variables, and many people looking at the same datasets, we have a huge garden-of-forking-paths problem. You could easily trawl through this data and find spurious significant results just out of the sheer volume of the data. So those few checks I showed you here are important to consider, given how much p-hacking or fishing we could be doing, but there are already huge teams of experts in place who could do the blinding and things like that that we need. The analysts who do all of this separation of data to keep it private could easily do a holdout of the data while we go in and determine what exactly we will do to test our hypotheses.
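To make that holdout idea concrete, here is a minimal sketch in Python. It assumes a simple flat table of already-anonymized records; the 50% fraction and the fixed seed are illustrative choices, not how the safe-haven teams actually operate.

```python
# A minimal sketch of the holdout idea, assuming a simple flat table of
# already-anonymized records; the fraction and seed are illustrative.
from typing import Tuple
import pandas as pd

def split_exploration_holdout(records: pd.DataFrame,
                              holdout_fraction: float = 0.5,
                              seed: int = 2021) -> Tuple[pd.DataFrame, pd.DataFrame]:
    """Reserve a random holdout that researchers never see until their
    analysis plan is fixed; returns (exploration, holdout)."""
    holdout = records.sample(frac=holdout_fraction, random_state=seed)
    exploration = records.drop(holdout.index)
    return exploration, holdout

# Usage: the data controller runs this once, releases only `exploration`,
# and keeps `holdout` back until the pre-specified analysis code is submitted.
```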
Since they already rigorously enforce who accesses the data, they could easily enforce, say, only letting a statistician in to look at the data before the other analysts come in and look, and really limit how we do this analysis, rather than fishing around. So that's one strength they could easily implement: blinding. The other thing they could easily implement is openness. Every keystroke I make on this secure server is recorded; every output I take out of the secure server is also recorded. They could very easily publish these things. Not in a journal, but you could make publicly available all the research I did and all the outputs I made with this public data, with very little extra effort, and I could then prove that I met a better standard of rigor and didn't go fishing around in the data. I think this could be done. I don't think it's conceptually challenging, but I do think it would be a big change. Am I confident it could be done in my lifetime? I don't know. But at the very least I think we can advocate for this type of rigor. This is public data, this is public money; we need to do a better job and not waste time and data. The obvious people to target: the government spends billions of pounds generating, maintaining, and keeping secure this data, so demonstrating to them that we aren't using it as well as we could is important. The same goes for the people who fund me and my applications to use these data; the Wellcome Trust and UKRI will be here today, and these are the kinds of things they could implement fairly easily. If I had to choose one place to advocate, I would actually choose patients and the public. Regulators listen very carefully to what people want done with their data, so if we make the case to them that we need better science when we're doing this research, I think they would take it on board quickly. So that's me, thanks for listening, and I'm always happy to chat about these topics, or any other metascience topics.

Thanks so much, Jess. Can you hear me, by the way? I'm possibly having a touch of technical difficulties, but hopefully I'm still coming through to everyone. Thank you so much. That was a really great, very interesting talk, and some of those issues are very dear to my heart: better use of publicly available data, even to the point of just raising awareness that the data is there at all. So thanks very much, Jess. We'll move on to the third of our six talks now. Our next speaker is Till Bruckner, who's the founder of TranspariMED. Till will be speaking about how we move beyond bean counting to ensure that research has real-life impact. I can see Till on screen now, so I'm going to hand over to him.

Yeah, hi, hello everyone. I'll give a quick talk today about how to make metascience really count in practice, because that is something that I think sometimes is simply not thought about by researchers, because researchers aren't paid to think about these things. I set up TranspariMED five years ago, and I've been an avid consumer of meta-research ever since, because what meta-researchers do is incredibly important for proving that there are problems that need to be solved. Much of my work entails looking at meta-research about clinical trials, translating it into formats policymakers can understand, and then pushing policymakers to actually do something about it. Okay, so somebody once described the model of communication that a lot of researchers use as a submarine.
That is, some really smart people get together and go on a really deep dive. They collect a lot of data, they crunch the data, and when they're done with it, they come up to the surface and fire a torpedo, and then there's this gigantic explosion of findings: we looked at 3,000 clinical trials and subgrouped them by phase one, two, three, and four trials, and then we looked at drug trials, and then we looked at medical device trials, and there were these problems and those problems. And I mean, this is hugely impressive research, brilliant research, really important research. But there are some problems with this model of communication. The first one is the submarine itself, where academics, often I think by nature, just engage too little while they're doing the research, because they are so busy actually working on this big study. The second thing is something I call meta-bubbling, which is that when they do engage with people, it tends to be with other meta-researchers who already agree with them that there's a problem, and they talk with a lot of people who already agree with them that the problem should be solved. The next problem is the explosion itself. I've been reading these papers for five years, and sometimes I still struggle at a first reading to distill the main finding into one or two sentences, because there are so many findings in there. What is the main finding here? It's like, well, when we look at a reporting horizon of four years, 70% of trials have made their results public, but after six years it's this, and for drug trials it was like that. It can be really difficult to wade through, and that's okay for me, because this is my job; I can spend two hours going through a paper again and again, going through the supplementary annexes and really digging, pulling out the most interesting headline data. But policymakers really don't have that time. The next one is relevance. I campaign on clinical trial transparency because I understand that it's a problem, but a lot of people don't understand that it's a problem. Sometimes the paper just says something like, oh, numerous publications have shown that publication bias is an issue in clinical research. Okay. Most people won't understand why this is actually relevant when they go to see their doctor the next time. Another one is that 95% of the papers are about problems, and we don't really see much in the way of solutions. And then again there's meta-bubbling, where you publish an academic paper, and what happens with it? Well, it just sits in a journal, and then 50 other meta-researchers will read it, and five others will cite it, and that's it; you get back on the submarine.

So I'd say there are two solutions to this: an easy one, and one that's a bit more complex. The first one is really easy, and that's just clarity. If you've spent all that time and effort and poured your soul into doing the meta-research, please put one sentence in the abstract with one headline finding: what did this paper find? Because most people will never read past the abstract. The second one is clear relevance, and I think relevance is always best expressed in the number of people and the amount of money.
This could be something like: 900,000 people participated in clinical trials in the UK last year, and the clinical trial industry has a collective value of two billion pounds (I'm making up these numbers). Or it could be something like: 17.5 million people in the US suffer from depression, and depression costs the economy an estimated $2 trillion a year. This is just one sentence that you need to put in the article, but that's the sort of thing that really makes a paper policy-relevant. And the third one is to present actionable solutions. Even if the actionable solutions are only a tiny paragraph in the paper, you could have a supplementary annex where you spell them out more, with links to where they're better explained, or you can just reference them really tightly, but I think solutions are always important.

The second solution is a bit more complex. It requires a change to research design, but I think it also makes research better, more exciting, and more fun. The idea of ping-pong is that you're not in the submarine; you're constantly engaging with the relevant actors. The first step is, when you collect the data, you already engage with the institutions that you collect the data about. For example, if you're looking at trial reporting by the top 20 universities, the first thing you could do is email a standard email to all 20 universities asking: do you already collect this data internally, yes or no? That could be a finding in your paper: we found that only five out of 20 universities monitor their clinical trial portfolios. The second step is when you have analyzed the data: you could contact the institutions again and say, well, GlaxoSmithKline, we found that you conducted 2,714 clinical trials and that 713 of them didn't make their results public within a year; here's the list of those trials, did we get anything wrong? At that point you're engaging the institution itself to look at its systems, to look at its monitoring systems, and actually discover its own weaknesses. And I also think it's only fair: you provide people with a right to respond, and if you got something wrong, they've got a chance to correct you before you go public with your data. And when you've drafted the paper, you can get reactions. You could approach, for example, the National Institute for Health Research and the Medical Research Council and say, okay, I've written this paper and I'd like to include reactions from you, just a very brief statement about what you're already doing about this problem and what you plan to do about it in the future. Again, it forces institutions to think about it, and again it gives them a really fair chance: maybe they're planning to do something fantastic two months in the future. Then you can flag that in your paper, other people can learn from it, and the institution itself can also get public credit for it. Then there's publishing the paper, which in academia is sort of seen as the end point, the big bang. I come from more of a think-tank and campaigning background, where the research is really the starting point. There are some additional things you can do when you've published the paper. The first one, and I think this is a question of politeness as much as anything:
send the paper to the constituents, as I call them, the people you did the research about. So there are 20 universities or 20 pharma companies; you just send it to them and say, I thought you might be interested, here's the paper, so at least they know about it. The other one is, you can send it to decision-makers. With clinical trials, for example, you could send it to the health spokesperson of every single political party in parliament and say, well, I just wrote this paper, and again, this is the headline finding, this is why it's relevant, this is what should be done; can we please have a half-hour phone chat about it? And the third one is, you could send it to the media. I know a lot of academics are scared of the media; my experiences personally have been 99% positive. Often I see a really exciting study and think, oh, this matters, people should know about it, and I just send it to a journalist. I did that a week or two ago with one paper, and it was literally just a one-line email, subject line 'story tip: cancer drugs', saying, look at this paper, this is really interesting, I think you should report on it. Two days later, STAT News published a piece on it: cancer drug indications remain on labels even after trials fail to confirm their benefits. It's a great paper. I've never even spoken with the researcher; I've got no stake in it. All it took me was about two minutes to send an email to a journalist, and now hopefully hundreds or even thousands of people know about this problem, know about this paper, know about the great research that person has been doing. So, yeah, meta-researchers do great work, really important work, and I think if you just take one or two of those suggestions on board and integrate them, you can have a much better impact on actually making that research count and translate into changes in the real world. Thank you very much.

So hi everyone. Very happy to be here, honored to be a panelist for this lightning talk session. In 2019 I was very lucky to buy a last-minute ticket to go to Metascience 2019. It was very inspiring to me, and that was kind of the starting point of my journey into metascience research. I'm from the field of communication research, and we're kind of latecomers to this movement of open science. The recent call for open science marks a more serious starting point for communication scholars, and it has already sparked a lot of interesting debates and discussions in our field. The project I'm going to share today focuses on statistical power, which is a key dimension of research credibility. We tend to focus more on false positives, but false negatives also present a big challenge to social science research. Low power is found to be very prevalent in psychology research, and it's very costly to theory generation and scientific discovery. Studies with low power contribute inflated effect sizes to the literature, and that leads to the problem of the winner's curse, where an initial false positive finding looks very impressive but cannot be replicated with larger samples. So in general, studies with low power create more replication failures and breed bias.
A literature dominated by underpowered studies can be very problematic. In psychology research, since Cohen's 1962 seminal study, study after study finds that psychologists persist in running low-powered research and avoid using prospective power analysis. In the communication discipline we have not had such a large-scale examination of how we use power, so our research focuses on two major questions (there are minor ones too, but I'm not going to go into detail on those): do we actually conduct and report power analysis, and are we adequately powering our experimental research? Our project focuses on the experimental research published in the last two decades in five major communication journals; we did an extensive database search, took a 50% random sample, and selected one study per article. To answer the first question, is there adequate attention to power analysis in communication research? This graph kind of says it all: not really. 75%, three-fourths of the research, did not mention power at all, and some only mentioned it post hoc, along the lines of 'maybe these findings can be explained by low power', especially for non-significant findings. As for prospective power analysis, fewer than 5% of the published studies used one. This pattern is similar across all five major journals: you can see this orange-reddish bar represents the no-mention category, and the purplish bar on the very far right, representing prospective power analysis, is very minimal across all journals. And across time, in the last panel it looks like in the last few years there is more use of prospective power analysis, but it's not a very obvious trend of improvement; there is some indication, for sure. The second question: is there adequate power in our communication research? We calculated power assuming an independent-samples two-tailed t-test using the pwr package in R; there were 222 between-subjects or mixed designs. For the benchmarks used by Cohen, the small, medium, and large effect sizes, this is the median power to detect those effect sizes: for medium effect sizes it's 0.62. We also looked at d = 0.43, which is the overall effect size for communication research according to a large-scale meta-analysis of all communication research. Using that as a benchmark, the median power is only 0.5, so it's like flipping a coin, and the number of studies with power greater than 0.8 is only 20%. Across journals (this is only for the effect size of d = 0.43, the communication effect size), you can see our journals are pretty much all below this point. And across time, again only presenting the results pertaining to d = 0.43, you can see there is some general increase; especially in the last five years, it's going up, in a pretty good trend. So just to answer these two major questions, is there adequate attention to power, and is there adequate power? No and no, and we still have a long way to go. There are some signs of change, but the change needs to happen faster. And I would like to acknowledge the funding support: I was one of the lucky winners of the lucky-draw grant scheme after the last Metascience conference, so I really appreciate the support.
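To make the power calculation described a moment ago concrete, here is a minimal sketch. Our analysis used the pwr package in R; this sketch uses Python's statsmodels instead, and the per-group sample size of 50 is illustrative rather than a figure from our sample.

```python
# A minimal sketch of the power calculation; the analysis itself used the pwr
# package in R, this uses statsmodels, and n = 50 per group is illustrative.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of an independent-samples, two-tailed t-test with 50 participants per
# group, at the field-wide benchmark d = 0.43 and at Cohen's medium d = 0.5.
for d in (0.43, 0.5):
    power = analysis.power(effect_size=d, nobs1=50, alpha=0.05, ratio=1.0,
                           alternative="two-sided")
    print(f"d = {d}: power = {power:.2f}")

# Per-group sample size needed to reach 80% power at d = 0.43.
n_needed = analysis.solve_power(effect_size=0.43, power=0.8, alpha=0.05,
                                ratio=1.0, alternative="two-sided")
print(f"n per group for 80% power at d = 0.43: {n_needed:.0f}")
```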
I took the lightning-talk format very seriously, so I only presented the major findings; if you have questions or comments, please feel free to contact me. So thank you.

I am very grateful to be here today, and thank you for allowing me to introduce the NIRO collaboration. I'm here on behalf of a group of people who work together internationally, so I just want to highlight that I'm not presenting my own work; it's an absolutely collaborative project, and it should be seen as such. NIRO, here at the top of this slide, you can see stands for non-intervention, reproducible, and open evidence synthesis. The collaboration includes a number of people across different countries, mainly based around psychology and different faculties within psychology, but we also have librarians and information specialists who joined the collaboration to help us fulfil our mission. Our mission is to create a set of tools for evidence synthesis for studies that concern research that is not based on interventions. This is important, and I will explain to you why, so please hang on; the whole story will emerge. The ethos and aims of the collaboration are to make sure the tools we create are applicable. Currently there are a lot of tools for conducting systematic reviews or meta-analyses, and if you type it into Google you will get lots of different options. But what you will find is that most of these tools have been developed for studies in the healthcare, clinical, or interventional context. And that's difficult, because people in other research areas are now appreciating the value of systematic reviews and evidence synthesis more and more. This is being pitched to PhD students to complete at the start of the PhD journey, and it's actually being argued that everybody, before embarking on a new project, should be completing a systematic review or a meta-analysis. But those who are outside of the interventional scope of research find it a very tricky task, because the tools, and their focus on interventional research, are often so specific that it's not easy to apply them to non-interventional research. Authors often have to make their own adjustments, so the process is not systematic between different systematic reviews, and authors are also unsure, or not confident, whether the choices that they make are correct or not. So we have decided to make our tools available for those areas that are currently mostly lacking them and that have the need for them as well. What we really want to focus on in the development of our tools is to make them accessible. One important thing here is that we have in our collaboration individuals from across the spectrum of academic careers: undergraduate students, graduates, professors, you name it, the level is probably somewhere in the collaboration. That helps us ensure that people who are new to this methodology can easily access and understand the information that we are providing in the tools. So far this has worked very well: the uptake of the one tool that we have already completed has been mostly within the student demographic, PhD students as well, so we are definitely hitting the goal somewhere there with accessibility, which is great. What we're also really focusing on is the open research ethos, so we are implementing guidelines for making systematic reviews and evidence synthesis as open, reproducible, and transparent as possible.
We try to implement this across the different stages of the project as well. Our outputs, the tools, have implemented guidelines for this, but we also hope that this will translate onto the actual systematic reviews that will be completed with the help of our tools, and consequently improve the quality of systematic reviews currently being conducted in the non-interventional area of research. I hope it is clear what we are trying to do and why. On the left of this slide I present a summary of the first tool that we have, NIRO-SR, SR for systematic reviews. The highlighted link will take you to the preprint for this tool. It has already been used, and there are some preregistrations and completed pieces that have used this tool as well. There are two main parts that you can find in the paper. Part A is about the preparation of a protocol for preregistration of a systematic review based on non-intervention research: we have preregistration guidelines, general information about that for those who are not quite familiar with it, and then step-by-step instructions for actually preparing the protocol itself. For Part B, we have the reporting guidelines: after the protocol has been created and preregistered and the data have been extracted and analyzed, we provide some information about how best to structure the write-up of the report. This is where we have been able to get to. The next step, something we are currently working on, is a tool for quality assessment, or risk of bias assessment. This is something we are constantly being asked about; we get emails, we see questions on Twitter, it's a very needed tool. So we are trying to speed up its development as much as we can, to deliver it as a supplement to the NIRO-SR tool itself, but it will also be helpful for lots of other kinds of studies, not necessarily systematic reviews only. We have other requests and other ideas for developing the tools further in the future, for instance creating tools for the development of a systematic and reproducible search strategy, and also some additional supplements for meta-analyses themselves. I hope that this was informative, and thank you again for letting me present on behalf of the NIRO collaboration team. If anybody from the NIRO collaboration is in the crowd, then shout out to you, and thanks for being great collaborators.

Thanks for that presentation of the outstanding work that you're doing with your team. Whitney handed the recorded talk over to me, and we're going to test out whether Maya's presentation will work when I try to share the screen and present it. So, let's see what happens here.

Hi, my name is Maya, and I'm a PhD candidate at the QUEST Center for Responsible Research at the Berlin Institute of Health at Charité in Germany. I'm excited to be telling you about an upcoming intervention to improve clinical trial transparency at Charité. This intervention is also part of a larger project with several colleagues, which we're aiming to evaluate. Many people depend on having reliable and comprehensive clinical evidence in order to do their work, and transparency is key to making all research findable and accessible; it also helps with accountability for researchers. In order to build transparent research at Charité, our team uses a multi-step process. We identify practices for trial transparency based on both regulations and ethical guidelines.
For example, the World Health Organization asks researchers to post summary results directly in the trial registry in a timely manner after trial completion, regardless of the findings. We identified this practice as well as several other practices, and once we had a set of practices, we analyzed the status quo, first to understand the current policies, i.e. whether a university recommends, requires, or provides support for a certain practice, and second to evaluate performance, so how many trials actually do report their summary results in the registry. To evaluate performance we used automated approaches, which allow for scalability and also for sustainability. Once we understand the status quo, we move into stakeholder communication. Right now this includes both a dashboard, which we use to communicate with university-level stakeholders, as well as what I'm presenting today, a report card directed at individual researchers, alongside the metrics themselves. These are also both built via automated approaches. And of course, we then have to evaluate. For the rest of this talk, I'll be focusing on the report card, which communicates individual researcher performance, and then the evaluation thereof. With this report card intervention, we were wondering how we can help researchers improve their individual clinical trial transparency, and we decided to take the approach of educating them on their performance and on responsible practices. This led us to develop these individualized report cards, which, as you can see here, report several responsible research practices. There are many of them, but to keep us focused, take the summary results example: you can see here that this trial did not report summary results, and therefore it could not have done so in a timely fashion. We would show them this report card, and then we would also provide information, one, on the background of these individual metrics, and two, on how to carry out these individual practices. And then we would show what the complete trial report card would look like. In order to implement this report card intervention, we really have three challenges. The first one is to generate these report cards. What we did is use an automated approach, which allowed us to do individualized reports and also allows this to be scaled to future cohorts, or perhaps other researchers will want to use this on other populations. The second challenge is to actually disseminate these report cards to some very busy trialists. In order to do that, we're working with the university administration, particularly the clinical trial office, to get buy-in from those senior officials and have them sign the emails in which we're sending out these report cards. Finally, we have the challenge of evaluating the impact of the report cards. Right now we focus on a subset of the transparency practices that can actually be improved post hoc. For example, for summary results reporting in the report card I showed earlier, we would want to see whether, after this intervention, that trialist had reported summary results. Right now we are in the final phases of the study design and working with the clinical trial office, so we're hoping to get ethical approval and preregister the study soon so we can start this intervention.
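To give a sense of the kind of automated check that sits behind a metric like timely summary results reporting, here is a minimal sketch. The field names, the example dates, and the 12-month window used here are illustrative assumptions, not our actual schema or code.

```python
# A minimal sketch of one automated report-card check: were summary results
# posted in the registry within 12 months of completion? Field names, dates,
# and the 12-month window are illustrative assumptions, not the actual code.
from datetime import date
from typing import Optional

REPORTING_DEADLINE_DAYS = 365  # assumed 12-month window for "timely" reporting

def summary_results_timely(completion_date: date,
                           results_posted_date: Optional[date]) -> bool:
    """True if summary results were posted within the deadline after completion."""
    if results_posted_date is None:
        return False  # never reported, so certainly not reported in time
    return (results_posted_date - completion_date).days <= REPORTING_DEADLINE_DAYS

# Example: a trial completed on 2019-03-01 whose results appeared on 2020-06-15
print(summary_results_timely(date(2019, 3, 1), date(2020, 6, 15)))  # False: over 12 months
```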
We also invite anyone who is interested in learning more about this project or the related projects to reach out; I put my email in the first slide, and I or my colleagues would be happy to talk to you. These colleagues are a huge team here: Delwen Franzen is co-leading this sub-project, and we're all part of Daniel Strech's group. We also want to thank the funders, the BMBF and the Wellcome Trust. So thank you so much for listening to this talk, and for any folks observing the holiday, I wish you an easy fast. Thanks.

Thank you, Maya, for that presentation. The audio ended up working out by having the audio from my computer go into my microphone, so I just had to sit here very, very quietly. This concludes our first session of lightning talks; thank you for those excellent presentations, everyone. The next plenary session will be an hour from now: the first 'What is metascience?' session, which is meta-meta, about what it is we are all doing here. In the intervening time, you should feel free to go to the MetaScience2021.org website and go into Remo, which is an environment where you can chat with other attendees. Many or all of the presenters from these sessions will be there if you want to follow up with questions or discussion about things that you heard in this last hour. And continue to visit Remo to interact with other colleagues at any time, during sessions or otherwise. So we'll see you again in an hour. Thanks very much, everyone.