Hi, and welcome to ESMARConf 2022. We're thrilled to have you here with us, whether you're watching live, in catch-up during the week, or later in the year. We have a really exciting program of presentations, workshops and hackathons that we can't wait to share with you. But before we do, I wanted to share some thoughts from our family here at ESMARConf about why open source, open education and free events like ESMARConf are so important.

So Wolfgang, what's so great about software like R?

Free and open source software like R has many advantages. Of course, you don't have to pay for it, which is nice. But more importantly, it gives you access to the underlying routines, so you can actually verify the calculations, and we can modify and extend its capabilities to the benefit of all users.

And why is it vital that people don't have to pay to attend conferences like ESMARConf?

Along the same lines, by making training and capacity-building events like ESMARConf free, we can break down barriers. Instead of making such training accessible only to a small, privileged group, we can make it available to all.

And Yana, what did participation in last year's ESMARConf and the Evidence Synthesis Hackathon mean to you?

ESH is just such a great community of practice. That's what really drew me to the group. My coding skills are quite limited, but regardless, I really got a lot from participating in last year's conference and meeting people in the group. I just think that folks are really welcoming and kind, and through that, you find solutions for your own work. So that's just been fantastic.

And Ciara, why is accessibility at ESMARConf so important, do you think?
ESMARConf reduces barriers for underrepresented groups by making the conference free to attend, by ensuring that the conference is more accessible, by verifying the closed captions used, and by making all the content free and easy to watch when your schedule allows, which is particularly important to me as a busy working mum. And I think that its focus on software that is free and open source really gives the audience greater access to implement and learn everything that will be covered this week.

And Gavin, why do you think what we're trying to do here at ESMARConf is so important?

The Evidence Synthesis Hackathon and conference contribute hugely to capacity building for evidence synthesis. There are currently many challenges for evidence synthesis globally. Policymakers and the science community itself are frustratingly resistant to change, frequently relying on poorly synthesized evidence, evidence assembled using inherently biased methods, or an over-reliance on expert opinion. One glimmer of light is provided by this community, and especially the early career researchers who support and facilitate its development. Keep working on methods and applications. The change is coming, and it will be open and evidence-based.

Thanks so much, everyone. Hopefully we'll have you all convinced by the end of this week. So, a little bit about ESMARConf and the ESMARConf series. It was established in 2020, and we had our first event, ESMARConf 2021, last year. The aims of this event series are to build a community of practice on the use of R for evidence synthesis and meta-analysis; to support the development of, and showcase, novel tools and frameworks for evidence synthesis and meta-analysis in R; to build capacity for the use of R in evidence synthesis and meta-analysis; and to raise awareness of the need for rigor in evidence synthesis and meta-analysis.
This week we'll see presentations on packages that are designed to assist reviewers across evidence synthesis stages, from planning to communication. We'll see demonstrations that integrate evidence synthesis packages into an interoperable pipeline in R. We'll hear about novel applications of existing R packages in an evidence synthesis context. We'll hear about efforts to automate evidence synthesis in R, and we'll learn about how we can assist R novices in performing evidence synthesis with the aid of graphical user interfaces. We've also got a suite of training workshops. As I said, two have already started this week, and we have a total of six that you can dive into. Hopefully you've been able to register; if not, you can watch those live, and most of them you can watch in catch-up. And we also have two exciting hackathons that I'm looking forward to introducing later; we'll hear at the end of the week how they progressed. You can follow their progress on the ES Hackathon website.

You're familiar with my face now, but just to introduce who I am: I'm Neal Haddaway, a senior research fellow at SEI, ZALF and the Africa Centre for Evidence. Emily Hennessy is the Associate Director of Biostatistics at the Recovery Research Institute and a member of the faculty at Harvard Medical School. Ciara Keenan is a senior research and development manager at the National Children's Bureau. Yana Stoyanova is a researcher in clinical pharmacology at the University of Valparaiso. Matt Grainger is a researcher in biodiversity conservation, sustainability and wildlife management at the Norwegian Institute for Nature Research. Alexandra Bannach-Brown is a postdoc in biomedical research at the Berlin Institute of Health at Charité. Chris Pritchard is a senior lecturer in paramedic practice and emergency care at Nottingham Trent University. And last but not least, Kyle Hamilton is a PhD candidate in psychological sciences at the University of California, Merced.
This is the organization team who've been working hard over the last year to bring you an exciting program. But we also wanted to give a shout-out to the other people who are helping our organization: people who are providing workshops and people who are working on the hackathons; you'll get to see their faces as the week goes on. Thanks also to you for coming along, and to our presenters for submitting the work that we're going to hear about.

ESMARConf is hosted by the Evidence Synthesis Hackathon, and you can find out more about ESH by looking at its website at eshackathon.org. The Evidence Synthesis Hackathon was established in 2017 by myself and Martin Westgate. We host events that aim to develop frameworks and tools relevant to evidence synthesis and meta-analysis. So far we've had 31 projects, and that number is increasing rapidly. One of the key things that we organize is the ESMARConf conference series, with the aim of training, showcasing and promoting collaboration. And we have a growing library of tools that you can find on the ESH website, including some that you may already have heard of: PredicTER, robvis, PRISMA2020 flow diagrams, citationchaser, metadat, which we'll hear a little bit about later this week in the hackathon as well, and EviAtlas.

I also wanted to give a shout-out to our funders, who've been really generous in their support for us this year. Firstly, Code for Science and Society have provided us with $17,000 for this year and next year to enable provision of bursaries for people with caregiving and resource constraints, to enable them to attend the conference. They've also helped to fund the transcription and verification of subtitles for all of our recordings. I also wanted to shout out our donors. We have a small number of people who've already provided a little bit of funding to help us keep ESMARConf free forever.
So if you're interested in finding out more, you can visit our fiscal host, Open Collective, and find out more about how you might be able to support us. But for now I wanted to hand over to Angela Okune from Code for Science and Society to explain a little bit more about what they do and why supporting us is important.

Hello, my name is Angela Okune and I'm the Senior Program Manager of the Event Fund at Code for Science and Society. At Code for Science and Society, or CS&S for short, we seek to enhance the power of data to improve lives. Towards this, we invest in social and organizational infrastructure as a critical foundation for effective research and technology initiatives. We believe that to have a healthy and robust ecosystem of community-centered research, data and technology, we really need to care for and invest in social and technical infrastructure. What do we mean by that? The governance, the culture and the social practices that intersect with, shape and underlie all technical work. One of CS&S's programs is the Event Fund, which I lead. The Event Fund directly invests in emerging community leaders around the world to help support the organizing of events around research-focused data science. The events and communities that we fund are really trying to cultivate the relationships and skills that are needed for a more equitable next-generation science. I'm very excited and lucky to get to work with such an amazing team of organizers around the world, who are all trying to tackle the complex challenges of the 21st century. These range from climate change to resource inequities to global health crises, and it's exciting because we really need more robust, collaborative, transnational research networks to tackle these kinds of complex issues with diverse data capacities. So I am very happy to be supporting ESMARConf. I hope that you all have an amazing time learning and networking. Take care.

Thanks so much, Angela.
Before we move on, I want to give some important notices, first around accessibility and our accessibility policy. ESMARConf is fully online, and what we've tried to do is to provide the conference in as many formats as possible. So people can watch live, in catch-up during the week by focusing on individual talks or the full recorded live stream, or indeed anytime in the future. We do focus on English as the primary spoken language, but in this case it's allowed us to verify subtitles so that you can translate them into any other language. The subtitles for all of the individual recorded talks have already been verified, and you can translate them automatically within YouTube by clicking on closed captions, exploring the menu and selecting the language you want to read through auto-translate. We also want to say that translation services and signing service costs are included and prioritized in all of our grant applications. It's a really important thing for us. But we know that we can always do more, so we really do welcome feedback in whatever format you want to provide it. You can find details of how to provide feedback, along with our complaints procedure that I'll explain in a couple of slides, in the accessibility policy linked to from the ESMARConf website.

Next up, our code of conduct. We as organizers and moderators of the conference, but also we as a community of people participating in the conference, commit that people will be treated with dignity and respect regardless of age, disability, gender reassignment, marriage or civil partnership, pregnancy or maternity, race, religion or belief, sex or sexual orientation. At all times, people's feelings will be valued and respected. Language or humor that people find offensive will not be used: for example, sexist or racist jokes, or terminology which is derogatory to someone with a disability.
No one will be harassed, abused or intimidated on the grounds of their race, nationality, gender, sexual orientation, gender reassignment, disability or age, and incidents of harassment will be taken very seriously. We hope that you agree with this code of conduct and that you will also commit to what we feel are really important commitments. As with our accessibility policy, we know that we can always be doing more, so if you have any comments about our code of conduct, please do provide them as well.

And then finally, I wanted to detail how you can raise a concern or complaint, or provide feedback. Any participant or organizer of the Evidence Synthesis Hackathon or ESMARConf who feels they have been treated unfairly on the grounds of a protected characteristic that I described on the last slide, I encourage to raise their concerns with an organizer, or anonymously if they desire. You can email myself, the conference organizer, or my line manager at the Stockholm Environment Institute; those email addresses are provided in the code of conduct and accessibility policy document. You can email another member of the conference organizing team, and you can find their details by following the link from the ES Hackathon events page. And you can use an anonymous form by following this link: bit.ly/esmaconf_feedback. All submissions will be investigated, and details of how they'll be investigated are provided in the code of conduct and accessibility policy document as well, which you can get by following this link.

So, on to ESMARConf again. I wanted to explain a little bit about ESMARConf 2021. We were really shocked and thrilled at how people engaged with the conference last year, considering it was the first time that we'd run it. So I just wanted to give you some numbers. These are some numbers that we published at the end of the week last year. We had 514 people register last year from 26 different countries.
We had 39 presentations, 10 panel discussions and four workshops. And our viewing statistics from YouTube were really great. During the week, we had 650 unique viewers, and you can see how the views and viewers were distributed between live and on demand. We had a total of 3,558 video views during the week, and we got a total of 175 new subscribers. So thank you so much to everybody who took part last year. We hope that you enjoyed it as much as we did. I also wanted to show that since last year, the videos for ESMARConf 2021 have been viewed more than 9,300 times. We now have a total of 339 subscribers. And you can see some of the viewing statistics here, showing how the top five videos have been viewed over the last 365 days. There are some interesting spikes where people suddenly discover some of the conference material. This includes both the live streams and the individual recorded presentations. And I wanted to give a shout-out to our most popular video from last year: this seven-minute video from Luke McGuinness introducing his package, robvis, for visualizing risk-of-bias assessments. So congratulations to Luke. It was a really exciting presentation, and he can claim 3.6% of the total channel views from last year. That's really impressive. Well done, Luke.

So last year we had a really interesting body of presentations, and this year we've got an equally interesting set. We have 28 presentations in total, separated across eight special sessions: review processes from A to Z, graphical user interfaces, quantitative synthesis (particularly network meta-analysis), other quantitative synthesis methods, quantitative synthesis with a Bayesian lens, building an evidence ecosystem, tool design, and developing the synthesis community. We also have six workshops this year.
Yesterday we had a full-day workshop by Wolfgang Viechtbauer, an introduction to meta-analysis in R, that was streamed to Twitch. We also had a quick dive into searching for studies in meta-analysis and evidence synthesis by Alison Bethel. Later today we've got our first workshop of the day: collaborative coding and version control, an introduction to Git and GitHub, from Matt Grainger. We have a workshop early tomorrow morning on the Collaboration for Environmental Evidence and what it can do for you. For those of you who don't know, the Collaboration for Environmental Evidence, or CEE, is one of the main systematic review coordinating bodies; it provides guidance and support and publishes systematic reviews in environmental science. That CEE workshop is being run by Ruth Garside. We then have a workshop later tomorrow on structural equation modeling, run by Arindam Basu. And finally, on Thursday morning, we have the introduction to writing R functions and packages workshop, run by Martin Westgate. So thanks very much to all our amazing workshop coordinators, who are providing those workshops entirely for free. Workshops two to six will be recorded and available online forever. If you want to watch more from Wolfgang and you've missed his workshop, you'll have to catch up with him at one of the many other workshops that he regularly provides. But thanks so much to everybody for providing this really amazing set of resources for capacity building around meta-analysis and evidence synthesis. It's really wonderful that these are all free. So thank you so much.

I'm also really excited this week to introduce two hackathons. Within the Evidence Synthesis Hackathon series we regularly run hackathons, but this time at ESMARConf we also have two hackathons that are running in parallel.
So when people aren't watching sessions and aren't in workshops themselves, people within these groups will be diving off to try to produce a minimum viable product, a working product in each case, that aims to answer a need for a set of functions or tools in R. And there's a particular emphasis in both of these projects on graphical user interfaces, so that people without experience and coding ability in R will be able to make use of them.

The first one is a package called CiteSource. This is going to be an R package and a Shiny web app that allows people to upload their search results, and maybe their screening results, as RIS files, multiple RIS files from different sources, and then identify which databases, which search strings, which different sources of information are most influential in their search results and the stages of their inclusion. So you can see what the level of overlap is between your different sources: perhaps which databases you might want to include as a priority, and which might not contribute anything unique. The team behind this is a really strong team with experience in GUI application and tool design, so we're really looking forward to what Trevor and his team can produce. We'll hear more about that in the closing ceremony on Thursday.

Our second hackathon is being led by Wolfgang Viechtbauer and his team. This relates to metadat, which is an existing package on CRAN that collates a large collection of meta-analytic datasets. They're useful for teaching purposes, for validating published analyses, and for the development of meta-analytic methods. Wolfgang and his team want to improve the package and to provide a Shiny interface so that people can access and search those datasets without having to use R, which is really exciting. Wolfgang and his team have asked that, if anybody has available datasets from meta-analyses, you provide those to him by email.
The idea is to make these datasets as widely accessible as possible as examples, so please do get in touch with him if you think you have any useful data.

As well, for ESMARConf 2022, we have to thank all of you for registering and attending. As of Sunday, we had at least 770 people register, which is a huge increase on last year. You can see the spread of people across time zones, which shows that there are a lot of people from Central Europe, but also from the East Coast of the US, who registered. And it shows that we perhaps need to do a little bit more work to demonstrate the benefits of watching this conference in catch-up. Anybody who comes along after the conference has started can watch the conference materials without needing to register. But we're really thrilled that so many people have engaged with the conference so far, and we're looking forward to engaging with you as the week goes on.

So now, before we really get stuck in, I wanted to explain how the ESMARConf conference is going to work exactly. All of our remaining workshops, workshops two to six, are happening via Zoom with registration. If you've managed to register already, that's great. If you haven't registered for one of the upcoming workshops, see if there's still space to register. But all of those workshops are also being live streamed, so if you don't make it into the registered Zoom, you'll be able to watch live and catch up afterwards with a recording, both on YouTube. Access to any materials that you might need in the workshops will be provided as links. All of our special sessions are live streamed to YouTube, so the main way to interact with the conference is via our YouTube channel. You can access that from our ESMARConf website, which is esmarconf.github.io.
As well as having these live sessions streamed to YouTube, presenters have done a huge amount of work by recording their presentations and providing them to us in advance. All of the individual talks are going to be published one by one on YouTube at the start of their session. That means that if you're particularly interested in one talk, you can dive straight into that pre-recording. And it means that all of those individual presentations have verified subtitles. So if you're struggling with understanding, either from a hearing perspective or from a language perspective, the individual talks have verified subtitles, and whether you want to translate them or watch them in English, the quality of those subtitles should be pretty good.

If you want to engage with the conference this week, or indeed anytime during the year, you can ask questions directly to our presenters and workshop coordinators by visiting the Evidence Synthesis Hackathon Twitter feed; that's @ESHackathon on Twitter. Every session, every workshop and every individual talk will have its own dedicated tweet, with the presenter tagged if they have a Twitter handle. So you'll be able to click on a specific tweet for a presenter, engage with them by asking questions and giving comments via the reply button, and dive straight into a conversation with one of our presenters.

So, that is the introduction done. We are really thrilled and excited to have with us today Terri Pigott. Terri is a professor in the School of Public Health and the College of Education and Human Development at Georgia State University. She received her PhD in measurement, evaluation and statistical analysis from the University of Chicago. She's the founding chair of the AERA special interest group for systematic review and meta-analysis.
In 2016, Terri received the Frederick Mosteller Award from the Campbell Collaboration, an award recognizing a really important contribution to the theory, method or practice of systematic reviewing. Her research focuses on methods for meta-analysis, including power, missing data and individual participant data meta-analysis. Today, Terri is going to be talking to us about synthesizing communities: her experience and knowledge of how to go about synthesizing as a community of synthesizers, and improving evidence synthesis through collaboration. So, Terri, welcome to ESMARConf 2022. We're delighted to have you here. We know it's incredibly early where you are, an ungodly hour, and we really appreciate you tuning in so early. But yeah, over to you.

Thank you, Neal. Let me get my slides set up here. Okay, thank you, Neal, for that great introduction. As I was preparing for this talk, what I decided to do was think through what I've learned through my 30-plus years of working in evidence synthesis, and particularly in meta-analysis. So, where did I start? As Neal pointed out at the beginning, this is a picture of the University of Chicago. Oops, let me go back. This is a picture of the University of Chicago, which I showed up at in 1983 to do a program called Measurement, Evaluation and Statistical Analysis in the Department of Education, which is now closed. As happens when you start a graduate program, you're usually assigned an advisor, and the particular advisor I was assigned was someone named Larry Hedges, who I'd never heard of before. Some of you may know who Larry Hedges is, or may have heard of him through Hedges' g, which is a small-sample-corrected effect size for the standardized mean difference. But at the time when I showed up, I didn't know who this guy was.
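As an editorial aside for readers, the small-sample correction behind Hedges' g mentioned above can be sketched in a few lines. This is an illustrative calculation only; the function name and inputs below are our own, not taken from any package discussed at the conference.

```python
import math

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference with the small-sample correction
    factor J applied, giving Hedges' g (Hedges & Olkin, 1985)."""
    df = n1 + n2 - 2
    # pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (mean1 - mean2) / sp          # Cohen's d
    j = 1 - 3 / (4 * df - 1)          # correction factor J, always < 1
    return j * d

# With 10 observations per group, J = 1 - 3/(4*18 - 1), so the
# uncorrected d is shrunk by roughly 4%.
g = hedges_g(1.0, 0.0, 1.0, 1.0, 10, 10)
```

The correction matters most in small samples, where Cohen's d overestimates the population effect size; as df grows, J approaches 1 and g converges to d.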
The first thing that I was assigned to do when I came to the University of Chicago was to work on this particular book, Statistical Methods for Meta-Analysis, written by Larry Hedges and his advisor, Ingram Olkin. This book, published in 1985, is one of the earliest books on statistical methods for meta-analysis. My job was to help copy-edit the book, and this was very early in the days of word processing, and also to work on the examples for the book. That work was the beginning of my journey in systematic review, evidence synthesis and meta-analysis. My first paper, in 1988, was published with a much more senior colleague; I was still in graduate school, and he was a professor at the University of Kentucky. It was a meta-analysis on something called group-based mastery learning programs. You might notice that the scan here is a little bit crooked. This, again, is pre-PDF; this is what it looks like when you pull down this paper from ERIC. But at any rate, it was the first time I had collaborated with someone, working as a methodologist on a meta-analysis.

Since 1988, I've been involved in a number of organizations that are devoted to evidence synthesis. The main one has been the Campbell Collaboration, which I've been involved with since the early 2000s. The Campbell Collaboration, for those of you who don't know, is an international organization devoted to the support, conduct and use of systematic reviews in a number of areas, broadly the social and behavioral sciences, but including things like climate, aging, social welfare, crime and justice, education and so forth. Through the Campbell Collaboration, I've also been connected with the Cochrane Collaboration, another international group devoted to systematic reviews in health. I've also been very lucky to be a part of the Society for Research Synthesis Methodology.
Again, this is another international and interdisciplinary group devoted to methods for research synthesis. I'm going to take a quick break here and just say that membership of SRSM has now become open. We were a closed society before, but we have now opened up membership. If you go to srsm.org, you'll be able to find information about how you might want to join that organization. I'm also currently serving as the co-editor of the society's journal, Research Synthesis Methods, which I'm very proud to say is an interdisciplinary journal devoted to methods for research synthesis. Many of the authors of the packages that have been developed, either at the conference last year or through the hackathon, have published papers in that journal.

As I reflect on this 30-plus-year journey in systematic review and meta-analysis, I sometimes characterize myself as someone who's been in between, working between those spaces where I've translated the work of very smart statisticians like Larry Hedges, who have been developing important models for meta-analysis, for those people who are trying to apply best-practice meta-analysis to their evidence synthesis. In that in-between space, I've seen myself as a translator: a translator of best-practice methods to people who are actually trying to do the work. And from that space, what I wanted to share with you today are three lessons, as I reflect on my journey in evidence synthesis and meta-analysis.

So my first big takeaway has been the importance of interacting with those working in evidence synthesis who are outside my discipline. Let me start by saying that this is not an easy task. Working in and interacting with people in the Society for Research Synthesis Methodology, SRSM, for example, we see this struggle. We are all working to apply evidence synthesis to the problems in our own disciplines, and we sometimes use different terms for similar issues.
We have to spend a lot of time thinking about whether we are all talking about the same thing, or whether the issues really are different. But what it comes down to in the end, I think, is that many of the challenges we face are similar, and interacting with those outside of my discipline has been really important: applying things that I've learned from talking to people who are in health, who are biostatisticians and so forth, in my own context, which is typically the social sciences and more specifically education. One simple example, perhaps, is the set of solutions that have been developed by many in this community for literature searching and screening, to make those processes more efficient and more accurate. What I'm thinking about right now in particular is those solutions for screening the large numbers of studies that are to be considered for inclusion in a systematic review or meta-analysis. A lot of those solutions have been developed in health and in the sciences, but we in the social sciences have simply adopted them, and we haven't needed to create our own. Some of those screening tools we use regularly, and I teach them regularly in my classes. This has created great efficiencies in the social sciences, and we haven't had to expend our resources in developing these solutions.

Another example from my own work, having to do with learning across disciplines and making sure I talk to people outside my discipline, is a paper I did with a couple of colleagues, which is down in the bottom right corner. I distinctly remember sitting in a room, back when we used to sit in rooms for conferences, with my colleague Jeff Valentine. And I think, Ciara, this may have been in Northern Ireland, but I'm not sure.
We were listening to a talk reporting on work related to the paper that's in the upper left-hand corner, on outcome reporting in industry-sponsored trials of gabapentin for off-label use. If you've not heard the term, outcome reporting bias refers to the fact that some primary studies omit some of the outcomes that were actually gathered for the study. The study that I reference up in the left corner on gabapentin compared pharmaceutical protocols to their published versions, looking at changes both in the outcomes that were reported and in the characterization of those outcomes. And I remember sitting in that room with Jeff Valentine and saying to him, well, I wonder if this happens in education, and how do we figure that out, given that we don't have any kind of history in educational research of publishing protocols, or of having protocols for studies at all? That led us to really look at outcome reporting bias in education research. How we did that was by comparing dissertations to their published versions. And, no surprise, there is a relationship between statistical significance and whether or not an outcome is published, whereby statistically significant outcomes are more likely to appear in the published versions of the dissertations. So again, it's been so important in my career to make sure I talk across disciplines, understand how others are addressing similar problems in evidence synthesis, and then figure out ways to apply that in my own.

A second important influence in my career has been working on a team doing evidence synthesis. I don't think I need to tell this particular community this, but evidence synthesis is a team sport.
No single researcher can have all the expertise needed to complete a complex synthesis, since it requires deep knowledge of a particular content area, understanding of how to do a search and a meta-analysis, and any number of other skills, critically project management. When I think about my own career, it's been working in a team as a methodologist, doing evidence synthesis often on a topic of which I have no content understanding, and that has highlighted many of the challenges that methodologists and others need to address in evidence synthesis. So I have two recent pieces of work where I wanted to talk about some of the questions I currently have on these issues. One is a systematic review and meta-analysis published in the Campbell Collaboration with a close colleague, Julia Littell, looking at the effectiveness of an intervention called multisystemic therapy for social, emotional and behavioral problems in youth. The second example is a systematic review I helped with from an early career researcher, Priscilla Lu, at Southern Methodist University in Texas, estimating the correlations between the Big Five personality domains and alcohol use. My role in both of these systematic reviews, as you might expect, has been to serve as the methodologist and the main person applying the meta-analysis methods. For these two systematic reviews, published in 2021 and 2022, we were trying to apply the most recent meta-analysis models that reflect the complex structure of effect sizes in the social sciences. In the social sciences, we often have studies that report multiple effect sizes, so we have dependencies within studies. We're really looking at a hierarchical model applied to meta-analysis data where we have multiple effect sizes nested within studies.
And as I was applying these methods that some of my very smart colleagues have developed, one of the challenges we ran into was that in both of these systematic reviews there was one really big study that contributed 100 or 200 effect sizes. We ran into all kinds of estimation issues and difficulties in thinking through what kind of model to apply when we have one humongous study, a bunch of smaller studies, and this hierarchical structure. I don't think I've come up with the answer; when you look at these two papers, I don't think I have the right answer. I'm still working on that. And that has led me to think about what models we should use here, and how these newer hierarchical, multilevel models for meta-analysis with robust variance estimation fare under different structures of the data. What happens when we have one big study and a bunch of little studies? What are the conditions under which those models work best? That's led to some other collaborative work with a colleague, James Pustejovsky, and an early career researcher, Mikkel Vembye, starting very small and looking at the power of our statistical tests for the mean effect size: what happens to power when we have these very unbalanced data sets? That's going to get us a little closer to understanding the conditions under which these models operate optimally. A second question that continues to come up for me, and for many people, related to the review on the Big Five personality domains and alcohol use, has to do with how we appropriately assess study quality when we're including a range of observational studies. We just heard Neil talk a little bit about robvis, and we have very good tools and very good structures for thinking about the quality of randomized controlled trials.
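The unbalanced, nested structure described above can be sketched with a small simulation. This is an illustrative toy example, not code from either review: the study names, counts and variance components are made up, and a simple within-study aggregation stands in for the full multilevel model.

```python
import random
from collections import defaultdict

random.seed(1)

# Hypothetical structure: one large study contributing 200 effect sizes,
# nine small studies contributing 5 each.
studies = {"big_study": 200}
studies.update({f"small_{i}": 5 for i in range(1, 10)})

# Simulate dependent effect sizes: a shared study-level random effect
# induces dependence among effect sizes from the same study.
data = []
for study, k in studies.items():
    u = random.gauss(0, 0.2)                 # study-level random effect
    for _ in range(k):
        es = 0.3 + u + random.gauss(0, 0.1)  # true mean 0.3 plus error
        data.append((study, es))

# Naive approach: average all 245 effect sizes, ignoring clustering,
# so the big study dominates the estimate.
naive_mean = sum(es for _, es in data) / len(data)

# Cluster-aware approach: aggregate within studies first, then average
# the study means, so each study counts once.
by_study = defaultdict(list)
for study, es in data:
    by_study[study].append(es)
study_means = [sum(v) / len(v) for v in by_study.values()]
cluster_mean = sum(study_means) / len(study_means)
```

Under the naive average, the big study carries 200 of the 245 effect sizes, which is exactly the kind of imbalance that makes estimation and power behave badly in the multilevel models discussed here.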
But how do we think about the quality of studies that are really observational, that aren't trying to make a causal inference? That is work we continue to think about: I'm working with a group of methods editors in the Campbell Collaboration on how we measure and think about quality for non-randomized studies, and hopefully that work will eventually expand to observational studies more broadly. The last lesson I wanted to talk about is the importance of supporting open science practices in our work. As I reflect on my years in this field, my work in the Campbell Collaboration has always been open; Campbell has always been open access, so we've always had protocols and evidence synthesis materials there. But we're talking now about a much more open environment, and how important that is for systematic review. I can't overstate how much solutions for meta-analysis and evidence synthesis in R have revolutionized this field. The ability to immediately implement new methods in any part of the evidence synthesis workflow has been incredible, and, as we heard in the opening session, that matters for openness and for access. I think about when I started in this work back in 1983: when someone developed a new model for meta-analysis, we'd have to wait for it to get incorporated into some existing statistical package. Now we can draw on programs that are already there, like metafor, thank you Wolfgang, and meta, and all the other packages that exist for meta-analysis. We can build on those and implement new models almost immediately, as soon as the paper is available to the public.
And I also wanted to quickly say that open science practices are not just about being transparent and open about the assumptions we're making in our evidence syntheses. They're also about helping all of us improve our own practice. It's been incredibly important to be able to look at other people's code and find their data; that's how I've been able to learn and keep my practice up to date. In closing, I have one challenge to all of you as we start this conference, and that is to encourage you to attend a session by a speaker from outside your discipline, and to think about how that work could relate to the challenges you face in your own evidence synthesis. I can't stress enough how important those interdisciplinary interactions have been to my own work, and how important I think they are for all of us to move the field forward. We may be facing different challenges, but we all may have solutions if we are all working in what we've been calling a community of practice. So as I close, I just wanted to thank some of my closest collaborators, and those of you I haven't mentioned. On the right are my current team, Jay Morris and Kamal Middlebrook, and on the left my sort of always collaborators, Josh Polanin, Beth Tipton and Ryan Williams. Thank you to all of my collaborators, because you have enriched my work, and I think all of us working in this community are going to be able to move the whole evidence synthesis community forward. So thank you for listening. You can contact me at tpigott at gsu.edu, and my Twitter is at Terri Pigott.

Terri, thank you so much. That was such a fascinating insight into your experience, into what it means to be an interdisciplinary methodologist, and into what collaboration means. That was really the perfect start to Esmerconf. I really reflect on what you say about the need to dive into different disciplines.
I think for me, helping to organize this conference, I suddenly realized I was chatting to people who are paramedic scientists and psychologists and health scientists, and I don't even know until a while later, because we're all talking in a similar language, and that's, I think, really beautiful. I wanted to ask a quick question. And just to say, if you're watching out there and you've got a question, you can reply, as I said, to the tweet about Terri, or just tag Esmerconf 2022, and we'll try and get that to Terri now, or straight after, on Twitter. But I had a question: do you think that we as synthesists, working in a method that's pretty discipline-agnostic, have a responsibility for trying to reduce research waste across disciplines? You talked about language and consistency, but also, having that sort of pivotal position, being able to share tools between disciplines, do you think we have a role and responsibility there more generally?

Yeah, I think we do. I know we all have limited resources, and as I reflect on the tools we use in the social sciences, many of them, especially for literature searching, screening and organizing all that, have come from outside. So I think we do have a responsibility to make sure we're not duplicating efforts, but it's hard to find those spaces, except for here, which is great. There are very few spaces, in some ways, in our busy lives; we tend to be in our disciplinary conferences, but there are spaces like this that we really need to take advantage of, to see what everyone else is doing. And I think we do have a personal responsibility to make sure we are at least peeking to see what other people are doing.

Yeah, that's a really great point, thank you. And I think it reflects your point about open science as well, that it's so much easier to collaborate when you are open.
I mean, you can throw someone an email when you discover their package on GitHub and you want to adapt it or work with them, and you've immediately set up a collaboration that would take a lot longer in person, I guess. You reflected a bit on this in your talk, but over your career, what are the key changes you've observed in evidence synthesis, and where do you think we're going next? What's the future of evidence synthesis for you?

So yeah, again, I can't emphasize enough how important it is that R is sort of the basis of much of the work we're doing, because it is all open and we're all able to use it. The future? I can think about some of the things that interest me and that I think are moving forward. At least in the social sciences, we're really struggling with how to adequately analyze the really complex data we get out of quantitative evidence synthesis, and how to understand the best models to use; that's the more personal piece. Other pieces I think are important are how we think about analyzing and synthesizing other kinds of data, and getting people to recognize that we need to start synthesizing them. So, qualitative synthesis methods: I'm really excited that a lot of effort is being put right now into thinking about guidelines for that, and Ruth Garside, who I know is talking later this week, is a big part of that. And then, always, throughout my career, I've wondered how we make this accessible. We're not doing this in a vacuum; we're trying to do this so that other people can use the evidence. How do we translate our work, which tends to be complex, into ways that people can actually do things with?

Yeah, that's really great insight, thank you. Matt? Hi, we've got a question from Emily on YouTube.
What do you recommend for early career folks to get engaged with evidence synthesis, and to be comfortable learning in such an interdisciplinary way?

Oh, beginners. Okay, well, let's see. Obviously we have all these Esmerconf materials, so I recommend those. And Campbell is actually developing, and I'm really excited about this, I haven't mentioned it yet, some materials that are going to be online. Right now they're very text-based, but they'll be a sort of self-paced introduction to systematic review and meta-analysis. I have to say that I've been sort of an editor, not a writer, but a couple of my favorite writers are writing them: Julia Littell and Jeff Valentine, along with a team including Sarah Young, a literature search specialist. Anyway, those materials, I think, will be available soon. And then there are some very accessible books, at least in the social sciences, by Harris Cooper, and one, it's old now, by Lipsey and Wilson as well.

Great, thanks very much. We've got another question from Kira on the organizing team, who asks: do you think working in social science research, with wicked problems like homelessness or other cultural problems, has allowed you to keep advancing evidence synthesis statistical methodology, because these aren't simple processes to get from A to B?

Yeah, I think that has been both an advantage and a challenge. In the social sciences we do have what some people in health might call complex interventions, and homelessness is a complex problem. So the data we get, and the interventions we're looking at, have multiple dimensions, and we have to think about how to analyze that data in complex ways. And so, yes, Kira, working with you on the homelessness review got me to learn all kinds of things. So thank you.

Great, thanks very much. I also wanted to touch on, I've got a question about, big data.
One of the things that scares me a bit as a systematic reviewer is the idea that every paper I might want to include in my review might at some point in the future have its own raw data, so I won't have to summarize things from figures or from the text or tables. I'll actually have potentially huge data in every single article I include, and it's all formatted differently. How do you feel about that? Obviously it's a good thing for data and synthesis, but it's possibly even more of a headache for systematic reviewers.

Oh yeah. I can at least say with confidence that in the social sciences, no one reports things in exactly the same way in a primary study. If we add to that the availability of raw data, wow, that's really going to be difficult. So I think we're just going to have to start thinking about some standards that maybe we can set across different disciplines. But it will be exciting, because the models we can fit when we have the raw data are a lot more sensitive than the ones we can fit in a typical meta-analysis. Again, though, this is going to cause another set of complications, because, at least in health, when they think about individual participant data meta-analysis, a lot of the time they assume we have all the raw data. In the future we may have a mix of aggregated and raw data, and those models are going to be even more complex.

Yeah, that sounds like a big headache. Oh boy, well, we're going to have work to do, that's all. Yeah, I love your positivity. You're absolutely right. I had another little question. One of the things that I really love about this community is that it gives me a community of practice, a sense of belonging and family. But often I'm trying to promote evidence synthesis to people who don't really think it's exciting or fundable.
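The mix of aggregated and raw data discussed in this exchange can be sketched with a toy two-stage approach: where raw participant data is available, reduce it to a study-level effect first, then pool everything on the aggregate scale. All study names and numbers here are invented for illustration, and the unweighted pooling is a deliberate simplification of a real meta-analysis model.

```python
import statistics

# Hypothetical mix: one study with raw participant scores, two studies
# reporting only an aggregated mean difference.
raw_studies = {
    # (treatment scores, control scores)
    "study_A": ([1.2, 0.8, 1.5, 0.9], [0.4, 0.6, 0.3, 0.5]),
}
aggregated_studies = {
    "study_B": 0.45,  # reported mean difference
    "study_C": 0.60,
}

# Stage 1: reduce each raw-data study to a study-level effect
# (here a simple mean difference).
effects = dict(aggregated_studies)
for name, (treat, ctrl) in raw_studies.items():
    effects[name] = statistics.mean(treat) - statistics.mean(ctrl)

# Stage 2: pool all study-level effects on the common aggregate scale
# (unweighted mean, purely for illustration).
pooled = statistics.mean(effects.values())
```

In a real mixed IPD/aggregate analysis the pooling would weight studies by precision and model heterogeneity, which is exactly where the added complexity mentioned above comes in.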
What's your experience of being at that intersection, where in your community of practice there's a recognition that what you're doing is great, but outside you might be banging your head against the wall?

Yeah, I think we're still there. In my own area, the one I know best, education, I still feel the pushback. But one positive piece of the pandemic has been that people have turned to this kind of research when they can't be out in person collecting data. So I think we're going to see a little less pushback in the future.

Yeah, that's great. I've slightly reflected on that as well; it is nice to have a lot more requests for training. Yeah, it gives you positivity about the future. That's true. That's true. We are out of time, I'm afraid, but I could carry on chatting to you for ages, Terri, though I know it's also very early for you, so you should go back to bed. Okay, I'll have a much-deserved extra cup of coffee. Thank you so much for what was a really, really fascinating presentation. It's so interesting to hear your insights and your experience. And yeah, thank you again. Thank you too, I appreciate it. And you should take a look at the comments and the feedback on Twitter and YouTube if you get a chance; it's really, really positive. Oh, great. Of course. Thank you so much. And to everybody else, we'll see you in just under 15 minutes for the first session, on review processes A to Z. Thanks very much.