Welcome to MetaLab: interactive tools for conducting and exploring community-augmented meta-analyses in developmental psychology. I'm presenting today on behalf of the MetaLab consortium. MetaLab is an interactive platform that hosts community-augmented meta-analyses in developmental psychology, covering topics across language and cognitive development in babies and children. In this presentation, I'll give a tour of the tools we have for contributing to and exploring MetaLab datasets.

Meta-analyses are incredibly valuable for providing a synthesized overview of the body of evidence on a research topic. This synthesis can be used to inform future research: meta-analytic data can expose research gaps, identify which methods or experimental conditions yield more robust effects, and be used to estimate power. This means there's value in a meta-analytic dataset beyond what the authors have scoped to report in their publication. Researchers planning new experiments will have targeted questions about the nature of existing evidence that the meta-analysis has the potential to answer. But even if the meta-analytic dataset is made public, it takes familiarity with the data structure, knowledge of statistical software, and meta-analytic methods to run new analyses and create plots. Realistically, how many researchers are going to put in that effort just to choose their methods or sample size? Considering how costly meta-analyses are to conduct, we really want to make sure researchers can take full advantage of the value they provide. To make exploring the data much quicker and more accessible, we've created a visualization tool and a power analysis tool.

Meta-analyses are also essentially outdated as soon as a new study emerges. A community-augmented meta-analysis is one where anyone from the research community, not just the authors of the original meta-analysis, can contribute new data.
While it's a nice idea that the community can contribute new data, it takes deliberate effort to make sure people actually do, and one barrier is that new data must be compatible with the existing data structure. A step we've taken at MetaLab to make this technically more accessible is a data validator with a graphical user interface that tells you whether your dataset complies with the MetaLab structure. Today, I'll walk you through these user interfaces on the MetaLab website, deployed as Shiny apps.

This is the visualization tool. We start by selecting a dataset; let's pick infant-directed speech preference. This topic looks at the extent to which babies prefer listening to people who speak in infant-directed speech, or what we might call baby talk, versus speaking normally as they would to other adults. Infant-directed speech differs from adult-directed speech in properties like intonation, pitch, and speed. Now, there's no evidence that infant-directed speech hinders language development, nor that it's necessary for successful language development, but researchers are interested in questions like whether infant-directed speech has properties that are valuable or desirable to babies, and in cross-cultural differences in how it's used.

First, we're just looking at overall effect sizes. Here we have a forest plot of the effect sizes from each experiment, a scatter plot of effect sizes by age, a funnel plot for appraising publication bias, and a violin plot of effect size density. And here we see a summary of the meta-analytic model. It shows that across all experiments, the overall effect size is about 0.6 for babies preferring infant-directed over adult-directed speech, with a 95% confidence interval spanning about 0.4 to 0.8. We can also see the actual model output here. Now let's select some moderators.
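Before turning to moderators: for anyone who wants to reproduce that kind of overall summary outside the app, here is a minimal sketch of one common random-effects pooling step (DerSimonian-Laird). The effect sizes and variances below are hypothetical, not the actual infant-directed speech data, and MetaLab's own analyses run in R, so this Python version only illustrates the idea of the model being summarized.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool effect sizes with a DerSimonian-Laird random-effects model."""
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Q statistic measures heterogeneity around the fixed-effect estimate
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance, floored at 0
    w_re = [1 / (v + tau2) for v in variances]
    est = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return est, (est - 1.96 * se, est + 1.96 * se)

# Hypothetical per-experiment effect sizes (Cohen's d) and sampling variances
effects = [0.8, 0.4, 0.7, 0.5]
variances = [0.04, 0.09, 0.06, 0.05]
est, ci = dersimonian_laird(effects, variances)
```

With these made-up inputs the pooled estimate lands around 0.63 with a confidence interval of roughly 0.40 to 0.86, the same shape of summary the tool displays.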
Looking at the scatter plot's linear fit, the effect seems to decrease with age, approaching a null effect of no preference either way by about 18 months. So let's specifically select the moderator of age. When we do, and check the meta-analytic model, we actually see no effect of age. So maybe the linear trend in the plot just reflects fewer studies having investigated older babies, plus a few outliers here. It looks like we really need more data to tell whether there's a real moderating effect of age.

It might also matter whose voice the infant hears in these experiments, so here we select the moderator of speaker. This shows that most research has used the voices of unfamiliar women. We might want more experiments using recordings of the child's mother to see if there's a difference there, even though we can imagine some methodological challenges compared to using recordings of an unfamiliar person. And almost no experiments have looked at men's voices. The model shows no significant effect of speaker type, but the confidence intervals are large. Overall, the preference for infant-directed speech seems quite robust regardless of these moderators, but we've seen where future research might be helpful.

Now we'll move on to the power analysis tool. Again, we select the dataset we want, so let's keep going with infant-directed speech preference. Straight away, we see that the overall effect size is 0.61, which requires 21 babies for 80% power at an alpha level of 0.05. Or we can see in the plot below the power we would get for any sample size, say if we wanted to achieve 90% power instead. Now let's add in some moderators. Based on our exploration of the visualizations, maybe we want to test older babies, so let's say we're testing 18-month-olds. Now the estimated effect is smaller.
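As a quick aside, the 21-babies figure above can be approximated with a back-of-the-envelope power calculation: a two-sided one-sample test for effect size d under the normal approximation. This is a sketch of the kind of calculation involved, not necessarily the exact formula the MetaLab tool uses.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def approx_power(d, n, alpha_crit=1.959964):
    """Two-sided one-sample power for effect size d at alpha = .05,
    ignoring the negligible opposite tail."""
    return norm_cdf(d * math.sqrt(n) - alpha_crit)

power_21 = approx_power(0.61, 21)  # roughly 0.80 under this approximation
```

A t-based calculation (as in common R power packages) gives a slightly larger required sample, which is one reason to treat this as an approximation.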
So we would need 45 babies for a good chance of detecting a non-null effect. Let's see whether any methodological choices might increase effect sizes so we could test fewer babies. Eye tracking measures babies' looking times at a screen as a proxy for their interest in the voices they're hearing. This method is expected to yield a much smaller effect, so we'd instead need 73 babies. Behavioral methods involve babies turning their heads toward where the voice is coming from. In contrast to eye tracking, this yields much larger effect sizes, pretty close to the overall effect we saw before, so we could test 42 babies for sufficient power using this method. It looks like a behavioral method is the way to go.

Now let's say I've conducted the experiment on infant-directed speech preference, testing 42 18-month-olds using a behavioral paradigm and recordings of the mother's and father's voices, and I have new data to contribute to MetaLab. Here I've copied MetaLab's template of their data structure and entered the details of my experiments. This includes experimental details, details about the participants, like their native language and age, and the results of my studies: the means and standard deviations of the infant-directed and adult-directed conditions.

Now I can use MetaLab's data validator to check whether I've entered the details correctly. To do this, I just take the URL and enter it into the data validator; alternatively, I could upload a CSV file and the process would be exactly the same. It looks like we have some issues: response mode, exposure phase, and mean age haven't been entered correctly. Let's go back to the spreadsheet and see where the problems are, starting with response mode and exposure phase. Okay, it looks like I've used Australian spelling conventions for "behavior" and "familiarization", but these columns only accept pre-specified entries, and we use US spelling conventions.
So I'll just make those changes, go back to the validator, and click Validate again. Great, we can see immediately that that has been fixed. Now let's see what's wrong with mean age. Okay, I've added "days" after the age, but this column only accepts numerical values, so that isn't necessary or even permissible; all values in this column need to be in days, with no additional characters. Now the data validator says we're good to go.

Now I can contact MetaLab to let them know I have new data, and they can add it to the dataset for me. The visualization and power analysis tools will then include those new data points, even though they weren't part of the original meta-analysis. Here I've given an example of someone adding data points to an existing MetaLab dataset, but the data validator works in exactly the same way for someone adding a whole new meta-analytic dataset on a new topic, making sure the data is in the right structure before it's added to MetaLab for the first time.

As you can see, these tools are quick and easy for users. They can be used to explore meta-analytic datasets, which can inform future studies, and they can help someone add a new dataset or new data points to MetaLab. These new data points then feed back into the visualization and power analysis tools, which means the most current evidence is available and synthesized for a given topic. For people interested in using tools like these in their own areas of research, MetaLab has had one spin-off database for community-augmented meta-analysis in a different field: voice patterns in neuropsychiatric disorders. MetaVoice is based on the ideas, code, and structure of MetaLab and has the same visualization and power analysis tools. So the concepts and infrastructure I've presented today could be leveraged in your own area of research.
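The kinds of checks the validator performed in that walkthrough, controlled-vocabulary columns and numeric-only columns, can be sketched in a few lines. The column names and allowed values below are hypothetical stand-ins chosen to mirror the errors seen in the demo; consult the MetaLab template for the real field specification.

```python
import csv
import io

# Hypothetical column rules; the real MetaLab field spec may differ
ALLOWED = {
    "response_mode": {"behavior", "eye-tracking"},
    "exposure_phase": {"familiarization", "habituation", "test_only"},
}
NUMERIC = {"mean_age"}  # age must be a plain number of days

def validate(rows):
    """Return a list of human-readable validation errors."""
    errors = []
    for i, row in enumerate(rows, start=1):
        for col, allowed in ALLOWED.items():
            if row[col] not in allowed:
                errors.append(f"row {i}: {col} must be one of {sorted(allowed)}")
        for col in NUMERIC:
            try:
                float(row[col])
            except ValueError:
                errors.append(f"row {i}: {col} must be numeric (days)")
    return errors

# A row with Australian spellings and a non-numeric age, as in the demo
data = io.StringIO(
    "response_mode,exposure_phase,mean_age\n"
    "behaviour,familiarisation,547 days\n"
)
problems = validate(list(csv.DictReader(data)))
```

After correcting the row to "behavior", "familiarization", and "547", the same call returns an empty list, which corresponds to the validator's all-good-to-go state.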
The links to the MetaLab website and the GitHub repo are included on each slide, so if you're interested in learning more, check those out; we always welcome collaborations. Please also check out our ESMA 2021 presentation on our in-development MetaLab R package, and we have a preprint of a tutorial on how to conduct transparent, reproducible meta-analyses using the MetaLab framework. I want to thank the MetaLab team, all authors and contributors to the meta-analyses, and the 45,000 babies and children who participated in the original studies, and their parents. Thanks for listening, and please get in touch with any questions.