Hello, my name is Dan Quintana, and I'm very excited to be walking you through my new R package, metameta, which helps you evaluate the credibility of studies included within a meta-analysis. There are a number of ways you can do this, but one is by looking at the statistical power of these studies. That's something you could calculate from each individual study, but it's very time-consuming. With this package, I can show you how to do it using information that is commonly reported in published meta-analyses. It's also possible to include this in your workflow for a novel meta-analysis that you may be running. A really important feature of this package addresses the fact that it can be very difficult to specify your effect size of interest: my effect size of interest may be different from someone else's. So this package calculates statistical power for a range of effect sizes, which is very important for interpretation. Instead of defining power for a single true effect size, for instance the summary effect size from the meta-analysis, it calculates statistical power assuming a range of effect sizes. The package is structured so that you can input either effect sizes and standard errors, or effect sizes and confidence intervals, to use one of the data analysis functions. Either approach will give you a data output showing power for a range of effect sizes, and you can also visualize these results. So let's jump straight in. This package is on GitHub, so if you don't have the devtools package installed, install that first. Then, following these commands here, you'll be able to download the package from GitHub. So let's load the library, and let's load metafor as well.
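The installation and loading steps just described would look roughly like this (the GitHub repository path `dsquintana/metameta` is my assumption from the author's name; use the path shown on screen or in the README if it differs):

```r
# Install devtools first if you don't already have it
install.packages("devtools")

# Download metameta from GitHub (repository path assumed; check the
# on-screen commands or the package README for the exact path)
devtools::install_github("dsquintana/metameta")

library(metameta)
library(metafor)
```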
The first thing we're going to do is look at our first example data set. To use this first function, your data will have to be structured to include at least these two variables with these specific variable names: yi for effect size and sei for standard error. Here we have nine studies, extracted from a particular meta-analysis; these figures are taken exactly from that meta-analysis, which reports a measure of effect size and a measure of standard error. Again, this is something commonly reported in most meta-analyses. So we have our data set, and then with our mapower_se() function there are three arguments that need to be specified. Firstly, the data set. Secondly, the observed effect size; for this particular meta-analysis, the observed effect size was 0.178. Like I said, this package can calculate power for a range of effect sizes, but quite often people are interested in the power assuming the summary effect size from the meta-analysis is the true effect size. Now, I understand this is quite an ambitious assumption for many research fields, given that the summary effect size is often inflated for a number of reasons, but that's for a different presentation. People are still interested in this regardless, and it can at least act as a starting point for the sort of effect sizes you want to consider. The third argument is the name of the meta-analysis. So let's run this, and let's have a look at power for a range of effect sizes, using these commands here. In the output we can see the data that we inputted, and power for a range of effect sizes from 0.1 all the way to 1. Importantly, we also have our observed effect size, in this example 0.178.
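A sketch of the call just described, with made-up effect sizes and standard errors standing in for the extracted figures (argument names follow the walkthrough; check `?mapower_se` for the exact signature):

```r
library(metameta)

# Nine studies: yi = effect size, sei = standard error.
# These values are illustrative, not the figures from the video.
dat_se <- data.frame(
  yi  = c(0.38, 0.41, -0.14, 0.63, 0.22, 0.08, 0.51, 0.12, 0.33),
  sei = c(0.22, 0.18, 0.25, 0.30, 0.21, 0.28, 0.19, 0.24, 0.27)
)

# Three arguments: the data set, the observed summary effect size,
# and a name for the meta-analysis
power_se <- mapower_se(
  dat         = dat_se,
  observed_es = 0.178,
  name        = "Example meta-analysis 1"
)

# Power for each study across true effect sizes from 0.1 to 1,
# plus power at the observed effect size of 0.178
power_se$dat
```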
Looking at this, we can see that even when effect sizes start getting quite large, 0.4, 0.5, 0.6, power is still quite low for all of these individual studies. Only when we assume the true effect size is around 0.8 do some of the studies begin to get close to having at least 80% power. Now, you may have a different threshold for what you would consider sufficient statistical power, but for the purposes of this presentation I'm going to assume 80%. The other thing we can do is easily calculate the median power for a range of effect sizes, which provides a single summary number for this body of studies. So we can extract the median and have a look: here we have the median power for these nine studies across a range of effect sizes all the way from 0.1 to 1, along with the power assuming that the observed effect size is the true effect size. Again, only once effect sizes get to around 0.8 to 0.9 do we start to see power around 80%. So for the body of research included in this meta-analysis, if you're satisfied that effect sizes of around 0.8 to 1 are plausible, then things are looking pretty good. But if you think the true effect sizes are more likely to be quite small, then for this body of research the power is very, very low. Now let's visualize this using the firepower() function. We can create a plot, and let's zoom in so we can see this visualization of power for a range of effect sizes. Within this function I've also specified the label using the es argument: the label is Hedges' g. If you don't do this, it will just give a generic effect size label, but here, because the effect size we're working with is Hedges' g, you can specify this.
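The median-power and plotting steps might look like this (the `power_median_dat` element name and the list-based `firepower()` interface are my assumptions from the walkthrough; inspect the returned object with `str()` if your version differs):

```r
library(metameta)

# Re-create the mapower_se() output so this snippet runs on its own
# (illustrative values, as before)
dat_se <- data.frame(
  yi  = c(0.38, 0.41, -0.14, 0.63, 0.22, 0.08, 0.51, 0.12, 0.33),
  sei = c(0.22, 0.18, 0.25, 0.30, 0.21, 0.28, 0.19, 0.24, 0.27)
)
power_se <- mapower_se(dat = dat_se, observed_es = 0.178,
                       name = "Example meta-analysis 1")

# Median power across the nine studies for each assumed true effect size
med_se <- power_se$power_median_dat
med_se

# Firepower plot; es sets the effect size label (here Hedges' g),
# otherwise a generic label is used
firepower(list(med_se), es = "Hedges' g")
```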
So we can see from this median data that only once Hedges' g values get quite high does power hover around the 80 percent mark. Okay, let's use a second example. For this second example we have extracted effect sizes and 95 percent confidence intervals, which are commonly reported in many meta-analyses. Let's have a look at the data. We have our data set here, and the columns have to be structured this way: one column called yi, the effect size; one column called lower, the lower bound of the confidence interval; and one column called upper, the upper bound. So we're using the function mapower_ul() (ul for upper, lower), we've included our observed effect size of 0.08, and we've also named our meta-analysis. Let's run that and have a look. Again, we have our data, which is power for a range of effect sizes, including our observed effect size. Things here are looking a little bit better: for a few studies, assuming a true effect size of 0.4, we're starting to hit power of around 80 percent. So this body of studies is performing a little bit better. Let's have a look at the medians, and we can see the same thing there: whereas previously sufficient power required quite large effect sizes, here it is reached at quite small ones. A useful feature of this package is that you can also combine different firepower plots. Here we're combining the two meta-analyses that we've done, and we can see that one of them includes studies that can detect a wider range of effect sizes. As you'll notice, by default the range of effect sizes is from 0.1 to 1. I've chosen this range because these are effect sizes commonly reported in the literature, but of course some research fields, and some types of effect sizes, call for different ranges.
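A sketch of the confidence-interval workflow and the combined plot (the yi/lower/upper column names follow the walkthrough; the data values are made up, and the list-based `firepower()` call and `power_median_dat` element name are my assumptions):

```r
library(metameta)

# For mapower_ul(), the data need columns yi, lower, and upper
# (95% confidence interval bounds); values here are illustrative
dat_ul <- data.frame(
  yi    = c(0.12, 0.35, 0.05, 0.48, 0.21, 0.09),
  lower = c(-0.10, 0.11, -0.21, 0.22, -0.02, -0.15),
  upper = c(0.34, 0.59, 0.31, 0.74, 0.44, 0.33)
)

power_ul <- mapower_ul(
  dat         = dat_ul,
  observed_es = 0.08,
  name        = "Example meta-analysis 2"
)
power_ul$dat

# Re-create the first example's output so the combined plot runs on its own
dat_se <- data.frame(
  yi  = c(0.38, 0.41, -0.14, 0.63, 0.22, 0.08, 0.51, 0.12, 0.33),
  sei = c(0.22, 0.18, 0.25, 0.30, 0.21, 0.28, 0.19, 0.24, 0.27)
)
power_se <- mapower_se(dat = dat_se, observed_es = 0.178,
                       name = "Example meta-analysis 1")

# Combine firepower plots for two meta-analyses by passing both
# median-power tables in a single list
firepower(list(power_se$power_median_dat, power_ul$power_median_dat))
```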
If you're dealing with Pearson's r, for instance, your effect sizes of interest may be much larger or much smaller, so there is a way you can adjust these ranges. Let's go back and have a look. You can adjust the effect size range; I've actually gotten ahead of myself with my script. Don't you love live recording, or recording all in one go? Let's go back to line 34, where we can specify an argument called size. If you specify "small", this will give you a range, let's zoom in, from 0.05 all the way to 0.5. Here we can see that power is quite low for this restricted range of effect sizes; if the true effect size were much larger, it would be a different story. So in some circumstances you may want to look at this restricted range from 0.05 to 0.5, say if you have a different type of effect size or if your effect sizes of interest are relatively small. The default size is "medium", which is 0.1 to 1, but there's also the smaller range I'm showing here, and a larger range if you want to use that as well. So that is the metameta package. I hope you have fun meta-analysing, whether you're extracting data from published meta-analyses or using metameta with novel meta-analyses that you're running yourself. Enjoy.
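To close, here is how the size argument just described might be used (I'm assuming size is passed to `mapower_se()` itself; check the help page if your version places it elsewhere):

```r
library(metameta)

# Illustrative values, as in the earlier examples
dat_se <- data.frame(
  yi  = c(0.38, 0.41, -0.14, 0.63, 0.22, 0.08, 0.51, 0.12, 0.33),
  sei = c(0.22, 0.18, 0.25, 0.30, 0.21, 0.28, 0.19, 0.24, 0.27)
)

# size = "small" restricts the assumed true effect sizes to 0.05-0.5;
# the default "medium" spans 0.1-1, and "large" covers a wider range
power_small <- mapower_se(dat = dat_se, observed_es = 0.178,
                          name = "Example meta-analysis 1",
                          size = "small")
power_small$dat
```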