Hi everyone. This is the first video in a series about the basics of power analyses. Across the series, we're going to talk about what power analyses are, why you should do them, and issues that can arise when studies are low powered. Then we'll get into the nuts and bolts of actually doing power analyses: what you'll need, where to get that information, and how to run power analyses in a couple of different statistical software packages. We're going to work a little bit with R and a little bit with G*Power, and there will also be a bit of content about other statistical programs like SPSS or SAS, if you use those.

We're going to start really basic, just to make sure that everybody's on the same page, and define what statistical power is. Statistical power is the probability of rejecting the null hypothesis when it is actually false. As some of you may remember from basic statistics, when we're talking about hypothesis testing, there are four things that can happen. If we're doing frequentist statistical analyses, we have some null hypothesis, we run a statistical test, and based on the p-value of that test, whether it meets a certain criterion or not, we either reject the null hypothesis or fail to reject it. I've put "p greater than or less than 0.05" here because in many scientific disciplines the criterion for statistical significance is p < 0.05. So you can see that there are four different possible outcomes. When the null hypothesis is true and we reject it, we've made an error, specifically a type 1 error, which is also called a false positive. And we set the rate of false positives by setting our alpha level.
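That claim about alpha is easy to check by simulation. The series itself will use R and G*Power, but here is a minimal Python sketch; the sample size, the number of simulated studies, and the large-sample z-test are my own illustrative choices (R's t.test or G*Power would use the t distribution). When the null hypothesis is true, roughly 5% of tests should come out significant at alpha = 0.05:

```python
import math
import random
import statistics

def two_sample_p(x, y):
    """Two-sided p-value from a large-sample z-test comparing two means.
    A simplification of the usual t-test, fine for this illustration."""
    se = math.sqrt(statistics.variance(x) / len(x) + statistics.variance(y) / len(y))
    z = (statistics.mean(x) - statistics.mean(y)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
n_sims, n = 2000, 50
false_positives = 0
for _ in range(n_sims):
    # Both groups are drawn from the SAME population, so the null is true
    # and every "significant" result here is a false positive.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    if two_sample_p(a, b) < 0.05:
        false_positives += 1

print(false_positives / n_sims)  # should land close to alpha = 0.05
```

Lowering alpha (say, to 0.01) shrinks that false positive rate, which is exactly what "setting the rate of false positives by setting our alpha level" means.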
So if you're in a field where p < 0.05 is deemed statistically significant, you're testing with an alpha level of 0.05: there's a 5% chance of a false positive. On the other hand, if the null hypothesis is true and we fail to reject it, we've made a correct inference, a true negative. Our statistical test says we can't reject the null hypothesis, and the null hypothesis really is true.

Now, when the null hypothesis is false, two different things can happen. If the null hypothesis is false and we reject it, that's the other type of correct inference: a true positive. And when we talk about the power of a test, we're talking about that true positive rate, which is defined as 1 minus beta. Power is the likelihood of rejecting the null hypothesis when the null hypothesis is false, the likelihood of making that true positive. On the other hand, if the null hypothesis is false but we fail to reject it, that's a different type of error, called a type 2 error or a false negative, and that's what beta is. So when you hear people talking about trying to minimize type 1 errors and type 2 errors, what they mean is reducing the likelihood of getting either a false positive or a false negative, and increasing the likelihood of getting a true positive or a true negative.

I mentioned that power is the probability of rejecting the null hypothesis when it's false. You'll generally hear power talked about in terms of probabilities. If somebody says their study has 80% power, that means they have an 80% chance of getting a statistically significant result when the effect they're looking for is truly out there in the population.
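Power can be estimated the same way: simulate data where the effect really exists and count how often the test comes out significant. Another hedged Python sketch with illustrative numbers of my own choosing (a 0.5-standard-deviation mean difference and 64 participants per group, a combination conventionally associated with roughly 80% power for a two-sample test):

```python
import math
import random
import statistics

def two_sample_p(x, y):
    """Two-sided p-value from a large-sample z-test comparing two means."""
    se = math.sqrt(statistics.variance(x) / len(x) + statistics.variance(y) / len(y))
    z = (statistics.mean(x) - statistics.mean(y)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(2)
n_sims, n, effect = 2000, 64, 0.5   # effect: a real 0.5-SD mean difference
hits = 0
for _ in range(n_sims):
    # Here the null hypothesis is FALSE: the treated group's mean is shifted,
    # so every significant result is a true positive.
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(effect, 1) for _ in range(n)]
    if two_sample_p(control, treated) < 0.05:
        hits += 1

power = hits / n_sims
print(power)      # estimated power, roughly 0.80 for these settings
print(1 - power)  # beta, the estimated false negative rate
```

Notice how power and beta fall directly out of the same count: the simulated studies that miss the real effect are exactly the false negatives.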
To give a concrete example: suppose I were running a study to look at whether there's a height difference between men and women, which we know is true; men on average are taller than women. If my study had 80% power, that means that if I drew a sample of men and women, I would have an 80% chance of getting a statistically significant result, p < 0.05. Another way to say that: if I drew 100 samples of men and women, I should see a statistically significant result in about 80 of those samples.

Now, why is power important? Why should you care about it? Power tells you how likely you are to detect an effect when it's really out there, which matters: as scientists, we usually want to detect effects if they're really there, and power tells us how likely we are to do that. The converse, if you remember from that 2-by-2 diagram: because power equals 1 minus beta, and beta is the likelihood of a false negative, knowing the power also tells us the rate of false negatives. If we have 80% power, we'll detect a real effect 80% of the time, but we'll miss a real effect 20% of the time. So knowing the power of a study gives us two very important pieces of information.

Power is also wrapped up in replicability. Say we find an initial effect and we're interested in seeing whether it replicates, whether we can find it again in a different study. If the original study and the replication both have 80% power, we'll only find the effect in both studies 64% of the time (0.8 × 0.8 = 0.64). So the higher the power of the studies, the more likely we are to find the effect in each individual study, and the more likely we are to find it in both. As I mentioned, this is the first video.
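The 64% figure is just multiplication: for independent studies, the probability that all of them detect a real effect is the product of their individual powers. A quick sketch of that arithmetic:

```python
# Two independent studies, each with 80% power. The chance that BOTH
# come out significant is power * power.
power = 0.80
both = round(power * power, 2)
print(both)  # 0.64

# More generally, the chance of detecting a real effect in ALL k studies
# shrinks as power ** k, which is why low power compounds across replications.
for k in (1, 2, 3, 4):
    print(k, round(power ** k, 3))
```

With 50% power per study, for example, a direct replication pair would agree on a real effect only 25% of the time, which is one reason underpowered literatures look so unstable.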
In later videos, we're going to go much more in depth on all these topics; this first video is introductory, just to make sure everybody is on the same page. In the next video, we'll talk a little more about what a power analysis is, why it's important to actually go through the steps of doing one, and indications that a lot of scientific fields are not doing those power analyses.

I should mention that if you have any questions, whether about this video, future videos, or anything related, there are several ways to ask. There are two emails: you can email us at stats-consulting at sos.io or contact at sos.io, and we're more than happy to answer your questions. You can comment on the YouTube video, and you can also comment on the OSF project. The slides for this video, as well as for all the other videos in the power analysis lesson, are hosted on the Open Science Framework (OSF). If you go to the link in the description of this video, it'll take you to the OSF project that houses all these materials for the overall lesson on the basics of power analyses. You can comment right there if you have general questions, or you can go into the individual components for the different videos: clicking on a particular video's component takes you to that video's page, where you can leave a comment, which I'll see and be happy to answer, and where you can look at the information in the slides. For videos that have R scripts or examples you can run through to test your knowledge, those will also be uploaded there. So all the materials from this video and the others are there; you can download them and look back at them for your own needs. Thanks for watching.