In this video I will take a closer look at the Baron and Kenny causal steps approach to mediation. The idea of the Baron and Kenny approach is that we regress Y on X, then we regress M on X, and then we regress Y on X and M. If all these regressions show significant relationships, we conclude that there is mediation. If beta Y1, the coefficient of X in the third regression, is non-significant, we conclude that there is full mediation. So the idea is that we first check whether there is an effect to be mediated, by regressing Y on X. Then we regress M on X to assess whether M depends on X; if M doesn't depend on X, then the effect can't be mediated. Then we regress Y on M and X to see whether the mediation is full or partial.

The mediation effect is beta M1 times beta Y2. So you multiply the regression coefficient of M on X and the regression coefficient of Y on M together, and that product is the mediation effect. The reason why that is the mediation effect is the tracing rules of path analysis. When we draw the mediation model as a path diagram, the way we get from X to Y is to go from X to M, which gives us beta M1, and then from M to Y, which gives us beta Y2, and we multiply them together because we multiply everything along the path. So that gives us the mediation effect.

One problem with the mediation effect is that we need to calculate its standard error, and that can't be done using the techniques that I discussed before in the model testing part, because this is a product. The product of beta M1 and beta Y2 is non-normally distributed, and calculating the standard error requires that we make some assumptions about the distribution of these two effects. A person called Sobel proposed a formula for the standard error of the mediation effect: the square root of beta Y2 squared times the squared standard error of beta M1, plus beta M1 squared times the squared standard error of beta Y2. So that gives us the standard error. It's called the Sobel test, and there are different variations, but they generally produce results that are very similar to one another.
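The three causal steps and the Sobel test can be sketched in a few lines of numpy. This is a minimal illustration on simulated data: the variable names, effect sizes, and sample size are my own assumptions, not from the video, and the OLS helper is a bare-bones fit, not a full regression routine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data with a built-in mediation structure X -> M -> Y
# (names and effect sizes are illustrative assumptions).
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)            # M depends on X (true beta_M1 = 0.5)
y = 0.4 * m + 0.2 * x + rng.normal(size=n)  # Y depends on M (0.4) and X (0.2)

def ols(response, *predictors):
    """OLS with an intercept; returns coefficients and their standard errors."""
    X = np.column_stack([np.ones(len(response)), *predictors])
    beta, *_ = np.linalg.lstsq(X, response, rcond=None)
    resid = response - X @ beta
    sigma2 = resid @ resid / (len(response) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se

# Step 1: regress Y on X -- is there an effect to be mediated?
b1, se1 = ols(y, x)

# Step 2: regress M on X -- beta_M1 is the coefficient of X.
bm, sem = ols(m, x)
b_m1, se_m1 = bm[1], sem[1]

# Step 3: regress Y on M and X -- beta_Y2 is the coefficient of M.
by, sey = ols(y, m, x)
b_y2, se_y2 = by[1], sey[1]

# The mediation (indirect) effect and Sobel's standard error for it.
indirect = b_m1 * b_y2
sobel_se = np.sqrt(b_y2**2 * se_m1**2 + b_m1**2 * se_y2**2)
t_stat = indirect / sobel_se
```

With these effect sizes the true indirect effect is 0.5 × 0.4 = 0.2, so the estimate should land near that value and the Sobel statistic should be clearly significant.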
And that gives us a way of assessing how precise our mediation effect estimate is, and also of testing whether it is zero in the population. It allows us to calculate a t-statistic by dividing the mediation effect by this standard error, and then we compare that against the t-distribution. That approach has another problem, and the problem is again that this product can't be assumed to be normally distributed, because the product of two normally distributed variables is non-normal. Some people say that is a severe problem, and in small samples using this approach can indeed be misleading.

So how can we deal with that problem? Some people propose that we use something called bootstrapping. The problem is that we can't calculate the standard errors precisely or in an unbiased way, and the t-statistic calculated this way doesn't follow the t-distribution. So some people propose that we calculate the standard error empirically using bootstrapping. The idea of bootstrapping is that if the sample is representative of the population, then taking samples from our original sample is a process that is representative of taking samples from the actual population. So we treat our sample as the population and then take samples from our sample; these are called resamples. We take, say, a thousand independent resamples, estimate the mediation effect in each, and the standard deviation of those estimates gives us the standard error. That solves the standard error problem, with the caveat that the sample size must be large enough. It doesn't solve the problem that the ratio of the mediation effect to its standard error is not distributed as t, as it is supposed to be, because the effect is a product. So instead of calculating the t-statistic and the p-value, we calculate confidence intervals based on bootstrapping, and there are different ways of doing that.
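The resampling idea above can be sketched as follows, again on illustrative simulated data. This shows the empirical standard error and a simple percentile confidence interval; the video's recommended BCa variant instead adjusts which percentiles are read off.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data; the true indirect effect is 0.5 * 0.4 = 0.2.
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.4 * m + 0.2 * x + rng.normal(size=n)

def indirect_effect(xs, ms, ys):
    """beta_M1 * beta_Y2 from the two mediation regressions."""
    ones = np.ones(len(xs))
    b_m1 = np.linalg.lstsq(np.column_stack([ones, xs]), ms, rcond=None)[0][1]
    b_y2 = np.linalg.lstsq(np.column_stack([ones, ms, xs]), ys, rcond=None)[0][1]
    return b_m1 * b_y2

# Treat the sample as the population: draw resamples of the same size
# with replacement and re-estimate the mediation effect in each one.
boot = np.empty(1000)
for i in range(1000):
    idx = rng.integers(0, n, size=n)
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])

boot_se = boot.std(ddof=1)                 # empirical standard error
lo, hi = np.percentile(boot, [2.5, 97.5])  # simple percentile 95% CI
```

Note that each resample keeps the rows of X, M, and Y together, so the dependence structure between the variables is preserved in every resample.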
The bias-corrected and accelerated (BCa) method has been shown to work best for this particular scenario. I will talk more about bootstrapping in a separate video.
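For completeness, here is one common construction of a BCa interval (Efron's bias correction plus a jackknife-based acceleration), hand-rolled on the same kind of illustrative data. The data, the number of resamples, and the helper names are my own assumptions; in practice a library implementation such as scipy.stats.bootstrap would normally be used instead.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(2)

# Illustrative data; the true indirect effect is 0.5 * 0.4 = 0.2.
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.4 * m + 0.2 * x + rng.normal(size=n)

def indirect(idx):
    """beta_M1 * beta_Y2 estimated on the rows selected by idx."""
    xs, ms, ys = x[idx], m[idx], y[idx]
    ones = np.ones(len(idx))
    b_m1 = np.linalg.lstsq(np.column_stack([ones, xs]), ms, rcond=None)[0][1]
    b_y2 = np.linalg.lstsq(np.column_stack([ones, ms, xs]), ys, rcond=None)[0][1]
    return b_m1 * b_y2

theta_hat = indirect(np.arange(n))

# Bootstrap replicates of the mediation effect.
boot = np.array([indirect(rng.integers(0, n, size=n)) for _ in range(2000)])

# Bias correction: how far the bootstrap distribution sits from theta_hat.
nd = NormalDist()
z0 = nd.inv_cdf((boot < theta_hat).mean())

# Acceleration from the jackknife (leave-one-out) estimates.
jack = np.array([indirect(np.delete(np.arange(n), i)) for i in range(n)])
d = jack.mean() - jack
a = (d**3).sum() / (6 * (d**2).sum() ** 1.5)

def bca_level(alpha):
    """Map a nominal percentile to its BCa-adjusted percentile."""
    z = z0 + nd.inv_cdf(alpha)
    return nd.cdf(z0 + z / (1 - a * z))

lo, hi = np.quantile(boot, [bca_level(0.025), bca_level(0.975)])
```

The adjustment shifts the 2.5% and 97.5% cut points to compensate for the skewness of the bootstrap distribution, which is exactly the issue with the product of two coefficients.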