In the previous video we had a Heywood case in a factor analysis. Let's take a look at what causes a Heywood case and how you would interpret one if you get one in your own research.

The idea behind a Heywood case and admissibility is that all variances must be positive, because a variance quantifies the degree of variation, and something can't have negative variation. It's like you can't have a negative length, for example.

Then this is our example: we have a three-indicator factor model here. We have the model-implied correlation matrix here, we have the empirical correlation matrix here, and we estimate the factor model. So we get factor loadings. These are standardized, so the factor is scaled by setting the variance of the factor to one. We can see that we have a standardized loading, a correlation, that exceeds one, which is not possible, and we have a variance that is below zero, which is not possible either. So this is the Heywood case: a negative error variance, and variances can't be negative. It is inadmissible because it's an impossible solution.

Now what do we do with it, and why does it occur? In this case it occurs because of sampling error. The correlations here are never exactly at the population values, and sometimes it happens that we get negative estimates. The reason is that if we repeat this (this is a simulated data set), if we repeat the estimation of this factor model over and over, the real error variance is 0.19 and the real factor loading is 0.9. If we estimate this factor loading, which has the real value of 0.9, many, many times, and we have an unbiased estimator, then the estimates are correct on average. So the estimates are centered around the correct population value of 0.9. If our sample size is small, then any individual estimate is not exactly 0.9, but it is somewhere around 0.9.
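To make concrete how a loading above one and a negative error variance can fall out of the estimation, here is a small sketch using the classic three-indicator identity for a one-factor model, where the squared standardized loading of the first indicator equals r12·r13/r23. The correlation values below are invented for illustration; they are not the ones from the video.

```python
# Sketch: how a Heywood case appears in a three-indicator, one-factor model.
# With three indicators, the squared standardized loading of indicator 1
# satisfies lambda1^2 = r12 * r13 / r23 (Spearman's classic identity).
# The correlations here are made-up illustration values, not from the video.

def loading_and_error_variance(r12, r13, r23):
    """Return (lambda1^2, error variance) for indicator 1, standardized scale."""
    lam1_sq = r12 * r13 / r23
    # Standardized indicator variance = loading^2 + error variance = 1
    return lam1_sq, 1.0 - lam1_sq

# A sample correlation pattern that produces an inadmissible solution:
lam1_sq, theta1 = loading_and_error_variance(r12=0.95, r13=0.85, r23=0.75)
print(f"lambda1^2 = {lam1_sq:.3f}")      # 1.077 -> implied loading exceeds 1
print(f"error variance = {theta1:.3f}")  # -0.077 -> negative: a Heywood case
```

Nothing in the estimation machinery forces r12·r13/r23 to stay below one, so nothing forces the implied error variance to stay positive; it is the data, not the formula, that decides.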
If the estimates are also normally distributed, then we have this negative tail here and we also have this positive tail here. And we can see that if some estimates fall below 0.8, then because of unbiasedness and normality, some estimates also go above 1. So if you have a very good estimator, the population loading is very large or the population error variance is very small, and your sample size is small, then because of the normality and unbiasedness of the estimates you will get these inadmissible results.

So what do you do about it? Well, there are two things that can cause a Heywood case. One is sampling error: with a highly reliable indicator and a small sample, we could estimate the true error variance of 0.19 as negative. The other thing a Heywood case can indicate is that your model is severely mis-specified: the factors that you are specifying are not actually the correct factors, so you are specifying the factor structure incorrectly. That can cause some of the estimates to become inadmissible as well.

So how do you know which one is the case? Is it a symptom of model mis-specification, or is it just because you have an unbiased, normally distributed estimator and a population value that is close to the maximum or minimum? You don't know for sure, but one thing is sure: if you have an error variance of, let's say, minus 2 while the factor variance is 1, then that cannot be explained by small sampling fluctuations. So if your estimated error variances are way below 0, that's an indication of a problem. If they are only slightly below 0, then you could say that maybe the population value is actually a small positive number and this is just a small sampling fluctuation. You don't know for sure, but if the values are only slightly negative, then I would be okay with you just concluding that the indicator is highly reliable.
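The sampling-error mechanism described above can be sketched with a small simulation: draw many small samples from a one-factor model with a true loading of 0.9 (so a true error variance of 0.19), estimate the error variance in each sample, and count how often the estimate comes out negative. The sample size, replication count, and seed below are arbitrary illustration choices.

```python
import numpy as np

# Sketch of the sampling-error mechanism behind Heywood cases: with a highly
# reliable indicator (true loading 0.9, true error variance 0.19) and a small
# sample, the estimated error variance sometimes lands below zero purely by
# chance. Sample size, replication count, and seed are arbitrary choices.

rng = np.random.default_rng(1)
true_loading, n, reps = 0.9, 20, 2000
err_sd = np.sqrt(1 - true_loading**2)  # sqrt(0.19), standardized indicators

estimates = []
for _ in range(reps):
    eta = rng.standard_normal(n)  # factor scores
    # Three parallel indicators: x_i = 0.9 * eta + error
    x = true_loading * eta[:, None] + err_sd * rng.standard_normal((n, 3))
    r = np.corrcoef(x, rowvar=False)           # 3x3 sample correlation matrix
    lam1_sq = r[0, 1] * r[0, 2] / r[1, 2]      # Spearman's identity
    estimates.append(1.0 - lam1_sq)            # estimated error variance

estimates = np.array(estimates)
frac_heywood = np.mean(estimates < 0)
print(f"mean estimated error variance: {estimates.mean():.3f}")
print(f"fraction of Heywood cases: {frac_heywood:.3f}")
```

The estimates center near the true 0.19, but with n = 20 a noticeable minority fall below zero, which is exactly the slightly-negative, sampling-fluctuation situation; an estimate like minus 2 would never arise this way.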