When you're looking at the number of cases in either of two categories, you can do the binomial test, a very simple test, and we've shown that elsewhere. But when you have more than two categories, still within a single variable or factor, then you use the chi-squared goodness of fit test. It's kind of a silly long name, but it's the test for comparing multiple categories within the same variable.

Let me give you an easy demonstration using the state data that you can download from datalab.cc. In this example, I'm going to use the variable called psychRegions. It's based on psychological research that builds personality profiles for states in the United States. We'll start with an exploration to get basic descriptives. All I'm going to do is pick psychRegions and move it over to Variables. Then I'll change the statistics; we don't need these ones, because those are for quantitative or continuous variables. There we go. I'll get the frequency table and a bar plot. Really simple stuff.

Now, of the 48 contiguous states the researchers looked at, they classified 24 of them, exactly half, as "friendly and conventional." That's our biggest bar right here. I'm sorry the labels are overlapping; I'm sure that will be fixed in a later release of jamovi. The next category is "relaxed and creative," which includes 10 of the 48 states, about 21%. And the last category is "temperamental and uninhibited," with 14 of the 48 states, about 29%.

So we can ask, for example, whether the states are evenly distributed across these three categories. We can tell by looking at the bars that obviously they're not all the same; the question is whether that difference is statistically significant. And that's where we do the chi-squared goodness of fit test. Now, there's an important choice we get to make, and that's how we define our null values.
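As a quick sanity check on those percentages, here's a minimal Python sketch. The counts are the ones reported in the video; the formatting of the category names is just for convenience:

```python
# Frequencies reported for the 48 contiguous states (counts from the video).
counts = {
    "friendly & conventional": 24,
    "relaxed & creative": 10,
    "temperamental & uninhibited": 14,
}

total = sum(counts.values())  # 48 states

# Each category's share of the total, as a percentage
for name, n in counts.items():
    print(f"{name}: {n} states ({n / total:.1%})")
```

That prints 50.0%, 20.8%, and 29.2%, which is where the "about 21%" and "about 29%" figures above come from.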
By default, it's going to assume equal frequencies, that is, equal proportions in each category. So let's come to Frequencies here and go to N Outcomes; that means two or more outcomes, any number of them. And that symbol right there is chi-squared. It looks like an X, but it's actually the Greek letter chi. Chi-squared goodness of fit. So we're going to click on that one.

All we need to do is take our variable, psychRegions, and put it right here in Variables. Now, this option for counts is for when you have a summary table in your dataset, and I'll show you in a later video how that can work. But because we have raw data, where everything is listed as one row per case, which is the most common setup, that's the kind we're going to use right here.

What we have over here is a proportions table, where the proportions are 0.500 (that's 50%), 0.208, and 0.292. And the chi-squared goodness of fit gives an inferential result: the calculated value of chi-squared is 6.5, which with two degrees of freedom gives us a p-value of 0.039. The standard cutoff for p, the probability we use for significance testing, is 0.05, so this would be a statistically significant finding.

But I want to show you two other things we can do. One is: what exactly is the algorithm comparing these values against? Well, it's comparing them against expected counts. And the way it gets expected counts right now is it takes however many cases you have and splits them evenly across the categories. Forty-eight states divide evenly into three categories with 16 each. What the chi-squared test does is look at the deviation between 24 and 16, 10 and 16, and 14 and 16; it squares each deviation, divides by the expected count, and sums the results, and that's where the value of chi-squared comes from. And so if we are asking for exactly the same number of states in each category, we have a result that is significantly different from that expectation.
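To make that arithmetic concrete, here's a small Python sketch that reproduces the equal-frequencies result from scratch. It uses only the standard library; the exp(−x/2) formula is the exact survival function of a chi-squared distribution with 2 degrees of freedom, which is all we need for three categories:

```python
import math

observed = [24, 10, 14]   # friendly, relaxed, temperamental
expected = [16, 16, 16]   # 48 states split evenly across 3 categories

# Chi-squared statistic: sum of (observed - expected)^2 / expected
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# With df = 2, the chi-squared survival function is exactly exp(-x / 2)
p_value = math.exp(-chi2 / 2)

print(f"chi-squared = {chi2:.2f}, p = {p_value:.3f}")
# chi-squared = 6.50, p = 0.039
```

If you have SciPy installed, `scipy.stats.chisquare(observed)` gives the same statistic and p-value, since it also defaults to equal expected frequencies.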
But let me close this and do another analysis where we're not assuming strict equality across the three categories. There are lots of situations where you expect a different number of people in each category. For example, if you're looking at the number of left-handed and right-handed people, there are more right-handed people overall, so you don't expect an even split. So there is a way to specify null values other than strict equality.

Let me put psychRegions back in here, and we'll still get expected counts. But now I'm going to click on this option to get Expected Proportions. You can see that by default it's going to split the expectation evenly across the three categories, 33.3% in each, but I can put in different values. You enter them as ratios here, so something like 2 to 1 or 3 to 1, but if you want to, you can enter them as percentages. So let's put 60 here, 25 here, and 15 here; that adds up to 100, and it gives us the counts we would expect in each category.

And now you see we have different expected values, and we still have a statistically significant result. Up above, the expected values were all 16, the same in each case; here, the expected values change from one category to the next according to the proportions I gave it. It's still a statistically significant result; in fact, there's a slightly greater deviation than we had before.

The idea here is that we have one factor: which personality category a state is put into. We have three options: friendly and conventional, relaxed and creative, or temperamental and uninhibited. And depending on how we want to set up our null values, we can compare our observed frequencies, how many states there are in each category, to what we would expect if the null hypothesis of only random variation were true.
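The same sketch works for custom null values; you just turn the entered percentages into expected counts first. The 60/25/15 split here is the example entered in the video:

```python
import math

observed = [24, 10, 14]
null_props = [0.60, 0.25, 0.15]   # expected proportions entered in jamovi
n = sum(observed)                 # 48 states

# Expected counts under the custom null: 28.8, 12.0, 7.2
expected = [p * n for p in null_props]

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
p_value = math.exp(-chi2 / 2)     # exact for df = 2

print(f"chi-squared = {chi2:.2f}, p = {p_value:.3f}")
```

This prints a chi-squared of about 7.56 with p ≈ 0.023, consistent with the "slightly greater deviation" than the equal-proportions test (6.50) noted above.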
And in both cases, we find a statistically significant deviation from the null values, using the proportions table and the chi-squared goodness of fit test in jamovi. Thank you.