Any time you're analyzing data, you are making comparisons. You may be making them explicitly, or you may be making them implicitly. The comparison that's usually most important in research is, for instance, a control or baseline condition versus some sort of experimental or manipulated or intervention condition. The idea is that if people are randomly assigned to one condition or another, then the differences between people balance out and you can look at the effect of just that manipulation. But a really interesting and powerful alternative to this, powerful meaning statistically powerful, easier to find the effect with fewer people, is what's called a repeated measures design. This is where everybody in your study gets to serve as their own control, their own comparison or baseline. The way you do that is by gathering data from them in more than one condition; if you have four conditions, you get data in all four conditions. Then you compare the change from one condition to another for each person. Again, it's a much more powerful design, if your study allows for it. Now, the example data set that I want to use is the bugs data set. This is a study where everybody was asked to make evaluations about bugs, meaning insects, in four different categories. They looked at insects that were either low on disgust and low on fright, low on disgust and high on fright, high on disgust and low on fright, or high on disgust and high on fright. The question they're being asked is how much they want to get rid of this bug. The scale goes from zero, meaning not at all (butterflies are very pretty; nobody wants to get rid of them), up to the top of the scale for bugs that are high on disgust and high on fright, which might be really big cockroaches or something like that, and people want to get rid of them. What this means is that people are making evaluations in each of these different categories, and they get to serve as their own control.
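To see why letting each person serve as their own control adds power, here's a minimal sketch with made-up numbers (not the actual bugs data): people differ a lot in their baseline ratings, but the within-person change is consistent.

```python
# Hypothetical ratings (0-10 scale) for three people in two conditions.
# People differ a lot in their baselines, but each person's *change*
# from one condition to the other is quite consistent.
ratings = {
    # person: (low-fright rating, high-fright rating)
    "p1": (2.0, 4.0),
    "p2": (5.0, 7.5),
    "p3": (8.0, 9.5),
}

# Raw scores are spread out across people (2.0 up to 9.5)...
low = [a for a, b in ratings.values()]
high = [b for a, b in ratings.values()]

# ...but each person serves as their own control: the within-person
# differences are tightly clustered, so the effect is easy to detect
# with fewer people than a between-subjects design would need.
diffs = [b - a for a, b in ratings.values()]
print(diffs)  # [2.0, 2.5, 1.5]
```

The repeated measures ANOVA exploits exactly this: between-person variability is removed from the error term before the conditions are compared.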
And this allows for extra power, although it also introduces a lot of additional complications into the analysis. One of the nice things about jamovi is that it has the ability to do this kind of repeated measures analysis of variance built in. In other programs that functionality may not be there, or you may have to pay extra for it, but here we have it right away. Now, as in all the analyses, before you do the actual analysis you want to do some background checks. So let's take a quick look at these four outcome variables. I'm going to come here to Exploration and Descriptives. Even though we've looked at these in other videos, it's worth doing it again right here. We're going to get some statistical summaries, but I actually find it a lot easier to use the plots, so I'm going to come down to the plots and click on Density. Now, unfortunately, I can't get side-by-side density plots for each of these; I would like that, and maybe it'll be available in a future version of jamovi. But right now, what I can do is look at these distributions and compare them one to the other. Up here is low disgust, low fright, where you see it bumps up against 10, but it goes all the way down to zero. For low disgust, high fright, almost nobody goes down to zero; we get things a lot closer to 10. High disgust, low fright is, again, pretty close to 10. And for high disgust, high fright, basically everybody wants to get rid of those bugs. But now what we can do is, instead of looking at the distributions overall, look at the changes from one to another. So I'm going to come here to ANOVA, the analysis of variance, and I'm going to select Repeated Measures ANOVA.
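The background check described above can also be done numerically. A quick sketch, using hypothetical ratings rather than the real bugs data, of the kind of per-condition summary that complements the density plots:

```python
import statistics

# Hypothetical ratings (0-10) for each of the four bug categories,
# standing in for the LDLF/LDHF/HDLF/HDHF columns in the bugs data.
conditions = {
    "LDLF": [1, 3, 5, 7, 9],    # low disgust, low fright: full spread
    "LDHF": [4, 6, 7, 8, 9],    # low disgust, high fright
    "HDLF": [5, 6, 7, 9, 10],   # high disgust, low fright
    "HDHF": [8, 9, 9, 10, 10],  # high disgust, high fright: piled near 10
}

# Mean and spread per condition: the numeric counterpart of
# eyeballing the four density plots side by side.
for name, scores in conditions.items():
    print(name, statistics.mean(scores), round(statistics.stdev(scores), 2))
```

The pattern to look for matches the plots: means creep upward and the spread tightens as disgust and fright increase.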
Now, the way you set this up may not be immediately obvious if you haven't done this before, because you have to specify how the various measurements fall into the categories, or overall groups, that you want to combine. It's asking for my repeated measures factors, and what I have is disgust and fright. So I'm going to double-click on this first item right here, and I'm going to write the word disgust, and then hit return. It asks me for the first level of disgust; I'm going to call it low disgust. It might not be high and low for you; you might be doing right or left, or up or down, or something like that. And I'm going to do that for the next one, high disgust. Now I'm going to skip over level three and go to RM Factor 2, the second repeated measures factor, which is fright. For that, I put low fright, and then high fright. By the way, you see that it's filling in over here, and it's also coming down right here. What it wants me to do now is come over and find the variables that correspond to each combination of these factors. So which variable has the outcome for low disgust, low fright? Fortunately, the names are abbreviated: LDLF stands for low disgust, low fright. So I just drag that over. Then low disgust, high fright goes right here; high disgust, low fright, HDLF, goes there; and finally, high disgust, high fright goes right down here. So now it knows where the data is, it knows how to parse them into the variables that we're interested in, and it does the analysis right here. Now, if we want to, we can put in a between subjects factor like gender as well; that makes it a lot more complicated, so I'm going to leave it out for right now. But what you see here is that disgust changed how much people want to get rid of the bugs, fright changed how much they wanted to get rid of them, and there wasn't really an interaction between them: the effects of disgust and fright didn't depend on each other.
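Assigning each column to a cell of the disgust-by-fright design, as described above, is equivalent to reshaping the data from wide format (one column per condition) to long format (one row per subject-condition pair). A sketch with hypothetical data, assuming pandas is available:

```python
import pandas as pd

# Hypothetical wide-format bugs data: one row per participant, one
# column per condition, as jamovi's spreadsheet stores it.
wide = pd.DataFrame({
    "subject": [1, 2, 3],
    "LDLF": [3.0, 5.0, 6.0],
    "LDHF": [6.0, 7.0, 8.0],
    "HDLF": [7.0, 6.5, 8.0],
    "HDHF": [9.0, 8.5, 10.0],
})

# Dragging each variable into a factor-level cell amounts to melting
# to long format and decoding the factor levels from the column name.
long = wide.melt(id_vars="subject", var_name="condition", value_name="rating")
long["disgust"] = long["condition"].str[0].map({"L": "low", "H": "high"})
long["fright"] = long["condition"].str[2].map({"L": "low", "H": "high"})
print(long)

# From here, a repeated measures ANOVA could be fit outside jamovi,
# e.g. with statsmodels:
#   AnovaRM(long, "rating", "subject", within=["disgust", "fright"]).fit()
```

This is just to show what the drag-and-drop setup is doing under the hood; jamovi handles the reshaping for you.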
And so that is our initial result. But we have a lot of options with the repeated measures analysis of variance, so I'm going to scroll up here a little bit. Again, if I had a between subjects factor, like gender or where people lived, I could put that in here if I wanted. And if I had a covariate, like a person's age or their level of exposure to bugs, I could put that in there as well. I'm just going to come down here and get an effect size. For a study like this, it's good to get partial eta squared; this little letter here that looks like an n is a Greek eta, and it's the common measure of effect size for the analysis of variance. Here we can see, for instance, that this .123 means about 12.3% of the variance in a person's response can be attributed to the variation in disgust; over twice as much can be attributed to the variation in fright; and there's just 2.4% associated with the interaction, which is negligible. Then I can come down and specify the model. I'm just going to use the regular built-in model, with the main effects and the interaction, so I'm not going to change that at all. I can look at assumption checks. Now, in the repeated measures analysis of variance, a common test is what's called sphericity; it's sort of the repeated measures version of a homogeneity of variance check. I can click it, but it's not going to be really relevant here, because we only have two levels in each of our factors: high and low disgust, high and low fright. Sphericity is always met when there are only two levels; if we had three or more, then this would be important. So I don't really need that. Now, in a normal analysis of variance, I could also do the equality of variances test; in fact, I'm just going to click on that right here, as Levene's test. But it's irrelevant for this one, because I don't have a between subjects factor like gender. So we can ignore that one. I can do post hoc tests if I want.
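Partial eta squared itself is a simple ratio of sums of squares. A minimal sketch, with SS values made up purely to reproduce the .123 figure mentioned above:

```python
# Partial eta squared: the proportion of variance attributable to an
# effect, after partialling out the other effects in the model.
def partial_eta_squared(ss_effect: float, ss_error: float) -> float:
    return ss_effect / (ss_effect + ss_error)

# Made-up sums of squares chosen to illustrate the value in the video:
# an effect SS of 12.3 against an error SS of 87.7 gives .123,
# i.e. about 12.3% of the variance.
print(round(partial_eta_squared(12.3, 87.7), 3))  # 0.123
```

Note that the error SS here is the effect's own error term, which is why the partial eta squared values for disgust, fright, and the interaction need not sum to one.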
And so I can, for instance, look at disgust and fright, and I can put them over here. It's not really necessary to do the post hoc tests here, because I only have two categories within each factor, and so it's obvious where the difference is if we look at the means. But if you had more than two categories within a factor, this would be something important for you to do; I just want you to see how it works. You also have a choice of which correction to apply. The Scheffé and Bonferroni corrections are very common; I prefer the Tukey test. We can scroll down and see it's calculating all sorts of things. These two correspond to the analysis of variance results we had earlier, and then it looks at the specific pairwise comparisons, which makes for a very long table. What it's telling us is that several of our comparisons are significant: the first three and the last one, but not the middle two. That's a lot easier to see if you go back and have a means chart, but it gives you the means, and you can look specifically at each possible comparison. Now, the last thing we can do is come to estimated marginal means, and we can look at the effects. So I'm going to come right here, take disgust, and drag it right there. Then I'll add a new term, take fright, and drag it down right here. We're going to get a marginal means plot; I'll also get a marginal means table, and I can scroll down. This is going to make it a lot easier to see what's happening in our data. Unsurprisingly, when a bug is disgusting, people want to get rid of it more than when it's not, and here are the numbers to go with that, as well as the 95% confidence intervals. And when a bug is frightening, they want to get rid of it. You can actually see that the difference for frightening is much bigger than the difference for disgusting, and we've got the numbers right there.
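An estimated marginal mean for a factor level is just the average of the cell means across the levels of the other factor. A sketch with hypothetical cell means (not the real bugs results, but chosen so fright has the larger effect, matching the pattern described above):

```python
# Hypothetical cell means (0-10 scale) for the four conditions,
# keyed as (disgust level, fright level).
cell_means = {
    ("low", "low"): 5.0,    # LDLF
    ("low", "high"): 7.5,   # LDHF
    ("high", "low"): 7.0,   # HDLF
    ("high", "high"): 9.0,  # HDHF
}

# Marginal mean for one level of a factor = average of the cell
# means over the other factor's levels (equal cell weights assumed).
def marginal_mean(factor: str, level: str) -> float:
    idx = 0 if factor == "disgust" else 1
    vals = [m for key, m in cell_means.items() if key[idx] == level]
    return sum(vals) / len(vals)

# The marginal means plot is just these values drawn as lines:
print(marginal_mean("disgust", "high") - marginal_mean("disgust", "low"))  # 1.75
print(marginal_mean("fright", "high") - marginal_mean("fright", "low"))    # 2.25
```

With these numbers, the fright difference (2.25) is larger than the disgust difference (1.75), which is the kind of comparison the marginal means table makes easy to read off.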
And so this is really the collection of information you can get from the repeated measures analysis of variance in jamovi. Some of these options aren't really necessary when you have just two groups within each factor, or you have a simple interaction. But even with more complicated designs, it's nice to know that the options are there and available in jamovi, to allow you to use the more powerful repeated measures design to find out what's going on in your data and start getting insight into your results.