Sometimes you have data from one group of people and you're comparing them either before and after some kind of intervention, or on two separate measurements, and you want to compare each person's score with their own score. So you're looking for changes from one variable, or one time, to another. This is when you want to use a paired samples t test, and it's very easy to do here in jamovi. For this example I'm using the same bugs data, which measures how willing people are to get rid of bugs or insects, ranging from low disgust, low fright up to high disgust, high fright, on a scale from zero, the lowest, to ten, the highest. I'm going to come up here and use Paired Samples T-Test; that's the test we're doing here. And what we have to do is specify pairs of variables. Now, there are a few different ways you can select the pairs you want. For instance, you can click one variable and push it over here, and then simply click the second variable; maybe I'll do this one, and that sets up a pair. You can also click one and then command- or control-click to select the other, so you get both of them at the same time, and that gets a pair. Or, if you have a bunch of comparisons all in a row that make sense to do, you can select the whole thing with a shift-click, and it's going to put the variables together in order: number one goes with number two, number three goes with number four, and so on. Now, what I have here, by the way, is a violation of the independence of observations. Normally you wouldn't want to do the analysis quite like this, because we're looking at the same comparisons in different ways; this is more like a post hoc test from an analysis of variance. But the procedures for doing this are the same, and so it's a valid demonstration of the process in jamovi. So what I have right here are my four pairs of variables.
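The same paired comparison can be sketched in code. Here's a minimal example using `scipy.stats.ttest_rel`, with made-up ratings on a 0-to-10 scale standing in for two of the bug conditions; the variable names and data are hypothetical, not the actual columns of the bugs dataset.

```python
import numpy as np
from scipy import stats

# Hypothetical ratings (0-10) from the same 20 people on two bug conditions.
rng = np.random.default_rng(42)
low_dis_low_fright = np.clip(rng.normal(5.5, 2.0, 20), 0, 10)
high_dis_high_fright = np.clip(low_dis_low_fright + rng.normal(2.5, 1.5, 20), 0, 10)

# Paired samples t test: each person is compared with their own score.
result = stats.ttest_rel(low_dis_low_fright, high_dis_high_fright)
print(f"t = {result.statistic:.3f}, p = {result.pvalue:.4f}")
```

Because the test works on within-person difference scores, the order of the two arguments only flips the sign of t, just as swapping the variables in a pair does in jamovi.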
And you can see right over here that it's done all the tests already: all four comparisons are statistically significant. So in one sense that's the simple answer, and we could stop right there. Fortunately, jamovi gives us some other options, and I want to take a quick look at them. Let's look, for instance, at the mean difference: how big is the difference? What that does is take the difference between the first variable and the second variable for each person, that's a difference score, and then get the average of those differences. And so here we have the differences, and you can see these ones went down across the comparison, and this one went up. By the way, whether it's positive or negative is arbitrary; it's simply a matter of which variable you put first and which one second. We also have the standard error, and I'm going to get the confidence interval for the mean difference as well. I've got to scroll over a little bit so you can see it. We'll also get the effect size, which is Cohen's d. In this case it says how many standard deviations there are between the scores at time one and time two, or between the scores on the first variable and the second variable. Really, what it is, is the mean of the difference scores divided by the standard deviation of the difference scores. And you can see these ones are actually pretty big; that's almost a full standard deviation. We can also get descriptives, which gives us the means and the medians of the variables independently. And we can do a normality check, which can be important because t tests are valid when they're working with normal, bell-curve-shaped distributions. On the other hand, it's not really the raw distribution that's important here, but the sampling distribution, and that's a slightly different thing.
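The statistics described above are easy to compute by hand. This is a sketch with assumed before/after scores (the data and names are hypothetical): the mean difference, its standard error, a t-based 95% confidence interval, and Cohen's d as the mean of the difference scores divided by their standard deviation.

```python
import numpy as np
from scipy import stats

# Hypothetical before/after scores for the same 15 people (assumed data).
before = np.array([3.0, 5.5, 4.0, 6.5, 5.0, 7.0, 4.5, 6.0, 5.5, 3.5, 6.5, 5.0, 4.0, 7.5, 5.5])
after  = np.array([4.5, 7.5, 5.0, 8.5, 6.0, 8.0, 6.5, 7.5, 6.5, 5.5, 7.0, 6.5, 5.0, 9.5, 6.0])

diff = before - after                            # difference score for each person
mean_diff = diff.mean()                          # mean difference
se_diff = diff.std(ddof=1) / np.sqrt(len(diff))  # standard error of the mean difference

# 95% confidence interval for the mean difference (t distribution, n-1 df)
t_crit = stats.t.ppf(0.975, df=len(diff) - 1)
ci = (mean_diff - t_crit * se_diff, mean_diff + t_crit * se_diff)

# Cohen's d for paired data: mean of the difference scores
# divided by the standard deviation of the difference scores.
cohens_d = mean_diff / diff.std(ddof=1)

print(f"mean difference = {mean_diff:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"Cohen's d = {cohens_d:.3f}")
```

Note the sign convention: because `before` was entered first, scores that went up produce a negative mean difference, which is the same arbitrariness of ordering mentioned above.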
And even though we have some statistically significant results here, I personally don't usually worry about them very much, because it's the sampling distribution that matters; as long as you have a large enough sample, and ours is pretty good, you're probably okay. I'm going to finish with the descriptive plots, which give us confidence intervals for the differences. So I'll wait a second here while it updates, and then we'll scroll down to the plots. What this shows us is the mean and the confidence interval for each of the variables in the comparison. So this is low disgust, low fright versus high disgust, high fright; that's the biggest difference available in the data, and you can see they're very far away from each other: there's no overlap at all. The square is the median, and what's funny here is you can see the median is actually above the confidence interval, because we had a ceiling effect; people gave really high scores to that one, and the mean is only lower because there were outliers on the low end. And you can see similar distributions for some of the other pairs. So this is a good way of making comparisons for the same group of people on one variable versus another, whether they're looking at two different things or, often, when you're measuring something before and after a particular intervention.
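The "no overlap at all" reading of the plot can be checked numerically: compute a t-based confidence interval for each variable's mean and see whether the intervals touch. This is a sketch with hypothetical ratings standing in for the two extreme conditions; the names and numbers are assumptions, not the real dataset.

```python
import numpy as np
from scipy import stats

def mean_ci(x, level=0.95):
    """Return the mean and a t-based confidence interval for the mean."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    se = x.std(ddof=1) / np.sqrt(len(x))
    half = stats.t.ppf(0.5 + level / 2, df=len(x) - 1) * se
    return m, (m - half, m + half)

# Hypothetical ratings (0-10) for the two extreme conditions (assumed data).
low_dis_low_fright = np.array([3.5, 4.0, 5.0, 2.5, 4.5, 3.0, 5.5, 4.0, 3.5, 4.5])
high_dis_high_fright = np.array([8.5, 9.0, 10.0, 7.5, 9.5, 8.0, 10.0, 9.0, 8.5, 9.5])

m_lo, ci_lo = mean_ci(low_dis_low_fright)
m_hi, ci_hi = mean_ci(high_dis_high_fright)
overlap = ci_lo[1] >= ci_hi[0]   # do the two intervals overlap?
print(f"low: mean {m_lo:.2f}, CI ({ci_lo[0]:.2f}, {ci_lo[1]:.2f})")
print(f"high: mean {m_hi:.2f}, CI ({ci_hi[0]:.2f}, {ci_hi[1]:.2f})")
print(f"overlap: {overlap}")
```

Non-overlapping intervals, as in the plot described above, are a conservative visual signal that the paired comparison will come out significant, though the formal test is still the paired t test on the difference scores.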