The simplest version of the t test is the one-sample t test, which lets you compare the mean of a single group you have data from to a chosen hypothesized value. That value might be a national average, or it might be a theoretical value that represents the baseline against which you want to compare everything. This is really easy to do in jamovi. Here I'm using the example dataset Big 5, which contains ratings where people evaluate themselves on five major personality characteristics: neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness, each rated on a one-to-five scale. And I want to show you how we can evaluate all five of these simultaneously.

But before you jump into t tests, you always want to do some exploratory analyses to see whether your data will work well with the intended analysis. So let's first go over here to Exploration, and then to Descriptives. What I'm going to do is pick the five variables and put all of them into the variable list. By the way, jamovi updates frequently; in fact, between the time I started this this morning and now, it's updated, and we have a few extra options available under Descriptives. By default we get the mean, the median, and the minimum and maximum. I also want the standard deviation, and I may also want, for instance, a normality test, which is one of the new features. This gives a p value that lets us know whether a variable departs from normality: this one is borderline, this one is far from normal, and so on.

The important thing here, though, is going to be the graphics, and I have some more choices there as well. I'm going to do density plots, which are like smoothed histograms, and I'm also going to do box plots, which are good for identifying outliers.
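If you want to see what those descriptives and the normality test compute under the hood, here is a minimal sketch in Python using scipy. The data are simulated to stand in for one Big 5 rating column; the actual dataset ships with jamovi, and the column name and sample size here are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

# Simulated stand-in for one Big 5 rating column on a 1-5 scale
# (hypothetical values; jamovi's bundled "Big 5" data would differ).
rng = np.random.default_rng(42)
ratings = np.clip(rng.normal(loc=3.6, scale=0.7, size=271), 1, 5)

mean = ratings.mean()
median = np.median(ratings)
sd = ratings.std(ddof=1)                   # sample standard deviation
w_stat, p_value = stats.shapiro(ratings)   # Shapiro-Wilk normality test

print(f"mean={mean:.2f} median={median:.2f} sd={sd:.2f} shapiro_p={p_value:.3f}")
```

A small Shapiro-Wilk p value flags a departure from normality, which is exactly the p value jamovi reports when you tick the normality option.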
And then I'm going to do the new option here, which is a Q-Q plot, which stands for quantile-quantile plot. It shows how well the observed data match up against, for instance, a normal distribution with the same mean and standard deviation as the observed data. So I'm going to click that as well. The charts look a little different now; they're colored differently. Here we have neuroticism: that looks basically normal, and the box plot shows that it's basically symmetrical with a few high outliers. The Q-Q plot shows the information along a diagonal, and if all the dots fell exactly on the diagonal, that would tell you the distribution matches what you would expect from a normal distribution. We're really close here, with some small variations at the tails, which is how it normally works. Extraversion, you see, is a little bit skewed; we have some low outliers, and you can see that happening here at the bottom. Openness is symmetrical but slightly funny looking; you can see how it tapers off here and here. But these are not major deviations. They're statistically significant only because it's a large sample. Here we have some low outliers on agreeableness, because most people want to say that they're really agreeable. And then conscientiousness, same general idea.

So that's some of the important background, and what I saw here is nothing really worrisome. This is confirmation that our data probably work well with the assumptions of a one-sample t test, which include normality, and we can go ahead and do our analysis. So now I'm going to come over here to T-Tests and click on One Sample T-Test. When I do that, I can select my variables, and I'm actually going to select all five of them simultaneously. You only want to select multiple variables if they share the same null value that you're comparing them to. Now, a lot of times that null value is going to be zero.
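The Q-Q plot jamovi draws can be sketched numerically: sort the data, compute the quantiles a matching normal distribution would predict at the same positions, and compare. This is an illustrative sketch on simulated data, not jamovi's own plotting code, and the plotting-position formula used is one common convention.

```python
import numpy as np
from scipy import stats

# What a Q-Q plot compares: sorted sample values against the quantiles
# of a normal distribution with the same mean and SD (simulated data).
rng = np.random.default_rng(0)
x = np.sort(rng.normal(3.0, 0.8, size=200))
n = len(x)

# Theoretical quantiles at plotting positions (i - 0.5) / n
probs = (np.arange(1, n + 1) - 0.5) / n
theoretical = stats.norm.ppf(probs, loc=x.mean(), scale=x.std(ddof=1))

# If the data are normal, the points (theoretical, x) hug the diagonal
corr = np.corrcoef(theoretical, x)[0, 1]
print(f"correlation with the diagonal: {corr:.3f}")
```

For genuinely normal data this correlation sits very close to 1; skew or heavy tails pull the points, and the correlation, away from the diagonal, mostly at the extremes.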
That's a silly thing to do in this case, because zero is not even on the scale; these are rated one to five. So what I'm going to do is come down here and change the test value from zero to three, because that's the midpoint of a one-to-five scale. And when I do that, you see how this table over here updates. It's running Student's t tests, the one-sample t test, and you can see that every mean is significantly different from three. This is not a surprise, because most of these scales have a positive or desirable end: people want to be agreeable, they want to be open, so we tend to have high values. But I'm also going to look at the mean difference: how far away is each variable's mean from the hypothesized value of three? You can see they're not really big differences, from about two tenths of a point to about six tenths of a point. We're also going to get the effect size for each one, which takes how far the sample mean is from the hypothesized population mean and divides by the standard deviation. And here, because the standard deviations are not big, this value, Cohen's d, the common effect size for this test, is pretty big. For instance, openness is 1.7 standard deviations above the hypothesized mean of three; that's a big effect.

We can also get the confidence interval for the mean difference, which adds a few columns here. You can see it's all positive, because our means are above three, except in the case of neuroticism, because people don't want to be neurotic. We can also get a normality check; that puts a separate table here with the Shapiro-Wilk test. These p values are actually the same ones we had in the top table when we checked off normality under Descriptives, so this is a more convenient way to get them. I'm also going to request descriptives, which repeats a lot of the same information we had before, but sometimes it's nice to have it down here.
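The test itself, and Cohen's d, are straightforward to reproduce. This is a hedged sketch on simulated ratings (the "openness" values below are made up, chosen so the effect lands in the same large-effect territory the transcript describes), using scipy rather than jamovi's own engine.

```python
import numpy as np
from scipy import stats

# One-sample t test against the scale midpoint of 3, as set in jamovi.
# Simulated ratings stand in for a Big 5 column (illustrative only).
rng = np.random.default_rng(1)
openness = np.clip(rng.normal(3.6, 0.5, size=300), 1, 5)

test_value = 3.0
t_stat, p_value = stats.ttest_1samp(openness, popmean=test_value)

mean_diff = openness.mean() - test_value       # distance from the test value
cohens_d = mean_diff / openness.std(ddof=1)    # Cohen's d effect size

print(f"t={t_stat:.2f} p={p_value:.4g} diff={mean_diff:.2f} d={cohens_d:.2f}")
```

Note the two quantities answer different questions: the mean difference is in raw scale points, while d rescales that difference by the standard deviation, which is why a modest half-point difference can still be a large effect.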
An important thing is to look at the SE, the standard error, which is the standard deviation divided by the square root of the sample size. And you see that these are tiny, tiny values, around two hundredths of a point. That is what leads to a very funny descriptives plot, which normally shows you a confidence interval for the mean; when the standard error is microscopic, you end up with nearly invisible confidence intervals. In each case here, the circle shows the variable's mean and the square shows its median, and they're super close. You can tell, for instance, that openness is a little higher, even though at this point the labels don't automatically adjust, so they're a little squished together. But this lets you know that, yes, everything is different from three: three would be straight across right here, four of the variables are above it, and one of them, neuroticism, is below. This is a very compact and concise way of looking at several variables simultaneously, and if you're simply trying to get a first quick look at how your sample data match up with some expectations about the population, it's a very good and quick way to go.
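The standard-error arithmetic, and the confidence interval it drives, can be sketched directly. Again, the data are simulated stand-ins; the point is the formula SE = SD / sqrt(n) and why a large n makes the interval shrink toward invisibility on the plot.

```python
import numpy as np
from scipy import stats

# Standard error of the mean and a 95% confidence interval for the
# mean difference from the test value of 3 (illustrative data).
rng = np.random.default_rng(2)
ratings = np.clip(rng.normal(3.2, 0.7, size=500), 1, 5)

n = len(ratings)
sd = ratings.std(ddof=1)
se = sd / np.sqrt(n)                       # SE = SD / sqrt(n)

mean_diff = ratings.mean() - 3.0
t_crit = stats.t.ppf(0.975, df=n - 1)      # two-sided 95% critical value
ci = (mean_diff - t_crit * se, mean_diff + t_crit * se)

print(f"SE={se:.3f}  95% CI for mean difference: ({ci[0]:.3f}, {ci[1]:.3f})")
```

With several hundred respondents, SE lands in the hundredths, so the interval spans only a few hundredths of a point on either side of the mean difference, exactly why the error bars in the descriptives plot all but disappear.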