One of the simplest inferential tests that you can do is the independent samples t test. This is where you're comparing the means of two different groups, and it's very easy to do in jamovi; you can actually do several variables at a time, although they're going to be separate comparisons. To show you how this works, I'm using the example data set "bugs," which asks how much people want to get rid of bugs ranging from low disgust/low fright through high disgust/high fright, rated on a scale from 0 (not at all) to 10 (very much). We also have information about people's level of education, the region they live in, and gender, which is coded as male and female. So I'm going to use the gender variable and compare the male and female respondents on these four different outcome variables.

Now, before you do a t test, or really any inferential test, it's a good idea to take a look at the variables and see how well they meet the assumptions, because certain things, like normality or similarity of variance, are important for a t test. So I'm going to come over here to Exploration and Descriptives, pick my four outcome variables and put them here under Variables, and then split the whole thing by gender. It's not really the table that I'm most worried about, although you can see that we have more female respondents than males; that's not a big deal. And the mean differences vary: there's a one-point difference, there's a two-thirds point, there's a half point. What I really want here are the plots. So I'm going to come over here to Plots and get a density histogram, or density chart, and box plots for each of these comparisons. And what you see is that these ones are kind of sort of close to normal. These are the female respondents here, and these are the male respondents here. The distributions are pretty similar, the box plots show there are no outliers, and actually there's a lot of overlap.
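If you wanted to reproduce that split-by-group descriptives table outside jamovi, a minimal sketch with pandas might look like this. The data and the column name are made up for illustration; they are not the actual values from the bugs data set.

```python
# Hypothetical sketch: descriptives split by group, as in jamovi's
# Exploration > Descriptives. Data and column names are invented.
import pandas as pd

df = pd.DataFrame({
    "gender": ["female"] * 4 + ["male"] * 4,
    "lo_disgust_lo_fright": [6.0, 7.5, 5.5, 8.0, 5.0, 6.5, 4.5, 7.0],
})

# Mean, median, and standard deviation per group -- the quantities you
# check before running the t test.
summary = df.groupby("gender")["lo_disgust_lo_fright"].agg(["mean", "median", "std"])
print(summary)
```

You would normally pair a table like this with histograms and box plots per group, as described above, to eyeball normality and outliers.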
Okay, here's where we start getting non-normal distributions, which is in a sense a problem. But because of the central limit theorem, because we're really working with sampling distributions, it's not the end of the world; we can still go ahead and do things. We've got a couple of outliers, but similar distributions. Okay, so now we see a little bit of what's going on.

So with that as context, I'm now going to do the regular t test. I'm going to come over here to T-Tests and do Independent Samples T-Test. All I need to do is pick my dependent variables, or you can just call them outcome variables, since "dependent" is usually reserved for a randomized experiment. Then the grouping variable is the thing that I want to split them into two different groups on; I'm going to use gender here, so I'll click that over. And by default it gives us this table with the t statistic and the p value, which is for the significance test. We can see from this right now that actually there are no significant differences between the male and female respondents on any of these. The rule of thumb is that p needs to be less than 0.05 to be considered statistically significant. One of them has a p value of 0.161, and this one down here is a lot closer: it's 0.06, nearly significant. That's this last comparison right here.

But there's a lot more you can get from the t test function in jamovi, so I'm going to do a few of these things. (If you are familiar with Bayesian statistics, jamovi can incorporate that too, which is kind of nice.) I'm going to come over here and get the mean difference, that is, the mean of the women minus the mean of the men, and you can see they're about 0.55 points and 0.606. I'm also going to get the confidence interval, which by default is set at 95%. If I scroll this over a little bit, you can see the whole thing, and you can also see that the intervals are negative on one side and positive on the other.
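The same independent samples t test can be run in Python with scipy; here is a minimal sketch on invented data (these are not the real bugs ratings). It also computes the mean difference shown above.

```python
# A rough equivalent of jamovi's Independent Samples T-Test, on made-up
# ratings for two groups. Student's t (equal variances) is scipy's default.
import numpy as np
from scipy import stats

female = np.array([6.0, 7.5, 5.5, 8.0, 6.5, 7.0])
male   = np.array([5.0, 6.5, 4.5, 7.0, 5.5, 6.0])

t_stat, p_value = stats.ttest_ind(female, male)
mean_diff = female.mean() - male.mean()  # women minus men, as in the transcript

print(f"t = {t_stat:.3f}, p = {p_value:.3f}, mean difference = {mean_diff:.3f}")
```

With this toy data the p value lands just under 0.10, so by the 0.05 rule of thumb the difference would not be called statistically significant.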
So they include zero, which is consistent with the differences not being statistically significant. I'm also going to get the effect size. In this case it uses Cohen's d, which tells you how many standard deviations apart the two groups' means are, and here they're pretty small: from close to zero up to the biggest at 0.4 standard deviations.

I'm also going to get normality checks. We know that the distributions are not entirely normal, meaning bell-curve shaped, and the test that we have here is the Shapiro-Wilk test. It lets us know that really none of them are exactly normal; we could tell that by looking at them. But again, it's really not the end of the world, because we're using the sampling distributions and not the raw distributions.

I'm also going to check for equality of variances. That's something that's also important for the t test; it says that the two groups need to be spread out approximately the same amount. This uses Levene's test for equality of variances, and you can see that none of these are significant; in fact, 0.4 is the lowest p value. So there's no significant difference in the variability of the two distributions, which is what we want.

I'm also going to click Descriptives, and that's going to give me the statistics for each of the groups, so you can see the mean, the median, the standard deviation, and the standard error, which is used in the inferential tests. And then finally, the descriptive plots. In this case, what it gives us are confidence intervals for the means, and it also shows the median for each group. Since these correspond most closely to the inferential test of the t test, it's probably the best plot for actually seeing whether there's a difference.
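The extra options described above, Cohen's d, the Shapiro-Wilk normality check, and Levene's test, can be sketched with scipy as well. Again the data are invented for illustration, so the numbers will not match the bugs data set.

```python
# Effect size and assumption checks for an independent samples t test,
# on made-up data.
import numpy as np
from scipy import stats

female = np.array([6.0, 7.5, 5.5, 8.0, 6.5, 7.0])
male   = np.array([5.0, 6.5, 4.5, 7.0, 5.5, 6.0])

# Cohen's d: mean difference divided by the pooled standard deviation.
n1, n2 = len(female), len(male)
pooled_var = ((n1 - 1) * female.var(ddof=1) + (n2 - 1) * male.var(ddof=1)) / (n1 + n2 - 2)
cohens_d = (female.mean() - male.mean()) / np.sqrt(pooled_var)

# Shapiro-Wilk: a small p value suggests the group is not normally distributed.
_, p_norm_f = stats.shapiro(female)

# Levene's test: a small p value suggests the groups' variances differ.
_, p_levene = stats.levene(female, male)

print(f"d = {cohens_d:.3f}, Shapiro-Wilk p = {p_norm_f:.3f}, Levene p = {p_levene:.3f}")
```

A non-significant Levene result, like the 0.4-and-up p values in the transcript, is what you want: no evidence that the equal-variances assumption is violated.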
The general rule of thumb here is that if the confidence interval (that's this vertical line) for one group overlaps with the mean of the other group, then they're usually not significantly different. And we've got a lot of overlap here, a lot of overlap here, and these ones are pretty separate; that's the comparison that was nearly significant. Anyhow, that's how you can do the independent samples t test using a single categorizing variable, in this case the male and female respondents, and you can use several outcome variables simultaneously. It's an excellent first step in getting a look at what's happening in your data through inferential statistics.
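That overlap rule of thumb can be sketched numerically: compute one group's 95% confidence interval for the mean and check whether it covers the other group's mean. The data below are invented so that the intervals do overlap, matching the "usually not significantly different" case.

```python
# Rule-of-thumb check: does one group's 95% CI cover the other group's mean?
# Data are made up for illustration.
import numpy as np
from scipy import stats

def mean_ci(x, level=0.95):
    """Confidence interval for the mean, using the t distribution."""
    se = x.std(ddof=1) / np.sqrt(len(x))
    margin = stats.t.ppf(0.5 + level / 2, df=len(x) - 1) * se
    return x.mean() - margin, x.mean() + margin

female = np.array([6.0, 7.5, 5.5, 8.0, 6.5, 7.0])
male   = np.array([5.5, 6.5, 5.0, 7.0, 5.5, 6.5])

lo_f, hi_f = mean_ci(female)
print(f"female 95% CI: ({lo_f:.2f}, {hi_f:.2f}); male mean: {male.mean():.2f}")
print("CI covers male mean:", lo_f <= male.mean() <= hi_f)
```

Here the female interval covers the male mean, which, by the rule of thumb, is consistent with the two groups not being significantly different. It is only a rough visual heuristic, though; the t test's p value is the actual criterion.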