We have talked about the assumptions of parametric tests in general and the assumptions of the independent-samples t-test in particular, but how are we going to test these assumptions in SPSS? Yes, SPSS allows us to check each one of the assumptions before we run any statistical test. Let us quickly review the assumptions of parametric tests.

The first assumption concerns the level of measurement: for all parametric tests, the dependent variable should be continuous, that is, measured on an interval or ratio scale. For the independent-samples t-test this means our dependent variable must be a quantitative, continuous score.

The second assumption is random sampling: when you draw a sample from the population, it should be drawn randomly. This one is tough in practice, because true random sampling requires that you know the total population and then randomly select from it, so that each person has an equal and known probability of being selected into the sample; for example, randomly picking 5 people out of a class of 30. In thesis work, students often run t-tests on samples that were not truly randomly drawn, but increasing the sample size can help justify the analysis.

The third assumption is independence of observations, which means that all the cases in the data are independent of each other: no individual's score is affected by the presence or absence of any other person in the data. In simple words, independent samples means there are different people in the sample who are completely unrelated to one another; there are no repeated measurements of the same person, only independent observations.

So let us see this in SPSS. The assumption that is very important in parametric testing is normal distribution, which means that our underlying population is normally distributed. If we meet this assumption, then we can use the t-test, because, as we remember, we reject or retain the null hypothesis by looking up the t-table value and calculating its probability, and those probabilities assume normality.

To check the assumption of normal distribution in SPSS, go to Analyze, then Descriptive Statistics, then Explore. Remember this path. When you click Explore, you get a dialog box with a Dependent List, into which you always move your continuous dependent variable; here you will move the test scores. The Factor List is for your groups, so move the grouping variable there. After that, click the Statistics button; a box opens in which you check Descriptives. This gives you the mean and standard deviation, along with the 95% confidence interval. You can also check Outliers.

What are outliers? Outliers are extreme values in the data. Sometimes, when we enter our data into SPSS, we make mistakes: you meant to press 5 but pressed 55, or you meant 90 but pressed 9, and this kind of check detects such errors. And sometimes there are genuinely extreme values in our data, and, as we read earlier, the mean is affected by extreme scores.
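For reference, the syntax below is a minimal sketch of these same Explore steps. It assumes a hypothetical data set with a continuous variable named score and a grouping variable named group; the variable names and the data values are made up for illustration and are not the lecture's data.

* Hypothetical mini data set (13 made-up cases) so the syntax runs on its own.
DATA LIST FREE / group score.
BEGIN DATA
1 55 1 60 1 58 1 62 1 57 1 61 1 59
2 70 2 65 2 68 2 72 2 66 2 69
END DATA.
VALUE LABELS group 1 'Swimmers' 2 'Soccer players'.

* Analyze > Descriptive Statistics > Explore, with Statistics > Descriptives
* (mean, SD, 95% confidence interval) and Outliers checked.
EXAMINE VARIABLES=score BY group
  /STATISTICS DESCRIPTIVES EXTREME
  /CINTERVAL 95
  /MISSING LISTWISE
  /NOTOTAL.

The EXTREME keyword corresponds to the Outliers checkbox: it lists the most extreme values in each group, which is where entry errors like 55 instead of 5 show up.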
So when the mean is affected, the test values are affected as well. That is why we usually detect outliers and then either delete them or replace them with the mean. So you will check this Outliers box.

Next, under Plots, you will check Normality plots with tests and you will check Histogram, and after checking both of them you will hit the Continue button and run the analysis. Because we put a grouping variable in the Factor List, SPSS will draw a distribution for each of the two groups, and then you can find out for each group whether it is normally distributed or not.

In the output file, the first thing we get is the histograms. You can see that this histogram is for our swimmers and this one is for our soccer players, because we selected both groups. So this is the actual shape of the distribution in each group. Probably we do not have very ideal data, and with a very small sample the pattern will not look neatly normal; in such cases the formal test value will be more important for us.

Second, we have the normal Q-Q plots in the output, and the Q-Q plots also give us an idea about normality. What we really want to see in these graphs is this: the straight line represents the scores expected under a normal distribution, and the dots are our actual scores. So if the data are normally distributed, or the underlying population is normally distributed, our data points should hover closely around this straight line. Ideally, all the dots would come right up to the line, and then we could say that our underlying distribution is normal. You can see that our points are somewhat off the line, but I think that is because our data set is very small: we have a total of 13 cases.

When the data set is small, the most useful check is the test statistic, just as we checked the assumption of homogeneity of variance through Levene's test and wanted Levene's test to be non-significant. For normality, we look at the value of the Kolmogorov-Smirnov test. When we gave SPSS the command for normality plots with tests, it calculated the Kolmogorov-Smirnov test, and this is the value most often reported in studies: the test statistic and its significance level. This assesses the normality of the distribution of scores. Just as with Levene's test, we want a non-significant result. Here, non-significant means that we fail to reject the null hypothesis of the Kolmogorov-Smirnov test, which states that the population the sample comes from is normally distributed. So yes, you can see that the value for both groups is non-significant, which means we fail to reject the null hypothesis and our distribution can be treated as normal. Remember that whenever the Sig. value is smaller than 0.05, the result is significant, and a significant result means we reject the null hypothesis; for this test, rejecting the null hypothesis would mean the distribution is not normal. So to meet the assumption of normality, we want the Kolmogorov-Smirnov test value to be non-significant.
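This step corresponds to the Plots button in the same Explore dialog; a minimal sketch using the same hypothetical score and group variables from above:

* Plots > Histogram and Normality plots with tests.
* NPPLOT produces the Q-Q plots and the Kolmogorov-Smirnov
* and Shapiro-Wilk tests of normality for each group.
EXAMINE VARIABLES=score BY group
  /PLOT HISTOGRAM NPPLOT
  /STATISTICS NONE
  /NOTOTAL.

In the resulting Tests of Normality table, a Sig. value above .05 in the Kolmogorov-Smirnov column for each group is what we read as meeting the normality assumption.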
If it is non-significant, that is, greater than 0.05, the normality assumption is met and we retain the null hypothesis that our underlying data are normally distributed.

Other than the mean and standard deviation, descriptive statistics also provide some information concerning the distribution of scores on a continuous variable. So besides the test statistics, Shapiro-Wilk or Kolmogorov-Smirnov, we can also check whether the normality assumption is met through skewness and kurtosis. The normal curve is symmetrical; it is not skewed. Skewness means that your curve is tilted toward one side: a tail stretching to the left is negatively skewed, and a tail stretching to the right is positively skewed. So if our skewness or kurtosis value is out of the acceptable range, we can conclude that the data are not normally distributed and the normal distribution assumption is not met.

You can check this in SPSS as well. When you run Descriptives in SPSS, it gives you a table in which you can see the minimum, maximum, mean, standard deviation, skewness, and kurtosis. If the skewness value is between -0.5 and +0.5, the data are approximately symmetrical. If the value is less than -1 or greater than +1, we say it is not a normal distribution but skewed data: if it is negative, it is negatively skewed, and if it is positive, it is positively skewed. So ideally our data's skewness value should be within plus or minus 0.5. Our value here is 0.616, which is slightly outside that range but still below 1, so it is not that bad, and the kurtosis statistic is likewise within the acceptable range, which means the distribution is approximately symmetrical. In the ideal condition of a perfect normal distribution there is no skew and no excess kurtosis, so both values are 0. So in an ideal normal distribution curve, the skewness value and the kurtosis value will be equal to 0; even if they fall between -0.5 and +0.5, we still say the distribution is approximately symmetrical.

Our last parametric assumption is homogeneity of variance. Especially for the independent-samples t-test, and wherever we compare groups, as in ANOVA, it is very important that our groups have similar variability. We just ran the t-test and saw how to read the value of Levene's test, and likewise we want the value of Levene's test to be non-significant so that we can conclude that the variances of the two groups are equal. The details are in the SPSS output we produced, which you can read: if the Sig. value is greater than 0.05, the results are non-significant, which means the assumption of equal variances is being met and we can assume that the variances are equal in the two groups.
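The corresponding syntax for these last two checks is sketched below, again assuming the hypothetical score and group variables, with group codes 1 and 2 assumed for the two teams:

* Analyze > Descriptive Statistics > Descriptives, with skewness and kurtosis.
DESCRIPTIVES VARIABLES=score
  /STATISTICS=MEAN STDDEV MIN MAX SKEWNESS KURTOSIS.

* Independent-samples t-test; the first columns of the output table show
* Levene's test, whose Sig. value we want to be greater than .05.
T-TEST GROUPS=group(1 2)
  /VARIABLES=score
  /CRITERIA=CI(.95).

If Levene's Sig. is above .05, read the "Equal variances assumed" row of the t-test output; otherwise read the "Equal variances not assumed" row.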