Continuing with hypothesis testing within inferential statistics, we now move from the t-test to analysis of variance. In a t-test we compare two groups, but in analysis of variance we can compare more than two groups, and even more than one independent variable. Analysis of variance (ANOVA) is a hypothesis-testing procedure used to evaluate mean differences between two or more treatments or populations. In psychology and the social sciences there are many situations in which we have to compare more than two groups. For example, we may compare the treatment effects of different therapeutic techniques on depression, we may compare different teaching methodologies and their effect on performance, or we may compare the performance of employees in an organization working in different groups. So there are many situations where we have to compare more than two groups or conditions, and ANOVA helps us there. While running ANOVA, we must decide between two interpretations. Number one, the differences between the populations really exist and are caused by the treatment. Number two, the differences we observe are not due to the treatment but to other factors, such as random error or within-group variability. When a researcher manipulates a variable to create the treatment conditions in an experiment, that variable is called the independent variable. For example, suppose we want to see whether using a mobile phone while driving affects driving performance. The researcher manipulates the variable, which is the use of the phone during driving. Here we have three conditions; I have taken this example from Gravetter and Wallnau's textbook.
In the first condition, participants use no phone while driving in a driving simulator, and the researcher measures their driving performance. In the second condition, they use a phone hands-free. In the third condition, they hold the phone directly, and again the researcher wants to see how that affects driving performance. When we manipulate a variable ourselves like this, it is an independent variable, and it is categorical. But there are many situations where we compare groups without having manipulated the variable ourselves. When a non-manipulated variable is used to designate the groups, it is called a quasi-independent variable. For example, suppose I want to study the effect of age on learning. I compare three groups of different ages: one group of six-year-old children, one group of eight-year-olds, and one group of ten-year-olds. So I have three groups of six, eight, and ten years, and I want to see the learning performance, or the learning curve, of these three groups while I teach them something. This grouping variable is not manipulated by the researcher, so it is called a quasi-independent variable. Remember that for ANOVA, as for the t-test, whether the design is experimental or quasi-experimental, the independent variable is always categorical; that is, it has different levels. When we compared males and females with a t-test, gender was a categorical variable with two levels. Here phone use while driving has three levels, which we have manipulated: group one uses no phone, group two uses a hands-free phone, and group three uses the phone directly.
So phone use is the IV, the independent variable, which is categorical and manipulated. The dependent variable is always continuous, a score: we measure driving performance, or the children's learning through a test, and then we compare the groups. ANOVA is a very robust test, and we will go on to see how many different designs and conditions ANOVA can handle for testing our hypotheses. In ANOVA we can also compare more than one factor. Suppose that in the driving example I also want to examine the role of gender, that is, how phone use affects driving performance in two gender groups, males and females. Now I have two independent variables: number one, phone use, which has three levels, and number two, gender, because I want to see the effect of both phone use and gender on driving performance. If there is one independent variable, or one factor, we call it a one-way analysis of variance; if there are two factors, we call it a two-way analysis of variance. An independent variable is also called a factor, and we will discuss this further. A study that combines two factors is called a two-way ANOVA or factorial design, while a study that examines only one independent variable is called a single-factor design or one-way ANOVA. Hypothesis testing in ANOVA follows the same logic as the t-test. The null hypothesis is that there is no difference between the groups a priori, that is, mu1 = mu2 = mu3: our three population means are equal. The alternative hypothesis is that at least one group differs from the others; for example, mu1 is not equal to mu2, or mu2 is not equal to mu3.
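To make this concrete, here is a minimal sketch of the one-way ANOVA for the driving example, assuming SciPy is available. The driving-performance scores are invented purely for illustration (higher = better); they are not from the lecture or from Gravetter and Wallnau.

```python
# One-way ANOVA: does phone use (IV, 3 levels) affect driving
# performance (DV, continuous)? Scores below are hypothetical.
from scipy.stats import f_oneway

no_phone   = [24, 25, 23, 26, 25]   # group 1: no phone
hands_free = [22, 21, 23, 20, 22]   # group 2: hands-free phone
hand_held  = [17, 18, 16, 19, 18]   # group 3: hand-held phone

# H0: mu1 = mu2 = mu3; H1: at least one group mean differs.
f_stat, p_value = f_oneway(no_phone, hands_free, hand_held)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

With data like these, a small p-value would lead us to reject the null hypothesis that the three condition means are equal; note that ANOVA alone does not tell us which specific groups differ.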
In ANOVA we use variance to measure sample mean differences. In a t-test we compare means directly by taking mean one minus mean two. In ANOVA we are still comparing means, but we use variances to do so, and the test statistic for ANOVA uses this fact to compute the F-ratio. I will explain the logic behind ANOVA shortly, but briefly, we calculate two types of variance. The first is between-treatments variance: we manipulated a factor, such as phone use, and we want to see how much variability in driving performance that treatment produces. The second is error variance, or within-group variance: for example, within a single group of ten people who all receive the same condition, their performance scores will still differ, and that is called within-group or error variance. We then compute the F-ratio by dividing the between-groups variance by the within-groups, or error, variance.
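The F-ratio logic just described can be hand-computed in a few lines, without any statistics library. This is a sketch of the standard one-way ANOVA arithmetic; the scores are invented solely to illustrate the steps.

```python
# F-ratio = between-groups variance / within-groups (error) variance.
# Hypothetical driving-performance scores for three phone conditions.
groups = [
    [24, 25, 23, 26, 25],   # no phone
    [22, 21, 23, 20, 22],   # hands-free
    [17, 18, 16, 19, 18],   # hand-held
]

k = len(groups)                          # number of treatments
n_total = sum(len(g) for g in groups)    # total sample size
grand_mean = sum(x for g in groups for x in g) / n_total

# Between-treatments sum of squares: spread of group means around the grand mean.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# Within-treatments sum of squares: spread of scores around their own group mean.
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

ms_between = ss_between / (k - 1)        # variance between groups (treatment)
ms_within = ss_within / (n_total - k)    # variance within groups (error)
f_ratio = ms_between / ms_within
print(round(f_ratio, 2))                 # → 47.44 for these scores
```

If the treatment has no effect, both variances estimate the same error variability and F stays near 1; a large F signals that the between-groups variance exceeds what error alone would produce.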