So before we go to SPSS, enter the data, run the analysis and see how we can check the assumptions, I will also talk about multiple regression here. After that we will go to SPSS, start with simple linear regression with one IV, and then keep adding independent variables to see how the regression model fits or changes. I will talk briefly about multiple regression and the different methods used in multiple regression.

Multiple regression means that we have more than one independent variable, i.e. predictor variables, and there are several methods for adding those independent variables to the regression. We cannot just throw everything in at once: you have self esteem, self image, locus of control, resilience, adjustment, happiness, so many variables, and you are predicting mental health. You cannot simply throw everything into the regression model to see what kind of model comes out. We have to follow a theoretical framework when we put our independent variables into the regression model. So there are a number of different types of multiple regression analysis that can be used, depending on the nature of the data and the question. The three main types that we use are standard multiple regression, which we call the simultaneous method, hierarchical regression, and stepwise regression.

In simultaneous regression, all the independent (predictor) variables are entered into the model at the same time. You put all the predictors in together, and each independent variable is evaluated in terms of its predictive power over and above that offered by all the other independent variables. The predictive power of each variable is given to us as a beta coefficient, which tells us how much that variable predicts over and above the other predictors. When we go to SPSS I will explain more, but simultaneously we put all the independent or predictor variables into the same model, and the regression calculates a coefficient, the predictive power, for every one of them.

Hierarchical regression is also called sequential regression. In sequential regression, sets of variables are entered in steps or blocks. Hierarchical regression is important, and I will talk more about how we can use it for our causal models. In the hierarchical model we put the variables into the regression in blocks, in a hierarchy: a set of variables is entered in each block, with each independent variable being assessed in terms of what it adds to the prediction of the dependent variable after the previous variables have been controlled for. For example, suppose you put gender, age and education in the first block. This means that the variables you put in the next block enter the regression after controlling for the effect of age, gender and education, and that second block will give you their exclusive, unique predictive power. As we often say, there are extraneous variables that affect the results of our research. Hierarchical regression allows us to have statistical control, not experimental control; we can control for certain demographic variables through hierarchical regression. So you will be entering variables in blocks: block one first, and then when you enter the variables in block two, it will give you the predictive power of those variables after controlling for the variables that you entered in block one.
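To make this concrete, here is a minimal sketch of the SPSS syntax that sits behind these two methods (the dialog boxes paste syntax of this form). The variable names, mental_health as the dependent variable and age, gender, education, self_esteem and resilience as predictors, are only assumptions for illustration; the CHANGE keyword in the second run asks for the R squared change, which tells you how much block two adds over block one.

    * Simultaneous (Enter) method: all predictors go in as one block.
    REGRESSION
      /DEPENDENT mental_health
      /METHOD=ENTER age gender education self_esteem resilience.

    * Hierarchical (sequential) method: two blocks entered in order.
    REGRESSION
      /STATISTICS COEFF R ANOVA CHANGE
      /DEPENDENT mental_health
      /METHOD=ENTER age gender education
      /METHOD=ENTER self_esteem resilience.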
And similarly, if you enter a few variables in block three, it will give you the unique predictive power of the variables in block three after controlling for the variables you entered in block one and block two. We will go through this, so there is no need to worry. But hierarchical regression is, in my opinion, the most useful type of regression, because it allows you to build your causal models: if you are following theory and you achieve control, you can find out the exclusive, unique predictive power of any independent variable after controlling for the effects of the other variables.

Our third type is stepwise regression. So: simultaneous, hierarchical, and then stepwise. Stepwise looks like simultaneous in that you put all the variables in, but in stepwise regression you allow SPSS, the statistical procedure, to pick the important variables, the ones that are statistically stronger. There is no theoretical model behind it, and we do not impose our own choices. For example, suppose we have well-being and we have put in about 10-12 predictors. Where does the well-being of an adolescent come from? I entered age, self-image, confidence, exposure, type of school, screen time and other predictors which I thought were important, and I put them all in together. Theory is not guiding me about which variable or factor is important for the well-being of the adolescent; I have no idea what theory says. So I will take help from the statistical technique: I will let SPSS pick the most important predictors for me. What stepwise regression does is that the statistically stronger predictor is retained and the rest are deleted. The researcher provides a list of independent variables and then allows the program to select which variables it will enter, and in which order they go into the equation, based on a set of statistical criteria. So the statistical criteria are what matter here.

There are three different approaches to running stepwise regression: forward selection, backward deletion and stepwise selection. But remember that we use stepwise regression only when we do not know which variable is important. In hierarchical regression, theory guides us: first we control for certain variables, then look at their effect, and put the most important variables at the end. Here, instead, we rely on a statistical criterion which says that the statistically stronger predictor is retained and the rest are deleted.

The forward selection method is often used to provide an initial screening of the variables when a large group of variables exists. If there are too many variables and we want to pick the most important predictors, we do forward selection. In forward selection, the procedure selects the variable with the highest R squared, the one that explains the most variance and contributes the most to the dependent variable statistically. Then, at each step, it selects the predictor variable that increases the R squared the most, and it stops adding variables when none of the remaining variables are significant. So forward selection adds one predictor to the model, then a second, then a third, until the remaining variables no longer explain a statistically significant amount of additional variance. We will do it in a moment and you will see.
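If you want to run forward selection from syntax rather than from the dialog, it looks something like the sketch below. The well-being variable names are again only assumed for illustration; PIN(.05) is the entry criterion, meaning a predictor must be significant at .05 to be added.

    * Forward selection: add the predictor with the largest increase in
      R squared at each step; stop when no remaining predictor meets PIN.
    REGRESSION
      /CRITERIA=PIN(.05)
      /DEPENDENT wellbeing
      /METHOD=FORWARD age self_image confidence school_type screen_time.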
Backward deletion works the other way around. In backward deletion, the procedure puts all the predictors into the model and then removes, one by one, the ones that are not important. It deletes one variable at a time by determining whether the least significant variable currently in the model can be removed because its p-value exceeds the user-specified criterion. So whatever is not significant, whatever does not contribute to our model, is deleted one by one.

Stepwise selection, as I told you, is a modification of forward selection: after each step in which a variable is entered, all the variables already in the model are checked to see whether their significance has dropped below the specified tolerance level. So what does it do? It checks which variables are the most important, the statistically significant ones, retains them, and deletes the rest; if a non-significant variable is found, it is removed from the model.

So mainly we talk about the stepwise methods when you have a lot of predictors and you do not know what is important to you. For example, COVID is a new phenomenon, and we do not know in what ways people's well-being has deteriorated. There, stepwise methods can guide you, and then on top of that you can do hierarchical regression to build your own model, so as to get some inference toward causal kinds of evidence.
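For completeness, here is a rough sketch of the corresponding syntax for the other two approaches, again with assumed variable names. POUT(.10) is the removal criterion: a predictor is dropped once its p-value rises above .10.

    * Backward deletion: start with every predictor in the model and drop
      the least significant one at each step until all remaining meet POUT.
    REGRESSION
      /CRITERIA=POUT(.10)
      /DEPENDENT wellbeing
      /METHOD=BACKWARD age self_image confidence school_type screen_time.

    * Stepwise selection: forward entry, but after each entry the variables
      already in the model are re-tested and removed if they exceed POUT.
    REGRESSION
      /CRITERIA=PIN(.05) POUT(.10)
      /DEPENDENT wellbeing
      /METHOD=STEPWISE age self_image confidence school_type screen_time.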