If you want to be able to do a lot with a little, that is, if you want to be able to do a lot of analyses with really just one or two particularly flexible and powerful techniques, then linear regression is going to be by far the single best choice. In fact, it turns out that a lot of the other approaches we use can be considered special cases of linear regression. But I want to show you an example using the kind of data it's usually considered to be designed for, where you have a quantitative or continuous outcome and several other quantitative or continuous predictors.

In this case, I'm going to use my state data, where we have the name of each state and its code, its region, and whether its governor at the time the data was gathered was Republican or Democrat. Now, there actually was one independent governor when I got this data, but because this data set only has personality information on the lower 48 and that governor was in Alaska, they're not included here, and so we're down to just the two parties. Then we have some information about personality, the psychological profile of that state: its average scores on the Big Five personality characteristics, extraversion, openness, and so on. And then a bunch of information from Google Correlate about how common various search terms are, relatively speaking, in that state, where these numbers are z-scores that say how many standard deviations above or below the national average that state is.

We're going to do this by coming up to Regression and clicking on Linear Regression right here. The first thing we need to do is pick our outcome variable, the thing we're trying to predict, which is called the dependent variable. I'm going to use openness, one of the personality characteristics being measured at a state-average level. Now, we can put in covariates, which are other quantitative or continuous predictors, and factors, which are categorical or nominal predictors. I have a lot of variables and I could use them all, but it turns out we only have 48 rows of data, for the 48 continental United States, and you don't want to go wild and throw everything in, because that's going to violate some of the assumptions that underlie regression. So we need to be a little bit selective.

What I'm going to do for the covariates is pick the search terms. These are the ones from Google Correlate that give the relative popularity of each search term compared to other states. We have some about social media: Instagram, Facebook, and retweet (because I couldn't use tweet). And then things like entrepreneur, GDPR (which is a privacy regulation in the European Union), privacy, university, mortgage, volunteering, museum, scrapbook, and modern dance. I'm going to select all of those by holding down Shift and clicking on the last one, and move them into Covariates. And then I'm going to do one other thing, because I want to show that the approach here is pretty flexible. I'm going to take governor, which is a text variable where the information is actually written out as the words Democrat or Republican. It's not zeros and ones or something; it's the actual words. But I'm going to simply put that into Factors.

And now you're going to see the regression table, and what it shows us is two things. First off, here at the top it's giving us a measure of the fit of the model, where we have a capital R, which stands for the multiple correlation.
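(Just for reference, and entirely outside of Jamovi, here is a minimal sketch of the same kind of model fit in Python with statsmodels. The file name and column names are hypothetical stand-ins for the variables described above; the point is only to show quantitative covariates plus one categorical factor predicting a quantitative outcome.)

```python
# Minimal sketch of a comparable model outside Jamovi, using statsmodels.
# Assumes a CSV with one row per state and columns named like the variables
# described above; these exact names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

states = pd.read_csv("state_data.csv")  # hypothetical file name

# The search-term z-scores go in as quantitative covariates; wrapping
# governor in C() treats it as a categorical factor, much like moving it
# into the Factors box in Jamovi.
model = smf.ols(
    "openness ~ instagram + facebook + retweet + entrepreneur + gdpr"
    " + privacy + university + mortgage + volunteering + museum"
    " + scrapbook + modern_dance + C(governor)",
    data=states,
).fit()

print(model.summary())  # coefficients, t scores, and p values, as in the regression table
```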
The R squared right next to it, which is simply the square of that capital R, is more common. But when you have many predictors, and especially when you don't have an enormous sample size, you're going to want to use something else called adjusted R squared. That's down here under Model Fit, so I'm going to come down and click Adjusted R squared, which will add one more column to that table. I do have other choices: there's the AIC, which is the Akaike information criterion; the BIC, which is the Bayesian information criterion; the RMSE, which is the root mean square error; and the F test, which gives an analysis of variance table for the overall model. I could do any of those, but I'm going to leave it with the adjusted R squared, which takes into consideration the number of predictor variables you have and the number of cases. It gives me a value here of 0.572, which means that these variables together are able to explain or predict about 57% of the variance in the outcome variable, openness (or openness to experience).

Then if you come over here, you want to look at this last column, which gives the p values, the probability values used in hypothesis testing for each of the individual coefficients. You can see that the intercept is significantly different from 0, but there's nothing surprising there. Most of these p values are much higher than our cutoff of 0.05, except down here at the bottom: scrapbook is close to 0.05, and modern dance is just barely below 0.05. Now, the important thing to remember is that these p values, which of course correspond to the t scores here, are only valid within the context of each other. If I were to remove one of these predictors from the list, then all of the other ones would adjust and shift a little bit. So this p value of 0.049 is only valid when I have everything else here, and I'm going to show you some ways of dealing with that.

But I want to start by saying that this right here is the most basic and flexible regression model, where you take many quantitative predictor variables, and in this case also a single categorical variable, and use them collectively to predict a single quantitative outcome. It's a very common approach, it's a very powerful approach, and it's something that Jamovi makes very easy. In the next two videos, I'm going to show you some extensions of this: for instance, ways of selecting variables as predictors, and also some of the important regression diagnostics that make sure your model is in fact telling you what you want it to. We'll look at those in separate videos.
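(As a quick follow-up to the points about fit measures and p values, here is a continuation of the earlier sketch. It assumes the hypothetical model and states objects from that sketch, pulls out the same fit measures Jamovi offers, and refits without one covariate to show how the remaining p values shift.)

```python
# Continuing the hypothetical sketch above (reuses `model` and `states`).
import numpy as np
import statsmodels.formula.api as smf

print(model.rsquared, model.rsquared_adj)    # R^2 and adjusted R^2
print(model.aic, model.bic)                  # Akaike and Bayesian information criteria
print(np.sqrt(np.mean(model.resid ** 2)))    # root mean square error of the residuals
print(model.f_pvalue)                        # overall F test for the model

# Drop one covariate and refit: every remaining coefficient's p value adjusts,
# which is why a p value like 0.049 is only valid with the full predictor set.
reduced = smf.ols(
    "openness ~ instagram + facebook + retweet + entrepreneur + gdpr"
    " + privacy + university + mortgage + volunteering + museum"
    " + scrapbook + C(governor)",   # modern_dance removed
    data=states,
).fit()
print(reduced.pvalues)
```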