There are a couple of different things that influence the quality of your quantitative research. So what makes your research reliable and valid? To understand what the quality of a study depends on, we first have to go through what the concepts of reliability and validity mean in the context of quantitative studies. This is one way to understand reliability and validity.

The concept of reliability is fairly easy to understand. It basically asks: if we repeated the same study using the same sample, would we get the same result? In quantitative studies the analysis is done by a computer, and a computer will always produce the same result if you give it the same data. So reliability in quantitative studies is mostly about measurement reliability: if you measured the same things again, would you get the same result?

Validity, on the other hand, answers the question of whether the study correctly answers the question it is supposed to answer. Reliability is about whether we get the same answer if we repeat the study; it tells us nothing about whether that answer is correct. Validity tells us whether the result is correct.

Validity can be broken down into four categories. Measurement validity, which we will discuss later, refers to whether the variables in our data measure the concepts that we claim they measure. Statistical conclusion validity refers to whether our statistical results are correct: if we have identified a trend or a difference in the sample, have we identified it correctly? Is there really such a difference or trend in the population? So it concerns whether statistical associations estimated from the sample generalize to the population. Then we have internal validity, which refers to whether the relationships in our data actually correspond to the causal relationships that we claim.
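The idea of measurement reliability (measuring the same things again should give nearly the same result) can be sketched with a small, entirely hypothetical example; the respondents and scores below are invented for illustration:

```python
import numpy as np

# Hypothetical scores: the same six respondents measured twice
# with the same instrument (a test-retest design).
time1 = np.array([3.0, 4.5, 2.5, 5.0, 3.5, 4.0])
time2 = np.array([3.2, 4.4, 2.7, 4.8, 3.6, 4.1])

# Test-retest reliability is commonly quantified as the correlation
# between the two measurement occasions: a value close to 1 means
# repeating the measurement gives nearly the same result.
reliability = np.corrcoef(time1, time2)[0, 1]
print(round(reliability, 3))  # close to 1 for these made-up scores
```

Note that a high correlation here says nothing about whether the instrument measures the right concept; that is a validity question, not a reliability question.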
Internal validity is about causal inference: have we identified the right control variables and controlled for them appropriately? Or is our experimental or quasi-experimental design free of selection effects that would confound the treatment effect? Finally, external validity simply refers to whether results from one population generalize to other populations.

What determines the quality of a research study is an interesting question, and it can be examined through the research process described in Singleton and Straits' book. According to Singleton and Straits, research always starts with choosing a research topic and formulating a research question. Of course, your study is not very valuable if the research question is not interesting, but here we will focus on the empirical part. Once your research question is set, you prepare your research design, which has two main components. One is sampling: which units (people, organizations, projects, whatever you are studying) and how many of them. The other is measurement: which variables we collect. If you think of your data as an Excel sheet, sampling concerns what the rows of that sheet are, and measurement concerns what the columns are.

Then we collect the data. After the data have been collected, we typically process them somehow: we screen for errors and transform the data into a different form. Then we do the data analysis, interpret the results, and finally write an article. So which part defines the quality of a study? It is everything up to and including data collection. Once you have collected your data, you have basically already set an upper limit on the quality: if your data are not good, you cannot make a good study.
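The Excel-sheet analogy can be made concrete with a tiny, made-up dataset (the units and variables below are hypothetical):

```python
# Each dict is one row: a sampled unit (here, a made-up firm).
# Each key is one column: a variable we chose to measure.
data = [
    {"firm": "A", "size": 120, "performance": 7.1},
    {"firm": "B", "size": 45,  "performance": 5.4},
    {"firm": "C", "size": 300, "performance": 8.2},
]

n_units = len(data)          # sampling decided the number of rows
n_variables = len(data[0])   # measurement decided the number of columns
print(n_units, n_variables)  # → 3 3
```

Sampling decisions fix how many rows you end up with; measurement decisions fix which columns exist. Neither can be changed after data collection without collecting more data.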
On the other hand, if you have great data, then even if you mess up the data processing or the data analysis and interpretation, that is something you can fix: you have the data, so you can just analyze it differently. It is important to understand that the validity of our causal claims depends crucially on whether the sample is appropriate and whether we have collected all relevant controls, or whether we have a valid experimental design. After that, data processing and data analysis are just mechanics that allow you to document the great study you have designed. So the design and data are the important part, and you should not rush into data collection: if you just go and collect data right away, the odds of doing it correctly, with a good design that includes all relevant controls, are pretty low.

This is highlighted in some of the readings. The problems in manuscripts rejected by good journals are rarely about data analysis. When I myself review a paper, I typically have lots of things to say about the methods, because that is my specialty. But if the data are good and the design is good, then I will say: do the analysis differently, resubmit, and then I will re-evaluate your manuscript to see whether it makes sense. However, if a control variable that is very important based on existing theory, one that provides an alternative explanation for the phenomenon the researchers are studying, has not been measured, then in most cases there is nothing they can do about it. It is very difficult, particularly if you collect data with surveys, to go back and collect additional variables. Aguinis and Vandenberg make the same point: data analysis problems are rarely what cause an article to be rejected, because a data analysis problem is something you can fix by simply re-analyzing the data.
So the problem is that if your design does not allow you to make causal claims, then you cannot make those claims, and there is nothing you can do about it. They also point out a persistent belief that a bad design can be compensated for with a fancy method. Some people seem to think that because, let's say, multilevel generalized structural equation modeling is a new thing, using it makes their study better. That is not true. The quality of the study is determined largely before data collection; after that, you just have to choose the appropriate analysis instead of the most complex one. If you have a bad design and bad data, then using a fancy method just means you spend a lot of time applying a fancy, complicated method to bad data, and the outcome is still not a good paper. You just end up with a poor study with a fancy method.