Reliability and validity are of course important throughout research. In quantitative research we have a well-established set of tools for assessing them. For example, for reliability we measure the same thing many times and then compare whether those measurements from the same person are consistent. If they are, we conclude that there is no random measurement error in the data, and therefore the data are reliable. In quantitative analysis the analysis itself is done by a computer, and a computer doesn't make random mistakes, so reliability is a feature of the data only.

When we assess validity in quantitative research, we have statistical conclusion validity, which refers to whether things are calculated correctly and whether what we calculate in the sample applies to the population. Internal validity refers to whether we have a solid research design, with good controls or other techniques, that ensures our associations are actually evidence of causality and not just spurious correlations. Construct validity is more difficult to argue, but it is basically an assessment of whether the numbers that we have actually measure what they are supposed to measure; it is the correspondence between the numbers and the constructs. And then external validity, which is generalizability: we analytically infer whether the finding from our population applies to other populations. Typically we just look at the characteristics of our population and ask how those characteristics would influence our results, so that is more a theoretical exercise than a statistical one.

So how do we do this kind of assessment in qualitative data analysis? We have to first understand that in qualitative data analysis, we start with reliability. Reliability was the lack of random noise, the lack of chance errors, in our data. In qualitative data analysis, the analysis itself is non-deterministic.
In quantitative research, when you give numbers to a computer, the computer will always give you the same output: if you calculate a correlation from a sample and then calculate the same correlation from the same sample again, the result will always be the same. This is not necessarily the case in qualitative research. When a person analyzes qualitative data, they make interpretations based on their prior knowledge, and it is possible that if, in a hypothetical scenario, the same person analyzed the data again without remembering anything of the first analysis round, the result would be different. So we can't really do this kind of code-and-recode check, at least not with one person.

How do we do it, then? There are two main ways of arguing for and checking the reliability of a research study. The first is that you need to document your data well. If your data are well documented, it is more likely that someone who reanalyzed the data would come to the same conclusions. If your data are only your own field notes, written just for yourself for that particular moment, then if you were to return to those notes, say, one year later, it might be difficult to reproduce the original result, and your study would not be reliable. The second technique is that if you code your data, you can use two coders: two people analyze the data simultaneously without talking to one another, and then you check whether they coded the data in the same way. If yes, that is evidence for reliability; if no, that is evidence of a lack of reliability.

So reliability in qualitative research is typically about how we interpret the data that we observe. We basically assume that if we observe something, the observation itself is reliable; reliability is more about the coding.
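The two-coder check described above is commonly quantified with an intercoder agreement statistic such as Cohen's kappa, which corrects raw agreement for chance. Here is a minimal self-contained sketch; the codes and excerpts are invented for illustration.

```python
# Hypothetical sketch: intercoder reliability with Cohen's kappa.
# Two coders independently assign a category to the same excerpts;
# kappa measures their agreement corrected for chance agreement.

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders over the same items."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed proportion of items on which the coders agree.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement if both coded at random with their own
    # category frequencies.
    categories = set(coder_a) | set(coder_b)
    expected = sum(
        (coder_a.count(c) / n) * (coder_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Invented codes assigned independently to six interview excerpts.
coder_a = ["barrier", "enabler", "barrier", "barrier", "enabler", "barrier"]
coder_b = ["barrier", "enabler", "barrier", "enabler", "barrier", "barrier"]

print(f"kappa = {cohens_kappa(coder_a, coder_b):.2f}")
```

A kappa near 1 would be evidence of reliable coding; a low value, as with these invented codes, would signal that the coding scheme or its application needs work.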
Then we had internal validity, which is about whether the causal claims are correct. In statistical analysis this relates to whether we calculated things correctly and whether we have the right controls and a valid experimental design. In qualitative data analysis, what is important is that you look at the data rigorously. You don't just cherry-pick findings that support whatever you want to say; instead, you give the data a fair assessment. You should also look at cases that do not support your hypothesis or theory. For example, if you are studying whether CEO gender influences profitability, and your initial theory is that naming a woman as CEO makes profitability go up, you also need to look at the cases where a woman was named CEO but profitability didn't go up, or where profitability went up after naming a man as CEO. Analyzing those cases allows you to reach more valid causal interpretations, because it reduces the effect of spurious findings. You need to understand in which situations and scenarios CEO gender leads to profitability differences.

You also need to compare your findings against prior studies: check whether there are prior studies or theories that would challenge your finding, and if there are, you need to understand why your finding differs from the prior study. Finally, theory triangulation refers to looking at the data from different theoretical perspectives. So internal validity in qualitative research basically refers to the level of rigor in your analysis, and the more rigorous your analysis is, the more likely your causal claims are correct.

Then we have construct validity, which basically refers to the question: do my data measure the concepts that I claim they measure? There are a couple of different techniques. One is that we need to triangulate.
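The negative case analysis described above, deliberately hunting for cases that contradict the working hypothesis instead of cherry-picking, can be illustrated with the CEO gender example. This is a toy sketch with invented case records, not a real analysis.

```python
# Hypothetical sketch: negative case analysis for the working
# hypothesis "naming a woman as CEO raises profitability".
# We explicitly pull out the cases that contradict it, so they
# get examined rather than cherry-picked away. Data are invented.

cases = [
    {"firm": "A", "new_ceo_gender": "woman", "profit_change": +0.08},
    {"firm": "B", "new_ceo_gender": "woman", "profit_change": -0.03},
    {"firm": "C", "new_ceo_gender": "man",   "profit_change": +0.05},
    {"firm": "D", "new_ceo_gender": "man",   "profit_change": -0.02},
]

# Disconfirming cases: a woman named CEO without a profit increase,
# or a man named CEO followed by a profit increase.
negative_cases = [
    c for c in cases
    if (c["new_ceo_gender"] == "woman" and c["profit_change"] <= 0)
    or (c["new_ceo_gender"] == "man" and c["profit_change"] > 0)
]
print([c["firm"] for c in negative_cases])
```

In a real qualitative study the point is not the filtering itself but what comes next: going back to firms B and C and understanding under which conditions the hypothesized effect holds.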
It is possible that we misinterpret our data. If we have two different kinds of data, for example videos and interviews, or interviews and company documents, then we code both of them and check whether we extract the same meaning from the different data sets. For example, if three different kinds of data all indicate that the company is innovative, that supports the claim that we are actually making a valid inference. Compare that to a situation where a person in the company tells us that the company is innovative and that is our only source of evidence: construct validity is then weak, because we don't know whether the person is honest.

The second technique for construct validity is to establish a clear chain of evidence, and here qualitative data analysis software will help you. You show how you move from specific observations to the general concept you infer from them. Then, in an ideal scenario, other people can check whether they agree with your interpretations, and if two people agree on an interpretation, we would consider that evidence of construct validity as well.

Finally, we have external validity, which is the same as generalizability, and here things work basically the same way as in quantitative research. In quantitative research we have a good set of tools for making inferences about the population that we study, but external validity is about whether that population also represents other populations: do the findings from our focal population extend to other populations as well? In qualitative data analysis this is the same. We don't really have any statistical tools for it; we just apply what is called analytical generalization. So you reason, for example: okay, we studied something in Finland that relates to culture; maybe this effect generalizes to other cultures similar to Finland.
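The triangulation idea above amounts to simple bookkeeping: a code (such as "innovative") is better supported when independent data sources converge on it. Here is a hypothetical sketch of that bookkeeping; the sources and codes are invented.

```python
# Hypothetical sketch: triangulation bookkeeping across data sources.
# A code is treated as triangulated when at least two independent
# kinds of data (interviews, documents, videos) support it.

codes_by_source = {
    "interviews": {"innovative", "hierarchical"},
    "documents":  {"innovative"},
    "videos":     {"innovative", "informal"},
}

# Invert the mapping: for each code, which sources support it?
support: dict[str, set[str]] = {}
for source, codes in codes_by_source.items():
    for code in codes:
        support.setdefault(code, set()).add(source)

triangulated = {code for code, sources in support.items() if len(sources) >= 2}
print(sorted(triangulated))
```

Only "innovative" is supported by multiple sources here; "hierarchical" and "informal" rest on a single kind of data and would deserve the skepticism the lecture recommends for single-source claims.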
And how do you then evaluate existing studies? When you do a study yourself, these are good principles to keep in mind. When evaluating studies done by others, you can basically assess reliability and validity by checking to what extent these good principles are present in the study you are evaluating.